* Rearrange/rename test_kintsugu => test_custom_network
why:
Debug, fix and test more general problems related to running
nimbus on a custom network.
* Update UInt256/JSON parser for --custom-network command line option
why:
As found out with the Kintsugi configuration, block number and balance
have the same Nim type, which led to misunderstandings. This patch makes
sure that UInt256-encoded string values decode as intended: "0x11"
decodes to 17, while "b" and "11" both decode to 11.
* Refactored genesis.toBlock() => genesis.toBlockHeader()
why:
The function toBlock(g,db) may return different results depending on
whether the db descriptor argument is nil or initialised. This is due
to the db.config data sub-descriptor which may give various outcomes
for the baseFee field of the genesis header.
Also, the variant with a non-nil, initialised db is used internally
only. So the public rewrite toBlockHeader() that replaces the toBlock()
function expects a full set of NetworkParams.
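A hedged sketch of the ambiguity, with a simplified config type; the
constant is EIP-1559's initial base fee, and whether London is active
at genesis is exactly what a nil db cannot answer:
```nim
type ChainConfig = object
  londonBlock: uint64   # first block of the London fork

proc genesisBaseFee(cfg: ChainConfig): uint64 =
  # a full NetworkParams always carries this config; a nil db does not,
  # which is what made the old toBlock(g, db) results diverge
  if cfg.londonBlock == 0:
    1_000_000_000'u64   # EIP-1559 initial base fee, London live at genesis
  else:
    0'u64               # pre-London genesis: no base fee
```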
* update comments
* Rename toBlockHeader() => toGenesisHeader()
why:
Polymorphic prototype used for BaseChainDB or NetworkParams argument.
With a BaseChainDB descriptor argument, the name shall imply that the
header is generated from the config fields rather than fetched from
the database.
* Added command line option --static-peers-file
why:
Handy feature to keep peer nodes in a file, similar to the
--bootstrap-file option.
Any transaction where tx.sender has a CODEHASH != EMPTYCODEHASH MUST
be rejected as invalid, where EMPTYCODEHASH =
0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470.
The invalid transaction MUST be rejected by the client and not be included
in a block. A block containing such a transaction MUST be considered invalid.
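A hedged sketch of this rule; the hex-string comparison and the proc
name stand in for whatever hash type and API the client actually uses:
```nim
import std/strutils

const EmptyCodeHash =   # keccak256 of empty code, quoted above
  "0xc5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470"

proc senderCodeOk(senderCodeHash: string): bool =
  # EIP-3607: a tx from an account with deployed code is invalid
  cmpIgnoreCase(senderCodeHash, EmptyCodeHash) == 0
```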
Because EIP-4399 overrides the mixDigest validation rule, there is no
need to check mixDigest == ZERO_HASH;
`mixDigest` will carry the PoS block randomness value.
Includes a simple test harness for the merge interop M1 milestone
This aims to enable connecting nimbus-eth2 to nimbus-eth1 within
the testing protocol described here:
https://github.com/status-im/nimbus-eth2/blob/amphora-merge-interop/docs/interop_merge.md
To execute the work-in-progress test, please run:
In terminal 1:
tests/amphora/launch-nimbus.sh
In terminal 2:
tests/amphora/check-merge-test-vectors.sh
details:
For documentation, see comments in the file tx_pool.nim.
For prettified manual pages run 'make docs' in the nimbus directory and
point your web browser to the newly created 'docs' directory.
why:
Previously, the function 'snapshot_desc.loadSnapshot()' contained the
equivalent of 'eth.decode(@[],SnapshotData)' for some type 'SnapshotData'
which should result in an exception of type 'RlpTypeMismatch'.
Before mid October, this worked for all systems on the Github CI. Since
then, a segfault message in the Github CI can be reproduced on all 64bit
Windows runs of 'build/all_tests <id-of-test_txpool>' after the
failed 'make test' directive (the latter one needs to be extended by
'|| true'.) This error cannot be reproduced on my local Win7/64 system
with the same MSYS2 and gcc 11.2.0 compiler.
The fix is, rather than catching an exception, to explicitly check the
first argument of 'eth.decode(@[],SnapshotData)' and act if it is empty.
also:
removed some obsolete {.inline.} annotations.
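A minimal sketch of the fix described above, with the RLP decoding
stubbed out; only the explicit empty-input check is the point:
```nim
type SnapshotData = object   # placeholder for the real snapshot type

proc loadSnapshot(data: seq[byte]): bool =
  if data.len == 0:
    # act on empty input explicitly instead of expecting the decoder to
    # raise RlpTypeMismatch (which segfaulted on the Win64 CI)
    return false
  # otherwise decode as before, e.g. rlp.decode(data, SnapshotData)
  true
```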
why:
Previous version was based on lru_cache which is ugly. This module is
based on the stew/keyed_queue library module.
other:
Some other modules still rely on lru_cache; these dependencies should
be removed as well.
* Redesign of BaseVMState descriptor
why:
BaseVMState provides an environment for executing transactions. The
current descriptor also provides data that cannot generally be known
within the execution environment, e.g. the total gasUsed which is
not available until all transactions have finished.
Also, the BaseVMState constructor has been replaced by a constructor
that does not need pre-initialised input of the account database.
also:
Previous constructor and some fields are provided with a deprecated
annotation (producing a lot of noise.)
* Replace legacy directives in production sources
* Replace legacy directives in unit test sources
* fix CI (missing premix update)
* Remove legacy directives
* chase CI problem
* rebased
* Re-introduce 'AccountsCache' constructor optimisation for 'BaseVmState' re-initialisation
why:
Constructing a new 'AccountsCache' descriptor can be avoided sometimes
when the current state root is properly positioned already. Such a
feature existed already as the update function 'initStateDB()' for the
'BaseChainDB' where the accounts cache was linked into that descriptor.
The function 'initStateDB()' was removed and re-implemented into the
'BaseVmState' constructor without optimisation. The old version was of
restricted use as a wrong accounts cache state would unconditionally
throw an exception rather than conceptually ask for a remedy.
The optimised 'BaseVmState' re-initialisation has been implemented for
the 'persistBlocks()' function.
also:
moved some test helpers to 'test/replay' folder
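A hedged sketch of the constructor shortcut; 'AccountsCache' here is a
stand-in type, not the real descriptor:
```nim
type AccountsCache = ref object
  stateRoot: string   # simplified: the real code uses a hash type

proc reinit(cached: AccountsCache; wantRoot: string): AccountsCache =
  if not cached.isNil and cached.stateRoot == wantRoot:
    cached                              # properly positioned: re-use it
  else:
    AccountsCache(stateRoot: wantRoot)  # otherwise construct afresh
```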
* Remove unused & undocumented fields from Chain descriptor
why:
Reduces attack surface in general & improves reading the code.
details:
1. The check for cumulativeGasUsed + tx.gasLimit <= header.gasLimit
neither makes sense nor is part of the EIP-1559 specs. Nevertheless
a check tx.gasLimit <= header.gasLimit is added to satisfy some
unit tests (see comments in validateTransaction() body.)
2. As a replacement for the check removed in 1, a check for
cumulativeGasUsed + gasBurned <= header.gasLimit has been added
(see comments in processTransactionImpl() body and the sketch below.)
3. Prototypes for processTransaction() variants have been cleaned up and
commented.
why:
Detail 1. in particular produces an error for tightly packed blocks when
the last tx in the list has a generous gasLimit.
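A hedged sketch of details 1 and 2 above, with gas quantities reduced
to uint64:
```nim
proc txGasOk(txGasLimit, headerGasLimit: uint64): bool =
  # detail 1: the added per-tx check
  txGasLimit <= headerGasLimit

proc blockGasOk(cumulativeGasUsed, gasBurned, headerGasLimit: uint64): bool =
  # detail 2: the replacement check, counting only gas actually burned
  cumulativeGasUsed + gasBurned <= headerGasLimit
```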
Add the new [Arrow Glacier fork](https://eips.ethereum.org/EIPS/eip-4345).
Only the difficulty calculation is changed, but as a new fork it still affects
a number of places in the code.
To the best of my knowledge the change is only scheduled on Mainnet.
In addition:
- The fork date comments in `chain_config.nim` have been checked against the
real networks, set consistently in UTC instead of random timezones, and made
neater. Maybe we'll keep these when transferring config to a file someday.
- It's added to forkid hash tests (EIP-2124/EIP-2364), of course.
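A hedged sketch of the consensus change itself: EIP-4345 delays the
difficulty bomb by substituting a block number pushed back by 10,700,000:
```nim
proc bombBlockNumber(blockNumber: uint64): uint64 =
  # the bomb term of the difficulty formula sees this number, not the
  # real one: max(0, number - 10_700_000) per EIP-4345
  const arrowGlacierDelay = 10_700_000'u64
  if blockNumber > arrowGlacierDelay: blockNumber - arrowGlacierDelay
  else: 0'u64
```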
Signed-off-by: Jamie Lokier <jamie@shareable.org>
* PoW wrapper for verification & mining
why:
It eases data management of per-epoch lookup tables. Also, some unit
tests show the limits of usefulness of non-specialised machines for
mining, beyond developing tests.
details:
For PoW verification, this patch provides a pretty wrapper hiding the
details of the ethash/Hashimoto lookup cache management.
For mining on my development system without special hardware, the
underlying ethash functions are prohibitively slow. It takes
* ~20 minutes to prepare the full ethash/Hashimoto lookup dataset
* a second to run ~25k nonce tests (in the mining loop)
The mining part might be of some use for generating test data for
the tx-pool, though.
* Using PowRef as replacement for EpochHashCache + hashimotoLight()
* Fix typo (CI failed)
why:
was below log level when testing locally
* fix canonical naming
detected when running hive consensus simulator.
when processing an invalid block header and then
a new valid block header with the same block number,
the state root of the stateDB object should be updated
or reverted to parent stateRoot.
using intermediate stateRoot will trigger the hexary trie assertion.
previously, every time the VMState was created, it would also create a
new stateDB, and this nullified the advantages of cached accounts.
the new changes conserve the accounts cache if the executed blocks are
contiguous; if not, the stateDB needs to be re-initialised.
these changes also allow rpcCallEvm and rpcEstimateGas to execute
properly using the current stateDB instead of creating a new one each
time they are called.
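a hedged sketch of this rule, with state roots as plain strings:
```nim
proc rootToUse(cachedRoot, parentRoot: string): string =
  if cachedRoot == parentRoot:
    cachedRoot    # contiguous blocks: keep the cached accounts
  else:
    parentRoot    # otherwise revert/reinit, or the hexary trie asserts
```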
Fixes #864 "Sync progress stops at Goerli block 4494913", and equivalent on
other networks.
The block body fetcher in `blockchain_sync.nim` had an incorrect assumption
about how peers respond to `GetBlockBodies`. It was issuing requests for N
block bodies and incorrectly handling replies which contained fewer than N
bodies.
Having received up to 192 headers in a batch, it split the range into smaller
`GetBlockBodies` requests, fetched each reply, then combined replies. The
effect was Nimbus requested batches of 128+64 block bodies, received gaps in
the reply sequence, then aborted.
That meant it repeatedly fetched data, then discarded it, and fetched it again,
dropping good peers in the process.
Aborted and restarted batches occurred with earlier blocks too, but this became
more pronounced until there were no suitable peers at batch 4494913..4495104.
Here's a trace:
```
TRC 2021-09-29 02:40:24.977+01:00 Requesting block headers file=blockchain_sync.nim:224 start=4494913 count=192 peer=<ENODE>
TRC 2021-09-29 02:40:24.977+01:00 >> Sending eth.GetBlockHeaders (0x03) file=protocol_eth65.nim:51 peer=<PEER> startBlock=4494913 max=192
TRC 2021-09-29 02:40:25.005+01:00 << Got reply eth.BlockHeaders (0x04) file=protocol_eth65.nim:51 peer=<PEER> count=192
TRC 2021-09-29 02:40:25.007+01:00 >> Sending eth.GetBlockBodies (0x05) file=protocol_eth65.nim:51 peer=<PEER> count=128
TRC 2021-09-29 02:40:25.209+01:00 << Got reply eth.BlockBodies (0x06) file=protocol_eth65.nim:51 peer=<PEER> count=13
TRC 2021-09-29 02:40:25.210+01:00 >> Sending eth.GetBlockBodies (0x05) file=protocol_eth65.nim:51 peer=<PEER> count=64
TRC 2021-09-29 02:40:25.290+01:00 << Got reply eth.BlockBodies (0x06) file=protocol_eth65.nim:51 peer=<PEER> count=64
WRN 2021-09-29 02:40:25.306+01:00 Bodies len != headers.len file=blockchain_sync.nim:276 bodies=77 headers=192
TRC 2021-09-29 02:40:25.306+01:00 peer disconnected file=blockchain_sync.nim:403 peer=<PEER>
TRC 2021-09-29 02:40:25.306+01:00 Finished obtaining blocks file=blockchain_sync.nim:303 peer=<PEER>
```
In practice, for modern peers, Nimbus received shorter replies than it
assumed, depending on the block sizes on the chain. Geth/Erigon has a
2 MiB `BlockBodies` response size soft limit; OpenEthereum has 4 MiB.
Up to Berlin (EIP-2929), Nimbus's fetcher failed often, but there were still
some peers serving what Nimbus needed.
Just after the start of Berlin, at batch 4494913..4495104 on Goerli, zero peers
responded with full size replies for the whole batch, so Nimbus couldn't
progress past that point. But there was already a problem happening before
that for large blocks, dropping good peers and repeatedly fetching the same
block data.
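A hedged sketch of the corrected assumption: treat a short reply as the
answered prefix and re-request the remainder, instead of dropping the
peer:
```nim
type BlockBody = object   # placeholder

proc splitReply(requested: int; reply: seq[BlockBody]):
    tuple[got, missing: int] =
  # peers cap replies (2 MiB / 4 MiB soft limits), so reply.len may be
  # anywhere in 0 .. requested; only the missing tail is asked for again
  (reply.len, max(0, requested - reply.len))
```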
Signed-off-by: Jamie Lokier <jamie@shareable.org>
pre-EIP-1559, max(gasCost) is tx.gasLimit * tx.gasPrice.
the new EIP-1559 max(gasCost) before the transaction can be executed is
tx.gasLimit * tx.maxFeePerGas.
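a hedged sketch with simplified types; the field names follow the text
above:
```nim
type Tx = object
  gasLimit, gasPrice, maxFeePerGas: uint64

proc maxGasCost(tx: Tx; eip1559: bool): uint64 =
  if eip1559: tx.gasLimit * tx.maxFeePerGas   # post-London upper bound
  else: tx.gasLimit * tx.gasPrice             # legacy upper bound
```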
this is a preparation for the migration to a confutils-based config.
although there is still some getConfiguration usage in test code,
it will be removed after the new config has arrived.
Nimbus types generally use the bit count not the byte count, e.g. `UInt256`,
`Hash256`, so make `ZERO_HASH256` (which has type `Hash256`) fit this pattern.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
* Provide PoA voting header generator
why:
Handy for hive/smoke test
details:
Header generator is a re-implementation of the generator previously
used for the canonical reference tests.
* try fixing ci out-of-mem condition
why:
for some reason, the ci began behaving like a real win7/i386 machine
where gcc is limited to 64k optimiser space
* fix comments, typos ..
* Provide API
details:
API is bundled via clique.nim.
* Set extraValidation as default for PoA chains
why:
This triggers consensus verification and an update of the list
of authorised signers. These signers are an integral part of the
PoA block chain.
todo:
Option argument to control validation for the nimbus binary.
* Fix snapshot state block number
why:
A sub-sequence is used here, so the len() function gave the wrong value.
* Optional start where block verification begins
why:
Can speed up loading the initial parts of the block chain. For
PoA, this allows proving & testing that authorised signers can be
(correctly) calculated starting at any point on the block chain.
todo:
On Goerli around blocks #193537..#197568, processing time increases
disproportionately -- this needs to be understood.
* For Clique test, get old grouping back (7 transactions per log entry)
why:
Forgot to change back after troubleshooting
* Fix field/function/module-name misunderstanding
why:
Make compilation work
* Use eth_types.blockHash() rather than utils.hash() in Clique modules
why:
Prefer lib module
* Dissolve snapshot_misc.nim
details:
.. into clique_verify.nim (the other source file clique_unused.nim
is inactive)
* Hide unused AsyncLock in Clique descriptor
details:
Unused here but was part of the Go reference implementation
* Remove fakeDiff flag from Clique descriptor
details:
This flag was a kludge in the Go reference implementation used for the
canonical tests. The tests have been adapted so there is no need for
the fakeDiff flag and its implementation.
* Not observing minimum distance from epoch sync point
why:
For compiling PoA state, the go implementation will walk back to an
epoch header at least 90000 blocks away from the current header
in the absence of other synchronisation points.
Here just the nearest epoch header is used. The assumption is that all
the checkpoints before have been vetted already regardless of the
current branch.
details:
The behaviour of using the nearest vs the minimum distance epoch is
controlled by a flag and can be changed at run time.
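A hedged sketch of the two strategies; 30000 is the Clique epoch length
on Goerli, and the flag mirrors the applySnapsMinBacklog flag mentioned
further below:
```nim
proc syncEpoch(blockNumber: uint64; minBacklog: bool): uint64 =
  const epoch = 30_000'u64
  var n = blockNumber
  if minBacklog:
    # go-ethereum style: walk back at least 90000 blocks first
    n = if n > 90_000'u64: n - 90_000'u64 else: 0'u64
  n - n mod epoch   # then snap to the enclosing epoch header
```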
* Analysing processing time (patch adds some debugging/visualisation support)
why:
Within the first half million blocks of the Goerli replay, blocks on the
interval #194854..#196224 take exceptionally long to process, but not
due to PoA processing.
details:
It turns out that much time is spent in p2p/executor.processBlock()
where the elapsed transaction execution time is significantly greater
for many of these blocks.
Among the 1371 blocks #194854..#196224 there are 223 blocks with more
than 1/2 second execution time, whereas there are only 4 such blocks
before this range and 13 after it, up to #504192.
* fix debugging symbol in clique_desc (caused CI failure)
* Fixing canonical reference tests
why:
Two errors were introduced earlier but overlooked:
1. "Remove fakeDiff flag .." patch was incomplete
2. "Not observing minimum distance .." introduced problem w/tests 23/24
details:
Fixing 2. required reverting the behaviour by setting the
applySnapsMinBacklog flag on the Clique descriptor. Also, a new
test was added to lock in the new behaviour.
* Remove cruft
why:
Clique/PoA processing was intended to take place somewhere in
executor/process_block.processBlock(), but it was decided later to run
it from chain/persist_block.persistBlock() instead.
* Update API comment
* ditto
Using the same packet tracing format to match `protocol_eth65`.
There aren't many calls, and this makes them clear.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Disable some trace messages which appeared a lot in the output and probably
aren't so useful any more, when block processing is functioning well at high
speed.
Turning on the trace level globally is useful to get a feel for what's
happening, but only if each category is kept to a reasonable amount.
As well as overwhelming the output so that it's hard to see general activity,
some of these messages happen so much they severely slow down processing. Ones
called every time an EVM opcode uses some gas are particularly extreme.
These messages have all been chosen as things which are probably not useful any
more (the relevant functionality has been debugged and is tested plenty).
These have been commented out rather than removed. It may be that turning
trace topics on/off, or other selection, is a better longer term solution, but
that will require better command line options and good defaults for sure.
(I think higher `tracev` and `tracevv` levels (extra verbose) would be
more useful for this sort of deep tracing on request.)
For now, enabling `--log-level:TRACE` on the command line is quite useful as
long as we keep each category reasonable, and this patch tries to keep that
balance.
- Don't show "has transactions" on virtually every block imported.
- Don't show "Sender" and "txHash" lines on every transaction processed.
- Don't show "GAS CONSUMPTION" on every opcode executed", this is way too much.
- Don't show "GAS RETURNED" and "GAS REFUND" on each contract call.
- Don't show "op: Stop" on every Stop opcode, which means every transaction.
- Don't show "Insufficient funds" whenever a contract can't call another.
- Don't show "ECRecover", "SHA256 precompile", "RIPEMD160", "Identity"
or even "Call precompile" every time a precompile is called. These are
very well tested now.
- Don't show "executeOpcodes error" whenever a contract returns an error.
(This is changed to `trace` too, it's a normal event that is well tested.)
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Move `blockchain_sync.nim` from `nim-eth` to `nimbus-eth1`.
This lets `blockchain_sync` use the `eth/65` protocol to synchronise with more
modern peers than before.
Practically, the effect is the sync process runs more quickly and reliably than
before. It finds usable peers, and they are up to date.
Note, this is mostly old code, and it mostly performs "classic sync", the
original Ethereum method. Here's a summary of this code:
- It decides on a blockchain canonical head by sampling a few peers.
- Starting from block 0 (genesis), it downloads each block header and
block, mostly in order.
- After it downloads each block, it executes the EVM transactions in that block
and updates state trie from that, before going to the next block.
- This way the database state is updated by EVM executions in block order,
and new state is persisted to the trie database after each block.
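In code form, the loop condensed from the list above; every proc is a
stub, the real logic lives in blockchain_sync.nim:
```nim
proc fetchHeaderAndBody(n: uint64) = discard  # download in block order
proc executeBlock(n: uint64) = discard        # EVM runs update the state trie
proc persistBlock(n: uint64) = discard        # new state persisted per block

proc classicSync(canonicalHead: uint64) =
  for n in 0'u64 .. canonicalHead:  # from genesis upward
    fetchHeaderAndBody(n)
    executeBlock(n)
    persistBlock(n)
```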
Even though it mentions Geth "fast sync" (comments near end of file), and has
some of its elements, it isn't really fast sync. The most obvious missing part
is this code _doesn't download a state trie_, it calculates all state from block 0.
Geth "fast sync" has several parts:
1. Find an agreed common chain among several peers to treat as probably secure,
and a sufficiently long suffix to provide "statistical economic consensus"
when it is validated.
2. Perform a subset of PoW calculations, skipping forward over a segment to
verify some of the PoWs according to a pattern in the relevant paper.
3. Download the state trie from the block at the start of that last segment.
4. Execute only the blocks/transactions in that last segment, using the
downloaded state trie, to fill out the later states and properly validate the
blocks in the last segment.
Some other issues with `blockchain_sync` code:
- If it ever reaches the head of the chain, it doesn't follow new blocks with
increasing block numbers, at least not rapidly.
- If the chain undergoes a reorg, this code won't fetch a block number it has
already fetched, so it can't accept the reorg. It will end up conflicted
with peers. This hasn't mattered because the development focus has been on
the bulk of the catching up process, not the real-time head and reorgs.
- So it probably doesn't work correctly when it gets close to the head due to
many small reorgs, though it might for subtle reasons.
- Some of the network message handling isn't sufficiently robust, and it
discards some replies that have valid data according to specification.
- On rare occasions the initial query mapping block hash to number can
fail (because the peer's state changes).
- It makes some assumptions about the state of peers based on their responses
which may not be valid (I'm not convinced they are). The method for working
out "trusted" peers that agree a common chain prefix is clever. It compares
peers by asking each peer if it has the header matching another peer's
canonical head block by hash. But it's not clear that merely knowing about a
block constitutes agreement about the canonical chain. (If it did, query by
block number would give the same answer more authoritatively.)
Nonetheless, being able to run this sync process on `eth/65` is useful.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
* Renamed source file clique_utils => clique_helpers
why:
New name is more in line with other modules where local libraries
are named similarly.
* re-implemented PoA verification module as clique_verify.nim
details:
The verification code was ported from the go sources and provisionally
stored in the clique_misc.nim source file.
todo:
Bring it to life.
* re-design Snapshot descriptor as: ref object
why:
Avoids some copying of descriptor objects
details:
The snapshot management in clique_snapshot.nim has been cleaned up.
todo:
There is a lot of unnecessary copying & sub-list manipulation of
seq[BlockHeader] lists which needs to be simplified by managing
index intervals.
* optimised sequence handling for Clique/PoA
why:
Too much ado about nothing
details:
* Working with shallow sequences inside PoA processing avoids
unnecessary copying.
* Using degenerate lists in the cliqueVerify() batch where only the
parent (and no other ancestor) is needed.
todo:
Expose only functions that are needed; shallow sequences should be
handled with care.
* fix var-parameter function argument
* Activate PoA engine -- currently proof of concept
details:
PoA engine is activated with newChain(extraValidation = true) applied
to a PoA network.
status and todo:
The extraValidation flag on the Chain object can be set at a later
stage, which allows pre-loading parts of the block chain without
verification. Setting it later will only walk the block chain back to
the latest epoch checkpoint. This is inherent to the Clique protocol
but needs testing.
PoA engine works in fine weather mode on Goerli replay. With the
canonical eip-225 tests, there are quite a few fringe conditions
that fail. These could easily be fudged over to make things work but need
some more work to understand and correct properly.
* Make the last offending verification header available
why:
Makes some fringe case tests work.
details:
Within a failed transaction comprising several blocks, this
feature helps to identify the offending block if there was a
PoA verification error.
* Make PoA header verifier store the final snapshot
why:
The last snapshot needed by the verifier is that of the parent, but
the list of authorised signers is derived from the current snapshot. So
updating to the latest snapshot provides the latest signers list.
details:
Also, PoA processing has been implemented as a transaction in
persistBlocks() with Clique state rollback.
Clique tests succeed now.
* Avoiding double yields in iterator => replaced by template
why:
Thanks to Andri, who observed it (see #762)
* Calibrate logging interval and fix logging event detection
why:
The logging interval as copied from the Go implementation was too large
and needed re-calibration. The elapsed time calculation was bonkers,
negative the wrong way round.
* re-shuffled Clique functions
why:
Due to the port from the go-sources, the interface logic is not optimal
for nimbus. The main visible function is currently snapshot(), and most
of the _procurement_ of this function's result has been moved to a
sub-directory.
* run eip-225 Clique test against p2p/chain.persistBlocks()
why:
Previously, loading the test block chains was fudged with the sole
purpose of filling the database. As it is now clear how nimbus works on
Goerli, the same can be achieved with a more realistic scenario.
details:
Eventually these tests will be a precursor to the replay tests for the
Goerli chain, supporting a TDD approach with simpler cases.
* fix exception annotations for executor module
why:
needed for exception tracking
details:
the main annoyance are the vmState methods (in state.nim) which
potentially throw a base-level Exception (a proc would only throw
CatchableError)
* split p2p/chain into sub-modules and fix exception annotations
why:
make space for implementing PoA stuff
* provide over-loadable Clique PRNG
why:
There is a PRNG provided for generating reproducible number sequences.
The functions which employ the PRNG to generate time slots were
ported from the go-implementation. They are currently unused.
* implement trusted signer assembly in p2p/chain.persistBlocks()
details:
* PoA processing moved there at the end of a transaction. Currently,
there is no action (eg. transaction rollback) if this fails.
* The unit tests with staged blocks work ok. Still missing, in
particular, are tests with to-be-rejected blocks.
* TODO: 1.Optimise throughput/cache handling; 2.Verify headers
* fix statement cast in pool.nim
* added table features to LRU cache
why:
Clique uses the LRU cache with a mixture of volatile in-memory items
and database checkpoints for hard synchronisation.
For performance, Clique needs more table-like features.
details:
First, last, and key-query functions were added, as well as an
efficient random delete. A key-item pair iterator was also added for
debugging.
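A hedged sketch of those table-like features over a plain OrderedTable;
this ignores the LRU aspect and is not the module's actual API:
```nim
import std/tables

type Cache[K, V] = object
  data: OrderedTable[K, V]

proc firstKey[K, V](c: Cache[K, V]): K =
  for k in c.data.keys: return k          # oldest entry

proc lastKey[K, V](c: Cache[K, V]): K =
  for k in c.data.keys: result = k        # newest entry

proc hasKey[K, V](c: Cache[K, V]; k: K): bool =
  c.data.hasKey(k)                        # key query

proc delKey[K, V](c: var Cache[K, V]; k: K) =
  c.data.del(k)                           # random delete

iterator keyItemPairs[K, V](c: Cache[K, V]): (K, V) =
  for k, v in c.data.pairs: yield (k, v)  # debugging iterator
```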
* re-factored LRU snapshot caching
why:
Caching was sub-optimal (aka. bonkers) in that it skipped over memory
caches in many cases and so mostly rebuilt the snapshot from the
last on-disk checkpoint.
details:
The LRU snapshot toValue() handler has been moved into the module
clique_snapshot. This reflects the fact that toValue() is not supposed
to see the whole LRU cache database. So there must be a higher layer
working with the whole LRU cache and the on-disk checkpoint
database.
also:
some clean up
todo:
The code still assumes that the block headers are valid in themselves.
This is particularly important when an epoch header (aka re-sync header)
is processed, as it must contain the PoA result of all previous headers.
So blocks need to be verified when they come in before used for PoA
processing.
* fix some snapshot cache fringe cases
why:
Must not index empty sequences in clique_snapshot module
* extract unused clique/mining support into separate file
why:
mining is currently unsupported by nimbus
* Replay first 51840 transactions from Goerli block chain
why:
Currently Goerli is loaded but the block headers are not verified.
Replaying allows real data PoA development.
details:
Simple stupid gzipped dump/undump layer for debugging based on
the zlib module (no nim-faststream support.)
This is a replay running against p2p/chain.persistBlocks() where
the data were captured from.
* prepare stubs for PoA engine
* split executor source into sub-modules
why:
make room for updates, clique integration should go into
executor/update_poastate.nim
* Simplify p2p/executor.processBlock() function prototype
why:
vmState argument always wraps BaseChainDB
* split processBlock() into sub-functions
why:
isolate the part where it will support clique/poa
* provided additional processTransaction() function prototype without _fork_ argument
why:
with the exception of some tests, the _fork_ argument is always derived
from the other prototype argument _vmState_
details:
similar situation with makeReceipt()
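A hedged sketch of the overload pair, with types cut down to the bare
minimum:
```nim
type
  Fork = enum FkFrontier, FkLondon
  VmState = ref object
    fork: Fork

proc processTransaction(vm: VmState; fork: Fork) =
  discard   # stands in for the full implementation

proc processTransaction(vm: VmState) =
  # the fork argument is derived from vmState, as described above
  vm.processTransaction(vm.fork)
```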
* provide new processBlock() version explicitly supporting PoA
details:
The new processBlock() version supporting PoA is the general one, also
supporting non-PoA networks; it needs an additional _Clique_ descriptor
function argument for PoA state (if any.)
The old processBlock() function without the _Clique_ descriptor argument
returns an error on PoA networks (e.g. Goerli.)
* re-implemented Clique descriptor as _ref object_
why:
gives more flexibility when moving around the descriptor object
details:
also cleaned up the clique sources a bit
* comments for clarifying handling of Clique/PoA state descriptor
Transaction and BlockHeader already updated in nim-eth repo
to support EIP-1559
EIP-1559 header validation and gasLimit validation
already implemented in previous commit
This commit deals with block validation:
- Effective gasPrice per EIP-1559
- new miner reward based on priorityFee
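A hedged sketch of both rules, with per-gas amounts as uint64:
```nim
proc effectiveGasPrice(maxFee, maxPriorityFee, baseFee: uint64): uint64 =
  min(maxFee, baseFee + maxPriorityFee)

proc minerRewardPerGas(maxFee, maxPriorityFee, baseFee: uint64): uint64 =
  # the miner keeps only the priority part; the base fee is burned
  effectiveGasPrice(maxFee, maxPriorityFee, baseFee) - baseFee
```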
This preparation is needed for subsequent
EIPs included in London.
- Add London to Fork enum
- Block number to fork
- Parsing London fork in chain config
- Prepare gas costs table for London
- Prepare EVM opcode dispatcher for London
- Block rewards for London
- Prepare hive script for London
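A hedged sketch of the first two items; the mainnet activation numbers
are the published ones, everything else is illustrative:
```nim
type Fork = enum
  FkFrontier, FkBerlin, FkLondon

proc toFork(blockNumber: uint64): Fork =
  if blockNumber >= 12_965_000'u64: FkLondon    # mainnet London
  elif blockNumber >= 12_244_000'u64: FkBerlin  # mainnet Berlin
  else: FkFrontier
```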
* continue importing rlp blocks
why:
a chain of blocks to be imported might have legit blocks
after rejected blocks
details:
import loop only stops if the import list is exhausted or if there
was a decoding error. this adds another four to the count of successful
no-hive tests.
* verify DAO marked extra data field in block header
why:
was ignored, scores another two no-hive tests
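A hedged sketch of the DAO rule being verified: for the ten blocks
starting at the fork block, pro-fork clients require the well-known
extra-data marker:
```nim
proc daoExtraDataOk(number, daoForkBlock: uint64; extraData: string): bool =
  if number < daoForkBlock or number >= daoForkBlock + 10:
    true                            # outside the marked range
  else:
    extraData == "dao-hard-fork"    # the required marker bytes
```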
* verify minimum required difficulty in header validator
why:
two more nohive tests to succeed
details:
* subsumed extended header tests under validateKinship() and renamed it
more appropriately validateHeaderAndKinship()
* enhanced readability of p2p/chain.nim
* cleaned up test_blockchain_json.nim
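A hedged sketch of the minimum-difficulty rule from the 'why' above
(131072 is the yellow-paper floor):
```nim
proc difficultyOk(difficulty: uint64): bool =
  const minimumDifficulty = 131_072'u64   # 0x20000
  difficulty >= minimumDifficulty
```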
* verify positive gasUsed unless no transactions
why:
solves another two nohive tests
details:
straightened the test_blockchain_json check so there is no
unconditional rejection anymore (based on the input test scenario)
* Re-adjust canonical head to parent of block to be inserted
why:
of the failing tests that remain to be solved, 30 will succeed
if the canonical database chain head is cleverly adjusted -- yes, it
looks like a hack, indeed.
details:
at the moment, this hack works for the non-hive tests only and is
triggered by a boolean argument passed on to the chain.persistBlocks()
method.
* Use parent instead of canonical head for block to be inserted
why:
side chains typically need to be inserted somewhere before the
canonical head.
details:
the previous _hack_ was unnecessary and has been removed; it was
inspired by some verification in persistBlocks() which explicitly
referenced the canonical head (which now might or might not refer to the
newly inserted header.)
* remove unnecessary code + comment
why:
some handy features were intended to support the unit test from
the clique/clique_test.go source (the other one is from
clique/snapshot_test.go.)
as this test cannot realistically be implemented without the full
api (including mining support), it is left at that