* Add resolve call for Portal networks
And:
- Refactor some code by adding a findNodeVerified call
- Add the portal network lookup JSON-RPC call that uses resolve
- Add usage of this lookup in the portal testnet tests
- Additional comments
* Let recordsFromBytes fail on invalid ENRs
This behaviour is more similar to how it is done in the discovery v5
base layer.
Bugfixes and features:
- Switch to the Chronos HTTP client (adds support for HTTPS)
- Allow dynamic RPC method names in the 'rpc' macro
- Restore the support for using the news package
- Add basic discv5 and portal JSON-RPC calls and activate them in fluffy
- Renames in the rpc folder
- Add local testnet script and run this script in CI
- Bump nim-eth
* Add SSZ Unions through case objects
* Add connection id content response test and improve other test vectors
* Implement content keys and ids for state network as per spec
A case object is used for the content keys so that they can be serialized
and deserialized as an SSZ Union.
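For illustration, a simplified sketch of the pattern (the field names here
are stand-ins, not the spec's definitions):
```nim
type
  ContentType = enum
    accountTrieNode = 0x00
    contractStorageTrieNode = 0x01

  # Illustrative content key: the enum discriminator plays the role of the
  # SSZ Union selector, and each branch carries that variant's fields.
  ContentKey = object
    case contentType: ContentType
    of accountTrieNode:
      nodeHash: array[32, byte]
    of contractStorageTrieNode:
      address: array[20, byte]
      storageNodeHash: array[32, byte]

# Constructing a key picks the union branch via the discriminator.
let key = ContentKey(contentType: accountTrieNode,
                     nodeHash: default(array[32, byte]))
```
Serializing such an object as an SSZ Union writes the selector byte for the
active branch followed by that branch's encoding.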
* Let message Union in Portal wire protocol start at 0 as per new spec
- Search for the node on an incoming portal message and try to add
it to the routing table
- Don't rely on discv5 nodes for bootstrapping portal networks for
now.
- Attempt to repopulate the routing table when it starts out empty or
later drops to zero nodes.
Detected when running the hive consensus simulator:
when processing an invalid block header and then a new valid block header
with the same block number, the state root of the stateDB object should be
updated or reverted to the parent's stateRoot; using the intermediate
stateRoot will trigger the hexary trie assertion.
Previously, every time the VMState was created it would also create a new
stateDB, which nullified the advantages of cached accounts.
The new changes conserve the accounts cache if the executed blocks are
contiguous; if not, the stateDB needs to be re-initialized.
This change also allows rpcCallEvm and rpcEstimateGas to execute properly
using the current stateDB instead of creating a new one each time they are
called.
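A minimal sketch of the reuse rule, with illustrative types and names (the
real logic lives in the VMState code):
```nim
type
  Hash256 = array[32, byte]       # stand-in for the real hash type
  StateDB = ref object
    stateRoot: Hash256            # root the cached accounts are based on

proc init(T: type StateDB, root: Hash256): T =
  T(stateRoot: root)

proc getOrReinit(current: StateDB, parentRoot: Hash256): StateDB =
  # Contiguous blocks: the cached root matches the parent's stateRoot, so
  # the accounts cache can be kept. Otherwise re-init from parentRoot.
  if current != nil and current.stateRoot == parentRoot:
    current
  else:
    StateDB.init(parentRoot)
```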
Fixes #868 "Gas usage consensus error at Mainnet block 6001128", and the
equivalent on other networks. Mainnet sync is able to continue past 6001128
after this.
Here's a trace:
```
TRC 2021-09-29 15:13:21.532+01:00 Persisting blocks file=persist_blocks.nim:43 fromBlock=6000961 toBlock=6001152
...
DBG 2021-09-29 15:14:35.925+01:00 gasUsed neq cumulativeGasUsed file=process_block.nim:68 gasUsed=7999726 cumulativeGasUsed=7989726
TRC 2021-09-29 15:14:35.925+01:00 peer disconnected file=blockchain_sync.nim:407 peer=<PEER:IP>
```
Similar output is seen at many blocks in the range 6001128..6001204.
The bug occurs when handling a combination of `CREATE` or `CREATE2` with
`SELFDESTRUCT` applied to the new contract address.
Init code for a contract can't return non-empty code and do `SELFDESTRUCT` at
the same time, because `SELFDESTRUCT` returns empty data.
But it is possible to return non-empty code in a newly created, self-destructed
account if the init code calls `DELEGATECALL` or `CALLCODE` to other code which
uses `SELFDESTRUCT`.
In this case we must still charge gas and write the code. This shows on
Mainnet blocks 6001128..6001204, where the gas difference matters. The code
must be written because the new code can be called later in the transaction
too, before self-destruction wipes the account at the end.
There are actually three semantic changes here for a self-destructed, new
contract:
- Gas is charged.
- The code is written to the account.
- It can fail due to insufficient gas.
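A sketch of a creation epilogue consistent with these semantics (illustrative
names, not the actual nimbus-eth1 code):
```nim
const GasCodeDeposit = 200'i64  # gas charged per byte of deposited code

type CreateResult = object
  success: bool
  gasRemaining: int64

proc finishCreate(gasRemaining: int64, code: seq[byte],
                  selfDestructed: bool,
                  writeCode: proc (c: seq[byte])): CreateResult =
  # Note: deliberately no early-out on `selfDestructed`; gas is charged and
  # the code written regardless, since the code may still be called later
  # in the same transaction before the account is wiped.
  let codeCost = int64(code.len) * GasCodeDeposit
  if codeCost > gasRemaining:
    # Insufficient gas for the code deposit: creation fails.
    return CreateResult(success: false, gasRemaining: 0)
  writeCode(code)
  CreateResult(success: true, gasRemaining: gasRemaining - codeCost)
```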
This patch almost exactly reverts a15805e4 "fix applyCreateMessage" from
2019-02-28. I wonder what that fixed.
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Fixes an off-by-one error where `EIP170_CODE_SIZE_LIMIT` was being treated as the
lowest invalid value by EVM code, but the highest valid value by witness code.
To remove confusion, this is renamed to `EIP170_MAX_CODE_SIZE` with value
0x6000, which matches the name (`MAX_CODE_SIZE`) and value used for this limit
in [EIP-170](https://eips.ethereum.org/EIPS/eip-170).
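With the rename, the check reads naturally as an inclusive limit; a minimal
sketch:
```nim
const EIP170_MAX_CODE_SIZE = 0x6000  # highest valid code size, per EIP-170

# 0x6000 bytes of code is still valid; only strictly larger code fails.
func exceedsEip170(codeLen: int): bool =
  codeLen > EIP170_MAX_CODE_SIZE
```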
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Fixes #864 "Sync progress stops at Goerli block 4494913", and the equivalent
on other networks.
The block body fetcher in `blockchain_sync.nim` had an incorrect assumption
about how peers respond to `GetBlockBodies`. It was issuing requests for N
block bodies and incorrectly handling replies which contained fewer than N
bodies.
Having received up to 192 headers in a batch, it split the range into smaller
`GetBlockBodies` requests, fetched each reply, then combined the replies. The
effect was that Nimbus requested batches of 128+64 block bodies, received gaps
in the reply sequence, then aborted.
That meant it repeatedly fetched data, then discarded it, and fetched it again,
dropping good peers in the process.
Aborted and restarted batches occurred with earlier blocks too, but this became
more pronounced until there were no suitable peers at batch 4494913..4495104.
Here's a trace:
```
TRC 2021-09-29 02:40:24.977+01:00 Requesting block headers file=blockchain_sync.nim:224 start=4494913 count=192 peer=<ENODE>
TRC 2021-09-29 02:40:24.977+01:00 >> Sending eth.GetBlockHeaders (0x03) file=protocol_eth65.nim:51 peer=<PEER> startBlock=4494913 max=192
TRC 2021-09-29 02:40:25.005+01:00 << Got reply eth.BlockHeaders (0x04) file=protocol_eth65.nim:51 peer=<PEER> count=192
TRC 2021-09-29 02:40:25.007+01:00 >> Sending eth.GetBlockBodies (0x05) file=protocol_eth65.nim:51 peer=<PEER> count=128
TRC 2021-09-29 02:40:25.209+01:00 << Got reply eth.BlockBodies (0x06) file=protocol_eth65.nim:51 peer=<PEER> count=13
TRC 2021-09-29 02:40:25.210+01:00 >> Sending eth.GetBlockBodies (0x05) file=protocol_eth65.nim:51 peer=<PEER> count=64
TRC 2021-09-29 02:40:25.290+01:00 << Got reply eth.BlockBodies (0x06) file=protocol_eth65.nim:51 peer=<PEER> count=64
WRN 2021-09-29 02:40:25.306+01:00 Bodies len != headers.len file=blockchain_sync.nim:276 bodies=77 headers=192
TRC 2021-09-29 02:40:25.306+01:00 peer disconnected file=blockchain_sync.nim:403 peer=<PEER>
TRC 2021-09-29 02:40:25.306+01:00 Finished obtaining blocks file=blockchain_sync.nim:303 peer=<PEER>
```
In practice, for modern peers, Nimbus received shorter replies than it
assumed, depending on the block sizes on the chain. Geth/Erigon has a 2 MiB
`BlockBodies` response size soft limit; OpenEthereum has 4 MiB.
Up to Berlin (EIP-2929), Nimbus's fetcher failed often, but there were still
some peers serving what Nimbus needed.
Just after the start of Berlin, at batch 4494913..4495104 on Goerli, zero peers
responded with full size replies for the whole batch, so Nimbus couldn't
progress past that point. But there was already a problem happening before
that for large blocks, dropping good peers and repeatedly fetching the same
block data.
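A sketch of tolerant body fetching (illustrative types and names only):
accept a short reply, keep what arrived, and re-request only the remainder
instead of discarding the whole batch:
```nim
type
  BlockHash = array[32, byte]   # stand-in types, illustrative only
  BlockBody = object

proc requestBodies(hashes: seq[BlockHash]): seq[BlockBody] =
  # Stand-in for one GetBlockBodies round trip; a real peer may answer
  # with fewer bodies than requested because of response size limits.
  discard

proc fetchBodies(hashes: seq[BlockHash]): seq[BlockBody] =
  var remaining = hashes
  while remaining.len > 0:
    let batch = remaining[0 ..< min(128, remaining.len)]
    let bodies = requestBodies(batch)
    if bodies.len == 0:
      break  # peer returned nothing; caller should try another peer
    result.add bodies
    # Keep what was answered (a prefix, given size-limit truncation) and
    # re-request only the remainder.
    remaining = remaining[bodies.len .. ^1]
```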
Signed-off-by: Jamie Lokier <jamie@shareable.org>
Pre-EIP-1559, max(gasCost) is tx.gasLimit * tx.gasPrice.
With EIP-1559, the new max(gasCost) that must be payable before the
transaction can be executed is tx.gasLimit * tx.maxFeePerGas.
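Sketched as a small helper with illustrative types (real code uses 256-bit
integers; uint64 is just for the sketch):
```nim
type
  Transaction = object
    gasLimit: uint64
    gasPrice: uint64       # legacy transactions
    maxFeePerGas: uint64   # EIP-1559 transactions
    isEip1559: bool

# Upper bound on the gas cost the sender's balance must cover up front.
func maxGasCost(tx: Transaction): uint64 =
  if tx.isEip1559:
    tx.gasLimit * tx.maxFeePerGas
  else:
    tx.gasLimit * tx.gasPrice
```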
* Add a basic ContentDB for Portal networks
* Use ContentDB in StateNetwork
* Avoid what is probably some form of sandwich problem by re-exporting kvstore_sqlite3 from content_db
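A minimal sketch of the get/put shape such a store exposes, with an in-memory
table standing in for the sqlite-backed kvstore (names illustrative); the
re-export presumably lets modules that import content_db see the underlying
kvstore types without importing them directly:
```nim
import std/[tables, options]

type
  ContentId = array[32, byte]
  ContentDB = ref object
    kv: Table[ContentId, seq[byte]]  # in-memory stand-in for the store

proc new(T: type ContentDB): T =
  T(kv: initTable[ContentId, seq[byte]]())

proc get(db: ContentDB, id: ContentId): Option[seq[byte]] =
  if id in db.kv: some(db.kv[id]) else: none(seq[byte])

proc put(db: ContentDB, id: ContentId, content: seq[byte]) =
  db.kv[id] = content
```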