The `BlockHeader` structure in `nim-eth` was updated with support for
EIP-4895 (withdrawals). To enable the `nim-eth` bump, the ingress of
`BlockHeader` structures has been hardened to reject headers that have
the new `withdrawalsRoot` field until proper withdrawals support exists.
https://github.com/status-im/nim-eth/pull/562
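A minimal sketch of the kind of ingress check described, assuming an Option-typed withdrawalsRoot field; the type and function names are illustrative stand-ins, not the actual nim-eth definitions:

```nim
import std/options

type
  Digest = array[32, byte]

  # Illustrative stand-in for the nim-eth BlockHeader; only the
  # field relevant to this check is shown.
  BlockHeader = object
    withdrawalsRoot: Option[Digest]  # new EIP-4895 field

# Reject any header carrying the new field until proper
# withdrawals support exists.
func isAcceptedHeader(header: BlockHeader): bool =
  header.withdrawalsRoot.isNone
```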
* Add headers with proof content type and use it for verification
- Add BlockHeaderWithProof content type & content
- Use BlockHeaderWithProof content to verify whether chain data
is part of the canonical chain (sketched below)
- Adjust parser & seeder code to be able to seed these headers
with proof
- Adjust eth_data_exporter to be able to export custom header
ranges for which to build proofs (mostly for testing)
There is currently quite some ugliness that needs clean-up, a big
part of which is due to supporting both BlockHeader and
BlockHeaderWithProof on the network.
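A minimal sketch of the shape of that verification, assuming a generic Merkle-branch fold against an epoch root; the toy hash and the index handling are illustrative stand-ins, not the exact SSZ generalized-index scheme:

```nim
import std/hashes

type Digest = array[32, byte]

proc hashPair(a, b: Digest): Digest =
  ## Toy stand-in for the SHA-256 pair hash used by real SSZ
  ## merkleization; for illustration only.
  var h: Hash = 0
  for x in a: h = h !& int(x)
  for x in b: h = h !& int(x)
  let hv = cast[uint64](!$h)
  for i in 0 ..< 8:
    result[i] = byte((hv shr (i * 8)) and 0xff)

proc computeRoot(leaf: Digest, index: uint64,
                 branch: openArray[Digest]): Digest =
  ## Fold a Merkle branch from leaf to root; `index` selects the
  ## left/right position at each level.
  var node = leaf
  var idx = index
  for sibling in branch:
    if (idx and 1) == 1:
      node = hashPair(sibling, node)
    else:
      node = hashPair(node, sibling)
    idx = idx shr 1
  node

# A header counts as canonical if its proof links it to the epoch
# root recorded in the accumulator for its block number.
proc verifyHeaderProof(epochRoot, headerLeaf: Digest, index: uint64,
                       branch: openArray[Digest]): bool =
  computeRoot(headerLeaf, index, branch) == epochRoot
```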
* Change accumulator proof to array / SSZ vector type
- Change accumulator proof to SSZ vector instead of SSZ list.
- Add and use general buildProof and buildHeaderWithProof func.
* Make the BlockHeaderWithProof an SSZ Union with None option
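Sketched below is roughly what that shape amounts to, written as plain Nim types in the style nim-ssz-serialization uses for SSZ Unions (a case object); the field names and the branch depth are illustrative rather than authoritative:

```nim
type
  Digest = array[32, byte]

  # Fixed-length Merkle branch: an SSZ vector (previously an SSZ
  # list); the depth of 15 is illustrative here.
  AccumulatorProof = array[15, Digest]

  BlockHeaderProofType = enum
    none = 0x00
    accumulatorProof = 0x01

  # The SSZ Union with a None option, expressed as a Nim case
  # object in nim-ssz-serialization style.
  BlockHeaderProof = object
    case proofType: BlockHeaderProofType
    of none:
      discard
    of accumulatorProof:
      proof: AccumulatorProof

  BlockHeaderWithProof = object
    header: seq[byte]  # RLP-encoded header bytes (SSZ byte list)
    proof: BlockHeaderProof
```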
* Update portal-spec-tests to master commit
- Epoch accumulators can now be written to files with
eth_data_exporter
- RPC requests to gossip epoch accumulators now use these files
instead of building them on the fly
- Other accumulator build calls are adjusted; they are only used
for tests and have thus been moved to the testing folder
For accumulator building we now use intermediary header epoch
files, which makes the temporary db previously used for this
unnecessary. That code was already unused.
The Portal master accumulator was removed from the network specs
as a content type shared on the network, since after the merge it
is a finite accumulator (pre-merge only).
So in this PR the accumulator gets removed as a network content
type and instead gets baked into the library. Building it is done
by separate tooling (eth_data_exporter).
Because of this, a lot of extra code can be removed that was
located in history_network, content_db, portal_protocol, etc.
Also removed the option to build the accumulator at start-up of
fluffy, as this takes several minutes and is thus not viable.
It can still be loaded from a provided file, however.
The SSZ accumulator file is for now stored in the recently
created portal-spec-tests repository.
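Loading that provided file could be as simple as the sketch below; loadAccumulator, the error handling, and returning raw bytes (instead of SSZ-decoding into the accumulator type) are assumptions for illustration:

```nim
import std/os

# Hypothetical loader for the pre-built SSZ accumulator file
# produced by eth_data_exporter; real code would SSZ-decode these
# bytes into the accumulator type rather than return them raw.
proc loadAccumulator(path: string): string =
  if not fileExists(path):
    raise newException(IOError, "accumulator file not found: " & path)
  readFile(path)
```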
- Let the accumulator finish its last pre-merge epoch
(hash_tree_root on the incomplete epoch).
- Adjust code to use isPreMerge and remove isCurrentEpoch
- Split up tests into a set that runs with the mainnet merge
block number and a set that runs with a testing value (see the
sketch below).
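One way the merge boundary could be made configurable per build is sketched here; the constant name, the compile-time -d override, and isPreMerge's exact form are assumptions, while the mainnet default is the actual merge block number:

```nim
const
  # Mainnet merge block number; tests can override it at compile
  # time with a small value, e.g. `-d:mergeBlockNumber=...`. The
  # name and mechanism are an assumption for illustration.
  mergeBlockNumber {.intdefine.}: uint64 = 15537394

func isPreMerge(blockNumber: uint64): bool =
  blockNumber < mergeBlockNumber
```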
This also requires us to split header data propagation from block
body and receipts propagation, as the now-fixed bug would allow
more data to be gossiped even when that data does not get
validated (which requires the headers).
* Fix bug in inCurrentEpoch and improve accumulator related tests
- Fix negative wraparound / underflow in inCurrentEpoch (see the
sketch after this list)
- Add tests in accumulator tests to verify the above
- Add header offer tests with accumulator that does and doesn't
contain historical epochs
- Additional clean-up of history tests
- Enable canonicalVerify in the tests
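The class of bug and the shape of the fix might look as follows; the accumulator field and function bodies are illustrative, not the actual fluffy code:

```nim
const EPOCH_SIZE = 8192'u64  # headers per epoch accumulator

type Accumulator = object
  historicalEpochs: seq[array[32, byte]]  # completed epoch roots

# Buggy shape: with zero completed epochs, `epochStart` is 0 and
# `epochStart - 1` wraps around to high(uint64), so no block ever
# counts as being in the current epoch.
func inCurrentEpochBuggy(a: Accumulator, blockNumber: uint64): bool =
  let epochStart = uint64(a.historicalEpochs.len) * EPOCH_SIZE
  blockNumber > epochStart - 1

# Fixed shape: compare without subtracting.
func inCurrentEpoch(a: Accumulator, blockNumber: uint64): bool =
  blockNumber >= uint64(a.historicalEpochs.len) * EPOCH_SIZE
```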
* Testnet improvements
- Increase timeout for reading
- Add more logs
- The offer endpoint can fail due to a talkReq timeout; to avoid
test failures, retry it a few times until it succeeds (see the
retry sketch after this list)
- Add retries when a network request fails validation of the
block header, body or receipts
- Shuffle the first set of neighbours that the request is sent to
in order to not always hit the same peer first for the same
content
- Some general clean-up
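A minimal sketch of that retry, assuming an async offer call; tryOffer, the attempt count, and the boolean result are illustrative stand-ins for the real endpoint:

```nim
import std/asyncdispatch

# Hypothetical stand-in for the offer call that can fail on a
# talkReq timeout; the real signature differs.
proc tryOffer(): Future[bool] {.async.} =
  return true

# Retry the offer a few times so a single talkReq timeout does
# not fail the test.
proc offerWithRetries(maxAttempts = 3): Future[bool] {.async.} =
  for _ in 1 .. maxAttempts:
    if await tryOffer():
      return true
  return false
```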
- Move the accumulator definitions to a history accumulator file
- Add accumulator build helper calls + temporary database
- Add a header gossip content key encoding test
- Refactor & some cleanup
- More consistent naming in calls used by the JSON-RPC debug API
- Use storeContent everywhere to make sure content is only stored
if it is in range (see the radius-check sketch after this list)
- Remove the populateHistoryDb subcommand and replace it with the
JSON-RPC call storeContent
- Remove some whitespace and fix some indentations
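The in-range gate presumably boils down to an XOR-distance check against the node's radius, roughly as below; the 64-bit id type, the names, and the in-memory db are simplifying assumptions (the real id space is 256-bit):

```nim
type
  NodeId = uint64  # simplified; the real id space is 256-bit
  ContentDb = seq[(NodeId, seq[byte])]

# Kademlia-style XOR distance metric.
func distance(a, b: NodeId): NodeId =
  a xor b

# Only persist content that falls within the node's storage
# radius; names and types are illustrative, not the fluffy API.
proc storeContent(db: var ContentDb, localId, radius: NodeId,
                  contentId: NodeId, content: seq[byte]) =
  if distance(localId, contentId) <= radius:
    db.add((contentId, content))
```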
Also allow concurrent neighborhood gossip jobs when seeding data
into the network.
Update the Grafana dashboard with two additional metrics
regarding lookups in neighborhood gossip.