* Use --styleCheck:error for Fluffy + fixes
There seems to be a clash between the names of an object field and
a proc here. As a workaround, the names of the procs are changed.
* Fix case style for rpc client calls in test_portal_testnet
* Fix case style for utp test
- Use the new createRpcSigsFromNim for client json-rpc API
- Avoid importing any nimbus/rpc specifics, use only web3 and
fluffy local rpc code
- Adjust tools making use of the client side API
The --networks option selects the networks, i.e. the Portal
sub-protocols.
The --network option is deprecated and replaced with the
--portal-network option to avoid confusion with the --networks option.
This commit also removes the --state flag, which was a temporary
quick-fix.
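For illustration, the renamed options could be declared along these
lines with confutils; the enum values, defaults and descriptions here
are assumptions, not the actual fluffy configuration:

```nim
import confutils

type
  PortalNetwork = enum
    testnet0, mainnet

  PortalConf = object
    # Selects which Portal sub-protocols (networks) to enable.
    networks {.
      desc: "Select the Portal sub-protocols to enable",
      defaultValue: "history",
      name: "networks".}: string

    # Replaces the deprecated --network option.
    portalNetwork {.
      desc: "Select the Portal network to join",
      defaultValue: PortalNetwork.testnet0,
      name: "portal-network".}: PortalNetwork
```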
* Add nph check to fluffy CI lint
* Add a section on nph usage in the fluffy.guide
* Update copyright years for altered files
* Avoid chained methods formatting style in db code
* Update nph in CI to v0.5
* Remove leftover commented import
* Move comment to avoid nph turning complex list into simple list (nph bug)
* Update nph in CI to v0.5.1
* Formatting fluffy with nph v0.5.1
- Link network gossip validation to LC processor validation
- QuickFix put/get for optimistic and finality updates
- Minor fixes and clean-up
- Improve bridge gossip for new LC Updates
- Adjust local testnet script to be able to locally test this
- Adjust test and skip broken test
* Launch Fluffy builds directly from make to avoid compile issue
Without this change, builds on latest macOS fail when ulimit is
not set to 1024. A libbacktrace error will still occur when
launching the binaries, so it is still advised to set it to 1024.
* Fix fluffy local testnet for some macOS systems
And some additional improvements to the script + run the fluffy
nodes at INFO log-level to speed up the testing time.
* Split up fluffy tests in separate targets
This way the two test binaries can be built and run
concurrently.
* Reduce Nim 1.6 compiler warnings/hints for Fluffy and Nimbus proxy
Mostly removal of `raises: [Defect]` annotations, removal of
TaintedString and some unnecessary imports.
Also updating the copyright years alongside.
* Further reduce Nim 1.6 compiler warnings/hints for Nimbus
* Move BlockHeaderWithProof content to content key selector 0
- Remove it as a content type with content key selector 4
- Replace regular block header with BlockHeaderWithProof at
content key selector 0
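The resulting content key layout, sketched as a Nim case object
standing in for the SSZ union; field and type names are assumptions,
only the selector values follow the commit:

```nim
type
  ContentType = enum
    blockHeader = 0x00       # now serves BlockHeaderWithProof
    blockBody = 0x01
    receipts = 0x02
    epochAccumulator = 0x03  # selector 4 is no longer used

  BlockKey = object
    blockHash: array[32, byte]

  ContentKey = object
    case contentType: ContentType
    of blockHeader: blockHeaderKey: BlockKey
    of blockBody: blockBodyKey: BlockKey
    of receipts: receiptsKey: BlockKey
    of epochAccumulator: epochHash: array[32, byte]
```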
* Apply blockHeader content key also to bridge
* Add tests for header with proof generation and verification
* Fix Portal Hive failures by correcting the Portal history JSON RPC API
- Field naming in discv5_nodeInfo
- Call naming of portal_historyStore
- Other: some proc to func adjustments
* Add headers with proof content type and use it for verification
- Add BlockHeaderWithProof content type & content
- Use BlockHeaderWithProof content to verify if chain data is
part of the canonical chain
- Adjust parser & seeder code to be able to seed these headers
with proof
- Adjust eth_data_exporter to be able to export custom header
ranges for which to build proofs (mostly for testing)
There is currently quite some ugliness & clean-up needed, for which
a big part is due to supporting both BlockHeader and
BlockHeaderWithProof on the network.
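A rough sketch of the added content type's shape; the byte limit and
proof length below are placeholder assumptions, not the spec values:

```nim
import ssz_serialization

type
  # Merkle proof against the master accumulator; still an SSZ list at
  # this point, changed to a fixed-size vector in a later commit.
  AccumulatorProof = List[array[32, byte], 64]

  BlockHeaderWithProof = object
    header: List[byte, 8192]  # RLP-encoded execution block header
    proof: AccumulatorProof
```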
* Change accumulator proof to array / SSZ vector type
- Change accumulator proof to SSZ vector instead of SSZ list.
- Add and use general buildProof and buildHeaderWithProof funcs.
* Make the BlockHeaderWithProof an SSZ Union with None option
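Sketch of the resulting shape: a fixed-size SSZ vector for the proof,
wrapped in an SSZ Union (modelled as a Nim case object) with a None
arm so headers can be handled before a proof is available. Arm names
and the proof length are assumptions:

```nim
type
  # Fixed-size proof: an SSZ vector instead of the earlier list.
  AccumulatorProof = array[15, array[32, byte]]

  BlockHeaderProofType = enum
    none = 0x00
    accumulatorProof = 0x01

  # SSZ Union: the selector byte picks the None or the proof arm.
  BlockHeaderProof = object
    case proofType: BlockHeaderProofType
    of none:
      discard
    of accumulatorProof:
      accumulatorProof: AccumulatorProof
```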
* Update portal-spec-tests to master commit
The Portal master accumulator was removed from the network specs as a
content type shared on the network, since after the merge this is
a finite accumulator (pre-merge only).
So in this PR the accumulator gets removed as a network type and
gets instead baked into the library. Building it is done by
separate tooling (eth_data_exporter).
Because of this a lot of extra code can be removed that was
located in history_network, content_db, portal_protocol, etc.
Also removed the option to build the accumulator at start-up
of fluffy, as this takes several minutes, making it not viable.
It can still be loaded from a provided file however.
The ssz accumulator file is for now stored in the recently
created portal-spec-tests repository.
This also requires us to split header data propagation from block body
and receipts propagation, as the now-fixed bug would allow more
data to be gossiped even when data does not get validated (which
requires the headers).
* Testnet improvements
Increase timeout for reading
Add more logs
The offer endpoint can fail due to a talkReq timeout; to avoid
test failure, it is retried a few times until success (sketched below).
Allow also concurrent neighborhood gossip jobs when seeding data
into the network.
Update Grafana dashboard for two additional metrics regarding
lookups in neighborhood gossip.
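The offer retry boils down to a small retry-until-success helper; a
minimal sketch using chronos, with an illustrative attempt count and
delay:

```nim
import chronos

proc withRetries(attempt: proc(): Future[bool] {.gcsafe.},
                 retries = 3): Future[bool] {.async.} =
  # Retry the flaky offer call a few times before declaring failure.
  for _ in 0 ..< retries:
    if await attempt():
      return true
    await sleepAsync(1.seconds)
  return false
```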
* Improvements to the propagation and seeding of data
- Use a lookup for nodes selection in neighborhoodGossip
- Rework populate db code and add `propagateBlockHistoryDb` call
and portal_history_propagateBlock json-rpc call
- Small adjustment to blockwalk
* Avoid storing out-of-range data in the propagate db calls
* Add block bodies to the propagation and lookups
- Read and propagate block bodies next to the headers
- Add block bodies content (via lookups) to the eth_getBlockByHash
call
- Test the above in test_portal_testnet
* Fix storage/propagation of block bodies
- Data format is an actual block: [header, txs, uncles], which
requires some adjustment to store the block body (see the sketch
below)
- Added also `eth_getBlockTransactionCountByHash` json rpc call
This is to avoid having a json rpc call fail if a previous call was
done more than 10 seconds ago; 10 seconds because that is the default
timeout on the http server side.
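The body fix amounts to splitting the header off the seeded RLP block;
a sketch with nim-eth types (the helper name is hypothetical; field
names as in nim-eth at the time):

```nim
import eth/[common, rlp]

proc splitRlpBlock(blockData: seq[byte]): (BlockHeader, BlockBody) =
  # Seeded data is a full block [header, txs, uncles]; only the
  # [txs, uncles] part is stored as Portal block body content.
  let blck = rlp.decode(blockData, EthBlock)
  (blck.header, BlockBody(transactions: blck.txs, uncles: blck.uncles))
```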
* Sharing block header data around in a Portal history network (PoC)
- Rework PortalStream to have an instance per PortalProtocol (this
needs to be improved eventually). Each instance uses the same
UtpDiscv5Protocol instance.
- Add processContent on receipt of accepted data
- Add dumb neighborhoodGossip: dumb in the sense that it only
offers one piece of content at a time (sketched below)
- Add to / adjust populate_db to also allow for propagation of
the data and add debug rpc call: portal_history_propagate
- Add eth_rpc_client
- Add eth_getBlockByHash (no txs or uncles) to the eth API
- Add an additional test to test_portal_testnet which loads 5 block
headers to 1 node and offers this data to a few nodes, which should
propagate it further over the network. Then every node is queried
for this data.
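A conceptual sketch of the dumb gossip, with simplified stand-in types
for the real PortalProtocol API:

```nim
import chronos

type
  NodeId = uint64  # stand-in for the real 256-bit node id
  Node = object
    id: NodeId

proc offer(dst: Node, contentKey, content: seq[byte]): Future[bool]
    {.async.} =
  # Placeholder for the real OFFER/ACCEPT exchange + uTP transfer.
  return true

proc neighborhoodGossip(closestNodes: seq[Node],
                        contentKey, content: seq[byte]) {.async.} =
  # "Dumb": one piece of content per offer, no batching yet.
  for node in closestNodes:
    discard await offer(node, contentKey, content)
```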
* Adjust paths on which Fluffy CI is triggered
* Add documentation on the local testnet
* Improve the tests of the local testnet
The local testnet test was rather flaky and would occasionally
fail. It has been made more robust by adding the ENRs directly
to the routing table instead of doing some random lookups.
Additionally, the number of nodes was increased (to 64), ip limits
configuration was added, and the bits-per-hop value was set to 1
in order to make the lookups more likely to hit the network
instead of only the local routing table.
Failure is obviously still possible when sufficient packets get
lost. If this turns out to be the case with the current number
of nodes, we might have to revise the testing strategy here.
* Disable lookup test for State network
Disable lookup test for State network due to an issue with the custom
distance function causing the lookup to not always converge
towards the target.
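For context, the history network uses plain XOR distance while the
state network used a custom modular ("ring") distance; roughly, with
stint:

```nim
import stint

func xorDistance(a, b: UInt256): UInt256 =
  a xor b

func stateDistance(a, b: UInt256): UInt256 =
  # Distance on the 2^256 ring: subtraction wraps, take the shorter way.
  min(a - b, b - a)
```

Kademlia-style lookups rely on properties of XOR (such as
unidirectionality) that the ring distance does not share, which is
presumably why the lookups do not always converge.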
Previously, bootstrap nodes for discv5 and for the Portal networks
were provided through separate cli arguments. This is however
confusing and cumbersome, as a node under test will typically
have both discv5 and the Portal networks enabled.
We merge them into one argument.
If a node happens not to support a Portal network, it will be
removed after message request failure.
Commit also contains additional clean-up and nim-eth bump.
* Add resolve call for Portal networks
And:
- Refactor some code by adding a findNodeVerified call
- Add the portal network lookup json-rpc call that uses resolve
- Add usage of this lookup in the portal testnet tests
- Additional comments
* Let recordsFromBytes fail on invalid ENRs
This behaviour is more similar to how it is done in the discovery v5
base layer.
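A sketch of the stricter decoding; the signature is illustrative, not
necessarily the real helper's:

```nim
import stew/results, eth/p2p/discoveryv5/enr

proc recordsFromBytes(rawRecords: openArray[seq[byte]]):
    Result[seq[Record], cstring] =
  var records: seq[Record]
  for raw in rawRecords:
    var record: Record
    if not record.fromBytes(raw):
      # One invalid ENR fails the whole batch, as in discv5.
      return err("Invalid ENR")
    records.add(record)
  ok(records)
```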