Allow for the Fluffy Portal bridge to inject receipts. This
requires a web3 endpoint to be provided, and currently only
Alchemy is supported due to the JSON-RPC endpoint used.
It is mostly a skeleton: not further developed and not used.
It is also unlikely to be the way forward when further
developing the Portal state network.
* Reduce Nim 1.6 compiler warnings/hints for Fluffy and Nimbus proxy
Mostly removal of Defect from raises lists, removal of
TaintedString, and some unnecessary imports.
The copyright years are updated alongside.
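For illustration, the typical `raises` change looks as follows
(hypothetical proc, not taken from the codebase):

```nim
import std/strutils

# Before: proc parse(s: string): int {.raises: [Defect, ValueError].}
# After: with Nim 1.6, Defect no longer needs to be part of the raises list.
proc parse(s: string): int {.raises: [ValueError].} =
  parseInt(s)
```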
* Further reduce Nim 1.6 compiler warnings/hints for Nimbus
Two unresolved items currently:
- Three tests that are temporarily disabled as they fail in the
macro_assembler code, which seems to be due to an ambiguous
identifier Stop (in both the Ops and the chronos ServerCommand
enums).
- i386 CI disabled as it fails at Nim compilation already. Failing
tests were already ignored for this target.
* The portal_*Offer call was changed in the specs to actually do
gossip. This makes it no longer possible to test a pure offer to
a single node. Add an OfferReal call for now, until the spec
potentially gets adjusted.
Also add some node information logging for FindContent and Offer
to be able to better debug failures and interoperability issues.
* Fix Portal Hive failures by correcting the Portal history JSON-RPC API
- Field naming in discv5_nodeInfo
- Call naming of portal_historyStore
- Other: some proc to func adjustments
- Switch from using Option to Opt, which allows for smoother
usage with the already existing Result types
- With everything moved to Opt, make more use of valueOr to avoid
deeply indented if/else clauses and instead have a clearer
error path at each step (see the sketch after this list)
- Remove dead code, apply character limits and style guide fixes, etc.
- Replace getEncodedKeyForContent with ContentKey.init
and use ContentKey.init for each type
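A minimal sketch of the valueOr pattern; the names here are
stand-ins, not the actual fluffy code:

```nim
import stew/results

# Stand-in for a content lookup that may come up empty.
proc getContent(contentKey: seq[byte]): Opt[seq[byte]] =
  if contentKey.len == 0:
    Opt.none(seq[byte])
  else:
    Opt.some(contentKey)

proc handleRequest(contentKey: seq[byte]): Result[seq[byte], string] =
  # valueOr keeps the error path flat instead of nesting if/else clauses.
  let content = getContent(contentKey).valueOr:
    return err("Content not found")
  ok(content)
```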
* Add headers with proof content type and use it for verification
- Add BlockHeaderWithProof content type & content
- Use BlockHeaderWithProof content to verify whether chain data is
part of the canonical chain (sketched below)
- Adjust parser & seeder code to be able to seed these headers
with proof
- Adjust eth_data_exporter to be able to export custom header
ranges for which to build proofs (mostly for testing)
There is currently quite some ugliness and clean-up needed, a big
part of which is due to supporting both BlockHeader and
BlockHeaderWithProof on the network.
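A rough sketch of the verification idea, with stand-in types and
procs (the real code checks the accumulator proof of the header):

```nim
type
  Digest = array[32, byte]
  BlockHeaderWithProof = object
    header: seq[byte]   # RLP-encoded header
    proof: seq[Digest]  # accumulator proof, stand-in representation

proc verifyProof(proof: seq[Digest], headerHash: Digest): bool =
  # Stand-in for the real proof verification against the accumulator.
  proof.len > 0 and headerHash != default(Digest)

proc isCanonical(content: BlockHeaderWithProof, headerHash: Digest): bool =
  # Block bodies and receipts are only accepted once their header is
  # proven to be part of the canonical chain.
  verifyProof(content.proof, headerHash)
```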
* Change accumulator proof to array / SSZ vector type
- Change accumulator proof to SSZ vector instead of SSZ list.
- Add and use general buildProof and buildHeaderWithProof func.
* Make the BlockHeaderWithProof an SSZ Union with None option
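A sketch of the resulting types, assuming nim-ssz-serialization
conventions; the names and the proof depth of 15 are illustrative:

```nim
type
  Bytes32 = array[32, byte]

  # An SSZ vector maps to a fixed-size Nim array: the length is part of
  # the type, while an SSZ List encodes its length in the data.
  AccumulatorProof = array[15, Bytes32]

  # The SSZ Union with a None option, expressed as a Nim case object.
  BlockHeaderProofType = enum
    none = 0x00
    accumulatorProof = 0x01

  BlockHeaderProof = object
    case proofType: BlockHeaderProofType
    of none:
      discard
    of accumulatorProof:
      accumulatorProof: AccumulatorProof
```

The None arm makes it possible to represent headers for which no
proof is (yet) available.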
* Update portal-spec-tests to master commit
- Epoch accumulators can now be written to files with
eth_data_exporter
- RPC requests to gossip epoch accumulators now use these files
instead of building them on the fly
- Other build accumulator calls are adjusted, and as they are only
used for tests they are moved to the testing folder
This also requires splitting header data propagation from block
body and receipts propagation, as the now-fixed bug allowed more
data to be gossiped even when it could not be validated (validation
requires the headers).
- Add retries when a network request fails on validation of a
block header, body, or receipts.
- Shuffle the first set of neighbours that the request is sent to,
in order to not always hit the same peer first for the same content
(see the sketch after this list)
- Some general clean-up
- More consistent naming in calls used by the JSON-RPC debug API
- Use storeContent everywhere to make sure content is only stored
if in range
- Remove the populateHistoryDb subcommand and replace it with the
JSON-RPC call storeContent
- Remove some whitespace and fix some indentation
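A sketch of the retry-and-shuffle idea with stand-in types; the
real code operates on the routing table's neighbours and validates
the received content:

```nim
import std/random
import stew/results

type
  Node = object
    id: int
  Content = seq[byte]

# Stand-ins for the actual network request and content validation.
proc requestContent(n: Node): Opt[Content] =
  Opt.some(@[byte 1])

proc validate(content: Content): bool =
  content.len > 0

proc lookupWithRetries(neighbours: seq[Node], retries = 3): Opt[Content] =
  # Shuffle so that retries do not always hit the same peer first
  # for the same content.
  var candidates = neighbours
  shuffle(candidates)
  for i in 0 ..< min(retries, candidates.len):
    let content = requestContent(candidates[i]).valueOr:
      continue
    if validate(content):
      return Opt.some(content)
  Opt.none(Content)
```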
* Improvements to the propagation and seeding of data
- Use a lookup for node selection in neighborhoodGossip (see the
sketch after this list)
- Rework the populate db code and add a `propagateBlockHistoryDb`
call and a portal_history_propagateBlock JSON-RPC call
- Small adjustment to blockwalk
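A sketch of the node-selection change with stand-in types; the
actual implementation is async and offers the content via uTP:

```nim
type
  NodeId = uint64  # stand-in for the 256-bit node id
  Node = object
    id: NodeId

# Stand-in for a full network lookup for the content id.
proc lookup(target: NodeId): seq[Node] =
  @[Node(id: target)]

proc neighborhoodGossip(contentId: NodeId, maxGossipNodes = 8): seq[Node] =
  # Select gossip targets from a lookup for the content id, instead of
  # taking only the local routing table's neighbours.
  let closest = lookup(contentId)
  closest[0 ..< min(maxGossipNodes, closest.len)]
```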
* Avoid storing out-of-range data in the propagate db calls
* Remove unused import of config to avoid select_backend db import
- Importing nimbus-eth1's config.nim causes an import of
select_backend, which by default causes an import of
kvstore_rocksdb and thus requires rocksdb. Remove the unused
import to avoid the rocksdb dependency for Fluffy.
- Remove some whitespace in bridge_client (to make fluffy CI
trigger for sure).
* Use specific cache keys for fluffy CI workflow
* Disable Fluffy CI reproducibility test
* Add block bodies to the propagation and lookups
- Read and propagate block bodies next to the headers
- Add block bodies content (via lookups) to the eth_getBlockByHash
call (see the sketch after this list)
- Test the above in test_portal_testnet
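A sketch, with hypothetical helper names, of how the call combines
a header lookup with a body lookup:

```nim
import stew/results

type
  Digest = array[32, byte]
  Header = object
    blockHash: Digest
  BlockBody = object
    txs: seq[seq[byte]]
  Block = object
    header: Header
    txs: seq[seq[byte]]

# Stand-ins for the content lookups on the history network.
proc getHeader(blockHash: Digest): Opt[Header] =
  Opt.some(Header(blockHash: blockHash))

proc getBody(blockHash: Digest): Opt[BlockBody] =
  Opt.some(BlockBody())

proc getBlockByHash(blockHash: Digest): Opt[Block] =
  # Both lookups must succeed before a full block can be returned.
  let header = ? getHeader(blockHash)
  let body = ? getBody(blockHash)
  Opt.some(Block(header: header, txs: body.txs))
```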
* Fix storage/propagation of block bodies
- The data format is an actual block: [header, txs, uncles], which
requires some adjustment to store the block body (see the sketch
below)
- Also added the `eth_getBlockTransactionCountByHash` JSON-RPC call
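A minimal sketch with illustrative types: the seeded data is a full
block, while the Portal block body content holds only the
transactions and uncles, so the header has to be split off before
storing.

```nim
type
  Header = seq[byte]  # RLP-encoded header, stand-in
  Block = object      # data format: [header, txs, uncles]
    header: Header
    txs: seq[seq[byte]]
    uncles: seq[Header]
  BlockBody = object  # Portal block body content: [txs, uncles]
    txs: seq[seq[byte]]
    uncles: seq[Header]

func toBlockBody(blck: Block): BlockBody =
  # Drop the header; it is stored separately under its own content key.
  BlockBody(txs: blck.txs, uncles: blck.uncles)
```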
* Sharing block header data around in a Portal history network (PoC)
- Rework PortalStream to have an instance per PortalProtocol (this
needs to be improved eventually). Each instance uses the same
UtpDiscv5Protocol instance.
- Add processContent on receipt of accepted data
- Add dumb neighborhoodGossip: dumb in the sense that it only
offers one piece of content at a time.
- Add to / adjust populate_db to also allow for propagation of
the data, and add the debug RPC call portal_history_propagate
- Add eth_rpc_client
- Add eth_getBlockByHash (no txs or uncles) to the eth API
- Add an additional test to test_portal_testnet which loads 5 block
headers to 1 node and offers this data to a few nodes, which should
propagate it further over the network. Every node is then queried
for this data.
* Adjust paths on which Fluffy CI is triggered
* Add documentation on the local testnet
* Improve the tests of the local testnet
The local testnet test was rather flaky and would occasionally
fail. It has been made more robust by adding the ENRs directly
to the routing table instead of doing some random lookups.
Additionally, the number of nodes was increased (to 64), an IP
limits configuration was added, and the bits-per-hop value was set
to 1 in order to make the lookups more likely to hit the network
instead of only the local routing table.
Failure is obviously still possible when sufficient packets get
lost. If this turns out to be the case with the current number of
nodes, we might have to revise the testing strategy here.
* Disable lookup test for State network
Disable the lookup test for the State network due to an issue with
the custom distance function causing the lookup to not always
converge towards the target.
- Add portal network ping and findNodes JSON-RPC endpoints
- Clean up some CLI arguments
- Add the protocol_interop.md document and adjust/fix other
documentation
* Add resolve call for Portal networks
Additionally:
- Refactor some code by adding a findNodeVerified call
- Add the portal network lookup JSON-RPC call that uses resolve
- Add usage of this lookup in the portal testnet tests
- Additional comments
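A sketch of the resolve flow with stand-in types: check the local
routing table first and fall back to a network lookup for the node
id.

```nim
import stew/results

type
  NodeId = uint64  # stand-in for the 256-bit node id
  Node = object
    id: NodeId

# Stand-ins for the routing table query and the network lookup.
proc getNode(id: NodeId): Opt[Node] =
  Opt.none(Node)

proc lookup(id: NodeId): seq[Node] =
  @[Node(id: id)]

proc resolve(id: NodeId): Opt[Node] =
  let node = getNode(id)
  if node.isSome():
    return node
  for n in lookup(id):
    if n.id == id:
      return Opt.some(n)
  Opt.none(Node)
```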
* Let recordsFromBytes fail on invalid ENRs
This behaviour is more similar to how it is done in the discovery
v5 base layer.
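Roughly, with stand-in types (the real decoding verifies the ENR
encoding and signature):

```nim
import stew/results

type
  Record = object  # stand-in for the discv5 ENR record type
    raw: seq[byte]

# Stand-in decoder for a single ENR.
proc decodeRecord(raw: seq[byte]): Opt[Record] =
  if raw.len == 0:
    Opt.none(Record)
  else:
    Opt.some(Record(raw: raw))

proc recordsFromBytes(rawRecords: seq[seq[byte]]): Result[seq[Record], string] =
  # Fail the whole batch on the first invalid ENR instead of skipping it.
  var records: seq[Record]
  for raw in rawRecords:
    let record = decodeRecord(raw).valueOr:
      return err("Invalid ENR")
    records.add(record)
  ok(records)
```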
- Add basic discv5 and portal JSON-RPC calls and activate them in
fluffy
- Renames in the rpc folder
- Add a local testnet script and run this script in CI
- Bump nim-eth