* Started state bridge.
* Implement call to fetch stateDiffs using trace_replayBlockTransactions (see the sketch after this list).
* Convert JSON responses to stateDiff types.
* State updates working for first few blocks.
* Correctly building state for first 200K blocks.
* Add storage of code and cleanup.
* Start state bridge refactor.
* More cleanup and fixes.
* Use RocksDb as backend for state.
* Implement transactions.
* Build RocksDb dependency when building fluffy tools.
* Move code to world state helper.
* Implement producer and consumer queue.
* Cleanup exceptions.
* Improve logging.
* Add update caches to DatabaseRef backends.
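A minimal sketch, in Python rather than the bridge's Nim, of what fetching per-transaction state diffs for one block looks like over JSON-RPC, as the state bridge does conceptually. The endpoint URL and block number are placeholders, and a trace-enabled provider (Erigon/OpenEthereum-style) is assumed.

```python
# Hedged sketch: fetch per-transaction state diffs for one block via
# trace_replayBlockTransactions. URL and block number are placeholders.
import requests

RPC_URL = "http://localhost:8545"  # assumed trace-enabled endpoint

def fetch_state_diffs(block_number: int) -> list[dict]:
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "trace_replayBlockTransactions",
        # "stateDiff" requests only balance/nonce/code/storage changes
        "params": [hex(block_number), ["stateDiff"]],
    }
    result = requests.post(RPC_URL, json=payload, timeout=30).json()["result"]
    # One entry per transaction; each carries the accounts it touched.
    return [tx["stateDiff"] for tx in result]

if __name__ == "__main__":
    diffs = fetch_state_diffs(1_000_000)
    print(f"{len(diffs)} transactions with state diffs")
```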
- EpochAccumulator got renamed to EpochRecord
- MasterAccumulator is now HistoricalHashesAccumulator
- The List size for the accumulator got a different maximum, which also
  results in a different encoding and HTR
* Started implementation of state endpoints.
* Add rpc calls and server stubs.
* Initial implementation of getAccountProof and getStorageProof (see the proof-shape sketch after this list).
* Refactor validation to use toAccount utils functions.
* Add state endpoints tests.
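As a rough illustration of the proof shapes these endpoints deal in, the sketch below requests an account proof and a storage proof via the standard eth_getProof (EIP-1186) call. The endpoint URL, address, and storage key are placeholders; this is not fluffy's internal getAccountProof/getStorageProof code.

```python
# Hedged illustration: account and storage proofs as returned by the
# standard eth_getProof (EIP-1186) call. Endpoint, address and storage
# key are placeholders, not values from the fluffy codebase.
import requests

RPC_URL = "http://localhost:8545"
ADDRESS = "0x" + "11" * 20          # placeholder account
STORAGE_KEY = "0x" + "00" * 32      # placeholder storage slot

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getProof",
    "params": [ADDRESS, [STORAGE_KEY], "latest"],
}
proof = requests.post(RPC_URL, json=payload, timeout=30).json()["result"]

# accountProof: MPT nodes from the state root down to the account leaf.
print(len(proof["accountProof"]), "account proof nodes")
# storageProof: one entry per requested key, each with its own node list.
print(len(proof["storageProof"][0]["proof"]), "storage proof nodes")
```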
- Use the new createRpcSigsFromNim for client json-rpc API
- Avoid importing any nimbus/rpc specifics, use only web3 and
fluffy local rpc code
- Adjust tools making use of the client side API
* Add nph check to fluffy CI lint
* Add a section on nph usage in the fluffy.guide
* Update copyright years for altered files
* Avoid chained methods formatting style in db code
* Update nph in CI to v0.5
* Remove leftover commented import
* Move comment to avoid nph turning complex list into simple list (nph bug)
* Update nph in CI to v0.5.1
* Formatting fluffy with nph v0.5.1
Allow the Fluffy Portal bridge to inject receipts. This requires a
web3 endpoint to be provided; currently only Alchemy is supported due
to the JSON-RPC endpoint used.
The portal_*Offer call was changed in the specs to actually do gossip.
This makes it no longer possible to test purely an offer with a single
node, so an OfferReal call is added for now until the spec potentially
gets adjusted.
Also add some node information logging for FindContent and Offer to be
able to better debug failures and interoperability issues.
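A hedged sketch of exercising the offer call from a script: the method name portal_historyOffer comes from the Portal JSON-RPC API, but the two-argument (contentKey, contentValue) parameter shape and the endpoint URL are assumptions, since the call has changed across spec versions.

```python
# Hedged sketch of the offer-related calls discussed above. The
# (contentKey, contentValue) parameter shape is an assumption, as it
# has changed across Portal JSON-RPC spec versions.
import requests

RPC_URL = "http://localhost:8545"   # assumed local fluffy JSON-RPC endpoint

def rpc(method: str, params: list) -> dict:
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    return requests.post(RPC_URL, json=payload, timeout=30).json()

content_key = "0x00" + "ab" * 32    # placeholder history content key
content_value = "0x"                # placeholder SSZ-encoded content

# portal_historyOffer now gossips the content into the network, while the
# temporary *OfferReal variant targets a single peer, which is what the
# one-node tests relied on before the spec change.
print(rpc("portal_historyOffer", [content_key, content_value]))
```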
* Fix Portal Hive fails by correcting Portal history JSON RPC API
- Field naming in discv5_nodeInfo
- Call naming of portal_historyStore
- Other: some proc to func adjustments
- Can write epoch accumulators to files now with eth_data_exporter
- RPC requests to gossip epoch accumulators now use these files
  instead of building them on the fly
- Other build accumulator calls are adjusted and only used for
tests and thus moved to testing folder
This also requires us to split header data propagation from block body
and receipts propagation, as the now-fixed bug would allow more data to
be gossiped even when that data does not get validated (validation
requires the headers).
- More consistent naming in calls used by the JSON-RPC debug API
- Use storeContent everywhere to make sure content is only stored
if in range
- Remove the populateHistoryDb subcommand and replace it with the
  JSON-RPC call storeContent
- Remove some whitespace and fix some indentations
* Improvements to the propagation and seeding of data
- Use a lookup for node selection in neighborhoodGossip (see the
  sketch below)
- Rework populate db code and add `propagateBlockHistoryDb` call
  and portal_history_propagateBlock json-rpc call
- Small adjustment to blockwalk
* Avoid storing out-of-range data in the propagate db calls
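A hedged pseudocode sketch of the neighborhood-gossip idea behind these calls: look up nodes close to the content id, then offer them the content. The function names, types, and fan-out below are illustrative assumptions, not fluffy's actual API.

```python
# Hedged pseudocode of neighborhood gossip: offer content to nodes whose
# ids are close to the content id, found via a recursive lookup rather
# than only the local routing table. Names and the fan-out of 8 are
# illustrative assumptions.
from typing import Any, Callable, Iterable

Node = Any  # placeholder for an ENR / node handle

def neighborhood_gossip(
    content_id: int,
    content: bytes,
    lookup: Callable[[int], Iterable[Node]],  # recursive lookup toward an id
    offer: Callable[[Node, bytes], None],     # OFFER/ACCEPT + transfer
    fanout: int = 8,
) -> None:
    # Assume the lookup returns nodes sorted by distance to content_id;
    # offer the content to the closest few so it lands in its neighborhood.
    closest = list(lookup(content_id))[:fanout]
    for node in closest:
        offer(node, content)
```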
* Sharing block header data around in a Portal history network (PoC)
- Rework PortalStream to have an instance per PortalProtocol (this
needs to be improved eventually). Each instance uses the same
UtpDiscv5Protocol instance.
- Add processContent on receipt of accepted data
- Add dumb neighborhoodGossip: dumb in the sense that it only
offers one piece of content at a time.
- Add to / adjust populate_db to also allow for propagation of
the data and add debug rpc call: portal_history_propagate
- Add eth_rpc_client
- Add eth_getBlockByHash (no txs or uncles) to the eth API (see the
  sketch after this list)
- Add an additional test to test_portal_testnet which loads 5 block
  headers into 1 node and offers this data to a few nodes, which should
  propagate it further over the network. Every node is then queried for
  this data.
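A minimal sketch of querying the newly added call against an assumed local fluffy JSON-RPC endpoint; the block hash and URL are placeholders, and full transaction objects are not requested, matching the noted limitation.

```python
# Minimal sketch of the eth_getBlockByHash call added to the eth API,
# against an assumed local fluffy endpoint. Block hash is a placeholder;
# the second parameter (full transactions) is False.
import requests

RPC_URL = "http://localhost:8545"   # assumed local fluffy endpoint
BLOCK_HASH = "0x" + "ab" * 32       # placeholder block hash

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBlockByHash",
    "params": [BLOCK_HASH, False],
}
block = requests.post(RPC_URL, json=payload, timeout=30).json().get("result")
print(block["number"] if block else "block not found")
```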
* Adjust paths on which Fluffy CI is triggered
* Add documentation on the local testnet
* Improve the tests of the local testnet
The local testnet test was rather flaky and would occasionally fail.
It has been made more robust by adding the ENRs directly to the routing
table instead of doing some random lookups. Additionally, the number of
nodes was increased to 64, IP limits configuration was added, and the
bits-per-hop value was set to 1 in order to make the lookups more
likely to hit the network instead of only the local routing table.
Failure is obviously still possible when enough packets get lost. If
this turns out to be the case with the current number of nodes, we
might have to revise the testing strategy here.
* Disable lookup test for State network
Disable the lookup test for the State network due to an issue with the
custom distance function causing the lookup to not always converge
towards the target.
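For context on why a custom metric can break lookup convergence, the sketch below contrasts the discv5 XOR metric with a ring-style distance, which is assumed here to be what the state network's custom distance function looked like in earlier drafts; it is an illustration, not the actual implementation.

```python
# Hedged sketch: XOR metric vs. an assumed ring-style distance. With the
# ring metric, two different ids can sit at the same distance from the
# target (one on each side), which is the kind of property that can keep
# a lookup from converging; XOR maps every id to a unique distance.
U256 = 1 << 256

def xor_distance(a: int, b: int) -> int:
    return a ^ b

def ring_distance(a: int, b: int) -> int:
    d = (a - b) % U256
    return min(d, U256 - d)

target = 1000
# Two different ids at the same ring distance from the target:
assert ring_distance(target + 5, target) == ring_distance(target - 5, target)
# ...whereas XOR distinguishes them:
assert xor_distance(target + 5, target) != xor_distance(target - 5, target)
```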
* Add resolve call for Portal networks
And:
- Refactor some code by adding a findNodeVerified call
- Add the portal network lookup json-rpc call that uses resolve
- Add usage of this lookup in the portal testnet tests
- Additional comments
* Let recordsFromBytes fail on invalid ENRs
This behaviour is more similar to how it is done in the discovery v5
base layer.
- Add basic discv5 and portal json-rpc calls and activate them in
  fluffy (see the sketch after this list)
- Renames in the rpc folder
- Add local testnet script and run this script in CI
- Bump nim-eth
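A hedged sketch of calling one of these basic discv5 JSON-RPC methods against an assumed local fluffy endpoint; the result field names (nodeId, enr) follow the discv5 JSON-RPC spec and are stated here as an assumption.

```python
# Hedged sketch: query discv5_nodeInfo on an assumed local fluffy
# endpoint. The result field names (nodeId, enr) are assumed to follow
# the discv5 JSON-RPC spec.
import requests

RPC_URL = "http://localhost:8545"   # assumed local fluffy endpoint

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "discv5_nodeInfo",
    "params": [],
}
info = requests.post(RPC_URL, json=payload, timeout=10).json()["result"]
print(info["nodeId"], info["enr"])
```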