* Add discv5 provided ENR directly to Portal protocol routing table
The previous approach of getting the ENR from the discv5 routing table
did not work because the talk protocol handler is called before the ENR
is added to the discv5 routing table via addEnr.
Instead of changing this order, pass the ENR along directly and avoid
the additional getNode call.
* Still request ENR from discv5 if it wasn't passed via handshake
* Nimbus folder environment update
details:
* Integrated `CoreDbRef` for the sources in the `nimbus` sub-folder.
* The `nimbus` program does not compile yet as it needs the updates
in the parallel `stateless` sub-folder.
* Stateless environment update
details:
* Integrated `CoreDbRef` for the sources in the `stateless` sub-folder.
* The `nimbus` program compiles now.
* Premix environment update
details:
* Integrated `CoreDbRef` for the sources in the `premix` sub-folder.
* Fluffy environment update
details:
* Integrated `CoreDbRef` for the sources in the `fluffy` sub-folder.
* Tools environment update
details:
* Integrated `CoreDbRef` for the sources in the `tools` sub-folder.
* Nodocker environment update
details:
* Integrated `CoreDbRef` for the sources in the
`hive_integration/nodocker` sub-folder.
* Tests environment update
details:
* Integrated `CoreDbRef` for the sources in the `tests` sub-folder.
* The unit tests compile and run cleanly now.
* Generalise `CoreDbRef` to any `select_backend` supported database
why:
The generalisation was previously missed while working around a compiler
oddity that was tied to rocksdb for testing.
* Suppress compiler warning for `newChainDB()`
why:
A warning was added to this function, which must be wrapped so that
any `CatchableError` is re-raised as a `Defect`.
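A minimal, self-contained sketch of that wrapping pattern (the types and
the fallible call are placeholders, not the actual `newChainDB()` code):

```nim
type ChainDB = object
  path: string

proc openBackend(path: string): ChainDB =
  # Placeholder for the fallible backend open call.
  if path.len == 0:
    raise newException(IOError, "empty path")
  ChainDB(path: path)

proc newChainDB(path: string): ChainDB =
  # Wrap so that any CatchableError is re-raised as a Defect, keeping
  # catchable errors out of the caller's raises list.
  try:
    result = openBackend(path)
  except CatchableError as e:
    raise newException(Defect, "newChainDB: " & e.msg)
```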
* Split off persistent `CoreDbRef` constructor into separate file
why:
This allows compiling a memory-only database version without linking
the backend library.
* Use memory `CoreDbRef` database by default
detail:
Persistent DB constructor needs to import `db/core_db/persistent`
why:
Most tests use memory DB anyway. This avoids linking `-lrocksdb` or
any other backend by default.
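A rough sketch of the resulting usage (the constructor names are
assumptions, not necessarily the exact `CoreDbRef` API; only the import
split is the point):

```nim
import db/core_db                # memory-only, no backend library linked

let memDb = newCoreDbRef(LegacyDbMemory)   # default, used by most tests

# Only when an on-disk database is needed:
# import db/core_db/persistent
# let diskDb = newCoreDbRef(LegacyDbPersistent, "/path/to/data")  # links -lrocksdb
```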
* Fix `toLegacyBackend()` availability check
why:
The check got garbled after the memory/persistent split.
* Clarify raw access to MPT for snap sync handler
why:
Logically, `kvt` is not the raw access for the hexary trie (although
this holds for the legacy database).
It is only (mostly) a skeleton, not further developed and not used.
It is unlikely to be the way forward either when further developing
the Portal state network.
* Refactoring in preparation for time-based forking.
* Timestamp-based hard-fork transition.
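As an illustration of what timestamp-based transition means (fork names,
the value and the selection proc are assumptions, not Nimbus' actual
fork configuration):

```nim
type HardFork = enum
  MergeFork, Shanghai

const shanghaiTime = 1681338455'u64   # example activation timestamp

proc hardForkFor(timestamp: uint64): HardFork =
  # Post-merge forks activate on the block timestamp; pre-merge forks
  # keep activating on the block number (not shown here).
  if timestamp >= shanghaiTime: Shanghai
  else: MergeFork
```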
* Work around SideEffect issue / compiler bug for both failing locations in Portal history code
---------
Co-authored-by: kdeme <kim.demey@gmail.com>
* Reduce Nim 1.6 compiler warnings/hints for Fluffy and Nimbus proxy
Mostly removal of `Defect` from raises lists, removal of `TaintedString`,
and some unnecessary imports.
The copyright years are also updated alongside.
* Further reduce Nim 1.6 compiler warnings/hints for Nimbus
* Move BlockHeaderWithProof content to content key selector 0
- Remove it as a content type with content key selector 4
- Replace the regular block header with BlockHeaderWithProof at
  content key selector 0
* Apply blockHeader content key also to bridge
* Add tests for header with proof generation and verification
This is an example of how the beacon state historical_roots could be
provided on the network together with a proof, which can then be
verified against the state root.
* Fix Portal Hive fails by correcting Portal history JSON RPC API
- Field naming in discv5_nodeInfo
- Call naming of portal_historyStore
- Other: some proc to func adjustments
- Switch from using Option to Opt, which allows for smoother usage
  with the already existing Result types
- With everything moved to Opt, make more use of valueOr to avoid
  deeply indented if/else clauses and instead have a clearer error
  path at each step (a sketch follows after this list)
- Remove dead code, char limits, style guide, etc.
- Replace getEncodedKeyForContent with ContentKey.init
and use ContentKey.init for each type
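A minimal sketch of the Opt/valueOr pattern referred to above (the procs
are illustrative, not Fluffy's actual code):

```nim
import stew/results   # provides Opt, Result, ok/err and valueOr

proc lookupContent(key: string): Opt[string] =
  if key.len > 0: Opt.some("payload for " & key)
  else: Opt.none(string)

proc handleRequest(key: string): Result[string, string] =
  # valueOr keeps the happy path unindented and gives one clear error
  # exit per step, instead of nested if/else clauses.
  let content = lookupContent(key).valueOr:
    return err("content not found")
  ok(content)
```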
* Add headers with proof content type and use it for verification
- Add BlockHeaderWithProof content type & content
- Use BlockHeaderWithProof content to verify if chain data is
part of the canonical chain
- Adjust parser & seeder code to be able to seed these headers
with proof
- Adjust eth_data_exporter to be able to export custom header
ranges for which to build proofs (mostly for testing)
There is currently quite some ugliness & clean-up needed, a big part
of which is due to supporting both BlockHeader and BlockHeaderWithProof
on the network.
* Change accumulator proof to array / SSZ vector type
- Change accumulator proof to SSZ vector instead of SSZ list.
- Add and use general buildProof and buildHeaderWithProof func.
* Make the BlockHeaderWithProof an SSZ Union with None option
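A rough sketch of what the two entries above amount to (field names,
sizes and selector values are assumptions based on the description, not
the exact Fluffy definitions):

```nim
type
  # Fixed-size proof: an SSZ vector (array) instead of an SSZ list.
  AccumulatorProof = array[15, array[32, byte]]

  BlockHeaderProofType = enum
    none = 0x00              # union selector 0: no proof attached
    accumulatorProof = 0x01  # union selector 1: proof against the accumulator

  # SSZ Union modelled as a Nim case object.
  BlockHeaderProof = object
    case proofType: BlockHeaderProofType
    of none:
      discard
    of accumulatorProof:
      proof: AccumulatorProof

  BlockHeaderWithProof = object
    header: seq[byte]        # RLP-encoded header (an SSZ ByteList in practice)
    proof: BlockHeaderProof
```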
* Update portal-spec-tests to master commit
- Can write epoch accumulators to files now with eth_data_exporter
- RPC requests to gossip epoch accumulators now use these files
  instead of building them on the fly
- Other build accumulator calls are adjusted, only used for tests,
  and thus moved to the testing folder
The Portal master accumulator was removed from the network specs as a
content type shared on the network, since after the merge it is a
finite accumulator (pre-merge only).
So in this PR the accumulator gets removed as a network type and is
instead baked into the library. Building it is done by separate
tooling (eth_data_exporter).
Because of this a lot of extra code can be removed that was
located in history_network, content_db, portal_protocol, etc.
Also removed the option to build the accumulator at start-up of
fluffy, as this takes several minutes and is thus not viable.
It can still be loaded from a provided file however.
The ssz accumulator file is for now stored in the recently
created portal-spec-tests repository.
- Let the accumulator finish its last pre-merge epoch (hash_tree_root
  on an incomplete epoch).
- Adjust code to use isPreMerge and remove isCurrentEpoch
- Split up the tests into a set that runs with the mainnet merge block
  number and a set that runs with a testing value.
* Fix bug in inCurrentEpoch and improve accumulator related tests
- Fix negative wraparound / underflow in inCurrentEpoch (see the
  sketch after this list)
- Add tests in accumulator tests to verify the above
- Add header offer tests with accumulator that does and doesn't
contain historical epochs
- Additional clean-up of history tests
- enable canonicalVerify in the tests
- Move the accumulator definitions to a history accumulator file
- Add accumulator build helper calls + temporary database
- Add a header gossip content key encoding test
- Refactor & some cleanup
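The class of underflow bug fixed in inCurrentEpoch, in minimal
illustrative form (not the actual Fluffy code):

```nim
const epochSize = 8192'u64

# Buggy: when nextBlockNumber < epochSize the unsigned subtraction wraps
# around to a huge value and the check returns false for every block.
proc inCurrentEpochBuggy(blockNumber, nextBlockNumber: uint64): bool =
  blockNumber >= nextBlockNumber - epochSize

# Fixed: guard the subtraction so it cannot underflow.
proc inCurrentEpoch(blockNumber, nextBlockNumber: uint64): bool =
  let epochStart =
    if nextBlockNumber < epochSize: 0'u64
    else: nextBlockNumber - epochSize
  blockNumber >= epochStart
```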
* Sharing block header data around in a Portal history network (PoC)
- Rework PortalStream to have an instance per PortalProtocol (this
needs to be improved eventually). Each instance uses the same
UtpDiscv5Protocol instance.
- Add processContent on receipt of accepted data
- Add dumb neighborhoodGossip: dumb in the sense that it only
offers one piece of content at a time.
- Add to / adjust populate_db to also allow for propagation of
the data and add debug rpc call: portal_history_propagate
- Add eth_rpc_client
- Add eth_getBlockByHash (no txs or uncles) to eth API
- Add an additional test to test_portal_testnet which loads 5 block
  headers to 1 node and offers this data to a few nodes, which should
  propagate it further over the network. Then every node is queried
  for this data.
* Adjust paths on which Fluffy CI is triggered
* Add documentation on the local testnet
* Improve the tests of the local testnet
The local testnet test was rather flaky and would occasionally
fail. It has been made more robust by adding the ENRs directly
to the routing table instead of doing some random lookups.
Additionally, the number of nodes was increased (to 64), ip limits
configuration was added, and the bits-per-hop value was set to 1
in order to make the lookups more likely to hit the network
instead of only the local routing table.
Failure is obviously still possible when enough packets get lost.
If this turns out to be the case with the current number of nodes,
we might have to revise the testing strategy here.
* Disable lookup test for State network
Disable lookup test for State network due to an issue with the custom
distance function causing the lookup to not always converge towards
the target.
* Allow access to contentDB from portal wire protocol
  - Use this to do the db.get in `handleFindContent` directly
  - Use this to check the `contentKeys` list in `handleOffer`
* Change History content key to use SSZ Union and adjust tests
* Change slot to byteBE instead of LE
This is currently not specified in the Portal network specifications,
but we are already using BE for the actual content key, so change it
here as well to remain consistent.
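Illustrative encoding of the slot field (assuming stew/endians2; not the
exact Fluffy code):

```nim
import stew/endians2

let slot = 12345'u64
# previously: slot.toBytesLE()
let slotBE = slot.toBytesBE()   # big-endian, consistent with the content key
echo slotBE                     # prints [0, 0, 0, 0, 0, 0, 48, 57]
```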
So far, bootstrap nodes for discv5 and for the Portal networks were
provided through separate cli arguments. This is however confusing and
cumbersome, as a node under test will typically have both discv5 and
the Portal networks enabled. We merge them into one argument.
If a node happens not to support a Portal network, it will be removed
after message request failure.
The commit also contains additional clean-up and a nim-eth bump.
* Rename FindNode to FindNodes as per spec
* Use consistently lower case starting camelCase for consts
The nep1 style guide allows both CamelCase and camelCase for consts,
but we seem to use camelCase more often. Use it consistently now.
* Change some procs to func
- Add basic discv5 and portal json-rpc calls and activate them in
fluffy
- Renames in the rpc folder
- Add local testnet script and run this script in CI
- bump nim-eth
* Add SSZ Unions through case objects
* Add connection id content response test and improve other test vectors
* Implement content keys and ids for state network as per spec
A content key case object is used so that it can be serialized and
deserialized as an SSZ Union.
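A minimal sketch of that case-object approach (selector values and
fields are illustrative, not the exact state network content keys):

```nim
type
  ContentType = enum
    accountTrieNode = 0x00          # enum value doubles as the union selector
    contractStorageTrieNode = 0x01

  ContentKey = object
    case contentType: ContentType
    of accountTrieNode:
      nodeHash: array[32, byte]
    of contractStorageTrieNode:
      address: array[20, byte]
      storageNodeHash: array[32, byte]

# Serialization writes the selector byte followed by the SSZ encoding of
# the active branch; deserialization reads the selector first and then
# decodes the matching branch.
```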
* Let message Union in Portal wire protocol start at 0 as per new spec
* Add a basic ContentDB for Portal networks
* Use ContentDB in StateNetwork
* Avoid what is probably some form of sandwich problem by re-exporting kvstore_sqlite3 from content_db
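Roughly, the re-export looks like this (a sketch; module paths may
differ):

```nim
# content_db.nim
import eth/db/kvstore_sqlite3
export kvstore_sqlite3   # re-export so that importers of content_db see
                         # the same symbols/types, avoiding the "sandwich"
```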
* Allow for passing Portal specific bootstrap nodes
* Fix to also replaceNode when decodeMessage fails
* Add portal bootstrap node tests and reorder test cases
* Generalize network layer for portal
* Make messages free from any content references
* Use portal network in main fluffy module
* Fix cli
* Use lookup in portal network
* Avoid using result