This is a minimal set of changes to make things work with the new types
in nim-eth: the minimal PR that merely resolves the incompatibilities,
while the full change set would include more cleanup and migration.
* Fix multiple instances running from the same dataDir
* Add exclusive lock on lock file
* Unlock lock file on process exit
* Fix minor issues in lock file implementation
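A minimal sketch of the lock-file approach described above, assuming a POSIX system and `lockf`; the `LOCK` file name and the helper procs are illustrative, not Fluffy's actual implementation:

```nim
# POSIX-only sketch: hold a non-blocking exclusive lock on a file inside
# the data directory for the lifetime of the process.
import std/[os, posix]

proc acquireDataDirLock(dataDir: string): cint =
  ## Open (or create) the lock file and take an exclusive lock on it.
  ## The returned fd must stay open while the process runs.
  let path = dataDir / "LOCK"                     # hypothetical file name
  let fd = posix.open(path.cstring, O_RDWR or O_CREAT, 0o600)
  if fd < 0:
    raise newException(IOError, "cannot open lock file: " & path)
  if lockf(fd, F_TLOCK, 0) != 0:                  # fails if another instance holds it
    discard posix.close(fd)
    raise newException(IOError, "dataDir already in use: " & dataDir)
  fd

proc releaseDataDirLock(fd: cint) =
  ## To be called on process exit: release the lock and close the file.
  discard lockf(fd, F_ULOCK, 0)
  discard posix.close(fd)

when isMainModule:
  let dir = getTempDir() / "lock-demo"
  createDir(dir)
  let fd = acquireDataDirLock(dir)
  releaseDataDirLock(fd)
```

Keeping the fd open for the whole process lifetime is what makes a second instance started from the same dataDir fail immediately.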
The revalidateMax value is lowered to get a quicker ramp-up of
the radiusCache and to keep it healthier.
The defaultMaxGossipNodes value is lowered because, with the
current value, a Nodes lookup is triggered almost every time.
This value depends on the content replication value, which in
turn depends on the network (and subnetwork) because of the
number of nodes and their radius/storage capacity.
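A hedged sketch of the trade-off behind this tuning: neighborhood gossip first uses peers already known to be interested (routing table plus radius cache) and only falls back to a recursive Nodes lookup when fewer than the gossip target are found. The names and selection logic below are illustrative, not the actual fluffy code:

```nim
type NodeId = uint64   # stand-in for the 256-bit node id

proc selectGossipNodes(
    interested: seq[NodeId],   # locally known nodes whose radius covers the content
    maxGossipNodes: int
): tuple[nodes: seq[NodeId], needLookup: bool] =
  if interested.len >= maxGossipNodes:
    # Enough locally known interested nodes: gossip directly.
    (interested[0 ..< maxGossipNodes], false)
  else:
    # Not enough: a (more expensive) Nodes lookup is required, which a
    # too-high default value ends up triggering on nearly every gossip.
    (interested, true)
```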
* Support RPC API namespaces as a CLI parameter (parsing sketched after this list).
* Fluffy now uses rpcFlags on startup.
* Update testnet script to enable all RPC APIs.
* Update Fluffy book and move web3 call into eth calls.
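A small sketch of turning a comma-separated namespace value into a set of RPC flags; the `RpcFlag` values and `parseRpcFlags` name are illustrative, not Fluffy's actual CLI definitions:

```nim
import std/strutils

type RpcFlag = enum
  rpcEth = "eth", rpcDebug = "debug", rpcPortal = "portal"

proc parseRpcFlags(value: string): set[RpcFlag] =
  # e.g. "eth,portal" -> {rpcEth, rpcPortal}
  for it in value.toLowerAscii().split(','):
    case it.strip()
    of "eth": result.incl rpcEth
    of "debug": result.incl rpcDebug
    of "portal": result.incl rpcPortal
    else: raise newException(ValueError, "unknown RPC namespace: " & it)

when isMainModule:
  doAssert parseRpcFlags("eth,portal") == {rpcEth, rpcPortal}
```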
The effect of this bug was that our radius cache would not get filled
by any outgoing pings, causing:
- Node lookups to always occur on NH gossip
- POKEs to happen much more rarely
Also add metrics for the number of offers done via the POKE mechanism.
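A hedged sketch of what the fix changes, with simplified stand-in types: the pong answering an outgoing ping now also feeds the radius cache, which is exactly what the POKE decision relies on:

```nim
import std/tables

type
  NodeId = uint64                      # stand-in for the 256-bit node id
  Radius = uint64                      # stand-in for the 256-bit radius
  RadiusCache = Table[NodeId, Radius]

proc handlePong(cache: var RadiusCache, node: NodeId, radius: Radius) =
  # Store the advertised radius regardless of whether the ping was
  # incoming or outgoing; the bug was that outgoing pings never did this.
  cache[node] = radius

proc canPoke(cache: RadiusCache, node: NodeId, distance: Radius): bool =
  # POKE is only attempted when the content distance falls within the
  # peer's cached radius, so an empty cache means (almost) no POKEs.
  node in cache and distance <= cache[node]
```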
* Make stop functions wait for completion before returning.
* Implement graceful shutdown.
* Shutdown rpc and metric servers if enabled.
* Move metrics and rpc servers out of PortalNode.
* Enable state network by default. Create status log loops for the state and beacon networks and for the portal node. Implement stop functions (graceful shutdown sketched below).
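A minimal sketch of the stop-waits-for-completion pattern, assuming chronos; the `StatusLogger` type, loop body and names are illustrative rather than the actual PortalNode code:

```nim
import chronos

type StatusLogger = ref object
  loopFut: Future[void]

proc statusLogLoop() {.async.} =
  while true:
    # a real loop would periodically log network/subnetwork status here
    await sleepAsync(10.seconds)

proc start(s: StatusLogger) =
  s.loopFut = statusLogLoop()

proc stop(s: StatusLogger) {.async.} =
  # The point of the change: stop() only returns once the loop future
  # has actually finished, not merely after requesting cancellation.
  if not s.loopFut.isNil:
    await s.loopFut.cancelAndWait()

when isMainModule:
  let s = StatusLogger()
  s.start()
  waitFor s.stop()
```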
* Improve state endpoint genesis test and cover cases when accounts, code and slots don't exist.
* Refactor state endpoints to support returning partial proofs.
* Implement getProofs in state endpoints.
* Add tests for getProofs and improve code.
* Implement eth_getProof JSON-RPC api in Fluffy.
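For reference, the standard `eth_getProof` request shape this endpoint serves; the address, storage key and block number below are placeholders:

```nim
import std/json

let request = %*{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getProof",
  "params": [
    # account address
    "0x0000000000000000000000000000000000000000",
    # storage keys to prove
    ["0x0000000000000000000000000000000000000000000000000000000000000000"],
    # block number (hex) or tag
    "0x100000"
  ]
}

when isMainModule:
  echo request.pretty()
```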
- Add new content + content key functionality for header by number (key layout sketched after this list)
- Remove EpochRecords from the network
- Add a pruning call for the EpochRecords + required deprecated
functionality
- Adjust getBlock and getBlockHashByNumber to make use of the
new functionality instead
- Delete content_verifier as it was only verifying the now
deprecated EpochRecord
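A hedged sketch of a header-by-number content key as described above: a one-byte selector followed by the SSZ-encoded (little-endian) uint64 block number. The selector value used here is illustrative; the Portal history network spec defines the real one:

```nim
proc headerByNumberContentKey(blockNumber: uint64): seq[byte] =
  result.add 0x03'u8              # illustrative selector byte
  var n = blockNumber
  for _ in 0 ..< 8:               # uint64 little-endian, as SSZ encodes it
    result.add byte(n and 0xff)
    n = n shr 8
```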
This avoids always restarting the node with a full radius, which
causes the node to be bombarded with offers that it later has to
delete anyhow.
In order to implement this functionality, several changes were
made, as the radius needed to move from its current location in
the Portal wire protocol to the contentDB and beaconDB, which is
conceptually more correct anyhow.
So the radius is now part of the database objects, and a handler is
used in the portal wire protocol to access its value.
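A simplified sketch of that handler indirection, using stand-in types; the real code works with the 256-bit radius and the contentDB/beaconDB objects:

```nim
type
  Radius = uint64                            # stand-in for the 256-bit radius
  RadiusHandler = proc(): Radius {.gcsafe.}

  ContentDB = ref object
    radius: Radius                           # owned/derived by the database

  PortalProtocol = ref object
    dataRadius: RadiusHandler                # wire protocol only holds a callback

proc radiusHandler(db: ContentDB): RadiusHandler =
  return proc(): Radius = db.radius

when isMainModule:
  let db = ContentDB(radius: 42)
  let p = PortalProtocol(dataRadius: radiusHandler(db))
  db.radius = 7                              # the database adjusts its radius...
  doAssert p.dataRadius() == 7               # ...and the wire protocol sees it
```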
* Return default values when account, slot or code doesn't exist.
* Handle the case where storage doesn't exist because the account doesn't exist or is a non-contract account (defaults sketched below).
* Cleaning up, removing cruft and debugging statements
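A small sketch of the defaults mentioned above, mirroring standard Ethereum JSON-RPC behaviour for non-existing state (zero balance/nonce, zero storage slots, empty code); the types are simplified stand-ins:

```nim
import std/options

type Account = object
  nonce: uint64
  balance: uint64                  # stand-in for UInt256

proc getBalance(acc: Option[Account]): uint64 =
  # A missing account yields 0 rather than an error.
  if acc.isSome: acc.get().balance else: 0

proc getTransactionCount(acc: Option[Account]): uint64 =
  # Same for the nonce.
  if acc.isSome: acc.get().nonce else: 0

proc getStorageAt(slotValue: Option[uint64]): uint64 =
  # A missing slot (including "account missing" or "account is not a
  # contract") reads as zero.
  slotValue.get(0)

proc getCode(code: Option[seq[byte]]): seq[byte] =
  # A missing or non-contract account has empty code.
  if code.isSome: code.get() else: newSeq[byte]()
```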
* Make `aristo_delta` fluffy compatible
why:
A sub-module that uses `chronicles` must import all possible
modules used by a parent module that imports the sub-module.
* update TODO
* Extract sub-tree deletion functions into separate sub-modules
* Move/rename `aristo_desc.accLruSize` => `aristo_constants.ACC_LRU_SIZE`
* Lazily delete sub-trees
why:
This gives some control over the memory used to keep the deleted vertices
in the cached layers. For larger sub-trees, keys and vertices might be
on the persistent backend to a large extent, and deleting them eagerly
would pull that extra information from the backend into the cached layer.
For lazy deletion it is enough to remember sub-trees by a small set of
(at most 16) sub-roots to be processed when storing persistent data.
Marking the tree root deleted immediately allows most of the code base
to keep working as before (see the sketch below).
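A hedged sketch of that lazy-deletion bookkeeping; names and types are illustrative, not the actual `aristo` code:

```nim
import std/sets

type
  VertexID = uint64
  Layer = object
    deletedSubRoots: HashSet[VertexID]   # small set, at most 16 entries here

proc markDeleted(layer: var Layer, subRoot: VertexID) =
  # Cheap: only the sub-root is recorded; the tree below it is treated
  # as gone without pulling its vertices from the backend.
  layer.deletedSubRoots.incl subRoot

proc persist(layer: var Layer) =
  # The expensive walk happens once, when data is stored persistently.
  for subRoot in layer.deletedSubRoots:
    discard subRoot   # the real code walks and deletes the sub-tree here
  layer.deletedSubRoots.clear()
```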
* Comments and cosmetics
* No need to import all for `Aristo` here
* Kludge to make `chronicles` usage in sub-modules work with `fluffy`
why:
That `fluffy` would not run with any logging in `core_db` is a problem
I have known about for a while. Up to now, logging was only used for
debugging. With the current `Aristo` PR, there are cases where logging
might be wanted, but this works only if `chronicles` runs without the
`json[dynamic]` sinks.
So this should be re-visited.
* More of a kludge
* Update state network to use addressHash instead of address in contract trie and contract code content keys.
* Fix path calculation bug in getParent when working with extension nodes.
* Bump portal spec tests repo.
* Finish updating tests due to portal test vector changes.
* Update Fluffy state bridge to use addressHash.
* Update Fluffy book with correct commands for running portal hive tests.
* Use RPC batching to send offer requests and filter out duplicate offers.
* Look up offers after gossip to check whether the gossip was successful.
* Use multiple workers for gossiping offers (worker loop sketched after this list).
* Update Fluffy state network logging.
* Use single RPC calls instead of batching.
* Update cli parameters.
* Fix bug in contract trie offer building.
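A hedged sketch of the multi-worker gossip loop, assuming chronos; the queue contents, worker count and logging are illustrative:

```nim
import chronos

proc gossipWorker(id: int, queue: AsyncQueue[string]) {.async.} =
  while true:
    let offer = await queue.popFirst()
    # here a single portal JSON-RPC gossip/offer call would be made
    echo "worker ", id, " gossips ", offer

proc main() {.async.} =
  let queue = newAsyncQueue[string]()
  var workers: seq[Future[void]]
  for i in 0 ..< 4:                    # worker count is illustrative
    workers.add gossipWorker(i, queue)
  for item in ["offer-1", "offer-2", "offer-3"]:
    await queue.addLast(item)
  await sleepAsync(100.milliseconds)   # give the workers time to drain the queue
  for w in workers:
    await w.cancelAndWait()

when isMainModule:
  waitFor main()
```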
* Create block offers queue and collect account preimages.
* Implement iterators to return account and storage proofs and bytecode from updatedCaches.
* Implement building offers from proofs.
* Refactor BlockDataRef type to only include required fields.
* Store block data in database.
* Improve state diff types.
* Implement starting the state backfill from a specific block.
* Record last persisted block number in database.
* Persist account preimages in db.
* Apply state updates for DAO hard fork.
* Implement state gossip of block offers via portal JSON RPC.
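A hedged sketch of how these pieces fit together in the bridge: resume from the last persisted block number, build offers from the proofs recorded for each block and queue them for gossip; all names are illustrative:

```nim
type
  BlockOffer = object
    blockNumber: uint64
    contentKey: seq[byte]
    contentValue: seq[byte]

var
  lastPersistedBlockNumber = 1000'u64   # read back from the database on startup
  blockOffersQueue: seq[BlockOffer]     # stand-in for the async offers queue

proc buildOffersFromProofs(blockNumber: uint64): seq[BlockOffer] =
  # The real bridge iterates the account/storage proofs and bytecode
  # collected for the block's state diff; left empty in this sketch.
  discard

proc backfill(toBlock: uint64) =
  for n in lastPersistedBlockNumber + 1 .. toBlock:
    for offer in buildOffersFromProofs(n):
      blockOffersQueue.add offer        # workers gossip these via portal JSON-RPC
    lastPersistedBlockNumber = n        # progress recorded in the database
```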
- Add basic validation for LC bootstrap gossip, validating either
by the trusted block root (only one) when not synced, or by comparing
with the header of the latest finality update when synced.
- Update portal_bridge beacon to also gossip bootstraps into the
network at the end of each epoch.
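A simplified sketch of the two validation paths, with stand-in types; the real code works on the actual LC bootstrap and finality-update structures:

```nim
import std/options

type Root = array[32, byte]

proc validateBootstrap(
    bootstrapHeaderRoot: Root,
    trustedBlockRoot: Root,              # configured; used while not yet synced
    latestFinalizedRoot: Option[Root]    # known once the light client is synced
): bool =
  if latestFinalizedRoot.isSome():
    # Synced: accept only if it matches the header of the latest finality update.
    bootstrapHeaderRoot == latestFinalizedRoot.get()
  else:
    # Not synced: only the single configured trusted block root is accepted.
    bootstrapHeaderRoot == trustedBlockRoot
```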