* Aligns response with openapi spec
* Fixes json serialization for local content response
* Review comments by Mark
* Fixes missed rename
* Removes array type for dataList
* Add MarketError
Add MarketError and convert all EthersErrors (ProviderError, SignerError) to MarketError
* Include token contract call in conversion of ethers error
This flag was originally used to change OnChainClock behavior when using Hardhat as an Ethereum chain source, due to a very strange bug that would mark the timestamp of new blocks as one second off from the timestamp they should have had.
The issue has since been worked around in another manner, so this flag is no longer needed.
* json > nim-serde bump
Should wait until serde is integrated into nim-ethers before making these changes, as there will be fewer import exceptions required.
* bump nim-serde
* change func to proc due to chronicles side effects
* import serde into utils/json, use as proxy
import nim-serde into utils/json and use utils/json as a proxy for serde functions, including overloading `%` and `fromJson` for application types.
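A minimal sketch of the proxy pattern (illustrative only: `Address` is a made-up type, and std/json stands in for nim-serde so the sketch compiles standalone):

```nim
# utils/json re-exports the underlying json module and adds `%` /
# `fromJson` overloads for application types, so the rest of the
# codebase only ever imports utils/json.
import std/json
export json

type Address = object
  value: string

proc `%`*(a: Address): JsonNode =
  # serialize the application type as its string form
  %a.value

proc fromJson*(_: type Address, n: JsonNode): Address =
  # deserialize back from the string form
  Address(value: n.getStr())

when isMainModule:
  let a = Address(value: "0x1a2b")
  assert Address.fromJson(%a) == a
```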
* update tests to use serde
* bump serde to latest
* remove testjson -- no longer needed
* bump serde in nimble
* updates to reconcile rebase with master
* use a real verifying contract address
* contracts: cleanup
* marketplacesuite: set correct circuit files, interval mining
* Proofs tests updates
Contains changes to get the proving tests working reliably.
* integration: use correct circom artifacts for creating proofs
* integration: cleanup
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* wire prover into node
* stricter case object checks
* return correct proof
* misc renames
* adding useful traces
* fix nodes and tolerance to match expected params
* format challenges in logs
* add circom compat to solidity groth16 conversion
* update
* bump time to give nodes time to load with all circom artifacts
* misc
* misc
* use correct dataset geometry in erasure
* make errors more searchable
* use parens around `=? (await...)` calls
* styling
* styling
* use push raises
* fix to match constructor arguments
* merge master
* merge master
* integration: fix proof parameters for a test
Increased times due to ZK proof generation.
Increased storage requirement because we're now hosting
5 slots instead of 1.
* sales: calculate initial proof at start of period
reason: this ensures that the period (and therefore
the challenge) doesn't change while we're calculating
the proof
* integration: fix proof parameters for tests
Increased times due to waiting on next period.
Fixed data to be of right size.
Updated expected payout due to hosting 5 slots.
* sales: wait for stable proof challenge
When the block pointer is nearing the
wrap-around point, we wait another period
before calculating a proof.
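A sketch of the guard, assuming a one-byte block pointer that wraps past 255 and a hypothetical width for the unstable window:

```nim
const downtimeZone: uint8 = 64  # assumed width of the wrap-around zone

proc challengeIsStable(pointer: uint8): bool =
  # if the pointer sits inside the wrap-around zone, the challenge derived
  # from it could change mid-computation, so wait one more period
  pointer < 255'u8 - downtimeZone

when isMainModule:
  assert challengeIsStable(10)        # safely below the wrap-around point
  assert not challengeIsStable(250)   # too close to wrapping: wait a period
```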
* fix merge conflict
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* rework cli to accept circuit params
* check circom file extensions
* adding new required cli changes
* don't use ufcs
* persistence is a command now
* use `nimOldCaseObjects` switch for nim confutils compat
* misc
* Update cli integration tests
* Fix: simulateProofFailures option is not for validator
* moving circom params under `prover` command
* update tests
* Use circuit assets from codex-contract-eth in tests
* Add "prover" cli command to tests
* use correct stores
* make `verifier` a cmd option
* update circuit artifacts path
* fix cli tests
* Update integration tests to use cli commands
Integration tests have been updated to use the new cli commands. The api for usage in the integration tests has also changed a bit.
Proofs tests have been updated to use 5 nodes and 8 blocks of data. The remaining integration tests also need to be updated.
* remove parsedCli from CodexConfig
Instead, parse the cli args on the fly when needed
* remove unneeded gcsafes
* graceful shutdowns
Where possible, do not raise assert, as other nodes in the test may already be running. Instead, raise exceptions, catch them in multinodes.nim, and attempt to do a teardown before failing the test.
`abortOnError` is set to true so that `fail()` will quit immediately, after teardown has been run.
* update testmarketplace to new api, with valid EC params
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* Update codex-contracts-eth
* contracts: update G2Point definition
* integration: disable automatic advancing of time
reason: makes reasoning about timing in tests harder,
because the period is set to 60 seconds in the
marketplace configuration, but this code switches to
a new period every 500 milliseconds
* integration: fix parameters of marketplace payouts test
* integration: update test settings
* integration: fix typo
* integration: workaround for hardhat issue
Subscriptions expire after 5 minutes when using
websockets. Use http and polling instead.
* integration: remove origDatasetSizeInBlocks
* integration: fix proof parameters for test
* integration: do not log output by default
* integration: fix failure rate in test
* integration: fix warning
* integration: include clock in logs
* integration: allow for more periods
5 periods was cutting it close; with too much pointer downtime, the test would fail
* clock: add 1 second leeway before acting on timeouts
* sales: do not raise in proving loop when slot is cancelled
Allow the onCancelled callback to handle cancellation, and
the onFailed callback to handle failed requests.
* sales: cleanup proving tests
* sales: fix sales agent tests
* sales: stop cancellation loop when request started, finished or failed
* sales: fix flaky test
* sales: fix another flaky test
* clock: add comment explaining the + 1 second
Co-Authored-By: benbierens <thatbenbierens@gmail.com>
---------
Co-authored-by: benbierens <thatbenbierens@gmail.com>
* market: use `pending` blocktag when querying onchain state
* clock: use wall clock in integration tests
reason: we'll need to wait for the next period in
integration tests, and we can't do that if the
time doesn't advance
* clock: remove unused field
* integration: use pending block time to get current time
* clock: fix on-chain clock for hardhat
Only use 'latest' block for updates
Only update the first time you see a block
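A toy version of the resulting update guard (field and proc names are illustrative):

```nim
type OnChainClock = object
  lastBlockNumber: uint64  # highest block already acted on
  offset: int64            # chain time minus wall time

proc update(clock: var OnChainClock; number: uint64;
            blockTime, wallTime: int64) =
  # callers feed in only 'latest' blocks; a block we have already seen
  # must not move the clock again
  if number <= clock.lastBlockNumber:
    return
  clock.lastBlockNumber = number
  clock.offset = blockTime - wallTime
```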
* integration: do not start tests with a very outdated block
* integration: allow for longer expiry period
* Applies peer-scoped lock to peer task handler.
* Replace async lock with delete-first approach.
* Cleanup some logging
* Adds inFlight flag to WantListEntry
* Clears inflight flag when local retrieval fails.
* Adds test for setting of in-flight
* Adds test for clearing in-flight when lookup fails
* Review comments by Tomasz
---------
Co-authored-by: gmega <giuliano.mega@gmail.com>
* add block cancellation support + tests
* tie issueCancellations into resolveBlocks for proper exception tracking, address comments
* pull cancellation as separate primitive in BlockExcNetwork
* use allFutures, rename issueBlockCancellations -> cancelBlocks
* use trc instead of wrn to register send error
* do not log peer IDs
* rework backend to instantiate key at initialization
* add groth16 convertes for solidity
* prover takes num samples on construction
* add zkey file
* rework helpers
* rename types
* update tests
* reworked test helpers
* rename types
* rework test
* test all slots artifacts
* bump to latest version
* don't pass erasure
* use correct stores and construct erasure inside the node
* fix tests to match new constructor
* remove prover argument
* review comments
* revert failing on no-prover for now
* small cleanup
* comment out invalid proofs broken test
* refactor multi node test suite
Refactor the multinode test suite into the marketplace test suite.
- Arbitrary number of nodes can be started with each test: clients, providers, validators
- Hardhat can also be started locally with each test, usually for the purpose of saving and inspecting its log file.
- Log files for all nodes can be persisted on disk, with configuration at the test-level
- Log files, if persisted (as specified in the test), will be persisted to a CI artifact
- Node config is specified at the test-level instead of the suite-level
- Node/Hardhat process starting/stopping is now async, and runs much faster
- Per-node config includes:
- simulating proof failures
- logging to file
- log level
- log topics
- storage quota
- debug (print logs to stdout)
- Tests find next available ports when starting nodes, as closing ports on Windows can lag
- Hardhat is no longer required to be running prior to starting the integration tests (as long as Hardhat is configured to run in the tests).
- If Hardhat is already running, a snapshot will be taken and reverted before and after each test, respectively.
- If Hardhat is not already running and configured to run at the test-level, a Hardhat process will be spawned and torn down before and after each test, respectively.
* additional logging for debug purposes
* address PR feedback
- fix spelling
- revert change from catching ProviderError to SignerError -- this should be handled more consistently in the Market abstraction, and will be handled in another PR.
- remove method label from raiseAssert
- remove unused import
* Use API instead of command exec to test for free port
Use chronos `createStreamServer` API to test for free port by binding localhost address and port. Use `ServerFlags.ReuseAddr` to enable reuse of same IP/Port on multiple test runs.
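A sketch of the probe (`createStreamServer` and `ServerFlags.ReuseAddr` are real chronos APIs; the wrapper itself is illustrative):

```nim
import pkg/chronos

proc portIsFree(port: int): bool =
  ## Binding localhost:port succeeds only when nothing else holds the
  ## port; ReuseAddr lets repeated test runs rebind without waiting for
  ## sockets stuck in TIME_WAIT.
  try:
    let address = initTAddress("127.0.0.1:" & $port)
    let server = createStreamServer(address, flags = {ServerFlags.ReuseAddr})
    waitFor server.closeWait()
    true
  except TransportOsError:
    false
```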
* clean up
* remove upraises annotations from tests
* Update tests to work with updated erasure coding slot sizes
* update dataset size, nodes, tolerance to match valid ec params
Integration tests now have valid dataset sizes (blocks), tolerances, and numbers of nodes, to work with valid ec params. These values are validated when storage is requested.
Print the rest api failure message (via doAssert) when a rest api call fails (e.g. the rest api may validate some ec params).
All integration tests pass when the async `clock.now` changes are reverted.
* dont use async clock for now
* fix workflow
* move integration logs upload to reusable
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* wire in circom backend
* should contain leaves
* adding circom compat and circuits deps
* update windows build
* fix windows build
* improve test names
* move proving defaults to codextypes
* remove unneeded imports and move defaults to codextypes
* capture error code on backend failure
* Smart contracts update: Groth16Proof instead of bytes
* Use dummy verifier for now, until we can create ZK proofs
* Fix tests: submit proof only when slot is filled
* Submit dummy proofs for now
* More detailed log when proof submission failed
* Use dummy verifier for integration tests
For now at least
* Fix mistake in blanket renaming to ethProvider
* Update to latest codex-contracts-eth
* feat: zkey-hash from chain
* Fix zkeyHash
---------
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
* implement a logging proxy
The logging proxy:
- prevents the need to import chronicles (as well as export except toJson),
- prevents the need to override `writeValue` or to use or import nim-json-serialization elsewhere in the codebase, allowing for sole use of utils/json for de/serialization,
- and handles json formatting correctly in chronicles json sinks
* Rename logging -> logutils to avoid ambiguity with common names
* clean up
* add setProperty for JsonRecord, remove nim-json-serialization conflict
* Allow specifying textlines and json format separately
Not specifying a LogFormat will apply the formatting to both textlines and json sinks.
Specifying a LogFormat will apply the formatting to only that sink.
* remove unneeded usages of std/json
We only need to import utils/json instead of std/json
* move serialization from rest/json to utils/json so it can be shared
* fix NoColors ambiguity
Was causing unit tests to fail on Windows.
* Remove nre usage to fix Windows error
Windows was erroring with `could not load: pcre64.dll`. Instead of fixing that error, remove the pcre usage :)
* Add logutils module doc
* Shorten logutils.formatIt for `NBytes`
Both json and textlines formatIt were not needed, and could be combined into one formatIt
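A standalone sketch of that single-definition idea (names are illustrative, not the actual logutils API):

```nim
import std/json

template formatIt(T: typedesc, body: untyped) {.dirty.} =
  # one definition drives both sinks: textlines gets the string, and the
  # json sink wraps the same string in a JsonNode
  proc toTextLine(it: T): string = body
  proc toJsonSink(it: T): JsonNode = %toTextLine(it)

type NBytes = distinct int64

formatIt(NBytes): $int64(it) & "'NByte"

when isMainModule:
  assert toTextLine(5.NBytes) == "5'NByte"
  assert toJsonSink(5.NBytes).getStr == "5'NByte"
```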
* remove debug integration test config
debug output and logformat of json for integration test logs
* Use ## module doc to support docgen
* bump nim-poseidon2 to export fromBytes
Before the changes in this branch, fromBytes was likely being resolved by nim-stew or another dependency. With the changes in this branch, that dependency was removed and fromBytes could no longer be resolved. Exporting fromBytes from nim-poseidon2 restores the correct resolution.
* fixes to get compiling after rebasing master
* Add support for Result types being logged using formatIt
* Setting up testfixture for proof datasampler
* Sets up calculating number of cells in a slot
* Sets up tests for bitwise modulo
* Implements cell index collection
* setting up slot blocks module
* Implements getting treeCID from slot
* implements getting slot blocks by index
* Implements out-of-range check for slot index
* cleanup
* Sets up getting sample from block
* Implements selecting a cell sample from a block
* Implements building a minitree for block cells
* Adds method to get dataset block index from slot block index
* It's running
* splits up indexing
* almost there
* Fixes test. Implementation is now functional
* Refactoring to object-oriented
* Cleanup
* Lining up output type with updated reference code.
* setting up
* Updates expected samples
* Updates proof checking test to match new format
* move builder to own dir
* move sampler to own dir
* fix paths
* various changes to add support for the sampler
* wip sampler implementation
* don't use upraises
* wip sampler integration
* misc
* move tests around
* Various fixes to select correct slot and block index
* removing old tests
* cleanup
* misc
fix tests that work with correct cell indices
* remove unused file
* fixup logging
* add logscope
* truncate entropy to 31 bytes, otherwise it might be greater than the modulus
* forward getCidAndProof to local store
* misc
* Adds missing test for initial-proving state
* reverting back to correct slot/block indexing
* fix tests for revert
* misc
* misc
---------
Co-authored-by: benbierens <thatbenbierens@gmail.com>
* rework merkle tree support
* rename merkletree -> codexmerkletree
* tree and proof encoding/decoding
* style
* adding codex merkle and coders tests
* use default hash codec
* proof size changed
* add from nodes test
* shorten file names
* wip poseidon tree
* shorten file names
* root returns a result
* import poseidon tests
* fix merge issues and cleanup a few warnings
* setting up slot builder
* Getting cids in slot
* ensures blocks are divisible by number of slots
* wip
* Implements indexing strategies
* Swaps in indexing strategy into erasure.
* wires slot and indexing tests up
* Fixes issue where the stepped indexing strategy gives wrong values for the smallest of ranges
* debugs indexing strategies
* Can select slot blocks
* finding number of pad cells
* Implements building slot tree
* finishes implementing slot builder
* Adds check that block size is a multiple of cell size
* Cleanup slotbuilder
* Review comments by Tomasz
* Fixes issue where ecK was used as numberOfSlots.
* rework merkle tree support
* deps
* rename merkletree -> codexmerkletree
* tree and proof encoding/decoding
* style
* adding codex merkle and coders tests
* remove new codecs for now
* proof size changed
* add from nodes test
* shorten file names
* wip poseidon tree
* shorten file names
* fix bad `elements` iter
* bump
* bump
* wip
* reworking slotbuilder
* move out of manifest
* expose getCidAndProof
* import index strat...
* remove getMHash
* remove unused artifacts
* alias zero
* add digest for multihash
* merge issues
* remove unused hashes
* add option to result converter
* misc
* fix tests
* add helper to derive EC block count
* rename method
* misc
* bump
* extract slot root building into own proc
* revert to manifest accessor
---------
Co-authored-by: benbierens <thatbenbierens@gmail.com>
* Workaround for Hardhat timestamp bug
Likely due to a Hardhat bug in which the callbacks for subscription events are called and awaited before updating its local understanding of the last block time, Hardhat will report a block time in the `newHeads` event that is generally 1 second before the time reported from `getLatestBlock.timestamp`. This was causing issues with the OnChainClock's offset and therefore the `now()` used by the `OnChainClock` would sometimes be off by a second (or more), causing tests to fail.
This commit introduces a `codex_use_hardhat` compilation flag that, when set, will always get the latest block timestamp from Hardhat via the `getLatestBlock.timestamp` RPC call for `OnChainClock.now` calls. Otherwise, the last block timestamp reported in the `newHeads` event will be used.
Update the docker dist tests compilation flag for simulated proof failures (it was not correct), and explicitly add the `codex_use_hardhat=false` for clarity.
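A minimal standalone illustration of the compile-time switch (the enum and strategy names are placeholders):

```nim
type ClockSource = enum
  fromNewHeads        # trust the timestamp delivered in `newHeads` events
  fromLatestBlockRpc  # always re-query getLatestBlock.timestamp

# compile with `-d:codex_use_hardhat` to flip the strategy
const clockSource =
  when defined(codex_use_hardhat): fromLatestBlockRpc
  else: fromNewHeads

when isMainModule:
  echo "OnChainClock.now() source: ", clockSource
```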
* enable simulated proof failures for coverage
* comment out failing test on linux -- will be replaced
* bump codex contracts eth
* add back clock offset for non-hardhat cases
* bump codex-contracts-eth
increases the pointer by 67 blocks on each period increase
* Add `codex_use_hardhat` flag to coverage tests
* Adds test for encoding/decoding protected manifest
* Setting up verifiable manifest
* mysterious mysteries
* Successful encoding test for verifiable manifests
* extracts toF out of users of manifest code
* Update codex/manifest/coders.nim
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
* Review comments by Dmitriy
* Adds missing verifiable print to $ method.
* Replace poseidon2 F type with int as temporary stand-in for verification hashes
* Replaces verification hash placeholder with CID
---------
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* Add get active slot /slots/{slotId} to REST api, use utils/json
- Add endpoint /slots/{slotId} to get an active SalesAgent from the Sales module. Used in integration tests to test when a sale has reached a certain state. Those integration test changes will be included in a larger PR, coming later.
- Add OpenAPI changes for new endpoint and associated components
- Use utils/json instead of nim-json-serialization. Required exemption of imports from several packages that export nim-json-serialization by default.
* Only except `toJson` from import/export of chronicles
In proving.onCancelled, the `waitFor state.loop.cancelAndWait()` was never completing. It turns out this was not needed: when changing states, the current state's run is cancelled, which automatically cancels the state's prove loop, because it is a child of proving.run. Therefore, the logic to cancelAndWait the prove loop was removed.
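A toy demonstration of the chronos behavior relied on here, namely that cancelling a parent future also cancels the child it is awaiting:

```nim
import pkg/chronos

proc proveLoop() {.async.} =
  while true:
    await sleepAsync(1.seconds)  # stand-in for proof work

proc run() {.async.} =
  await proveLoop()  # the loop is a child of run()

when isMainModule:
  let state = run()
  waitFor state.cancelAndWait()  # cancelling run() cancels proveLoop() too
  assert state.cancelled()
```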
* Fix REST endpoints semantics
* update endpoint description
* update, operation id
* Adding enum support
* make enum deserializer public
* add support for listing manifests
* test `/data` endpoint to list local manifests
* debug leftovers
* remove commented out line
* use str on JString types, `$` will preserve `"`
* Adding enum support
* deserialize cid test
* make enum deserializer public
* unify fromJson for objects and refs
* add enum deserialization testing
* Blockexchange uses merkle root and index to fetch blocks
* Links the network store getTree to the local store.
* Update codex/stores/repostore.nim
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Tomasz Bekas <tomasz.bekas@gmail.com>
* Rework erasure.nim to include recent cleanup
* Revert accidental changes to lib versions
* Addressing review comments
* Storing proofs instead of trees
* Fix a comment
* Fix broken tests
* Fix for broken testerasure.nim
* Addressing PR comments
---------
Signed-off-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: benbierens <thatbenbierens@gmail.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* Adds endpoint for listing files (manifests) in node. Useful for demo UI.
* Moves upload/download/files into content API calls.
* Cleans up json serialization for manifest
* Cleans up some more json serialization
* Moves block iteration and decoding to node.nim
* Moves api methods into their own init procs.
* Applies RestContent api object.
* Replaces format methods with Rest objects in json.nim
* Unused import
* Review comments by Adam
* Fixes issue where content/local endpoint clashes with content/cid.
* faulty merge resolution
* Renames content API to data.
* Fixes faulty rebase
* Adds test for data/local API
* Renames local and download api.
## Problem
When Availabilities are created, the number of bytes in the Availability is reserved in the repo, so that those bytes on disk cannot be written to otherwise. When a node receives a request for storage and a previously created Availability is matched, an attempt will be made to fill a slot in the request (more accurately, the request's slots are added to the SlotQueue, and eventually those slots will be processed).

During download, bytes that were reserved for the Availability were released as they were written to disk. To prevent more bytes from being released than were reserved, the Availability was marked as used during the download, so that no other requests would match it and no new downloads (and byte releases) would begin.

The unfortunate downside is that the number of Availabilities a node has determines its download concurrency capacity. If, for example, a node creates a single Availability covering all the disk space the operator is willing to use, only one download could occur at a time, and the node could miss out on storage opportunities.
## Solution
To alleviate the concurrency issue, each time a slot is processed, a Reservation is created, which takes size (aka reserved bytes) away from the Availability and stores it in the Reservation object. This can be done as many times as needed, as long as there are enough bytes remaining in the Availability. Therefore, concurrent downloads are no longer limited by the number of Availabilities. Instead, they are more likely limited by the SlotQueue's `maxWorkers`.
From a database design perspective, an Availability has zero or more Reservations.
Reservations are persisted in the RepoStore's metadata, along with Availabilities. The metadata store key path for Reservations is `meta/sales/reservations/<availabilityId>/<reservationId>`, while Availabilities are stored one level up, at `meta/sales/reservations/<availabilityId>`, allowing all Reservations for an Availability to be queried (this is not currently needed, but may be useful when work to restore Availability size is implemented; more on this later).
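For illustration, the key layout as plain strings (the real code builds datastore keys):

```nim
proc availabilityKey(availabilityId: string): string =
  "meta/sales/reservations/" & availabilityId

proc reservationKey(availabilityId, reservationId: string): string =
  # one level below its availability, so a prefix query on the
  # availability key yields all of its reservations
  availabilityKey(availabilityId) & "/" & reservationId
```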
### Lifecycle
When a reservation is created, its size is deducted from the Availability, and when a reservation is deleted, any remaining size (bytes not written to disk) is returned to the Availability. If the request finishes, is cancelled (expired), or an error occurs, the Reservation is deleted (and any undownloaded bytes are returned to the Availability). In addition, when the Sales module starts, any Reservations that are not actively being used in a filled slot are deleted.
Persisting a Reservation until after a storage request is completed will allow the originally set Availability size to be reclaimed once the request contract has completed. That feature is yet to be implemented, but the work in this PR is a step in that direction.
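A toy model of the size accounting in this lifecycle (types and procs are illustrative):

```nim
type
  Availability = object
    size: uint64  # bytes still available to reserve
  Reservation = object
    size: uint64  # reserved bytes not yet written to disk

proc createReservation(a: var Availability, bytes: uint64): Reservation =
  doAssert bytes <= a.size, "not enough bytes remaining in Availability"
  a.size -= bytes             # deducted from the Availability on creation
  Reservation(size: bytes)

proc onBytesWritten(r: var Reservation, bytes: uint64) =
  r.size -= bytes             # released as downloaded data hits the disk

proc deleteReservation(a: var Availability, r: Reservation) =
  a.size += r.size            # undownloaded bytes flow back on completion,
                              # cancellation, or error
```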
### Unknowns
Reservation size is determined by the `StorageAsk.slotSize`. If more than `slotSize` bytes are attempted to be downloaded, the Reservation update will fail, and the state machine will move to a `SaleErrored` state, deleting the Reservation. This will likely prevent the slot from being filled.
### Notes
Based on #514
* cleanup erasure coding
* moar cleanup
* fix off by 1 issues in tests
* style
* consolidate decoding data code
* simplify tuple unpacking
* fix retrieve purchase
We don't support single blocks for now
* Apply suggestions from code review
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Dmitriy Ryajov <dryajov@gmail.com>
---------
Signed-off-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* [docs] fix two client scenario: add missing collateral
* [integration] separate step to wait for node to be started
* [cli] add option to specify ethereum private key
* Remove unused imports
* Fix warnings
* [integration] move type definitions to correct place
* [integration] wait a bit longer for a node to start in debug mode
When running against e.g. the Taiko testnet RPC, the node takes longer to start
* [integration] simplify handling of codex node and client
* [integration] add Taiko integration test
* [contracts] await token approval confirmation before next tx
* [contracts] deployment address of marketplace on Taiko
* [cli] --eth-private-key now takes a file name
Instead of supplying the private key on the command line,
expect the private key to be in a file with the correct
permissions.
* [utils] Fixes undeclared `activeChroniclesStream` on Windows
* [build] update nim-ethers to include PR #52
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [cli] Better error messages when reading eth private key
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [integration] simplify reading of cmd line arguments
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [build] update to latest version of nim-ethers
* [contracts] updated contract address for Taiko L2
* [build] update codex contracts to latest version
---------
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [sales] remove availability check before adding to slot queue
* [sales] add missing return statement
* [tests] remove 'eventuallyCheck' helper
* [sales] remove reservations from slot queue
* [tests] rename module `eventually` -> `always`
* [sales] increase slot queue size
Because it will now also hold items for which we haven't
checked availability yet.
* Improve integration testing client (CodexClient) and json serialization
The current client used for integration testing against the REST endpoints for Codex accepts and passes primitive types. This caused a hard-to-diagnose bug where a `uint` was not being deserialized correctly.
In addition, the json de/serialization done between the CodexClient and the REST client was not easy to read and was not tested.
These changes bring non-primitive types to most of the CodexClient functions, allowing us to lean on the compiler to ensure we're providing correct typings. More importantly, a json de/serialization util was created as a drop-in replacement for the std/json lib, with the main two differences being that field serialization is opt-in (instead of opt-out as in the case of json_serialization) and serialization errors are captured and logged, making debugging serialization issues much easier.
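A sketch of the opt-in idea (the `{.serialize.}` marker mirrors the description above; the `toJson` walker is illustrative and error capture is omitted):

```nim
import std/[json, macros]

template serialize() {.pragma.}  # marker: only tagged fields serialize

type Availability = object
  id {.serialize.}: string
  size {.serialize.}: uint64
  internalState: string  # untagged, so never serialized

proc toJson[T: object](obj: T): JsonNode =
  result = newJObject()
  for name, value in obj.fieldPairs:
    when value.hasCustomPragma(serialize):
      result[name] = %value

when isMainModule:
  let a = Availability(id: "abc", size: 42, internalState: "hidden")
  assert $toJson(a) == """{"id":"abc","size":42}"""
```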
* Update integration test to use nodes=2 and tolerance=1
* clean up
* logs
* poll log
* more logs
* time log for upload steps
* Adds metric and log for block retrieval time
* adds some logging
* move logging for storestream block indices
* Log at start and end of block iteration
* applies branch blockstore-bugfix
* Cleanup
* Cleanup
Add handling of empty blocks in the RepoStore.
* Add empty block handling to repostore for put, del, has
Also added tests for all empty block handling blockstore operations. This showed there was an ambiguous identifier present for `hasBlock`, so one of the two `hasBlock` definitions was removed in `repostore`.
* Change CacheStore to RepoStore in testerasure
As CacheStore is not used in the node, update the Datastore used in the erasure coding tests to be a RepoStore. This ensures that the K > 1 cases are being tested, where they will produce empty padding blocks in the erasure-coded manifests.
* extra utilities and tweaks
* add atlas lock
* update ignores
* break build into its own script
* update url rules
* base off codexdht's
* compile fixes for Nim 1.6.14
* update submodules
* convert mapFailure to procs to work around type resolution issues
* add toml parser for multiaddress
* change error type on keyutils
* bump nimbus build to use 1.6.14
* update gitignore
* adding new deps submodules
* bump nim ci version
* even more fixes
* more libp2p changes
* update keys
* fix eventually function
* adding coverage test file
* move coverage to build.nims
* use nimcache/coverage
* move libp2p import for tests into helper.nim
* remove named bin
* bug fixes for networkpeers (from Dmitriy)
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>