* add changes to use chronos v4 in compat mode
* switch chronos to compat fix branch
* use nimbus-build-system with configurable Nim repo
* add missing imports
* add missing await
* bump compat
* pin nim version in Makefile
* add await instead of asyncSpawn to advertisement queue loop
* bump DHT to v0.5.0
* allow error state of `onBatch` to propagate upwards in test code
* pin Nim compiler commit to avoid fetching stale branch
* make CI build against branch head instead of merge
* fix handling of return values in testslotqueue
* Rest API: add erasure coding constraints when requesting storage
* clean up
* Make error message for "dataset too small" more informative.
* fix API integration test
---------
Co-authored-by: gmega <giuliano.mega@gmail.com>
* Fix verifiable manifest constructor
* Add integration test for verifiable manifest download
Add integration test for testing download of verifiable dataset after creating request for storage
* add missing import
* add testecbug to integration suite
* Remove hardhat instance from integration test
* change description, drop echo
---------
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>
* integration: move REST API tests into their own module
* integration: move upload and download tests into their own module
* integration: move purchasing tests into their own module
* integration: move marketplace tests to the right module
* integration: mine a block *after* starting nodes
To ensure that tests involving multiple nodes do
not start with out-of-sync clocks
* Fix: do not swallow CancellationErrors
* integration: avoid underflow in UInt256
* network: remove unnecessary error handling
No Exceptions can occur, only Defects, because everything
is asyncSpawned.
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* network: do not raise in asyncSpawned proc
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* Aligns response with openapi spec
* Fixes json serialization for local content response
* Review comments by Mark
* Fixes missed rename
* Removes array type for dataList
* json > nim-serde bump
Should wait until serde is integrated into nim-ethers before making these changes, as fewer import exceptions will be required.
* bump nim-serde
* change func to proc due to chronicles side effects
* import serde into utils/json, use as proxy
import nim-serde into utils/json and use utils/json as a proxy for serde functions, including overloading `%` and `fromJson` for application types.
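For illustration, a minimal sketch of the proxy-module idea; it uses std/json and an invented `Availability` type purely to stay self-contained, whereas the real utils/json re-exports nim-serde:

```nim
# Sketch only: the real utils/json re-exports nim-serde; std/json is used here
# just to keep the example self-contained. `Availability` is an assumed type.
import std/json
export json   # callers import utils/json and get the json API transitively

type Availability* = object
  size*: int
  duration*: int

proc `%`*(a: Availability): JsonNode =
  %*{"size": a.size, "duration": a.duration}

proc fromJson*(_: type Availability, node: JsonNode): Availability =
  Availability(size: node["size"].getInt, duration: node["duration"].getInt)

when isMainModule:
  let a = Availability(size: 1024, duration: 3600)
  let restored = Availability.fromJson(% a)
  assert restored.size == a.size and restored.duration == a.duration
```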
* update tests to use serde
* bump serde to latest
* remove testjson -- no longer needed
* bump serde in nimble
* updates to reconcile rebase with master
* use a real verifying contract address
* contracts: cleanup
* marketplacesuite: set correct circuit files, interval mining
* Proofs tests updates
Contains changes to get the proving tests working reliably.
* integration: use correct circom artifacts for creating proofs
* integration: cleanup
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* wire prover into node
* stricter case object checks
* return correct proof
* misc renames
* adding useful traces
* fix nodes and tolerance to match expected params
* format challenges in logs
* add circom compat to solidity groth16 conversion
* update
* bump time to give nodes time to load all circom artifacts
* misc
* misc
* use correct dataset geometry in erasure
* make errors more searchable
* use parens around `=? (await...)` calls
* styling
* styling
* use push raises
* fix to match constructor arguments
* merge master
* merge master
* integration: fix proof parameters for a test
Increased times due to ZK proof generation.
Increased storage requirement because we're now hosting
5 slots instead of 1.
* sales: calculate initial proof at start of period
reason: this ensures that the period (and therefore
the challenge) doesn't change while we're calculating
the proof
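In plain arithmetic (names assumed; the real code goes through the marketplace periodicity helpers), "start of period" is just the next period boundary:

```nim
# Sketch: compute the next period boundary from the chain time and the period
# length; starting the proof computation there means the challenge stays fixed
# for the whole computation.
proc nextPeriodStart(now, periodLength: int64): int64 =
  ((now div periodLength) + 1) * periodLength

assert nextPeriodStart(now = 125, periodLength = 60) == 180
```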
* integration: fix proof parameters for tests
Increased times due to waiting on next period.
Fixed data to be of right size.
Updated expected payout due to hosting 5 slots.
* sales: wait for stable proof challenge
When the block pointer is nearing the
wrap-around point, we wait another period
before calculating a proof.
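A minimal sketch of the guard, with the pointer width and the size of the danger zone assumed for illustration:

```nim
# Sketch: the proof challenge is derived from a block pointer that wraps at
# 256; if the pointer is inside the assumed danger zone, wait one more period
# so the challenge does not change mid-computation.
const downtimeWindow = 64   # assumed size of the wrap-around danger zone

proc challengeIsStable(blockPointer: uint8): bool =
  int(blockPointer) < 256 - downtimeWindow

assert challengeIsStable(10)
assert not challengeIsStable(250)
```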
* fix merge conflict
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* rework cli to accept circuit params
* check circom files extension
* adding new required cli changes
* don't use ufcs
* persistence is a command now
* use `nimOldCaseObjects` switch for nim confutils compat
* misc
* Update cli integration tests
* Fix: simulateProofFailures option is not for validator
* moving circom params under `prover` command
* update tests
* Use circuit assets from codex-contract-eth in tests
* Add "prover" cli command to tests
* use correct stores
* make `verifier` a cmd option
* update circuit artifacts path
* fix cli tests
* Update integration tests to use cli commands
Integration tests have been updated to use the new cli commands. The api for usage in the integration tests has also changed a bit.
Proofs tests have been updated to use 5 nodes and 8 blocks of data. The remaining integration tests also need to be updated.
* remove parsedCli from CodexConfig
Instead, parse the cli args on the fly when needed
* remove unneeded gcsafes
* graceful shutdowns
Where possible, do not use `raiseAssert`, as other nodes in the test may already be running. Instead, raise exceptions, catch them in multinodes.nim, and attempt a teardown before failing the test.
`abortOnError` is set to true so that `fail()` will quit immediately, after teardown has been run.
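The pattern, roughly (std/unittest stands in for the project's test runner, and the proc names are assumed):

```nim
# Sketch of teardown-before-fail: node startup raises instead of asserting,
# the test catches it, tears down whatever did start, then fails the test.
import std/unittest

abortOnError = true   # fail() stops the run immediately once teardown has run

proc startNodes() =
  # stand-in for starting codex/hardhat processes
  raise newException(CatchableError, "node failed to start")

proc teardownNodes() =
  echo "stopping any nodes that did start"

suite "multinode suite":
  test "example":
    try:
      startNodes()
    except CatchableError as e:
      echo "setup failed: ", e.msg
      teardownNodes()
      fail()
```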
* update testmarketplace to new api, with valid EC params
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* Update codex-contracts-eth
* contracts: update G2Point definition
* integration: disable automatic advancing of time
reason: makes reasoning about timing in tests harder,
because the period is set to 60 seconds in the
marketplace configuration, but this code switches to
a new period every 500 milliseconds
* integration: fix parameters of marketplace payouts test
* integration: update test settings
* integration: fix typo
* integration: workaround for hardhat issue
Subscriptions expire after 5 minutes when using
websockets. Use http and polling instead.
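In nim-ethers terms this just means constructing the provider with an HTTP URL (sketch; the address is an assumption):

```nim
# Sketch: an HTTP provider polls for events instead of relying on websocket
# subscriptions, which hardhat expires after ~5 minutes.
import pkg/ethers

proc connectProvider(): JsonRpcProvider =
  JsonRpcProvider.new("http://127.0.0.1:8545")
```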
* integration: remove origDatasetSizeInBlocks
* integration: fix proof parameters for test
* integration: do not log output by default
* integration: fix failure rate in test
* integration: fix warning
* integration: include clock in logs
* integration: allow for more periods
5 periods was cutting it close; with too much pointer downtime, the test would fail.
* market: use `pending` blocktag when querying onchain state
* clock: use wall clock in integration tests
reason: we'll need to wait for the next period in
integration tests, and we can't do that if the
time doesn't advance
* clock: remove unused field
* integration: use pending block time to get current time
* clock: fix on-chain clock for hardhat
Only use 'latest' block for updates
Only update the first time you see a block
* integration: do not start tests with a very outdated block
* integration: allow for longer expiry period
* don't pass erasure
* use correct stores and construct erasure inside the node
* fix tests to match new constructor
* remove prover argument
* review comments
* revert failing on no-prover for now
* small cleanup
* comment out invalid proofs broken test
* refactor multi node test suite
Refactor the multinode test suite into the marketplace test suite.
- An arbitrary number of nodes can be started with each test: clients, providers, validators
- Hardhat can also be started locally with each test, usually for the purpose of saving and inspecting its log file.
- Log files for all nodes can be persisted on disk, with configuration at the test-level
- Log files, if persisted (as specified in the test), will be persisted to a CI artifact
- Node config is specified at the test-level instead of the suite-level
- Node/Hardhat process starting/stopping is now async, and runs much faster
- Per-node config includes (a sketch of the shape follows this list):
- simulating proof failures
- logging to file
- log level
- log topics
- storage quota
- debug (print logs to stdout)
- Tests find next available ports when starting nodes, as closing ports on Windows can lag
- Hardhat is no longer required to be running prior to starting the integration tests (as long as Hardhat is configured to run in the tests).
- If Hardhat is already running, a snapshot will be taken and reverted before and after each test, respectively.
- If Hardhat is not already running and configured to run at the test-level, a Hardhat process will be spawned and torn down before and after each test, respectively.
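The per-node options listed above could map onto a config shape roughly like this (types and field names are hypothetical, for illustration only):

```nim
# Hypothetical shapes only -- the real suite's types and names differ.
import std/options

type
  CodexNodeConfig = object
    simulateProofFailures: int    # fail every N proofs (0 = disabled)
    logFile: bool                 # persist this node's log to disk / CI artifact
    logLevel: string
    logTopics: seq[string]
    storageQuota: uint64
    debug: bool                   # also print logs to stdout

  NodeConfigs = object
    clients: Option[seq[CodexNodeConfig]]
    providers: Option[seq[CodexNodeConfig]]
    validators: Option[seq[CodexNodeConfig]]
    hardhat: bool                 # spawn (and tear down) a local hardhat

let nodeConfigs = NodeConfigs(
  clients: some(@[CodexNodeConfig(logLevel: "TRACE", debug: false)]),
  providers: some(@[
    CodexNodeConfig(simulateProofFailures: 2, logFile: true,
                    storageQuota: 1073741824'u64)]),
  hardhat: true)
```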
* additional logging for debug purposes
* address PR feedback
- fix spelling
- revert change from catching ProviderError to SignerError -- this should be handled more consistently in the Market abstraction, and will be handled in another PR.
- remove method label from raiseAssert
- remove unused import
* Use API instead of command exec to test for free port
Use the chronos `createStreamServer` API to test for a free port by binding the localhost address and port. Use `ServerFlags.ReuseAddr` to enable reuse of the same IP/port across multiple test runs.
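A sketch of the probing loop (the helper name and loop structure are assumptions; only `createStreamServer` and `ServerFlags.ReuseAddr` come from the change itself):

```nim
# Sketch: a port is considered free if a stream server can bind to it on
# localhost; ReuseAddr avoids clashes with sockets lingering from earlier runs.
import pkg/chronos

proc nextFreePort(startPort: int): Future[int] {.async.} =
  var port = startPort
  while true:
    try:
      let server = createStreamServer(
        initTAddress("127.0.0.1:" & $port), flags = {ServerFlags.ReuseAddr})
      await server.closeWait()
      return port
    except TransportOsError:
      inc port

when isMainModule:
  echo waitFor(nextFreePort(8080))
```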
* clean up
* remove upraises annotations from tests
* Update tests to work with updated erasure coding slot sizes
* update dataset size, nodes, tolerance to match valid ec params
Integration tests now have valid dataset sizes (blocks), tolerances, and numbers of nodes, so that they work with valid ec params. These values are validated when requesting storage.
Print the rest api failure message (via doAssert) when a rest api call fails (eg the rest api may validate some ec params).
All integration tests pass when the async `clock.now` changes are reverted.
* dont use async clock for now
* fix workflow
* move integration logs upload to reusable workflow
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* Smart contracts update: Groth16Proof instead of bytes
* Use dummy verifier for now, until we can create ZK proofs
* Fix tests: submit proof only when slot is filled
* Submit dummy proofs for now
* More detailed log when proof submission failed
* Use dummy verifier for integration tests
For now at least
* Fix mistake in blanket renaming to ethProvider
* Update to latest codex-contracts-eth
* feat: zkey-hash from chain
* Fix zkeyHash
---------
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
* implement a logging proxy
The logging proxy:
- prevents the need to import chronicles (as well as export except toJson),
- prevents the need to override `writeValue` or to use or import nim-json-serialization elsewhere in the codebase, allowing for sole use of utils/json for de/serialization,
- and handles json formatting correctly in chronicles json sinks
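A rough sketch of the proxy idea (simplified; the real logutils also wires the same formatting into the json sink):

```nim
# Sketch: logutils is the only module that imports chronicles; everything else
# imports logutils. `NBytes` here is a stand-in application type.
import pkg/chronicles except toJson
export chronicles except toJson

type NBytes* = distinct int64

chronicles.formatIt(NBytes):
  $int64(it) & "'NByte"
```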
* Rename logging -> logutils to avoid ambiguity with common names
* clean up
* add setProperty for JsonRecord, remove nim-json-serialization conflict
* Allow specifying textlines and json format separately
Not specifying a LogFormat will apply the formatting to both textlines and json sinks.
Specifying a LogFormat will apply the formatting to only that sink.
* remove unneeded usages of std/json
We only need to import utils/json instead of std/json
* move serialization from rest/json to utils/json so it can be shared
* fix NoColors ambiguity
Was causing unit tests to fail on Windows.
* Remove nre usage to fix Windows error
Windows was erroring with `could not load: pcre64.dll`. Instead of fixing that error, remove the pcre usage :)
* Add logutils module doc
* Shorten logutils.formatIt for `NBytes`
Both json and textlines formatIt were not needed, and could be combined into one formatIt
* remove debug integration test config
debug output and logformat of json for integration test logs
* Use ## module doc to support docgen
* bump nim-poseidon2 to export fromBytes
Before the changes in this branch, fromBytes was likely being resolved by nim-stew or another dependency. With the changes in this branch, that dependency was removed and fromBytes could no longer be resolved. By exporting fromBytes from nim-poseidon2, the correct resolution now happens.
* fixes to get compiling after rebasing master
* Add support for Result types being logged using formatIt
`skip` seems to work only sometimes. It intermittently fails on Windows in CI. The reason for skipping the test in the first place was to prevent intermittent failures of the test itself, so adding `skip` did not improve the situation, unfortunately.
* temporarily remove failing integration test
Temporarily removing this test for now, as it occasionally fails on macos/linux. This test has been completely refactored in PR #607, in which it will be reintroduced.
* skip instead of comment out
* Workaround for Hardhat timestamp bug
Likely due to a Hardhat bug in which the callbacks for subscription events are called and awaited before its local understanding of the last block time is updated, Hardhat will report a block time in the `newHeads` event that is generally 1 second before the time reported by `getLatestBlock.timestamp`. This was causing issues with the OnChainClock's offset, and therefore the `now()` used by the `OnChainClock` would sometimes be off by a second (or more), causing tests to fail.
This commit introduces a `codex_use_hardhat` compilation flag that, when set, will always get the latest block timestamp from Hardhat via the `getLatestBlock.timestamp` RPC call for `OnChainClock.now` calls. Otherwise, the last block timestamp reported in the `newHeads` event will be used.
Update the docker dist tests compilation flag for simulated proof failures (it was not correct), and explicitly add `codex_use_hardhat=false` for clarity.
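A sketch of the compile-time switch (the clock internals here are simplified assumptions):

```nim
# Sketch: with -d:codex_use_hardhat the clock always asks the node for the
# latest block timestamp instead of trusting the `newHeads` timestamp.
const codex_use_hardhat {.booldefine.} = false

type OnChainClock = object
  lastBlockTimestamp: int64   # timestamp seen in the latest `newHeads` event
  offset: int64               # difference between wall clock and chain time

proc latestBlockTimestamp(clock: OnChainClock): int64 =
  # stand-in for a `getLatestBlock.timestamp` RPC round-trip
  clock.lastBlockTimestamp

proc now(clock: OnChainClock): int64 =
  when codex_use_hardhat:
    clock.latestBlockTimestamp()   # extra RPC call, hardhat only
  else:
    clock.lastBlockTimestamp + clock.offset
```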
* enable simulated proof failures for coverage
* comment out failing test on linux -- will be replaced
* bump codex contracts eth
* add back clock offset for non-hardhat cases
* bump codex-contracts-eth
increases the pointer by 67 blocks each period
* Add `codex_use_hardhat` flag to coverage tests
* Add get active slot /slots/{slotId} to REST api, use utils/json
- Add endpoint /slots/{slotId} to get an active SalesAgent from the Sales module. Used in integration tests to test when a sale has reached a certain state. Those integration test changes will be included in a larger PR, coming later.
- Add OpenAPI changes for new endpoint and associated components
- Use utils/json instead of nim-json-serialization. Required `except`-ing imports from several packages that export nim-json-serialization by default.
* Only except `toJson` from import/export of chronicles
* Fix REST endpoints semantics
* update endpoint description
* update, operation id
* Adding enum support
* make enum deserializer public
* add support for listing manifests
* test `/data` endpoint to list local manifests
* debug leftovers
* remove commented out line
* Adds endpoint for listing files (manifests) in node. Useful for demo UI.
* Moves upload/download/files into content API calls.
* Cleans up json serialization for manifest
* Cleans up some more json serialization
* Moves block iteration and decoding to node.nim
* Moves api methods into their own init procs.
* Applies RestContent api object.
* Replaces format methods with Rest objects in json.nim
* Unused import
* Review comments by Adam
* Fixes issue where content/local endpoint clashes with content/cid.
* faulty merge resolution
* Renames content API to data.
* Fixes faulty rebase
* Adds test for data/local API
* Renames local and download api.
* cleanup erasure coding
* moar cleanup
* fix off by 1 issues in tests
* style
* consolidate decoding data code
* simplify tuple unpacking
* fix retrieve purchase
We don't support single blocks for now
* Apply suggestions from code review
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Dmitriy Ryajov <dryajov@gmail.com>
---------
Signed-off-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* [docs] fix two client scenario: add missing collateral
* [integration] separate step to wait for node to be started
* [cli] add option to specify ethereum private key
* Remove unused imports
* Fix warnings
* [integration] move type definitions to correct place
* [integration] wait a bit longer for a node to start in debug mode
When running against e.g. the Taiko testnet RPC, node startup takes longer
* [integration] simplify handling of codex node and client
* [integration] add Taiko integration test
* [contracts] await token approval confirmation before next tx
* [contracts] deployment address of marketplace on Taiko
* [cli] --eth-private-key now takes a file name
Instead of supplying the private key on the command line,
expect the private key to be in a file with the correct
permissions.
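A sketch of the idea (proc name and exact permission checks are assumptions):

```nim
# Sketch: read the key from a file and reject files readable by group/others.
import std/[os, strutils]

proc readEthPrivateKey(path: string): string =
  let perms = getFilePermissions(path)
  if fpGroupRead in perms or fpOthersRead in perms:
    raise newException(IOError, "private key file must only be readable by its owner")
  readFile(path).strip()
```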
* [utils] Fixes undeclared `activeChroniclesStream` on Windows
* [build] update nim-ethers to include PR #52
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [cli] Better error messages when reading eth private key
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [integration] simplify reading of cmd line arguments
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [build] update to latest version of nim-ethers
* [contracts] updated contract address for Taiko L2
* [build] update codex contracts to latest version
---------
Co-authored-by: Eric Mastro <eric.mastro@gmail.com>
* [sales] remove availability check before adding to slot queue
* [sales] add missing return statement
* [tests] remove 'eventuallyCheck' helper
* [sales] remove reservations from slot queue
* [tests] rename module `eventually` -> `always`
* [sales] increase slot queue size
Because it will now also hold items for which we haven't
checked availability yet.
* Improve integration testing client (CodexClient) and json serialization
The current client used for integration testing against the REST endpoints for Codex accepts and passes primitive types. This caused a hard-to-diagnose bug where a `uint` was not being deserialized correctly.
In addition, the json de/serializing done between the CodexClient and REST client was not easy to read and was not tested.
These changes bring non-primitive types to most of the CodexClient functions, allowing us to lean on the compiler to ensure we're providing correct typings. More importantly, a json de/serialization util was created as a drop-in replacement for the std/json lib, with the main two differences being that field serialization is opt-in (instead of opt-out as in the case of json_serialization) and serialization errors are captured and logged, making debugging serialization issues much easier.
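A tiny illustration of the "lean on the compiler" part (types and signature are assumptions, not the actual CodexClient API):

```nim
# Illustration only: with distinct types the compiler rejects swapped
# arguments that plain unsigned integers would silently accept.
type
  Duration = distinct uint64
  Reward = distinct uint64

proc requestStorage(duration: Duration, reward: Reward) =
  discard  # would build and send the REST request here

requestStorage(Duration(100), Reward(42))
# requestStorage(Reward(42), Duration(100))   # does not compile
```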
* Update integration test to use nodes=2 and tolerance=1
* clean up
* extra utilities and tweaks
* add atlas lock
* update ignores
* break build into it's own script
* update url rules
* base off codexdht's
* compile fixes for Nim 1.6.14
* update submodules
* convert mapFailure to procs to work around type resolution issues
* add toml parser for multiaddress
* change error type on keyutils
* bump nimbus build to use 1.6.14
* update gitignore
* adding new deps submodules
* bump nim ci version
* even more fixes
* more libp2p changes
* update keys
* fix eventually function
* adding coverage test file
* move coverage to build.nims
* use nimcache/coverage
* move libp2p import for tests into helper.nim
* remove named bin
* bug fixes for networkpeers (from Dmitriy)
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
## Slot queue
Adds a slot queue, as per the [slot queue design](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#slot-queue).
Any time storage is requested, all slots from that request are immediately added to the queue. Finished, Cancelled, and Failed requests remove all slots with that request id from the queue. SlotFreed events add a new slot to the queue and SlotFilled events remove the slot from the queue. This allows popping of a slot each time one is processed, making things much simpler.
When an entire request of slots is added to the queue, the slot indices are shuffled randomly, to prevent nodes that pick up the same StorageRequested event from clashing on the first processed slot index. This allowed the removal of random slot index assignment in the SalePreparing state, and it also ensures that all SalesAgents have a slot index assigned to them at the start, which in turn allowed removal of the optional slotIndex.
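A minimal sketch of the shuffling (assumed helper name):

```nim
# Sketch: push the request's slot indices in random order so nodes racing on
# the same StorageRequested event are unlikely to start with the same slot.
import std/[random, sequtils]

proc shuffledSlotIndices(numSlots: int): seq[int] =
  result = toSeq(0 ..< numSlots)
  shuffle(result)

randomize()
echo shuffledSlotIndices(5)   # e.g. @[3, 0, 4, 1, 2]
```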
Remove slotId from the SlotFreed event as it was not being used. RequestId and slotIndex were added to the SlotFreed event earlier and those are now being used.
The slot queue invariant that prioritises queue items added to the queue relies on a scoring mechanism to sort them based on the [sort order in the design document](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#sort-order).
Previously, when a storage request was handled by the sales module, a slot index was randomly assigned and then the slot was filled. Now, a random slot index is only assigned when adding an entire request to the slot queue. Additionally, the slot state is checked to be `SlotState.Free` before continuing with the download process.
SlotQueue should always ensure the underlying AsyncHeapQueue holds one less than the maximum number of items, so that the SlotQueue always has space to add an additional item, regardless of whether it is full or not.
Constructing `SlotQueue.workers` in `SlotQueue.new` calls `newAsyncQueue` which causes side effects, so the construction call had to be moved to `SlotQueue.start`.
Prevent loading request from contract (network request) if there is an existing item in queue for that request.
Check availability before adding request to queue.
Add ability to query market contract for past events. When new availabilities are added, the `onReservationAdded` callback is triggered in which past `StorageRequested` events are queried, and those slots are added to the queue (filtered by availability on `push` and filtered by state in `SalePreparing`).
#### Request Workers
Limit the concurrent requests being processed in the queue by using a limited pool of workers (default = 3). Workers are kept in a data structure of type `AsyncQueue[SlotQueueWorker]`. This allows us to await a `popFirst` for an available worker inside the main SlotQueue event loop.
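A self-contained sketch of the worker-pool idea (types and names assumed; the real queue dispatches `SlotQueueItem`s rather than strings):

```nim
import pkg/chronos

type SlotQueueWorker = ref object
  doneProcessing: Future[void]

proc processSlot(worker: SlotQueueWorker, item: string) {.async.} =
  # stand-in for the real OnProcessSlot callback
  echo "processing ", item
  await sleepAsync(10.milliseconds)
  worker.doneProcessing.complete()

proc recycle(worker: SlotQueueWorker,
             workers: AsyncQueue[SlotQueueWorker]) {.async.} =
  # once the worker signals completion, return a token to the pool
  await worker.doneProcessing
  await workers.addLast(SlotQueueWorker())

proc dispatchLoop(items: seq[string]) {.async.} =
  const maxWorkers = 3
  let workers = newAsyncQueue[SlotQueueWorker](maxWorkers)
  for _ in 0 ..< maxWorkers:
    await workers.addLast(SlotQueueWorker())   # seed the pool with idle workers

  for item in items:
    let worker = await workers.popFirst()      # blocks until a worker is free
    worker.doneProcessing = newFuture[void]("slotqueue.worker")
    asyncSpawn processSlot(worker, item)       # don't block the dispatch loop
    asyncSpawn recycle(worker, workers)

  for _ in 0 ..< maxWorkers:                   # wait until every item is done
    discard await workers.popFirst()

when isMainModule:
  waitFor dispatchLoop(@["slot 0", "slot 1", "slot 2", "slot 3"])
```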
Add an `onCleanUp` that stops the agents and removes them from the sales module agent list. `onCleanUp` is called from sales end states (eg ignored, cancelled, finished, failed, errored).
Add a `doneProcessing` future to `SlotQueueWorker`, to be completed in the `OnProcessSlot` callback. Each `doneProcessing` future created is cancelled and awaited in `SlotQueue.stop` (thanks to `TrackableFutures`), which forced `stop` to become async.
- Cancel dispatched workers and the `onProcessSlot` callbacks, preventing zombie callbacks
#### Add TrackableFutures
Allow tracking of futures in a module so they can be cancelled at a later time. Useful for asyncSpawned futures, but works for any future.
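A minimal sketch of the idea (the project's implementation is more involved):

```nim
# Sketch: keep a handle on spawned futures so they can all be cancelled later.
import pkg/chronos

type TrackableFutures = ref object
  futures: seq[FutureBase]

proc track(t: TrackableFutures, fut: FutureBase) =
  t.futures.add fut

proc cancelTracked(t: TrackableFutures) {.async.} =
  for fut in t.futures:
    if not fut.finished:
      await fut.cancelAndWait()
  t.futures.setLen(0)

when isMainModule:
  let tracked = TrackableFutures()
  tracked.track(sleepAsync(10.seconds))
  waitFor tracked.cancelTracked()   # cancels the pending sleep
```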
### Sales module
The sales module needed to subscribe to request events to ensure that the request queue was managed correctly on each event. In the process of doing this, the sales agents were updated to avoid subscribing to events in each agent; instead, received events are dispatched from the sales module to all created sales agents. This prevents memory leaks caused by having too many event emitter subscriptions.
- prevent removal of agents from sales module while stopping, otherwise the agents seq len is modified while iterating
An additional sales agent state was added, `SalePreparing`, that handles all state machine setup, such as retrieving the request and subscribing to events that were previously in the `SaleDownloading` state.
Previously, once agents had parked in an end state (eg ignored, cancelled, finished, failed, errored), they were not getting cleaned up and the sales module kept a handle on their reference. An `onCleanUp` callback was created, to be called after the state machine enters an end state, which prevents a memory leak when the number of incoming requests is high.
Move the SalesAgent callback raises pragmas from the Sales module to the proc definition in SalesAgent. This avoids having to catch `Exception`.
- remove unneeded error handling as pragmas were moved
Move sales.subscriptions from an object containing named subscriptions to a `seq[Subscription]` directly on the sales object.
Sales tests: shut down repo after sales stop, to fix SIGABRT in CI
### Add async Promise API
- modelled after JavaScript Promise API
- alternative to `asyncSpawn` that allows handling of async calls in a synchronous context (including access to the synchronous closure) with fewer additional procs to declare (a minimal sketch follows this list)
- Write less code, catch errors that would otherwise become Defects in `asyncSpawn`, and execute a callback after completion
- Add cancellation callbacks to utils/then, ensuring cancellations are handled properly
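A very small sketch of the underlying idea (the real utils/then API differs; names here are assumptions):

```nim
# Sketch: react to an async result from a synchronous context by spawning a
# wrapper that runs the callbacks, instead of a bare asyncSpawn losing errors.
import pkg/chronos

proc answer(): Future[int] {.async.} =
  await sleepAsync(5.milliseconds)
  return 42

proc runAnswer(onValue: proc(v: int), onError: proc(msg: string)) =
  proc wrapper() {.async.} =
    try:
      onValue(await answer())
    except CatchableError as e:
      onError(e.msg)             # the "catch" callback
  asyncSpawn wrapper()

when isMainModule:
  proc printValue(v: int) = echo "got ", v
  proc printError(msg: string) = echo "failed: ", msg
  runAnswer(printValue, printError)
  waitFor sleepAsync(20.milliseconds)   # give the spawned wrapper time to finish
```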
## Dependencies
- bump codex-contracts-eth to support slot queue (https://github.com/codex-storage/codex-contracts-eth/pull/61)
- bump nim-ethers to 0.5.0
- Bump nim-json-rpc submodule to 0bf2bcb
---------
Co-authored-by: Jaremy Creechley <creechley@gmail.com>
Previously it could take up to one second to complete
the future. This messed with the timings in the
integration tests and made them less predictable.
* Fixes/workarounds for nimsuggest failures in codex.nim.
* remove rng prefix - it appears to work now
* format new's to be more consistent
* making proc formatting a bit more consistent