# nim-dagger/tests/examples.nim

import std/random
import std/sequtils
import std/times
import std/typetraits
import pkg/codex/contracts/requests
import pkg/codex/rng
import pkg/codex/contracts/proofs
import pkg/codex/sales/slotqueue
import pkg/codex/stores
import pkg/codex/units
import pkg/chronos
import pkg/stew/byteutils
import pkg/stint
import ./codex/helpers/randomchunker

export randomchunker
export units
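
# Random test fixtures: `exampleString` below and the `example` overloads
# that follow generate randomized values of the types used throughout the
# tests.

# A random alphanumeric string of the given length.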
proc exampleString*(length: int): string =
  let chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
  result = newString(length) # create an empty string of the given length
  for i in 0..<length:
    result[i] = chars[rand(chars.len - 1)] # fill each position with a random character
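
# A random value of any built-in integer type.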
proc example*[T: SomeInteger](_: type T): T =
  rand(T)
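
# An array populated element-wise with random examples.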
proc example*[T, N](_: type array[N, T]): array[N, T] =
  for item in result.mitems:
    item = T.example
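
# A sequence of random length (0..255) filled with random elements.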
proc example*[T](_: type seq[T]): seq[T] =
  let length = uint8.example.int
  newSeqWith(length, T.example)
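
# A random 256-bit unsigned integer, built from 32 random bytes.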
proc example*(_: type UInt256): UInt256 =
  UInt256.fromBytes(array[32, byte].example)
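
# A random value of a distinct type, generated through its base type.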
proc example*[T: distinct](_: type T): T =
  type baseType = T.distinctBase
  T(baseType.example)
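
# A fully populated storage request: the ask parameters are fixed, while the
# client address, merkle root, and nonce are randomized.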
proc example*(_: type StorageRequest): StorageRequest =
  StorageRequest(
    client: Address.example,
    ask: StorageAsk(
      slots: 4,
      slotSize: (1 * 1024 * 1024 * 1024).u256, # 1 Gigabyte
      duration: (10 * 60 * 60).u256, # 10 hours
      collateral: 200.u256,
      proofProbability: 4.u256, # require a proof roughly once every 4 periods
      reward: 84.u256,
      maxSlotLoss: 2 # up to 2 slots can be freed without the data being considered lost
    ),
    content: StorageContent(
      cid: "zb2rhheVmk3bLks5MgzTqyznLu1zqGH5jrfTA1eAZXrjx7Vob",
      merkleRoot: array[32, byte].example
    ),
    expiry: (60 * 60).u256, # 1 hour
    nonce: Nonce.example
  )
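
# A slot of a fresh example request, with a random in-range slot index.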
proc example*(_: type Slot): Slot =
  let request = StorageRequest.example
  let slotIndex = rand(request.ask.slots.int - 1).u256 # valid indices: 0 ..< slots
  Slot(request: request, slotIndex: slotIndex)
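
# A queue item built from an example request and an independently generated
# example slot index.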
proc example*(_: type SlotQueueItem): SlotQueueItem =
  let request = StorageRequest.example
  let slot = Slot.example
  SlotQueueItem.init(request, slot.slotIndex.truncate(uint16))
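
# Structurally valid (but not necessarily on-curve) group elements, used to
# assemble Groth16 proof fixtures.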
proc example(_: type G1Point): G1Point =
  G1Point(x: UInt256.example, y: UInt256.example)

proc example(_: type G2Point): G2Point =
  G2Point(
    x: Fp2Element(real: UInt256.example, imag: UInt256.example),
    y: Fp2Element(real: UInt256.example, imag: UInt256.example)
  )

proc example*(_: type Groth16Proof): Groth16Proof =
  Groth16Proof(
    a: G1Point.example,
    b: G2Point.example,
    c: G1Point.example
  )
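
# Hex-encoded random data spanning `blocks` default-sized blocks.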
proc example*(_: type RandomChunker, blocks: int): Future[string] {.async.} =
  # doAssert blocks >= 3, "must be at least 3 blocks"
  let rng = Rng.instance()
  let chunker = RandomChunker.new(
    rng, size = DefaultBlockSize * blocks.NBytes, chunkSize = DefaultBlockSize)
  var data: seq[byte]
  while (let moar = await chunker.getBytes(); moar != []):
    data.add moar
  return byteutils.toHex(data)

proc example*(_: type RandomChunker): Future[string] {.async.} =
  await RandomChunker.example(3)
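
# Usage sketch (illustrative only, not part of the original module; assumes
# the codex test dependencies compile): the generators compose, so fixtures
# for nested types are one call away.
when isMainModule:
  randomize() # seed std/random so each run differs
  echo exampleString(8) # e.g. "kX3mQ9aZ"
  echo uint8.example # via the SomeInteger overload
  echo UInt256.example # via 32 random bytes
  echo StorageRequest.example.ask.slots # always 4; only some fields are random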