* feat: repair is rewarded
* chore: update contracts repo
* feat: proving loop handles repair case
* test: assert repair state
* chore: update contracts repo
* fix: upon unknown state of repair go to error
* Avoid cancelling states when slot is filled
* improve logging
Improves logging for situations where a Sale should be ignored instead of being considered an error, including when reservation is not allowed and when a slot was filled by another host.
* remove onSlotFilled unit tests from states
* convert EthersError to MarketError
* change `canReserveSlot` and `reserveSlot` parameters
Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.
* Add SaleSlotReserving
Adds a new state, SaleSlotReserving, that attempts to reserve a slot before downloading.
If the slot cannot be reserved, the state moves to SaleIgnored.
On error, the state moves to SaleErrored.
SaleIgnored is also updated to pass in `reprocessSlot` and `returnBytes`, controlling the behaviour in the Sales module after the slot is ignored. This is because previously it was assumed that SaleIgnored was only reached when there was no Availability. This is no longer the case, since SaleIgnored can now be reached when a slot cannot be reserved.
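A minimal sketch of that flow, using plain stand-ins for the market calls and state transitions (the real `SaleSlotReserving` state is async and talks to the Marketplace contract; the names below are illustrative):

```nim
type
  SaleOutcome = enum
    MovedToDownloading, MovedToIgnored, MovedToErrored

# Hypothetical stand-ins for the market calls used while reserving a slot.
proc canReserveSlot(requestId: string, slotIndex: uint64): bool = true
proc reserveSlot(requestId: string, slotIndex: uint64) = discard

proc runSlotReserving(requestId: string, slotIndex: uint64): SaleOutcome =
  ## Try to reserve the slot before any download work starts.
  try:
    if not canReserveSlot(requestId, slotIndex):
      # reservation not allowed: ignore the sale; reprocessSlot/returnBytes
      # then control what the Sales module does with the ignored slot
      return MovedToIgnored
    reserveSlot(requestId, slotIndex)
    result = MovedToDownloading
  except CatchableError:
    result = MovedToErrored

echo runSlotReserving("request-1", 0)
```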
* Update SalePreparing
Specify `reprocessSlot` and `returnBytes` when moving to `SaleIgnored` from `SalePreparing`.
Update tests to include test for a raised CatchableError.
* Fix unit test
* Modify `canReserveSlot` and `reserveSlot` params after rebase
* Update MockMarket with new `canReserveSlot` and `reserveSlot` params
* fix after rebase
also bump codex-contracts-eth to master
* All sorts of tweaks
* docs: availability's minPrice doc
* Revert changes to the two node test example
* Change default EC params in REST API
Change default EC params in REST API to 3 nodes and 1 tolerance.
Adjust integration tests to honour these settings.
---------
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* pulls in datastore-leveldb update
* bump
* Applies LevelDb as metadata store. Adds option for repostore.
* Sets submodule to main branch
* I can do syntax, me
* Removes wildcard from metadata query key
* Applies leveldb instead of sqlite-in-memory for tests
* Restores query key wildcard.
* Pins nim-datastore to latest master
* bumps leveldb to 0.1.4
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* add seen flag
* Add MockSlotQueueItem and better prioritisation tests
* Update seen priority, and include in SlotQueueItem.init
* Re-add processed slots to queue
Re-add processed slots to queue if the sale was ignored or errored
* add pausing of queue
- when processing slots in queue, pause queue if item was marked seen
- if availability size is increased, trigger onAvailabilityAdded callback
- in sales, on availability added, clear 'seen' flags, then unpause the queue
- when items are pushed to the queue, unpause the queue (see the sketch below)
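A minimal sketch of the pausing rules above, with a plain object standing in for the queue (the real SlotQueue is async; the fields and proc names here are illustrative):

```nim
type
  Item = object
    seen: bool
  Queue = object
    paused: bool
    items: seq[Item]

proc clearSeenFlags(queue: var Queue) =
  # on availability added: forget which items were already seen,
  # so they all get another chance to be processed
  for item in queue.items.mitems:
    item.seen = false

proc onAvailabilityAdded(queue: var Queue) =
  queue.clearSeenFlags()
  queue.paused = false        # resume processing

proc push(queue: var Queue, item: Item) =
  queue.items.add item
  if not item.seen:
    queue.paused = false      # new (unseen) work unpauses the queue

proc markSeenAndPause(queue: var Queue) =
  # a popped item that is already marked seen pauses the queue
  queue.paused = true

var queue = Queue()
queue.push Item(seen: false)
queue.markSeenAndPause()
queue.onAvailabilityAdded()
echo queue.paused             # false: flags cleared and queue unpaused
```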
* remove unused NoMatchingAvailabilityError from slotqueue
The slot queue should also have nothing to do with availabilities
* when all availabilities are empty, pause the queue
An empty availability is defined as size < DefaultBlockSize as this means even the smallest possible request could not be served. However, this is up for discussion.
* remove availability from onAvailabilitiesEmptied callback
* refactor onAvailabilityAdded and onAvailabilitiesEmptied
onAvailabilityAdded and onAvailabilitiesEmptied are now only called from reservations.update (and eventually reservations.delete once implemented).
- Add empty routine for Availability and Reservation
- Add allEmpty routine for Availability and Reservation, which returns true when all Availability or Reservation objects in the datastore are empty (sketched below).
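A minimal sketch of the `empty`/`allEmpty` idea above, assuming an illustrative `DefaultBlockSize` value and a `freeSize` field (only Availability is shown):

```nim
const DefaultBlockSize = 65536'u64   # illustrative placeholder value

type Availability = object
  freeSize: uint64

proc empty(a: Availability): bool =
  ## too small to serve even the smallest possible request
  a.freeSize < DefaultBlockSize

proc allEmpty(availabilities: seq[Availability]): bool =
  for a in availabilities:
    if not a.empty:
      return false
  true

echo allEmpty(@[Availability(freeSize: 0), Availability(freeSize: 1024)])  # true
```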
* SlotQueue test support updates
* Sales module test support updates
* Reservations module tests for queue pausing
* Sales module tests for queue pausing
Includes tests for sales states cancelled, errored, ignored to ensure onCleanUp is called with correct parameters
* SlotQueue module tests for queue pausing
* fix existing sales test
* PR feedback
- indent `self.unpause`
- update comment for `clearSeenFlags`
* reprocessSlot in SaleErrored only when coming from downloading
* remove pausing of queue when availabilities are "emptied"
Queue pausing when all availabilities are "emptied" is not necessary, given that the node would not be able to service slots once all its availabilities' freeSize values are too small for the slots in the queue, at which point the queue would be paused anyway.
Add test that asserts the queue is paused once the freeSize of availabilities drops too low to fill slots in the queue.
* Update clearing of seen flags
The asyncheapqueue update overload would need to check index bounds and ultimately a different solution was found using the mitems iterator.
* fix test
request.id was different before updating request.ask.slots, and that id was used to set the state in mockmarket.
* Change filled/cleanup future to nil, so no await is needed
* add wait to allow items to be added to queue
* do not unpause queue when seen items are pushed
* re-add seen item back to queue once paused
Previously, when a seen item was processed, it was first popped off the queue, and then the queue was paused, waiting to process that item once the queue was unpaused. Now, when a seen item is processed, it is popped off the queue, the queue is paused, and the item is re-added to the queue; the queue then waits until it is unpaused before popping further items (as sketched below). If the item were not re-added, it would be processed immediately once the queue was unpaused, even though other items with higher priority may have been pushed to the queue in the meantime. The queue would not be unpaused if those added items were already seen. In particular, this may happen when items ignored due to lack of availability are re-added to a paused queue: those ignored items will likely have a higher priority than the item that was just seen (because it was processed first), causing the queue to remain paused.
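A minimal sketch of that behaviour, with a plain seq standing in for the async heap queue (illustrative types only):

```nim
type
  SlotItem = object
    id: string
    seen: bool

var
  queue: seq[SlotItem]
  paused = false

proc processNext() =
  var item = queue[0]
  queue.delete(0)             # pop the highest-priority item
  if item.seen:
    # pause first, then put the item back: once the queue is unpaused,
    # this item competes on priority with anything pushed in the meantime
    # instead of being processed unconditionally
    paused = true
    queue.insert(item, 0)
  else:
    echo "processing ", item.id

queue.add SlotItem(id: "slot-a", seen: true)
processNext()
echo "paused: ", paused, ", queue length: ", queue.len
```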
* address PR comments
* integration: move REST API tests into their own module
* integration: move upload and download tests into their own module
* integration: move purchasing tests into their own module
* integration: move marketplace tests to the right module
* integration: mine a block *after* starting nodes
To ensure that tests involving multiple nodes do
not start with out-of-sync clocks
* Fix: do not swallow CancellationErrors
* integration: avoid underflow in UInt256
* network: remove unnecessary error handling
No Exceptions can occur, only Defects, because everything
is asyncSpawned.
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* network: do not raise in asyncSpawned proc
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* Add MarketError
Add MarketError and convert all EthersErrors (ProviderError, SignerError) to MarketError
* Include token contract call in conversion of ethers error
* use a real verifying contract address
* contracts: cleanup
* marketplacesuite: set correct circuit files, interval mining
* Proofs tests updates
Contains changes to get the proving tests working reliably.
* integration: use correct circom artifacts for creating proofs
* integration: cleanup
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* wire prover into node
* stricter case object checks
* return correct proof
* misc renames
* adding useful traces
* fix nodes and tolerance to match expected params
* format challenges in logs
* add circom compat to solidity groth16 conversion
* update
* bump time to give nodes time to load with all circom artifacts
* misc
* misc
* use correct dataset geometry in erasure
* make errors more searchable
* use parens around `=? (await...)` calls
* styling
* styling
* use push raises
* fix to match constructor arguments
* merge master
* merge master
* integration: fix proof parameters for a test
Increased times due to ZK proof generation.
Increased storage requirement because we're now hosting
5 slots instead of 1.
* sales: calculate initial proof at start of period
reason: this ensures that the period (and therefore
the challenge) doesn't change while we're calculating
the proof
* integration: fix proof parameters for tests
Increased times due to waiting on next period.
Fixed data to be of right size.
Updated expected payout due to hosting 5 slots.
* sales: wait for stable proof challenge
When the block pointer is nearing the
wrap-around point, we wait another period
before calculating a proof.
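A minimal sketch of that check, with an illustrative pointer range and downtime window (the real values come from the proofs configuration, not from this sketch):

```nim
const
  pointerWrap = 256        # the pointer is derived from a single byte
  downtimeWindow = 64      # hypothetical size of the unstable window

proc isChallengeStable(pointer: int): bool =
  ## If the pointer is close to wrapping around, the challenge may change
  ## while the proof is still being generated, so treat it as unstable.
  pointer < pointerWrap - downtimeWindow

proc shouldWaitForNextPeriod(pointer: int): bool =
  not isChallengeStable(pointer)

echo shouldWaitForNextPeriod(250)   # true: too close to wrap-around, wait
echo shouldWaitForNextPeriod(10)    # false: safe to compute the proof now
```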
* fix merge conflict
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* clock: add 1 second leeway before acting on timeouts
* sales: do not raise in proving loop when slot is cancelled
Allow the onCancelled callback to handle cancellation, and
the onFailed callback to handle failed requests.
* sales: cleanup proving tests
* sales: fix sales agent tests
* sales: stop cancellation loop when request started, finished or failed
* sales: fix flaky test
* sales: fix another flaky test
* clock: add comment explaining the + 1 second
Co-Authored-By: benbierens <thatbenbierens@gmail.com>
---------
Co-authored-by: benbierens <thatbenbierens@gmail.com>
* refactor multi node test suite
Refactor the multinode test suite into the marketplace test suite.
- Arbitrary number of nodes can be started with each test: clients, providers, validators
- Hardhat can also be started locally with each test, usually for the purpose of saving and inspecting its log file.
- Log files for all nodes can be persisted on disk, with configuration at the test-level
- Log files, if persisted (as specified in the test), will be persisted to a CI artifact
- Node config is specified at the test-level instead of the suite-level
- Node/Hardhat process starting/stopping is now async, and runs much faster
- Per-node config includes:
- simulating proof failures
- logging to file
- log level
- log topics
- storage quota
- debug (print logs to stdout)
- Tests find next available ports when starting nodes, as closing ports on Windows can lag
- Hardhat is no longer required to be running prior to starting the integration tests (as long as Hardhat is configured to run in the tests).
- If Hardhat is already running, a snapshot will be taken and reverted before and after each test, respectively.
- If Hardhat is not already running and configured to run at the test-level, a Hardhat process will be spawned and torn down before and after each test, respectively.
* additional logging for debug purposes
* address PR feedback
- fix spelling
- revert change from catching ProviderError to SignerError -- this should be handled more consistently in the Market abstraction, and will be handled in another PR.
- remove method label from raiseAssert
- remove unused import
* Use API instead of command exec to test for free port
Use chronos `createStreamServer` API to test for free port by binding localhost address and port. Use `ServerFlags.ReuseAddr` to enable reuse of same IP/Port on multiple test runs.
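A minimal sketch of the port probe, assuming a hypothetical `nextFreePort` helper built on the chronos stream transport API:

```nim
import pkg/chronos

proc handleClient(server: StreamServer, transp: StreamTransport) {.async.} =
  await transp.closeWait()

proc nextFreePort(startPort: int): Future[int] {.async.} =
  ## Probe ports by binding localhost; ReuseAddr lets repeated test runs
  ## reuse an address that is still in TIME_WAIT.
  var port = startPort
  while true:
    try:
      let address = initTAddress("127.0.0.1", Port(port))
      let server = createStreamServer(address, handleClient,
                                      flags = {ServerFlags.ReuseAddr})
      await server.closeWait()
      return port
    except TransportOsError:
      inc port

echo waitFor nextFreePort(8080)
```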
* clean up
* remove upraises annotations from tests
* Update tests to work with updated erasure coding slot sizes
* update dataset size, nodes, tolerance to match valid ec params
Integration tests now have valid dataset sizes (blocks), tolerances, and numbers of nodes, to work with valid ec params. These values are validated when requesting storage.
Print the rest api failure message (via doAssert) when a rest api call fails (eg the rest api may validate some ec params).
All integration tests pass when the async `clock.now` changes are reverted.
* dont use async clock for now
* fix workflow
* move integration logs upload to reusable
---------
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* Smart contracts update: Groth16Proof instead of bytes
* Use dummy verifier for now, until we can create ZK proofs
* Fix tests: submit proof only when slot is filled
* Submit dummy proofs for now
* More detailed log when proof submission failed
* Use dummy verifier for integration tests
For now at least
* Fix mistake in blanket renaming to ethProvider
* Update to latest codex-contracts-eth
* feat: zkey-hash from chain
* Fix zkeyHash
---------
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
* implement a logging proxy
The logging proxy:
- prevents the need to import chronicles (as well as export except toJson),
- prevents the need to override `writeValue` or use or import nim-json-serialization elsewhere in the codebase, allowing for sole use of utils/json for de/serialization,
- and handles json formatting correctly in chronicles json sinks
* Rename logging -> logutils to avoid ambiguity with common names
* clean up
* add setProperty for JsonRecord, remove nim-json-serialization conflict
* Allow specifying textlines and json format separately
Not specifying a LogFormat will apply the formatting to both textlines and json sinks.
Specifying a LogFormat will apply the formatting to only that sink.
* remove unneeded usages of std/json
We only need to import utils/json instead of std/json
* move serialization from rest/json to utils/json so it can be shared
* fix NoColors ambiguity
Was causing unit tests to fail on Windows.
* Remove nre usage to fix Windows error
Windows was erroring with `could not load: pcre64.dll`. Instead of fixing that error, remove the pcre usage :)
* Add logutils module doc
* Shorten logutils.formatIt for `NBytes`
Both json and textlines formatIt were not needed, and could be combined into one formatIt
* remove debug integration test config
debug output and logformat of json for integration test logs
* Use ## module doc to support docgen
* bump nim-poseidon2 to export fromBytes
Before the changes in this branch, fromBytes was likely being resolved by nim-stew or another dependency. With the changes in this branch, that dependency was removed and fromBytes could no longer be resolved. By exporting fromBytes from nim-poseidon2, the correct resolution now happens.
* fixes to get compiling after rebasing master
* Add support for Result types being logged using formatIt
* Setting up testfixture for proof datasampler
* Sets up calculating number of cells in a slot
* Sets up tests for bitwise modulo
* Implements cell index collection
* setting up slot blocks module
* Implements getting treeCID from slot
* implements getting slot blocks by index
* Implements out-of-range check for slot index
* cleanup
* Sets up getting sample from block
* Implements selecting a cell sample from a block
* Implements building a minitree for block cells
* Adds method to get dataset block index from slot block index
* It's running
* splits up indexing
* almost there
* Fixes test. Implementation is now functional
* Refactoring to object-oriented
* Cleanup
* Lining up output type with updated reference code.
* setting up
* Updates expected samples
* Updates proof checking test to match new format
* move builder to own dir
* move sampler to own dir
* fix paths
* various changes to add support for the sampler
* wip sampler implementation
* don't use upraises
* wip sampler integration
* misc
* move tests around
* Various fixes to select correct slot and block index
* removing old tests
* cleanup
* misc
fix tests that work with correct cell indices
* remove unused file
* fixup logging
* add logscope
* truncate entropy to 31 bytes, otherwise it might be > than mod
* forward getCidAndProof to local store
* misc
* Adds missing test for initial-proving state
* reverting back to correct slot/block indexing
* fix tests for revert
* misc
* misc
---------
Co-authored-by: benbierens <thatbenbierens@gmail.com>
* Add get active slot /slots/{slotId} to REST api, use utils/json
- Add endpoint /slots/{slotId} to get an active SalesAgent from the Sales module. Used in integration tests to test when a sale has reached a certain state. Those integration test changes will be included in a larger PR, coming later.
- Add OpenAPI changes for new endpoint and associated components
- Use utils/json instead of nim-json-serialization. Required exemption of imports from several packages that export nim-json-serialization by default.
* Only except `toJson` from import/export of chronicles
In proving.onCancelled, the `waitFor state.loop.cancelAndWait()` was never completing. It turns out this was not needed: when changing states, the current state's run is cancelled, which automatically cancels the state's prove loop because it is a child of proving.run. Therefore, the logic to cancelAndWait the prove loop was removed as it was not needed.
## Problem
When Availabilities are created, the bytes in the Availability are reserved in the repo, so those bytes on disk cannot otherwise be written to. When a node receives a request for storage and a previously created Availability is matched, an attempt is made to fill a slot in the request (more accurately, the request's slots are added to the SlotQueue, and eventually those slots will be processed).
During download, bytes that were reserved for the Availability were released as they were written to disk. To prevent more bytes from being released than were reserved in the Availability, the Availability was marked as used during the download, so that no other requests would match it and no new downloads (and byte releases) would begin.
The unfortunate downside is that the number of Availabilities a node has determines its download concurrency capacity. If, for example, a node creates a single Availability covering all the disk space the operator is willing to use, only one download could occur at a time, meaning the node could miss out on storage opportunities.
## Solution
To alleviate the concurrency issue, each time a slot is processed, a Reservation is created, which takes size (aka reserved bytes) away from the Availability and stores them in the Reservation object. This can be done as many times as needed as long as there are enough bytes remaining in the Availability. Therefore, concurrent downloads are no longer limited by the number of Availabilities. Instead, they would more likely be limited to the SlotQueue's `maxWorkers`.
From a database design perspective, an Availability has zero or more Reservations.
Reservations are persisted in the RepoStore's metadata, along with Availabilities. The metadata store key path for Reservations is `meta / sales / reservations / <availabilityId> / <reservationId>`, while Availabilities are stored one level up, eg `meta / sales / reservations / <availabilityId>`, allowing all Reservations for an Availability to be queried (this is not currently needed, but may be useful when work to restore Availability size is implemented, more on this later).
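A minimal sketch of that key layout, using plain strings as stand-ins for the datastore keys (prefix and id formats are illustrative):

```nim
import std/strformat

const reservationsPrefix = "meta/sales/reservations"

proc availabilityKey(availabilityId: string): string =
  &"{reservationsPrefix}/{availabilityId}"

proc reservationKey(availabilityId, reservationId: string): string =
  &"{availabilityKey(availabilityId)}/{reservationId}"

echo availabilityKey("avail-1")           # meta/sales/reservations/avail-1
echo reservationKey("avail-1", "res-1")   # meta/sales/reservations/avail-1/res-1
# querying with the availability prefix lists every Reservation belonging
# to that Availability
```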
### Lifecycle
When a reservation is created, its size is deducted from the Availability, and when a reservation is deleted, any remaining size (bytes not written to disk) is returned to the Availability. If the request finishes, is cancelled (expired), or an error occurs, the Reservation is deleted (and any undownloaded bytes returned to the Availability). In addition, when the Sales module starts, any Reservations that are not actively being used in a filled slot, are deleted.
Having a Reservation persisted until after a storage request is completed will allow the originally set Availability size to be reclaimed once a request contract has been completed. This feature is yet to be implemented; however, the work in this PR is a step towards enabling it.
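A minimal sketch of the size accounting over that lifecycle, with plain objects standing in for the persisted Availability and Reservation records:

```nim
type
  Availability = object
    freeSize: uint64
  Reservation = object
    size: uint64

proc createReservation(a: var Availability, slotSize: uint64): Reservation =
  doAssert a.freeSize >= slotSize, "not enough bytes left in the Availability"
  a.freeSize -= slotSize          # bytes move from the Availability ...
  Reservation(size: slotSize)     # ... into the Reservation

proc release(r: var Reservation, bytesWritten: uint64) =
  # as the download progresses, written bytes leave the Reservation
  r.size -= min(r.size, bytesWritten)

proc deleteReservation(a: var Availability, r: Reservation) =
  # on completion, cancellation or error: return whatever was not downloaded
  a.freeSize += r.size

var avail = Availability(freeSize: 1000)
var res = avail.createReservation(600)
res.release(200)
avail.deleteReservation(res)
echo avail.freeSize   # 800: the 400 undownloaded bytes were returned
```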
### Unknowns
Reservation size is determined by `StorageAsk.slotSize`. If, during download, more than `slotSize` bytes are downloaded, the Reservation update will fail, and the state machine will move to the `SaleErrored` state, deleting the Reservation. This will likely prevent the slot from being filled.
### Notes
Based on #514
* [sales] remove availability check before adding to slot queue
* [sales] add missing return statement
* [tests] remove 'eventuallyCheck' helper
* [sales] remove reservations from slot queue
* [tests] rename module `eventually` -> `always`
* [sales] increase slot queue size
Because it will now also hold items for which we haven't
checked availability yet.
* Improve integration testing client (CodexClient) and json serialization
The current client used for integration testing against the REST endpoints for Codex accepts and passes primitive types. This caused a hard to diagnose bug where a `uint` was not being deserialized correctly.
In addition, the json de/serializing done between the CodexClient and REST client was not easy to read and was not tested.
These changes bring non-primitive types to most of the CodexClient functions, allowing us to lean on the compiler to ensure we're providing correct typings. More importantly, a json de/serialization util was created as a drop-in replacement for the std/json lib, with the main two differences being that field serialization is opt-in (instead of opt-out as in the case of json_serialization) and serialization errors are captured and logged, making debugging serialization issues much easier.
* Update integration test to use nodes=2 and tolerance=1
* clean up
* Simplify `.then` (promise api) and tests
* Remove tracked future when cancelled. Add tracked future tests
* Track and cancel statemachine futures
The futures created in each asyncstatemachine instance are tracked, and each future is cancelled and waited in `stop`.
Change `asyncstatemachine.stop` to be async so `machine.trackedFutures.cancelAndWait` could be called.
Add a constructor for `asyncstatemachine` that initialises the `trackedFutures` instance, and call the constructor from derived class constructors.
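A minimal sketch of the tracking idea, assuming simplified `TrackedFutures` and machine types (not the actual asyncstatemachine implementation):

```nim
import pkg/chronos

type TrackedFutures = ref object
  futures: seq[FutureBase]

proc track(t: TrackedFutures, fut: FutureBase) =
  t.futures.add fut

proc cancelAndWait(t: TrackedFutures) {.async.} =
  for fut in t.futures:
    if not fut.finished:
      await fut.cancelAndWait()
  t.futures.setLen 0

type Machine = ref object
  trackedFutures: TrackedFutures

proc new(_: type Machine): Machine =
  Machine(trackedFutures: TrackedFutures())

proc run(machine: Machine) {.async.} =
  await sleepAsync(1.hours)          # some long-running state loop

proc start(machine: Machine) =
  machine.trackedFutures.track(machine.run())

proc stop(machine: Machine) {.async.} =
  # stop is async so every tracked future can be cancelled and awaited
  await machine.trackedFutures.cancelAndWait()

let machine = Machine.new()
machine.start()
waitFor machine.stop()
```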
## Slot queue
Adds a slot queue, as per the [slot queue design](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#slot-queue).
Any time storage is requested, all slots from that request are immediately added to the queue. Finished, Cancelled, and Failed requests remove all slots with that request id from the queue. SlotFreed events add a new slot to the queue and SlotFilled events remove the slot from the queue. This allows popping of a slot each time one is processed, making things much simpler.
When an entire request of slots is added to the queue, the slot indices are shuffled randomly to help prevent nodes that pick up the same StorageRequested event from clashing on the first processed slot index (see the sketch below). This allowed the random slot index assignment in the SalePreparing state to be removed, and it ensures that all SalesAgents have a slot index assigned at the start, which in turn allowed removal of the optional slotIndex.
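A minimal sketch of the shuffling, with a simplified stand-in for the queue item type:

```nim
import std/random

type SlotQueueItem = object
  requestId: string
  slotIndex: uint16

proc itemsFromRequest(requestId: string, slots: int): seq[SlotQueueItem] =
  ## One queue item per slot, with the indices shuffled so that hosts seeing
  ## the same StorageRequested event start from different slots.
  var indices = newSeq[int](slots)
  for i in 0 ..< slots:
    indices[i] = i
  shuffle(indices)
  for index in indices:
    result.add SlotQueueItem(requestId: requestId, slotIndex: uint16(index))

randomize()
echo itemsFromRequest("request-1", 5)
```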
Remove slotId from SlotFreed event as it was not being used. RequestId and slotIndex were added to the SlotFreed event earlier, and those are now being used.
The slot queue invariant that prioritises queue items added to the queue relies on a scoring mechanism to sort them based on the [sort order in the design document](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#sort-order).
When a storage request is handled by the sales module, a slot index was randomly assigned and then the slot was filled. Now, a random slot index is only assigned when adding an entire request to the slot queue. Additionally, the slot is checked that its state is `SlotState.Free` before continuing with the download process.
SlotQueue should always ensure the underlying AsyncHeapQueue has one less than the maximum number of items, ensuring the SlotQueue always has space to add an additional item, regardless of whether it is full.
Constructing `SlotQueue.workers` in `SlotQueue.new` calls `newAsyncQueue` which causes side effects, so the construction call had to be moved to `SlotQueue.start`.
Prevent loading request from contract (network request) if there is an existing item in queue for that request.
Check availability before adding request to queue.
Add ability to query market contract for past events. When new availabilities are added, the `onReservationAdded` callback is triggered in which past `StorageRequested` events are queried, and those slots are added to the queue (filtered by availability on `push` and filtered by state in `SalePreparing`).
#### Request Workers
Limit the concurrent requests being processed in the queue by using a limited pool of workers (default = 3). Workers are kept in a data structure of type `AsyncQueue[SlotQueueWorker]`. This allows us to await a `popFirst` for available workers inside the main SlotQueue event loop.
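A minimal sketch of that worker pool, using chronos `AsyncQueue` directly (types and sizes here are illustrative, not the real SlotQueue implementation):

```nim
import pkg/chronos

type SlotQueueWorker = ref object
  id: int

proc dispatch(worker: SlotQueueWorker, slot: int,
              workers: AsyncQueue[SlotQueueWorker]) {.async.} =
  echo "worker ", worker.id, " processing slot ", slot
  await sleepAsync(10.milliseconds)
  await workers.addLast(worker)             # hand the worker back to the pool

proc runQueue(slots: seq[int], maxWorkers = 3) {.async.} =
  let workers = newAsyncQueue[SlotQueueWorker](maxWorkers)
  for i in 1 .. maxWorkers:
    await workers.addLast(SlotQueueWorker(id: i))
  for slot in slots:
    let worker = await workers.popFirst()   # waits until a worker is free
    asyncSpawn dispatch(worker, slot, workers)
  # drain: wait for every worker to come back, i.e. all work has finished
  for i in 1 .. maxWorkers:
    discard await workers.popFirst()

waitFor runQueue(@[0, 1, 2, 3, 4])
```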
Add an `onCleanUp` that stops the agents and removes them from the sales module agent list. `onCleanUp` is called from sales end states (eg ignored, cancelled, finished, failed, errored).
Add a `doneProcessing` future to `SlotQueueWorker` to be completed in the `OnProcessSlot` callback. Each `doneProcessing` future created is cancelled and awaited in `SlotQueue.stop` (thanks to `TrackableFutures`), which forced `stop` to become async.
- Cancel dispatched workers and the `onProcessSlot` callbacks, preventing zombie callbacks
#### Add TrackableFutures
Allow tracking of futures in a module so they can be cancelled at a later time. Useful for asyncSpawned futures, but works for any future.
### Sales module
The sales module needed to subscribe to request events to ensure that the request queue was managed correctly on each event. In the process, the sales agents were updated to avoid subscribing to events in each agent; instead, the sales module dispatches received events to all created sales agents. This prevents memory leaks from having too many event subscriptions.
- prevent removal of agents from sales module while stopping, otherwise the agents seq len is modified while iterating
An additional sales agent state was added, `SalePreparing`, that handles all state machine setup, such as retrieving the request and subscribing to events that were previously in the `SaleDownloading` state.
Once agents have parked in an end state (eg ignored, cancelled, finished, failed, errored), they were not getting cleaned up and the sales module was keeping a handle on their reference. An `onCleanUp` callback was created to be called after the state machine enters an end state, which could prevent a memory leak if the number of requests coming in is high.
Move the SalesAgent callback raises pragmas from the Sales module to the proc definition in SalesAgent. This avoids having to catch `Exception`.
- remove unneeded error handling as pragmas were moved
Move sales.subscriptions from an object containing named subscriptions to a `seq[Subscription]` directly on the sales object.
Sales tests: shut down repo after sales stop, to fix SIGABRT in CI
### Add async Promise API
- modelled after JavaScript Promise API
- alternative to `asyncSpawn` that allows handling of async calls in a synchronous context (including access to the synchronous closure) with less additional procs to be declared
- Write less code, catch errors that would otherwise become Defects in asyncSpawn, and execute a callback after completion
- Add cancellation callbacks to utils/then, ensuring cancellations are handled properly
## Dependencies
- bump codex-contracts-eth to support slot queue (https://github.com/codex-storage/codex-contracts-eth/pull/61)
- bump nim-ethers to 0.5.0
- Bump nim-json-rpc submodule to 0bf2bcb
---------
Co-authored-by: Jaremy Creechley <creechley@gmail.com>
* Fixes/workarounds for nimsuggest failures in codex.nim.
* remove rng prefix - it appears to work now
* format new's to be more consistent
* making proc formatting a bit more consistent
* [marketplace] reservations module
- add de/serialization for Availability
- add markUsed/markUnused in persisted availability
- add query for unused
- add reserve/release
- reservation module tests
- split ContractInteractions into client contracts and host contracts
- remove reservations start/stop as the repo start/stop is being managed by the node
- remove dedicated reservations metadata store and use the metadata store from the repo instead
- Split ContractInteractions into:
- ClientInteractions (with purchasing)
- HostInteractions (with sales and proving)
- compilation fix for nim 1.2
[repostore] fix started flag, add tests
[marketplace] persist slot index
For loading the sales state from chain, the slot index was not previously persisted in the contract. Will retrieve the slot index from the contract when the sales state is loaded.
* Revert repostore changes
In favour of separate PR https://github.com/status-im/nim-codex/pull/374.
* remove warnings
* clean up
* tests: stop repostore during teardown
* change constructor type identifier
Change Contracts constructor to accept Contracts type instead of ContractInteractions.
* change constructor return type to Result instead of Option
* fix and split interactions tests
* clean up, fix tests
* find availability by slot id
* remove duplication in host/client interactions
* add test for finding availability by slotId
* log instead of raiseAssert when failed to mark availability as unused
* move to SaleErrored state instead of raiseAssert
* remove unneeded reverse
It appears that order is not preserved in the repostore, so reversing does not have the intended effect here.
* update open api spec for potential rest endpoint errors
* move functions about available bytes to repostore
* WIP: reserve and release availabilities as needed
WIP: not tested yet
Availabilities are marked as used when matched (just before downloading starts) so that future matching logic does not match an availability currently in use.
As the download progresses, batches of blocks are written to disk, and the equivalent bytes are released from the reservation module. The size of the availability is reduced as well.
During a reserve or release operation, availability updates occur after the repo is manipulated. If the availability update operation fails, the reserve or release is rolled back to maintain correct accounting of bytes.
Finally, once download completes, or if an error occurs, the availability is marked as unused so future matching can occur.
* delete availability when all bytes released
* fix tests + cleanup
* remove availability from SalesContext callbacks
Availability is no longer used past the SaleDownloading state in the state machine. Cleanup of Availability (marking unused) is handled directly in the SaleDownloading state, and no longer in SaleErrored or SaleFinished. Likewise, Availabilities shouldn’t need to be handled on node restart.
Additionally, Availability was being passed in SalesContext callbacks, and now that Availability is only used temporarily in the SaleDownloading state, Availability is contextually irrelevant to the callbacks, except in OnStore possibly, though it was not being consumed.
* test clean up
* - remove availability from callbacks and constructors from previous commit that needed to be removed (oopsie)
- fix integration test that checks availabilities
- there was a bug fixed that crashed the node due to a missing `return success` in onStore
- the test was fixed by ensuring that availabilities remain on the node and that their size has been reduced
- change Availability back to non-ref object and constructor back to init
- add trace logging of all state transitions in state machine
- add generally useful trace logging
* fixes after rebase
1. Fix onProve callbacks
2. Use Slot type instead of tuple for retrieving active slot.
3. Bump codex-contracts-eth that exposes the getActiveSlot call.
* swap contracts branch to not support slot collateral
Slot collateral changes in the contracts require further changes in the client code, so we’ll skip those changes for now and add in a separate commit.
* modify Interactions and Deployment constructors
- `HostInteractions` and `ClientInteractions` constructors were simplified to take a contract address and no overloads
- `Interactions` prepared simplified so there are no overloads
- `Deployment` constructor updated so that it takes an optional string parameter, instead of `Option[string]`
* Move `batchProc` declaration
`batchProc` needs to be consumed by both `node` and `salescontext`, and they can’t reference each other as it creates a circular dependency.
* [reservations] rename `available` to `hasAvailable`
* [reservations] default error message to inner error msg
* add SaleIngored state
When a storage request is handled but the request does match availabilities, the sales agent machine is sent to the SaleIgnored state. In addition, the agent is constructed in a way that if the request is ignored, the sales agent is removed from the list of active agents being tracked in the sales module.
* [build] disable XCannotRaiseY hint
There are too many {.raises:[Defect].} in the
libraries that we use, drowning out all other
warnings and hints
* [build] disable BareExcept warning
Not yet enabled in a released version of Nim,
so libraries that we depend on have not fixed
this yet, drowning out our own hints and warnings
* [build] disable DotLikeOps warning
dot-like ops were an experiment that is not going
to land in Nim
* [build] compile log statements in tests
When running tests, all log statements are compiled.
They are filtered out at runtime during a test run.
* [build] do not build executable when running unit test
It's already built in the integration test
* [build] Fix warnings
- remove unused code
- remove unused imports
- stop using deprecated stuff
* [build] Put compiler flags behind nim version checks
* [CI] remove Nim 1.2 compatibility