Mirror of https://github.com/codex-storage/nim-codex.git (synced 2025-02-22 17:48:24 +00:00)
Recent changes:

* fix: createReservation lock (#825)
* Block deletion with ref count & repostore refactor (#631)
* Fix StoreStream so it doesn't return parity bytes (#838)
* Fix verifiable manifest initialization (#839)
* ci: add verify_circuit=true to the releases (#840)
* Provisional fix so EC errors do not crash the node on download (#841)
* Prevent node crashing with `not val.isNil` (#843)
* Bump nim-leopard to handle no parity data (#845)
* Fix verifiable manifest constructor (#844)
* Bump Nim to 1.6.21 (#851)
* feat(rest): add erasure coding constraints when requesting storage (#848)
* Prover workshop band-aid (#853)
* Bandaid for failing erasure coding (#855)
* Update Release workflow (#858)
* Fix prover behavior with singleton proof trees (#859)
* Chronos v4 update (v3 compat mode) (#814)
* Downgrade to gcc 13 on Windows (#874)
* Add MIT/Apache licenses (#861)
* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)
* chore: add `downtimeProduct` config parameter (#867)
* Support CORS preflight requests when the storage request API returns an error (#878)
* refactor(marketplace): generic querying of historical marketplace events (#872)
* Remove extra license file (#876)
* Update advertising (#862)
* feat: add `--payout-address` (#870)
* Rework circuit downloader (#882)
* Support CORS for POST and PATCH availability endpoints (#897)
* Add testnet marketplace address to known deployments (#911)
* API tweaks for OpenAPI, errors and endpoints (#886)

import std/sequtils
import std/sugar
import std/times

import pkg/chronos
import pkg/datastore/typedds
import pkg/questionable
import pkg/questionable/results

import pkg/codex/sales
import pkg/codex/sales/salesdata
import pkg/codex/sales/salescontext
import pkg/codex/sales/reservations
import pkg/codex/sales/slotqueue
import pkg/codex/stores/repostore
import pkg/codex/blocktype as bt
import pkg/codex/node

import ../../asynctest
import ../helpers
import ../helpers/mockmarket
import ../helpers/mockclock
import ../helpers/always
import ../examples
import ./helpers/periods
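
# Unit tests for the Sales module: they run a Sales instance against
# MockMarket and MockClock and exercise slot queue processing, availability
# accounting in the Reservations store, and the sale lifecycle callbacks
# (onStore, onProve, onSale, onClear, onExpiryUpdate).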
asyncchecksuite "Sales - start":
|
|
let
|
|
proof = Groth16Proof.example
|
|
repoTmp = TempLevelDb.new()
|
|
metaTmp = TempLevelDb.new()
|
|
|
|
var request: StorageRequest
|
|
var sales: Sales
|
|
var market: MockMarket
|
|
var clock: MockClock
|
|
var reservations: Reservations
|
|
var repo: RepoStore
|
|
var queue: SlotQueue
|
|
var itemsProcessed: seq[SlotQueueItem]
|
|
|
|
setup:
|
|
request = StorageRequest(
|
|
ask: StorageAsk(
|
|
slots: 4,
|
|
slotSize: 100.u256,
|
|
duration: 60.u256,
|
|
reward: 10.u256,
|
|
collateral: 200.u256,
|
|
),
|
|
content: StorageContent(
|
|
cid: "some cid"
|
|
),
|
|
expiry: (getTime() + initDuration(hours=1)).toUnix.u256
|
|
)
|
|
|
|
market = MockMarket.new()
|
|
clock = MockClock.new()
|
|
let repoDs = repoTmp.newDb()
|
|
let metaDs = metaTmp.newDb()
|
|
repo = RepoStore.new(repoDs, metaDs)
|
|
await repo.start()
|
|
sales = Sales.new(market, clock, repo)
|
|
reservations = sales.context.reservations
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
return success()
|
|
|
|
sales.onExpiryUpdate = proc(rootCid: string, expiry: SecondsSince1970): Future[?!void] {.async.} =
|
|
return success()
|
|
|
|
queue = sales.context.slotQueue
|
|
sales.onProve = proc(slot: Slot, challenge: ProofChallenge): Future[?!Groth16Proof] {.async.} =
|
|
return success(proof)
|
|
itemsProcessed = @[]
|
|
request.expiry = (clock.now() + 42).u256
|
|
|
|
teardown:
|
|
await sales.stop()
|
|
await repo.stop()
|
|
await repoTmp.destroyDb()
|
|
await metaTmp.destroyDb()
|
|
|
|
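
  # Marks `slotIdx` of the current request as filled by this host on the
  # mock market, mimicking what the contracts would record on-chain.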
  proc fillSlot(slotIdx: UInt256 = 0.u256) {.async.} =
    let address = await market.getSigner()
    let slot = MockSlot(requestId: request.id,
                        slotIndex: slotIdx,
                        proof: proof,
                        host: address)
    market.filled.add slot
    market.slotState[slotId(request.id, slotIdx)] = SlotState.Filled

  test "load slots when Sales module starts":
    let me = await market.getSigner()

    request.ask.slots = 2
    market.requested = @[request]
    market.requestState[request.id] = RequestState.New

    let slot0 = MockSlot(requestId: request.id,
                         slotIndex: 0.u256,
                         proof: proof,
                         host: me)
    await fillSlot(slot0.slotIndex)

    let slot1 = MockSlot(requestId: request.id,
                         slotIndex: 1.u256,
                         proof: proof,
                         host: me)
    await fillSlot(slot1.slotIndex)

    market.activeSlots[me] = @[request.slotId(0.u256), request.slotId(1.u256)]
    market.requested = @[request]
    market.activeRequests[me] = @[request.id]

    await sales.start()

    check eventually sales.agents.len == 2
    check sales.agents.any(agent => agent.data.requestId == request.id and agent.data.slotIndex == 0.u256)
    check sales.agents.any(agent => agent.data.requestId == request.id and agent.data.slotIndex == 1.u256)
asyncchecksuite "Sales":
|
|
let
|
|
proof = Groth16Proof.example
|
|
repoTmp = TempLevelDb.new()
|
|
metaTmp = TempLevelDb.new()
|
|
|
|
var availability: Availability
|
|
var request: StorageRequest
|
|
var sales: Sales
|
|
var market: MockMarket
|
|
var clock: MockClock
|
|
var reservations: Reservations
|
|
var repo: RepoStore
|
|
var queue: SlotQueue
|
|
var itemsProcessed: seq[SlotQueueItem]
|
|
|
|
setup:
|
|
availability = Availability(
|
|
totalSize: 100.u256,
|
|
freeSize: 100.u256,
|
|
duration: 60.u256,
|
|
minPrice: 600.u256,
|
|
maxCollateral: 400.u256
|
|
)
|
|
request = StorageRequest(
|
|
ask: StorageAsk(
|
|
slots: 4,
|
|
slotSize: 100.u256,
|
|
duration: 60.u256,
|
|
reward: 10.u256,
|
|
collateral: 200.u256,
|
|
),
|
|
content: StorageContent(
|
|
cid: "some cid"
|
|
),
|
|
expiry: (getTime() + initDuration(hours=1)).toUnix.u256
|
|
)
|
|
|
|
market = MockMarket.new()
|
|
|
|
let me = await market.getSigner()
|
|
market.activeSlots[me] = @[]
|
|
market.requestEnds[request.id] = request.expiry.toSecondsSince1970
|
|
|
|
clock = MockClock.new()
|
|
let repoDs = repoTmp.newDb()
|
|
let metaDs = metaTmp.newDb()
|
|
repo = RepoStore.new(repoDs, metaDs)
|
|
await repo.start()
|
|
sales = Sales.new(market, clock, repo)
|
|
reservations = sales.context.reservations
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
return success()
|
|
|
|
sales.onExpiryUpdate = proc(rootCid: string, expiry: SecondsSince1970): Future[?!void] {.async.} =
|
|
return success()
|
|
|
|
queue = sales.context.slotQueue
|
|
sales.onProve = proc(slot: Slot, challenge: ProofChallenge): Future[?!Groth16Proof] {.async.} =
|
|
return success(proof)
|
|
await sales.start()
|
|
itemsProcessed = @[]
|
|
|
|
teardown:
|
|
await sales.stop()
|
|
await repo.stop()
|
|
await repoTmp.destroyDb()
|
|
await metaTmp.destroyDb()
|
|
|
|
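
  # Lets the sales agent reach the initial proving state, then advances the
  # mock clock to the next period so the request can start.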
  proc allowRequestToStart {.async.} =
    # wait until we're in the initial proving state
    await sleepAsync(10.millis)
    # it won't start proving until the next period
    await clock.advanceToNextPeriod(market)
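
  # Re-reads the availability from the reservations store, so tests can
  # observe changes to its free size.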
  proc getAvailability: Availability =
    let key = availability.id.key.get
    (waitFor reservations.get(key, Availability)).get
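
  # Persists `availability` through the Reservations API and replaces the
  # local copy with the stored one, which carries the assigned id.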
  proc createAvailability() =
    let a = waitFor reservations.createAvailability(
      availability.totalSize,
      availability.duration,
      availability.minPrice,
      availability.maxCollateral
    )
    availability = a.get # update id
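
  # True when none of the slot queue items derived from `request` have been
  # processed yet.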
  proc notProcessed(itemsProcessed: seq[SlotQueueItem],
                    request: StorageRequest): bool =
    let items = SlotQueueItem.init(request)
    for i in 0..<items.len:
      if itemsProcessed.contains(items[i]):
        return false
    return true
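
  # Almost fills the slot queue with unrelated requests, then submits one
  # more request whose items the tests expect to be removed again when that
  # request is cancelled, fails, or is fulfilled.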
  proc addRequestToSaturatedQueue(): Future[StorageRequest] {.async.} =
    queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
      await sleepAsync(10.millis)
      itemsProcessed.add item
      done.complete()

    var request1 = StorageRequest.example
    request1.ask.collateral = request.ask.collateral + 1
    createAvailability()
    # saturate queue
    while queue.len < queue.size - 1:
      await market.requestStorage(StorageRequest.example)
    # send request
    await market.requestStorage(request1)
    await sleepAsync(5.millis) # wait for request slots to be added to queue
    return request1
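
  # True when handling the request left the availability's free size
  # unchanged and created no reservations, i.e. the request was ignored.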
  proc wasIgnored(): bool =
    let run = proc(): Future[bool] {.async.} =
      always (
        getAvailability().freeSize == availability.freeSize and
        (waitFor reservations.all(Reservation)).get.len == 0
      )
    waitFor run()
test "processes all request's slots once StorageRequested emitted":
|
|
queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
|
|
itemsProcessed.add item
|
|
done.complete()
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
let items = SlotQueueItem.init(request)
|
|
check eventually items.allIt(itemsProcessed.contains(it))
|
|
|
|
test "removes slots from slot queue once RequestCancelled emitted":
|
|
let request1 = await addRequestToSaturatedQueue()
|
|
market.emitRequestCancelled(request1.id)
|
|
check always itemsProcessed.notProcessed(request1)
|
|
|
|
test "removes request from slot queue once RequestFailed emitted":
|
|
let request1 = await addRequestToSaturatedQueue()
|
|
market.emitRequestFailed(request1.id)
|
|
check always itemsProcessed.notProcessed(request1)
|
|
|
|
test "removes request from slot queue once RequestFulfilled emitted":
|
|
let request1 = await addRequestToSaturatedQueue()
|
|
market.emitRequestFulfilled(request1.id)
|
|
check always itemsProcessed.notProcessed(request1)
|
|
|
|
test "removes slot index from slot queue once SlotFilled emitted":
|
|
let request1 = await addRequestToSaturatedQueue()
|
|
market.emitSlotFilled(request1.id, 1.u256)
|
|
let expected = SlotQueueItem.init(request1, 1'u16)
|
|
check always (not itemsProcessed.contains(expected))
|
|
|
|
test "adds slot index to slot queue once SlotFreed emitted":
|
|
queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
|
|
itemsProcessed.add item
|
|
done.complete()
|
|
|
|
createAvailability()
|
|
market.requested.add request # "contract" must be able to return request
|
|
market.emitSlotFreed(request.id, 2.u256)
|
|
|
|
let expected = SlotQueueItem.init(request, 2.uint16)
|
|
check eventually itemsProcessed.contains(expected)
|
|
|
|
test "items in queue are readded (and marked seen) once ignored":
|
|
await market.requestStorage(request)
|
|
let items = SlotQueueItem.init(request)
|
|
await sleepAsync(10.millis) # queue starts paused, allow items to be added to the queue
|
|
check eventually queue.paused
|
|
# The first processed item will be will have been re-pushed with `seen =
|
|
# true`. Then, once this item is processed by the queue, its 'seen' flag
|
|
# will be checked, at which point the queue will be paused. This test could
|
|
# check item existence in the queue, but that would require inspecting
|
|
# onProcessSlot to see which item was first, and overridding onProcessSlot
|
|
# will prevent the queue working as expected in the Sales module.
|
|
check eventually queue.len == 4
|
|
|
|
for item in items:
|
|
check queue.contains(item)
|
|
|
|
for i in 0..<queue.len:
|
|
check queue[i].seen
|
|
|
|
test "queue is paused once availability is insufficient to service slots in queue":
|
|
createAvailability() # enough to fill a single slot
|
|
await market.requestStorage(request)
|
|
let items = SlotQueueItem.init(request)
|
|
await sleepAsync(10.millis) # queue starts paused, allow items to be added to the queue
|
|
check eventually queue.paused
|
|
# The first processed item/slot will be filled (eventually). Subsequent
|
|
# items will be processed and eventually re-pushed with `seen = true`. Once
|
|
# a "seen" item is processed by the queue, the queue is paused. In the
|
|
# meantime, the other items that are process, marked as seen, and re-added
|
|
# to the queue may be processed simultaneously as the queue pausing.
|
|
# Therefore, there should eventually be 3 items remaining in the queue, all
|
|
# seen.
|
|
check eventually queue.len == 3
|
|
for i in 0..<queue.len:
|
|
check queue[i].seen
|
|
|
|
test "availability size is reduced by request slot size when fully downloaded":
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
let blk = bt.Block.new( @[1.byte] ).get
|
|
await onBatch( blk.repeat(request.ask.slotSize.truncate(int)) )
|
|
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
check eventually getAvailability().freeSize == availability.freeSize - request.ask.slotSize
|
|
|
|
test "non-downloaded bytes are returned to availability once finished":
|
|
var slotIndex = 0.u256
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
slotIndex = slot
|
|
let blk = bt.Block.new( @[1.byte] ).get
|
|
await onBatch(@[ blk ])
|
|
|
|
let sold = newFuture[void]()
|
|
sales.onSale = proc(request: StorageRequest, slotIndex: UInt256) =
|
|
sold.complete()
|
|
|
|
createAvailability()
|
|
let origSize = availability.freeSize
|
|
await market.requestStorage(request)
|
|
await allowRequestToStart()
|
|
await sold
|
|
|
|
# complete request
|
|
market.slotState[request.slotId(slotIndex)] = SlotState.Finished
|
|
clock.advance(request.ask.duration.truncate(int64))
|
|
|
|
check eventually getAvailability().freeSize == origSize - 1
|
|
|
|
test "ignores download when duration not long enough":
|
|
availability.duration = request.ask.duration - 1
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
check wasIgnored()
|
|
|
|
test "ignores request when slot size is too small":
|
|
availability.totalSize = request.ask.slotSize - 1
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
check wasIgnored()
|
|
|
|
test "ignores request when reward is too low":
|
|
availability.minPrice = request.ask.pricePerSlot + 1
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
check wasIgnored()
|
|
|
|
test "ignores request when asked collateral is too high":
|
|
var tooBigCollateral = request
|
|
tooBigCollateral.ask.collateral = availability.maxCollateral + 1
|
|
createAvailability()
|
|
await market.requestStorage(tooBigCollateral)
|
|
check wasIgnored()
|
|
|
|
test "ignores request when slot state is not free":
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
market.slotState[request.slotId(0.u256)] = SlotState.Filled
|
|
market.slotState[request.slotId(1.u256)] = SlotState.Filled
|
|
market.slotState[request.slotId(2.u256)] = SlotState.Filled
|
|
market.slotState[request.slotId(3.u256)] = SlotState.Filled
|
|
check wasIgnored()
|
|
|
|
test "retrieves and stores data locally":
|
|
var storingRequest: StorageRequest
|
|
var storingSlot: UInt256
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
storingRequest = request
|
|
storingSlot = slot
|
|
return success()
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
check eventually storingRequest == request
|
|
check storingSlot < request.ask.slots.u256
|
|
|
|
test "handles errors during state run":
|
|
var saleFailed = false
|
|
sales.onProve = proc(slot: Slot, challenge: ProofChallenge): Future[?!Groth16Proof] {.async.} =
|
|
# raise exception so machine.onError is called
|
|
raise newException(ValueError, "some error")
|
|
|
|
# onClear is called in SaleErrored.run
|
|
sales.onClear = proc(request: StorageRequest,
|
|
idx: UInt256) =
|
|
saleFailed = true
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
await allowRequestToStart()
|
|
|
|
check eventually saleFailed
|
|
|
|
test "makes storage available again when data retrieval fails":
|
|
let error = newException(IOError, "data retrieval failed")
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
return failure(error)
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
check getAvailability().freeSize == availability.freeSize
|
|
|
|
test "generates proof of storage":
|
|
var provingRequest: StorageRequest
|
|
var provingSlot: UInt256
|
|
sales.onProve = proc(slot: Slot, challenge: ProofChallenge): Future[?!Groth16Proof] {.async.} =
|
|
provingRequest = slot.request
|
|
provingSlot = slot.slotIndex
|
|
return success(Groth16Proof.example)
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
await allowRequestToStart()
|
|
|
|
check eventually provingRequest == request
|
|
check provingSlot < request.ask.slots.u256
|
|
|
|
test "fills a slot":
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
await allowRequestToStart()
|
|
|
|
check eventually market.filled.len > 0
|
|
check market.filled[0].requestId == request.id
|
|
check market.filled[0].slotIndex < request.ask.slots.u256
|
|
check market.filled[0].proof == proof
|
|
check market.filled[0].host == await market.getSigner()
|
|
|
|
test "calls onFilled when slot is filled":
|
|
var soldRequest = StorageRequest.default
|
|
var soldSlotIndex = UInt256.high
|
|
sales.onSale = proc(request: StorageRequest,
|
|
slotIndex: UInt256) =
|
|
soldRequest = request
|
|
soldSlotIndex = slotIndex
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
await allowRequestToStart()
|
|
|
|
check eventually soldRequest == request
|
|
check soldSlotIndex < request.ask.slots.u256
|
|
|
|
test "calls onClear when storage becomes available again":
|
|
# fail the proof intentionally to trigger `agent.finish(success=false)`,
|
|
# which then calls the onClear callback
|
|
sales.onProve = proc(slot: Slot, challenge: ProofChallenge): Future[?!Groth16Proof] {.async.} =
|
|
raise newException(IOError, "proof failed")
|
|
var clearedRequest: StorageRequest
|
|
var clearedSlotIndex: UInt256
|
|
sales.onClear = proc(request: StorageRequest,
|
|
slotIndex: UInt256) =
|
|
clearedRequest = request
|
|
clearedSlotIndex = slotIndex
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
await allowRequestToStart()
|
|
|
|
check eventually clearedRequest == request
|
|
check clearedSlotIndex < request.ask.slots.u256
|
|
|
|
test "makes storage available again when other host fills the slot":
|
|
let otherHost = Address.example
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
await sleepAsync(chronos.hours(1))
|
|
return success()
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
for slotIndex in 0..<request.ask.slots:
|
|
market.fillSlot(request.id, slotIndex.u256, proof, otherHost)
|
|
check eventually (await reservations.all(Availability)).get == @[availability]
|
|
|
|
test "makes storage available again when request expires":
|
|
let expiry = getTime().toUnix() + 10
|
|
market.requestExpiry[request.id] = expiry
|
|
|
|
let origSize = availability.freeSize
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
await sleepAsync(chronos.hours(1))
|
|
return success()
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
|
|
# If we would not await, then the `clock.set` would run "too fast" as the `subscribeCancellation()`
|
|
# would otherwise not set the timeout early enough as it uses `clock.now` in the deadline calculation.
|
|
await sleepAsync(chronos.milliseconds(100))
|
|
market.requestState[request.id]=RequestState.Cancelled
|
|
clock.set(expiry + 1)
|
|
check eventually (await reservations.all(Availability)).get == @[availability]
|
|
check getAvailability().freeSize == origSize
|
|
|
|
test "verifies that request is indeed expired from onchain before firing onCancelled":
|
|
let expiry = getTime().toUnix() + 10
|
|
# ensure only one slot, otherwise once bytes are returned to the
|
|
# availability, the queue will be unpaused and availability will be consumed
|
|
# by other slots
|
|
request.ask.slots = 1.uint64
|
|
market.requestExpiry[request.id] = expiry
|
|
|
|
let origSize = availability.freeSize
|
|
sales.onStore = proc(request: StorageRequest,
|
|
slot: UInt256,
|
|
onBatch: BatchProc): Future[?!void] {.async.} =
|
|
await sleepAsync(chronos.hours(1))
|
|
return success()
|
|
createAvailability()
|
|
await market.requestStorage(request)
|
|
market.requestState[request.id]=RequestState.New # "On-chain" is the request still ongoing even after local expiration
|
|
|
|
# If we would not await, then the `clock.set` would run "too fast" as the `subscribeCancellation()`
|
|
# would otherwise not set the timeout early enough as it uses `clock.now` in the deadline calculation.
|
|
await sleepAsync(chronos.milliseconds(100))
|
|
clock.set(expiry + 1)
|
|
check getAvailability().freeSize == 0
|
|
|
|
market.requestState[request.id]=RequestState.Cancelled # Now "on-chain" is also expired
|
|
check eventually getAvailability().freeSize == origSize
|
|
|
|
test "loads active slots from market":
|
|
let me = await market.getSigner()
|
|
|
|
request.ask.slots = 2
|
|
market.requested = @[request]
|
|
market.requestState[request.id] = RequestState.New
|
|
|
|
proc fillSlot(slotIdx: UInt256 = 0.u256) {.async.} =
|
|
let address = await market.getSigner()
|
|
let slot = MockSlot(requestId: request.id,
|
|
slotIndex: slotIdx,
|
|
proof: proof,
|
|
host: address)
|
|
market.filled.add slot
|
|
market.slotState[slotId(request.id, slotIdx)] = SlotState.Filled
|
|
|
|
let slot0 = MockSlot(requestId: request.id,
|
|
slotIndex: 0.u256,
|
|
proof: proof,
|
|
host: me)
|
|
await fillSlot(slot0.slotIndex)
|
|
|
|
let slot1 = MockSlot(requestId: request.id,
|
|
slotIndex: 1.u256,
|
|
proof: proof,
|
|
host: me)
|
|
await fillSlot(slot1.slotIndex)
|
|
market.activeSlots[me] = @[request.slotId(0.u256), request.slotId(1.u256)]
|
|
market.requested = @[request]
|
|
market.activeRequests[me] = @[request.id]
|
|
|
|
await sales.load()
|
|
|
|
check eventually sales.agents.len == 2
|
|
check sales.agents.any(agent => agent.data.requestId == request.id and agent.data.slotIndex == 0.u256)
|
|
check sales.agents.any(agent => agent.data.requestId == request.id and agent.data.slotIndex == 1.u256)
|
|
|
|
test "deletes inactive reservations on load":
|
|
createAvailability()
|
|
discard await reservations.createReservation(
|
|
availability.id,
|
|
100.u256,
|
|
RequestId.example,
|
|
UInt256.example)
|
|
check (await reservations.all(Reservation)).get.len == 1
|
|
await sales.load()
|
|
check (await reservations.all(Reservation)).get.len == 0
|
|
check getAvailability().freeSize == availability.freeSize # was restored
|