Commit 1d161d383e (mirror of https://github.com/codex-storage/nim-codex.git)
## Slot queue

Adds a slot queue, as per the [slot queue design](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#slot-queue).

Any time storage is requested, all slots from that request are immediately added to the queue. Finished, Cancelled, and Failed requests remove all slots with that request id from the queue. SlotFreed events add a new slot to the queue and SlotFilled events remove the slot from the queue. This allows a slot to be popped each time one is processed, which simplifies things considerably.

When an entire request of slots is added to the queue, the slot indices are shuffled randomly to help prevent nodes that pick up the same StorageRequested event from clashing on the first processed slot index. This allowed the random slot index assignment in the SalePreparing state to be removed, and it also guarantees that every SalesAgent has a slot index assigned from the start, so the optional slotIndex could be removed as well.

Remove slotId from the SlotFreed event as it was not being used. RequestId and slotIndex were added to the SlotFreed event earlier, and those are now being used.

The slot queue invariant that prioritises queue items relies on a scoring mechanism that sorts them according to the [sort order in the design document](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#sort-order).

Previously, when a storage request was handled by the sales module, a slot index was randomly assigned and then the slot was filled. Now, a random slot index is only assigned when adding an entire request to the slot queue. Additionally, the slot's state is checked to be `SlotState.Free` before continuing with the download process.

SlotQueue should always ensure the underlying AsyncHeapQueue holds one less than the maximum number of items, so that the SlotQueue always has space to add an additional item regardless of whether it is full or not.

Constructing `SlotQueue.workers` in `SlotQueue.new` calls `newAsyncQueue`, which causes side effects, so the construction call had to be moved to `SlotQueue.start`.

Prevent loading a request from the contract (a network request) if there is an existing item in the queue for that request. Check availability before adding a request to the queue.

Add the ability to query the market contract for past events. When new availabilities are added, the `onReservationAdded` callback is triggered, in which past `StorageRequested` events are queried, and those slots are added to the queue (filtered by availability on `push` and filtered by state in `SalePreparing`).

#### Request Workers

Limit the concurrent requests being processed in the queue by using a limited pool of workers (default = 3). Workers are held in a data structure of type `AsyncQueue[SlotQueueWorker]`, which allows the main SlotQueue event loop to await a `popFirst` for an available worker (see the sketch at the end of this section).

Add an `onCleanUp` callback that stops the agents and removes them from the sales module agent list. `onCleanUp` is called from sales end states (e.g. ignored, cancelled, finished, failed, errored).

Add a `doneProcessing` future to `SlotQueueWorker`, to be completed in the `OnProcessSlot` callback. Each `doneProcessing` future created is cancelled and awaited in `SlotQueue.stop` (thanks to `TrackableFutures`), which forced `stop` to become async.

- Cancel dispatched workers and the `onProcessSlot` callbacks, preventing zombie callbacks

#### Add TrackableFutures

Allow tracking of futures in a module so they can be cancelled at a later time. Useful for asyncSpawned futures, but works for any future.
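The following is a minimal sketch of the worker-pool dispatch described under Request Workers, built on chronos primitives. The `Worker`, `dispatch`, and `runQueue` names (and the simulated callback) are illustrative assumptions, not the actual SlotQueue implementation:

```nim
import pkg/chronos

type Worker = ref object
  doneProcessing: Future[void]

proc dispatch(workers: AsyncQueue[Worker], worker: Worker) {.async.} =
  worker.doneProcessing = newFuture[void]("worker.dispatch")
  # in the real queue, onProcessSlot(item, worker.doneProcessing) runs here
  # and the callback completes the future; we simulate it with a sleep
  proc simulateCallback() {.async.} =
    await sleepAsync(10.millis)
    worker.doneProcessing.complete()
  asyncSpawn simulateCallback()
  await worker.doneProcessing   # wait until the callback signals completion
  await workers.addLast(worker) # return the worker to the pool

proc runQueue(workers: AsyncQueue[Worker], maxWorkers = 3) {.async.} =
  for _ in 0 ..< maxWorkers:
    await workers.addLast(Worker()) # seed the pool
  while true:
    # blocks while all workers are busy, capping concurrency at maxWorkers
    let worker = await workers.popFirst()
    # the next prioritised SlotQueueItem would be popped here
    asyncSpawn dispatch(workers, worker)

when isMainModule:
  let workers = newAsyncQueue[Worker]()
  waitFor runQueue(workers) # runs forever, dispatching as workers free up
```

Popping a worker before popping a queue item means a full pool applies backpressure to the queue itself, rather than buffering unbounded in-flight work.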
### Sales module

The sales module needed to subscribe to request events to ensure that the request queue is managed correctly on each event. In the process of doing this, the sales agents were updated to avoid subscribing to events in each agent; instead, received events are dispatched from the sales module to all created sales agents. This prevents memory leaks from having too many event emitters subscribed.

- prevent removal of agents from the sales module while stopping, otherwise the agents seq len is modified while iterating

An additional sales agent state was added, `SalePreparing`, which handles all state machine setup, such as retrieving the request and subscribing to events, that was previously done in the `SaleDownloading` state.

Once agents had parked in an end state (e.g. ignored, cancelled, finished, failed, errored), they were not getting cleaned up, and the sales module kept a handle on their reference. An `onCleanUp` callback was created to be called after the state machine enters an end state, which prevents a potential memory leak when the number of incoming requests is high.

Move the SalesAgent callback raises pragmas from the Sales module to the proc definitions in SalesAgent. This avoids having to catch `Exception`.

- remove unneeded error handling, as the pragmas were moved

Move sales.subscriptions from an object containing named subscriptions to a `seq[Subscription]` directly on the sales object.

Sales tests: shut down the repo after sales stops, to fix a SIGABRT in CI.

### Add async Promise API

- modelled after the JavaScript Promise API
- an alternative to `asyncSpawn` that allows handling of async calls in a synchronous context (including access to the synchronous closure), with fewer additional procs to declare (a sketch follows the dependency list below)
- write less code, catch errors that would otherwise become defects in `asyncSpawn`, and execute a callback after completion
- add cancellation callbacks to utils/then, ensuring cancellations are handled properly

## Dependencies

- bump codex-contracts-eth to support the slot queue (https://github.com/codex-storage/codex-contracts-eth/pull/61)
- bump nim-ethers to 0.5.0
- bump the nim-json-rpc submodule to 0bf2bcb

---------

Co-authored-by: Jaremy Creechley <creechley@gmail.com>
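As referenced above, here is a minimal, self-contained sketch of the Promise-style `then` pattern. It is illustrative only: the `then` implementation below is an assumption for demonstration, and the actual utils/then module also chains error (`catch`) and cancellation callbacks, with signatures that may differ.

```nim
import pkg/chronos

# Illustrative stand-in for utils/then: attach a success callback to a
# future from synchronous code, swallowing cancellations and errors where
# the real helper would invoke dedicated callbacks instead.
proc then[T](fut: Future[T], onSuccess: proc(value: T) {.gcsafe.}): Future[T] =
  proc watch() {.async.} =
    try:
      onSuccess(await fut)
    except CancelledError:
      discard # the real helper runs a cancellation callback here
    except CatchableError:
      discard # the real helper runs an error callback here
  asyncSpawn watch()
  fut # returning the future allows chaining

proc fetchAnswer(): Future[int] {.async.} =
  return 42

when isMainModule:
  # handle an async call from a synchronous context, without declaring a
  # separate wrapper proc or risking an unhandled defect in asyncSpawn:
  discard fetchAnswer().then(proc(value: int) = echo "got ", value)
  poll() # drive the event loop in case the future is still pending
```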
428 lines · 16 KiB · Nim
```nim
import std/sets
import std/sequtils
import std/sugar
import std/times
import pkg/asynctest
import pkg/chronos
import pkg/datastore
import pkg/questionable
import pkg/questionable/results
import pkg/codex/sales
import pkg/codex/sales/salesdata
import pkg/codex/sales/salescontext
import pkg/codex/sales/reservations
import pkg/codex/sales/slotqueue
import pkg/codex/stores/repostore
import pkg/codex/proving
import pkg/codex/blocktype as bt
import pkg/codex/node
import ../helpers/mockmarket
import ../helpers/mockclock
import ../helpers/eventually
import ../examples
import ./helpers

asyncchecksuite "Sales":
  let proof = exampleProof()

  var availability: Availability
  var request: StorageRequest
  var sales: Sales
  var market: MockMarket
  var clock: MockClock
  var proving: Proving
  var reservations: Reservations
  var repo: RepoStore
  var queue: SlotQueue
  var itemsProcessed: seq[SlotQueueItem]

  setup:
    availability = Availability.init(
      size=100.u256,
      duration=60.u256,
      minPrice=600.u256,
      maxCollateral=400.u256
    )
    request = StorageRequest(
      ask: StorageAsk(
        slots: 4,
        slotSize: 100.u256,
        duration: 60.u256,
        reward: 10.u256,
        collateral: 200.u256,
      ),
      content: StorageContent(
        cid: "some cid"
      ),
      expiry: (getTime() + initDuration(hours=1)).toUnix.u256
    )

    market = MockMarket.new()
    clock = MockClock.new()
    proving = Proving.new()
    let repoDs = SQLiteDatastore.new(Memory).tryGet()
    let metaDs = SQLiteDatastore.new(Memory).tryGet()
    repo = RepoStore.new(repoDs, metaDs)
    await repo.start()
    sales = Sales.new(market, clock, proving, repo)
    reservations = sales.context.reservations
    sales.onStore = proc(request: StorageRequest,
                         slot: UInt256,
                         onBatch: BatchProc): Future[?!void] {.async.} =
      return success()
    queue = sales.context.slotQueue
    proving.onProve = proc(slot: Slot): Future[seq[byte]] {.async.} =
      return proof
    await sales.start()
    request.expiry = (clock.now() + 42).u256
    itemsProcessed = @[]

  teardown:
    await sales.stop()
    await repo.stop()

  proc getAvailability: ?!Availability =
    waitFor reservations.get(availability.id)

  proc notProcessed(itemsProcessed: seq[SlotQueueItem],
                    request: StorageRequest): bool =
    let items = SlotQueueItem.init(request)
    for i in 0..<items.len:
      if itemsProcessed.contains(items[i]):
        return false
    return true

  proc addRequestToSaturatedQueue(): Future[StorageRequest] {.async.} =
    queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
      await sleepAsync(10.millis)
      itemsProcessed.add item
      done.complete()

    var request1 = StorageRequest.example
    request1.ask.collateral = request.ask.collateral + 1
    discard await reservations.reserve(availability)
    await market.requestStorage(request)
    await market.requestStorage(request1)
    await sleepAsync(5.millis) # wait for request slots to be added to queue
    return request1

  test "processes all request's slots once StorageRequested emitted":
    queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
      itemsProcessed.add item
      done.complete()
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    let items = SlotQueueItem.init(request)
    check eventually items.allIt(itemsProcessed.contains(it))

  test "removes slots from slot queue once RequestCancelled emitted":
    let request1 = await addRequestToSaturatedQueue()
    market.emitRequestCancelled(request1.id)
    check always itemsProcessed.notProcessed(request1)

  test "removes request from slot queue once RequestFailed emitted":
    let request1 = await addRequestToSaturatedQueue()
    market.emitRequestFailed(request1.id)
    check always itemsProcessed.notProcessed(request1)

  test "removes request from slot queue once RequestFulfilled emitted":
    let request1 = await addRequestToSaturatedQueue()
    market.emitRequestFulfilled(request1.id)
    check always itemsProcessed.notProcessed(request1)

  test "removes slot index from slot queue once SlotFilled emitted":
    let request1 = await addRequestToSaturatedQueue()
    market.emitSlotFilled(request1.id, 1.u256)
    let expected = SlotQueueItem.init(request1, 1'u16)
    check always (not itemsProcessed.contains(expected))

  test "adds slot index to slot queue once SlotFreed emitted":
    queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
      itemsProcessed.add item
      done.complete()

    check isOk await reservations.reserve(availability)
    market.requested.add request # "contract" must be able to return request
    market.emitSlotFreed(request.id, 2.u256)

    let expected = SlotQueueItem.init(request, 2.uint16)
    check eventually itemsProcessed.contains(expected)

  test "request slots are not added to the slot queue when no availabilities exist":
    var itemsProcessed: seq[SlotQueueItem] = @[]
    queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
      itemsProcessed.add item
      done.complete()

    await market.requestStorage(request)
    # check that request was ignored due to no matching availability
    check always itemsProcessed.len == 0

  test "non-matching availabilities/requests are not added to the slot queue":
    var itemsProcessed: seq[SlotQueueItem] = @[]
    queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
      itemsProcessed.add item
      done.complete()

    let nonMatchingAvailability = Availability.init(
      size=100.u256,
      duration=60.u256,
      minPrice=601.u256, # too high
      maxCollateral=400.u256
    )
    check isOk await reservations.reserve(nonMatchingAvailability)
    await market.requestStorage(request)
    # check that request was ignored due to no matching availability
    check always itemsProcessed.len == 0

  test "adds past requests to queue once availability added":
    var itemsProcessed: seq[SlotQueueItem] = @[]
    queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
      itemsProcessed.add item
      done.complete()

    await market.requestStorage(request)

    # now add matching availability
    check isOk await reservations.reserve(availability)
    check eventually itemsProcessed.len == request.ask.slots.int

  test "makes storage unavailable when downloading a matched request":
    var used = false
    sales.onStore = proc(request: StorageRequest,
                         slot: UInt256,
                         onBatch: BatchProc): Future[?!void] {.async.} =
      without avail =? await reservations.get(availability.id):
        fail()
      used = avail.used
      return success()

    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually used

  test "reduces remaining availability size after download":
    let blk = bt.Block.example
    request.ask.slotSize = blk.data.len.u256
    availability.size = request.ask.slotSize + 1
    sales.onStore = proc(request: StorageRequest,
                         slot: UInt256,
                         onBatch: BatchProc): Future[?!void] {.async.} =
      await onBatch(@[blk])
      return success()
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually getAvailability().?size == success 1.u256

  test "ignores download when duration not long enough":
    availability.duration = request.ask.duration - 1
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check getAvailability().?size == success availability.size

  test "ignores request when slot size is too small":
    availability.size = request.ask.slotSize - 1
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check getAvailability().?size == success availability.size

  test "ignores request when reward is too low":
    availability.minPrice = request.ask.pricePerSlot + 1
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check getAvailability().?size == success availability.size

  test "availability remains unused when request is ignored":
    availability.minPrice = request.ask.pricePerSlot + 1
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check getAvailability().?used == success false

  test "ignores request when asked collateral is too high":
    var tooBigCollateral = request
    tooBigCollateral.ask.collateral = availability.maxCollateral + 1
    check isOk await reservations.reserve(availability)
    await market.requestStorage(tooBigCollateral)
    check getAvailability().?size == success availability.size

  test "ignores request when slot state is not free":
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    market.slotState[request.slotId(0.u256)] = SlotState.Filled
    market.slotState[request.slotId(1.u256)] = SlotState.Filled
    market.slotState[request.slotId(2.u256)] = SlotState.Filled
    market.slotState[request.slotId(3.u256)] = SlotState.Filled
    check getAvailability().?size == success availability.size

  test "retrieves and stores data locally":
    var storingRequest: StorageRequest
    var storingSlot: UInt256
    sales.onStore = proc(request: StorageRequest,
                         slot: UInt256,
                         onBatch: BatchProc): Future[?!void] {.async.} =
      storingRequest = request
      storingSlot = slot
      return success()
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually storingRequest == request
    check storingSlot < request.ask.slots.u256

  test "handles errors during state run":
    var saleFailed = false
    proving.onProve = proc(slot: Slot): Future[seq[byte]] {.async.} =
      # raise exception so machine.onError is called
      raise newException(ValueError, "some error")

    # onClear is called in SaleErrored.run
    sales.onClear = proc(request: StorageRequest,
                         idx: UInt256) =
      saleFailed = true
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually saleFailed

  test "makes storage available again when data retrieval fails":
    let error = newException(IOError, "data retrieval failed")
    sales.onStore = proc(request: StorageRequest,
                         slot: UInt256,
                         onBatch: BatchProc): Future[?!void] {.async.} =
      return failure(error)
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually getAvailability().?used == success false
    check getAvailability().?size == success availability.size

  test "generates proof of storage":
    var provingRequest: StorageRequest
    var provingSlot: UInt256
    proving.onProve = proc(slot: Slot): Future[seq[byte]] {.async.} =
      provingRequest = slot.request
      provingSlot = slot.slotIndex
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually provingRequest == request
    check provingSlot < request.ask.slots.u256

  test "fills a slot":
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually market.filled.len == 1
    check market.filled[0].requestId == request.id
    check market.filled[0].slotIndex < request.ask.slots.u256
    check market.filled[0].proof == proof
    check market.filled[0].host == await market.getSigner()

  test "calls onSale when slot is filled":
    var soldAvailability: Availability
    var soldRequest: StorageRequest
    var soldSlotIndex: UInt256
    sales.onSale = proc(request: StorageRequest,
                        slotIndex: UInt256) =
      if a =? availability:
        soldAvailability = a
      soldRequest = request
      soldSlotIndex = slotIndex
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually soldAvailability == availability
    check soldRequest == request
    check soldSlotIndex < request.ask.slots.u256

  test "calls onClear when storage becomes available again":
    # fail the proof intentionally to trigger `agent.finish(success=false)`,
    # which then calls the onClear callback
    proving.onProve = proc(slot: Slot): Future[seq[byte]] {.async.} =
      raise newException(IOError, "proof failed")
    var clearedRequest: StorageRequest
    var clearedSlotIndex: UInt256
    sales.onClear = proc(request: StorageRequest,
                         slotIndex: UInt256) =
      clearedRequest = request
      clearedSlotIndex = slotIndex
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually clearedRequest == request
    check clearedSlotIndex < request.ask.slots.u256

  test "makes storage available again when other host fills the slot":
    let otherHost = Address.example
    sales.onStore = proc(request: StorageRequest,
                         slot: UInt256,
                         onBatch: BatchProc): Future[?!void] {.async.} =
      await sleepAsync(chronos.hours(1))
      return success()
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    for slotIndex in 0..<request.ask.slots:
      market.fillSlot(request.id, slotIndex.u256, proof, otherHost)
    check eventually (await reservations.allAvailabilities) == @[availability]

  test "makes storage available again when request expires":
    sales.onStore = proc(request: StorageRequest,
                         slot: UInt256,
                         onBatch: BatchProc): Future[?!void] {.async.} =
      await sleepAsync(chronos.hours(1))
      return success()
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    clock.set(request.expiry.truncate(int64))
    check eventually (await reservations.allAvailabilities) == @[availability]

  test "adds proving for slot when slot is filled":
    var soldSlotIndex: UInt256
    sales.onSale = proc(request: StorageRequest,
                        slotIndex: UInt256) =
      soldSlotIndex = slotIndex
    check proving.slots.len == 0
    check isOk await reservations.reserve(availability)
    await market.requestStorage(request)
    check eventually proving.slots.len == 1
    check proving.slots.contains(Slot(request: request, slotIndex: soldSlotIndex))

  test "loads active slots from market":
    let me = await market.getSigner()

    request.ask.slots = 2
    market.requested = @[request]
    market.requestState[request.id] = RequestState.New

    proc fillSlot(slotIdx: UInt256 = 0.u256) {.async.} =
      let address = await market.getSigner()
      let slot = MockSlot(requestId: request.id,
                          slotIndex: slotIdx,
                          proof: proof,
                          host: address)
      market.filled.add slot
      market.slotState[slotId(request.id, slotIdx)] = SlotState.Filled

    let slot0 = MockSlot(requestId: request.id,
                         slotIndex: 0.u256,
                         proof: proof,
                         host: me)
    await fillSlot(slot0.slotIndex)

    let slot1 = MockSlot(requestId: request.id,
                         slotIndex: 1.u256,
                         proof: proof,
                         host: me)
    await fillSlot(slot1.slotIndex)
    market.activeSlots[me] = @[request.slotId(0.u256), request.slotId(1.u256)]
    market.requested = @[request]
    market.activeRequests[me] = @[request.id]

    await sales.load()
    let expected = SalesData(requestId: request.id, request: some request)
    # because sales.load() calls agent.start, we won't know the slotIndex
    # randomly selected for the agent, and we also won't know the value of
    # `failed`/`fulfilled`/`cancelled` futures, so we need to compare
    # the properties we know
    # TODO: when calling sales.load(), slot index should be restored and not
    # randomly re-assigned, so this may no longer be needed
    proc `==` (data0, data1: SalesData): bool =
      return data0.requestId == data1.requestId and
             data0.request == data1.request

    check eventually sales.agents.len == 2
    check sales.agents.all(agent => agent.data == expected)
```