Slot queue (#455)
## Slot queue
Adds a slot queue, as per the [slot queue design](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#slot-queue).
Any time storage is requested, all slots from that request are immediately added to the queue. Finished, Cancelled, and Failed requests remove all slots with that request id from the queue. SlotFreed events add a new slot to the queue and SlotFilled events remove the slot from the queue. This allows a slot to be popped each time one is processed, making things much simpler.
When an entire request of slots is added to the queue, the slot indices are shuffled randomly to reduce the chance that nodes picking up the same storage requested event clash on the first processed slot index (see the sketch below). This allowed the random slot index assignment in the SalePreparing state to be removed, and it also ensures that all SalesAgents have a slot index assigned to them from the start, so the optional slotIndex could be removed as well.
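A minimal sketch of the shuffling step, assuming a hypothetical `shuffledSlotIndices` helper (the real code builds `SlotQueueItem`s from the request; only the index randomisation is shown here):

```nim
import std/[random, sequtils]

proc shuffledSlotIndices(numSlots: uint16): seq[uint16] =
  ## Hypothetical helper: return the request's slot indices in random order,
  ## so competing nodes are unlikely to start processing the same slot first.
  randomize()                        # seed the default RNG
  result = toSeq(0'u16 ..< numSlots)
  shuffle(result)
```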
Remove slotId from the SlotFreed event as it was not being used. RequestId and slotIndex were added to the SlotFreed event earlier and are now used instead.
The slot queue prioritises queued items using a scoring mechanism that sorts them according to the [sort order in the design document](https://github.com/codex-storage/codex-research/blob/master/design/sales.md#sort-order).
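A simplified comparison consistent with that sort order might look as follows. This is a sketch, not the actual `SlotQueueItem` implementation: the field names are assumed, profitability is approximated as `duration * reward` (the price per slot), and the precedence of the criteria follows the order of the sort tests further below.

```nim
import pkg/stint

type ItemScore = object
  profitability: UInt256   # duration * reward, i.e. price per slot
  collateral: UInt256
  expiry: UInt256
  slotSize: UInt256

proc `<`(a, b: ItemScore): bool =
  # "less than" means higher priority, i.e. popped from the heap first
  if a.profitability != b.profitability:
    return a.profitability > b.profitability   # more profitable first
  if a.collateral != b.collateral:
    return a.collateral < b.collateral         # less collateral required first
  if a.expiry != b.expiry:
    return a.expiry > b.expiry                 # longer time until expiry first
  return a.slotSize < b.slotSize               # smaller dataset first
```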
Previously, when a storage request was handled by the sales module, a slot index was randomly assigned and then the slot was filled. Now, a random slot index is only assigned when adding an entire request to the slot queue. Additionally, the slot's state is checked to be `SlotState.Free` before continuing with the download process.
SlotQueue always keeps the underlying AsyncHeapQueue at one less than the maximum number of items, ensuring the SlotQueue always has space to add an additional item regardless of whether it is full.
Constructing `SlotQueue.workers` in `SlotQueue.new` calls `newAsyncQueue` which causes side effects, so the construction call had to be moved to `SlotQueue.start`.
Prevent loading the request from the contract (a network request) if there is already an item in the queue for that request.
Check availability before adding a request to the queue.
Add the ability to query the market contract for past events. When new availabilities are added, the `onReservationAdded` callback is triggered, in which past `StorageRequested` events are queried and their slots are added to the queue (filtered by availability on `push` and by state in `SalePreparing`).
#### Request Workers
Limit the number of slots processed concurrently by using a limited pool of workers (default = 3). Workers are kept in a data structure of type `AsyncQueue[SlotQueueWorker]`, which allows the main SlotQueue event loop to await a `popFirst` for an available worker (see the sketch below).
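A simplified sketch of that dispatch pattern, with hypothetical `Worker`/`Item` types and a plain `AsyncQueue` standing in for the priority heap of slot items:

```nim
import pkg/chronos

type
  Worker = ref object        # stand-in for SlotQueueWorker
  Item = object              # stand-in for SlotQueueItem
    requestId: string
    slotIndex: uint16

proc runLoop(workers: AsyncQueue[Worker],
             items: AsyncQueue[Item],
             process: proc(item: Item): Future[void] {.gcsafe.}) {.async.} =
  while true:
    let worker = await workers.popFirst()   # block until a worker is free
    let item = await items.popFirst()       # block until there is work
    proc dispatch() {.async.} =
      try:
        await process(item)
      except CatchableError:
        discard                             # a real implementation would log here
      workers.addLastNoWait(worker)         # return the worker to the pool
    asyncSpawn dispatch()
```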
Add an `onCleanUp` callback that stops the agents and removes them from the sales module agent list. `onCleanUp` is called from the sales end states (e.g. ignored, cancelled, finished, failed, errored).
Add a `doneProcessing` future to `SlotQueueWorker` to be completed in the `OnProcessSlot` callback. Each `doneProcessing` future created is cancelled and awaited in `SlotQueue.stop` (thanks to `TrackableFutures`), which forced `stop` to become async.
- Cancel dispatched workers and the `onProcessSlot` callbacks, preventing zombie callbacks
#### Add TrackableFutures
Allow tracking of futures in a module so they can be cancelled at a later time. Useful for asyncSpawned futures, but works for any future.
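A minimal sketch of the idea, assuming hypothetical names (the real `TrackableFutures` API may differ):

```nim
import pkg/chronos

type Tracked = ref object
  futures: seq[FutureBase]          # futures spawned by the module

proc track(t: Tracked, fut: FutureBase) =
  t.futures.add fut                 # remember the future so it can be cancelled later

proc cancelTracked(t: Tracked) {.async.} =
  for fut in t.futures:
    if not fut.finished():
      await fut.cancelAndWait()     # cancel and wait for the future to settle
  t.futures = @[]
```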
### Sales module
The sales module needed to subscribe to request events to ensure that the request queue was managed correctly on each event. In the process, the sales agents were updated so that, instead of each agent subscribing to events itself, received events are dispatched from the sales module to all created sales agents (see the sketch below). This prevents memory leaks caused by too many EventEmitter subscriptions.
- Prevent removal of agents from the sales module while stopping, otherwise the agents seq is modified while it is being iterated
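A rough sketch of the fan-out, with hypothetical `SaleEvent`/`Agent` types (the real module forwards the market subscription callbacks):

```nim
type
  SaleEvent = object                       # stand-in for a request/slot event payload
    requestId: string
  Agent = ref object
    onEvent: proc(e: SaleEvent) {.gcsafe.}

proc dispatch(agents: seq[Agent], e: SaleEvent) =
  # one subscription held by the sales module fans each event out to every
  # agent, instead of one subscription per agent
  for agent in agents:
    if not agent.onEvent.isNil:
      agent.onEvent(e)
```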
An additional sales agent state was added, `SalePreparing`, that handles all state machine setup, such as retrieving the request and subscribing to events that were previously in the `SaleDownloading` state.
Once agents had parked in an end state (e.g. ignored, cancelled, finished, failed, errored), they were not getting cleaned up and the sales module was keeping a handle on their reference. An `onCleanUp` callback was created to be called after the state machine enters an end state, which prevents a memory leak when the number of incoming requests is high.
Move the SalesAgent callback raises pragmas from the Sales module to the proc definition in SalesAgent. This avoids having to catch `Exception`.
- Remove unneeded error handling now that the pragmas have been moved
Move sales.subscriptions from an object containing named subscriptions to a `seq[Subscription]` directly on the sales object.
Sales tests: shut down the repo after sales stop, to fix a SIGABRT in CI.
### Add async Promise API
- Modelled after the JavaScript Promise API
- An alternative to `asyncSpawn` that allows handling of async calls in a synchronous context (including access to the synchronous closure), with fewer additional procs to declare; see the sketch after this list
- Write less code, catch errors that would otherwise become defects in `asyncSpawn`, and execute a callback after completion
- Add cancellation callbacks to `utils/then`, ensuring cancellations are handled properly
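Roughly, the pattern replaces fire-and-forget `asyncSpawn` with completion callbacks. The following is a minimal hypothetical sketch of the idea, not the actual `utils/then` API:

```nim
import pkg/chronos

proc runWithCallbacks(fut: Future[void],
                      onSuccess: proc() {.gcsafe.},
                      onError: proc(e: ref CatchableError) {.gcsafe.}) =
  ## Hypothetical helper: run a future from a synchronous context and handle its
  ## outcome via callbacks instead of letting a failure become an asyncSpawn defect.
  proc wrapper() {.async.} =
    try:
      await fut
      onSuccess()
    except CatchableError as e:
      onError(e)
  asyncSpawn wrapper()
```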
## Dependencies
- Bump codex-contracts-eth to support the slot queue (https://github.com/codex-storage/codex-contracts-eth/pull/61)
- Bump nim-ethers to 0.5.0
- Bump nim-json-rpc submodule to 0bf2bcb
---------
Co-authored-by: Jaremy Creechley <creechley@gmail.com>

import std/sequtils
import pkg/asynctest
import pkg/chronicles
import pkg/chronos
import pkg/datastore
import pkg/questionable
import pkg/questionable/results

import pkg/codex/sales/reservations
import pkg/codex/sales/slotqueue
import pkg/codex/stores

import ../helpers
import ../helpers/mockmarket
import ../helpers/eventually
import ../examples

suite "Slot queue start/stop":

  var repo: RepoStore
  var repoDs: Datastore
  var metaDs: SQLiteDatastore
  var reservations: Reservations
  var queue: SlotQueue

  setup:
    repoDs = SQLiteDatastore.new(Memory).tryGet()
    metaDs = SQLiteDatastore.new(Memory).tryGet()
    repo = RepoStore.new(repoDs, metaDs)
    reservations = Reservations.new(repo)
    queue = SlotQueue.new(reservations)

  teardown:
    await queue.stop()

  test "starts out not running":
    check not queue.running

  test "can call start multiple times, and when already running":
    asyncSpawn queue.start()
    asyncSpawn queue.start()
    check queue.running

  test "can call stop when already stopped":
    await queue.stop()
    check not queue.running

  test "can call stop when running":
    asyncSpawn queue.start()
    await queue.stop()
    check not queue.running

  test "can call stop multiple times":
    asyncSpawn queue.start()
    await queue.stop()
    await queue.stop()
    check not queue.running
suite "Slot queue workers":
|
|
|
|
|
|
|
|
var repo: RepoStore
|
|
|
|
var repoDs: Datastore
|
|
|
|
var metaDs: SQLiteDatastore
|
|
|
|
var availability: Availability
|
|
|
|
var reservations: Reservations
|
|
|
|
var queue: SlotQueue
|
|
|
|
|
|
|
|
proc onProcessSlot(item: SlotQueueItem, doneProcessing: Future[void]) {.async.} =
|
|
|
|
await sleepAsync(1000.millis)
|
|
|
|
# this is not illustrative of the realistic scenario as the
|
|
|
|
# `doneProcessing` future would be passed to another context before being
|
|
|
|
# completed and therefore is not as simple as making the callback async
|
|
|
|
doneProcessing.complete()
|
|
|
|
|
|
|
|
setup:
|
|
|
|
let request = StorageRequest.example
|
|
|
|
repoDs = SQLiteDatastore.new(Memory).tryGet()
|
|
|
|
metaDs = SQLiteDatastore.new(Memory).tryGet()
|
|
|
|
let quota = request.ask.slotSize.truncate(uint) * 100 + 1
|
|
|
|
repo = RepoStore.new(repoDs, metaDs, quotaMaxBytes = quota)
|
|
|
|
reservations = Reservations.new(repo)
|
|
|
|
# create an availability that should always match
|
|
|
|
availability = Availability.init(
|
|
|
|
size = request.ask.slotSize * 100,
|
|
|
|
duration = request.ask.duration * 100,
|
|
|
|
minPrice = request.ask.pricePerSlot div 100,
|
|
|
|
maxCollateral = request.ask.collateral * 100
|
|
|
|
)
|
|
|
|
queue = SlotQueue.new(reservations, maxSize = 5, maxWorkers = 3)
|
|
|
|
queue.onProcessSlot = onProcessSlot
|
|
|
|
discard await reservations.reserve(availability)
|
|
|
|
|
|
|
|
proc startQueue = asyncSpawn queue.start()
|
|
|
|
|
|
|
|
teardown:
|
|
|
|
await queue.stop()
|
|
|
|
|
|
|
|
test "activeWorkers should be 0 when not running":
|
|
|
|
check queue.activeWorkers == 0
|
|
|
|
|
|
|
|
test "maxWorkers cannot be 0":
|
|
|
|
expect ValueError:
|
|
|
|
discard SlotQueue.new(reservations, maxSize = 1, maxWorkers = 0)
|
|
|
|
|
|
|
|
test "maxWorkers cannot surpass maxSize":
|
|
|
|
expect ValueError:
|
|
|
|
discard SlotQueue.new(reservations, maxSize = 1, maxWorkers = 2)
|
|
|
|
|
|
|
|
test "does not surpass max workers":
|
|
|
|
startQueue()
|
|
|
|
let item1 = SlotQueueItem.example
|
|
|
|
let item2 = SlotQueueItem.example
|
|
|
|
let item3 = SlotQueueItem.example
|
|
|
|
let item4 = SlotQueueItem.example
|
|
|
|
check (await queue.push(item1)).isOk
|
|
|
|
check (await queue.push(item2)).isOk
|
|
|
|
check (await queue.push(item3)).isOk
|
|
|
|
check (await queue.push(item4)).isOk
|
|
|
|
check eventually queue.activeWorkers == 3
|
|
|
|
|
|
|
|
test "discards workers once processing completed":
|
|
|
|
proc processSlot(item: SlotQueueItem, done: Future[void]) {.async.} =
|
|
|
|
await sleepAsync(1.millis)
|
|
|
|
done.complete()
|
|
|
|
|
|
|
|
queue.onProcessSlot = processSlot
|
|
|
|
|
|
|
|
startQueue()
|
|
|
|
let item1 = SlotQueueItem.example
|
|
|
|
let item2 = SlotQueueItem.example
|
|
|
|
let item3 = SlotQueueItem.example
|
|
|
|
let item4 = SlotQueueItem.example
|
|
|
|
check (await queue.push(item1)).isOk # finishes after 1.millis
|
|
|
|
check (await queue.push(item2)).isOk # finishes after 1.millis
|
|
|
|
check (await queue.push(item3)).isOk # finishes after 1.millis
|
|
|
|
check (await queue.push(item4)).isOk
|
|
|
|
check eventually queue.activeWorkers == 1
|
|
|
|
|
|
|
|
suite "Slot queue":
|
|
|
|
|
|
|
|
var onProcessSlotCalled = false
|
|
|
|
var onProcessSlotCalledWith: seq[(RequestId, uint16)]
|
|
|
|
var repo: RepoStore
|
|
|
|
var repoDs: Datastore
|
|
|
|
var metaDs: SQLiteDatastore
|
|
|
|
var availability: Availability
|
|
|
|
var reservations: Reservations
|
|
|
|
var queue: SlotQueue
|
|
|
|
let maxWorkers = 2
|
|
|
|
var unpauseQueue: Future[void]
|
|
|
|
var paused: bool
|
|
|
|
|
|
|
|
proc newSlotQueue(maxSize, maxWorkers: int, processSlotDelay = 1.millis) =
|
|
|
|
queue = SlotQueue.new(reservations, maxWorkers, maxSize.uint16)
|
|
|
|
queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
|
|
|
|
await sleepAsync(processSlotDelay)
|
|
|
|
trace "processing item", requestId = item.requestId, slotIndex = item.slotIndex
|
|
|
|
onProcessSlotCalled = true
|
|
|
|
onProcessSlotCalledWith.add (item.requestId, item.slotIndex)
|
|
|
|
done.complete()
|
|
|
|
asyncSpawn queue.start()
|
|
|
|
|
|
|
|
setup:
|
|
|
|
onProcessSlotCalled = false
|
|
|
|
onProcessSlotCalledWith = @[]
|
|
|
|
let request = StorageRequest.example
|
|
|
|
repoDs = SQLiteDatastore.new(Memory).tryGet()
|
|
|
|
metaDs = SQLiteDatastore.new(Memory).tryGet()
|
|
|
|
let quota = request.ask.slotSize.truncate(uint) * 100 + 1
|
|
|
|
repo = RepoStore.new(repoDs, metaDs, quotaMaxBytes = quota)
|
|
|
|
reservations = Reservations.new(repo)
|
|
|
|
# create an availability that should always match
|
|
|
|
availability = Availability.init(
|
|
|
|
size = request.ask.slotSize * 100,
|
|
|
|
duration = request.ask.duration * 100,
|
|
|
|
minPrice = request.ask.pricePerSlot div 100,
|
|
|
|
maxCollateral = request.ask.collateral * 100
|
|
|
|
)
|
|
|
|
discard await reservations.reserve(availability)
|
|
|
|
|
|
|
|
teardown:
|
|
|
|
paused = false
|
|
|
|
|
|
|
|
await queue.stop()
|
|
|
|
|
|
|
|
test "starts out empty":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
check queue.len == 0
|
|
|
|
check $queue == "[]"
|
|
|
|
|
|
|
|
test "reports correct size":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
check queue.size == 2
|
|
|
|
|
|
|
|
test "correctly compares SlotQueueItems":
|
|
|
|
var requestA = StorageRequest.example
|
|
|
|
requestA.ask.duration = 1.u256
|
|
|
|
requestA.ask.reward = 1.u256
|
|
|
|
check requestA.ask.pricePerSlot == 1.u256
|
|
|
|
requestA.ask.collateral = 100000.u256
|
|
|
|
requestA.expiry = 1001.u256
|
|
|
|
|
|
|
|
var requestB = StorageRequest.example
|
|
|
|
requestB.ask.duration = 100.u256
|
|
|
|
requestB.ask.reward = 1000.u256
|
|
|
|
check requestB.ask.pricePerSlot == 100000.u256
|
|
|
|
requestB.ask.collateral = 1.u256
|
|
|
|
requestB.expiry = 1000.u256
|
|
|
|
|
|
|
|
let itemA = SlotQueueItem.init(requestA, 0)
|
|
|
|
let itemB = SlotQueueItem.init(requestB, 0)
|
|
|
|
check itemB < itemA # B higher priority than A
|
|
|
|
check itemA > itemB
|
|
|
|
|
|
|
|
test "expands available all possible slot indices on init":
|
|
|
|
let request = StorageRequest.example
|
|
|
|
let items = SlotQueueItem.init(request)
|
|
|
|
check items.len.uint64 == request.ask.slots
|
|
|
|
var checked = 0
|
|
|
|
for slotIndex in 0'u16..<request.ask.slots.uint16:
|
|
|
|
check items.anyIt(it == SlotQueueItem.init(request, slotIndex))
|
|
|
|
inc checked
|
|
|
|
check checked == items.len
|
|
|
|
|
|
|
|
test "can process items":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
let item1 = SlotQueueItem.example
|
|
|
|
let item2 = SlotQueueItem.example
|
|
|
|
check (await queue.push(item1)).isOk
|
|
|
|
check (await queue.push(item2)).isOk
|
|
|
|
check eventually onProcessSlotCalledWith == @[
|
|
|
|
(item1.requestId, item1.slotIndex),
|
|
|
|
(item2.requestId, item2.slotIndex)
|
|
|
|
]
|
|
|
|
|
|
|
|
test "can push items past number of maxWorkers":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
let item0 = SlotQueueItem.example
|
|
|
|
let item1 = SlotQueueItem.example
|
|
|
|
let item2 = SlotQueueItem.example
|
|
|
|
let item3 = SlotQueueItem.example
|
|
|
|
let item4 = SlotQueueItem.example
|
|
|
|
check isOk (await queue.push(item0))
|
|
|
|
check isOk (await queue.push(item1))
|
|
|
|
check isOk (await queue.push(item2))
|
|
|
|
check isOk (await queue.push(item3))
|
|
|
|
check isOk (await queue.push(item4))
|
|
|
|
|
|
|
|
test "populates item with exisiting request metadata":
|
|
|
|
newSlotQueue(maxSize = 8, maxWorkers = 1, processSlotDelay = 10.millis)
|
|
|
|
let request0 = StorageRequest.example
|
|
|
|
var request1 = StorageRequest.example
|
|
|
|
request1.ask.collateral += 1.u256
|
|
|
|
let items0 = SlotQueueItem.init(request0)
|
|
|
|
let items1 = SlotQueueItem.init(request1)
|
|
|
|
check (await queue.push(items0)).isOk
|
|
|
|
check (await queue.push(items1)).isOk
|
|
|
|
let populated = !queue.populateItem(request1.id, 12'u16)
|
|
|
|
check populated.requestId == request1.id
|
|
|
|
check populated.slotIndex == 12'u16
|
|
|
|
check populated.slotSize == request1.ask.slotSize
|
|
|
|
check populated.duration == request1.ask.duration
|
|
|
|
check populated.reward == request1.ask.reward
|
|
|
|
check populated.collateral == request1.ask.collateral
|
|
|
|
|
|
|
|
test "does not find exisiting request metadata":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
let item = SlotQueueItem.example
|
|
|
|
check queue.populateItem(item.requestId, 12'u16).isNone
|
|
|
|
|
|
|
|
test "can support uint16.high slots":
|
|
|
|
var request = StorageRequest.example
|
|
|
|
let maxUInt16 = uint16.high
|
|
|
|
let uint64Slots = uint64(maxUInt16)
|
|
|
|
request.ask.slots = uint64Slots
|
|
|
|
let items = SlotQueueItem.init(request.id, request.ask, request.expiry)
|
|
|
|
check items.len.uint16 == maxUInt16
|
|
|
|
|
|
|
|
test "cannot support greater than uint16.high slots":
|
|
|
|
var request = StorageRequest.example
|
|
|
|
let int32Slots = uint16.high.int32 + 1
|
|
|
|
let uint64Slots = uint64(int32Slots)
|
|
|
|
request.ask.slots = uint64Slots
|
|
|
|
expect SlotsOutOfRangeError:
|
|
|
|
discard SlotQueueItem.init(request.id, request.ask, request.expiry)
|
|
|
|
|
|
|
|
test "cannot push duplicate items":
|
|
|
|
newSlotQueue(maxSize = 6, maxWorkers = 1, processSlotDelay = 15.millis)
|
|
|
|
let item0 = SlotQueueItem.example
|
|
|
|
let item1 = SlotQueueItem.example
|
|
|
|
let item2 = SlotQueueItem.example
|
|
|
|
check isOk (await queue.push(item0))
|
|
|
|
check isOk (await queue.push(item1))
|
|
|
|
check (await queue.push(@[item2, item2, item2, item2])).error of SlotQueueItemExistsError
|
|
|
|
|
|
|
|
test "can add items past max maxSize":
|
|
|
|
newSlotQueue(maxSize = 4, maxWorkers = 2, processSlotDelay = 10.millis)
|
|
|
|
let item1 = SlotQueueItem.example
|
|
|
|
let item2 = SlotQueueItem.example
|
|
|
|
let item3 = SlotQueueItem.example
|
|
|
|
let item4 = SlotQueueItem.example
|
|
|
|
check (await queue.push(item1)).isOk
|
|
|
|
check (await queue.push(item2)).isOk
|
|
|
|
check (await queue.push(item3)).isOk
|
|
|
|
check (await queue.push(item4)).isOk
|
|
|
|
check eventually onProcessSlotCalledWith.len == 4
|
|
|
|
|
|
|
|
test "can delete items":
|
|
|
|
newSlotQueue(maxSize = 6, maxWorkers = 2, processSlotDelay = 10.millis)
|
|
|
|
let item0 = SlotQueueItem.example
|
|
|
|
let item1 = SlotQueueItem.example
|
|
|
|
let item2 = SlotQueueItem.example
|
|
|
|
let item3 = SlotQueueItem.example
|
|
|
|
check (await queue.push(item0)).isOk
|
|
|
|
check (await queue.push(item1)).isOk
|
|
|
|
check (await queue.push(item2)).isOk
|
|
|
|
check (await queue.push(item3)).isOk
|
|
|
|
queue.delete(item3)
|
|
|
|
check not queue.contains(item3)
|
|
|
|
|
|
|
|
test "can delete item by request id and slot id":
|
|
|
|
newSlotQueue(maxSize = 8, maxWorkers = 1, processSlotDelay = 10.millis)
|
|
|
|
let request0 = StorageRequest.example
|
|
|
|
var request1 = StorageRequest.example
|
|
|
|
request1.ask.collateral += 1.u256
|
|
|
|
let items0 = SlotQueueItem.init(request0)
|
|
|
|
let items1 = SlotQueueItem.init(request1)
|
|
|
|
check (await queue.push(items0)).isOk
|
|
|
|
check (await queue.push(items1)).isOk
|
|
|
|
let last = items1[items1.high]
|
|
|
|
check eventually queue.contains(last)
|
|
|
|
queue.delete(last.requestId, last.slotIndex)
|
|
|
|
check not onProcessSlotCalledWith.anyIt(
|
|
|
|
it == (last.requestId, last.slotIndex)
|
|
|
|
)
|
|
|
|
|
|
|
|
test "can delete all items by request id":
|
|
|
|
newSlotQueue(maxSize = 8, maxWorkers = 1, processSlotDelay = 10.millis)
|
|
|
|
let request0 = StorageRequest.example
|
|
|
|
var request1 = StorageRequest.example
|
|
|
|
request1.ask.collateral += 1.u256
|
|
|
|
let items0 = SlotQueueItem.init(request0)
|
|
|
|
let items1 = SlotQueueItem.init(request1)
|
|
|
|
check (await queue.push(items0)).isOk
|
|
|
|
check (await queue.push(items1)).isOk
|
|
|
|
queue.delete(request1.id)
|
|
|
|
check not onProcessSlotCalledWith.anyIt(it[0] == request1.id)
|
|
|
|
|
|
|
|
test "can check if contains item":
|
|
|
|
newSlotQueue(maxSize = 6, maxWorkers = 1, processSlotDelay = 10.millis)
|
|
|
|
let request0 = StorageRequest.example
|
|
|
|
var request1 = StorageRequest.example
|
|
|
|
var request2 = StorageRequest.example
|
|
|
|
var request3 = StorageRequest.example
|
|
|
|
var request4 = StorageRequest.example
|
|
|
|
var request5 = StorageRequest.example
|
|
|
|
request1.ask.collateral = request0.ask.collateral + 1
|
|
|
|
request2.ask.collateral = request1.ask.collateral + 1
|
|
|
|
request3.ask.collateral = request2.ask.collateral + 1
|
|
|
|
request4.ask.collateral = request3.ask.collateral + 1
|
|
|
|
request5.ask.collateral = request4.ask.collateral + 1
|
|
|
|
let item0 = SlotQueueItem.init(request0, 0)
|
|
|
|
let item1 = SlotQueueItem.init(request1, 0)
|
|
|
|
let item2 = SlotQueueItem.init(request2, 0)
|
|
|
|
let item3 = SlotQueueItem.init(request3, 0)
|
|
|
|
let item4 = SlotQueueItem.init(request4, 0)
|
|
|
|
let item5 = SlotQueueItem.init(request5, 0)
|
|
|
|
check queue.contains(item5) == false
|
|
|
|
check (await queue.push(@[item0, item1, item2, item3, item4, item5])).isOk
|
|
|
|
check queue.contains(item5)
|
|
|
|
|
|
|
|
test "sorts items by profitability ascending (higher pricePerSlot = higher priority)":
|
|
|
|
var request = StorageRequest.example
|
|
|
|
let item0 = SlotQueueItem.init(request, 0)
|
|
|
|
request.ask.reward += 1.u256
|
|
|
|
let item1 = SlotQueueItem.init(request, 1)
|
|
|
|
check item1 < item0
|
|
|
|
|
|
|
|
test "sorts items by collateral ascending (less required collateral = higher priority)":
|
|
|
|
var request = StorageRequest.example
|
|
|
|
let item0 = SlotQueueItem.init(request, 0)
|
|
|
|
request.ask.collateral -= 1.u256
|
|
|
|
let item1 = SlotQueueItem.init(request, 1)
|
|
|
|
check item1 < item0
|
|
|
|
|
|
|
|
test "sorts items by expiry descending (longer expiry = higher priority)":
|
|
|
|
var request = StorageRequest.example
|
|
|
|
let item0 = SlotQueueItem.init(request, 0)
|
|
|
|
request.expiry += 1.u256
|
|
|
|
let item1 = SlotQueueItem.init(request, 1)
|
|
|
|
check item1 < item0
|
|
|
|
|
|
|
|
test "sorts items by slot size ascending (smaller dataset = higher priority)":
|
|
|
|
var request = StorageRequest.example
|
|
|
|
let item0 = SlotQueueItem.init(request, 0)
|
|
|
|
request.ask.slotSize -= 1.u256
|
|
|
|
let item1 = SlotQueueItem.init(request, 1)
|
|
|
|
check item1 < item0
|
|
|
|
|
|
|
|
test "should call callback once an item is added":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
let item = SlotQueueItem.example
|
|
|
|
check not onProcessSlotCalled
|
|
|
|
check (await queue.push(item)).isOk
|
|
|
|
check eventually onProcessSlotCalled
|
|
|
|
|
|
|
|
test "should only process item once":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
let item = SlotQueueItem.example
|
|
|
|
check (await queue.push(item)).isOk
|
|
|
|
check eventually onProcessSlotCalledWith == @[
|
|
|
|
(item.requestId, item.slotIndex)
|
|
|
|
]
|
|
|
|
|
|
|
|
test "should process items in correct order":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
# sleeping after push allows the slotqueue loop to iterate,
|
|
|
|
# calling the callback for each pushed/updated item
|
|
|
|
var request = StorageRequest.example
|
|
|
|
let item0 = SlotQueueItem.init(request, 0)
|
|
|
|
request.ask.reward += 1.u256
|
|
|
|
let item1 = SlotQueueItem.init(request, 1)
|
|
|
|
request.ask.reward += 1.u256
|
|
|
|
let item2 = SlotQueueItem.init(request, 2)
|
|
|
|
request.ask.reward += 1.u256
|
|
|
|
let item3 = SlotQueueItem.init(request, 3)
|
|
|
|
|
|
|
|
check (await queue.push(item0)).isOk
|
|
|
|
await sleepAsync(1.millis)
|
|
|
|
check (await queue.push(item1)).isOk
|
|
|
|
await sleepAsync(1.millis)
|
|
|
|
check (await queue.push(item2)).isOk
|
|
|
|
await sleepAsync(1.millis)
|
|
|
|
check (await queue.push(item3)).isOk
|
|
|
|
|
|
|
|
check eventually (
|
|
|
|
onProcessSlotCalledWith == @[
|
|
|
|
(item0.requestId, item0.slotIndex),
|
|
|
|
(item1.requestId, item1.slotIndex),
|
|
|
|
(item2.requestId, item2.slotIndex),
|
|
|
|
(item3.requestId, item3.slotIndex),
|
|
|
|
]
|
|
|
|
)
|
|
|
|
|
|
|
|
test "fails to push when there's no matching availability":
|
|
|
|
newSlotQueue(maxSize = 2, maxWorkers = 2)
|
|
|
|
discard await reservations.release(availability.id,
|
|
|
|
availability.size.truncate(uint))
|
|
|
|
|
|
|
|
let item = SlotQueueItem.example
|
|
|
|
check (await queue.push(item)).error of NoMatchingAvailabilityError
|