Slava 484124db09
Release v0.1.4 (#912)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test covering download of a verifiable dataset after creating a request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request API returns an error (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests needed to defer building the CLI params until just before starting the node; otherwise, adding params ad-hoc would fail whenever a required parameter was added after a non-required one.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of withdrawn client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
2024-09-24 13:19:58 +03:00


import std/sequtils
import std/random
import std/algorithm
import pkg/stew/byteutils
import pkg/chronos
import pkg/libp2p/errors
import pkg/libp2p/routing_record
import pkg/codexdht/discv5/protocol as discv5
import pkg/codex/rng
import pkg/codex/blockexchange
import pkg/codex/stores
import pkg/codex/chunker
import pkg/codex/discovery
import pkg/codex/blocktype
import pkg/codex/utils/asyncheapqueue
import pkg/codex/manifest
import ../../../asynctest
import ../../helpers
import ../../examples
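
# Unit tests for the block exchange (NetworkStore) engine: want-list exchange
# with new peers, handler behaviour for incoming messages, and per-peer task handling.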
asyncchecksuite "NetworkStore engine basic":
var
rng: Rng
seckey: PrivateKey
peerId: PeerId
chunker: Chunker
wallet: WalletRef
blockDiscovery: Discovery
peerStore: PeerCtxStore
pendingBlocks: PendingBlocksManager
blocks: seq[Block]
done: Future[void]
setup:
rng = Rng.instance()
seckey = PrivateKey.random(rng[]).tryGet()
peerId = PeerId.init(seckey.getPublicKey().tryGet()).tryGet()
chunker = RandomChunker.new(Rng.instance(), size = 1024'nb, chunkSize = 256'nb)
wallet = WalletRef.example
blockDiscovery = Discovery.new()
peerStore = PeerCtxStore.new()
pendingBlocks = PendingBlocksManager.new()
while true:
let chunk = await chunker.getBytes()
if chunk.len <= 0:
break
blocks.add(Block.new(chunk).tryGet())
done = newFuture[void]()
test "Should send want list to new peers":
proc sendWantList(
id: PeerId,
addresses: seq[BlockAddress],
priority: int32 = 0,
cancel: bool = false,
wantType: WantType = WantType.WantHave,
full: bool = false,
sendDontHave: bool = false) {.gcsafe, async.} =
check addresses.mapIt($it.cidOrTreeCid).sorted == blocks.mapIt( $it.cid ).sorted
done.complete()
let
network = BlockExcNetwork(request: BlockExcRequest(
sendWantList: sendWantList,
))
localStore = CacheStore.new(blocks.mapIt( it ))
discovery = DiscoveryEngine.new(
localStore,
peerStore,
network,
blockDiscovery,
pendingBlocks)
advertiser = Advertiser.new(
localStore,
blockDiscovery
)
engine = BlockExcEngine.new(
localStore,
wallet,
network,
discovery,
advertiser,
peerStore,
pendingBlocks)
for b in blocks:
discard engine.pendingBlocks.getWantHandle(b.cid)
await engine.setupPeer(peerId)
await done.wait(100.millis)
test "Should send account to new peers":
let pricing = Pricing.example
proc sendAccount(peer: PeerId, account: Account) {.gcsafe, async.} =
check account.address == pricing.address
done.complete()
let
network = BlockExcNetwork(
request: BlockExcRequest(
sendAccount: sendAccount
))
localStore = CacheStore.new()
discovery = DiscoveryEngine.new(
localStore,
peerStore,
network,
blockDiscovery,
pendingBlocks)
advertiser = Advertiser.new(
localStore,
blockDiscovery
)
engine = BlockExcEngine.new(
localStore,
wallet,
network,
discovery,
advertiser,
peerStore,
pendingBlocks)
engine.pricing = pricing.some
await engine.setupPeer(peerId)
await done.wait(100.millis)
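
# Handler tests: how the engine reacts to incoming want lists, block
# deliveries, presence updates, payments and cancellations from a peer.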
asyncchecksuite "NetworkStore engine handlers":
var
rng: Rng
seckey: PrivateKey
peerId: PeerId
chunker: Chunker
wallet: WalletRef
blockDiscovery: Discovery
peerStore: PeerCtxStore
pendingBlocks: PendingBlocksManager
network: BlockExcNetwork
engine: BlockExcEngine
discovery: DiscoveryEngine
advertiser: Advertiser
peerCtx: BlockExcPeerCtx
localStore: BlockStore
blocks: seq[Block]
const NopSendWantCancellationsProc = proc(
id: PeerId,
addresses: seq[BlockAddress]
) {.gcsafe, async.} = discard
setup:
rng = Rng.instance()
chunker = RandomChunker.new(rng, size = 1024'nb, chunkSize = 256'nb)
while true:
let chunk = await chunker.getBytes()
if chunk.len <= 0:
break
blocks.add(Block.new(chunk).tryGet())
seckey = PrivateKey.random(rng[]).tryGet()
peerId = PeerId.init(seckey.getPublicKey().tryGet()).tryGet()
wallet = WalletRef.example
blockDiscovery = Discovery.new()
peerStore = PeerCtxStore.new()
pendingBlocks = PendingBlocksManager.new()
localStore = CacheStore.new()
network = BlockExcNetwork()
discovery = DiscoveryEngine.new(
localStore,
peerStore,
network,
blockDiscovery,
pendingBlocks)
advertiser = Advertiser.new(
localStore,
blockDiscovery
)
engine = BlockExcEngine.new(
localStore,
wallet,
network,
discovery,
advertiser,
peerStore,
pendingBlocks)
peerCtx = BlockExcPeerCtx(
id: peerId
)
engine.peers.add(peerCtx)
test "Should schedule block requests":
let
wantList = makeWantList(
blocks.mapIt( it.cid ),
wantType = WantType.WantBlock) # only `wantBlock` are stored in `peerWants`
proc handler() {.async.} =
let ctx = await engine.taskQueue.pop()
check ctx.id == peerId
# only `wantBlock` scheduled
check ctx.peerWants.mapIt( it.address.cidOrTreeCid ) == blocks.mapIt( it.cid )
let done = handler()
await engine.wantListHandler(peerId, wantList)
await done
test "Should handle want list":
let
done = newFuture[void]()
wantList = makeWantList(blocks.mapIt( it.cid ))
proc sendPresence(peerId: PeerId, presence: seq[BlockPresence]) {.gcsafe, async.} =
check presence.mapIt( it.address ) == wantList.entries.mapIt( it.address )
done.complete()
engine.network = BlockExcNetwork(
request: BlockExcRequest(
sendPresence: sendPresence
))
await allFuturesThrowing(
allFinished(blocks.mapIt( localStore.putBlock(it) )))
await engine.wantListHandler(peerId, wantList)
await done
test "Should handle want list - `dont-have`":
let
done = newFuture[void]()
wantList = makeWantList(
blocks.mapIt( it.cid ),
sendDontHave = true)
proc sendPresence(peerId: PeerId, presence: seq[BlockPresence]) {.gcsafe, async.} =
check presence.mapIt( it.address ) == wantList.entries.mapIt( it.address )
for p in presence:
check:
p.`type` == BlockPresenceType.DontHave
done.complete()
engine.network = BlockExcNetwork(request: BlockExcRequest(
sendPresence: sendPresence
))
await engine.wantListHandler(peerId, wantList)
await done
test "Should handle want list - `dont-have` some blocks":
let
done = newFuture[void]()
wantList = makeWantList(
blocks.mapIt( it.cid ),
sendDontHave = true)
proc sendPresence(peerId: PeerId, presence: seq[BlockPresence]) {.gcsafe, async.} =
for p in presence:
if p.address.cidOrTreeCid != blocks[0].cid and p.address.cidOrTreeCid != blocks[1].cid:
check p.`type` == BlockPresenceType.DontHave
else:
check p.`type` == BlockPresenceType.Have
done.complete()
engine.network = BlockExcNetwork(
request: BlockExcRequest(
sendPresence: sendPresence
))
(await engine.localStore.putBlock(blocks[0])).tryGet()
(await engine.localStore.putBlock(blocks[1])).tryGet()
await engine.wantListHandler(peerId, wantList)
await done
test "Should store blocks in local store":
let pending = blocks.mapIt(
engine.pendingBlocks.getWantHandle( it.cid )
)
let blocksDelivery = blocks.mapIt(BlockDelivery(blk: it, address: it.address))
# Install NOP for want list cancellations so they don't cause a crash
engine.network = BlockExcNetwork(
request: BlockExcRequest(sendWantCancellations: NopSendWantCancellationsProc))
await engine.blocksDeliveryHandler(peerId, blocksDelivery)
let resolved = await allFinished(pending)
check resolved.mapIt( it.read ) == blocks
for b in blocks:
let present = await engine.localStore.hasBlock(b.cid)
check present.tryGet()
test "Should send payments for received blocks":
let
done = newFuture[void]()
account = Account(address: EthAddress.example)
peerContext = peerStore.get(peerId)
peerContext.account = account.some
peerContext.blocks = blocks.mapIt(
(it.address, Presence(address: it.address, price: rand(uint16).u256))
).toTable
engine.network = BlockExcNetwork(
request: BlockExcRequest(
sendPayment: proc(receiver: PeerId, payment: SignedState) {.gcsafe, async.} =
let
amount =
blocks.mapIt(
peerContext.blocks[it.address].price
).foldl(a + b)
balances = !payment.state.outcome.balances(Asset)
check receiver == peerId
check balances[account.address.toDestination] == amount
done.complete(),
# Install NOP for want list cancellations so they don't cause a crash
sendWantCancellations: NopSendWantCancellationsProc
))
await engine.blocksDeliveryHandler(peerId, blocks.mapIt(
BlockDelivery(blk: it, address: it.address)))
await done.wait(100.millis)
test "Should handle block presence":
var
handles: Table[Cid, Future[Block]]
proc sendWantList(
id: PeerId,
addresses: seq[BlockAddress],
priority: int32 = 0,
cancel: bool = false,
wantType: WantType = WantType.WantHave,
full: bool = false,
sendDontHave: bool = false) {.gcsafe, async.} =
engine.pendingBlocks.resolve(blocks
.filterIt( it.address in addresses )
.mapIt(BlockDelivery(blk: it, address: it.address)))
engine.network = BlockExcNetwork(
request: BlockExcRequest(
sendWantList: sendWantList
))
# only Cids in peer want lists are requested
handles = blocks.mapIt(
(it.cid, engine.pendingBlocks.getWantHandle( it.cid ))).toTable
let price = UInt256.example
await engine.blockPresenceHandler(
peerId,
blocks.mapIt(
PresenceMessage.init(
Presence(
address: it.address,
have: true,
price: price
))))
for a in blocks.mapIt(it.address):
check a in peerCtx.peerHave
check peerCtx.blocks[a].price == price
test "Should send cancellations for received blocks":
let
pending = blocks.mapIt(engine.pendingBlocks.getWantHandle(it.cid))
blocksDelivery = blocks.mapIt(BlockDelivery(blk: it, address: it.address))
cancellations = newTable(
blocks.mapIt((it.address, newFuture[void]())).toSeq
)
proc sendWantCancellations(
id: PeerId,
addresses: seq[BlockAddress]
) {.gcsafe, async.} =
for address in addresses:
cancellations[address].complete()
engine.network = BlockExcNetwork(
request: BlockExcRequest(
sendWantCancellations: sendWantCancellations
))
await engine.blocksDeliveryHandler(peerId, blocksDelivery)
discard await allFinished(pending)
await allFuturesThrowing(cancellations.values().toSeq)
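
# Task handler tests: ordering of want-block responses by priority,
# in-flight tracking for outgoing blocks, and presence responses.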
asyncchecksuite "Task Handler":
var
rng: Rng
seckey: PrivateKey
peerId: PeerId
chunker: Chunker
wallet: WalletRef
blockDiscovery: Discovery
peerStore: PeerCtxStore
pendingBlocks: PendingBlocksManager
network: BlockExcNetwork
engine: BlockExcEngine
discovery: DiscoveryEngine
advertiser: Advertiser
localStore: BlockStore
peersCtx: seq[BlockExcPeerCtx]
peers: seq[PeerId]
blocks: seq[Block]
setup:
rng = Rng.instance()
chunker = RandomChunker.new(rng, size = 1024, chunkSize = 256'nb)
while true:
let chunk = await chunker.getBytes()
if chunk.len <= 0:
break
blocks.add(Block.new(chunk).tryGet())
seckey = PrivateKey.random(rng[]).tryGet()
peerId = PeerId.init(seckey.getPublicKey().tryGet()).tryGet()
wallet = WalletRef.example
blockDiscovery = Discovery.new()
peerStore = PeerCtxStore.new()
pendingBlocks = PendingBlocksManager.new()
localStore = CacheStore.new()
network = BlockExcNetwork()
discovery = DiscoveryEngine.new(
localStore,
peerStore,
network,
blockDiscovery,
pendingBlocks)
advertiser = Advertiser.new(
localStore,
blockDiscovery
)
engine = BlockExcEngine.new(
localStore,
wallet,
network,
discovery,
advertiser,
peerStore,
pendingBlocks)
peersCtx = @[]
for i in 0..3:
let seckey = PrivateKey.random(rng[]).tryGet()
peers.add(PeerId.init(seckey.getPublicKey().tryGet()).tryGet())
peersCtx.add(BlockExcPeerCtx(
id: peers[i]
))
peerStore.add(peersCtx[i])
engine.pricing = Pricing.example.some
test "Should send want-blocks in priority order":
proc sendBlocksDelivery(
id: PeerId,
blocksDelivery: seq[BlockDelivery]) {.gcsafe, async.} =
check blocksDelivery.len == 2
check:
blocksDelivery[1].address == blocks[0].address
blocksDelivery[0].address == blocks[1].address
for blk in blocks:
(await engine.localStore.putBlock(blk)).tryGet()
engine.network.request.sendBlocksDelivery = sendBlocksDelivery
# second block to send by priority
peersCtx[0].peerWants.add(
WantListEntry(
address: blocks[0].address,
priority: 49,
cancel: false,
wantType: WantType.WantBlock,
sendDontHave: false)
)
# first block to send by priority
peersCtx[0].peerWants.add(
WantListEntry(
address: blocks[1].address,
priority: 50,
cancel: false,
wantType: WantType.WantBlock,
sendDontHave: false)
)
await engine.taskHandler(peersCtx[0])
test "Should set in-flight for outgoing blocks":
proc sendBlocksDelivery(
id: PeerId,
blocksDelivery: seq[BlockDelivery]) {.gcsafe, async.} =
check peersCtx[0].peerWants[0].inFlight
for blk in blocks:
(await engine.localStore.putBlock(blk)).tryGet()
engine.network.request.sendBlocksDelivery = sendBlocksDelivery
peersCtx[0].peerWants.add(WantListEntry(
address: blocks[0].address,
priority: 50,
cancel: false,
wantType: WantType.WantBlock,
sendDontHave: false,
inFlight: false)
)
await engine.taskHandler(peersCtx[0])
test "Should clear in-flight when local lookup fails":
peersCtx[0].peerWants.add(WantListEntry(
address: blocks[0].address,
priority: 50,
cancel: false,
wantType: WantType.WantBlock,
sendDontHave: false,
inFlight: false)
)
await engine.taskHandler(peersCtx[0])
check not peersCtx[0].peerWants[0].inFlight
test "Should send presence":
let present = blocks
let missing = @[Block.new("missing".toBytes).tryGet()]
let price = (!engine.pricing).price
proc sendPresence(id: PeerId, presence: seq[BlockPresence]) {.gcsafe, async.} =
check presence.mapIt(!Presence.init(it)) == @[
Presence(address: present[0].address, have: true, price: price),
Presence(address: present[1].address, have: true, price: price),
Presence(address: missing[0].address, have: false)
]
for blk in blocks:
(await engine.localStore.putBlock(blk)).tryGet()
engine.network.request.sendPresence = sendPresence
# have block
peersCtx[0].peerWants.add(
WantListEntry(
address: present[0].address,
priority: 1,
cancel: false,
wantType: WantType.WantHave,
sendDontHave: false)
)
# have block
peersCtx[0].peerWants.add(
WantListEntry(
address: present[1].address,
priority: 1,
cancel: false,
wantType: WantType.WantHave,
sendDontHave: false)
)
# don't have block
peersCtx[0].peerWants.add(
WantListEntry(
address: missing[0].address,
priority: 1,
cancel: false,
wantType: WantType.WantHave,
sendDontHave: false)
)
await engine.taskHandler(peersCtx[0])