## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import std/sequtils
import std/sets
import std/options
import std/algorithm
import std/sugar
import pkg/chronos
import pkg/libp2p/[cid, switch, multihash, multicodec]
import pkg/metrics
import pkg/stint
import pkg/questionable
import ../../stores/blockstore
import ../../blocktype
import ../../utils
import ../../merkletree
import ../../logutils
import ../../manifest
import ../protobuf/blockexc
import ../protobuf/presence
import ../network
import ../peers
import ./payments
import ./discovery
import ./advertiser
import ./pendingblocks
export peers, pendingblocks, payments, discovery

logScope:
  topics = "codex blockexcengine"

declareCounter(codex_block_exchange_want_have_lists_sent, "codex blockexchange wantHave lists sent")
declareCounter(codex_block_exchange_want_have_lists_received, "codex blockexchange wantHave lists received")
declareCounter(codex_block_exchange_want_block_lists_sent, "codex blockexchange wantBlock lists sent")
declareCounter(codex_block_exchange_want_block_lists_received, "codex blockexchange wantBlock lists received")
declareCounter(codex_block_exchange_blocks_sent, "codex blockexchange blocks sent")
declareCounter(codex_block_exchange_blocks_received, "codex blockexchange blocks received")

const
  DefaultMaxPeersPerRequest* = 10
  DefaultTaskQueueSize = 100
  DefaultConcurrentTasks = 10
  # DefaultMaxRetries = 3
  # DefaultConcurrentDiscRequests = 10
  # DefaultConcurrentAdvertRequests = 10
  # DefaultDiscoveryTimeout = 1.minutes
  # DefaultMaxQueriedBlocksCache = 1000
  # DefaultMinPeersPerBlock = 3

type
  TaskHandler* = proc(task: BlockExcPeerCtx): Future[void] {.gcsafe.}
  TaskScheduler* = proc(task: BlockExcPeerCtx): bool {.gcsafe.}

  BlockExcEngine* = ref object of RootObj
    localStore*: BlockStore                       # Local block store for this instance
    network*: BlockExcNetwork                     # Network interface
    peers*: PeerCtxStore                          # Peers we're currently actively exchanging with
    taskQueue*: AsyncHeapQueue[BlockExcPeerCtx]   # Peers we're currently processing tasks for
    concurrentTasks: int                          # Number of concurrent peers we're serving at any given time
    blockexcTasks: seq[Future[void]]              # Futures that control the blockexc tasks
    blockexcRunning: bool                         # Indicates if the blockexc task is running
    pendingBlocks*: PendingBlocksManager          # Blocks we're awaiting to be resolved
    peersPerRequest: int                          # Max number of peers to request from
    wallet*: WalletRef                            # Nitro wallet for micropayments
    pricing*: ?Pricing                            # Optional bandwidth pricing
    blockFetchTimeout*: Duration                  # Timeout for fetching blocks over the network
    discovery*: DiscoveryEngine
    advertiser*: Advertiser

  Pricing* = object
    address*: EthAddress
    price*: UInt256

# attach task scheduler to engine
proc scheduleTask(b: BlockExcEngine, task: BlockExcPeerCtx): bool {.gcsafe.} =
  b.taskQueue.pushOrUpdateNoWait(task).isOk()

proc blockexcTaskRunner(b: BlockExcEngine): Future[void] {.gcsafe.}

proc start*(b: BlockExcEngine) {.async.} =
  ## Start the blockexc task
  ##

  await b.discovery.start()
  await b.advertiser.start()

  trace "Blockexc starting with concurrent tasks", tasks = b.concurrentTasks
  if b.blockexcRunning:
    warn "Starting blockexc twice"
    return

  b.blockexcRunning = true
  for i in 0..<b.concurrentTasks:
    b.blockexcTasks.add(blockexcTaskRunner(b))

proc stop*(b: BlockExcEngine) {.async.} =
  ## Stop the blockexc task
  ##

  await b.discovery.stop()
  await b.advertiser.stop()

  trace "NetworkStore stop"
  if not b.blockexcRunning:
    warn "Stopping blockexc without starting it"
    return

  b.blockexcRunning = false
  for task in b.blockexcTasks:
    if not task.finished:
      trace "Awaiting task to stop"
      await task.cancelAndWait()
      trace "Task stopped"

  trace "NetworkStore stopped"
proc sendWantHave(
  b: BlockExcEngine,
  address: BlockAddress, # pluralize this entire call chain, please
  excluded: seq[BlockExcPeerCtx],
  peers: seq[BlockExcPeerCtx]): Future[void] {.async.} =
  trace "Sending wantHave request to peers", address
  for p in peers:
    if p notin excluded:
      if address notin p.peerHave:
        await b.network.request.sendWantList(
          p.id,
          @[address],
          wantType = WantType.WantHave) # we only want to know if the peer has the block

proc sendWantBlock(
  b: BlockExcEngine,
  address: BlockAddress, # pluralize this entire call chain, please
  blockPeer: BlockExcPeerCtx): Future[void] {.async.} =
  trace "Sending wantBlock request to", peer = blockPeer.id, address
  await b.network.request.sendWantList(
    blockPeer.id,
    @[address],
    wantType = WantType.WantBlock) # we want this remote to send us a block
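
# Watches a pending block handle; if the request fails with anything other
# than a cancellation, the offending peer is disconnected and discovery is
# re-queued so the block can be found elsewhere.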
proc monitorBlockHandle(
  b: BlockExcEngine,
  handle: Future[Block],
  address: BlockAddress,
  peerId: PeerId) {.async.} =
  try:
    discard await handle
  except CancelledError as exc:
    trace "Block handle cancelled", address, peerId
  except CatchableError as exc:
    warn "Error block handle, disconnecting peer", address, exc = exc.msg, peerId

    # TODO: really, this is just a quick and dirty way of
    # preventing hitting the same "bad" peer every time, however,
    # we might as well discover this on our next iteration, so
    # it doesn't mean that we're never talking to this peer again.
    # TODO: we need a lot more work around peer selection and
    # prioritization

    # drop unresponsive peer
    await b.network.switch.disconnect(peerId)
    b.discovery.queueFindBlocksReq(@[address.cidOrTreeCid])
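
# Resolves a single block: registers a want handle, picks the cheapest known
# peer (falling back to a hash-based pick among all peers), sends a wantBlock
# to it and a wantHave to everyone else, and queues discovery when no peer
# is known to have the block.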
proc requestBlock*(
  b: BlockExcEngine,
  address: BlockAddress,
): Future[?!Block] {.async.} =
  let blockFuture = b.pendingBlocks.getWantHandle(address, b.blockFetchTimeout)

  if not b.pendingBlocks.isInFlight(address):
    let peers = b.peers.selectCheapest(address)
    if peers.len == 0:
      b.discovery.queueFindBlocksReq(@[address.cidOrTreeCid])

    let maybePeer =
      if peers.len > 0:
        peers[hash(address) mod peers.len].some
      elif b.peers.len > 0:
        toSeq(b.peers)[hash(address) mod b.peers.len].some
      else:
        BlockExcPeerCtx.none

    if peer =? maybePeer:
      asyncSpawn b.monitorBlockHandle(blockFuture, address, peer.id)
      b.pendingBlocks.setInFlight(address)
      await b.sendWantBlock(address, peer)
      codex_block_exchange_want_block_lists_sent.inc()
      await b.sendWantHave(address, @[peer], toSeq(b.peers))
      codex_block_exchange_want_have_lists_sent.inc()

  # Don't let timeouts bubble up. We can't be too broad here or we break
  # cancellations.
  try:
    success await blockFuture
  except AsyncTimeoutError as err:
    failure err

proc requestBlock*(
  b: BlockExcEngine,
  cid: Cid
): Future[?!Block] =
  b.requestBlock(BlockAddress.init(cid))
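
# Processes presence (have/dontHave) announcements from a peer, prunes
# presence entries we no longer want, requests blocks the peer reports
# having, and kicks off discovery for wants no connected peer can serve.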
proc blockPresenceHandler*(
  b: BlockExcEngine,
  peer: PeerId,
  blocks: seq[BlockPresence]) {.async.} =
  let
    peerCtx = b.peers.get(peer)
    wantList = toSeq(b.pendingBlocks.wantList)

  if peerCtx.isNil:
    return

  for blk in blocks:
    if presence =? Presence.init(blk):
      peerCtx.setPresence(presence)

  let
    peerHave = peerCtx.peerHave
    dontWantCids = peerHave.filterIt(
      it notin wantList
    )

  if dontWantCids.len > 0:
    peerCtx.cleanPresence(dontWantCids)

  let
    wantCids = wantList.filterIt(
      it in peerHave
    )

  if wantCids.len > 0:
    trace "Peer has blocks in our wantList", peer, wantCount = wantCids.len
    discard await allFinished(
      wantCids.mapIt(b.sendWantBlock(it, peerCtx)))

  # if none of the connected peers report our wants in their have list,
  # fire up discovery
  b.discovery.queueFindBlocksReq(
    toSeq(b.pendingBlocks.wantListCids)
    .filter do(cid: Cid) -> bool:
      not b.peers.anyIt( cid in it.peerHaveCids ))

proc scheduleTasks(b: BlockExcEngine, blocksDelivery: seq[BlockDelivery]) {.async.} =
  let
    cids = blocksDelivery.mapIt( it.blk.cid )

  # schedule any new peers to provide blocks to
  for p in b.peers:
    for c in cids: # for each cid
      # schedule a peer if it wants at least one cid
      # and we have it in our local store
      if c in p.peerWantsCids:
        if await (c in b.localStore):
          if b.scheduleTask(p):
            trace "Task scheduled for peer", peer = p.id
          else:
            warn "Unable to schedule task for peer", peer = p.id

          break # do next peer

proc cancelBlocks(b: BlockExcEngine, addrs: seq[BlockAddress]) {.async.} =
  ## Tells neighboring peers that we're no longer interested in a block.
  trace "Sending block request cancellations to peers", addrs = addrs.len

  let failed = (await allFinished(
    b.peers.mapIt(
      b.network.request.sendWantCancellations(
        peer = it.id,
        addresses = addrs))))
    .filterIt(it.failed)

  if failed.len > 0:
    warn "Failed to send block request cancellations to peers", peers = failed.len

proc resolveBlocks*(b: BlockExcEngine, blocksDelivery: seq[BlockDelivery]) {.async.} =
  b.pendingBlocks.resolve(blocksDelivery)
  await b.scheduleTasks(blocksDelivery)
  await b.cancelBlocks(blocksDelivery.mapIt(it.address))

proc resolveBlocks*(b: BlockExcEngine, blocks: seq[Block]) {.async.} =
  await b.resolveBlocks(
    blocks.mapIt(
      BlockDelivery(blk: it, address: BlockAddress(leaf: false, cid: it.cid)
    )))
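
# Settles micropayments for delivered blocks through the Nitro wallet, using
# the price the peer advertised for the delivered addresses.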
proc payForBlocks(engine: BlockExcEngine,
                  peer: BlockExcPeerCtx,
                  blocksDelivery: seq[BlockDelivery]) {.async.} =
  let
    sendPayment = engine.network.request.sendPayment
    price = peer.price(blocksDelivery.mapIt(it.address))

  if payment =? engine.wallet.pay(peer, price):
    trace "Sending payment for blocks", price, len = blocksDelivery.len
    await sendPayment(peer.id, payment)
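
# A delivery is accepted only if it was actually requested and, for leaf
# blocks, carries a Merkle proof that verifies against the tree root.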
proc validateBlockDelivery(
  b: BlockExcEngine,
  bd: BlockDelivery): ?!void =
  if bd.address notin b.pendingBlocks:
    return failure("Received block is not currently a pending block")

  if bd.address.leaf:
    without proof =? bd.proof:
      return failure("Missing proof")

    if proof.index != bd.address.index:
      return failure("Proof index " & $proof.index & " doesn't match leaf index " & $bd.address.index)

    without leaf =? bd.blk.cid.mhash.mapFailure, err:
      return failure("Unable to get mhash from cid for block, nested err: " & err.msg)

    without treeRoot =? bd.address.treeCid.mhash.mapFailure, err:
      return failure("Unable to get mhash from treeCid for block, nested err: " & err.msg)

    if err =? proof.verify(leaf, treeRoot).errorOption:
      return failure("Unable to verify proof for block, nested err: " & err.msg)
  else: # not leaf
    if bd.address.cid != bd.blk.cid:
      return failure("Delivery cid " & $bd.address.cid & " doesn't match block cid " & $bd.blk.cid)

  return success()

proc blocksDeliveryHandler*(
  b: BlockExcEngine,
  peer: PeerId,
  blocksDelivery: seq[BlockDelivery]) {.async.} =
  trace "Received blocks from peer", peer, blocks = (blocksDelivery.mapIt($it.address)).join(",")

  var validatedBlocksDelivery: seq[BlockDelivery]
  for bd in blocksDelivery:
    logScope:
      peer    = peer
      address = bd.address

    if err =? b.validateBlockDelivery(bd).errorOption:
      warn "Block validation failed", msg = err.msg
      continue

    if err =? (await b.localStore.putBlock(bd.blk)).errorOption:
      error "Unable to store block", err = err.msg
      continue

    if bd.address.leaf:
      without proof =? bd.proof:
        error "Proof expected for a leaf block delivery"
        continue
      if err =? (await b.localStore.putCidAndProof(
          bd.address.treeCid,
          bd.address.index,
          bd.blk.cid,
          proof)).errorOption:
        error "Unable to store proof and cid for a block"
        continue

    validatedBlocksDelivery.add(bd)

  await b.resolveBlocks(validatedBlocksDelivery)
  codex_block_exchange_blocks_received.inc(validatedBlocksDelivery.len.int64)

  let
    peerCtx = b.peers.get(peer)

  if peerCtx != nil:
    await b.payForBlocks(peerCtx, blocksDelivery)
    ## shouldn't we remove them from the want-list instead of this:
    peerCtx.cleanPresence(blocksDelivery.mapIt( it.address ))
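
# Handles a peer's want-list update: records new wantBlock entries, answers
# wantHave entries with presence messages, and schedules a task so queued
# wants can be served.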
proc wantListHandler*(
  b: BlockExcEngine,
  peer: PeerId,
  wantList: WantList) {.async.} =
  let
    peerCtx = b.peers.get(peer)

  if isNil(peerCtx):
    return

  var
    presence: seq[BlockPresence]

  for e in wantList.entries:
    let
      idx = peerCtx.peerWants.findIt(it.address == e.address)

    logScope:
      peer     = peerCtx.id
      address  = e.address
      wantType = $e.wantType

    if idx < 0: # new entry
      let
        have = await e.address in b.localStore
        price = @(
          b.pricing.get(Pricing(price: 0.u256))
          .price.toBytesBE)

      if e.wantType == WantType.WantHave:
        codex_block_exchange_want_have_lists_received.inc()

      if not have and e.sendDontHave:
        presence.add(
          BlockPresence(
            address: e.address,
            `type`: BlockPresenceType.DontHave,
            price: price))
      elif have and e.wantType == WantType.WantHave:
        presence.add(
          BlockPresence(
            address: e.address,
            `type`: BlockPresenceType.Have,
            price: price))
      elif e.wantType == WantType.WantBlock:
        peerCtx.peerWants.add(e)
        codex_block_exchange_want_block_lists_received.inc()
    else:
      # peer doesn't want this block anymore
      if e.cancel:
        peerCtx.peerWants.del(idx)
      else:
        # peer might want to ask for the same cid with
        # different want params
        peerCtx.peerWants[idx] = e # update entry

  if presence.len > 0:
    trace "Sending presence to remote", items = presence.mapIt($it).join(",")
    await b.network.request.sendPresence(peer, presence)

  if not b.scheduleTask(peerCtx):
    warn "Unable to schedule task for peer", peer

proc accountHandler*(
  engine: BlockExcEngine,
  peer: PeerId,
  account: Account) {.async.} =
  let
    context = engine.peers.get(peer)
  if context.isNil:
    return

  context.account = account.some

proc paymentHandler*(
  engine: BlockExcEngine,
  peer: PeerId,
  payment: SignedState) {.async.} =
  trace "Handling payments", peer

  without context =? engine.peers.get(peer).option and
          account =? context.account:
    trace "No context or account for peer", peer
    return

  if channel =? context.paymentChannel:
    let
      sender = account.address
    discard engine.wallet.acceptPayment(channel, Asset, sender, payment)
  else:
    context.paymentChannel = engine.wallet.acceptChannel(payment).option

proc setupPeer*(b: BlockExcEngine, peer: PeerId) {.async.} =
  ## Perform initial setup, such as want
  ## list exchange
  ##

  trace "Setting up peer", peer

  if peer notin b.peers:
    trace "Setting up new peer", peer
    b.peers.add(BlockExcPeerCtx(
      id: peer
    ))
    trace "Added peer", peers = b.peers.len

  # broadcast our want list, the other peer will do the same
  if b.pendingBlocks.wantListLen > 0:
    trace "Sending our want list to a peer", peer
    let cids = toSeq(b.pendingBlocks.wantList)
    await b.network.request.sendWantList(
      peer, cids, full = true)

  if address =? b.pricing.?address:
    await b.network.request.sendAccount(peer, Account(address: address))

proc dropPeer*(b: BlockExcEngine, peer: PeerId) =
  ## Cleanup disconnected peer
  ##

  trace "Dropping peer", peer

  # drop the peer from the peers table
  b.peers.remove(peer)

proc taskHandler*(b: BlockExcEngine, task: BlockExcPeerCtx) {.gcsafe, async.} =
  # Send the peer the blocks it wants to get,
  # if they are present in our local store
  # TODO: There should be all sorts of accounting of
  # bytes sent/received here

  var
    wantsBlocks = task.peerWants.filterIt(
      it.wantType == WantType.WantBlock and not it.inFlight
    )

  proc updateInFlight(addresses: seq[BlockAddress], inFlight: bool) =
    for peerWant in task.peerWants.mitems:
      if peerWant.address in addresses:
        peerWant.inFlight = inFlight

  if wantsBlocks.len > 0:
    # Mark wants as in-flight.
    let wantAddresses = wantsBlocks.mapIt(it.address)
    updateInFlight(wantAddresses, true)
    wantsBlocks.sort(SortOrder.Descending)

    proc localLookup(e: WantListEntry): Future[?!BlockDelivery] {.async.} =
      if e.address.leaf:
        (await b.localStore.getBlockAndProof(e.address.treeCid, e.address.index)).map(
          (blkAndProof: (Block, CodexProof)) =>
            BlockDelivery(address: e.address, blk: blkAndProof[0], proof: blkAndProof[1].some)
        )
      else:
        (await b.localStore.getBlock(e.address)).map(
          (blk: Block) => BlockDelivery(address: e.address, blk: blk, proof: CodexProof.none)
        )

    let
      blocksDeliveryFut = await allFinished(wantsBlocks.map(localLookup))
      blocksDelivery = blocksDeliveryFut
        .filterIt(it.completed and it.read.isOk)
        .mapIt(it.read.get)

    # All the wants that failed local lookup must be set to not-in-flight again.
    let
      successAddresses = blocksDelivery.mapIt(it.address)
      failedAddresses = wantAddresses.filterIt(it notin successAddresses)
    updateInFlight(failedAddresses, false)

    if blocksDelivery.len > 0:
      trace "Sending blocks to peer", peer = task.id, blocks = (blocksDelivery.mapIt($it.address)).join(",")
      await b.network.request.sendBlocksDelivery(
        task.id,
        blocksDelivery
      )

      codex_block_exchange_blocks_sent.inc(blocksDelivery.len.int64)

      task.peerWants.keepItIf(it.address notin successAddresses)

proc blockexcTaskRunner(b: BlockExcEngine) {.async.} =
  ## Process tasks
  ##

  trace "Starting blockexc task runner"
  while b.blockexcRunning:
    let
      peerCtx = await b.taskQueue.pop()

    await b.taskHandler(peerCtx)

  info "Exiting blockexc task runner"
proc new*(
  T: type BlockExcEngine,
  localStore: BlockStore,
  wallet: WalletRef,
  network: BlockExcNetwork,
  discovery: DiscoveryEngine,
  advertiser: Advertiser,
  peerStore: PeerCtxStore,
  pendingBlocks: PendingBlocksManager,
  concurrentTasks = DefaultConcurrentTasks,
  peersPerRequest = DefaultMaxPeersPerRequest,
  blockFetchTimeout = DefaultBlockTimeout,
): BlockExcEngine =
  ## Create new block exchange engine instance
  ##

  let
    engine = BlockExcEngine(
      localStore: localStore,
      peers: peerStore,
      pendingBlocks: pendingBlocks,
      peersPerRequest: peersPerRequest,
      network: network,
      wallet: wallet,
      concurrentTasks: concurrentTasks,
      taskQueue: newAsyncHeapQueue[BlockExcPeerCtx](DefaultTaskQueueSize),
      discovery: discovery,
      advertiser: advertiser,
      blockFetchTimeout: blockFetchTimeout)

  proc peerEventHandler(peerId: PeerId, event: PeerEvent) {.async.} =
    if event.kind == PeerEventKind.Joined:
      await engine.setupPeer(peerId)
    else:
      engine.dropPeer(peerId)

  if not isNil(network.switch):
    network.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined)
    network.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left)

  proc blockWantListHandler(
    peer: PeerId,
    wantList: WantList): Future[void] {.gcsafe.} =
    engine.wantListHandler(peer, wantList)

  proc blockPresenceHandler(
    peer: PeerId,
    presence: seq[BlockPresence]): Future[void] {.gcsafe.} =
    engine.blockPresenceHandler(peer, presence)

  proc blocksDeliveryHandler(
    peer: PeerId,
    blocksDelivery: seq[BlockDelivery]): Future[void] {.gcsafe.} =
    engine.blocksDeliveryHandler(peer, blocksDelivery)

  proc accountHandler(peer: PeerId, account: Account): Future[void] {.gcsafe.} =
    engine.accountHandler(peer, account)

  proc paymentHandler(peer: PeerId, payment: SignedState): Future[void] {.gcsafe.} =
    engine.paymentHandler(peer, payment)

  network.handlers = BlockExcHandlers(
    onWantList: blockWantListHandler,
    onBlocksDelivery: blocksDeliveryHandler,
    onPresence: blockPresenceHandler,
    onAccount: accountHandler,
    onPayment: paymentHandler)

  return engine