diff --git a/10 Notes/Advertising BitTorrent content on Codex.md b/10 Notes/Advertising BitTorrent content on Codex.md
index 87f1ada..3b04a2d 100644
--- a/10 Notes/Advertising BitTorrent content on Codex.md
+++ b/10 Notes/Advertising BitTorrent content on Codex.md
@@ -4,11 +4,14 @@ tags:
 related-to:
   - "[[How BitTorrent-Codex integration may look like?]]"
   - "[[Learn BitTorrent]]"
+related:
+  - "[[Uploading and downloading content in Codex]]"
 ---
 #bittorrent

 | related-to | [[How BitTorrent-Codex integration may look like?]], [[Learn BitTorrent]] |
 | ---------- | ------------------------------------------------------------------------- |
+| related    | [[Uploading and downloading content in Codex]]                             |

 This content builds upon [[BitTorrent metadata files]]
diff --git a/10 Notes/Advertising Blocks on DHT.md b/10 Notes/Advertising Blocks on DHT.md
new file mode 100644
index 0000000..6712e4f
--- /dev/null
+++ b/10 Notes/Advertising Blocks on DHT.md
@@ -0,0 +1,108 @@
---
tags:
  - codex/dht-advertising
related:
  - "[[No manifest for BitTorrent on Codex]]"
  - "[[Advertising BitTorrent content on Codex]]"
  - "[[Discovering Blocks on DHT]]"
---
#codex/dht-advertising

| related | [[No manifest for BitTorrent on Codex]], [[Advertising BitTorrent content on Codex]], [[Discovering Blocks on DHT]] |
| ------- | -------------------------------------------------------------------------------------------------------------------- |

Block advertising is handled by the `Advertiser` (`codex/blockexchange/engine/advertiser.nim`). Here is its `start` procedure:

```nim
proc start*(b: Advertiser) {.async.} =
  ## Start the advertiser
  ##

  trace "Advertiser start"

  proc onBlock(cid: Cid) {.async.} =
    await b.advertiseBlock(cid)

  doAssert(b.localStore.onBlockStored.isNone())
  b.localStore.onBlockStored = onBlock.some

  if b.advertiserRunning:
    warn "Starting advertiser twice"
    return

  b.advertiserRunning = true
  for i in 0 ..< b.concurrentAdvReqs:
    let fut = b.processQueueLoop()
    b.trackedFutures.track(fut)
    asyncSpawn fut

  b.advertiseLocalStoreLoop = advertiseLocalStoreLoop(b)
  b.trackedFutures.track(b.advertiseLocalStoreLoop)
  asyncSpawn b.advertiseLocalStoreLoop
```

Crucial here is the `onBlockStored` property of the `localStore`.

The `Advertiser` gets an instance of `RepoStore` as its `localStore` in `CodexServer.new`.

This handler is invoked in `RepoStore.putBlock`, and `RepoStore.putBlock` is in turn called from `NetworkStore.putBlock`.
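To make the `onBlockStored` wiring easier to picture, here is a minimal, self-contained sketch of the pattern: a store exposes an optional callback that is invoked from `putBlock`, and the advertiser registers itself there. All names and types below (`SimpleStore`, the `string` stand-in for `Cid`, the synchronous handler) are simplified illustrations, not the actual Codex definitions.

```nim
import std/options

type
  Cid = string
    # stand-in for Codex's Cid type, for illustration only

  OnBlockStored = proc(cid: Cid)

  SimpleStore = ref object
    blocks: seq[Cid]
    onBlockStored: Option[OnBlockStored] # hook invoked after a block is stored

proc putBlock(store: SimpleStore, cid: Cid) =
  ## Store the block locally, then notify whoever registered the hook.
  store.blocks.add(cid)
  if store.onBlockStored.isSome:
    store.onBlockStored.get()(cid)

when isMainModule:
  let store = SimpleStore()
  # The advertiser registers its handler, analogous to what `Advertiser.start` does.
  let handler: OnBlockStored = proc(cid: Cid) = echo "advertise ", cid
  store.onBlockStored = some(handler)
  store.putBlock("some-block-cid") # prints: advertise some-block-cid
```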
Let's now look quickly at `advertiseBlock`:

```nim
proc advertiseBlock(b: Advertiser, cid: Cid) {.async.} =
  without isM =? cid.isManifest, err:
    warn "Unable to determine if cid is manifest"
    return

  if isM:
    without blk =? await b.localStore.getBlock(cid), err:
      error "Error retrieving manifest block", cid, err = err.msg
      return

    without manifest =? Manifest.decode(blk), err:
      error "Unable to decode as manifest", err = err.msg
      return

    # announce manifest cid and tree cid
    await b.addCidToQueue(cid)
    await b.addCidToQueue(manifest.treeCid)
```

So, the first thing to notice is that **only Manifest Cids are currently advertised**. This may be of crucial importance in the context of [[Advertising BitTorrent content on Codex]] and [[No manifest for BitTorrent on Codex]].

Likewise, in `advertiseLocalStoreLoop`, we only iterate over `BlockType.Manifest` blocks:

```nim
proc advertiseLocalStoreLoop(b: Advertiser) {.async: (raises: []).} =
  while b.advertiserRunning:
    try:
      if cids =? await b.localStore.listBlocks(blockType = BlockType.Manifest):
        trace "Advertiser begins iterating blocks..."
        for c in cids:
          if cid =? await c:
            await b.advertiseBlock(cid)
        trace "Advertiser iterating blocks finished."

      await sleepAsync(b.advertiseLocalStoreLoopSleep)
    except CancelledError:
      break # do not propagate as advertiseLocalStoreLoop was asyncSpawned
    except CatchableError as e:
      error "failed to advertise blocks in local store", error = e.msgDetail

  info "Exiting advertise task loop"
```

For each `cid` to be advertised, the advertiser uses `Discovery.provide`, which announces the `cid` on the DHT using the DHT protocol's `addProvider` operation:

```nim
method provide*(d: Discovery, cid: Cid) {.async, base.} =
  ## Provide a block Cid
  ##
  let nodes = await d.protocol.addProvider(cid.toNodeId(), d.providerRecord.get)

  if nodes.len <= 0:
    warn "Couldn't provide to any nodes!"
```

For block discovery on the DHT, please refer to [[Discovering Blocks on DHT]].
diff --git a/10 Notes/Codex Block Exchange Protocol.md b/10 Notes/Codex Block Exchange Protocol.md
index 915419a..49d91ba 100644
--- a/10 Notes/Codex Block Exchange Protocol.md
+++ b/10 Notes/Codex Block Exchange Protocol.md
@@ -353,3 +353,4 @@ proc resolveBlocks*(b: BlockExcEngine, blocksDelivery: seq[BlockDelivery]) {.asy
   await b.cancelBlocks(blocksDelivery.mapIt(it.address))
 ```
+This is an important moment, as we are switching from *receiving* mode to *sending* mode: we just received a number of blocks via `blockDelivery`, and we will now announce possession of those blocks to other peers that may *want* them.
\ No newline at end of file
diff --git a/10 Notes/Codex Merkle Proofs.md b/10 Notes/Codex Merkle Proofs.md
new file mode 100644
index 0000000..e69de29
diff --git a/10 Notes/Codex Peer Context Records.md b/10 Notes/Codex Peer Context Records.md
new file mode 100644
index 0000000..da56e85
--- /dev/null
+++ b/10 Notes/Codex Peer Context Records.md
@@ -0,0 +1,37 @@
---
tags:
  - codex/peer-presence
related:
  - "[[Codex WantList]]"
---
#codex/peer-presence

| related | [[Codex WantList]], [[When Peer Presence Records are added and removed?]], [[When Peer Want List is updated?]] |
| ------- | -------------------------------------------------------------------------------------------------------------- |

For each remote peer we are interacting with, we create an object of type `BlockExcPeerCtx` (`codex/blockexchange/peers/peercontext.nim`). This object is created in one place only: in `BlockExcEngine.setupPeer`, which is called in response to the `PeerEventKind.Joined` peer event (registered in `BlockExcEngine.new`).

`BlockExcPeerCtx` objects keep two important pieces of information about the peer (a simplified sketch follows the list below):

1. the blocks the remote peer has - in the form of `Presence` records stored in `blocks: Table[BlockAddress, Presence]`
2. the blocks the remote peer **has explicitly asked for** - as `peerWants: seq[WantListEntry]`
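The following is a minimal sketch of such a peer-context record, just to make the two fields easier to picture. The field names mirror the description above, but everything else (the `string` stand-ins, the `peerHave` helper body) is simplified and hypothetical, not the actual Codex code.

```nim
import std/[tables, sets]

type
  BlockAddress = string # stand-in; the real type is a cid or a (treeCid, index) pair
  Presence = object
    address: BlockAddress
    price: int
  WantListEntry = object
    address: BlockAddress
    wantType: string # "WantHave" or "WantBlock" in the real protocol

  PeerCtxSketch = ref object
    id: string
    blocks: Table[BlockAddress, Presence] # what the remote peer has (and we want)
    peerWants: seq[WantListEntry]         # what the remote peer explicitly asked for

proc peerHave(ctx: PeerCtxSketch): HashSet[BlockAddress] =
  ## Addresses the remote peer is known to have, similar in spirit to the `peerHave` helper.
  result = initHashSet[BlockAddress]()
  for address in ctx.blocks.keys:
    result.incl(address)

when isMainModule:
  let ctx = PeerCtxSketch(id: "some-peer-id")
  ctx.blocks["block-1"] = Presence(address: "block-1", price: 0)
  ctx.peerWants.add WantListEntry(address: "block-2", wantType: "WantHave")
  echo ctx.peerHave() # prints a set containing "block-1"
```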
>[!note]
>To avoid confusion: `peerCtx.peerWants` is the list of blocks the remote peer has explicitly asked for, not the blocks that the remote peer would merely like to have. The blocks that the remote peer would like to have (but has not explicitly requested yet) are not recorded; they are handled on the fly by sending the presence list in response to a `WantHave` request (see `BlockExcEngine.wantListHandler`).

>[!note]
>It is important to emphasize that the *presence list* - the blocks that the remote peer has - is constrained to the list of blocks that the current peer is interested in (wants). Thus, at any given time, `blocks` in the `BlockExcPeerCtx` record is the intersection of two sets: the blocks that the remote peer has **AND** the blocks that we want.

>[!info]
>In BitTorrent, peers use `have` (the equivalent of our `presence`, but for a single piece only, which may consist of many blocks) to say that they have a piece. A peer never seems to say which blocks it wants without wanting to download them immediately (so, our `WantBlock`). In their very first message, downloaders send a `bitfield` message to indicate which pieces they already have; downloaders which do not have anything yet may skip the `bitfield` message. They also have `interested` and `not interested` messages to indicate whether they are interested in receiving blocks at all (these messages carry no payload, so they do not say which blocks you are or are no longer interested in). The interest state must be kept up to date at all times: whenever a downloader doesn't have something they would currently ask a peer for if `unchoked`, they must express lack of interest, despite being `choked`.

For the peer's *have* list, we have a couple of helpers:

- `peerHave` - the blocks (given as `seq[BlockAddress]`) the peer has
- `peerHaveCids` - the corresponding cids (`HashSet[Cid]`)
- `contains` - to check if the remote peer *has* a given block (given as `BlockAddress`)
- `setPresence` and `cleanPresence` - to add/remove blocks from the peer's *have* list

For the peer's *want* list, there is one helper, `peerWantsCids`, which gives back the cids of the corresponding blocks (as `HashSet[Cid]`).

See also [[When Peer Presence Records are added and removed?]] and [[When Peer Want List is updated?]]
diff --git a/10 Notes/Codex WantList.md b/10 Notes/Codex WantList.md
new file mode 100644
index 0000000..9c9141e
--- /dev/null
+++ b/10 Notes/Codex WantList.md
@@ -0,0 +1,201 @@
---
tags:
  - codex/want-list
  - codex/block-exchange
related:
  - "[[Codex Block Exchange Protocol]]"
  - "[[Uploading and downloading content in Codex]]"
---
#codex/want-list #codex/block-exchange

| related | [[Codex Block Exchange Protocol]], [[Uploading and downloading content in Codex]] |
| ------- | --------------------------------------------------------------------------------- |

When the engine is created, it subscribes to the `PeerEventKind.Joined` and `PeerEventKind.Left` events:

```nim
network.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined)
network.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left)
```

`PeerEventKind.Joined` is triggered when a peer connects to us, and `PeerEventKind.Left` when a peer disconnects from us.

When a peer joins, we call `setupPeer`:

```nim
proc setupPeer*(b: BlockExcEngine, peer: PeerId) {.async.} =
  ## Perform initial setup, such as want
  ## list exchange
  ##

  trace "Setting up peer", peer

  if peer notin b.peers:
    trace "Setting up new peer", peer
    b.peers.add(BlockExcPeerCtx(id: peer))
    trace "Added peer", peers = b.peers.len

  # broadcast our want list, the other peer will do the same
  if b.pendingBlocks.wantListLen > 0:
    trace "Sending our want list to a peer", peer
    let cids = toSeq(b.pendingBlocks.wantList)
    await b.network.request.sendWantList(peer, cids, full = true)

  if address =? b.pricing .? address:
    await b.network.request.sendAccount(peer, Account(address: address))
```

Here is where we send the joining peer our `WantList`.

We get it from `PendingBlocksManager`. `PendingBlocksManager` has a list of *pending* blocks. Every time the engine requests a block via `requestBlock(address)`, the block corresponding to the provided `address` becomes pending. This is done via the call:

```nim
b.pendingBlocks.getWantHandle(address, b.blockFetchTimeout)
```

`getWantHandle` will put the requested address on its `blocks` list, which is a mapping from `BlockAddress` to `BlockReq`:

```nim
p.blocks[address] = BlockReq(
  handle: newFuture[Block]("pendingBlocks.getWantHandle"),
  inFlight: inFlight,
  startTime: getMonoTime().ticks,
)
```
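To illustrate the idea that the want list is simply the set of currently pending block requests, here is a small self-contained sketch. It keeps pending requests in a plain table keyed by address; the names (`PendingSketch`, `BlockReqSketch`) and the `string` addresses are illustrative assumptions, not the actual `PendingBlocksManager` implementation (in particular, the future/timeout machinery is left out).

```nim
import std/[tables, sequtils, monotimes]

type
  BlockAddress = string # stand-in for the real BlockAddress type

  BlockReqSketch = object
    inFlight: bool   # has the block already been requested from a specific peer?
    startTime: int64 # when the block became pending

  PendingSketch = object
    blocks: Table[BlockAddress, BlockReqSketch]

proc getWantHandle(p: var PendingSketch, address: BlockAddress) =
  ## Register `address` as pending, loosely mirroring `getWantHandle`.
  if address notin p.blocks:
    p.blocks[address] = BlockReqSketch(inFlight: false, startTime: getMonoTime().ticks)

iterator wantList(p: PendingSketch): BlockAddress =
  ## At any moment, the pending addresses *are* the want list.
  for address in p.blocks.keys:
    yield address

when isMainModule:
  var pending = PendingSketch()
  pending.getWantHandle("block-1")
  pending.getWantHandle("block-2")
  echo toSeq(pending.wantList()) # this is what gets sent via sendWantList(..., full = true)
```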
At any given time, the pending blocks form our `WantList`, and this is what gets sent to the joining peer:

```nim
await b.network.request.sendWantList(peer, cids, full = true)
```

where `request.sendWantList` is set to:

```nim
proc sendWantList(
    id: PeerId,
    cids: seq[BlockAddress],
    priority: int32 = 0,
    cancel: bool = false,
    wantType: WantType = WantType.WantHave,
    full: bool = false,
    sendDontHave: bool = false,
): Future[void] {.gcsafe.} =
  self.sendWantList(id, cids, priority, cancel, wantType, full, sendDontHave)
```

in `BlockExcNetwork.new`.

We see that the `wantType` argument takes the default value `WantType.WantHave`. The `full` argument is set to `true` in this case, which means this is our full `WantList`.

Thus, intuitively, if a `cid` (a `BlockAddress`, to be precise) is on the `WantList` with `WantType.WantHave`, it means that the corresponding node *wants to have* that cid.

Let's take a closer look at the `BlockExcEngine.requestBlock` proc:

```nim
proc requestBlock*(
    b: BlockExcEngine, address: BlockAddress
): Future[?!Block] {.async.} =
  let blockFuture = b.pendingBlocks.getWantHandle(address, b.blockFetchTimeout)

  if not b.pendingBlocks.isInFlight(address):
    let peers = b.peers.getPeersForBlock(address)

    if peers.with.len == 0:
      b.discovery.queueFindBlocksReq(@[address.cidOrTreeCid])
    else:
      let selected = pickPseudoRandom(address, peers.with)
      asyncSpawn b.monitorBlockHandle(blockFuture, address, selected.id)
      b.pendingBlocks.setInFlight(address)
      await b.sendWantBlock(@[address], selected)

    await b.sendWantHave(@[address], peers.without)

  # Don't let timeouts bubble up. We can't be too broad here or we break
  # cancellations.
  try:
    success await blockFuture
  except AsyncTimeoutError as err:
    failure err
```

> [!warning]
> `requestBlock`, as we see it above, is undergoing some important changes, and for a good reason. First, it will be renamed to `downloadInternal`, and `getWantHandle` will no longer await the returned handle (thus it will ultimately be doing what it says it does). Another important change to notice is that `sendWantHave` will be called only if there are no peers with the requested address; in the version above, we see that `WantHave` is sent even if we have a peer with the requested address to which we have just sent `WantBlock`.

When a node *requests* a block, we first check if the given pending block has the `inFlight` attribute set, indicating that the block has recently been requested from a remote node known to have it. If that is not the case, we gather all the peers that have the given `cid` and the complementary list of peers that do not have it. If no peer in the swarm has that `cid`, we trigger discovery. Otherwise, we (pseudo-)randomly choose one peer known to have the given `cid` and send it a `WantBlock` request. We then send a `WantHave` request to all the peers known not to have that `cid` (so that they know we are interested in it and can let us know once they have it).
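The sketch below illustrates the peer-selection step of that paragraph: splitting known peers into those that have the block and those that do not, and picking one of the former deterministically from the block address. The field names, the hashing-based pick, and the `string` addresses are all assumptions made for illustration; the actual `getPeersForBlock` and `pickPseudoRandom` may work differently.

```nim
import std/[hashes, sequtils]

type
  PeerSketch = object
    id: string
    has: seq[string] # addresses this peer is known to have (from presence records)

proc splitPeersForBlock(peers: seq[PeerSketch], address: string):
    tuple[withBlock, withoutBlock: seq[PeerSketch]] =
  ## Split peers into those known to have `address` and those known not to,
  ## mirroring the `peers.with` / `peers.without` split used above.
  for p in peers:
    if address in p.has:
      result.withBlock.add(p)
    else:
      result.withoutBlock.add(p)

proc pickDeterministic(peers: seq[PeerSketch], address: string): PeerSketch =
  ## Pick one peer pseudo-randomly but deterministically for a given address,
  ## e.g. by hashing the address (one possible reading of a pseudo-random pick).
  peers[abs(hash(address)) mod peers.len]

when isMainModule:
  let peers = @[
    PeerSketch(id: "peer-a", has: @["block-1"]),
    PeerSketch(id: "peer-b", has: @[]),
  ]
  let (withBlock, withoutBlock) = splitPeersForBlock(peers, "block-1")
  if withBlock.len > 0:
    echo "send WantBlock to ", pickDeterministic(withBlock, "block-1").id
  echo "send WantHave to ", withoutBlock.mapIt(it.id)
```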
Now, let's look at what happens when a peer receives the `WantList`. This is handled by `BlockExcEngine.wantListHandler`:

```nim
proc wantListHandler*(b: BlockExcEngine, peer: PeerId, wantList: WantList) {.async.} =
  let peerCtx = b.peers.get(peer)

  if peerCtx.isNil:
    return

  var
    presence: seq[BlockPresence]
    schedulePeer = false

  for e in wantList.entries:
    let idx = peerCtx.peerWants.findIt(it.address == e.address)

    logScope:
      peer = peerCtx.id
      address = e.address
      wantType = $e.wantType

    if idx < 0: # Adding new entry to peer wants
      let
        have = await e.address in b.localStore
        price = @(b.pricing.get(Pricing(price: 0.u256)).price.toBytesBE)

      case e.wantType
      of WantType.WantHave:
        if have:
          presence.add(
            BlockPresence(
              address: e.address, `type`: BlockPresenceType.Have, price: price
            )
          )
        else:
          if e.sendDontHave:
            presence.add(
              BlockPresence(
                address: e.address, `type`: BlockPresenceType.DontHave, price: price
              )
            )
          peerCtx.peerWants.add(e)

        codex_block_exchange_want_have_lists_received.inc()
      of WantType.WantBlock:
        peerCtx.peerWants.add(e)
        schedulePeer = true
        codex_block_exchange_want_block_lists_received.inc()
    else: # Updating existing entry in peer wants
      # peer doesn't want this block anymore
      if e.cancel:
        trace "Canceling want for block", address = e.address
        peerCtx.peerWants.del(idx)
      else:
        # peer might want to ask for the same cid with
        # different want params
        trace "Updating want for block", address = e.address
        peerCtx.peerWants[idx] = e # update entry

  if presence.len > 0:
    trace "Sending presence to remote", items = presence.mapIt($it).join(",")
    await b.network.request.sendPresence(peer, presence)

  if schedulePeer:
    if not b.scheduleTask(peerCtx):
      warn "Unable to schedule task for peer", peer
```

We go through the `WantList` entries, one by one:

1. We check if the `WantList` item is already on the locally kept `WantList` associated with that peer (`peerCtx.peerWants`).
2. If it is not, we add a new entry to the peer's `WantList`:
    1. We first check if we already have the block corresponding to the `WantList` item in our `localStore`.
    2. If the item is `WantHave` and we do have the block, we add a `Have` entry to the `presence` list. If the item is `WantHave` but we do not have the block in `localStore`, we add a `DontHave` entry (when `sendDontHave` is set) and record the item in `peerCtx.peerWants`. If the item is `WantBlock`, we add the corresponding entry to `peerCtx.peerWants` and set a flag to schedule a task that will eventually send the requested block to the remote peer (this happens regardless of whether we currently have the block in `localStore`).
3. If the `WantList` item is already on the locally kept `WantList` associated with that peer, we just update the entry (or remove it when `e.cancel` is set).
\ No newline at end of file diff --git a/10 Notes/Uploading and downloading content in Codex.md b/10 Notes/Uploading and downloading content in Codex.md index ab6bbc0..63df706 100644 --- a/10 Notes/Uploading and downloading content in Codex.md +++ b/10 Notes/Uploading and downloading content in Codex.md @@ -53,7 +53,7 @@ const ### storing blocks -Now, the `netoworkStore.putBlock`: +Now, the `networkStore.putBlock`: ```nim method putBlock*( @@ -152,8 +152,193 @@ There is a cascade of callbacks going from `RepoStore` through `TypedDatastore` `LevelDbDataStore` directly interacts with the underlying storage and ensures atomicity of the `modifyGet` operation. `TypedDatastore` performs *encoding* and *decoding* of the data. Finally, `RepoStore` handles metadata creation or update, and also writes the actual block to the underlying block storage via its `repoDS` instance variable. +After the blocks are stored in `repoDS`, back in `node.store` (`CodexNodeRef.store`), we build the Merkle Tree for our block cids and then we compute its root (`treeCid`). Finally, for each block (cid) we compute the [[Codex Merkle Proofs|inclusion proofs]], and we store each `cid`, block `index`, and `proof` under the computed `treeCid`: + +```nim +without tree =? CodexTree.init(cids), err: + return failure(err) + + without treeCid =? tree.rootCid(CIDv1, dataCodec), err: + return failure(err) + + for index, cid in cids: + without proof =? tree.getProof(index), err: + return failure(err) + if err =? + (await self.networkStore.putCidAndProof(treeCid, index, cid, proof)).errorOption: + # TODO add log here + return failure(err) +``` + This concludes the local block storage. We leave the description of `engine.resolveBlocks(@[blk])` for later, when describing the block exchange protocol. ## Downloading content -TBD... +When we want to download the content from the network, we use `/api/codex/v1/data/{cid}/network/stream` API where we call `await node.retrieveCid(cid.get(), local = false, resp = resp)`. + +`node.retrieveCid` tries to get a stream (descendent of libp2p's `LPStream`): + +```nim +without stream =? (await node.retrieve(cid, local)), error: + if error of BlockNotFoundError: + resp.status = Http404 + return await resp.sendBody("") + else: + resp.status = Http500 + return await resp.sendBody(error.msg) +``` + +This `stream` will be read chunk by chunk (`DefaultBlockSize`) and returned to the client. + +To see what the `stream` really will be, we need to dive into `node.retrieve(cid, local)` (`local` is `false` in this case): + +```nim +proc retrieve*( + self: CodexNodeRef, cid: Cid, local: bool = true +): Future[?!LPStream] {.async.} = + ## Retrieve by Cid a single block or an entire dataset described by manifest + ## + + if local and not await (cid in self.networkStore): + return failure((ref BlockNotFoundError)(msg: "Block not found in local store")) + + without manifest =? (await self.fetchManifest(cid)), err: + if err of AsyncTimeoutError: + return failure(err) + + return await self.streamSingleBlock(cid) + + await self.streamEntireDataset(manifest, cid) +``` + +We first try to get the manifest with `self.fetchManifest(cid)`: + +```nim +proc fetchManifest*(self: CodexNodeRef, cid: Cid): Future[?!Manifest] {.async.} = + ## Fetch and decode a manifest block + ## + + if err =? cid.isManifest.errorOption: + return failure "CID has invalid content type for manifest {$cid}" + + trace "Retrieving manifest for cid", cid + + without blk =? 
await self.networkStore.getBlock(BlockAddress.init(cid)), err: + trace "Error retrieve manifest block", cid, err = err.msg + return failure err + + trace "Decoding manifest for cid", cid + + without manifest =? Manifest.decode(blk), err: + trace "Unable to decode as manifest", err = err.msg + return failure("Unable to decode as manifest") + + trace "Decoded manifest", cid + + return manifest.success +``` + +Manifest is ***single block***, and we get it with: + +```nim +self.networkStore.getBlock(BlockAddress.init(cid)) +``` + +Here `BlockAddress.init(cid)` reduces to `BlockAddress(leaf: false, cid: cid)`, which means our [[Codex Blocks|BlockAddress]] is just a `Cid`. `getBlock` will try to get the block from the `localStore` first: + +```nim +method getBlock*(self: NetworkStore, address: BlockAddress): Future[?!Block] {.async.} = + without blk =? (await self.localStore.getBlock(address)), err: + if not (err of BlockNotFoundError): + error "Error getting block from local store", address, err = err.msg + return failure err + + without newBlock =? (await self.engine.requestBlock(address)), err: + error "Unable to get block from exchange engine", address, err = err.msg + return failure err + + return success newBlock + + return success blk +``` + +It is `RepoStore`, which by default is set to be a `FSDatastore`: + +```nim +method getBlock*(self: RepoStore, address: BlockAddress): Future[?!Block] = + ## Get a block from the blockstore + ## + + if address.leaf: + self.getBlock(address.treeCid, address.index) + else: + self.getBlock(address.cid) +``` + +Now, we have `leaf` set to `false`, thus we will be using simpler `getBlock` variant: + +```nim +method getBlock*(self: RepoStore, cid: Cid): Future[?!Block] {.async.} = + ## Get a block from the blockstore + ## + + logScope: + cid = cid + + if cid.isEmpty: + trace "Empty block, ignoring" + return cid.emptyBlock + + without key =? makePrefixKey(self.postFixLen, cid), err: + trace "Error getting key from provider", err = err.msg + return failure(err) + + without data =? await self.repoDs.get(key), err: + if not (err of DatastoreKeyNotFound): + trace "Error getting block from datastore", err = err.msg, key + return failure(err) + + return failure(newException(BlockNotFoundError, err.msg)) + + trace "Got block for cid", cid + return Block.new(cid, data, verify = true) +``` + +If we do not have the block in the `localStore`, we will be trying to get it from the network with `self.engine.requestBlock(address)`: + +```nim +proc requestBlock*( + b: BlockExcEngine, address: BlockAddress +): Future[?!Block] {.async.} = + let blockFuture = b.pendingBlocks.getWantHandle(address, b.blockFetchTimeout) + + if not b.pendingBlocks.isInFlight(address): + let peers = b.peers.getPeersForBlock(address) + + if peers.with.len == 0: + b.discovery.queueFindBlocksReq(@[address.cidOrTreeCid]) + else: + let selected = pickPseudoRandom(address, peers.with) + asyncSpawn b.monitorBlockHandle(blockFuture, address, selected.id) + b.pendingBlocks.setInFlight(address) + await b.sendWantBlock(@[address], selected) + + await b.sendWantHave(@[address], peers.without) + + # Don't let timeouts bubble up. We can't be too broad here or we break + # cancellations. + try: + success await blockFuture + except AsyncTimeoutError as err: + failure err +``` + +This is also where [[Codex WantList]] topic becomes relevant (and perhaps also [[Codex Block Exchange Protocol]]. 
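Before moving on to the streaming part, it may help to picture what a `BlockAddress` looks like. Based on the two code paths above (`BlockAddress(leaf: false, cid: cid)` for the manifest, and `address.treeCid` / `address.index` for leaves), a simplified sketch could look as follows. This is an illustration only, not the actual Codex definition.

```nim
type
  Cid = string # stand-in for the real Cid type

  BlockAddressSketch = object
    case leaf: bool
    of false:
      cid: Cid         # a standalone block, e.g. a manifest block
    of true:
      treeCid: Cid     # root of the dataset's Merkle tree
      index: int       # position of the leaf block under that tree

proc initAddress(cid: Cid): BlockAddressSketch =
  ## Non-leaf address, like `BlockAddress.init(cid)` used for the manifest.
  BlockAddressSketch(leaf: false, cid: cid)

proc initAddress(treeCid: Cid, index: int): BlockAddressSketch =
  ## Leaf address, as used when fetching the dataset blocks described by a manifest.
  BlockAddressSketch(leaf: true, treeCid: treeCid, index: index)

when isMainModule:
  let manifestAddr = initAddress("manifest-cid")
  let firstLeaf = initAddress("tree-cid", 0)
  echo manifestAddr.leaf, " ", firstLeaf.leaf # false true
```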
Once we have the manifest, we proceed with creating a stream through which we will stream the data down to the browser:

```nim
LPStream(StoreStream.new(self.networkStore, manifest, pad = false)).success
```

The stream abstraction provides a `readOnce` method, which retrieves the blocks from the `networkStore` and sends the requested bytes down the stream. `readOnce` is called in `node.retrieveCid`.
diff --git a/10 Notes/When Peer Presence Records are added and removed?.md b/10 Notes/When Peer Presence Records are added and removed?.md
new file mode 100644
index 0000000..4694072
--- /dev/null
+++ b/10 Notes/When Peer Presence Records are added and removed?.md
@@ -0,0 +1,17 @@
---
tags:
  - codex/peer-presence
related:
  - "[[Codex Peer Context Records]]"
---
#codex/peer-presence

| related | [[Codex Peer Context Records]] |
| ------- | -------------------------------- |

A remote peer's `Presence` records are added to the remote peer context object (`BlockExcPeerCtx`) in one place only: `BlockExcEngine.blockPresenceHandler`.

We remove the records in `BlockExcEngine.blockPresenceHandler` (as part of filtering out irrelevant records), when canceling blocks (in response to receiving new block deliveries), and via the explicit `cleanPresence` call in `BlockExcEngine.blockPresenceHandler` itself.

>[!question]
> Isn't the last one (calling `cleanPresence` in `BlockExcEngine.blockPresenceHandler`) a duplicate of `resolveBlocks`, which calls `cancelBlocks`, which also calls `cleanPresence`?