## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
##  * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.

{.push raises: [].}

import std/options
import std/sequtils
import std/strformat
import std/sugar

import pkg/questionable
import pkg/questionable/results
import pkg/chronos
import pkg/poseidon2

import pkg/libp2p/[switch, multicodec, multihash]
import pkg/libp2p/stream/bufferstream

# TODO: remove once exported by libp2p
import pkg/libp2p/routing_record
import pkg/libp2p/signed_envelope

import ./chunker
import ./slots
import ./clock
import ./blocktype as bt
import ./manifest
import ./merkletree
import ./stores/blockstore
import ./blockexchange
import ./streams
import ./erasure
import ./discovery
import ./contracts
import ./indexingstrategy
import ./utils
import ./errors
import ./logutils
import ./utils/poseidon2digest

export logutils

logScope:
  topics = "codex node"

const
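  # Default number of blocks fetched per batch when prefetching a dataset.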
  FetchBatch = 200
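
# Node state: references to the networking switch, local block store, block
# exchange engine, erasure coding, discovery and optional on-chain contract
# interactions, plus the callback types consumed elsewhere in the node.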
type
  Contracts* = tuple
    client: ?ClientInteractions
    host: ?HostInteractions
    validator: ?ValidatorInteractions

  CodexNode* = object
    switch: Switch
    networkId: PeerId
    blockStore: BlockStore
    engine: BlockExcEngine
    erasure: Erasure
    discovery: Discovery
    contracts*: Contracts
    clock*: Clock
    storage*: Contracts

  CodexNodeRef* = ref CodexNode

  OnManifest* = proc(cid: Cid, manifest: Manifest): void {.gcsafe, raises: [].}
  BatchProc* = proc(blocks: seq[bt.Block]): Future[?!void] {.gcsafe, raises: [].}

func switch*(self: CodexNodeRef): Switch =
  return self.switch

func blockStore*(self: CodexNodeRef): BlockStore =
  return self.blockStore

func engine*(self: CodexNodeRef): BlockExcEngine =
  return self.engine

func erasure*(self: CodexNodeRef): Erasure =
  return self.erasure

func discovery*(self: CodexNodeRef): Discovery =
  return self.discovery
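
# Serializes a manifest and persists it as a single block in the local store,
# returning that block so callers can reference or announce its CID.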

proc storeManifest*(
  self: CodexNodeRef,
  manifest: Manifest): Future[?!bt.Block] {.async.} =
  without encodedVerifiable =? manifest.encode(), err:
    trace "Unable to encode manifest"
    return failure(err)

  without blk =? bt.Block.new(data = encodedVerifiable, codec = ManifestCodec), error:
    trace "Unable to create block from manifest"
    return failure(error)

  if err =? (await self.blockStore.putBlock(blk)).errorOption:
    trace "Unable to store manifest block", cid = blk.cid, err = err.msg
    return failure(err)

  success blk

proc fetchManifest*(
  self: CodexNodeRef,
  cid: Cid): Future[?!Manifest] {.async.} =
  ## Fetch and decode a manifest block
  ##

  if err =? cid.isManifest.errorOption:
    return failure(&"CID has invalid content type for manifest {cid}")

  trace "Retrieving manifest for cid", cid

  without blk =? await self.blockStore.getBlock(BlockAddress.init(cid)), err:
    trace "Error retrieving manifest block", cid, err = err.msg
    return failure err

  trace "Decoding manifest for cid", cid

  without manifest =? Manifest.decode(blk), err:
    trace "Unable to decode as manifest", err = err.msg
    return failure("Unable to decode as manifest")

  trace "Decoded manifest", cid

  return manifest.success

proc findPeer*(
  self: CodexNodeRef,
  peerId: PeerId): Future[?PeerRecord] {.async.} =
  ## Find peer using the discovery service from the given CodexNode
  ##
  return await self.discovery.findPeer(peerId)

proc connect*(
  self: CodexNodeRef,
  peerId: PeerId,
  addrs: seq[MultiAddress]
): Future[void] =
  self.switch.connect(peerId, addrs)

proc updateExpiry*(
  self: CodexNodeRef,
  manifestCid: Cid,
  expiry: SecondsSince1970): Future[?!void] {.async.} =

  without manifest =? await self.fetchManifest(manifestCid), error:
    trace "Unable to fetch manifest for cid", manifestCid
    return failure(error)

  try:
    let
      ensuringFutures = Iter
        .fromSlice(0..<manifest.blocksCount)
        .mapIt(self.blockStore.ensureExpiry( manifest.treeCid, it, expiry ))
    await allFuturesThrowing(ensuringFutures)
  except CancelledError as exc:
    raise exc
  except CatchableError as exc:
    return failure(exc.msg)

  return success()
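
# Prefetching maps block indices to getBlock futures and awaits them
# `batchSize` at a time; `onBatch`, when provided, runs over each completed
# batch (e.g. to bump block expiries) before the next batch is requested.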

proc fetchBatched*(
  self: CodexNodeRef,
  cid: Cid,
  iter: Iter[int],
  batchSize = FetchBatch,
  onBatch: BatchProc = nil): Future[?!void] {.async, gcsafe.} =
  ## Fetch blocks in batches of `batchSize`
  ##

  let
    iter = iter.map(
      (i: int) => self.blockStore.getBlock(BlockAddress.init(cid, i))
    )

  while not iter.finished:
    let blocks = collect:
      for i in 0..<batchSize:
        if not iter.finished:
          iter.next()

    if blocksErr =? (await allFutureResult(blocks)).errorOption:
      return failure(blocksErr)

    if not onBatch.isNil and
      batchErr =? (await onBatch(blocks.mapIt( it.read.get ))).errorOption:
      return failure(batchErr)

  success()

proc fetchBatched*(
  self: CodexNodeRef,
  manifest: Manifest,
  batchSize = FetchBatch,
  onBatch: BatchProc = nil): Future[?!void] =
  ## Fetch manifest in batches of `batchSize`
  ##

  trace "Fetching blocks in batches of", size = batchSize

  let iter = Iter.fromSlice(0..<manifest.blocksCount)
  self.fetchBatched(manifest.treeCid, iter, batchSize, onBatch)
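
# Usage sketch (illustrative only; `node` and `manifest` are assumed to be in
# scope and are not part of this module):
#
#   let res = await node.fetchBatched(
#     manifest,
#     onBatch = proc(blks: seq[bt.Block]): Future[?!void] {.async.} =
#       trace "Prefetched batch", blocks = blks.len
#       success())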

proc retrieve*(
  self: CodexNodeRef,
  cid: Cid,
  local: bool = true): Future[?!LPStream] {.async.} =
  ## Retrieve by Cid a single block or an entire dataset described by manifest
  ##

  if local and not await (cid in self.blockStore):
    return failure((ref BlockNotFoundError)(msg: "Block not found in local store"))

  if manifest =? (await self.fetchManifest(cid)):
    trace "Retrieving blocks from manifest", cid
    if manifest.protected:
      # Retrieve, decode and save to the local store all EC groups
      proc erasureJob(): Future[void] {.async.} =
        try:
          # Spawn an erasure decoding job
          without res =? (await self.erasure.decode(manifest)), error:
            trace "Unable to erasure decode manifest", cid, exc = error.msg
        except CatchableError as exc:
          trace "Exception decoding manifest", cid, exc = exc.msg

      asyncSpawn erasureJob()

    # Retrieve all blocks of the dataset sequentially from the local store or network
    trace "Creating store stream for manifest", cid
    LPStream(StoreStream.new(self.blockStore, manifest, pad = false)).success
  else:
    let
      stream = BufferStream.new()

    without blk =? (await self.blockStore.getBlock(BlockAddress.init(cid))), err:
      return failure(err)

    proc streamOneBlock(): Future[void] {.async.} =
      try:
        await stream.pushData(blk.data)
      except CatchableError as exc:
        trace "Unable to send block", cid, exc = exc.msg
        discard
      finally:
        await stream.pushEof()

    asyncSpawn streamOneBlock()
    LPStream(stream).success()
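
# Upload path: chunk the incoming stream, hash and store each block, build a
# merkle tree over the block CIDs, persist per-block inclusion proofs, then
# store the manifest and announce both the manifest CID and the tree CID via
# the discovery service.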

proc store*(
  self: CodexNodeRef,
  stream: LPStream,
  blockSize = DefaultBlockSize): Future[?!Cid] {.async.} =
  ## Save stream contents as a dataset with the given blockSize
  ## to the node's BlockStore, and return the Cid of its manifest
  ##
  trace "Storing data"

  let
    hcodec = Sha256HashCodec
    dataCodec = BlockCodec
    chunker = LPStreamChunker.new(stream, chunkSize = blockSize)

  var cids: seq[Cid]

  try:
    while (
      let chunk = await chunker.getBytes();
      chunk.len > 0):

      trace "Got data from stream", len = chunk.len

      without mhash =? MultiHash.digest($hcodec, chunk).mapFailure, err:
        return failure(err)

      without cid =? Cid.init(CIDv1, dataCodec, mhash).mapFailure, err:
        return failure(err)

      without blk =? bt.Block.new(cid, chunk, verify = false):
        return failure("Unable to init block from chunk!")

      cids.add(cid)

      if err =? (await self.blockStore.putBlock(blk)).errorOption:
        trace "Unable to store block", cid = blk.cid, err = err.msg
        return failure(&"Unable to store block {blk.cid}")
  except CancelledError as exc:
    raise exc
  except CatchableError as exc:
    return failure(exc.msg)
  finally:
    await stream.close()

  without tree =? CodexTree.init(cids), err:
    return failure(err)

  without treeCid =? tree.rootCid(CIDv1, dataCodec), err:
    return failure(err)

  for index, cid in cids:
    without proof =? tree.getProof(index), err:
      return failure(err)
    if err =? (await self.blockStore.putCidAndProof(treeCid, index, cid, proof)).errorOption:
      # TODO add log here
      return failure(err)

  let manifest = Manifest.new(
    treeCid = treeCid,
    blockSize = blockSize,
    datasetSize = NBytes(chunker.offset),
    version = CIDv1,
    hcodec = hcodec,
    codec = dataCodec)

  without manifestBlk =? await self.storeManifest(manifest), err:
    trace "Unable to store manifest"
    return failure(err)

  info "Stored data", manifestCid = manifestBlk.cid,
                      treeCid = treeCid,
                      blocks = manifest.blocksCount,
                      datasetSize = manifest.datasetSize

  # Announce manifest
  await self.discovery.provide(manifestBlk.cid)
  await self.discovery.provide(treeCid)

  return manifestBlk.cid.success
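
# Usage sketch (illustrative only; `node` and `stream` are assumed to be in
# scope and are not part of this module):
#
#   without cid =? await node.store(stream), err:
#     error "Unable to store stream", err = err.msg
#   without readable =? await node.retrieve(cid), err:
#     error "Unable to retrieve dataset", err = err.msg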

proc iterateManifests*(self: CodexNodeRef, onManifest: OnManifest) {.async.} =
  without cids =? await self.blockStore.listBlocks(BlockType.Manifest):
    warn "Failed to listBlocks"
    return

  for c in cids:
    if cid =? await c:
      without blk =? await self.blockStore.getBlock(cid):
        warn "Failed to get manifest block by cid", cid
        return

      without manifest =? Manifest.decode(blk):
        warn "Failed to decode manifest", cid
        return

      onManifest(cid, manifest)
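
# Storage requests erasure code the dataset with K = nodes - tolerance and
# M = tolerance, so a request can tolerate the loss of up to `tolerance`
# slots (see `maxSlotLoss` below) and still recover the data.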

proc setupRequest(
  self: CodexNodeRef,
  cid: Cid,
  duration: UInt256,
  proofProbability: UInt256,
  nodes: uint,
  tolerance: uint,
  reward: UInt256,
  collateral: UInt256,
  expiry: UInt256): Future[?!StorageRequest] {.async.} =
  ## Set up slots for a given dataset
  ##

  let
    ecK = nodes - tolerance
    ecM = tolerance

  logScope:
    cid = cid
    duration = duration
    nodes = nodes
    tolerance = tolerance
    reward = reward
    proofProbability = proofProbability
    collateral = collateral
    expiry = expiry
    ecK = ecK
    ecM = ecM

  trace "Setting up slots"

  without manifest =? await self.fetchManifest(cid), error:
    trace "Unable to fetch manifest for cid"
    return failure error

  # Erasure code the dataset according to provided parameters
  without encoded =? (await self.erasure.encode(manifest, ecK, ecM)), error:
    trace "Unable to erasure code dataset"
    return failure(error)

  without builder =? Poseidon2Builder.new(self.blockStore, encoded), err:
    trace "Unable to create slot builder"
    return failure(err)

  without verifiable =? (await builder.buildManifest()), err:
    trace "Unable to build verifiable manifest"
    return failure(err)

  without manifestBlk =? await self.storeManifest(verifiable), err:
    trace "Unable to store verifiable manifest"
    return failure(err)

  let
    verifyRoot =
      if builder.verifyRoot.isNone:
        return failure("No slots root")
      else:
        builder.verifyRoot.get.toBytes

    slotRoots =
      if builder.slotRoots.len <= 0:
        return failure("Slots are empty")
      else:
        builder.slotRoots.mapIt( it.toBytes )

    request = StorageRequest(
      ask: StorageAsk(
        slots: verifiable.numSlots.uint64,
        slotSize: builder.slotBytes.uint.u256,
        duration: duration,
        proofProbability: proofProbability,
        reward: reward,
        collateral: collateral,
        maxSlotLoss: tolerance
      ),
      content: StorageContent(
        cid: $manifestBlk.cid, # TODO: why string?
        merkleRoot: verifyRoot
      ),
      expiry: expiry
    )

  trace "Request created", request = $request
  success request
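
# Usage sketch (illustrative only; the `node` instance and all parameter
# values are assumptions, not defaults from this module):
#
#   without purchaseId =? await node.requestStorage(
#       cid, duration = 100.u256, proofProbability = 5.u256,
#       nodes = 3, tolerance = 1, reward = 10.u256,
#       collateral = 200.u256, expiry = 30.u256), err:
#     error "Unable to request storage", err = err.msg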

proc requestStorage*(
  self: CodexNodeRef,
  cid: Cid,
  duration: UInt256,
  proofProbability: UInt256,
  nodes: uint,
  tolerance: uint,
  reward: UInt256,
  collateral: UInt256,
  expiry: UInt256): Future[?!PurchaseId] {.async.} =
  ## Initiate a storage request sequence; this might
  ## be a multistep procedure.
  ##

  logScope:
    cid = cid
    duration = duration
    nodes = nodes
    tolerance = tolerance
    reward = reward
    proofProbability = proofProbability
    collateral = collateral
    expiry = expiry

  trace "Received a request for storage!"

  without contracts =? self.contracts.client:
    trace "Purchasing not available"
    return failure "Purchasing not available"

  without request =?
    (await self.setupRequest(
      cid,
      duration,
      proofProbability,
      nodes,
      tolerance,
      reward,
      collateral,
      expiry)), err:
    trace "Unable to setup request"
    return failure err

  let purchase = await contracts.purchasing.purchase(request)
  success purchase.id
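
# The procs below back the sales-module callbacks wired up in `start`:
# `onStore` downloads and validates a slot, `onProve` answers storage proof
# challenges, `onExpiryUpdate` extends block expiries and `onClear` is a
# placeholder for releasing local data.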

proc onStore(
  self: CodexNodeRef,
  request: StorageRequest,
  slotIdx: UInt256,
  blocksCb: BlocksCb): Future[?!void] {.async.} =
  ## Store data in local storage
  ##

  logScope:
    cid = request.content.cid
    slotIdx = slotIdx

  trace "Received a request to store a slot!"

  without cid =? Cid.init(request.content.cid).mapFailure, err:
    trace "Unable to parse Cid", cid
    return failure(err)

  without manifest =? (await self.fetchManifest(cid)), err:
    trace "Unable to fetch manifest for cid", cid, err = err.msg
    return failure(err)

  without builder =? Poseidon2Builder.new(self.blockStore, manifest), err:
    trace "Unable to create slots builder", err = err.msg
    return failure(err)

  let
    slotIdx = slotIdx.truncate(int)
    expiry = request.expiry.toSecondsSince1970

  if slotIdx > manifest.slotRoots.high:
    trace "Slot index not in manifest", slotIdx
    return failure(newException(CodexError, "Slot index not in manifest"))

  proc updateExpiry(blocks: seq[bt.Block]): Future[?!void] {.async.} =
    trace "Updating expiry for blocks", blocks = blocks.len

    let ensureExpiryFutures = blocks.mapIt(self.blockStore.ensureExpiry(it.cid, expiry))
    if updateExpiryErr =? (await allFutureResult(ensureExpiryFutures)).errorOption:
      return failure(updateExpiryErr)

    if not blocksCb.isNil and err =? (await blocksCb(blocks)).errorOption:
      trace "Unable to process blocks", err = err.msg
      return failure(err)

    return success()

  without indexer =? manifest.protectedStrategy.init(
    0, manifest.blocksCount() - 1, manifest.numSlots).catch and
    blksIter =? indexer.getIndicies(slotIdx).catch, err:
    trace "Unable to create indexing strategy from protected manifest", err = err.msg
    return failure(err)

  if err =? (await self.fetchBatched(
    manifest.treeCid,
    blksIter,
    onBatch = updateExpiry)).errorOption:
    trace "Unable to fetch blocks", err = err.msg
    return failure(err)

  without slotRoot =? (await builder.buildSlot(slotIdx.Natural)), err:
    trace "Unable to build slot", err = err.msg
    return failure(err)

  if cid =? slotRoot.toSlotCid() and cid != manifest.slotRoots[slotIdx.int]:
    trace "Slot root mismatch", manifest = manifest.slotRoots[slotIdx.int], recovered = slotRoot.toSlotCid()
    return failure(newException(CodexError, "Slot root mismatch"))

  return success()

proc onProve(
  self: CodexNodeRef,
  slot: Slot,
  challenge: ProofChallenge): Future[?!Groth16Proof] {.async.} =
  ## Generate a proof for a given slot and challenge
  ##

  let
    cidStr = slot.request.content.cid
    slotIdx = slot.slotIndex.truncate(Natural)

  logScope:
    cid = cidStr
    slot = slotIdx
    challenge = challenge

  trace "Received proof challenge"

  without cid =? Cid.init(cidStr).mapFailure, err:
    error "Unable to parse Cid", cid, err = err.msg
    return failure(err)

  without manifest =? await self.fetchManifest(cid), err:
    error "Unable to fetch manifest for cid", err = err.msg
    return failure(err)

  without builder =? Poseidon2Builder.new(self.blockStore, manifest), err:
    error "Unable to create slots builder", err = err.msg
    return failure(err)

  without sampler =? DataSampler.new(slotIdx, self.blockStore, builder), err:
    error "Unable to create data sampler", err = err.msg
    return failure(err)

  without proofInput =? await sampler.getProofInput(challenge, nSamples = 3), err:
    error "Unable to get proof input for slot", err = err.msg
    return failure(err)

  # TODO: send proofInput to circuit. Get proof. (Profit, repeat.)

  # For now: a dummy proof that is not all zeros, so that it is accepted by the
  # dummy verifier:
  var proof = Groth16Proof.default
  proof.a.x = 42.u256
  success(proof)

proc onExpiryUpdate(
  self: CodexNodeRef,
  rootCid: string,
  expiry: SecondsSince1970): Future[?!void] {.async.} =
  without cid =? Cid.init(rootCid):
    trace "Unable to parse Cid", cid
    let error = newException(CodexError, "Unable to parse Cid")
    return failure(error)

  return await self.updateExpiry(cid, expiry)

proc onClear(
  self: CodexNodeRef,
  request: StorageRequest,
  slotIndex: UInt256) =
  # TODO: remove data from local storage
  discard
|
2022-07-28 17:44:59 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
proc start*(self: CodexNodeRef) {.async.} =
|
|
|
|
|
if not self.engine.isNil:
|
|
|
|
|
await self.engine.start()
|
2023-11-28 21:04:11 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
if not self.erasure.isNil:
|
|
|
|
|
await self.erasure.start()
|
2023-11-28 21:04:11 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
if not self.discovery.isNil:
|
|
|
|
|
await self.discovery.start()
|
[marketplace] Add Reservations Module (#340)
* [marketplace] reservations module
- add de/serialization for Availability
- add markUsed/markUnused in persisted availability
- add query for unused
- add reserve/release
- reservation module tests
- split ContractInteractions into client contracts and host contracts
- remove reservations start/stop as the repo start/stop is being managed by the node
- remove dedicated reservations metadata store and use the metadata store from the repo instead
- Split ContractInteractions into:
- ClientInteractions (with purchasing)
- HostInteractions (with sales and proving)
- compilation fix for nim 1.2
[repostore] fix started flag, add tests
[marketplace] persist slot index
For loading the sales state from chain, the slot index was not previously persisted in the contract. Will retrieve the slot index from the contract when the sales state is loaded.
* Revert repostore changes
In favour of separate PR https://github.com/status-im/nim-codex/pull/374.
* remove warnings
* clean up
* tests: stop repostore during teardown
* change constructor type identifier
Change Contracts constructor to accept Contracts type instead of ContractInteractions.
* change constructor return type to Result instead of Option
* fix and split interactions tests
* clean up, fix tests
* find availability by slot id
* remove duplication in host/client interactions
* add test for finding availability by slotId
* log instead of raiseAssert when failed to mark availability as unused
* move to SaleErrored state instead of raiseAssert
* remove unneeded reverse
It appears that order is not preserved in the repostore, so reversing does not have the intended effect here.
* update open api spec for potential rest endpoint errors
* move functions about available bytes to repostore
* WIP: reserve and release availabilities as needed
WIP: not tested yet
Availabilities are marked as used when matched (just before downloading starts) so that future matching logic does not match an availability currently in use.
As the download progresses, batches of blocks are written to disk, and the equivalent bytes are released from the reservation module. The size of the availability is reduced as well.
During a reserve or release operation, availability updates occur after the repo is manipulated. If the availability update operation fails, the reserve or release is rolled back to maintain correct accounting of bytes.
Finally, once download completes, or if an error occurs, the availability is marked as unused so future matching can occur.
* delete availability when all bytes released
* fix tests + cleanup
* remove availability from SalesContext callbacks
Availability is no longer used past the SaleDownloading state in the state machine. Cleanup of Availability (marking unused) is handled directly in the SaleDownloading state, and no longer in SaleErrored or SaleFinished. Likewise, Availabilities shouldn’t need to be handled on node restart.
Additionally, Availability was being passed in SalesContext callbacks, and now that Availability is only used temporarily in the SaleDownloading state, Availability is contextually irrelevant to the callbacks, except in OnStore possibly, though it was not being consumed.
* test clean up
* - remove availability from callbacks and constructors from previous commit that needed to be removed (oopsie)
- fix integration test that checks availabilities
- there was a bug fixed that crashed the node due to a missing `return success` in onStore
- the test was fixed by ensuring that availabilities are remaining on the node, and the size has been reduced
- change Availability back to non-ref object and constructor back to init
- add trace logging of all state transitions in state machine
- add generally useful trace logging
* fixes after rebase
1. Fix onProve callbacks
2. Use Slot type instead of tuple for retrieving active slot.
3. Bump codex-contracts-eth that exposes getActivceSlot call.
* swap contracts branch to not support slot collateral
Slot collateral changes in the contracts require further changes in the client code, so we’ll skip those changes for now and add in a separate commit.
* modify Interactions and Deployment constructors
- `HostInteractions` and `ClientInteractions` constructors were simplified to take a contract address and no overloads
- `Interactions` prepared simplified so there are no overloads
- `Deployment` constructor updated so that it takes an optional string parameter, instead `Option[string]`
* Move `batchProc` declaration
`batchProc` needs to be consumed by both `node` and `salescontext`, and they can’t reference each other as it creates a circular dependency.
* [reservations] rename `available` to `hasAvailable`
* [reservations] default error message to inner error msg
* add SaleIngored state
When a storage request is handled but the request does match availabilities, the sales agent machine is sent to the SaleIgnored state. In addition, the agent is constructed in a way that if the request is ignored, the sales agent is removed from the list of active agents being tracked in the sales module.
2023-04-04 07:05:16 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
if not self.clock.isNil:
|
|
|
|
|
await self.clock.start()
|
2022-07-28 17:44:59 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
if hostContracts =? self.contracts.host:
|
|
|
|
|
hostContracts.sales.onStore =
|
|
|
|
|
proc(
|
|
|
|
|
request: StorageRequest,
|
|
|
|
|
slot: UInt256,
|
|
|
|
|
onBatch: BatchProc): Future[?!void] = self.onStore(request, slot, onBatch)
|
2023-11-22 10:09:12 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
hostContracts.sales.onExpiryUpdate =
|
|
|
|
|
proc(rootCid: string, expiry: SecondsSince1970): Future[?!void] =
|
|
|
|
|
self.onExpiryUpdate(rootCid, expiry)
|
2023-11-22 10:09:12 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
hostContracts.sales.onClear =
|
|
|
|
|
proc(request: StorageRequest, slotIndex: UInt256) =
|
2022-07-07 14:14:19 +00:00
|
|
|
|
# TODO: remove data from local storage
|
2024-01-15 16:45:04 +00:00
|
|
|
|
self.onClear(request, slotIndex)
|
2022-08-17 04:02:53 +00:00
|
|
|
|
|
2024-01-15 16:45:04 +00:00
|
|
|
|
hostContracts.sales.onProve =
|
2024-02-07 06:50:35 +00:00
|
|
|
|
proc(slot: Slot, challenge: ProofChallenge): Future[?!Groth16Proof] =
|
2024-01-15 16:45:04 +00:00
|
|
|
|
# TODO: generate proof
|
|
|
|
|
self.onProve(slot, challenge)
|
2022-07-28 17:44:59 +00:00
|
|
|
|
|
2022-08-09 04:29:06 +00:00
|
|
|
|
try:
|
[marketplace] Add Reservations Module (#340)
* [marketplace] reservations module
- add de/serialization for Availability
      await hostContracts.start()
    except CatchableError as error:
      error "Unable to start host contract interactions", error=error.msg
      self.contracts.host = HostInteractions.none
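
  # Client and validator interactions follow the same pattern as the host
  # interactions above: a failure to start is logged and the module is
  # disabled for this run, rather than aborting node startup.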
  if clientContracts =? self.contracts.client:
    try:
      await clientContracts.start()
    except CatchableError as error:
      error "Unable to start client contract interactions", error=error.msg
      self.contracts.client = ClientInteractions.none

  if validatorContracts =? self.contracts.validator:
    try:
      await validatorContracts.start()
    except CatchableError as error:
      error "Unable to start validator contract interactions", error=error.msg
      self.contracts.validator = ValidatorInteractions.none
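
  # This node is identified on the network by its libp2p peer id.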
  self.networkId = self.switch.peerInfo.peerId
  notice "Started codex node", id = self.networkId, addrs = self.switch.peerInfo.addrs

proc stop*(self: CodexNodeRef) {.async.} =
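  ## Stop the node: shut down the block exchange engine, erasure coding,
  ## discovery, the clock, any configured contract interactions, and finally
  ## close the block store.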
  trace "Stopping node"
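
  # Stop each subsystem only if it was initialised (non-nil).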
  if not self.engine.isNil:
    await self.engine.stop()

  if not self.erasure.isNil:
    await self.erasure.stop()

  if not self.discovery.isNil:
    await self.discovery.stop()

  if not self.clock.isNil:
    await self.clock.stop()
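
  # Contract interactions (client, host, validator) are optional; stop only
  # the ones that were configured on this node.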
  if clientContracts =? self.contracts.client:
    await clientContracts.stop()

  if hostContracts =? self.contracts.host:
    await hostContracts.stop()

  if validatorContracts =? self.contracts.validator:
    await validatorContracts.stop()
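
  # Close the block store last, once the subsystems above have stopped.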
  if not self.blockStore.isNil:
    await self.blockStore.close

proc new*(
  T: type CodexNodeRef,
  switch: Switch,
  store: BlockStore,
  engine: BlockExcEngine,
  erasure: Erasure,
  discovery: Discovery,
  contracts = Contracts.default): CodexNodeRef =
  ## Create new instance of a Codex node, call `start` to run it
  ##

  CodexNodeRef(
    switch: switch,
    blockStore: store,
    engine: engine,
    erasure: erasure,
    discovery: discovery,
    contracts: contracts)
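
# A minimal usage sketch (illustrative only): constructing the switch, store,
# engine, erasure and discovery instances is assumed to happen elsewhere in
# the codebase and is not shown here.
#
#   let node = CodexNodeRef.new(switch, store, engine, erasure, discovery)
#   await node.start()
#   # ... serve requests ...
#   await node.stop()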