Compare commits


15 Commits

Author SHA1 Message Date
Slava
0c647d8337
chore: new marketplace address for testnet (#961)
https://github.com/codex-storage/infra-codex/issues/248

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-10-21 13:31:54 +03:00
Ben Bierens
f196caf8cb
Download API upgrade (#955)
* Adds API for fetching manifest only and downloading dataset without stream

* Updates openapi.yaml

* Adds tests for downloading manifest-only and without stream.

* review comments by Giuliano

* updates test clients
2024-10-21 13:25:19 +03:00
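
The #955 change above is easiest to picture from the client side. A minimal sketch, assuming a local node on the default API port and a placeholder CID; endpoint paths and methods are inferred from the PR title, not verified:

# Hedged sketch: fetch only the manifest, then trigger a non-streaming fetch.
import std/httpclient

let
  cid = "zDvZRwzm..."  # placeholder CID, hypothetical
  base = "http://localhost:8080/api/codex/v1/data/" & cid

var client = newHttpClient()
echo client.getContent(base & "/network/manifest")  # manifest only
discard client.getContent(base & "/network")        # fetch dataset into the node, no stream
client.close()
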
Adam Uhlíř
bf1434d192
docs: openapi node fix (#950) 2024-10-21 13:25:15 +03:00
Adam Uhlíř
00ab8d712e
ci: linux ci runs on ubuntu-20.04 (#953)
* ci: linux ci runs use ubuntu-20.04

* ci: use ubuntu-20.04 for nim-matrix

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-10-21 13:25:10 +03:00
Ben Bierens
21d996ab3f
Adds log for cirdl download URL (#948) 2024-10-21 13:24:52 +03:00
Adam Uhlíř
eff0d8cd18
feat: partial rewards and withdraws (#880)
* feat: partial rewards and withdraws

* test: missing reserve slot

* test: fix contracts
2024-10-21 13:24:47 +03:00
Ben Bierens
b0607d3fdb
Handles LPStreamError in chunker (#947)
* Handles LPStreamError in chunker

* Adds test for lpstream exception

* Adds tests for other stream exceptions. Cleanup.
2024-10-21 13:24:38 +03:00
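
For context on #947 above: the chunker reads from an LPStream, and a stream failure used to propagate as a crash. A rough sketch of the guard, with illustrative names rather than the PR's actual code:

import pkg/chronos
import pkg/libp2p/stream/lpstream

proc readChunk(stream: LPStream, size: int): Future[seq[byte]] {.async.} =
  var buf = newSeq[byte](size)
  try:
    let n = await stream.readOnce(addr buf[0], buf.len)
    buf.setLen(n)
  except LPStreamError:
    buf.setLen(0)  # treat a failed stream as end-of-data instead of crashing
  return buf
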
Arnaud
859b7ea0e5
fix(restapi): Add cors headers when the request is returning errors (#942)
* Add cors headers when the request is returning errors

* Prevent nim-presto from sending multiple cors headers
2024-10-21 13:24:32 +03:00
Eric
29549935ad
Support enforcement of slot reservations before filling slot (#934) 2024-10-21 13:22:55 +03:00
Slava
47061bf29b
Release v0.1.6 (#945)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
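
A minimal sketch of the withLock idea from the lock fixes above, built on chronos' AsyncLock; the PR's actual template may differ in detail:

import pkg/chronos

template withLock(lock: AsyncLock, body: untyped) =
  await lock.acquire()
  try:
    body
  finally:
    lock.release()

proc createReservationSafely(lock: AsyncLock) {.async.} =
  withLock(lock):
    # critical section: mutate reservations without racing other tasks
    await sleepAsync(10.milliseconds)
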

* Block deletion with ref count & repostore refactor (#631)
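
As a toy model of #631's deletion with ref count: a block is only physically removed once its last reference is dropped. The real RepoStore tracks counts in its datastore; this table is a stand-in:

import std/tables

type RefCounts = Table[string, int]  # cid -> reference count (illustrative)

proc incRef(rc: var RefCounts, cid: string) =
  rc[cid] = rc.getOrDefault(cid, 0) + 1

proc decRef(rc: var RefCounts, cid: string): bool =
  ## true when the block itself should now be deleted
  let n = rc.getOrDefault(cid, 0) - 1
  if n <= 0:
    rc.del(cid)
    return true
  rc[cid] = n
  false
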

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually
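
The gist of #838 as a one-line sketch, with assumed field names: a protected (erasure-coded) dataset stores original blocks plus parity, and the stream must stop at the original size:

# datasetSize includes parity for protected manifests; originalSize does not.
proc readableBytes(datasetSize, originalSize: int, protected: bool): int =
  if protected: originalSize else: datasetSize
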

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>
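
A hedged sketch of the kind of constraint #848 enforces when a storage request specifies erasure coding; the exact rule in the REST handler may differ:

proc validEcParams(datasetBlocks, nodes, tolerance: int): bool =
  if nodes <= 0 or tolerance < 0 or tolerance >= nodes:
    return false
  # "dataset too small": it must split into at least nodes - tolerance pieces
  datasetBlocks >= nodes - tolerance
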

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header
exec git commit --amend --no-edit -S

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request api returns an error  (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent
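
The shape of the #872 refactor, approximately: one generic query over a shared parent type instead of one proc per event. These are stand-in types; the real code uses nim-ethers:

type
  MarketplaceEvent = ref object of RootObj
  SlotFilled = ref object of MarketplaceEvent
  RequestFulfilled = ref object of MarketplaceEvent

proc queryPastEvents[T: MarketplaceEvent](fromBlock: int): seq[T] =
  @[]  # single implementation shared by every event type

let filled = queryPastEvents[SlotFilled](fromBlock = 0)
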

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests needed to defer building the cli params until just before starting the node; otherwise, when params were added ad hoc, an error was raised whenever a non-required parameter preceded a required one.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments
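
After the #870 refactor, the call shapes described above look roughly like this; signatures are hypothetical Option-based stand-ins, not the actual Market abstraction:

import std/options

type Address = string  # stand-in for an Ethereum address type

proc freeSlot(slotId: string,
              rewardRecipient = none(Address),
              collateralRecipient = none(Address)) =
  discard  # defaults preserve the old single-address behaviour

proc withdrawFunds(requestId: string,
                   withdrawRecipient = none(Address)) =
  discard
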

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>
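
Condensed, the downloader described above does roughly this; URL layout and names are assumptions (the real cirdl derives the circuit hash via the marketplace contract and uses a chronos HTTP session):

import std/[httpclient, os]
import pkg/zippy/tarballs

proc fetchCircuit(baseUrl, circuitHash, circuitDir: string) =
  createDir(circuitDir)                       # create circuit dir if missing
  let archive = circuitDir / circuitHash & ".tar.gz"
  var client = newHttpClient()
  try:
    client.downloadFile(baseUrl & "/" & circuitHash & ".tar.gz", archive)
  finally:
    client.close()
  extractAll(archive, circuitDir / "files")   # zippy unpacks the tar.gz
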

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* remove erasure and por parameters from openapi spec (#915)

* Move Building Codex guide to the main docs site (#893)

* updates Marketplace tutorial documentation (#888)

* updates Marketplace tutorial documentation

* Applies review comments to marketplace-tutorial

* Final formatting touches

* moved `Prerequisites` around

* Fixes indentation in one JSON snippet

* Use CLI args when passed for cirdl in Docker entrypoint (#927)

* Use CLI args when passed for cirdl in Docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase CI timeout

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Validator - support partitioning of the  slot id space (#890)

* Adds validatorPartitionSize and validatorPartitionIndex config options

* adds partitioning options to the validation type

* adds partitioning logic to the validator

* ignores partitionIndex when partitionSize is either 0 or 1

* clips the partition index to partitionIndex mod partitionSize

* handles negative values for the validation partition index

* updates long description of the new validator cli options

* makes default partitionSize to be 0 for better backward compatibility

* Improving formatting on validator CLI

* refactors validation params into a separate type and simplifies validation of validation params

* removes suspected duplication

* fixes typo in validator CLI help

* updates README

* Applies review comments - using optionals and range types to handle validation params

* Adds initializer to the configFactory for validatorMaxSlots

* [Review] update validator CLI description and README

* [Review]: renaming validationParams to validationConfig (config)

* [Review]: move validationconfig.nim to a higher level (next to validation.nim)

* changes backing type of MaxSlots to be int and makes sure slots are validated without limit when maxSlots is set to 0

* adds more end-to-end test for the validator and the groups

* fixes typo in README and conf.nim

* makes `maxSlotsConstraintRespected` and `shouldValidateSlot` private + updates the tests

* fixes public address of the signer account in the marketplace tutorial

* applies review comments - removes two tests
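
The partitioning rule spelled out in the bullets above, reconstructed as a sketch; types and names are illustrative, not the validator's real code:

proc shouldValidate(slotId: uint64, partitionSize, partitionIndex: int): bool =
  if partitionSize <= 1:
    return true  # partitioning disabled; partitionIndex is ignored
  # negative indices wrap: clip to partitionIndex mod partitionSize
  let idx = ((partitionIndex mod partitionSize) + partitionSize) mod partitionSize
  slotId mod uint64(partitionSize) == uint64(idx)

assert shouldValidate(11'u64, 0, 7)    # size 0 => everything is validated
assert shouldValidate(10'u64, 4, 2)    # 10 mod 4 == 2
assert shouldValidate(11'u64, 4, -1)   # -1 wraps to 3; 11 mod 4 == 3
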

* Remove moved docs (#930)

* Remove moved document

* Update main Readme and point links to the documentation site

* feat(slot-reservations): Support reserving slots (#907)

* feat(slot-reservations): Support reserving slots

Closes #898.

Wire up reserveSlot and canReserveSlot contract calls, but don't call them

* Remove return value from `reserveSlot`

* convert EthersError to MarketError

* Move convertEthersError to reserveSlot

* bump codex-contracts-eth after rebase

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* bump codex-contracts-eth after rebase

* bump codex-contracts-eth to master after codex-contracts-eth/pull/177 merged

* feat(slot-reservations): Add SaleSlotReserving state (#917)

* convert EthersError to MarketError

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* Add SaleSlotReserving

Adds a new state, SaleSlotReserving, that attempts to reserve a slot before downloading.
If the slot cannot be reserved, the state moves to SaleIgnored.
On error, the state moves to SaleErrored.

SaleIgnored is also updated to pass in `reprocessSlot` and `returnBytes`, controlling the behaviour in the Sales module after the slot is ignored. This is because previously it was assumed that SaleIgnored was only reached when there was no Availability. This is no longer the case, since SaleIgnored can now be reached when a slot cannot be reserved.

* Update SalePreparing

Specify `reprocessSlot` and `returnBytes` when moving to `SaleIgnored` from `SalePreparing`.

Update tests to include test for a raised CatchableError.

* Fix unit test

* Modify `canReserveSlot` and `reserveSlot` params after rebase

* Update MockMarket with new `canReserveSlot` and `reserveSlot` params

* fix after rebase

also bump codex-contracts-eth to master
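
The state flow from #917, condensed; the enum and proc are stand-ins for the sales state machine, not copied from it:

type SaleState = enum
  saleSlotReserving, saleDownloading, saleIgnored, saleErrored

proc afterReserving(canReserve, reserveSucceeded: bool): SaleState =
  if not canReserve:
    return saleIgnored      # slot already reserved elsewhere; reprocess later
  if not reserveSucceeded:
    return saleErrored
  saleDownloading           # reservation held: safe to start the download
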

* Use Ubuntu 20.04 for Linux amd64 releases (#939)

* Use Ubuntu 20.04 for Linux amd64 releases (#932)

* Accept branches with the slash in the name for release workflow (#932)

* Increase artifacts retention-days for release workflow (#932)

* feat(slot-reservations): support SlotReservationsFull event (#926)

* Remove moved docs (#935)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fix: null-ref in networkPeer (#937)

* Fixes nullref in networkPeer

* Removes inflight semaphore

* Revert "Removes inflight semaphore"

This reverts commit 26ec15c6f788df3adb6ff3b912a0c4b5d3139358.

* docs(openapi): provide better documentation for space endpoint parameters (#921)

* Trying to improve documentation

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

---------

Signed-off-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>

* Update Codex Testnet marketplace contract address (#944)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Signed-off-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Co-authored-by: Marcin Czenko <marcin.czenko@pm.me>
2024-10-08 12:22:12 +03:00
Slava
7ba5e8c13a
Release v0.1.5 (#941)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header
exec git commit --amend --no-edit -S

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request api returns an error  (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests needed to defer building the cli params until just before starting the node; otherwise, when params were added ad hoc, an error was raised whenever a non-required parameter preceded a required one.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* remove erasure and por parameters from openapi spec (#915)

* Move Building Codex guide to the main docs site (#893)

* updates Marketplace tutorial documentation (#888)

* updates Marketplace tutorial documentation

* Applies review comments to marketplace-tutorial

* Final formatting touches

* moved `Prerequisites` around

* Fixes indentation in one JSON snippet

* Use CLI args when passed for cirdl in Docker entrypoint (#927)

* Use CLI args when passed for cirdl in Docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase CI timeout

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Validator - support partitioning of the  slot id space (#890)

* Adds validatorPartitionSize and validatorPartitionIndex config options

* adds partitioning options to the validation type

* adds partitioning logic to the validator

* ignores partitionIndex when partitionSize is either 0 or 1

* clips the partition index to partitionIndex mod partitionSize

* handles negative values for the validation partition index

* updates long description of the new validator cli options

* makes default partitionSize to be 0 for better backward compatibility

* Improving formatting on validator CLI

* refactors validation params into a separate type and simplifies validation of validation params

* removes suspected duplication

* fixes typo in validator CLI help

* updates README

* Applies review comments - using optionals and range types to handle validation params

* Adds initializer to the configFactory for validatorMaxSlots

* [Review] update validator CLI description and README

* [Review]: renaming validationParams to validationConfig (config)

* [Review]: move validationconfig.nim to a higher level (next to validation.nim)

* changes backing type of MaxSlots to be int and makes sure slots are validated without limit when maxSlots is set to 0

* adds more end-to-end test for the validator and the groups

* fixes typo in README and conf.nim

* makes `maxSlotsConstraintRespected` and `shouldValidateSlot` private + updates the tests

* fixes public address of the signer account in the marketplace tutorial

* applies review comments - removes two tests

* Remove moved docs (#930)

* Remove moved document

* Update main Readme and point links to the documentation site

* feat(slot-reservations): Support reserving slots (#907)

* feat(slot-reservations): Support reserving slots

Closes #898.

Wire up reserveSlot and canReserveSlot contract calls, but don't call them

* Remove return value from `reserveSlot`

* convert EthersError to MarketError

* Move convertEthersError to reserveSlot

* bump codex-contracts-eth after rebase

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* bump codex-contracts-eth after rebase

* bump codex-contracts-eth to master after codex-contracts-eth/pull/177 merged

* feat(slot-reservations): Add SaleSlotReserving state (#917)

* convert EthersError to MarketError

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* Add SaleSlotReserving

Adds a new state, SaleSlotReserving, that attempts to reserve a slot before downloading.
If the slot cannot be reserved, the state moves to SaleIgnored.
On error, the state moves to SaleErrored.

SaleIgnored is also updated to pass in `reprocessSlot` and `returnBytes`, controlling the behaviour in the Sales module after the slot is ignored. This is because previously it was assumed that SaleIgnored was only reached when there was no Availability. This is no longer the case, since SaleIgnored can now be reached when a slot cannot be reserved.

* Update SalePreparing

Specify `reprocessSlot` and `returnBytes` when moving to `SaleIgnored` from `SalePreparing`.

Update tests to include test for a raised CatchableError.

* Fix unit test

* Modify `canReserveSlot` and `reserveSlot` params after rebase

* Update MockMarket with new `canReserveSlot` and `reserveSlot` params

* fix after rebase

also bump codex-contracts-eth to master

* Use Ubuntu 20.04 for Linux amd64 releases (#939)

* Use Ubuntu 20.04 for Linux amd64 releases (#932)

* Accept branches with the slash in the name for release workflow (#932)

* Increase artifacts retention-days for release workflow (#932)

* feat(slot-reservations): support SlotReservationsFull event (#926)

* Remove moved docs (#935)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fix: null-ref in networkPeer (#937)

* Fixes nullref in networkPeer

* Removes inflight semaphore

* Revert "Removes inflight semaphore"

This reverts commit 26ec15c6f788df3adb6ff3b912a0c4b5d3139358.

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Co-authored-by: Marcin Czenko <marcin.czenko@pm.me>
2024-10-07 15:27:25 +03:00
Slava
484124db09
Release v0.1.4 (#912)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header
exec git commit --amend --no-edit -S

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request api returns an error  (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests needed to defer building the cli params until just before starting the node; otherwise, when params were added ad hoc, an error was raised whenever a non-required parameter preceded a required one.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
2024-09-24 13:19:58 +03:00
Slava
89917d4bb6
Release v0.1.3 (#856) 2024-07-03 20:20:53 +03:00
Slava
7602adc0df
Release v0.1.2 (#847)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
2024-06-27 08:51:50 +03:00
Slava
15ff87a8bb
Merge latest master into release (#842)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
2024-06-26 05:38:04 +03:00
425 changed files with 12595 additions and 24530 deletions

@@ -1,2 +0,0 @@
-# Formatted with nph v0.6.1-0-g0d8000e
-e5df8c50d3b6e70e6eec1ff031657d2b7bb6fe63

@@ -11,16 +11,13 @@ inputs:
     default: "amd64"
   nim_version:
     description: "Nim version"
-    default: "v2.0.14"
+    default: "version-1-6"
   rust_version:
     description: "Rust version"
-    default: "1.79.0"
+    default: "1.78.0"
   shell:
     description: "Shell to run commands in"
     default: "bash --noprofile --norc -e -o pipefail"
-  coverage:
-    description: "True if the process is used for coverage"
-    default: false
 runs:
   using: "composite"
   steps:
@@ -34,8 +31,8 @@ runs:
       if: inputs.os == 'linux' && (inputs.cpu == 'amd64' || inputs.cpu == 'arm64')
       shell: ${{ inputs.shell }} {0}
       run: |
-        sudo apt-get update -qq
-        sudo DEBIAN_FRONTEND='noninteractive' apt-get install \
+        sudo apt-fast update -qq
+        sudo DEBIAN_FRONTEND='noninteractive' apt-fast install \
           --no-install-recommends -yq lcov
     - name: APT (Linux i386)
@@ -43,8 +40,8 @@ runs:
       shell: ${{ inputs.shell }} {0}
       run: |
         sudo dpkg --add-architecture i386
-        sudo apt-get update -qq
-        sudo DEBIAN_FRONTEND='noninteractive' apt-get install \
+        sudo apt-fast update -qq
+        sudo DEBIAN_FRONTEND='noninteractive' apt-fast install \
          --no-install-recommends -yq gcc-multilib g++-multilib
     - name: Homebrew (macOS)
@@ -81,48 +78,11 @@ runs:
         mingw-w64-i686-ntldd-git
         mingw-w64-i686-rust
-    - name: Install gcc 14 on Linux
-      # We don't want to install gcc 14 for coverage (Ubuntu 20.04)
-      if: ${{ inputs.os == 'linux' && inputs.coverage != 'true' }}
-      shell: ${{ inputs.shell }} {0}
-      run: |
-        # Skip for older Ubuntu versions
-        if [[ $(lsb_release -r | awk -F '[^0-9]+' '{print $2}') -ge 24 ]]; then
-          # Install GCC-14
-          sudo apt-get update -qq
-          sudo apt-get install -yq gcc-14
-          # Add GCC-14 to alternatives
-          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-14 14
-          # Set GCC-14 as the default
-          sudo update-alternatives --set gcc /usr/bin/gcc-14
-        fi
-    - name: Install ccache on Linux/Mac
-      if: inputs.os == 'linux' || inputs.os == 'macos'
-      uses: hendrikmuhs/ccache-action@v1.2
-      with:
-        create-symlink: true
-        key: ${{ inputs.os }}-${{ inputs.builder }}-${{ inputs.cpu }}-${{ inputs.tests }}-${{ inputs.nim_version }}
-        evict-old-files: 7d
-    - name: Install ccache on Windows
-      if: inputs.os == 'windows'
-      uses: hendrikmuhs/ccache-action@v1.2
-      with:
-        key: ${{ inputs.os }}-${{ inputs.builder }}-${{ inputs.cpu }}-${{ inputs.tests }}-${{ inputs.nim_version }}
-        evict-old-files: 7d
-    - name: Enable ccache on Windows
+    - name: MSYS2 (Windows All) - Downgrade to gcc 13
       if: inputs.os == 'windows'
       shell: ${{ inputs.shell }} {0}
       run: |
-        CCACHE_DIR=$(dirname $(which ccache))/ccached
-        mkdir ${CCACHE_DIR}
-        ln -s $(which ccache) ${CCACHE_DIR}/gcc.exe
-        ln -s $(which ccache) ${CCACHE_DIR}/g++.exe
-        ln -s $(which ccache) ${CCACHE_DIR}/cc.exe
-        ln -s $(which ccache) ${CCACHE_DIR}/c++.exe
-        echo "export PATH=${CCACHE_DIR}:\$PATH" >> $HOME/.bash_profile # prefix path in MSYS2
+        pacman -U --noconfirm https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-13.2.0-6-any.pkg.tar.zst https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-libs-13.2.0-6-any.pkg.tar.zst
     - name: Derive environment variables
       shell: ${{ inputs.shell }} {0}
@@ -181,11 +141,8 @@ runs:
         llvm_bin_dir="${llvm_dir}/bin"
         llvm_lib_dir="${llvm_dir}/lib"
         echo "${llvm_bin_dir}" >> ${GITHUB_PATH}
-        # Make sure ccache has precedence (GITHUB_PATH is appending before)
-        echo "$(brew --prefix)/opt/ccache/libexec" >> ${GITHUB_PATH}
-        echo $PATH
         echo "LDFLAGS=${LDFLAGS} -L${libomp_lib_dir} -L${llvm_lib_dir} -Wl,-rpath,${llvm_lib_dir}" >> ${GITHUB_ENV}
-        NIMFLAGS="${NIMFLAGS} $(quote "-d:LeopardCmakeFlags='-DCMAKE_BUILD_TYPE=Release' -d:LeopardExtraCompilerFlags='-fopenmp' -d:LeopardExtraLinkerFlags='-fopenmp -L${libomp_lib_dir}'")"
+        NIMFLAGS="${NIMFLAGS} $(quote "-d:LeopardCmakeFlags='-DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=${llvm_bin_dir}/clang -DCMAKE_CXX_COMPILER=${llvm_bin_dir}/clang++' -d:LeopardExtraCompilerlags='-fopenmp' -d:LeopardExtraLinkerFlags='-fopenmp -L${libomp_lib_dir}'")"
         echo "NIMFLAGS=${NIMFLAGS}" >> $GITHUB_ENV
       fi
@@ -202,7 +159,6 @@ runs:
     - name: Restore Nim toolchain binaries from cache
       id: nim-cache
       uses: actions/cache@v4
-      if: ${{ inputs.coverage != 'true' }}
       with:
         path: NimBinaries
         key: ${{ inputs.os }}-${{ inputs.cpu }}-nim-${{ inputs.nim_version }}-cache-${{ env.cache_nonce }}-${{ github.run_id }}
@@ -212,17 +168,9 @@ runs:
       shell: ${{ inputs.shell }} {0}
       run: echo "NIM_COMMIT=${{ inputs.nim_version }}" >> ${GITHUB_ENV}
-    - name: MSYS2 (Windows All) - Disable git symbolic links (since miniupnp 2.2.5)
-      if: inputs.os == 'windows'
+    - name: Build Nim and Codex dependencies
       shell: ${{ inputs.shell }} {0}
       run: |
-        git config --global core.symlinks false
-    - name: Build Nim and Logos Storage dependencies
-      shell: ${{ inputs.shell }} {0}
-      run: |
-        which gcc
-        gcc --version
        make -j${ncpu} CI_CACHE=NimBinaries ${ARCH_OVERRIDE} QUICK_AND_DIRTY_COMPILER=1 update
        echo
        ./env.sh nim --version
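For orientation, the composite action above is consumed from workflow jobs elsewhere in this compare (see the CI workflow hunks below). A minimal sketch of a caller; the step name and input values are illustrative, not taken verbatim from this repository:

```yaml
# Sketch of a job invoking the composite action above; values are examples.
steps:
  - name: Checkout sources
    uses: actions/checkout@v4
    with:
      submodules: recursive
  - name: Setup build system
    uses: ./.github/actions/nimbus-build-system
    with:
      os: linux
      shell: bash --noprofile --norc -e -o pipefail
      nim_version: pinned
```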

@@ -3,14 +3,12 @@ Tips for shorter build times
 ### Runner availability ###
-When running on the Github free, pro or team plan, the bottleneck when
-optimizing workflows is the availability of macOS runners. Therefore, anything
-that reduces the time spent in macOS jobs will have a positive impact on the
-time waiting for runners to become available. On the Github enterprise plan,
-this is not the case and you can more freely use parallelization on multiple
-runners. The usage limits for Github Actions are [described here][limits]. You
-can see a breakdown of runner usage for your jobs in the Github Actions tab
-([example][usage]).
+Currently, the biggest bottleneck when optimizing workflows is the availability
+of Windows and macOS runners. Therefore, anything that reduces the time spent in
+Windows or macOS jobs will have a positive impact on the time waiting for
+runners to become available. The usage limits for Github Actions are [described
+here][limits]. You can see a breakdown of runner usage for your jobs in the
+Github Actions tab ([example][usage]).
 ### Windows is slow ###
@@ -24,10 +22,11 @@ analysis, etc. are therefore better performed on a Linux runner.
 Breaking up a long build job into several jobs that you run in parallel can have
 a positive impact on the wall clock time that a workflow runs. For instance, you
-might consider running unit tests and integration tests in parallel. When
-running on the Github free, pro or team plan, keep in mind that availability of
-macOS runners is a bottleneck. If you split a macOS job into two jobs, you now
-need to wait for two macOS runners to become available.
+might consider running unit tests and integration tests in parallel. Keep in
+mind however that availability of macOS and Windows runners is the biggest
+bottleneck. If you split a Windows job into two jobs, you now need to wait for
+two Windows runners to become available! Therefore parallelization often only
+makes sense for Linux jobs.
 ### Refactoring ###
@@ -67,10 +66,9 @@ might seem inconvenient, because when you're debugging an issue you often want
 to know whether you introduced a failure on all platforms, or only on a single
 one. You might be tempted to disable fail-fast, but keep in mind that this keeps
 runners busy for longer on a workflow that you know is going to fail anyway.
-Consequent runs will therefore take longer to start. Fail fast is most likely
-better for overall development speed.
+Consequent runs will therefore take longer to start. Fail fast is most likely better for overall development speed.
-[usage]: https://github.com/logos-storage/logos-storage-nim/actions/runs/3462031231/usage
+[usage]: https://github.com/codex-storage/nim-codex/actions/runs/3462031231/usage
 [composite]: https://docs.github.com/en/actions/creating-actions/creating-a-composite-action
 [reusable]: https://docs.github.com/en/actions/using-workflows/reusing-workflows
 [cache]: https://github.com/actions/cache/blob/main/workarounds.md#update-a-cache
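The parallelization and fail-fast advice in the document above maps directly onto a matrix strategy. A minimal sketch; the job name, test targets, and runner label are illustrative, not from this repository:

```yaml
# Sketch only: split suites across parallel Linux jobs and keep fail-fast on.
jobs:
  tests:
    strategy:
      fail-fast: true   # the default; cancels sibling jobs on the first failure
      matrix:
        suite: [unittest, contract, integration]
    runs-on: ubuntu-latest   # parallelize on Linux, where runners are plentiful
    steps:
      - uses: actions/checkout@v4
      - run: make ${{ matrix.suite }}
```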

@@ -24,9 +24,9 @@ jobs:
       run:
         shell: ${{ matrix.shell }} {0}
-    name: ${{ matrix.os }}-${{ matrix.tests }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.job_number }}
+    name: '${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.tests }}'
     runs-on: ${{ matrix.builder }}
-    timeout-minutes: 90
+    timeout-minutes: 100
     steps:
       - name: Checkout sources
         uses: actions/checkout@v4
@@ -38,32 +38,28 @@ jobs:
         uses: ./.github/actions/nimbus-build-system
         with:
           os: ${{ matrix.os }}
-          cpu: ${{ matrix.cpu }}
           shell: ${{ matrix.shell }}
           nim_version: ${{ matrix.nim_version }}
-          coverage: false
       ## Part 1 Tests ##
       - name: Unit tests
         if: matrix.tests == 'unittest' || matrix.tests == 'all'
         run: make -j${ncpu} test
+      # workaround for https://github.com/NomicFoundation/hardhat/issues/3877
       - name: Setup Node.js
-        if: matrix.tests == 'contract' || matrix.tests == 'integration' || matrix.tests == 'tools' || matrix.tests == 'all'
         uses: actions/setup-node@v4
         with:
-          node-version: 22
-      - name: Start Ethereum node with Logos Storage contracts
+          node-version: 18.15
+      - name: Start Ethereum node with Codex contracts
         if: matrix.tests == 'contract' || matrix.tests == 'integration' || matrix.tests == 'tools' || matrix.tests == 'all'
-        working-directory: vendor/logos-storage-contracts-eth
+        working-directory: vendor/codex-contracts-eth
         env:
           MSYS2_PATH_TYPE: inherit
         run: |
-          npm ci
+          npm install
          npm start &
-          # Wait for the contracts to be deployed
-          sleep 5
       ## Part 2 Tests ##
       - name: Contract tests
@@ -73,15 +69,13 @@ jobs:
       ## Part 3 Tests ##
       - name: Integration tests
         if: matrix.tests == 'integration' || matrix.tests == 'all'
-        env:
-          CODEX_INTEGRATION_TEST_INCLUDES: ${{ matrix.includes }}
         run: make -j${ncpu} testIntegration
       - name: Upload integration tests log files
         uses: actions/upload-artifact@v4
         if: (matrix.tests == 'integration' || matrix.tests == 'all') && always()
         with:
-          name: ${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.job_number }}-integration-tests-logs
+          name: ${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-integration-tests-logs
          path: tests/integration/logs/
          retention-days: 1

@@ -9,28 +9,31 @@ on:
 env:
   cache_nonce: 0 # Allows for easily busting actions/cache caches
-  nim_version: v2.2.4
+  nim_version: pinned
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref || github.run_id }}
   cancel-in-progress: true
 jobs:
   matrix:
     runs-on: ubuntu-latest
     outputs:
       matrix: ${{ steps.matrix.outputs.matrix }}
       cache_nonce: ${{ env.cache_nonce }}
     steps:
-      - name: Checkout sources
-        uses: actions/checkout@v4
       - name: Compute matrix
         id: matrix
-        run: |
-          echo 'matrix<<EOF' >> $GITHUB_OUTPUT
-          tools/scripts/ci-job-matrix.sh >> $GITHUB_OUTPUT
-          echo 'EOF' >> $GITHUB_OUTPUT
+        uses: fabiocaccamo/create-matrix-action@v4
+        with:
+          matrix: |
+            os {linux}, cpu {amd64}, builder {ubuntu-20.04}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {macos}, cpu {amd64}, builder {macos-13}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {unittest}, nim_version {${{ env.nim_version }}}, shell {msys2}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {contract}, nim_version {${{ env.nim_version }}}, shell {msys2}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {integration}, nim_version {${{ env.nim_version }}}, shell {msys2}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {tools}, nim_version {${{ env.nim_version }}}, shell {msys2}
 build:
   needs: matrix
@@ -39,21 +42,8 @@ jobs:
       matrix: ${{ needs.matrix.outputs.matrix }}
       cache_nonce: ${{ needs.matrix.outputs.cache_nonce }}
-  linting:
-    runs-on: ubuntu-latest
-    if: github.event_name == 'pull_request'
-    steps:
-      - uses: actions/checkout@v4
-      - name: Check `nph` formatting
-        uses: arnetheduck/nph-action@v1
-        with:
-          version: 0.6.1
-          options: "codex/ tests/"
-          fail: true
-          suggest: true
   coverage:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     steps:
       - name: Checkout sources
         uses: actions/checkout@v4
@@ -66,7 +56,6 @@ jobs:
         with:
           os: linux
           nim_version: ${{ env.nim_version }}
-          coverage: true
       - name: Generate coverage data
         run: |

@@ -1,19 +0,0 @@
-name: Conventional Commits Linting
-on:
-  push:
-    branches:
-      - master
-  pull_request:
-  workflow_dispatch:
-  merge_group:
-jobs:
-  pr-title:
-    runs-on: ubuntu-latest
-    if: github.event_name == 'pull_request'
-    steps:
-      - name: PR Conventional Commit Validation
-        uses: ytanikin/pr-conventional-commits@1.4.1
-        with:
-          task_types: '["feat","fix","docs","test","ci","build","refactor","style","perf","chore","revert"]'
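For reference, this removed workflow validated PR titles against the Conventional Commits format, i.e. `type: summary` or `type(scope): summary`, with `type` drawn from the task_types list above. Hypothetical titles that would pass:

```
feat: add retry logic to block downloads
fix(restapi): return 404 for unknown CIDs
```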

.github/workflows/docker-dist-tests.yml (vendored, new file, 33 lines)

@@ -0,0 +1,33 @@
+name: Docker - Dist-Tests
+on:
+  push:
+    branches:
+      - master
+    tags:
+      - 'v*.*.*'
+    paths-ignore:
+      - '**/*.md'
+      - '.gitignore'
+      - '.github/**'
+      - '!.github/workflows/docker-dist-tests.yml'
+      - '!.github/workflows/docker-reusable.yml'
+      - 'docker/**'
+      - '!docker/codex.Dockerfile'
+      - '!docker/docker-entrypoint.sh'
+  workflow_dispatch:
+jobs:
+  build-and-push:
+    name: Build and Push
+    uses: ./.github/workflows/docker-reusable.yml
+    with:
+      nimflags: '-d:disableMarchNative -d:codex_enable_api_debug_peers=true -d:codex_enable_proof_failures=true -d:codex_enable_log_counter=true -d:verify_circuit=true'
+      nat_ip_auto: true
+      tag_latest: ${{ github.ref_name == github.event.repository.default_branch || startsWith(github.ref, 'refs/tags/') }}
+      tag_suffix: dist-tests
+      continuous_tests_list: PeersTest HoldMyBeerTest
+      continuous_tests_duration: 12h
+    secrets: inherit
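Since the workflow above also declares a `workflow_dispatch` trigger, a build can be started manually as well, for example with the GitHub CLI (assuming sufficient repository permissions):

```bash
gh workflow run docker-dist-tests.yml --ref master
```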

@@ -34,11 +34,6 @@ on:
       description: Set latest tag for Docker images
       required: false
       type: boolean
-    tag_stable:
-      default: false
-      description: Set stable tag for Docker images
-      required: false
-      type: boolean
     tag_sha:
       default: true
       description: Set Git short commit as Docker tag
@@ -59,19 +54,6 @@ on:
       description: Continuous Tests duration
       required: false
       type: string
-    run_release_tests:
-      description: Run Release tests
-      required: false
-      type: string
-      default: false
-    contract_image:
-      description: Specifies compatible smart contract image
-      required: false
-      type: string
-  outputs:
-    codex_image:
-      description: Logos Storage Docker image tag
-      value: ${{ jobs.publish.outputs.codex_image }}
 env:
@@ -82,33 +64,19 @@ env:
   NIMFLAGS: ${{ inputs.nimflags }}
   NAT_IP_AUTO: ${{ inputs.nat_ip_auto }}
   TAG_LATEST: ${{ inputs.tag_latest }}
-  TAG_STABLE: ${{ inputs.tag_stable }}
   TAG_SHA: ${{ inputs.tag_sha }}
   TAG_SUFFIX: ${{ inputs.tag_suffix }}
-  CONTRACT_IMAGE: ${{ inputs.contract_image }}
   # Tests
-  TESTS_SOURCE: logos-storage/logos-storage-nim-cs-dist-tests
-  TESTS_BRANCH: master
+  CONTINUOUS_TESTS_SOURCE: codex-storage/cs-codex-dist-tests
+  CONTINUOUS_TESTS_BRANCH: master
   CONTINUOUS_TESTS_LIST: ${{ inputs.continuous_tests_list }}
   CONTINUOUS_TESTS_DURATION: ${{ inputs.continuous_tests_duration }}
   CONTINUOUS_TESTS_NAMEPREFIX: c-tests-ci
 jobs:
-  # Compute variables
-  compute:
-    name: Compute build ID
-    runs-on: ubuntu-latest
-    outputs:
-      build_id: ${{ steps.build_id.outputs.build_id }}
-    steps:
-      - name: Generate unique build id
-        id: build_id
-        run: echo "build_id=$(openssl rand -hex 5)" >> $GITHUB_OUTPUT
   # Build platform specific image
   build:
-    needs: compute
     strategy:
       fail-fast: true
       matrix:
@@ -121,11 +89,11 @@ jobs:
         - target:
             os: linux
             arch: amd64
-          builder: ubuntu-24.04
+          builder: ubuntu-22.04
         - target:
             os: linux
             arch: arm64
-          builder: ubuntu-24.04-arm
+          builder: buildjet-4vcpu-ubuntu-2204-arm
     name: Build ${{ matrix.target.os }}/${{ matrix.target.arch }}
     runs-on: ${{ matrix.builder }}
@@ -135,19 +103,11 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v4
-      - name: Docker - Variables
-        run: |
-          # Create contract label for compatible contract image if specified
-          if [[ -n "${{ env.CONTRACT_IMAGE }}" ]]; then
-            echo "CONTRACT_LABEL=storage.codex.nim-codex.blockchain-image=${{ env.CONTRACT_IMAGE }}" >> $GITHUB_ENV
-          fi
       - name: Docker - Meta
         id: meta
         uses: docker/metadata-action@v5
         with:
           images: ${{ env.DOCKER_REPO }}
-          labels: ${{ env.CONTRACT_LABEL }}
       - name: Docker - Set up Buildx
         uses: docker/setup-buildx-action@v3
@@ -182,7 +142,7 @@ jobs:
       - name: Docker - Upload digest
         uses: actions/upload-artifact@v4
         with:
-          name: digests-${{ needs.compute.outputs.build_id }}-${{ matrix.target.arch }}
+          name: digests-${{ matrix.target.arch }}
          path: /tmp/digests/*
          if-no-files-found: error
          retention-days: 1
@@ -194,41 +154,35 @@ jobs:
     runs-on: ubuntu-latest
     outputs:
       version: ${{ steps.meta.outputs.version }}
-      codex_image: ${{ steps.image_tag.outputs.codex_image }}
-    needs: [build, compute]
+    needs: build
     steps:
       - name: Docker - Variables
         run: |
-          # Adjust custom suffix when set
+          # Adjust custom suffix when set and
           if [[ -n "${{ env.TAG_SUFFIX }}" ]]; then
-            echo "TAG_SUFFIX=-${{ env.TAG_SUFFIX }}" >> $GITHUB_ENV
+            echo "TAG_SUFFIX=-${{ env.TAG_SUFFIX }}" >>$GITHUB_ENV
           fi
          # Disable SHA tags on tagged release
          if [[ ${{ startsWith(github.ref, 'refs/tags/') }} == "true" ]]; then
-            echo "TAG_SHA=false" >> $GITHUB_ENV
+            echo "TAG_SHA=false" >>$GITHUB_ENV
          fi
          # Handle latest and latest-custom using raw
          if [[ ${{ env.TAG_SHA }} == "false" ]]; then
-            echo "TAG_LATEST=false" >> $GITHUB_ENV
-            echo "TAG_RAW=true" >> $GITHUB_ENV
+            echo "TAG_LATEST=false" >>$GITHUB_ENV
+            echo "TAG_RAW=true" >>$GITHUB_ENV
            if [[ -z "${{ env.TAG_SUFFIX }}" ]]; then
-              echo "TAG_RAW_VALUE=latest" >> $GITHUB_ENV
+              echo "TAG_RAW_VALUE=latest" >>$GITHUB_ENV
            else
-              echo "TAG_RAW_VALUE=latest-{{ env.TAG_SUFFIX }}" >> $GITHUB_ENV
+              echo "TAG_RAW_VALUE=latest-{{ env.TAG_SUFFIX }}" >>$GITHUB_ENV
            fi
          else
-            echo "TAG_RAW=false" >> $GITHUB_ENV
-          fi
-          # Create contract label for compatible contract image if specified
-          if [[ -n "${{ env.CONTRACT_IMAGE }}" ]]; then
-            echo "CONTRACT_LABEL=storage.codex.nim-codex.blockchain-image=${{ env.CONTRACT_IMAGE }}" >> $GITHUB_ENV
+            echo "TAG_RAW=false" >>$GITHUB_ENV
          fi
      - name: Docker - Download digests
        uses: actions/download-artifact@v4
        with:
-          pattern: digests-${{ needs.compute.outputs.build_id }}-*
+          pattern: digests-*
          merge-multiple: true
          path: /tmp/digests
@@ -240,14 +194,12 @@ jobs:
         uses: docker/metadata-action@v5
         with:
           images: ${{ env.DOCKER_REPO }}
-          labels: ${{ env.CONTRACT_LABEL }}
           flavor: |
             latest=${{ env.TAG_LATEST }}
             suffix=${{ env.TAG_SUFFIX }},onlatest=true
           tags: |
             type=semver,pattern={{version}}
             type=raw,enable=${{ env.TAG_RAW }},value=latest
-            type=raw,enable=${{ env.TAG_STABLE }},value=stable
             type=sha,enable=${{ env.TAG_SHA }}
       - name: Docker - Login to Docker Hub
@@ -262,81 +214,54 @@ jobs:
           docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
             $(printf '${{ env.DOCKER_REPO }}@sha256:%s ' *)
-      - name: Docker - Image tag
-        id: image_tag
-        run: echo "codex_image=${{ env.DOCKER_REPO }}:${{ steps.meta.outputs.version }}" >> "$GITHUB_OUTPUT"
       - name: Docker - Inspect image
-        run: docker buildx imagetools inspect ${{ steps.image_tag.outputs.codex_image }}
+        run: |
+          docker buildx imagetools inspect ${{ env.DOCKER_REPO }}:${{ steps.meta.outputs.version }}
-  # Compute Tests inputs
+  # Compute Continuous Tests inputs
   compute-tests-inputs:
-    name: Compute Tests inputs
-    if: ${{ inputs.continuous_tests_list != '' || inputs.run_release_tests == 'true' }}
+    name: Compute Continuous Tests list
+    if: ${{ inputs.continuous_tests_list != '' && github.ref_name == github.event.repository.default_branch }}
     runs-on: ubuntu-latest
     needs: publish
     outputs:
       source: ${{ steps.compute.outputs.source }}
-      branch: ${{ env.TESTS_BRANCH }}
-      workflow_source: ${{ env.TESTS_SOURCE }}
+      branch: ${{ steps.compute.outputs.branch }}
       codexdockerimage: ${{ steps.compute.outputs.codexdockerimage }}
-    steps:
-      - name: Compute Tests inputs
-        id: compute
-        run: |
-          echo "source=${{ format('{0}/{1}', github.server_url, env.TESTS_SOURCE) }}" >> "$GITHUB_OUTPUT"
-          echo "codexdockerimage=${{ inputs.docker_repo }}:${{ needs.publish.outputs.version }}" >> "$GITHUB_OUTPUT"
-  # Compute Continuous Tests inputs
-  compute-continuous-tests-inputs:
-    name: Compute Continuous Tests inputs
-    if: ${{ inputs.continuous_tests_list != '' && github.ref_name == github.event.repository.default_branch }}
-    runs-on: ubuntu-latest
-    needs: compute-tests-inputs
-    outputs:
       nameprefix: ${{ steps.compute.outputs.nameprefix }}
       continuous_tests_list: ${{ steps.compute.outputs.continuous_tests_list }}
-      continuous_tests_duration: ${{ env.CONTINUOUS_TESTS_DURATION }}
+      continuous_tests_duration: ${{ steps.compute.outputs.continuous_tests_duration }}
       continuous_tests_workflow: ${{ steps.compute.outputs.continuous_tests_workflow }}
+      workflow_source: ${{ steps.compute.outputs.workflow_source }}
     steps:
-      - name: Compute Continuous Tests inputs
+      - name: Compute Continuous Tests list
         id: compute
         run: |
+          echo "source=${{ format('{0}/{1}', github.server_url, env.CONTINUOUS_TESTS_SOURCE) }}" >> "$GITHUB_OUTPUT"
+          echo "branch=${{ env.CONTINUOUS_TESTS_BRANCH }}" >> "$GITHUB_OUTPUT"
+          echo "codexdockerimage=${{ inputs.docker_repo }}:${{ needs.publish.outputs.version }}" >> "$GITHUB_OUTPUT"
          echo "nameprefix=$(awk '{ print tolower($0) }' <<< ${{ env.CONTINUOUS_TESTS_NAMEPREFIX }})" >> "$GITHUB_OUTPUT"
          echo "continuous_tests_list=$(jq -cR 'split(" ")' <<< '${{ env.CONTINUOUS_TESTS_LIST }}')" >> "$GITHUB_OUTPUT"
+          echo "continuous_tests_duration=${{ env.CONTINUOUS_TESTS_DURATION }}" >> "$GITHUB_OUTPUT"
+          echo "workflow_source=${{ env.CONTINUOUS_TESTS_SOURCE }}" >> "$GITHUB_OUTPUT"
   # Run Continuous Tests
-  run-continuous-tests:
+  run-tests:
     name: Run Continuous Tests
-    needs: [compute-tests-inputs, compute-continuous-tests-inputs]
+    needs: [publish, compute-tests-inputs]
     strategy:
       max-parallel: 1
       matrix:
-        tests: ${{ fromJSON(needs.compute-continuous-tests-inputs.outputs.continuous_tests_list) }}
-    uses: logos-storage/logos-storage-nim-cs-dist-tests/.github/workflows/run-continuous-tests.yaml@master
+        tests: ${{ fromJSON(needs.compute-tests-inputs.outputs.continuous_tests_list) }}
+    uses: codex-storage/cs-codex-dist-tests/.github/workflows/run-continuous-tests.yaml@master
     with:
       source: ${{ needs.compute-tests-inputs.outputs.source }}
       branch: ${{ needs.compute-tests-inputs.outputs.branch }}
       codexdockerimage: ${{ needs.compute-tests-inputs.outputs.codexdockerimage }}
-      nameprefix: ${{ needs.compute-continuous-tests-inputs.outputs.nameprefix }}-${{ matrix.tests }}-${{ needs.compute-continuous-tests-inputs.outputs.continuous_tests_duration }}
+      nameprefix: ${{ needs.compute-tests-inputs.outputs.nameprefix }}-${{ matrix.tests }}-${{ needs.compute-tests-inputs.outputs.continuous_tests_duration }}
       tests_filter: ${{ matrix.tests }}
       tests_target_duration: ${{ needs.compute-tests-inputs.outputs.continuous_tests_duration }}
       workflow_source: ${{ needs.compute-tests-inputs.outputs.workflow_source }}
     secrets: inherit
-  # Run Release Tests
-  run-release-tests:
-    name: Run Release Tests
-    needs: [compute-tests-inputs]
-    if: ${{ inputs.run_release_tests == 'true' }}
-    uses: logos-storage/logos-storage-nim-cs-dist-tests/.github/workflows/run-release-tests.yaml@master
-    with:
-      source: ${{ needs.compute-tests-inputs.outputs.source }}
-      branch: ${{ needs.compute-tests-inputs.outputs.branch }}
-      codexdockerimage: ${{ needs.compute-tests-inputs.outputs.codexdockerimage }}
-      workflow_source: ${{ needs.compute-tests-inputs.outputs.workflow_source }}
-    secrets: inherit

@@ -18,27 +18,11 @@ on:
       - '!docker/docker-entrypoint.sh'
   workflow_dispatch:
-jobs:
-  get-contracts-hash:
-    runs-on: ubuntu-latest
-    outputs:
-      hash: ${{ steps.get-hash.outputs.hash }}
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          submodules: true
-      - name: Get submodule short hash
-        id: get-hash
-        run: |
-          hash=$(git rev-parse --short HEAD:vendor/logos-storage-contracts-eth)
-          echo "hash=$hash" >> $GITHUB_OUTPUT
+jobs:
   build-and-push:
     name: Build and Push
     uses: ./.github/workflows/docker-reusable.yml
-    needs: get-contracts-hash
     with:
       tag_latest: ${{ github.ref_name == github.event.repository.default_branch || startsWith(github.ref, 'refs/tags/') }}
-      tag_stable: ${{ startsWith(github.ref, 'refs/tags/') }}
-      contract_image: "codexstorage/codex-contracts-eth:sha-${{ needs.get-contracts-hash.outputs.hash }}"
     secrets: inherit

@@ -2,17 +2,17 @@ name: OpenAPI
 on:
   push:
-    tags:
-      - "v*.*.*"
+    branches:
+      - 'master'
     paths:
-      - "openapi.yaml"
-      - ".github/workflows/docs.yml"
+      - 'openapi.yaml'
+      - '.github/workflows/docs.yml'
   pull_request:
     branches:
-      - "**"
+      - '**'
     paths:
-      - "openapi.yaml"
-      - ".github/workflows/docs.yml"
+      - 'openapi.yaml'
+      - '.github/workflows/docs.yml'
 # Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
 permissions:
@@ -28,39 +28,38 @@ jobs:
       - name: Checkout
         uses: actions/checkout@v4
         with:
-          fetch-depth: 0
+          fetch-depth: '0'
       - uses: actions/setup-node@v4
         with:
           node-version: 18
       - name: Lint OpenAPI
+        shell: bash
         run: npx @redocly/cli lint openapi.yaml
   deploy:
     name: Deploy
     runs-on: ubuntu-latest
-    if: startsWith(github.ref, 'refs/tags/')
+    if: github.ref == 'refs/heads/master'
     steps:
       - name: Checkout
         uses: actions/checkout@v4
         with:
-          fetch-depth: 0
+          fetch-depth: '0'
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Build OpenAPI
-        run: npx @redocly/cli build-docs openapi.yaml --output openapi/index.html --title "Logos Storage API"
+        shell: bash
+        run: npx @redocly/cli build-docs openapi.yaml --output "openapi/index.html" --title "Codex API"
-      - name: Build Postman Collection
-        run: npx -y openapi-to-postmanv2 -s openapi.yaml -o openapi/postman.json -p -O folderStrategy=Tags,includeAuthInfoInExample=false
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
-          path: openapi
+          path: './openapi'
      - name: Deploy to GitHub Pages
        uses: actions/deploy-pages@v4

@@ -8,21 +8,19 @@ env:
   cache_nonce: 0 # Allows for easily busting actions/cache caches
   nim_version: pinned
 jobs:
   matrix:
     runs-on: ubuntu-latest
     outputs:
       matrix: ${{ steps.matrix.outputs.matrix }}
       cache_nonce: ${{ env.cache_nonce }}
     steps:
-      - name: Checkout sources
-        uses: actions/checkout@v4
       - name: Compute matrix
         id: matrix
-        run: |
-          echo 'matrix<<EOF' >> $GITHUB_OUTPUT
-          tools/scripts/ci-job-matrix.sh linux >> $GITHUB_OUTPUT
-          echo 'EOF' >> $GITHUB_OUTPUT
+        uses: fabiocaccamo/create-matrix-action@v4
+        with:
+          matrix: |
+            os {linux}, cpu {amd64}, builder {ubuntu-20.04}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
   build:
     needs: matrix

@@ -4,15 +4,13 @@ on:
   push:
     tags:
       - 'v*.*.*'
-    branches:
-      - master
   workflow_dispatch:
 env:
   cache_nonce: 0 # Allows for easily busting actions/cache caches
   nim_version: pinned
-  rust_version: 1.79.0
-  storage_binary_base: storage
+  rust_version: 1.78.0
+  codex_binary_base: codex
   cirdl_binary_base: cirdl
   build_dir: build
   nim_flags: ''
@@ -27,13 +25,14 @@ jobs:
     steps:
       - name: Compute matrix
         id: matrix
-        uses: fabiocaccamo/create-matrix-action@v5
+        uses: fabiocaccamo/create-matrix-action@v4
        with:
          matrix: |
-            os {linux}, cpu {amd64}, builder {ubuntu-22.04}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
-            os {linux}, cpu {arm64}, builder {ubuntu-22.04-arm}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
-            os {macos}, cpu {arm64}, builder {macos-14}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
-            os {windows}, cpu {amd64}, builder {windows-latest}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {msys2}
+            os {linux}, cpu {amd64}, builder {ubuntu-20.04}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {linux}, cpu {arm64}, builder {buildjet-4vcpu-ubuntu-2204-arm}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {macos}, cpu {amd64}, builder {macos-13}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {macos}, cpu {arm64}, builder {macos-14}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {windows}, cpu {amd64}, builder {windows-latest}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {msys2}
 # Build
 build:
@@ -73,18 +72,18 @@ jobs:
           windows*) os_name="windows" ;;
         esac
         github_ref_name="${GITHUB_REF_NAME/\//-}"
-        storage_binary="${{ env.storage_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
+        codex_binary="${{ env.codex_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
         cirdl_binary="${{ env.cirdl_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
         if [[ ${os_name} == "windows" ]]; then
-          storage_binary="${storage_binary}.exe"
+          codex_binary="${codex_binary}.exe"
           cirdl_binary="${cirdl_binary}.exe"
         fi
-        echo "storage_binary=${storage_binary}" >>$GITHUB_ENV
+        echo "codex_binary=${codex_binary}" >>$GITHUB_ENV
         echo "cirdl_binary=${cirdl_binary}" >>$GITHUB_ENV
     - name: Release - Build
       run: |
-        make NIMFLAGS="--out:${{ env.build_dir }}/${{ env.storage_binary }} ${{ env.nim_flags }}"
+        make NIMFLAGS="--out:${{ env.build_dir }}/${{ env.codex_binary }} ${{ env.nim_flags }}"
        make cirdl NIMFLAGS="--out:${{ env.build_dir }}/${{ env.cirdl_binary }} ${{ env.nim_flags }}"
    - name: Release - Libraries
@@ -95,11 +94,11 @@ jobs:
         done
       fi
-    - name: Release - Upload Logos Storage build artifacts
+    - name: Release - Upload codex build artifacts
       uses: actions/upload-artifact@v4
       with:
-        name: release-${{ env.storage_binary }}
-        path: ${{ env.build_dir }}/${{ env.storage_binary_base }}*
+        name: release-${{ env.codex_binary }}
+        path: ${{ env.build_dir }}/${{ env.codex_binary_base }}*
         retention-days: 30
     - name: Release - Upload cirdl build artifacts
@@ -139,7 +138,7 @@ jobs:
       }
       # Compress and prepare
-      for file in ${{ env.storage_binary_base }}* ${{ env.cirdl_binary_base }}*; do
+      for file in ${{ env.codex_binary_base }}* ${{ env.cirdl_binary_base }}*; do
        if [[ "${file}" == *".exe"* ]]; then
          # Windows - binary only
@@ -171,34 +170,6 @@ jobs:
         path: /tmp/release/
         retention-days: 30
-    - name: Release - Upload to the cloud
-      env:
-        s3_endpoint: ${{ secrets.S3_ENDPOINT }}
-        s3_bucket: ${{ secrets.S3_BUCKET }}
-        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
-        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-        AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
-      run: |
-        # Variables
-        branch="${GITHUB_REF_NAME/\//-}"
-        folder="/tmp/release"
-        # Tagged releases
-        if [[ "${{ github.ref }}" == *"refs/tags/"* ]]; then
-          aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/releases/${branch} --endpoint-url ${{ env.s3_endpoint }}
-          echo "${branch}" > "${folder}"/latest
-          aws s3 cp "${folder}"/latest s3://${{ env.s3_bucket }}/releases/latest --endpoint-url ${{ env.s3_endpoint }}
-          rm -f "${folder}"/latest
-        # master branch
-        elif [[ "${branch}" == "${{ github.event.repository.default_branch }}" ]]; then
-          aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/${branch} --endpoint-url ${{ env.s3_endpoint }}
-        # Custom branch
-        else
-          aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/branches/${branch} --endpoint-url ${{ env.s3_endpoint }}
-        fi
     - name: Release
       uses: softprops/action-gh-release@v2
       if: startsWith(github.ref, 'refs/tags/')
@@ -206,12 +177,3 @@ jobs:
         files: |
           /tmp/release/*
         make_latest: true
-    - name: Generate Python SDK
-      uses: peter-evans/repository-dispatch@v3
-      if: startsWith(github.ref, 'refs/tags/')
-      with:
-        token: ${{ secrets.DISPATCH_PAT }}
-        repository: logos-storage/logos-storage-py-api-client
-        event-type: generate
-        client-payload: '{"openapi_url": "https://raw.githubusercontent.com/logos-storage/logos-storage-nim/${{ github.ref }}/openapi.yaml"}'
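As the build variables above show, release artifacts are named `<base>-<ref>-<os>-<cpu>`, with `.exe` appended on Windows and `/` in ref names replaced by `-`. A standalone sketch of the same derivation, with example values substituted for the workflow context:

```bash
# Sketch of the binary-name derivation used in the workflow above;
# the concrete values are examples.
codex_binary_base="codex"
GITHUB_REF_NAME="v0.1.6"
os_name="windows"
cpu="amd64"

github_ref_name="${GITHUB_REF_NAME/\//-}"   # "feature/x" would become "feature-x"
codex_binary="${codex_binary_base}-${github_ref_name}-${os_name}-${cpu}"
[[ ${os_name} == "windows" ]] && codex_binary="${codex_binary}.exe"
echo "${codex_binary}"   # prints: codex-v0.1.6-windows-amd64.exe
```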

.gitignore (vendored, 6 lines changed)

@@ -5,13 +5,9 @@
 !LICENSE*
 !Makefile
-!Jenkinsfile
 nimcache/
-# Executables when using nix will be stored in result/ directory
-result/
 # Executables shall be put in an ignored build/ directory
 build/
@@ -45,5 +41,3 @@ docker/prometheus-data
 .DS_Store
 nim.cfg
 tests/integration/logs
-data/

.gitmodules (vendored, 58 lines changed)

@@ -37,17 +37,22 @@
     path = vendor/nim-nitro
     url = https://github.com/status-im/nim-nitro.git
     ignore = untracked
-    branch = main
+    branch = master
 [submodule "vendor/questionable"]
     path = vendor/questionable
     url = https://github.com/status-im/questionable.git
     ignore = untracked
-    branch = main
+    branch = master
+[submodule "vendor/upraises"]
+    path = vendor/upraises
+    url = https://github.com/markspanbroek/upraises.git
+    ignore = untracked
+    branch = master
 [submodule "vendor/asynctest"]
     path = vendor/asynctest
     url = https://github.com/status-im/asynctest.git
     ignore = untracked
-    branch = main
+    branch = master
 [submodule "vendor/nim-presto"]
     path = vendor/nim-presto
     url = https://github.com/status-im/nim-presto.git
@@ -127,7 +132,7 @@
     path = vendor/nim-websock
     url = https://github.com/status-im/nim-websock.git
     ignore = untracked
-    branch = main
+    branch = master
 [submodule "vendor/nim-contract-abi"]
     path = vendor/nim-contract-abi
     url = https://github.com/status-im/nim-contract-abi
@@ -155,13 +160,13 @@
     path = vendor/nim-taskpools
     url = https://github.com/status-im/nim-taskpools.git
     ignore = untracked
-    branch = stable
+    branch = master
 [submodule "vendor/nim-leopard"]
     path = vendor/nim-leopard
     url = https://github.com/status-im/nim-leopard.git
-[submodule "vendor/logos-storage-nim-dht"]
-    path = vendor/logos-storage-nim-dht
-    url = https://github.com/logos-storage/logos-storage-nim-dht.git
+[submodule "vendor/nim-codex-dht"]
+    path = vendor/nim-codex-dht
+    url = https://github.com/codex-storage/nim-codex-dht.git
     ignore = untracked
     branch = master
 [submodule "vendor/nim-datastore"]
@@ -173,11 +178,9 @@
 [submodule "vendor/nim-eth"]
     path = vendor/nim-eth
     url = https://github.com/status-im/nim-eth
-[submodule "vendor/logos-storage-contracts-eth"]
-    path = vendor/logos-storage-contracts-eth
-    url = https://github.com/logos-storage/logos-storage-contracts-eth.git
-    ignore = untracked
-    branch = master
+[submodule "vendor/codex-contracts-eth"]
+    path = vendor/codex-contracts-eth
+    url = https://github.com/status-im/codex-contracts-eth
 [submodule "vendor/nim-protobuf-serialization"]
     path = vendor/nim-protobuf-serialization
     url = https://github.com/status-im/nim-protobuf-serialization
@@ -192,41 +195,26 @@
     url = https://github.com/zevv/npeg
 [submodule "vendor/nim-poseidon2"]
     path = vendor/nim-poseidon2
-    url = https://github.com/logos-storage/nim-poseidon2.git
-    ignore = untracked
-    branch = master
+    url = https://github.com/codex-storage/nim-poseidon2.git
 [submodule "vendor/constantine"]
     path = vendor/constantine
     url = https://github.com/mratsim/constantine.git
 [submodule "vendor/nim-circom-compat"]
     path = vendor/nim-circom-compat
-    url = https://github.com/logos-storage/nim-circom-compat.git
+    url = https://github.com/codex-storage/nim-circom-compat.git
     ignore = untracked
     branch = master
-[submodule "vendor/logos-storage-proofs-circuits"]
-    path = vendor/logos-storage-proofs-circuits
-    url = https://github.com/logos-storage/logos-storage-proofs-circuits.git
+[submodule "vendor/codex-storage-proofs-circuits"]
+    path = vendor/codex-storage-proofs-circuits
+    url = https://github.com/codex-storage/codex-storage-proofs-circuits.git
     ignore = untracked
     branch = master
 [submodule "vendor/nim-serde"]
     path = vendor/nim-serde
-    url = https://github.com/logos-storage/nim-serde.git
+    url = https://github.com/codex-storage/nim-serde.git
 [submodule "vendor/nim-leveldbstatic"]
     path = vendor/nim-leveldbstatic
-    url = https://github.com/logos-storage/nim-leveldb.git
+    url = https://github.com/codex-storage/nim-leveldb.git
 [submodule "vendor/nim-zippy"]
     path = vendor/nim-zippy
     url = https://github.com/status-im/nim-zippy.git
-[submodule "vendor/nph"]
-    path = vendor/nph
-    url = https://github.com/arnetheduck/nph.git
-[submodule "vendor/nim-quic"]
-    path = vendor/nim-quic
-    url = https://github.com/vacp2p/nim-quic.git
-    ignore = untracked
-    branch = main
-[submodule "vendor/nim-ngtcp2"]
-    path = vendor/nim-ngtcp2
-    url = https://github.com/vacp2p/nim-ngtcp2.git
-    ignore = untracked
-    branch = main

Jenkinsfile (vendored, removed, 37 lines)

@@ -1,37 +0,0 @@
-#!/usr/bin/env groovy
-library 'status-jenkins-lib@v1.9.13'
-pipeline {
-  agent { label 'linux && x86_64 && nix-2.24' }
-  options {
-    disableConcurrentBuilds()
-    /* manage how many builds we keep */
-    buildDiscarder(logRotator(
-      numToKeepStr: '20',
-      daysToKeepStr: '30',
-    ))
-  }
-  stages {
-    stage('Build') {
-      steps {
-        script {
-          nix.flake("default")
-        }
-      }
-    }
-    stage('Check') {
-      steps {
-        script {
-          sh './result/bin/storage --version'
-        }
-      }
-    }
-  }
-  post {
-    cleanup { cleanWs() }
-  }
-}

Makefile (112 lines changed)

@@ -15,7 +15,7 @@
 #
 # If NIM_COMMIT is set to "nimbusbuild", this will use the
 # version pinned by nimbus-build-system.
-PINNED_NIM_VERSION := v2.2.4
+PINNED_NIM_VERSION := 38640664088251bbc88917b4bacfd86ec53014b8 # 1.6.21
 ifeq ($(NIM_COMMIT),)
 NIM_COMMIT := $(PINNED_NIM_VERSION)
@@ -40,30 +40,6 @@ DOCKER_IMAGE_NIM_PARAMS ?= -d:chronicles_colors:none -d:insecure
 LINK_PCRE := 0
-ifeq ($(OS),Windows_NT)
-  ifeq ($(PROCESSOR_ARCHITECTURE), AMD64)
-    ARCH = x86_64
-  endif
-  ifeq ($(PROCESSOR_ARCHITECTURE), ARM64)
-    ARCH = arm64
-  endif
-else
-  UNAME_P := $(shell uname -m)
-  ifneq ($(filter $(UNAME_P), i686 i386 x86_64),)
-    ARCH = x86_64
-  endif
-  ifneq ($(filter $(UNAME_P), aarch64 arm),)
-    ARCH = arm64
-  endif
-endif
-ifeq ($(ARCH), x86_64)
-  CXXFLAGS ?= -std=c++17 -mssse3
-else
-  CXXFLAGS ?= -std=c++17
-endif
-export CXXFLAGS
 # we don't want an error here, so we can handle things later, in the ".DEFAULT" target
 -include $(BUILD_SYSTEM_DIR)/makefiles/variables.mk
@@ -93,10 +69,10 @@ else # "variables.mk" was included. Business as usual until the end of this file
 # default target, because it's the first one that doesn't start with '.'
-# Builds the Logos Storage binary
+# Builds the codex binary
 all: | build deps
 	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim storage $(NIM_PARAMS) build.nims
+		$(ENV_SCRIPT) nim codex $(NIM_PARAMS) build.nims
 # Build tools/cirdl
 cirdl: | deps
@@ -138,12 +114,12 @@ test: | build deps
 # Builds and runs the smart contract tests
 testContracts: | build deps
 	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim testContracts $(NIM_PARAMS) --define:ws_resubscribe=240 build.nims
+		$(ENV_SCRIPT) nim testContracts $(NIM_PARAMS) build.nims
 # Builds and runs the integration tests
 testIntegration: | build deps
 	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim testIntegration $(NIM_PARAMS) --define:ws_resubscribe=240 build.nims
+		$(ENV_SCRIPT) nim testIntegration $(NIM_PARAMS) build.nims
 # Builds and runs all tests (except for Taiko L2 tests)
 testAll: | build deps
@@ -178,11 +154,11 @@ coverage:
 	$(MAKE) NIMFLAGS="$(NIMFLAGS) --lineDir:on --passC:-fprofile-arcs --passC:-ftest-coverage --passL:-fprofile-arcs --passL:-ftest-coverage" test
 	cd nimcache/release/testCodex && rm -f *.c
 	mkdir -p coverage
-	lcov --capture --keep-going --directory nimcache/release/testCodex --output-file coverage/coverage.info
+	lcov --capture --directory nimcache/release/testCodex --output-file coverage/coverage.info
 	shopt -s globstar && ls $$(pwd)/codex/{*,**/*}.nim
-	shopt -s globstar && lcov --extract coverage/coverage.info --keep-going $$(pwd)/codex/{*,**/*}.nim --output-file coverage/coverage.f.info
+	shopt -s globstar && lcov --extract coverage/coverage.info $$(pwd)/codex/{*,**/*}.nim --output-file coverage/coverage.f.info
 	echo -e $(BUILD_MSG) "coverage/report/index.html"
-	genhtml coverage/coverage.f.info --keep-going --output-directory coverage/report
+	genhtml coverage/coverage.f.info --output-directory coverage/report
 show-coverage:
 	if which open >/dev/null; then (echo -e "\e[92mOpening\e[39m HTML coverage report in browser..." && open coverage/report/index.html) || true; fi
@@ -199,76 +175,4 @@ ifneq ($(USE_LIBBACKTRACE), 0)
 	+ $(MAKE) -C vendor/nim-libbacktrace clean $(HANDLE_OUTPUT)
 endif
-############
-## Format ##
-############
-.PHONY: build-nph install-nph-hook clean-nph print-nph-path
-# Default location for nph binary shall be next to nim binary to make it available on the path.
-NPH:=$(shell dirname $(NIM_BINARY))/nph
-build-nph:
-ifeq ("$(wildcard $(NPH))","")
-	$(ENV_SCRIPT) nim c vendor/nph/src/nph.nim && \
-		mv vendor/nph/src/nph $(shell dirname $(NPH))
-	echo "nph utility is available at " $(NPH)
-endif
-GIT_PRE_COMMIT_HOOK := .git/hooks/pre-commit
-install-nph-hook: build-nph
-ifeq ("$(wildcard $(GIT_PRE_COMMIT_HOOK))","")
-	cp ./tools/scripts/git_pre_commit_format.sh $(GIT_PRE_COMMIT_HOOK)
-else
-	echo "$(GIT_PRE_COMMIT_HOOK) already present, will NOT override"
-	exit 1
-endif
-nph/%: build-nph
-	echo -e $(FORMAT_MSG) "nph/$*" && \
-		$(NPH) $*
-format:
-	$(NPH) *.nim
-	$(NPH) codex/
-	$(NPH) tests/
-	$(NPH) library/
-clean-nph:
-	rm -f $(NPH)
-# To avoid hardcoding nph binary location in several places
-print-nph-path:
-	echo "$(NPH)"
-clean: | clean-nph
-################
-## C Bindings ##
-################
-.PHONY: libstorage
-STATIC ?= 0
-ifneq ($(strip $(STORAGE_LIB_PARAMS)),)
-  NIM_PARAMS := $(NIM_PARAMS) $(STORAGE_LIB_PARAMS)
-endif
-libstorage:
-	$(MAKE) deps
-	rm -f build/libstorage*
-ifeq ($(STATIC), 1)
-	echo -e $(BUILD_MSG) "build/$@.a" && \
-		$(ENV_SCRIPT) nim libstorageStatic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Release\"" codex.nims
-else ifeq ($(detected_OS),Windows)
-	echo -e $(BUILD_MSG) "build/$@.dll" && \
-		$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-G \\\"MSYS Makefiles\\\" -DCMAKE_BUILD_TYPE=Release\"" codex.nims
-else ifeq ($(detected_OS),macOS)
-	echo -e $(BUILD_MSG) "build/$@.dylib" && \
-		$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Release\"" codex.nims
-else
-	echo -e $(BUILD_MSG) "build/$@.so" && \
-		$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Release\"" codex.nims
-endif
 endif # "variables.mk" was not included

View File

@ -1,22 +1,22 @@
# Logos Storage Decentralized Engine # Codex Decentralized Durability Engine
> The Logos Storage project aims to create a decentralized engine that allows persisting data in p2p networks. > The Codex project aims to create a decentralized durability engine that allows persisting data in p2p networks. In other words, it allows storing files and data with predictable durability guarantees for later retrieval.
> WARNING: This project is under active development and is considered pre-alpha. > WARNING: This project is under active development and is considered pre-alpha.
[![License: Apache](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![License: Apache](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Stability: experimental](https://img.shields.io/badge/stability-experimental-orange.svg)](#stability) [![Stability: experimental](https://img.shields.io/badge/stability-experimental-orange.svg)](#stability)
[![CI](https://github.com/logos-storage/logos-storage-nim/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/logos-storage/logos-storage-nim/actions/workflows/ci.yml?query=branch%3Amaster) [![CI](https://github.com/codex-storage/nim-codex/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/codex-storage/nim-codex/actions/workflows/ci.yml?query=branch%3Amaster)
[![Docker](https://github.com/logos-storage/logos-storage-nim/actions/workflows/docker.yml/badge.svg?branch=master)](https://github.com/logos-storage/logos-storage-nim/actions/workflows/docker.yml?query=branch%3Amaster) [![Docker](https://github.com/codex-storage/nim-codex/actions/workflows/docker.yml/badge.svg?branch=master)](https://github.com/codex-storage/nim-codex/actions/workflows/docker.yml?query=branch%3Amaster)
[![Codecov](https://codecov.io/gh/logos-storage/logos-storage-nim/branch/master/graph/badge.svg?token=XFmCyPSNzW)](https://codecov.io/gh/logos-storage/logos-storage-nim) [![Codecov](https://codecov.io/gh/codex-storage/nim-codex/branch/master/graph/badge.svg?token=XFmCyPSNzW)](https://codecov.io/gh/codex-storage/nim-codex)
[![Discord](https://img.shields.io/discord/895609329053474826)](https://discord.gg/CaJTh24ddQ) [![Discord](https://img.shields.io/discord/895609329053474826)](https://discord.gg/CaJTh24ddQ)
![Docker Pulls](https://img.shields.io/docker/pulls/codexstorage/nim-codex) ![Docker Pulls](https://img.shields.io/docker/pulls/codexstorage/nim-codex)
## Build and Run ## Build and Run
For detailed instructions on preparing to build logos-storagenim see [*Build Logos Storage*](https://docs.codex.storage/learn/build). For detailed instructions on preparing to build nim-codex see [*Build Codex*](https://docs.codex.storage/learn/build).
To build the project, clone it and run: To build the project, clone it and run:
@ -29,12 +29,11 @@ The executable will be placed under the `build` directory under the project root
Run the client with: Run the client with:
```bash ```bash
build/storage build/codex
``` ```
## Configuration ## Configuration
It is possible to configure a Logos Storage node in several ways: It is possible to configure a Codex node in several ways:
1. CLI options 1. CLI options
2. Environment variables 2. Environment variables
3. Configuration file 3. Configuration file
@ -45,72 +44,10 @@ Please check [documentation](https://docs.codex.storage/learn/run#configuration)
## Guides ## Guides
To get acquainted with Logos Storage, consider: To get acquainted with Codex, consider:
* running the simple [Logos Storage Two-Client Test](https://docs.codex.storage/learn/local-two-client-test) for a start, and; * running the simple [Codex Two-Client Test](https://docs.codex.storage/learn/local-two-client-test) for a start, and;
* if you are feeling more adventurous, try [Running a Local Logos Storage Network with Marketplace Support](https://docs.codex.storage/learn/local-marketplace) using a local blockchain as well. * if you are feeling more adventurous, try [Running a Local Codex Network with Marketplace Support](https://docs.codex.storage/learn/local-marketplace) using a local blockchain as well.
## API ## API
The client exposes a REST API that can be used to interact with the clients. Overview of the API can be found on [api.codex.storage](https://api.codex.storage). The client exposes a REST API that can be used to interact with the clients. Overview of the API can be found on [api.codex.storage](https://api.codex.storage).
## Bindings

Logos Storage provides a C API that can be wrapped by other languages. The bindings are located in the `library` folder.
Currently, only a Go binding is included.

### Build the C library

```bash
make libstorage
```

This produces the shared library under `build/`.

### Run the Go example

Build the Go example:

```bash
go build -o storage-go examples/golang/storage.go
```

Export the library path:

```bash
export LD_LIBRARY_PATH=build
```

Run the example:

```bash
./storage-go
```

### Static vs Dynamic build

By default, Logos Storage builds a dynamic library (`libstorage.so`), which you can load at runtime.
If you prefer a static library (`libstorage.a`), set the `STATIC` flag:

```bash
# Build dynamic (default)
make libstorage

# Build static
make STATIC=1 libstorage
```

### Limitation

Callbacks must be fast and non-blocking; otherwise, the working thread will hang and prevent other requests from being processed.

## Contributing and development

Feel free to dive in, contributions are welcome! Open an issue or submit PRs.

### Linting and formatting

`logos-storage-nim` uses [nph](https://github.com/arnetheduck/nph) for formatting, and adherence to its styling is required.
On a fresh setup, run `make build-nph` to get `nph`.
To format files, run `make nph/<file/folder you want to format>`.
Optionally, you can install a Git pre-commit hook with `make install-nph-commit`, which will format modified files prior to committing them.
If you are using VSCode and the [NimLang](https://marketplace.visualstudio.com/items?itemName=NimLang.nimlang) extension, you can enable "Format On Save" (i.e. the `nim.formatOnSave` property) to format files using `nph`.
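For example (the paths below are illustrative; substitute the file or folder you changed):

```bash
# Format a single file
make nph/codex/codex.nim

# Format a whole folder
make nph/codex/blockexchange
```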

@@ -10,17 +10,17 @@ nim c -r run_benchmarks
```

By default, all circuit files for each combination of circuit args will be generated in a unique folder named like:

    logos-storage-nim/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3

Generating the circuit files often takes longer than running the benchmarks, so caching the results allows re-running a benchmark as needed.

You can modify the `CircuitArgs` and `CircuitEnv` objects in `runAllBenchMarks` to suit your needs. See `create_circuits.nim` for their definitions.

The runner executes all commands relative to the `logos-storage-nim` repo. This simplifies finding the correct circuit include paths, etc. `CircuitEnv` sets all of this.
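If you need a different parameter combination, a minimal sketch of overriding the args in `runAllBenchMarks` might look like the following; the field names are inferred from the `circuit_bench_*` folder naming above, so verify them against the definitions in `create_circuits.nim`:

```nim
# Hypothetical override of the benchmark parameters; field names are
# inferred from the circuit_bench_* folder naming (depth, maxslots, ...)
# and must be checked against create_circuits.nim.
var args = CircuitArgs(
  depth: 32, # merkle tree depth
  maxslots: 256, # maximum number of slots
  cellsize: 2048, # cell size in bytes
  blocksize: 65536, # block size in bytes
  nsamples: 9, # number of sampled cells to prove
)
```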
## Logos Storage Ark Circom CLI

Runs Logos Storage's prover setup with Ark / Circom.

Compile:

```sh

@@ -29,10 +29,10 @@ proc findCodexProjectDir(): string =

func default*(tp: typedesc[CircuitEnv]): CircuitEnv =
  let codexDir = findCodexProjectDir()
  result.nimCircuitCli =
    codexDir / "vendor" / "logos-storage-proofs-circuits" / "reference" / "nim" /
    "proof_input" / "cli"
  result.circuitDirIncludes =
    codexDir / "vendor" / "logos-storage-proofs-circuits" / "circuit"
  result.ptauPath =
    codexDir / "benchmarks" / "ceremony" / "powersOfTau28_hez_final_23.ptau"
  result.ptauUrl = "https://storage.googleapis.com/zkevm/ptau".parseUri

@@ -118,7 +118,7 @@ proc createCircuit*(
  ##
  ## All needed circuit files will be generated as needed.
  ## They will be located in `circBenchDir` which defaults to a folder like:
  ## `logos-storage-nim/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3`
  ## with all the given CircuitArgs.
  ##
  let circdir = circBenchDir

@@ -41,18 +41,19 @@ template benchmark*(name: untyped, count: int, blk: untyped) =
    )
  benchRuns[benchmarkName] = (runs.avg(), count)

template printBenchMarkSummaries*(printRegular = true, printTsv = true) =
  if printRegular:
    echo ""
    for k, v in benchRuns:
      echo "Benchmark average run ", v.avgTimeSec, " for ", v.count, " runs ", "for ", k

  if printTsv:
    echo ""
    echo "name", "\t", "avgTimeSec", "\t", "count"
    for k, v in benchRuns:
      echo k, "\t", v.avgTimeSec, "\t", v.count

import std/math

func floorLog2*(x: int): int =

@@ -3,97 +3,63 @@ mode = ScriptMode.Verbose

import std/os except commandLineParams

### Helper functions
proc buildBinary(srcName: string, outName = os.lastPathPart(srcName), srcDir = "./", params = "", lang = "c") =
  if not dirExists "build":
    mkDir "build"
  # allow something like "nim nimbus --verbosity:0 --hints:off nimbus.nims"
  var extra_params = params
  when compiles(commandLineParams):
    for param in commandLineParams():
      extra_params &= " " & param
  else:
    for i in 2 ..< paramCount():
      extra_params &= " " & paramStr(i)
  let
    # Place build output in 'build' folder, even if name includes a longer path.
    cmd =
      "nim " & lang & " --out:build/" & outName & " " & extra_params & " " & srcDir &
      srcName & ".nim"
  exec(cmd)

proc buildLibrary(name: string, srcDir = "./", params = "", `type` = "dynamic") =
  if not dirExists "build":
    mkDir "build"
  if `type` == "dynamic":
    let lib_name = (
      when defined(windows):
        name & ".dll"
      elif defined(macosx):
        name & ".dylib"
      else:
        name & ".so"
    )
    exec "nim c" & " --out:build/" & lib_name &
      " --threads:on --app:lib --opt:size --noMain --mm:refc --header --d:metrics " &
      "--nimMainPrefix:libstorage -d:noSignalHandler " &
      "-d:LeopardExtraCompilerFlags=-fPIC " & "-d:chronicles_runtime_filtering " &
      "-d:chronicles_log_level=TRACE " & params & " " & srcDir & name & ".nim"
  else:
    exec "nim c" & " --out:build/" & name &
      ".a --threads:on --app:staticlib --opt:size --noMain --mm:refc --header --d:metrics " &
      "--nimMainPrefix:libstorage -d:noSignalHandler " &
      "-d:LeopardExtraCompilerFlags=-fPIC " &
      "-d:chronicles_runtime_filtering " &
      "-d:chronicles_log_level=TRACE " &
      params & " " & srcDir & name & ".nim"

proc test(name: string, outName = name, srcDir = "tests/", params = "", lang = "c") =
  buildBinary name, outName, srcDir, params
  exec "build/" & outName

task storage, "build logos storage binary":
  buildBinary "codex",
    outName = "storage",
    params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE"

task toolsCirdl, "build tools/cirdl binary":
  buildBinary "tools/cirdl/cirdl"

task testStorage, "Build & run Logos Storage tests":
  test "testCodex", outName = "testStorage", params = "-d:storage_enable_proof_failures=true"

task testContracts, "Build & run Logos Storage Contract tests":
  test "testContracts"

task testIntegration, "Run integration tests":
  buildBinary "codex",
    outName = "storage",
    params =
      "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE -d:storage_enable_proof_failures=true"
  test "testIntegration"
  # use params to enable logging from the integration test executable
  # test "testIntegration", params = "-d:chronicles_sinks=textlines[notimestamps,stdout],textlines[dynamic] " &
  #   "-d:chronicles_enabled_topics:integration:TRACE"

task build, "build Logos Storage binary":
  storageTask()

task test, "Run tests":
  testStorageTask()

task testTools, "Run Tools tests":
  toolsCirdlTask()
  test "testTools"

task testAll, "Run all tests (except for Taiko L2 tests)":
  testStorageTask()
  testContractsTask()
  testIntegrationTask()
  testToolsTask()

task testTaiko, "Run Taiko L2 tests":
  storageTask()
  test "testTaiko"

import strutils
@@ -119,50 +85,20 @@ task coverage, "generates code coverage report":
  var nimSrcs = " "
  for f in walkDirRec("codex", {pcFile}):
    if f.endswith(".nim"):
      nimSrcs.add " " & f.absolutePath.quoteShell()

  echo "======== Running Tests ======== "
  test "coverage",
    srcDir = "tests/",
    params =
      " --nimcache:nimcache/coverage -d:release -d:storage_enable_proof_failures=true"
  exec("rm nimcache/coverage/*.c")
  rmDir("coverage")
  mkDir("coverage")
  echo " ======== Running LCOV ======== "
  exec(
    "lcov --capture --keep-going --directory nimcache/coverage --output-file coverage/coverage.info"
  )
  exec(
    "lcov --extract coverage/coverage.info --keep-going --output-file coverage/coverage.f.info " &
      nimSrcs
  )
  echo " ======== Generating HTML coverage report ======== "
  exec("genhtml coverage/coverage.f.info --keep-going --output-directory coverage/report ")
  echo " ======== Coverage report Done ======== "

task showCoverage, "open coverage html":
  echo " ======== Opening HTML coverage report in browser... ======== "
  if findExe("open") != "":
    exec("open coverage/report/index.html")

task libstorageDynamic, "Generate bindings":
  var params = ""
  when compiles(commandLineParams):
    for param in commandLineParams():
      if param.len > 0 and param.startsWith("-"):
        params.add " " & param
  let name = "libstorage"
  buildLibrary name, "library/", params, "dynamic"

task libstorageStatic, "Generate bindings":
  var params = ""
  when compiles(commandLineParams):
    for param in commandLineParams():
      if param.len > 0 and param.startsWith("-"):
        params.add " " & param
  let name = "libstorage"
  buildLibrary name, "library/", params, "static"

@@ -1,4 +1,4 @@
## Logos Storage
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -28,6 +28,7 @@ import ./codex/codextypes

export codex, conf, libp2p, chronos, logutils

when isMainModule:
  import std/os
  import pkg/confutils/defs
  import ./codex/utils/fileutils

@@ -38,45 +39,40 @@ when isMainModule:
  when defined(posix):
    import system/ansi_c

  type CodexStatus {.pure.} = enum
    Stopped
    Stopping
    Running

  let config = CodexConf.load(
    version = codexFullVersion,
    envVarsPrefix = "storage",
    secondarySources = proc(
        config: CodexConf, sources: auto
    ) {.gcsafe, raises: [ConfigurationError].} =
      if configFile =? config.configFile:
        sources.addConfigFile(Toml, configFile)
    ,
  )

  config.setupLogging()

  try:
    updateLogLevel(config.logLevel)
  except ValueError as err:
    try:
      stderr.write "Invalid value for --log-level. " & err.msg & "\n"
    except IOError:
      echo "Invalid value for --log-level. " & err.msg
    quit QuitFailure

  config.setupMetrics()

  if not (checkAndCreateDataDir((config.dataDir).string)):
    # We are unable to access/create data folder or data folder's
    # permissions are insecure.
    quit QuitFailure

  if config.prover() and not (checkAndCreateDataDir((config.circuitDir).string)):
    quit QuitFailure

  trace "Data dir initialized", dir = $config.dataDir

  if not (checkAndCreateDataDir((config.dataDir / "repo"))):
    # We are unable to access/create data folder or data folder's
    # permissions are insecure.
    quit QuitFailure

@@ -95,28 +91,25 @@ when isMainModule:
      config.dataDir / config.netPrivKeyFile
    privateKey = setupKey(keyPath).expect("Should setup private key!")
    server =
      try:
        CodexServer.new(config, privateKey)
      except Exception as exc:
        error "Failed to start Logos Storage", msg = exc.msg
        quit QuitFailure

  ## Ctrl+C handling
  proc doShutdown() =
    shutdown = server.shutdown()
    state = CodexStatus.Stopping

    notice "Stopping Logos Storage"

  proc controlCHandler() {.noconv.} =
    when defined(windows):
      # workaround for https://github.com/nim-lang/Nim/issues/4057
      try:
        setupForeignThreadGc()
      except Exception as exc:
        raiseAssert exc.msg # shouldn't happen

    notice "Shutting down after having received SIGINT"
    doShutdown()

@@ -138,7 +131,7 @@ when isMainModule:
  try:
    waitFor server.start()
  except CatchableError as error:
    error "Logos Storage failed to start", error = error.msg
    # XXX ideally we'd like to issue a stop instead of quitting cold turkey,
    # but this would mean we'd have to fix the implementation of all
    # services so they won't crash if we attempt to stop them before they

@@ -159,7 +152,7 @@ when isMainModule:
    # be assigned before state switches to Stopping
    waitFor shutdown
  except CatchableError as error:
    error "Logos Storage didn't shutdown correctly", error = error.msg
    quit QuitFailure

  notice "Exited Storage"

@ -1,5 +1,5 @@
version = "0.1.0" version = "0.1.0"
author = "Logos Storage Team" author = "Codex Team"
description = "p2p data durability engine" description = "p2p data durability engine"
license = "MIT" license = "MIT"
binDir = "build" binDir = "build"

@@ -1,5 +1,10 @@
import ./blockexchange/[network, engine, peers]
import ./blockexchange/protobuf/[blockexc, presence]

export network, engine, blockexc, presence, peers

@@ -1,4 +1,4 @@
## Logos Storage
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -7,8 +7,6 @@
## This file may not be copied, modified, or distributed except according to
## those terms.

{.push raises: [].}

import pkg/chronos
import pkg/libp2p/cid
import pkg/libp2p/multicodec

@@ -20,8 +18,6 @@ import ../protobuf/presence
import ../peers

import ../../utils
import ../../utils/exceptions
import ../../utils/trackedfutures
import ../../discovery
import ../../stores/blockstore
import ../../logutils

@@ -30,122 +26,114 @@ import ../../manifest

logScope:
  topics = "codex discoveryengine advertiser"

declareGauge(codex_inflight_advertise, "inflight advertise requests")

const
  DefaultConcurrentAdvertRequests = 10
  DefaultAdvertiseLoopSleep = 30.minutes

type Advertiser* = ref object of RootObj
  localStore*: BlockStore # Local block store for this instance
  discovery*: Discovery # Discovery interface
  advertiserRunning*: bool # Indicates if discovery is running
  concurrentAdvReqs: int # Concurrent advertise requests
  advertiseLocalStoreLoop*: Future[void].Raising([]) # Advertise loop task handle
  advertiseQueue*: AsyncQueue[Cid] # Advertise queue
  trackedFutures*: TrackedFutures # Advertise tasks futures
  advertiseLocalStoreLoopSleep: Duration # Advertise loop sleep
  inFlightAdvReqs*: Table[Cid, Future[void]] # Inflight advertise requests

proc addCidToQueue(b: Advertiser, cid: Cid) {.async: (raises: [CancelledError]).} =
  if cid notin b.advertiseQueue:
    await b.advertiseQueue.put(cid)
    trace "Advertising", cid

proc advertiseBlock(b: Advertiser, cid: Cid) {.async: (raises: [CancelledError]).} =
  without isM =? cid.isManifest, err:
    warn "Unable to determine if cid is manifest"
    return

  try:
    if isM:
      without blk =? await b.localStore.getBlock(cid), err:
        error "Error retrieving manifest block", cid, err = err.msg
        return

      without manifest =? Manifest.decode(blk), err:
        error "Unable to decode as manifest", err = err.msg
        return

      # announce manifest cid and tree cid
      await b.addCidToQueue(cid)
      await b.addCidToQueue(manifest.treeCid)
  except CancelledError as exc:
    trace "Cancelled advertise block", cid
    raise exc
  except CatchableError as e:
    error "failed to advertise block", cid, error = e.msgDetail

proc advertiseLocalStoreLoop(b: Advertiser) {.async: (raises: []).} =
  try:
    while b.advertiserRunning:
      if cidsIter =? await b.localStore.listBlocks(blockType = BlockType.Manifest):
        trace "Advertiser begins iterating blocks..."
        for c in cidsIter:
          if cid =? await c:
            await b.advertiseBlock(cid)
        trace "Advertiser iterating blocks finished."

      await sleepAsync(b.advertiseLocalStoreLoopSleep)
  except CancelledError:
    warn "Cancelled advertise local store loop"

  info "Exiting advertise task loop"

proc processQueueLoop(b: Advertiser) {.async: (raises: []).} =
  try:
    while b.advertiserRunning:
      let cid = await b.advertiseQueue.get()

      if cid in b.inFlightAdvReqs:
        continue

      let request = b.discovery.provide(cid)

      b.inFlightAdvReqs[cid] = request
      codex_inflight_advertise.set(b.inFlightAdvReqs.len.int64)

      defer:
        b.inFlightAdvReqs.del(cid)
        codex_inflight_advertise.set(b.inFlightAdvReqs.len.int64)

      await request
  except CancelledError:
    warn "Cancelled advertise task runner"

  info "Exiting advertise task runner"

proc start*(b: Advertiser) {.async: (raises: []).} =
  ## Start the advertiser
  ##

  trace "Advertiser start"

  # The advertiser is expected to be started only once.
  if b.advertiserRunning:
    raiseAssert "Advertiser can only be started once — this should not happen"

  proc onBlock(cid: Cid) {.async: (raises: []).} =
    try:
      await b.advertiseBlock(cid)
    except CancelledError:
      trace "Cancelled advertise block", cid

  doAssert(b.localStore.onBlockStored.isNone())
  b.localStore.onBlockStored = onBlock.some

  b.advertiserRunning = true
  for i in 0 ..< b.concurrentAdvReqs:
    let fut = b.processQueueLoop()
    b.trackedFutures.track(fut)

  b.advertiseLocalStoreLoop = advertiseLocalStoreLoop(b)
  b.trackedFutures.track(b.advertiseLocalStoreLoop)

proc stop*(b: Advertiser) {.async: (raises: []).} =
  ## Stop the advertiser
  ##

@@ -157,16 +145,26 @@ proc stop*(b: Advertiser) {.async: (raises: []).} =
  b.advertiserRunning = false
  # Stop incoming tasks from callback and localStore loop
  b.localStore.onBlockStored = CidCallback.none

  trace "Stopping advertise loop and tasks"
  await b.trackedFutures.cancelTracked()
  trace "Advertiser loop and tasks stopped"

proc new*(
    T: type Advertiser,
    localStore: BlockStore,
    discovery: Discovery,
    concurrentAdvReqs = DefaultConcurrentAdvertRequests,
    advertiseLocalStoreLoopSleep = DefaultAdvertiseLoopSleep,
): Advertiser =
  ## Create an advertiser instance
  ##

@@ -175,7 +173,5 @@ proc new*(
    discovery: discovery,
    concurrentAdvReqs: concurrentAdvReqs,
    advertiseQueue: newAsyncQueue[Cid](concurrentAdvReqs),
    trackedFutures: TrackedFutures.new(),
    inFlightAdvReqs: initTable[Cid, Future[void]](),
    advertiseLocalStoreLoopSleep: advertiseLocalStoreLoopSleep,
  )

@@ -1,4 +1,4 @@
## Logos Storage
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -7,8 +7,6 @@
## This file may not be copied, modified, or distributed except according to
## those terms.

{.push raises: [].}

import std/sequtils
import std/algorithm

import pkg/chronos
import pkg/libp2p/cid

@@ -24,7 +23,6 @@ import ../network
import ../peers

import ../../utils
import ../../utils/trackedfutures
import ../../discovery
import ../../stores/blockstore
import ../../logutils

@@ -33,107 +31,95 @@ import ../../manifest

logScope:
  topics = "codex discoveryengine"

declareGauge(codex_inflight_discovery, "inflight discovery requests")

const
  DefaultConcurrentDiscRequests = 10
  DefaultDiscoveryTimeout = 1.minutes
  DefaultMinPeersPerBlock = 3
  DefaultMaxPeersPerBlock = 8
  DefaultDiscoveryLoopSleep = 3.seconds

type DiscoveryEngine* = ref object of RootObj
  localStore*: BlockStore # Local block store for this instance
  peers*: PeerCtxStore # Peer context store
  network*: BlockExcNetwork # Network interface
  discovery*: Discovery # Discovery interface
  pendingBlocks*: PendingBlocksManager # Blocks we're awaiting to be resolved
  discEngineRunning*: bool # Indicates if discovery is running
  concurrentDiscReqs: int # Concurrent discovery requests
  discoveryLoop*: Future[void].Raising([]) # Discovery loop task handle
  discoveryQueue*: AsyncQueue[Cid] # Discovery queue
  trackedFutures*: TrackedFutures # Tracked Discovery tasks futures
  minPeersPerBlock*: int # Min number of peers with block
  maxPeersPerBlock*: int # Max number of peers with block
  discoveryLoopSleep: Duration # Discovery loop sleep
  inFlightDiscReqs*: Table[Cid, Future[seq[SignedPeerRecord]]]
    # Inflight discovery requests

proc cleanupExcessPeers(b: DiscoveryEngine, cid: Cid) {.gcsafe, raises: [].} =
  var haves = b.peers.peersHave(cid)
  let count = haves.len - b.maxPeersPerBlock
  if count <= 0:
    return

  haves.sort(
    proc(a, b: BlockExcPeerCtx): int =
      cmp(a.lastExchange, b.lastExchange)
  )

  let toRemove = haves[0 ..< count]
  for peer in toRemove:
    try:
      peer.cleanPresence(BlockAddress.init(cid))
      trace "Removed block presence from peer", cid, peer = peer.id
    except CatchableError as exc:
      error "Failed to clean presence for peer",
        cid, peer = peer.id, error = exc.msg, name = exc.name

proc discoveryQueueLoop(b: DiscoveryEngine) {.async: (raises: []).} =
  try:
    while b.discEngineRunning:
      for cid in toSeq(b.pendingBlocks.wantListBlockCids):
        await b.discoveryQueue.put(cid)

      await sleepAsync(b.discoveryLoopSleep)
  except CancelledError:
    trace "Discovery loop cancelled"

proc discoveryTaskLoop(b: DiscoveryEngine) {.async: (raises: []).} =
  ## Run discovery tasks
  ##

  try:
    while b.discEngineRunning:
      let cid = await b.discoveryQueue.get()

      if cid in b.inFlightDiscReqs:
        trace "Discovery request already in progress", cid
        continue

      trace "Running discovery task for cid", cid

      let haves = b.peers.peersHave(cid)

      if haves.len > b.maxPeersPerBlock:
        trace "Cleaning up excess peers",
          cid, peers = haves.len, max = b.maxPeersPerBlock
        b.cleanupExcessPeers(cid)
        continue

      if haves.len < b.minPeersPerBlock:
        let request = b.discovery.find(cid)
        b.inFlightDiscReqs[cid] = request
        codex_inflight_discovery.set(b.inFlightDiscReqs.len.int64)

        defer:
          b.inFlightDiscReqs.del(cid)
          codex_inflight_discovery.set(b.inFlightDiscReqs.len.int64)

        if (await request.withTimeout(DefaultDiscoveryTimeout)) and
            peers =? (await request).catch:
          let dialed = await allFinished(peers.mapIt(b.network.dialPeer(it.data)))
          for i, f in dialed:
            if f.failed:
              await b.discovery.removeProvider(peers[i].data.peerId)
  except CancelledError:
    trace "Discovery task cancelled"
    return

  info "Exiting discovery task runner"

proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) =
  for cid in cids:
    if cid notin b.discoveryQueue:
      try:

@@ -141,27 +127,23 @@ proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) =
      except CatchableError as exc:
        warn "Exception queueing discovery request", exc = exc.msg

proc start*(b: DiscoveryEngine) {.async: (raises: []).} =
  ## Start the discovery engine task
  ##

  trace "Discovery engine starting"

  if b.discEngineRunning:
    warn "Starting discovery engine twice"
    return

  b.discEngineRunning = true
  for i in 0 ..< b.concurrentDiscReqs:
    let fut = b.discoveryTaskLoop()
    b.trackedFutures.track(fut)

  b.discoveryLoop = b.discoveryQueueLoop()
  b.trackedFutures.track(b.discoveryLoop)
  trace "Discovery engine started"

proc stop*(b: DiscoveryEngine) {.async: (raises: []).} =
  ## Stop the discovery engine
  ##

@@ -171,9 +153,16 @@ proc stop*(b: DiscoveryEngine) {.async: (raises: []).} =
    return

  b.discEngineRunning = false
  trace "Stopping discovery loop and tasks"
  await b.trackedFutures.cancelTracked()
  trace "Discovery loop and tasks stopped"

  trace "Discovery engine stopped"

@@ -186,8 +175,7 @@ proc new*(
    pendingBlocks: PendingBlocksManager,
    concurrentDiscReqs = DefaultConcurrentDiscRequests,
    discoveryLoopSleep = DefaultDiscoveryLoopSleep,
    minPeersPerBlock = DefaultMinPeersPerBlock,
    maxPeersPerBlock = DefaultMaxPeersPerBlock,
): DiscoveryEngine =
  ## Create a discovery engine instance for advertising services
  ##

@@ -199,9 +187,6 @@ proc new*(
    pendingBlocks: pendingBlocks,
    concurrentDiscReqs: concurrentDiscReqs,
    discoveryQueue: newAsyncQueue[Cid](concurrentDiscReqs),
    trackedFutures: TrackedFutures.new(),
    inFlightDiscReqs: initTable[Cid, Future[seq[SignedPeerRecord]]](),
    discoveryLoopSleep: discoveryLoopSleep,
    minPeersPerBlock: minPeersPerBlock,
    maxPeersPerBlock: maxPeersPerBlock,
  )

File diff suppressed because it is too large.
@@ -1,4 +1,4 @@
## Logos Storage
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -7,8 +7,6 @@
## This file may not be copied, modified, or distributed except according to
## those terms.

{.push raises: [].}

import std/math
import pkg/nitro
import pkg/questionable/results

@@ -17,13 +15,15 @@ import ../peers

export nitro
export results

const ChainId* = 0.u256 # invalid chain id for now
const Asset* = EthAddress.zero # invalid ERC20 asset address for now
const AmountPerChannel = (10'u64 ^ 18).u256 # 1 asset, ERC20 default is 18 decimals

func openLedgerChannel*(
    wallet: WalletRef, hub: EthAddress, asset: EthAddress
): ?!ChannelId =
  wallet.openLedgerChannel(hub, ChainId, asset, AmountPerChannel)

func getOrOpenChannel(wallet: WalletRef, peer: BlockExcPeerCtx): ?!ChannelId =

@@ -36,7 +36,9 @@ func getOrOpenChannel(wallet: WalletRef, peer: BlockExcPeerCtx): ?!ChannelId =
  else:
    failure "no account set for peer"

func pay*(wallet: WalletRef, peer: BlockExcPeerCtx, amount: UInt256): ?!SignedState =
  if account =? peer.account:
    let asset = Asset
    let receiver = account.address

@@ -1,4 +1,4 @@
## Logos Storage
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -7,11 +7,12 @@
## This file may not be copied, modified, or distributed except according to
## those terms.

{.push raises: [].}

import std/tables
import std/monotimes
import std/strutils

import pkg/chronos
import pkg/libp2p

@@ -24,194 +25,133 @@ import ../../logutils

logScope:
  topics = "codex pendingblocks"

declareGauge(
  codex_block_exchange_pending_block_requests,
  "codex blockexchange pending block requests",
)
declareGauge(
  codex_block_exchange_retrieval_time_us, "codex blockexchange block retrieval time us"
)

const
  DefaultBlockRetries* = 3000
  DefaultRetryInterval* = 2.seconds

type
  RetriesExhaustedError* = object of CatchableError
  BlockHandle* = Future[Block].Raising([CancelledError, RetriesExhaustedError])

  BlockReq* = object
    handle*: BlockHandle
    requested*: ?PeerId
    blockRetries*: int
    startTime*: int64

  PendingBlocksManager* = ref object of RootObj
    blockRetries*: int = DefaultBlockRetries
    retryInterval*: Duration = DefaultRetryInterval
    blocks*: Table[BlockAddress, BlockReq] # pending Block requests
    lastInclusion*: Moment # time at which we last included a block into our wantlist

proc updatePendingBlockGauge(p: PendingBlocksManager) =
  codex_block_exchange_pending_block_requests.set(p.blocks.len.int64)

proc getWantHandle*(
    self: PendingBlocksManager, address: BlockAddress, requested: ?PeerId = PeerId.none
): Future[Block] {.async: (raw: true, raises: [CancelledError, RetriesExhaustedError]).} =
  ## Add an event for a block
  ##

  self.blocks.withValue(address, blk):
    return blk[].handle
  do:
    let blk = BlockReq(
      handle: newFuture[Block]("pendingBlocks.getWantHandle"),
      requested: requested,
      blockRetries: self.blockRetries,
      startTime: getMonoTime().ticks,
    )
    self.blocks[address] = blk
    self.lastInclusion = Moment.now()

    let handle = blk.handle

    proc cleanUpBlock(data: pointer) {.raises: [].} =
      self.blocks.del(address)
      self.updatePendingBlockGauge()

    handle.addCallback(cleanUpBlock)
    handle.cancelCallback = proc(data: pointer) {.raises: [].} =
      if not handle.finished:
        handle.removeCallback(cleanUpBlock)
        cleanUpBlock(nil)

    self.updatePendingBlockGauge()
    return handle

proc getWantHandle*(
    self: PendingBlocksManager, cid: Cid, requested: ?PeerId = PeerId.none
): Future[Block] {.async: (raw: true, raises: [CancelledError, RetriesExhaustedError]).} =
  self.getWantHandle(BlockAddress.init(cid), requested)

proc completeWantHandle*(
    self: PendingBlocksManager, address: BlockAddress, blk: Block
) {.raises: [].} =
  ## Complete a pending want handle
  self.blocks.withValue(address, blockReq):
    if not blockReq[].handle.finished:
      trace "Completing want handle from provided block", address
      blockReq[].handle.complete(blk)
    else:
      trace "Want handle already completed", address
  do:
    trace "No pending want handle found for address", address

proc resolve*(
    self: PendingBlocksManager, blocksDelivery: seq[BlockDelivery]
) {.gcsafe, raises: [].} =
  ## Resolve pending blocks
  ##
  for bd in blocksDelivery:
    self.blocks.withValue(bd.address, blockReq):
      if not blockReq[].handle.finished:
        trace "Resolving pending block", address = bd.address
        let
          startTime = blockReq[].startTime
          stopTime = getMonoTime().ticks
          retrievalDurationUs = (stopTime - startTime) div 1000

        blockReq.handle.complete(bd.blk)

        codex_block_exchange_retrieval_time_us.set(retrievalDurationUs)
        if retrievalDurationUs > 500000:
          warn "High block retrieval time", retrievalDurationUs, address = bd.address
      else:
        trace "Block handle already finished", address = bd.address

func retries*(self: PendingBlocksManager, address: BlockAddress): int =
  self.blocks.withValue(address, pending):
    result = pending[].blockRetries
  do:
    result = 0

func decRetries*(self: PendingBlocksManager, address: BlockAddress) =
  self.blocks.withValue(address, pending):
    pending[].blockRetries -= 1

func retriesExhausted*(self: PendingBlocksManager, address: BlockAddress): bool =
  self.blocks.withValue(address, pending):
    result = pending[].blockRetries <= 0

func isRequested*(self: PendingBlocksManager, address: BlockAddress): bool =
  ## Check if a block has been requested from a peer
  ##
  result = false
  self.blocks.withValue(address, pending):
    result = pending[].requested.isSome

func getRequestPeer*(self: PendingBlocksManager, address: BlockAddress): ?PeerId =
  ## Returns the peer that this block was requested from
  ##
  result = PeerId.none
  self.blocks.withValue(address, pending):
    result = pending[].requested

proc markRequested*(
    self: PendingBlocksManager, address: BlockAddress, peer: PeerId
): bool =
  ## Marks this block as having been requested from a peer
  ##
  if self.isRequested(address):
    return false
  self.blocks.withValue(address, pending):
    pending[].requested = peer.some
    return true

proc clearRequest*(
    self: PendingBlocksManager, address: BlockAddress, peer: ?PeerId = PeerId.none
) =
  self.blocks.withValue(address, pending):
    if peer.isSome:
      assert peer == pending[].requested
    pending[].requested = PeerId.none

func contains*(self: PendingBlocksManager, cid: Cid): bool =
  BlockAddress.init(cid) in self.blocks

func contains*(self: PendingBlocksManager, address: BlockAddress): bool =
  address in self.blocks

iterator wantList*(self: PendingBlocksManager): BlockAddress =
  for a in self.blocks.keys:
    yield a

iterator wantListBlockCids*(self: PendingBlocksManager): Cid =
  for a in self.blocks.keys:
    if not a.leaf:
      yield a.cid

iterator wantListCids*(self: PendingBlocksManager): Cid =
  var yieldedCids = initHashSet[Cid]()
  for a in self.blocks.keys:
    let cid = a.cidOrTreeCid
    if cid notin yieldedCids:
      yieldedCids.incl(cid)
      yield cid

iterator wantHandles*(self: PendingBlocksManager): Future[Block] =
  for v in self.blocks.values:
    yield v.handle

proc wantListLen*(self: PendingBlocksManager): int =
  self.blocks.len

func len*(self: PendingBlocksManager): int =
  self.blocks.len

func new*(
    T: type PendingBlocksManager,
    retries = DefaultBlockRetries,
    interval = DefaultRetryInterval,
): PendingBlocksManager =
  PendingBlocksManager(blockRetries: retries, retryInterval: interval)

@@ -1,4 +1,4 @@
## Logos Storage
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -21,28 +21,24 @@ import ../../blocktype as bt
import ../../logutils
import ../protobuf/blockexc as pb
import ../protobuf/payments
import ../../utils/trackedfutures

import ./networkpeer

export networkpeer, payments

logScope:
  topics = "codex blockexcnetwork"

const
  Codec* = "/codex/blockexc/1.0.0"
  DefaultMaxInflight* = 100

type
  WantListHandler* = proc(peer: PeerId, wantList: WantList) {.async: (raises: []).}
  BlocksDeliveryHandler* =
    proc(peer: PeerId, blocks: seq[BlockDelivery]) {.async: (raises: []).}
  BlockPresenceHandler* =
    proc(peer: PeerId, precense: seq[BlockPresence]) {.async: (raises: []).}
  AccountHandler* = proc(peer: PeerId, account: Account) {.async: (raises: []).}
  PaymentHandler* = proc(peer: PeerId, payment: SignedState) {.async: (raises: []).}
  PeerEventHandler* = proc(peer: PeerId) {.async: (raises: [CancelledError]).}

  BlockExcHandlers* = object
    onWantList*: WantListHandler

@@ -50,9 +46,6 @@ type
    onPresence*: BlockPresenceHandler
    onAccount*: AccountHandler
    onPayment*: PaymentHandler
    onPeerJoined*: PeerEventHandler
    onPeerDeparted*: PeerEventHandler
    onPeerDropped*: PeerEventHandler

  WantListSender* = proc(
    id: PeerId,

@@ -61,21 +54,12 @@ type
    cancel: bool = false,
    wantType: WantType = WantType.WantHave,
    full: bool = false,
    sendDontHave: bool = false,
  ) {.async: (raises: [CancelledError]).}
  WantCancellationSender* = proc(peer: PeerId, addresses: seq[BlockAddress]) {.
    async: (raises: [CancelledError])
  .}
  BlocksDeliverySender* = proc(peer: PeerId, blocksDelivery: seq[BlockDelivery]) {.
    async: (raises: [CancelledError])
  .}
  PresenceSender* = proc(peer: PeerId, presence: seq[BlockPresence]) {.
    async: (raises: [CancelledError])
  .}
  AccountSender* =
    proc(peer: PeerId, account: Account) {.async: (raises: [CancelledError]).}
  PaymentSender* =
    proc(peer: PeerId, payment: SignedState) {.async: (raises: [CancelledError]).}

  BlockExcRequest* = object
    sendWantList*: WantListSender

@@ -92,8 +76,6 @@ type
    request*: BlockExcRequest
    getConn: ConnProvider
    inflightSema: AsyncSemaphore
    maxInflight: int = DefaultMaxInflight
    trackedFutures*: TrackedFutures = TrackedFutures()

proc peerId*(b: BlockExcNetwork): PeerId =
  ## Return peer id

@@ -107,9 +89,7 @@ proc isSelf*(b: BlockExcNetwork, peer: PeerId): bool =
  return b.peerId == peer

proc send*(
    b: BlockExcNetwork, id: PeerId, msg: pb.Message
) {.async: (raises: [CancelledError]).} =
  ## Send message to peer
  ##

@@ -117,9 +97,8 @@ proc send*(
    trace "Unable to send, peer not found", peerId = id
    return

  let peer = b.peers[id]
  try:
    await b.inflightSema.acquire()
    await peer.send(msg)
  except CancelledError as error:

@@ -130,8 +109,9 @@ proc send*(
    b.inflightSema.release()

proc handleWantList(
    b: BlockExcNetwork, peer: NetworkPeer, list: WantList
) {.async: (raises: []).} =
  ## Handle incoming want list
  ##

@@ -139,15 +119,14 @@ proc handleWantList(
    await b.handlers.onWantList(peer.id, list)

proc sendWantList*(
    b: BlockExcNetwork,
    id: PeerId,
    addresses: seq[BlockAddress],
    priority: int32 = 0,
    cancel: bool = false,
    wantType: WantType = WantType.WantHave,
    full: bool = false,
    sendDontHave: bool = false,
) {.async: (raw: true, raises: [CancelledError]).} =
  ## Send a want message to peer
  ##

@@ -158,41 +137,43 @@ proc sendWantList*(
          priority: priority,
          cancel: cancel,
          wantType: wantType,
          sendDontHave: sendDontHave,
        )
      ),
    full: full,
  )

  b.send(id, Message(wantlist: msg))

proc sendWantCancellations*(
    b: BlockExcNetwork, id: PeerId, addresses: seq[BlockAddress]
): Future[void] {.async: (raises: [CancelledError]).} =
  ## Informs a remote peer that we're no longer interested in a set of blocks
  ##
  await b.sendWantList(id = id, addresses = addresses, cancel = true)

proc handleBlocksDelivery(
    b: BlockExcNetwork, peer: NetworkPeer, blocksDelivery: seq[BlockDelivery]
) {.async: (raises: []).} =
  ## Handle incoming blocks
  ##

  if not b.handlers.onBlocksDelivery.isNil:
    await b.handlers.onBlocksDelivery(peer.id, blocksDelivery)

proc sendBlocksDelivery*(
    b: BlockExcNetwork, id: PeerId, blocksDelivery: seq[BlockDelivery]
) {.async: (raw: true, raises: [CancelledError]).} =
  ## Send blocks to remote
  ##
  b.send(id, pb.Message(payload: blocksDelivery))

proc handleBlockPresence(
    b: BlockExcNetwork, peer: NetworkPeer, presence: seq[BlockPresence]
) {.async: (raises: []).} =
  ## Handle block presence
  ##

@@ -200,16 +181,18 @@ proc handleBlockPresence(
    await b.handlers.onPresence(peer.id, presence)

proc sendBlockPresence*(
    b: BlockExcNetwork, id: PeerId, presence: seq[BlockPresence]
) {.async: (raw: true, raises: [CancelledError]).} =
  ## Send presence to remote
  ##
  b.send(id, Message(blockPresences: @presence))

proc handleAccount(
    network: BlockExcNetwork, peer: NetworkPeer, account: Account
) {.async: (raises: []).} =
  ## Handle account info
  ##

@@ -217,24 +200,27 @@ proc handleAccount(
    await network.handlers.onAccount(peer.id, account)

proc sendAccount*(
    b: BlockExcNetwork, id: PeerId, account: Account
) {.async: (raw: true, raises: [CancelledError]).} =
  ## Send account info to remote
  ##
  b.send(id, Message(account: AccountMessage.init(account)))

proc sendPayment*(
    b: BlockExcNetwork, id: PeerId, payment: SignedState
) {.async: (raw: true, raises: [CancelledError]).} =
  ## Send payment to remote
  ##
  b.send(id, Message(payment: StateChannelUpdate.init(payment)))

proc handlePayment(
    network: BlockExcNetwork, peer: NetworkPeer, payment: SignedState
) {.async: (raises: []).} =
  ## Handle payment
  ##

@@ -242,185 +228,138 @@ proc handlePayment(
    await network.handlers.onPayment(peer.id, payment)

proc rpcHandler(
    self: BlockExcNetwork, peer: NetworkPeer, msg: Message
) {.async: (raises: []).} =
  ## handle rpc messages
  ##
  if msg.wantList.entries.len > 0:
    self.trackedFutures.track(self.handleWantList(peer, msg.wantList))

  if msg.payload.len > 0:
    self.trackedFutures.track(self.handleBlocksDelivery(peer, msg.payload))

  if msg.blockPresences.len > 0:
    self.trackedFutures.track(self.handleBlockPresence(peer, msg.blockPresences))
if account =? Account.init(msg.account): if account =? Account.init(msg.account):
self.trackedFutures.track(self.handleAccount(peer, account)) asyncSpawn b.handleAccount(peer, account)
if payment =? SignedState.init(msg.payment): if payment =? SignedState.init(msg.payment):
self.trackedFutures.track(self.handlePayment(peer, payment)) asyncSpawn b.handlePayment(peer, payment)
proc getOrCreatePeer(self: BlockExcNetwork, peer: PeerId): NetworkPeer = proc getOrCreatePeer(b: BlockExcNetwork, peer: PeerId): NetworkPeer =
## Creates or retrieves a BlockExcNetwork Peer ## Creates or retrieves a BlockExcNetwork Peer
## ##
if peer in self.peers: if peer in b.peers:
return self.peers.getOrDefault(peer, nil) return b.peers.getOrDefault(peer, nil)
var getConn: ConnProvider = proc(): Future[Connection] {. var getConn: ConnProvider = proc(): Future[Connection] {.async, gcsafe, closure.} =
async: (raises: [CancelledError])
.} =
try: try:
trace "Getting new connection stream", peer return await b.switch.dial(peer, Codec)
return await self.switch.dial(peer, Codec)
except CancelledError as error: except CancelledError as error:
raise error raise error
except CatchableError as exc: except CatchableError as exc:
trace "Unable to connect to blockexc peer", exc = exc.msg trace "Unable to connect to blockexc peer", exc = exc.msg
if not isNil(self.getConn): if not isNil(b.getConn):
getConn = self.getConn getConn = b.getConn
let rpcHandler = proc(p: NetworkPeer, msg: Message) {.async: (raises: []).} = let rpcHandler = proc (p: NetworkPeer, msg: Message) {.async.} =
await self.rpcHandler(p, msg) b.rpcHandler(p, msg)
# create new pubsub peer # create new pubsub peer
let blockExcPeer = NetworkPeer.new(peer, getConn, rpcHandler) let blockExcPeer = NetworkPeer.new(peer, getConn, rpcHandler)
debug "Created new blockexc peer", peer debug "Created new blockexc peer", peer
self.peers[peer] = blockExcPeer b.peers[peer] = blockExcPeer
return blockExcPeer return blockExcPeer
proc dialPeer*(self: BlockExcNetwork, peer: PeerRecord) {.async.} = proc setupPeer*(b: BlockExcNetwork, peer: PeerId) =
## Perform initial setup, such as want
## list exchange
##
discard b.getOrCreatePeer(peer)
proc dialPeer*(b: BlockExcNetwork, peer: PeerRecord) {.async.} =
## Dial a peer ## Dial a peer
## ##
if self.isSelf(peer.peerId): if b.isSelf(peer.peerId):
trace "Skipping dialing self", peer = peer.peerId trace "Skipping dialing self", peer = peer.peerId
return return
if peer.peerId in self.peers: await b.switch.connect(peer.peerId, peer.addresses.mapIt(it.address))
trace "Already connected to peer", peer = peer.peerId
return
await self.switch.connect(peer.peerId, peer.addresses.mapIt(it.address)) proc dropPeer*(b: BlockExcNetwork, peer: PeerId) =
proc dropPeer*(
self: BlockExcNetwork, peer: PeerId
) {.async: (raises: [CancelledError]).} =
trace "Dropping peer", peer
try:
if not self.switch.isNil:
await self.switch.disconnect(peer)
except CatchableError as error:
warn "Error attempting to disconnect from peer", peer = peer, error = error.msg
if not self.handlers.onPeerDropped.isNil:
await self.handlers.onPeerDropped(peer)
proc handlePeerJoined*(
self: BlockExcNetwork, peer: PeerId
) {.async: (raises: [CancelledError]).} =
discard self.getOrCreatePeer(peer)
if not self.handlers.onPeerJoined.isNil:
await self.handlers.onPeerJoined(peer)
proc handlePeerDeparted*(
self: BlockExcNetwork, peer: PeerId
) {.async: (raises: [CancelledError]).} =
## Cleanup disconnected peer ## Cleanup disconnected peer
## ##
trace "Cleaning up departed peer", peer b.peers.del(peer)
self.peers.del(peer)
if not self.handlers.onPeerDeparted.isNil:
await self.handlers.onPeerDeparted(peer)
method init*(self: BlockExcNetwork) {.raises: [].} = method init*(b: BlockExcNetwork) =
## Perform protocol initialization ## Perform protocol initialization
## ##
proc peerEventHandler( proc peerEventHandler(peerId: PeerId, event: PeerEvent) {.async.} =
peerId: PeerId, event: PeerEvent
): Future[void] {.async: (raises: [CancelledError]).} =
if event.kind == PeerEventKind.Joined: if event.kind == PeerEventKind.Joined:
await self.handlePeerJoined(peerId) b.setupPeer(peerId)
elif event.kind == PeerEventKind.Left:
await self.handlePeerDeparted(peerId)
else: else:
warn "Unknown peer event", event b.dropPeer(peerId)
self.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined) b.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined)
self.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left) b.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left)
proc handler( proc handle(conn: Connection, proto: string) {.async, gcsafe, closure.} =
conn: Connection, proto: string
): Future[void] {.async: (raises: [CancelledError]).} =
let peerId = conn.peerId let peerId = conn.peerId
let blockexcPeer = self.getOrCreatePeer(peerId) let blockexcPeer = b.getOrCreatePeer(peerId)
await blockexcPeer.readLoop(conn) # attach read loop await blockexcPeer.readLoop(conn) # attach read loop
self.handler = handler b.handler = handle
self.codec = Codec b.codec = Codec
proc stop*(self: BlockExcNetwork) {.async: (raises: []).} =
await self.trackedFutures.cancelTracked()
proc new*( proc new*(
T: type BlockExcNetwork, T: type BlockExcNetwork,
switch: Switch, switch: Switch,
connProvider: ConnProvider = nil, connProvider: ConnProvider = nil,
maxInflight = DefaultMaxInflight, maxInflight = MaxInflight): BlockExcNetwork =
): BlockExcNetwork =
## Create a new BlockExcNetwork instance ## Create a new BlockExcNetwork instance
## ##
let self = BlockExcNetwork( let
switch: switch, self = BlockExcNetwork(
getConn: connProvider, switch: switch,
inflightSema: newAsyncSemaphore(maxInflight), getConn: connProvider,
maxInflight: maxInflight, inflightSema: newAsyncSemaphore(maxInflight))
)
self.maxIncomingStreams = self.maxInflight
proc sendWantList( proc sendWantList(
id: PeerId, id: PeerId,
cids: seq[BlockAddress], cids: seq[BlockAddress],
priority: int32 = 0, priority: int32 = 0,
cancel: bool = false, cancel: bool = false,
wantType: WantType = WantType.WantHave, wantType: WantType = WantType.WantHave,
full: bool = false, full: bool = false,
sendDontHave: bool = false, sendDontHave: bool = false): Future[void] {.gcsafe.} =
): Future[void] {.async: (raw: true, raises: [CancelledError]).} = self.sendWantList(
self.sendWantList(id, cids, priority, cancel, wantType, full, sendDontHave) id, cids, priority, cancel,
wantType, full, sendDontHave)
proc sendWantCancellations( proc sendWantCancellations(id: PeerId, addresses: seq[BlockAddress]): Future[void] {.gcsafe.} =
id: PeerId, addresses: seq[BlockAddress]
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
self.sendWantCancellations(id, addresses) self.sendWantCancellations(id, addresses)
proc sendBlocksDelivery( proc sendBlocksDelivery(id: PeerId, blocksDelivery: seq[BlockDelivery]): Future[void] {.gcsafe.} =
id: PeerId, blocksDelivery: seq[BlockDelivery]
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
self.sendBlocksDelivery(id, blocksDelivery) self.sendBlocksDelivery(id, blocksDelivery)
proc sendPresence( proc sendPresence(id: PeerId, presence: seq[BlockPresence]): Future[void] {.gcsafe.} =
id: PeerId, presence: seq[BlockPresence]
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
self.sendBlockPresence(id, presence) self.sendBlockPresence(id, presence)
proc sendAccount( proc sendAccount(id: PeerId, account: Account): Future[void] {.gcsafe.} =
id: PeerId, account: Account
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
self.sendAccount(id, account) self.sendAccount(id, account)
proc sendPayment( proc sendPayment(id: PeerId, payment: SignedState): Future[void] {.gcsafe.} =
id: PeerId, payment: SignedState
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
self.sendPayment(id, payment) self.sendPayment(id, payment)
self.request = BlockExcRequest( self.request = BlockExcRequest(
@ -429,8 +368,7 @@ proc new*(
sendBlocksDelivery: sendBlocksDelivery, sendBlocksDelivery: sendBlocksDelivery,
sendPresence: sendPresence, sendPresence: sendPresence,
sendAccount: sendAccount, sendAccount: sendAccount,
sendPayment: sendPayment, sendPayment: sendPayment)
)
self.init() self.init()
return self return self
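A quick orientation on the API above: on the minus side every sender carries an explicit `raises` annotation (chronos typed-async style), while on the plus side senders return a bare `Future[void]`. In both, `sendWantList` is the entry point for asking peers about blocks. Below is a minimal caller sketch against the annotated signature; `network`, `peer`, and `address` are assumed values and `queryBlock` is an illustrative name, not part of this changeset:

proc queryBlock(
    network: BlockExcNetwork, peer: PeerId, address: BlockAddress
) {.async: (raises: [CancelledError]).} =
  # Ask for presence only (WantHave); `sendDontHave = true` requests an
  # explicit negative answer, so unresponsive peers can be pruned later.
  await network.sendWantList(
    peer, @[address], wantType = WantType.WantHave, sendDontHave = true)
  # A follow-up call with `wantType = WantType.WantBlock` requests the
  # payload itself; `cancel = true` revokes a previous entry.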
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,7 +7,8 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [].}
+import pkg/upraises
+push: {.upraises: [].}

 import pkg/chronos
 import pkg/libp2p
@@ -16,98 +17,78 @@ import ../protobuf/blockexc
 import ../protobuf/message
 import ../../errors
 import ../../logutils
-import ../../utils/trackedfutures

 logScope:
   topics = "codex blockexcnetworkpeer"

-const DefaultYieldInterval = 50.millis
-
 type
-  ConnProvider* = proc(): Future[Connection] {.async: (raises: [CancelledError]).}
+  ConnProvider* = proc(): Future[Connection] {.gcsafe, closure.}

-  RPCHandler* = proc(peer: NetworkPeer, msg: Message) {.async: (raises: []).}
+  RPCHandler* = proc(peer: NetworkPeer, msg: Message): Future[void] {.gcsafe.}

   NetworkPeer* = ref object of RootObj
     id*: PeerId
     handler*: RPCHandler
     sendConn: Connection
     getConn: ConnProvider
-    yieldInterval*: Duration = DefaultYieldInterval
-    trackedFutures: TrackedFutures

-proc connected*(self: NetworkPeer): bool =
-  not (isNil(self.sendConn)) and not (self.sendConn.closed or self.sendConn.atEof)
+proc connected*(b: NetworkPeer): bool =
+  not(isNil(b.sendConn)) and
+    not(b.sendConn.closed or b.sendConn.atEof)

-proc readLoop*(self: NetworkPeer, conn: Connection) {.async: (raises: []).} =
+proc readLoop*(b: NetworkPeer, conn: Connection) {.async.} =
   if isNil(conn):
-    trace "No connection to read from", peer = self.id
     return

-  trace "Attaching read loop", peer = self.id, connId = conn.oid
   try:
-    var nextYield = Moment.now() + self.yieldInterval
     while not conn.atEof or not conn.closed:
-      if Moment.now() > nextYield:
-        nextYield = Moment.now() + self.yieldInterval
-        trace "Yielding in read loop",
-          peer = self.id, nextYield = nextYield, interval = self.yieldInterval
-        await sleepAsync(10.millis)
       let
         data = await conn.readLp(MaxMessageSize.int)
         msg = Message.protobufDecode(data).mapFailure().tryGet()
-      trace "Received message", peer = self.id, connId = conn.oid
-      await self.handler(self, msg)
+      await b.handler(b, msg)
   except CancelledError:
     trace "Read loop cancelled"
   except CatchableError as err:
     warn "Exception in blockexc read loop", msg = err.msg
   finally:
-    warn "Detaching read loop", peer = self.id, connId = conn.oid
-    if self.sendConn == conn:
-      self.sendConn = nil
     await conn.close()

-proc connect*(
-    self: NetworkPeer
-): Future[Connection] {.async: (raises: [CancelledError]).} =
-  if self.connected:
-    trace "Already connected", peer = self.id, connId = self.sendConn.oid
-    return self.sendConn
+proc connect*(b: NetworkPeer): Future[Connection] {.async.} =
+  if b.connected:
+    return b.sendConn

-  self.sendConn = await self.getConn()
-  self.trackedFutures.track(self.readLoop(self.sendConn))
-  return self.sendConn
+  b.sendConn = await b.getConn()
+  asyncSpawn b.readLoop(b.sendConn)
+  return b.sendConn

-proc send*(
-    self: NetworkPeer, msg: Message
-) {.async: (raises: [CancelledError, LPStreamError]).} =
-  let conn = await self.connect()
+proc send*(b: NetworkPeer, msg: Message) {.async.} =
+  let conn = await b.connect()

   if isNil(conn):
-    warn "Unable to get send connection for peer message not sent", peer = self.id
+    warn "Unable to get send connection for peer message not sent", peer = b.id
     return

-  trace "Sending message", peer = self.id, connId = conn.oid
-  try:
-    await conn.writeLp(protobufEncode(msg))
-  except CatchableError as err:
-    if self.sendConn == conn:
-      self.sendConn = nil
-    raise newException(LPStreamError, "Failed to send message: " & err.msg)
+  await conn.writeLp(protobufEncode(msg))
+
+proc broadcast*(b: NetworkPeer, msg: Message) =
+  proc sendAwaiter() {.async.} =
+    try:
+      await b.send(msg)
+    except CatchableError as exc:
+      warn "Exception broadcasting message to peer", peer = b.id, exc = exc.msg
+
+  asyncSpawn sendAwaiter()

 func new*(
   T: type NetworkPeer,
   peer: PeerId,
   connProvider: ConnProvider,
-  rpcHandler: RPCHandler,
-): NetworkPeer =
+  rpcHandler: RPCHandler): NetworkPeer =

-  doAssert(not isNil(connProvider), "should supply connection provider")
+  doAssert(not isNil(connProvider),
+    "should supply connection provider")

   NetworkPeer(
     id: peer,
     getConn: connProvider,
-    handler: rpcHandler,
-    trackedFutures: TrackedFutures(),
-  )
+    handler: rpcHandler)
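The `ConnProvider` closure is the peer's only source of streams, which keeps `NetworkPeer` testable: a test can inject a provider that returns a canned connection. A hedged sketch of a provider against the typed signature above, mirroring the dial pattern used in the network module; `switch`, `peer`, and `myRpcHandler` are assumed to exist:

let getConn: ConnProvider = proc(): Future[Connection] {.
    async: (raises: [CancelledError])
.} =
  try:
    # Dial the blockexc protocol; on failure we fall through and return
    # nil, so `send` logs "Unable to get send connection" and drops the
    # message instead of crashing.
    return await switch.dial(peer, Codec)
  except CancelledError as error:
    raise error
  except CatchableError as exc:
    trace "Unable to dial peer", exc = exc.msg

let networkPeer = NetworkPeer.new(peer, getConn, myRpcHandler)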
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -25,77 +25,29 @@ import ../../logutils

 export payments, nitro

-const
-  MinRefreshInterval = 1.seconds
-  MaxRefreshBackoff = 36 # 36 seconds
-  MaxWantListBatchSize* = 1024 # Maximum blocks to send per WantList message
-
-type BlockExcPeerCtx* = ref object of RootObj
-  id*: PeerId
-  blocks*: Table[BlockAddress, Presence] # remote peer have list including price
-  wantedBlocks*: HashSet[BlockAddress] # blocks that the peer wants
-  exchanged*: int # times peer has exchanged with us
-  refreshInProgress*: bool # indicates if a refresh is in progress
-  lastRefresh*: Moment # last time we refreshed our knowledge of the blocks this peer has
-  refreshBackoff*: int = 1 # backoff factor for refresh requests
-  account*: ?Account # ethereum account of this peer
-  paymentChannel*: ?ChannelId # payment channel id
-  blocksSent*: HashSet[BlockAddress] # blocks sent to peer
-  blocksRequested*: HashSet[BlockAddress] # pending block requests to this peer
-  lastExchange*: Moment # last time peer has sent us a block
-  activityTimeout*: Duration
-  lastSentWants*: HashSet[BlockAddress]
-    # track what wantList we last sent for delta updates
-
-proc isKnowledgeStale*(self: BlockExcPeerCtx): bool =
-  let staleness =
-    self.lastRefresh + self.refreshBackoff * MinRefreshInterval < Moment.now()
-  if staleness and self.refreshInProgress:
-    trace "Cleaning up refresh state", peer = self.id
-    self.refreshInProgress = false
-    self.refreshBackoff = 1
-  staleness
-
-proc isBlockSent*(self: BlockExcPeerCtx, address: BlockAddress): bool =
-  address in self.blocksSent
-
-proc markBlockAsSent*(self: BlockExcPeerCtx, address: BlockAddress) =
-  self.blocksSent.incl(address)
-
-proc markBlockAsNotSent*(self: BlockExcPeerCtx, address: BlockAddress) =
-  self.blocksSent.excl(address)
-
-proc refreshRequested*(self: BlockExcPeerCtx) =
-  trace "Refresh requested for peer", peer = self.id, backoff = self.refreshBackoff
-  self.refreshInProgress = true
-  self.lastRefresh = Moment.now()
-
-proc refreshReplied*(self: BlockExcPeerCtx) =
-  self.refreshInProgress = false
-  self.lastRefresh = Moment.now()
-  self.refreshBackoff = min(self.refreshBackoff * 2, MaxRefreshBackoff)
-
-proc havesUpdated(self: BlockExcPeerCtx) =
-  self.refreshBackoff = 1
-
-proc wantsUpdated*(self: BlockExcPeerCtx) =
-  self.refreshBackoff = 1
-
-proc peerHave*(self: BlockExcPeerCtx): HashSet[BlockAddress] =
-  # XXX: this is ugly and inefficient, but since those will typically
-  #   be used in "joins", it's better to pay the price here and have
-  #   a linear join than to not do it and have a quadratic join.
-  toHashSet(self.blocks.keys.toSeq)
+type
+  BlockExcPeerCtx* = ref object of RootObj
+    id*: PeerId
+    blocks*: Table[BlockAddress, Presence] # remote peer have list including price
+    peerWants*: seq[WantListEntry] # remote peers want lists
+    exchanged*: int # times peer has exchanged with us
+    lastExchange*: Moment # last time peer has exchanged with us
+    account*: ?Account # ethereum account of this peer
+    paymentChannel*: ?ChannelId # payment channel id
+
+proc peerHave*(self: BlockExcPeerCtx): seq[BlockAddress] =
+  toSeq(self.blocks.keys)
+
+proc peerHaveCids*(self: BlockExcPeerCtx): HashSet[Cid] =
+  self.blocks.keys.toSeq.mapIt(it.cidOrTreeCid).toHashSet
+
+proc peerWantsCids*(self: BlockExcPeerCtx): HashSet[Cid] =
+  self.peerWants.mapIt(it.address.cidOrTreeCid).toHashSet

 proc contains*(self: BlockExcPeerCtx, address: BlockAddress): bool =
   address in self.blocks

 func setPresence*(self: BlockExcPeerCtx, presence: Presence) =
-  if presence.address notin self.blocks:
-    self.havesUpdated()
   self.blocks[presence.address] = presence

 func cleanPresence*(self: BlockExcPeerCtx, addresses: seq[BlockAddress]) =
@@ -112,36 +64,3 @@ func price*(self: BlockExcPeerCtx, addresses: seq[BlockAddress]): UInt256 =
       price += precense[].price

   price
-
-proc blockRequestScheduled*(self: BlockExcPeerCtx, address: BlockAddress) =
-  ## Adds a block to the set of blocks that have been requested from this peer
-  ## (its request schedule).
-  if self.blocksRequested.len == 0:
-    self.lastExchange = Moment.now()
-  self.blocksRequested.incl(address)
-
-proc blockRequestCancelled*(self: BlockExcPeerCtx, address: BlockAddress) =
-  ## Removes a block from the set of blocks that have been requested from this peer
-  ## (its request schedule).
-  self.blocksRequested.excl(address)
-
-proc blockReceived*(self: BlockExcPeerCtx, address: BlockAddress): bool =
-  let wasRequested = address in self.blocksRequested
-  self.blocksRequested.excl(address)
-  self.lastExchange = Moment.now()
-  wasRequested
-
-proc activityTimer*(
-    self: BlockExcPeerCtx
-): Future[void] {.async: (raises: [CancelledError]).} =
-  ## This is called by the block exchange when a block is scheduled for this peer.
-  ## If the peer sends no blocks for a while, it is considered inactive/uncooperative
-  ## and the peer is dropped. Note that ANY block that the peer sends will reset this
-  ## timer for all blocks.
-  ##
-  while true:
-    let idleTime = Moment.now() - self.lastExchange
-    if idleTime > self.activityTimeout:
-      return
-    await sleepAsync(self.activityTimeout - idleTime)
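The refresh fields on the minus side implement a plain exponential backoff: `isKnowledgeStale` fires once `lastRefresh + refreshBackoff * MinRefreshInterval` is in the past, and each `refreshReplied` doubles the backoff up to `MaxRefreshBackoff`, so an idle peer is re-queried after 1 s, 2 s, 4 s, and so on, capped at 36 s, while any have/want update resets the factor to 1. A small interaction sketch using only the procs shown above (`somePeerId` is an assumed value):

var ctx = BlockExcPeerCtx(id: somePeerId)
ctx.refreshRequested() # refresh in flight, lastRefresh = now
ctx.refreshReplied()   # backoff 1 -> 2: next refresh due in ~2 s
ctx.refreshReplied()   # backoff 2 -> 4
ctx.wantsUpdated()     # new activity resets the backoff factor to 1
assert not ctx.isKnowledgeStale() # just refreshed, knowledge still fresh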
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,12 +7,13 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [].}
-
 import std/sequtils
 import std/tables
 import std/algorithm
+import std/sequtils
+
+import pkg/upraises
+
+push: {.upraises: [].}

 import pkg/chronos
 import pkg/libp2p
@@ -21,6 +22,7 @@ import ../protobuf/blockexc
 import ../../blocktype
 import ../../logutils
+
 import ./peercontext
 export peercontext
@@ -31,8 +33,6 @@ type
   PeerCtxStore* = ref object of RootObj
     peers*: OrderedTable[PeerId, BlockExcPeerCtx]

-  PeersForBlock* = tuple[with: seq[BlockExcPeerCtx], without: seq[BlockExcPeerCtx]]
-
 iterator items*(self: PeerCtxStore): BlockExcPeerCtx =
   for p in self.peers.values:
     yield p
@@ -41,10 +41,7 @@ proc contains*(a: openArray[BlockExcPeerCtx], b: PeerId): bool =
   ## Convenience method to check for peer presence
   ##
-  a.anyIt(it.id == b)
-
-func peerIds*(self: PeerCtxStore): seq[PeerId] =
-  toSeq(self.peers.keys)
+  a.anyIt( it.id == b )

 func contains*(self: PeerCtxStore, peerId: PeerId): bool =
   peerId in self.peers
@@ -62,27 +59,43 @@ func len*(self: PeerCtxStore): int =
   self.peers.len

 func peersHave*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
-  toSeq(self.peers.values).filterIt(address in it.peerHave)
+  toSeq(self.peers.values).filterIt( it.peerHave.anyIt( it == address ) )

 func peersHave*(self: PeerCtxStore, cid: Cid): seq[BlockExcPeerCtx] =
-  # FIXME: this is way slower and can end up leading to unexpected performance loss.
-  toSeq(self.peers.values).filterIt(it.peerHave.anyIt(it.cidOrTreeCid == cid))
+  toSeq(self.peers.values).filterIt( it.peerHave.anyIt( it.cidOrTreeCid == cid ) )

 func peersWant*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
-  toSeq(self.peers.values).filterIt(address in it.wantedBlocks)
+  toSeq(self.peers.values).filterIt( it.peerWants.anyIt( it == address ) )

 func peersWant*(self: PeerCtxStore, cid: Cid): seq[BlockExcPeerCtx] =
-  # FIXME: this is way slower and can end up leading to unexpected performance loss.
-  toSeq(self.peers.values).filterIt(it.wantedBlocks.anyIt(it.cidOrTreeCid == cid))
+  toSeq(self.peers.values).filterIt( it.peerWants.anyIt( it.address.cidOrTreeCid == cid ) )

-proc getPeersForBlock*(self: PeerCtxStore, address: BlockAddress): PeersForBlock =
-  var res: PeersForBlock = (@[], @[])
-  for peer in self:
-    if address in peer:
-      res.with.add(peer)
-    else:
-      res.without.add(peer)
-  res
+func selectCheapest*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
+  # assume that the price for all leaves in a tree is the same
+  let rootAddress = BlockAddress(leaf: false, cid: address.cidOrTreeCid)
+  var peers = self.peersHave(rootAddress)
+
+  func cmp(a, b: BlockExcPeerCtx): int =
+    var
+      priceA = 0.u256
+      priceB = 0.u256
+
+    a.blocks.withValue(rootAddress, precense):
+      priceA = precense[].price
+
+    b.blocks.withValue(rootAddress, precense):
+      priceB = precense[].price
+
+    if priceA == priceB:
+      0
+    elif priceA > priceB:
+      1
+    else:
+      -1
+
+  peers.sort(cmp)
+  trace "Selected cheapest peers", peers = peers.len
+  return peers

 proc new*(T: type PeerCtxStore): PeerCtxStore =
   ## create new instance of a peer context store
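`selectCheapest` on the plus side encodes a simple policy: price is advertised per tree root and all leaves are assumed to cost the same, so candidates are sorted ascending by the root's advertised price. A usage sketch against an assumed populated `store: PeerCtxStore` and a leaf `address`:

let candidates = store.selectCheapest(address)
if candidates.len > 0:
  # Ascending sort: the first entry advertises the lowest price for the
  # tree root that `address` belongs to.
  let cheapest = candidates[0]
  trace "Would request block from", peer = cheapest.id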
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,6 +9,7 @@
 import std/hashes
 import std/sequtils
+import pkg/stew/endians2

 import message
@@ -19,6 +20,13 @@ export Wantlist, WantType, WantListEntry
 export BlockDelivery, BlockPresenceType, BlockPresence
 export AccountMessage, StateChannelUpdate

+proc hash*(a: BlockAddress): Hash =
+  if a.leaf:
+    let data = a.treeCid.data.buffer & @(a.index.uint64.toBytesBE)
+    hash(data)
+  else:
+    hash(a.cid.data.buffer)
+
 proc hash*(e: WantListEntry): Hash =
   hash(e.address)
@@ -34,6 +42,7 @@ proc `==`*(a: WantListEntry, b: BlockAddress): bool =

 proc `<`*(a, b: WantListEntry): bool =
   a.priority < b.priority
+
 proc `==`*(a: BlockPresence, b: BlockAddress): bool =
   return a.address == b
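The `hash` overload added here is what lets `BlockAddress` serve as a `HashSet`/`Table` key: leaf addresses hash the tree CID concatenated with the big-endian index, so distinct leaves of the same tree land in different buckets. A quick sketch, assuming a valid `treeCid: Cid` is available:

import std/sets

var wanted: HashSet[BlockAddress]
wanted.incl BlockAddress(leaf: true, treeCid: treeCid, index: 0)
wanted.incl BlockAddress(leaf: true, treeCid: treeCid, index: 1)
assert wanted.len == 2 # different indices, different hashes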
@@ -1,4 +1,4 @@
-# Protocol of data exchange between Logos Storage nodes
+# Protocol of data exchange between Codex nodes
 # and Protobuf encoder/decoder for these messages.
 #
 # Eventually all this code should be auto-generated from message.proto.
@@ -20,44 +20,40 @@ const
 type
   WantType* = enum
-    WantBlock = 0
+    WantBlock = 0,
     WantHave = 1

   WantListEntry* = object
     address*: BlockAddress
-    # XXX: I think explicit priority is pointless as the peer will request
-    #   the blocks in the order it wants to receive them, and all we have to
-    #   do is process those in the same order as we send them back. It also
-    #   complicates things for no reason at the moment, as the priority is
-    #   always set to 0.
     priority*: int32 # The priority (normalized). default to 1
     cancel*: bool # Whether this revokes an entry
     wantType*: WantType # Note: defaults to enum 0, ie Block
     sendDontHave*: bool # Note: defaults to false
+    inFlight*: bool # Whether block sending is in progress. Not serialized.

   WantList* = object
     entries*: seq[WantListEntry] # A list of wantList entries
     full*: bool # Whether this is the full wantList. default to false

   BlockDelivery* = object
     blk*: Block
     address*: BlockAddress
     proof*: ?CodexProof # Present only if `address.leaf` is true

   BlockPresenceType* = enum
-    Have = 0
+    Have = 0,
     DontHave = 1

   BlockPresence* = object
     address*: BlockAddress
     `type`*: BlockPresenceType
     price*: seq[byte] # Amount of assets to pay for the block (UInt256)

   AccountMessage* = object
     address*: seq[byte] # Ethereum address to which payments should be made

   StateChannelUpdate* = object
     update*: seq[byte] # Signed Nitro state, serialized as JSON

   Message* = object
     wantList*: WantList
@@ -101,7 +97,7 @@ proc write*(pb: var ProtoBuffer, field: int, value: WantList) =
   pb.write(field, ipb)

 proc write*(pb: var ProtoBuffer, field: int, value: BlockDelivery) =
-  var ipb = initProtoBuffer()
+  var ipb = initProtoBuffer(maxSize = MaxBlockSize)
   ipb.write(1, value.blk.cid.data.buffer)
   ipb.write(2, value.blk.data)
   ipb.write(3, value.address)
@@ -132,7 +128,7 @@ proc write*(pb: var ProtoBuffer, field: int, value: StateChannelUpdate) =
   pb.write(field, ipb)

 proc protobufEncode*(value: Message): seq[byte] =
-  var ipb = initProtoBuffer()
+  var ipb = initProtoBuffer(maxSize = MaxMessageSize)
   ipb.write(1, value.wantList)
   for v in value.payload:
     ipb.write(3, v)
@@ -144,6 +140,7 @@ proc protobufEncode*(value: Message): seq[byte] =
   ipb.finish()
   ipb.buffer

+
 #
 # Decoding Message from seq[byte] in Protobuf format
 #
@@ -154,22 +151,22 @@ proc decode*(_: type BlockAddress, pb: ProtoBuffer): ProtoResult[BlockAddress] =
     field: uint64
     cidBuf = newSeq[byte]()

-  if ?pb.getField(1, field):
+  if ? pb.getField(1, field):
     leaf = bool(field)

   if leaf:
     var
       treeCid: Cid
       index: Natural
-    if ?pb.getField(2, cidBuf):
-      treeCid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
-    if ?pb.getField(3, field):
+    if ? pb.getField(2, cidBuf):
+      treeCid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
+    if ? pb.getField(3, field):
       index = field
     value = BlockAddress(leaf: true, treeCid: treeCid, index: index)
   else:
     var cid: Cid
-    if ?pb.getField(4, cidBuf):
-      cid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
+    if ? pb.getField(4, cidBuf):
+      cid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
     value = BlockAddress(leaf: false, cid: cid)

   ok(value)
@@ -179,15 +176,15 @@ proc decode*(_: type WantListEntry, pb: ProtoBuffer): ProtoResult[WantListEntry]
     value = WantListEntry()
     field: uint64
     ipb: ProtoBuffer
-  if ?pb.getField(1, ipb):
-    value.address = ?BlockAddress.decode(ipb)
-  if ?pb.getField(2, field):
+  if ? pb.getField(1, ipb):
+    value.address = ? BlockAddress.decode(ipb)
+  if ? pb.getField(2, field):
     value.priority = int32(field)
-  if ?pb.getField(3, field):
+  if ? pb.getField(3, field):
     value.cancel = bool(field)
-  if ?pb.getField(4, field):
+  if ? pb.getField(4, field):
     value.wantType = WantType(field)
-  if ?pb.getField(5, field):
+  if ? pb.getField(5, field):
     value.sendDontHave = bool(field)
   ok(value)
@@ -196,10 +193,10 @@ proc decode*(_: type WantList, pb: ProtoBuffer): ProtoResult[WantList] =
     value = WantList()
     field: uint64
     sublist: seq[seq[byte]]
-  if ?pb.getRepeatedField(1, sublist):
+  if ? pb.getRepeatedField(1, sublist):
     for item in sublist:
-      value.entries.add(?WantListEntry.decode(initProtoBuffer(item)))
-  if ?pb.getField(2, field):
+      value.entries.add(? WantListEntry.decode(initProtoBuffer(item)))
+  if ? pb.getField(2, field):
     value.full = bool(field)
   ok(value)
@@ -211,18 +208,17 @@ proc decode*(_: type BlockDelivery, pb: ProtoBuffer): ProtoResult[BlockDelivery]
     cid: Cid
     ipb: ProtoBuffer

-  if ?pb.getField(1, cidBuf):
-    cid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
-  if ?pb.getField(2, dataBuf):
-    value.blk =
-      ?Block.new(cid, dataBuf, verify = true).mapErr(x => ProtoError.IncorrectBlob)
-  if ?pb.getField(3, ipb):
-    value.address = ?BlockAddress.decode(ipb)
+  if ? pb.getField(1, cidBuf):
+    cid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
+  if ? pb.getField(2, dataBuf):
+    value.blk = ? Block.new(cid, dataBuf, verify = true).mapErr(x => ProtoError.IncorrectBlob)
+  if ? pb.getField(3, ipb):
+    value.address = ? BlockAddress.decode(ipb)

   if value.address.leaf:
     var proofBuf = newSeq[byte]()
-    if ?pb.getField(4, proofBuf):
-      let proof = ?CodexProof.decode(proofBuf).mapErr(x => ProtoError.IncorrectBlob)
+    if ? pb.getField(4, proofBuf):
+      let proof = ? CodexProof.decode(proofBuf).mapErr(x => ProtoError.IncorrectBlob)
       value.proof = proof.some
   else:
     value.proof = CodexProof.none
@@ -236,42 +232,42 @@ proc decode*(_: type BlockPresence, pb: ProtoBuffer): ProtoResult[BlockPresence]
     value = BlockPresence()
     field: uint64
     ipb: ProtoBuffer
-  if ?pb.getField(1, ipb):
-    value.address = ?BlockAddress.decode(ipb)
-  if ?pb.getField(2, field):
+  if ? pb.getField(1, ipb):
+    value.address = ? BlockAddress.decode(ipb)
+  if ? pb.getField(2, field):
     value.`type` = BlockPresenceType(field)
-  discard ?pb.getField(3, value.price)
+  discard ? pb.getField(3, value.price)
   ok(value)

 proc decode*(_: type AccountMessage, pb: ProtoBuffer): ProtoResult[AccountMessage] =
-  var value = AccountMessage()
-  discard ?pb.getField(1, value.address)
+  var
+    value = AccountMessage()
+  discard ? pb.getField(1, value.address)
   ok(value)

-proc decode*(
-    _: type StateChannelUpdate, pb: ProtoBuffer
-): ProtoResult[StateChannelUpdate] =
-  var value = StateChannelUpdate()
-  discard ?pb.getField(1, value.update)
+proc decode*(_: type StateChannelUpdate, pb: ProtoBuffer): ProtoResult[StateChannelUpdate] =
+  var
+    value = StateChannelUpdate()
+  discard ? pb.getField(1, value.update)
   ok(value)

 proc protobufDecode*(_: type Message, msg: seq[byte]): ProtoResult[Message] =
   var
     value = Message()
-    pb = initProtoBuffer(msg)
+    pb = initProtoBuffer(msg, maxSize = MaxMessageSize)
     ipb: ProtoBuffer
     sublist: seq[seq[byte]]
-  if ?pb.getField(1, ipb):
-    value.wantList = ?WantList.decode(ipb)
-  if ?pb.getRepeatedField(3, sublist):
+  if ? pb.getField(1, ipb):
+    value.wantList = ? WantList.decode(ipb)
+  if ? pb.getRepeatedField(3, sublist):
     for item in sublist:
-      value.payload.add(?BlockDelivery.decode(initProtoBuffer(item)))
-  if ?pb.getRepeatedField(4, sublist):
+      value.payload.add(? BlockDelivery.decode(initProtoBuffer(item, maxSize = MaxBlockSize)))
+  if ? pb.getRepeatedField(4, sublist):
     for item in sublist:
-      value.blockPresences.add(?BlockPresence.decode(initProtoBuffer(item)))
-  discard ?pb.getField(5, value.pendingBytes)
-  if ?pb.getField(6, ipb):
-    value.account = ?AccountMessage.decode(ipb)
-  if ?pb.getField(7, ipb):
-    value.payment = ?StateChannelUpdate.decode(ipb)
+      value.blockPresences.add(? BlockPresence.decode(initProtoBuffer(item)))
+  discard ? pb.getField(5, value.pendingBytes)
+  if ? pb.getField(6, ipb):
+    value.account = ? AccountMessage.decode(ipb)
+  if ? pb.getField(7, ipb):
+    value.payment = ? StateChannelUpdate.decode(ipb)
   ok(value)
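Encoder and decoder above are symmetric, so a message should survive a round trip through `protobufEncode`/`protobufDecode`; the plus side additionally caps buffers with `maxSize = MaxMessageSize`/`MaxBlockSize`, so a malicious peer cannot force unbounded allocations during decode. A hedged round-trip sketch, with `blockAddress` as an assumed value:

let msg = Message(
  wantList: WantList(
    entries: @[WantListEntry(address: blockAddress, wantType: WantHave)],
    full: false))

let bytes = protobufEncode(msg)
let decoded = Message.protobufDecode(bytes).tryGet() # unwrap ProtoResult
assert decoded.wantList.entries.len == 1
assert decoded.wantList.entries[0].address == blockAddress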
@@ -1,4 +1,4 @@
-// Protocol of data exchange between Logos Storage nodes.
+// Protocol of data exchange between Codex nodes.
 // Extended version of https://github.com/ipfs/specs/blob/main/BITSWAP.md

 syntax = "proto3";
@@ -1,9 +1,8 @@
-{.push raises: [].}
-
 import pkg/stew/byteutils
 import pkg/stint
 import pkg/nitro
 import pkg/questionable
+import pkg/upraises

 import ./blockexc

 export AccountMessage
@@ -12,8 +11,11 @@ export StateChannelUpdate
 export stint
 export nitro

-type Account* = object
-  address*: EthAddress
+push: {.upraises: [].}
+
+type
+  Account* = object
+    address*: EthAddress

 func init*(_: type AccountMessage, account: Account): AccountMessage =
   AccountMessage(address: @(account.address.toArray))
@@ -22,7 +24,7 @@ func parse(_: type EthAddress, bytes: seq[byte]): ?EthAddress =
   var address: array[20, byte]
   if bytes.len != address.len:
     return EthAddress.none
-  for i in 0 ..< address.len:
+  for i in 0..<address.len:
     address[i] = bytes[i]
   EthAddress(address).some
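`parse` accepts exactly 20 bytes, the length of an Ethereum address, so truncated or padded buffers decode to `none`; `Account.init` (used by `rpcHandler` further up to decode incoming account messages) builds on it. A hedged sketch of the wire round trip, with `someEthAddress` as an assumed value:

let account = Account(address: someEthAddress)
let wire = AccountMessage.init(account)
assert wire.address.len == 20 # EthAddress serializes to exactly 20 bytes
if restored =? Account.init(wire):
  assert restored.address == account.address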
@@ -1,9 +1,8 @@
-{.push raises: [].}
-
 import libp2p
 import pkg/stint
 import pkg/questionable
 import pkg/questionable/results
+import pkg/upraises

 import ./blockexc
 import ../../blocktype
@@ -12,6 +11,8 @@ export questionable
 export stint
 export BlockPresenceType

+upraises.push: {.upraises: [].}
+
 type
   PresenceMessage* = blockexc.BlockPresence
   Presence* = object
@@ -31,12 +32,15 @@ func init*(_: type Presence, message: PresenceMessage): ?Presence =
   some Presence(
     address: message.address,
     have: message.`type` == BlockPresenceType.Have,
-    price: price,
+    price: price
   )

 func init*(_: type PresenceMessage, presence: Presence): PresenceMessage =
   PresenceMessage(
     address: presence.address,
-    `type`: if presence.have: BlockPresenceType.Have else: BlockPresenceType.DontHave,
-    price: @(presence.price.toBytesBE),
+    `type`: if presence.have:
+        BlockPresenceType.Have
+      else:
+        BlockPresenceType.DontHave,
+    price: @(presence.price.toBytesBE)
   )
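`Presence` and its wire form convert losslessly in both directions, with `have` folded into the `type` field and the price carried as big-endian bytes. A round-trip sketch using only the two `init` overloads above (`blockAddress` is an assumed value):

let presence = Presence(address: blockAddress, have: true, price: 42.u256)
let wire = PresenceMessage.init(presence)
if restored =? Presence.init(wire):
  assert restored.have
  assert restored.price == 42.u256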
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,14 +9,15 @@

 import std/tables
 import std/sugar
-import std/hashes

 export tables

-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

 import pkg/libp2p/[cid, multicodec, multihash]
-import pkg/stew/[byteutils, endians2]
+import pkg/stew/byteutils
 import pkg/questionable
 import pkg/questionable/results
@@ -48,16 +49,16 @@ logutils.formatIt(LogFormat.textLines, BlockAddress):
   else:
     "cid: " & shortLog($it.cid)

-logutils.formatIt(LogFormat.json, BlockAddress):
-  %it
+logutils.formatIt(LogFormat.json, BlockAddress): %it

 proc `==`*(a, b: BlockAddress): bool =
-  a.leaf == b.leaf and (
-    if a.leaf:
-      a.treeCid == b.treeCid and a.index == b.index
-    else:
-      a.cid == b.cid
-  )
+  a.leaf == b.leaf and
+    (
+      if a.leaf:
+        a.treeCid == b.treeCid and a.index == b.index
+      else:
+        a.cid == b.cid
+    )

 proc `$`*(a: BlockAddress): string =
   if a.leaf:
@@ -65,15 +66,11 @@ proc `$`*(a: BlockAddress): string =
   else:
     "cid: " & $a.cid

-proc hash*(a: BlockAddress): Hash =
-  if a.leaf:
-    let data = a.treeCid.data.buffer & @(a.index.uint64.toBytesBE)
-    hash(data)
-  else:
-    hash(a.cid.data.buffer)
-
 proc cidOrTreeCid*(a: BlockAddress): Cid =
-  if a.leaf: a.treeCid else: a.cid
+  if a.leaf:
+    a.treeCid
+  else:
+    a.cid

 proc address*(b: Block): BlockAddress =
   BlockAddress(leaf: false, cid: b.cid)
@@ -89,55 +86,57 @@ proc `$`*(b: Block): string =
   result &= "\ndata: " & string.fromBytes(b.data)

 func new*(
   T: type Block,
   data: openArray[byte] = [],
   version = CIDv1,
   mcodec = Sha256HashCodec,
-  codec = BlockCodec,
-): ?!Block =
+  codec = BlockCodec): ?!Block =
   ## creates a new block for both storage and network IO
   ##

   let
-    hash = ?MultiHash.digest($mcodec, data).mapFailure
-    cid = ?Cid.init(version, codec, hash).mapFailure
+    hash = ? MultiHash.digest($mcodec, data).mapFailure
+    cid = ? Cid.init(version, codec, hash).mapFailure

   # TODO: If the hash is `>=` to the data,
   # use the Cid as a container!

-  Block(cid: cid, data: @data).success
+  Block(
+    cid: cid,
+    data: @data).success

 proc new*(
-    T: type Block, cid: Cid, data: openArray[byte], verify: bool = true
+  T: type Block,
+  cid: Cid,
+  data: openArray[byte],
+  verify: bool = true
 ): ?!Block =
   ## creates a new block for both storage and network IO
   ##

   if verify:
     let
-      mhash = ?cid.mhash.mapFailure
-      computedMhash = ?MultiHash.digest($mhash.mcodec, data).mapFailure
-      computedCid = ?Cid.init(cid.cidver, cid.mcodec, computedMhash).mapFailure
+      mhash = ? cid.mhash.mapFailure
+      computedMhash = ? MultiHash.digest($mhash.mcodec, data).mapFailure
+      computedCid = ? Cid.init(cid.cidver, cid.mcodec, computedMhash).mapFailure
     if computedCid != cid:
       return "Cid doesn't match the data".failure

-  return Block(cid: cid, data: @data).success
+  return Block(
+    cid: cid,
+    data: @data
+  ).success

 proc emptyBlock*(version: CidVersion, hcodec: MultiCodec): ?!Block =
-  emptyCid(version, hcodec, BlockCodec).flatMap(
-    (cid: Cid) => Block.new(cid = cid, data = @[])
-  )
+  emptyCid(version, hcodec, BlockCodec)
+    .flatMap((cid: Cid) => Block.new(cid = cid, data = @[]))

 proc emptyBlock*(cid: Cid): ?!Block =
-  cid.mhash.mapFailure.flatMap(
-    (mhash: MultiHash) => emptyBlock(cid.cidver, mhash.mcodec)
-  )
+  cid.mhash.mapFailure.flatMap((mhash: MultiHash) =>
+    emptyBlock(cid.cidver, mhash.mcodec))

 proc isEmpty*(cid: Cid): bool =
-  success(cid) ==
-    cid.mhash.mapFailure.flatMap(
-      (mhash: MultiHash) => emptyCid(cid.cidver, mhash.mcodec, cid.mcodec)
-    )
+  success(cid) == cid.mhash.mapFailure.flatMap((mhash: MultiHash) =>
+    emptyCid(cid.cidver, mhash.mcodec, cid.mcodec))

 proc isEmpty*(blk: Block): bool =
   blk.cid.isEmpty
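`Block.new` with `verify = true` recomputes the multihash and rejects any CID/data mismatch, which is the integrity check the block exchange relies on when decoding `BlockDelivery`. A hedged sketch using stew's `toBytes` helper:

import pkg/stew/byteutils

let blk = Block.new(data = "hello".toBytes).tryGet()
# Same cid and data: verification passes.
assert Block.new(blk.cid, blk.data, verify = true).isSuccess
# Tampered data: fails with "Cid doesn't match the data".
assert Block.new(blk.cid, "other".toBytes, verify = true).isFailure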
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,7 +9,9 @@

 # TODO: This is super inefficient and needs a rewrite, but it'll do for now

-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

 import pkg/questionable
 import pkg/questionable/results
@@ -21,22 +23,20 @@ import ./logutils

 export blocktype

-const DefaultChunkSize* = DefaultBlockSize
+const
+  DefaultChunkSize* = DefaultBlockSize

 type
   # default reader type
-  ChunkerError* = object of CatchableError
-
   ChunkBuffer* = ptr UncheckedArray[byte]
-  Reader* = proc(data: ChunkBuffer, len: int): Future[int] {.
-    async: (raises: [ChunkerError, CancelledError])
-  .}
+  Reader* = proc(data: ChunkBuffer, len: int): Future[int] {.gcsafe, raises: [Defect].}

   # Reader that splits input data into fixed-size chunks
   Chunker* = ref object
     reader*: Reader # Procedure called to actually read the data
     offset*: int # Bytes read so far (position in the stream)
     chunkSize*: NBytes # Size of each chunk
     pad*: bool # Pad last chunk to chunkSize?

   FileChunker* = Chunker
   LPStreamChunker* = Chunker
@@ -60,21 +60,30 @@ proc getBytes*(c: Chunker): Future[seq[byte]] {.async.} =
   return move buff

 proc new*(
-    T: type Chunker, reader: Reader, chunkSize = DefaultChunkSize, pad = true
+  T: type Chunker,
+  reader: Reader,
+  chunkSize = DefaultChunkSize,
+  pad = true
 ): Chunker =
   ## create a new Chunker instance
   ##

-  Chunker(reader: reader, offset: 0, chunkSize: chunkSize, pad: pad)
+  Chunker(
+    reader: reader,
+    offset: 0,
+    chunkSize: chunkSize,
+    pad: pad)

 proc new*(
-    T: type LPStreamChunker, stream: LPStream, chunkSize = DefaultChunkSize, pad = true
+  T: type LPStreamChunker,
+  stream: LPStream,
+  chunkSize = DefaultChunkSize,
+  pad = true
 ): LPStreamChunker =
   ## create the default File chunker
   ##

-  proc reader(
-      data: ChunkBuffer, len: int
-  ): Future[int] {.async: (raises: [ChunkerError, CancelledError]).} =
+  proc reader(data: ChunkBuffer, len: int): Future[int]
+    {.gcsafe, async, raises: [Defect].} =
     var res = 0
     try:
       while res < len:
@@ -85,24 +94,29 @@ proc new*(
       raise error
     except LPStreamError as error:
       error "LPStream error", err = error.msg
-      raise newException(ChunkerError, "LPStream error", error)
+      raise error
     except CatchableError as exc:
       error "CatchableError exception", exc = exc.msg
       raise newException(Defect, exc.msg)

     return res

-  LPStreamChunker.new(reader = reader, chunkSize = chunkSize, pad = pad)
+  LPStreamChunker.new(
+    reader = reader,
+    chunkSize = chunkSize,
+    pad = pad)

 proc new*(
-    T: type FileChunker, file: File, chunkSize = DefaultChunkSize, pad = true
+  T: type FileChunker,
+  file: File,
+  chunkSize = DefaultChunkSize,
+  pad = true
 ): FileChunker =
   ## create the default File chunker
   ##

-  proc reader(
-      data: ChunkBuffer, len: int
-  ): Future[int] {.async: (raises: [ChunkerError, CancelledError]).} =
+  proc reader(data: ChunkBuffer, len: int): Future[int]
+    {.gcsafe, async, raises: [Defect].} =
     var total = 0
     try:
       while total < len:
@@ -121,4 +135,7 @@ proc new*(
     return total

-  FileChunker.new(reader = reader, chunkSize = chunkSize, pad = pad)
+  FileChunker.new(
+    reader = reader,
+    chunkSize = chunkSize,
+    pad = pad)
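Whichever constructor is used, consumption looks the same: `getBytes` returns at most `chunkSize` bytes per call and an empty seq at end of input (the final chunk is zero-padded when `pad = true`). A sketch of a read loop; `someFile` is an assumed open `File` and `readAll` is an illustrative name:

proc readAll(chunker: Chunker): Future[int] {.async.} =
  var total = 0
  while true:
    let chunk = await chunker.getBytes()
    if chunk.len == 0:
      break # reader produced no data: end of input
    total += chunk.len
  return total

let chunker = FileChunker.new(file = someFile, pad = false)
echo waitFor chunker.readAll() # with pad = false this equals the file size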
@@ -1,7 +1,6 @@
-{.push raises: [].}
-
 import pkg/chronos
 import pkg/stew/endians2
+import pkg/upraises
 import pkg/stint

 type
@@ -9,12 +8,10 @@ type
   SecondsSince1970* = int64
   Timeout* = object of CatchableError

-method now*(clock: Clock): SecondsSince1970 {.base, gcsafe, raises: [].} =
+method now*(clock: Clock): SecondsSince1970 {.base, upraises: [].} =
   raiseAssert "not implemented"

-method waitUntil*(
-    clock: Clock, time: SecondsSince1970
-) {.base, async: (raises: [CancelledError]).} =
+method waitUntil*(clock: Clock, time: SecondsSince1970) {.base, async.} =
   raiseAssert "not implemented"

 method start*(clock: Clock) {.base, async.} =
@@ -23,9 +20,9 @@ method start*(clock: Clock) {.base, async.} =
 method stop*(clock: Clock) {.base, async.} =
   discard

-proc withTimeout*(
-    future: Future[void], clock: Clock, expiry: SecondsSince1970
-) {.async.} =
+proc withTimeout*(future: Future[void],
+                  clock: Clock,
+                  expiry: SecondsSince1970) {.async.} =
   let timeout = clock.waitUntil(expiry)
   try:
     await future or timeout
@@ -43,8 +40,5 @@ proc toSecondsSince1970*(bytes: seq[byte]): SecondsSince1970 =
   let asUint = uint64.fromBytes(bytes)
   cast[int64](asUint)

-proc toSecondsSince1970*(num: uint64): SecondsSince1970 =
-  cast[int64](num)
-
 proc toSecondsSince1970*(bigint: UInt256): SecondsSince1970 =
   bigint.truncate(int64)
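`withTimeout` races the future against `waitUntil` on the injected clock, so deadlines follow that clock's notion of time (for instance on-chain time) rather than the wall clock. Judging by the `Timeout` error type declared above, the expiry path surfaces as a raised `Timeout`, though the raising code falls outside this hunk; treat the sketch below as an assumption on that behavior:

proc fetchWithDeadline(clock: Clock, fut: Future[void]) {.async.} =
  try:
    # Give the operation 30 seconds from the clock's current time.
    await fut.withTimeout(clock, clock.now() + 30)
  except Timeout:
    trace "Deadline passed before the operation completed"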
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -12,23 +12,23 @@ import std/strutils
 import std/os
 import std/tables
 import std/cpuinfo
-import std/net

 import pkg/chronos
-import pkg/taskpools
 import pkg/presto
 import pkg/libp2p
 import pkg/confutils
 import pkg/confutils/defs
 import pkg/nitro
 import pkg/stew/io2
+import pkg/stew/shims/net as stewnet
 import pkg/datastore
 import pkg/ethers except Rng
 import pkg/stew/io2
+import pkg/taskpools

 import ./node
 import ./conf
-import ./rng as random
+import ./rng
 import ./rest/api
 import ./stores
 import ./slots
@@ -44,7 +44,6 @@ import ./utils/addrutils
 import ./namespaces
 import ./codextypes
 import ./logutils
-import ./nat

 logScope:
   topics = "codex node"
@@ -57,20 +56,10 @@ type
     repoStore: RepoStore
     maintenance: BlockMaintainer
     taskpool: Taskpool
-    isStarted: bool

   CodexPrivateKey* = libp2p.PrivateKey # alias
   EthWallet = ethers.Wallet

-func config*(self: CodexServer): CodexConf =
-  return self.config
-
-func node*(self: CodexServer): CodexNodeRef =
-  return self.codexNode
-
-func repoStore*(self: CodexServer): RepoStore =
-  return self.repoStore
-
 proc waitForSync(provider: Provider): Future[void] {.async.} =
   var sleepTime = 1
   trace "Checking sync state of Ethereum provider..."
@@ -81,7 +70,8 @@ proc waitForSync(provider: Provider): Future[void] {.async.} =
     inc sleepTime
   trace "Ethereum provider is synced."

-proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
+proc bootstrapInteractions(
+    s: CodexServer): Future[void] {.async.} =
   ## bootstrap interactions and return contracts
   ## using clients, hosts, validators pairings
   ##
@@ -94,9 +84,7 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
       error "Persistence enabled, but no Ethereum account was set"
       quit QuitFailure

-    let provider = JsonRpcProvider.new(
-      config.ethProvider, maxPriorityFeePerGas = config.maxPriorityFeePerGas.u256
-    )
+    let provider = JsonRpcProvider.new(config.ethProvider)

     await waitForSync(provider)
     var signer: Signer
     if account =? config.ethAccount:
@@ -116,15 +104,13 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
         quit QuitFailure
       signer = wallet

-    let deploy = Deployment.new(provider, config.marketplaceAddress)
+    let deploy = Deployment.new(provider, config)
     without marketplaceAddress =? await deploy.address(Marketplace):
       error "No Marketplace address was specified or there is no known address for the current network"
       quit QuitFailure

     let marketplace = Marketplace.new(marketplaceAddress, signer)
-    let market = OnChainMarket.new(
-      marketplace, config.rewardRecipient, config.marketplaceRequestCacheSize
-    )
+    let market = OnChainMarket.new(marketplace, config.rewardRecipient)
     let clock = OnChainClock.new(provider)

     var client: ?ClientInteractions
@@ -138,7 +124,7 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =

     # This is used for simulation purposes. Normal nodes won't be compiled with this flag
     # and hence the proof failure will always be 0.
-    when storage_enable_proof_failures:
+    when codex_enable_proof_failures:
       let proofFailures = config.simulateProofFailures
       if proofFailures > 0:
        warn "Enabling proof failure simulation!"
@@ -147,232 +133,172 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
       if config.simulateProofFailures > 0:
         warn "Proof failure simulation is not enabled for this build! Configuration ignored"

-    if error =? (await market.loadConfig()).errorOption:
-      fatal "Cannot load market configuration", error = error.msg
-      quit QuitFailure
-
     let purchasing = Purchasing.new(market, clock)
     let sales = Sales.new(market, clock, repo, proofFailures)
     client = some ClientInteractions.new(clock, purchasing)
     host = some HostInteractions.new(clock, sales)

   if config.validator:
-    without validationConfig =?
-      ValidationConfig.init(
-        config.validatorMaxSlots, config.validatorGroups, config.validatorGroupIndex
-      ), err:
+    without validationConfig =? ValidationConfig.init(
+        config.validatorMaxSlots,
+        config.validatorGroups,
+        config.validatorGroupIndex), err:
      error "Invalid validation parameters", err = err.msg
      quit QuitFailure
    let validation = Validation.new(clock, market, validationConfig)
    validator = some ValidatorInteractions.new(clock, validation)

   s.codexNode.contracts = (client, host, validator)

 proc start*(s: CodexServer) {.async.} =
-  if s.isStarted:
-    warn "Storage server already started, skipping"
-    return
-
-  trace "Starting Storage node", config = $s.config
+  trace "Starting codex node", config = $s.config

   await s.repoStore.start()
   s.maintenance.start()

   await s.codexNode.switch.start()

-  let (announceAddrs, discoveryAddrs) = nattedAddress(
-    s.config.nat, s.codexNode.switch.peerInfo.addrs, s.config.discoveryPort
-  )
+  let
+    # TODO: Can't define these as constants, pity
+    natIpPart = MultiAddress.init("/ip4/" & $s.config.nat & "/")
+      .expect("Should create multiaddress")
+    anyAddrIp = MultiAddress.init("/ip4/0.0.0.0/")
+      .expect("Should create multiaddress")
+    loopBackAddrIp = MultiAddress.init("/ip4/127.0.0.1/")
+      .expect("Should create multiaddress")
+
+    # announce addresses should be set to bound addresses,
+    # but the IP should be mapped to the provided nat ip
+    announceAddrs = s.codexNode.switch.peerInfo.addrs.mapIt:
block:
let
listenIPPart = it[multiCodec("ip4")].expect("Should get IP")
if listenIPPart == anyAddrIp or
(listenIPPart == loopBackAddrIp and natIpPart != loopBackAddrIp):
it.remapAddr(s.config.nat.some)
else:
it
s.codexNode.discovery.updateAnnounceRecord(announceAddrs) s.codexNode.discovery.updateAnnounceRecord(announceAddrs)
s.codexNode.discovery.updateDhtRecord(discoveryAddrs) s.codexNode.discovery.updateDhtRecord(s.config.nat, s.config.discoveryPort)
await s.bootstrapInteractions() await s.bootstrapInteractions()
await s.codexNode.start() await s.codexNode.start()
s.restServer.start()
if s.restServer != nil:
s.restServer.start()
s.isStarted = true
proc stop*(s: CodexServer) {.async.} = proc stop*(s: CodexServer) {.async.} =
if not s.isStarted: notice "Stopping codex node"
warn "Storage is not started"
return
notice "Stopping Storage node"
var futures = s.taskpool.syncAll()
@[ s.taskpool.shutdown()
s.codexNode.switch.stop(),
s.codexNode.stop(),
s.repoStore.stop(),
s.maintenance.stop(),
]
if s.restServer != nil: await allFuturesThrowing(
futures.add(s.restServer.stop()) s.restServer.stop(),
s.codexNode.switch.stop(),
let res = await noCancel allFinishedFailed[void](futures) s.codexNode.stop(),
s.repoStore.stop(),
if res.failure.len > 0: s.maintenance.stop())
error "Failed to stop Storage node", failures = res.failure.len
raiseAssert "Failed to stop Storage node"
proc close*(s: CodexServer) {.async.} =
var futures = @[s.codexNode.close(), s.repoStore.close()]
let res = await noCancel allFinishedFailed[void](futures)
if not s.taskpool.isNil:
try:
s.taskpool.shutdown()
except Exception as exc:
error "Failed to stop the taskpool", failures = res.failure.len
raiseAssert("Failure in taskpool shutdown:" & exc.msg)
if res.failure.len > 0:
error "Failed to close Storage node", failures = res.failure.len
raiseAssert "Failed to close Storage node"
proc shutdown*(server: CodexServer) {.async.} =
await server.stop()
await server.close()
proc new*( proc new*(
T: type CodexServer, config: CodexConf, privateKey: CodexPrivateKey T: type CodexServer,
): CodexServer = config: CodexConf,
privateKey: CodexPrivateKey): CodexServer =
## create CodexServer including setting up datastore, repostore, etc ## create CodexServer including setting up datastore, repostore, etc
let switch = SwitchBuilder let
switch = SwitchBuilder
.new() .new()
.withPrivateKey(privateKey) .withPrivateKey(privateKey)
.withAddresses(config.listenAddrs) .withAddresses(config.listenAddrs)
.withRng(random.Rng.instance()) .withRng(Rng.instance())
.withNoise() .withNoise()
.withMplex(5.minutes, 5.minutes) .withMplex(5.minutes, 5.minutes)
.withMaxConnections(config.maxPeers) .withMaxConnections(config.maxPeers)
.withAgentVersion(config.agentString) .withAgentVersion(config.agentString)
.withSignedPeerRecord(true) .withSignedPeerRecord(true)
.withTcpTransport({ServerFlags.ReuseAddr, ServerFlags.TcpNoDelay}) .withTcpTransport({ServerFlags.ReuseAddr})
.build() .build()
var var
cache: CacheStore = nil cache: CacheStore = nil
taskpool: Taskpool
try:
if config.numThreads == ThreadCount(0):
taskpool = Taskpool.new(numThreads = min(countProcessors(), 16))
else:
taskpool = Taskpool.new(numThreads = int(config.numThreads))
info "Threadpool started", numThreads = taskpool.numThreads
except CatchableError as exc:
raiseAssert("Failure in taskpool initialization:" & exc.msg)
if config.cacheSize > 0'nb: if config.cacheSize > 0'nb:
cache = CacheStore.new(cacheSize = config.cacheSize) cache = CacheStore.new(cacheSize = config.cacheSize)
## Is unused? ## Is unused?
let discoveryDir = config.dataDir / CodexDhtNamespace let
discoveryDir = config.dataDir / CodexDhtNamespace
if io2.createPath(discoveryDir).isErr: if io2.createPath(discoveryDir).isErr:
trace "Unable to create discovery directory for block store", trace "Unable to create discovery directory for block store", discoveryDir = discoveryDir
discoveryDir = discoveryDir
raise (ref Defect)( raise (ref Defect)(
msg: "Unable to create discovery directory for block store: " & discoveryDir msg: "Unable to create discovery directory for block store: " & discoveryDir)
)
let let
discoveryStore = Datastore( discoveryStore = Datastore(
LevelDbDatastore.new(config.dataDir / CodexDhtProvidersNamespace).expect( LevelDbDatastore.new(config.dataDir / CodexDhtProvidersNamespace)
"Should create discovery datastore!" .expect("Should create discovery datastore!"))
)
)
discovery = Discovery.new( discovery = Discovery.new(
switch.peerInfo.privateKey, switch.peerInfo.privateKey,
announceAddrs = config.listenAddrs, announceAddrs = config.listenAddrs,
bindIp = config.discoveryIp,
bindPort = config.discoveryPort, bindPort = config.discoveryPort,
bootstrapNodes = config.bootstrapNodes, bootstrapNodes = config.bootstrapNodes,
store = discoveryStore, store = discoveryStore)
)
wallet = WalletRef.new(EthPrivateKey.random()) wallet = WalletRef.new(EthPrivateKey.random())
network = BlockExcNetwork.new(switch) network = BlockExcNetwork.new(switch)
repoData = repoData = case config.repoKind
case config.repoKind of repoFS: Datastore(FSDatastore.new($config.dataDir, depth = 5)
of repoFS: .expect("Should create repo file data store!"))
Datastore( of repoSQLite: Datastore(SQLiteDatastore.new($config.dataDir)
FSDatastore.new($config.dataDir, depth = 5).expect( .expect("Should create repo SQLite data store!"))
"Should create repo file data store!" of repoLevelDb: Datastore(LevelDbDatastore.new($config.dataDir)
) .expect("Should create repo LevelDB data store!"))
)
of repoSQLite:
Datastore(
SQLiteDatastore.new($config.dataDir).expect(
"Should create repo SQLite data store!"
)
)
of repoLevelDb:
Datastore(
LevelDbDatastore.new($config.dataDir).expect(
"Should create repo LevelDB data store!"
)
)
repoStore = RepoStore.new( repoStore = RepoStore.new(
repoDs = repoData, repoDs = repoData,
metaDs = LevelDbDatastore.new(config.dataDir / CodexMetaNamespace).expect( metaDs = LevelDbDatastore.new(config.dataDir / CodexMetaNamespace)
"Should create metadata store!" .expect("Should create metadata store!"),
),
quotaMaxBytes = config.storageQuota, quotaMaxBytes = config.storageQuota,
blockTtl = config.blockTtl, blockTtl = config.blockTtl)
)
maintenance = BlockMaintainer.new( maintenance = BlockMaintainer.new(
repoStore, repoStore,
interval = config.blockMaintenanceInterval, interval = config.blockMaintenanceInterval,
numberOfBlocksPerInterval = config.blockMaintenanceNumberOfBlocks, numberOfBlocksPerInterval = config.blockMaintenanceNumberOfBlocks)
)
peerStore = PeerCtxStore.new() peerStore = PeerCtxStore.new()
pendingBlocks = PendingBlocksManager.new(retries = config.blockRetries) pendingBlocks = PendingBlocksManager.new()
advertiser = Advertiser.new(repoStore, discovery) advertiser = Advertiser.new(repoStore, discovery)
blockDiscovery = blockDiscovery = DiscoveryEngine.new(repoStore, peerStore, network, discovery, pendingBlocks)
DiscoveryEngine.new(repoStore, peerStore, network, discovery, pendingBlocks) engine = BlockExcEngine.new(repoStore, wallet, network, blockDiscovery, advertiser, peerStore, pendingBlocks)
engine = BlockExcEngine.new(
repoStore, wallet, network, blockDiscovery, advertiser, peerStore, pendingBlocks
)
store = NetworkStore.new(engine, repoStore) store = NetworkStore.new(engine, repoStore)
prover = prover = if config.prover:
if config.prover: let backend = config.initializeBackend().expect("Unable to create prover backend.")
let backend = some Prover.new(store, backend, config.numProofSamples)
config.initializeBackend().expect("Unable to create prover backend.") else:
some Prover.new(store, backend, config.numProofSamples) none Prover
else:
none Prover taskpool = Taskpool.new(num_threads = countProcessors())
codexNode = CodexNodeRef.new( codexNode = CodexNodeRef.new(
switch = switch, switch = switch,
networkStore = store, networkStore = store,
engine = engine, engine = engine,
discovery = discovery,
prover = prover, prover = prover,
taskPool = taskpool, discovery = discovery,
) taskpool = taskpool)
var restServer: RestServerRef = nil restServer = RestServerRef.new(
codexNode.initRestApi(config, repoStore, config.apiCorsAllowedOrigin),
if config.apiBindAddress.isSome: initTAddress(config.apiBindAddress , config.apiPort),
restServer = RestServerRef bufferSize = (1024 * 64),
.new( maxRequestBodySize = int.high)
codexNode.initRestApi(config, repoStore, config.apiCorsAllowedOrigin), .expect("Should start rest server!")
initTAddress(config.apiBindAddress.get(), config.apiPort),
bufferSize = (1024 * 64),
maxRequestBodySize = int.high,
)
.expect("Should create rest server!")
switch.mount(network) switch.mount(network)
@ -382,5 +308,4 @@ proc new*(
restServer: restServer, restServer: restServer,
repoStore: repoStore, repoStore: repoStore,
maintenance: maintenance, maintenance: maintenance,
taskpool: taskpool, taskpool: taskpool)
)
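Aside: the two branches also shut down differently. The older (`+`) side awaits all stop futures with allFuturesThrowing, so the first failure aborts the rest, while the newer (`-`) side collects every result via allFinishedFailed and only then asserts. A minimal, hedged sketch of the idempotent lifecycle guard the newer side adds with its isStarted flag (Service and its procs are invented for illustration):

import pkg/chronos

type Service = ref object
  isStarted: bool

proc start(s: Service) {.async.} =
  if s.isStarted:
    return # a second start() becomes a no-op instead of double-starting subsystems
  # ... bring up subsystems here ...
  s.isStarted = true

proc stop(s: Service) {.async.} =
  if not s.isStarted:
    return # stop() before start() is likewise harmless
  # ... tear down subsystems here ...
  s.isStarted = false

when isMainModule:
  let s = Service()
  waitFor s.start()
  waitFor s.start() # safe: skipped
  waitFor s.stop()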
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2023 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -25,15 +25,15 @@ export tables
 const
   # Size of blocks for storage / network exchange,
-  DefaultBlockSize* = NBytes 1024 * 64
+  DefaultBlockSize* = NBytes 1024*64
   DefaultCellSize* = NBytes 2048

   # Proving defaults
   DefaultMaxSlotDepth* = 32
   DefaultMaxDatasetDepth* = 8
   DefaultBlockDepth* = 5
   DefaultCellElms* = 67
   DefaultSamplesNum* = 5

   # hashes
   Sha256HashCodec* = multiCodec("sha2-256")
@@ -48,10 +48,18 @@
   SlotProvingRootCodec* = multiCodec("codex-proving-root")
   CodexSlotCellCodec* = multiCodec("codex-slot-cell")

-  CodexHashesCodecs* = [Sha256HashCodec, Pos2Bn128SpngCodec, Pos2Bn128MrklCodec]
+  CodexHashesCodecs* = [
+    Sha256HashCodec,
+    Pos2Bn128SpngCodec,
+    Pos2Bn128MrklCodec
+  ]

   CodexPrimitivesCodecs* = [
-    ManifestCodec, DatasetRootCodec, BlockCodec, SlotRootCodec, SlotProvingRootCodec,
+    ManifestCodec,
+    DatasetRootCodec,
+    BlockCodec,
+    SlotRootCodec,
+    SlotProvingRootCodec,
     CodexSlotCellCodec,
   ]
@@ -66,34 +74,40 @@ proc initEmptyCidTable(): ?!Table[(CidVersion, MultiCodec, MultiCodec), Cid] =
   let
     emptyData: seq[byte] = @[]
     PadHashes = {
-      Sha256HashCodec: ?MultiHash.digest($Sha256HashCodec, emptyData).mapFailure,
-      Sha512HashCodec: ?MultiHash.digest($Sha512HashCodec, emptyData).mapFailure,
+      Sha256HashCodec: ? MultiHash.digest($Sha256HashCodec, emptyData).mapFailure,
+      Sha512HashCodec: ? MultiHash.digest($Sha512HashCodec, emptyData).mapFailure,
     }.toTable

-  var table = initTable[(CidVersion, MultiCodec, MultiCodec), Cid]()
+  var
+    table = initTable[(CidVersion, MultiCodec, MultiCodec), Cid]()

   for hcodec, mhash in PadHashes.pairs:
-    table[(CIDv1, hcodec, BlockCodec)] = ?Cid.init(CIDv1, BlockCodec, mhash).mapFailure
+    table[(CIDv1, hcodec, BlockCodec)] = ? Cid.init(CIDv1, BlockCodec, mhash).mapFailure

   success table

-proc emptyCid*(version: CidVersion, hcodec: MultiCodec, dcodec: MultiCodec): ?!Cid =
+proc emptyCid*(
+  version: CidVersion,
+  hcodec: MultiCodec,
+  dcodec: MultiCodec): ?!Cid =
   ## Returns cid representing empty content,
   ## given cid version, hash codec and data codec
   ##
-  var table {.global, threadvar.}: Table[(CidVersion, MultiCodec, MultiCodec), Cid]
+  var
+    table {.global, threadvar.}: Table[(CidVersion, MultiCodec, MultiCodec), Cid]

   once:
-    table = ?initEmptyCidTable()
+    table = ? initEmptyCidTable()

   table[(version, hcodec, dcodec)].catch

 proc emptyDigest*(
-    version: CidVersion, hcodec: MultiCodec, dcodec: MultiCodec
-): ?!MultiHash =
+  version: CidVersion,
+  hcodec: MultiCodec,
+  dcodec: MultiCodec): ?!MultiHash =
   ## Returns hash representing empty content,
   ## given cid version, hash codec and data codec
   ##
-  emptyCid(version, hcodec, dcodec).flatMap((cid: Cid) => cid.mhash.mapFailure)
+  emptyCid(version, hcodec, dcodec)
+    .flatMap((cid: Cid) => cid.mhash.mapFailure)
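Aside: a hedged usage sketch of the helper above. It assumes only the identifiers this file already exports (emptyCid, CIDv1, BlockCodec, multiCodec) plus questionable's ?! result type, and is not from either branch:

proc emptyBlockCid(): ?!Cid =
  # Resolve the CID that stands for empty content for a CIDv1 block
  # hashed with sha2-256; fails if the codec pair is not in the table.
  emptyCid(CIDv1, multiCodec("sha2-256"), BlockCodec)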
File diff suppressed because it is too large
@@ -1,8 +0,0 @@
-const ContentIdsExts = [
-  multiCodec("codex-root"),
-  multiCodec("codex-manifest"),
-  multiCodec("codex-block"),
-  multiCodec("codex-slot-root"),
-  multiCodec("codex-proving-root"),
-  multiCodec("codex-slot-cell"),
-]
@@ -2,10 +2,8 @@ import contracts/requests
 import contracts/marketplace
 import contracts/market
 import contracts/interactions
-import contracts/provider

 export requests
 export marketplace
 export market
 export interactions
-export provider
@@ -1,13 +1,13 @@
-Logos Storage Contracts in Nim
+Codex Contracts in Nim
 =======================

-Nim API for the [Logos Storage smart contracts][1].
+Nim API for the [Codex smart contracts][1].

 Usage
 -----

 For a global overview of the steps involved in starting and fulfilling a
-storage contract, see [Logos Storage Contracts][1].
+storage contract, see [Codex Contracts][1].

 Smart contract
 --------------
@@ -144,5 +144,5 @@ await storage
   .markProofAsMissing(id, period)
 ```

-[1]: https://github.com/logos-storage/logos-storage-contracts-eth/
-[2]: https://github.com/logos-storage/logos-storage-research/blob/master/design/storage-proof-timing.md
+[1]: https://github.com/status-im/codex-contracts-eth/
+[2]: https://github.com/status-im/codex-research/blob/main/design/storage-proof-timing.md
@@ -1,32 +1,26 @@
-{.push raises: [].}
-
 import std/times
 import pkg/ethers
-import pkg/questionable
 import pkg/chronos
 import pkg/stint
 import ../clock
 import ../conf
-import ../utils/trackedfutures

 export clock

 logScope:
   topics = "contracts clock"

-type OnChainClock* = ref object of Clock
-  provider: Provider
-  subscription: Subscription
-  offset: times.Duration
-  blockNumber: UInt256
-  started: bool
-  newBlock: AsyncEvent
-  trackedFutures: TrackedFutures
+type
+  OnChainClock* = ref object of Clock
+    provider: Provider
+    subscription: Subscription
+    offset: times.Duration
+    blockNumber: UInt256
+    started: bool
+    newBlock: AsyncEvent

 proc new*(_: type OnChainClock, provider: Provider): OnChainClock =
-  OnChainClock(
-    provider: provider, newBlock: newAsyncEvent(), trackedFutures: TrackedFutures()
-  )
+  OnChainClock(provider: provider, newBlock: newAsyncEvent())

 proc update(clock: OnChainClock, blck: Block) =
   if number =? blck.number and number > clock.blockNumber:
@@ -34,28 +28,26 @@ proc update(clock: OnChainClock, blck: Block) =
     let computerTime = getTime()
     clock.offset = blockTime - computerTime
     clock.blockNumber = number
-    trace "updated clock",
-      blockTime = blck.timestamp, blockNumber = number, offset = clock.offset
+    trace "updated clock", blockTime=blck.timestamp, blockNumber=number, offset=clock.offset
     clock.newBlock.fire()

-proc update(clock: OnChainClock) {.async: (raises: []).} =
+proc update(clock: OnChainClock) {.async.} =
   try:
     if latest =? (await clock.provider.getBlock(BlockTag.latest)):
       clock.update(latest)
-  except CancelledError as error:
-    raise error
   except CatchableError as error:
-    debug "error updating clock: ", error = error.msg
+    debug "error updating clock: ", error=error.msg
+    discard

 method start*(clock: OnChainClock) {.async.} =
   if clock.started:
     return

-  proc onBlock(blckResult: ?!Block) =
-    if eventError =? blckResult.errorOption:
-      error "There was an error in block subscription", msg = eventError.msg
-      return
+  proc onBlock(_: Block) =
     # ignore block parameter; hardhat may call this with pending blocks
-    clock.trackedFutures.track(clock.update())
+    asyncSpawn clock.update()

   await clock.update()
@@ -67,16 +59,13 @@ method stop*(clock: OnChainClock) {.async.} =
     return

   await clock.subscription.unsubscribe()
-  await clock.trackedFutures.cancelTracked()
   clock.started = false

 method now*(clock: OnChainClock): SecondsSince1970 =
   doAssert clock.started, "clock should be started before calling now()"
   return toUnix(getTime() + clock.offset)

-method waitUntil*(
-    clock: OnChainClock, time: SecondsSince1970
-) {.async: (raises: [CancelledError]).} =
+method waitUntil*(clock: OnChainClock, time: SecondsSince1970) {.async.} =
   while (let difference = time - clock.now(); difference > 0):
     clock.newBlock.clear()
     discard await clock.newBlock.wait().withTimeout(chronos.seconds(difference))
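Aside: the offset arithmetic both branches share can be checked in isolation. This sketch uses only std/times; the block timestamp is a made-up example value, not data from either branch:

import std/times

let blockTime = fromUnix(1_700_000_000) # latest block timestamp (example value)
let computerTime = getTime()
let offset = blockTime - computerTime   # may be negative if the host clock runs ahead

# now() then reports chain time regardless of local clock drift:
echo toUnix(getTime() + offset)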
@ -1,71 +1,52 @@
import pkg/contractabi import pkg/contractabi
import pkg/ethers/contracts/fields import pkg/ethers/fields
import pkg/questionable/results import pkg/questionable/results
export contractabi export contractabi
const DefaultRequestCacheSize* = 128.uint16
const DefaultMaxPriorityFeePerGas* = 1_000_000_000.uint64
type type
MarketplaceConfig* = object MarketplaceConfig* = object
collateral*: CollateralConfig collateral*: CollateralConfig
proofs*: ProofConfig proofs*: ProofConfig
reservations*: SlotReservationsConfig
requestDurationLimit*: uint64
CollateralConfig* = object CollateralConfig* = object
repairRewardPercentage*: uint8 repairRewardPercentage*: uint8 # percentage of remaining collateral slot has after it has been freed
# percentage of remaining collateral slot has after it has been freed
maxNumberOfSlashes*: uint8 # frees slot when the number of slashes reaches this value maxNumberOfSlashes*: uint8 # frees slot when the number of slashes reaches this value
slashCriterion*: uint16 # amount of proofs missed that lead to slashing
slashPercentage*: uint8 # percentage of the collateral that is slashed slashPercentage*: uint8 # percentage of the collateral that is slashed
validatorRewardPercentage*: uint8
# percentage of the slashed amount going to the validators
ProofConfig* = object ProofConfig* = object
period*: uint64 # proofs requirements are calculated per period (in seconds) period*: UInt256 # proofs requirements are calculated per period (in seconds)
timeout*: uint64 # mark proofs as missing before the timeout (in seconds) timeout*: UInt256 # mark proofs as missing before the timeout (in seconds)
downtime*: uint8 # ignore this much recent blocks for proof requirements downtime*: uint8 # ignore this much recent blocks for proof requirements
downtimeProduct*: uint8
zkeyHash*: string # hash of the zkey file which is linked to the verifier zkeyHash*: string # hash of the zkey file which is linked to the verifier
# Ensures the pointer does not remain in downtime for many consecutive # Ensures the pointer does not remain in downtime for many consecutive
# periods. For each period increase, move the pointer `pointerProduct` # periods. For each period increase, move the pointer `pointerProduct`
# blocks. Should be a prime number to ensure there are no cycles. # blocks. Should be a prime number to ensure there are no cycles.
downtimeProduct*: uint8
SlotReservationsConfig* = object
maxReservations*: uint8
func fromTuple(_: type ProofConfig, tupl: tuple): ProofConfig = func fromTuple(_: type ProofConfig, tupl: tuple): ProofConfig =
ProofConfig( ProofConfig(
period: tupl[0], period: tupl[0],
timeout: tupl[1], timeout: tupl[1],
downtime: tupl[2], downtime: tupl[2],
downtimeProduct: tupl[3], zkeyHash: tupl[3],
zkeyHash: tupl[4], downtimeProduct: tupl[4]
) )
func fromTuple(_: type SlotReservationsConfig, tupl: tuple): SlotReservationsConfig =
SlotReservationsConfig(maxReservations: tupl[0])
func fromTuple(_: type CollateralConfig, tupl: tuple): CollateralConfig = func fromTuple(_: type CollateralConfig, tupl: tuple): CollateralConfig =
CollateralConfig( CollateralConfig(
repairRewardPercentage: tupl[0], repairRewardPercentage: tupl[0],
maxNumberOfSlashes: tupl[1], maxNumberOfSlashes: tupl[1],
slashPercentage: tupl[2], slashCriterion: tupl[2],
validatorRewardPercentage: tupl[3], slashPercentage: tupl[3]
) )
func fromTuple(_: type MarketplaceConfig, tupl: tuple): MarketplaceConfig = func fromTuple(_: type MarketplaceConfig, tupl: tuple): MarketplaceConfig =
MarketplaceConfig( MarketplaceConfig(
collateral: tupl[0], collateral: tupl[0],
proofs: tupl[1], proofs: tupl[1]
reservations: tupl[2],
requestDurationLimit: tupl[3],
) )
func solidityType*(_: type SlotReservationsConfig): string =
solidityType(SlotReservationsConfig.fieldTypes)
func solidityType*(_: type ProofConfig): string = func solidityType*(_: type ProofConfig): string =
solidityType(ProofConfig.fieldTypes) solidityType(ProofConfig.fieldTypes)
@ -73,10 +54,7 @@ func solidityType*(_: type CollateralConfig): string =
solidityType(CollateralConfig.fieldTypes) solidityType(CollateralConfig.fieldTypes)
func solidityType*(_: type MarketplaceConfig): string = func solidityType*(_: type MarketplaceConfig): string =
solidityType(MarketplaceConfig.fieldTypes) solidityType(CollateralConfig.fieldTypes)
func encode*(encoder: var AbiEncoder, slot: SlotReservationsConfig) =
encoder.write(slot.fieldValues)
func encode*(encoder: var AbiEncoder, slot: ProofConfig) = func encode*(encoder: var AbiEncoder, slot: ProofConfig) =
encoder.write(slot.fieldValues) encoder.write(slot.fieldValues)
@ -91,10 +69,6 @@ func decode*(decoder: var AbiDecoder, T: type ProofConfig): ?!T =
let tupl = ?decoder.read(ProofConfig.fieldTypes) let tupl = ?decoder.read(ProofConfig.fieldTypes)
success ProofConfig.fromTuple(tupl) success ProofConfig.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type SlotReservationsConfig): ?!T =
let tupl = ?decoder.read(SlotReservationsConfig.fieldTypes)
success SlotReservationsConfig.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type CollateralConfig): ?!T = func decode*(decoder: var AbiDecoder, T: type CollateralConfig): ?!T =
let tupl = ?decoder.read(CollateralConfig.fieldTypes) let tupl = ?decoder.read(CollateralConfig.fieldTypes)
success CollateralConfig.fromTuple(tupl) success CollateralConfig.fromTuple(tupl)
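Aside: the fromTuple helpers must list fields in exactly the order of the encoded Solidity struct, which is why the two branches disagree on the tuple indices after the contract reordered zkeyHash and downtimeProduct. A toy illustration of the pattern (Point is invented for the example, not part of either branch):

type Point = object
  x, y: int

func fromTuple(_: type Point, tupl: tuple): Point =
  # The index order must mirror the encoded struct; swapping 0 and 1
  # would still compile and decode, but silently transpose the point.
  Point(x: tupl[0], y: tupl[1])

echo Point.fromTuple((3, 4)) # (x: 3, y: 4)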
@@ -9,42 +9,38 @@ import ./marketplace
 type Deployment* = ref object
   provider: Provider
-  marketplaceAddressOverride: ?Address
+  config: CodexConf

 const knownAddresses = {
   # Hardhat localhost network
-  "31337":
-    {"Marketplace": Address.init("0x322813Fd9A801c5507c9de605d63CEA4f2CE6c44")}.toTable,
-  # Taiko Alpha-3 Testnet
-  "167005":
-    {"Marketplace": Address.init("0x948CF9291b77Bd7ad84781b9047129Addf1b894F")}.toTable,
-  # Codex Testnet - Jun 19 2025 13:11:56 PM (+00:00 UTC)
-  "789987":
-    {"Marketplace": Address.init("0x5378a4EA5dA2a548ce22630A3AE74b052000C62D")}.toTable,
-  # Linea (Status)
-  "1660990954":
-    {"Marketplace": Address.init("0x34F606C65869277f236ce07aBe9af0B8c88F486B")}.toTable,
+  "31337": {
+    "Marketplace": Address.init("0x322813Fd9A801c5507c9de605d63CEA4f2CE6c44"),
+  }.toTable,
+  # Taiko Alpha-3 Testnet
+  "167005": {
+    "Marketplace": Address.init("0x948CF9291b77Bd7ad84781b9047129Addf1b894F")
+  }.toTable,
+  # Codex Testnet - Oct 21 2024 07:31:50 AM (+00:00 UTC)
+  "789987": {
+    "Marketplace": Address.init("0x3F9Cf3F40F0e87d804B776D8403e3d29F85211f4")
+  }.toTable
 }.toTable

 proc getKnownAddress(T: type, chainId: UInt256): ?Address =
   let id = chainId.toString(10)
-  notice "Looking for well-known contract address with ChainID ", chainId = id
+  notice "Looking for well-known contract address with ChainID ", chainId=id

   if not (id in knownAddresses):
     return none Address

   return knownAddresses[id].getOrDefault($T, Address.none)

-proc new*(
-    _: type Deployment,
-    provider: Provider,
-    marketplaceAddressOverride: ?Address = none Address,
-): Deployment =
-  Deployment(provider: provider, marketplaceAddressOverride: marketplaceAddressOverride)
+proc new*(_: type Deployment, provider: Provider, config: CodexConf): Deployment =
+  Deployment(provider: provider, config: config)

 proc address*(deployment: Deployment, contract: type): Future[?Address] {.async.} =
   when contract is Marketplace:
-    if address =? deployment.marketplaceAddressOverride:
+    if address =? deployment.config.marketplaceAddress:
       return some address

   let chainId = await deployment.provider.getChainId()
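Aside: a self-contained sketch of the well-known-address lookup both branches implement, using plain strings in place of the Address type; the table contents are illustrative only:

import std/[tables, options]

let knownMarketplaces = {
  "31337": "0x322813Fd9A801c5507c9de605d63CEA4f2CE6c44" # Hardhat localhost
}.toTable

proc knownMarketplace(chainId: string): Option[string] =
  # none signals "no known deployment for this network",
  # mirroring the ?Address result of getKnownAddress above.
  if chainId in knownMarketplaces:
    some knownMarketplaces[chainId]
  else:
    none string

echo knownMarketplace("31337")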
@@ -9,12 +9,13 @@ import ./interactions
 export purchasing
 export logutils

-type ClientInteractions* = ref object of ContractInteractions
-  purchasing*: Purchasing
+type
+  ClientInteractions* = ref object of ContractInteractions
+    purchasing*: Purchasing

-proc new*(
-    _: type ClientInteractions, clock: OnChainClock, purchasing: Purchasing
-): ClientInteractions =
+proc new*(_: type ClientInteractions,
+          clock: OnChainClock,
+          purchasing: Purchasing): ClientInteractions =
   ClientInteractions(clock: clock, purchasing: purchasing)

 proc start*(self: ClientInteractions) {.async.} =
@@ -7,10 +7,15 @@ import ./interactions
 export sales
 export logutils

-type HostInteractions* = ref object of ContractInteractions
-  sales*: Sales
+type
+  HostInteractions* = ref object of ContractInteractions
+    sales*: Sales

-proc new*(_: type HostInteractions, clock: Clock, sales: Sales): HostInteractions =
+proc new*(
+  _: type HostInteractions,
+  clock: Clock,
+  sales: Sales
+): HostInteractions =
   ## Create a new HostInteractions instance
   ##
   HostInteractions(clock: clock, sales: sales)
@@ -5,8 +5,9 @@ import ../market
 export clock

-type ContractInteractions* = ref object of RootObj
-  clock*: Clock
+type
+  ContractInteractions* = ref object of RootObj
+    clock*: Clock

 method start*(self: ContractInteractions) {.async, base.} =
   discard
@@ -3,12 +3,13 @@ import ../../validation
 export validation

-type ValidatorInteractions* = ref object of ContractInteractions
-  validation: Validation
+type
+  ValidatorInteractions* = ref object of ContractInteractions
+    validation: Validation

-proc new*(
-    _: type ValidatorInteractions, clock: OnChainClock, validation: Validation
-): ValidatorInteractions =
+proc new*(_: type ValidatorInteractions,
+          clock: OnChainClock,
+          validation: Validation): ValidatorInteractions =
   ValidatorInteractions(clock: clock, validation: validation)

 proc start*(self: ValidatorInteractions) {.async.} =
@@ -1,14 +1,14 @@
-import std/strformat
+import std/sequtils
 import std/strutils
+import std/sugar
 import pkg/ethers
+import pkg/upraises
 import pkg/questionable
-import pkg/lrucache
 import ../utils/exceptions
 import ../logutils
 import ../market
 import ./marketplace
 import ./proofs
-import ./provider

 export market
@@ -20,225 +20,128 @@ type
     contract: Marketplace
     signer: Signer
     rewardRecipient: ?Address
-    configuration: ?MarketplaceConfig
-    requestCache: LruCache[string, StorageRequest]
-    allowanceLock: AsyncLock

   MarketSubscription = market.Subscription
   EventSubscription = ethers.Subscription
   OnChainMarketSubscription = ref object of MarketSubscription
     eventSubscription: EventSubscription

 func new*(
     _: type OnChainMarket,
     contract: Marketplace,
-    rewardRecipient = Address.none,
-    requestCacheSize: uint16 = DefaultRequestCacheSize,
-): OnChainMarket =
+    rewardRecipient = Address.none): OnChainMarket =
   without signer =? contract.signer:
     raiseAssert("Marketplace contract should have a signer")

-  var requestCache = newLruCache[string, StorageRequest](int(requestCacheSize))
-
   OnChainMarket(
     contract: contract,
     signer: signer,
-    rewardRecipient: rewardRecipient,
-    requestCache: requestCache,
+    rewardRecipient: rewardRecipient
   )

 proc raiseMarketError(message: string) {.raises: [MarketError].} =
   raise newException(MarketError, message)

-func prefixWith(suffix, prefix: string, separator = ": "): string =
-  if prefix.len > 0:
-    return &"{prefix}{separator}{suffix}"
-  else:
-    return suffix
-
-template convertEthersError(msg: string = "", body) =
+template convertEthersError(body) =
   try:
     body
   except EthersError as error:
-    raiseMarketError(error.msgDetail.prefixWith(msg))
+    raiseMarketError(error.msgDetail)

-proc config(
-    market: OnChainMarket
-): Future[MarketplaceConfig] {.async: (raises: [CancelledError, MarketError]).} =
-  without resolvedConfig =? market.configuration:
-    if err =? (await market.loadConfig()).errorOption:
-      raiseMarketError(err.msg)
-
-    without config =? market.configuration:
-      raiseMarketError("Failed to access to config from the Marketplace contract")
-
-    return config
-
-  return resolvedConfig
-
-template withAllowanceLock*(market: OnChainMarket, body: untyped) =
-  if market.allowanceLock.isNil:
-    market.allowanceLock = newAsyncLock()
-  await market.allowanceLock.acquire()
-  try:
-    body
-  finally:
-    try:
-      market.allowanceLock.release()
-    except AsyncLockError as error:
-      raise newException(Defect, error.msg, error)
-
-proc approveFunds(
-    market: OnChainMarket, amount: UInt256
-) {.async: (raises: [CancelledError, MarketError]).} =
+proc approveFunds(market: OnChainMarket, amount: UInt256) {.async.} =
   debug "Approving tokens", amount
-  convertEthersError("Failed to approve funds"):
+  convertEthersError:
     let tokenAddress = await market.contract.token()
     let token = Erc20Token.new(tokenAddress, market.signer)
-    let owner = await market.signer.getAddress()
-    let spender = market.contract.address
-    market.withAllowanceLock:
-      let allowance = await token.allowance(owner, spender)
-      discard await token.approve(spender, allowance + amount).confirm(1)
+    discard await token.increaseAllowance(market.contract.address(), amount).confirm(0)
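Aside: the older branch tops up the spending allowance with ERC-20 increaseAllowance, while the newer branch reads the current allowance and re-approves the sum under an async lock so concurrent top-ups cannot interleave. A stripped-down, hedged sketch of that read-modify-write guard, with the token calls replaced by a plain counter (everything here is invented for illustration):

import pkg/chronos

var allowance = 0 # stand-in for the on-chain allowance value
var lock: AsyncLock

proc topUp(amount: int) {.async.} =
  if lock.isNil:
    lock = newAsyncLock()
  await lock.acquire()
  try:
    let current = allowance          # read ...
    await sleepAsync(1.milliseconds) # pretend the approve tx is in flight
    allowance = current + amount     # ... then write, atomic w.r.t. other topUps
  finally:
    lock.release()

when isMainModule:
  waitFor allFutures(topUp(10), topUp(20))
  echo allowance # 30; without the lock one top-up could be lost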
-method loadConfig*(
-    market: OnChainMarket
-): Future[?!void] {.async: (raises: [CancelledError]).} =
-  try:
-    without config =? market.configuration:
-      let fetchedConfig = await market.contract.configuration()
-      market.configuration = some fetchedConfig
-    return success()
-  except EthersError as err:
-    return failure newException(
-      MarketError,
-      "Failed to fetch the config from the Marketplace contract: " & err.msg,
-    )
-
-method getZkeyHash*(
-    market: OnChainMarket
-): Future[?string] {.async: (raises: [CancelledError, MarketError]).} =
-  let config = await market.config()
+method getZkeyHash*(market: OnChainMarket): Future[?string] {.async.} =
+  let config = await market.contract.config()
   return some config.proofs.zkeyHash

-method getSigner*(
-    market: OnChainMarket
-): Future[Address] {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to get signer address"):
+method getSigner*(market: OnChainMarket): Future[Address] {.async.} =
+  convertEthersError:
     return await market.signer.getAddress()

-method periodicity*(
-    market: OnChainMarket
-): Future[Periodicity] {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to get Marketplace config"):
-    let config = await market.config()
+method periodicity*(market: OnChainMarket): Future[Periodicity] {.async.} =
+  convertEthersError:
+    let config = await market.contract.config()
     let period = config.proofs.period
     return Periodicity(seconds: period)

-method proofTimeout*(
-    market: OnChainMarket
-): Future[uint64] {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to get Marketplace config"):
-    let config = await market.config()
+method proofTimeout*(market: OnChainMarket): Future[UInt256] {.async.} =
+  convertEthersError:
+    let config = await market.contract.config()
     return config.proofs.timeout

-method repairRewardPercentage*(
-    market: OnChainMarket
-): Future[uint8] {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to get Marketplace config"):
-    let config = await market.config()
-    return config.collateral.repairRewardPercentage
-
-method requestDurationLimit*(market: OnChainMarket): Future[uint64] {.async.} =
-  convertEthersError("Failed to get Marketplace config"):
-    let config = await market.config()
-    return config.requestDurationLimit
-
-method proofDowntime*(
-    market: OnChainMarket
-): Future[uint8] {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to get Marketplace config"):
-    let config = await market.config()
+method proofDowntime*(market: OnChainMarket): Future[uint8] {.async.} =
+  convertEthersError:
+    let config = await market.contract.config()
     return config.proofs.downtime

 method getPointer*(market: OnChainMarket, slotId: SlotId): Future[uint8] {.async.} =
-  convertEthersError("Failed to get slot pointer"):
+  convertEthersError:
     let overrides = CallOverrides(blockTag: some BlockTag.pending)
     return await market.contract.getPointer(slotId, overrides)

 method myRequests*(market: OnChainMarket): Future[seq[RequestId]] {.async.} =
-  convertEthersError("Failed to get my requests"):
+  convertEthersError:
     return await market.contract.myRequests

 method mySlots*(market: OnChainMarket): Future[seq[SlotId]] {.async.} =
-  convertEthersError("Failed to get my slots"):
+  convertEthersError:
     let slots = await market.contract.mySlots()
-    debug "Fetched my slots", numSlots = len(slots)
+    debug "Fetched my slots", numSlots=len(slots)
     return slots

-method requestStorage(
-    market: OnChainMarket, request: StorageRequest
-) {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to request storage"):
+method requestStorage(market: OnChainMarket, request: StorageRequest){.async.} =
+  convertEthersError:
     debug "Requesting storage"
-    await market.approveFunds(request.totalPrice())
-    discard await market.contract.requestStorage(request).confirm(1)
+    await market.approveFunds(request.price())
+    discard await market.contract.requestStorage(request).confirm(0)

-method getRequest*(
-    market: OnChainMarket, id: RequestId
-): Future[?StorageRequest] {.async: (raises: [CancelledError]).} =
-  try:
-    let key = $id
-
-    if key in market.requestCache:
-      return some market.requestCache[key]
-
-    let request = await market.contract.getRequest(id)
-    market.requestCache[key] = request
-    return some request
-  except Marketplace_UnknownRequest, KeyError:
-    warn "Cannot retrieve the request", error = getCurrentExceptionMsg()
-    return none StorageRequest
-  except EthersError as e:
-    error "Cannot retrieve the request", error = e.msg
-    return none StorageRequest
+method getRequest(market: OnChainMarket,
+                  id: RequestId): Future[?StorageRequest] {.async.} =
+  convertEthersError:
+    try:
+      return some await market.contract.getRequest(id)
+    except ProviderError as e:
+      if e.msgDetail.contains("Unknown request"):
+        return none StorageRequest
+      raise e

-method requestState*(
-    market: OnChainMarket, requestId: RequestId
-): Future[?RequestState] {.async.} =
-  convertEthersError("Failed to get request state"):
+method requestState*(market: OnChainMarket,
+                     requestId: RequestId): Future[?RequestState] {.async.} =
+  convertEthersError:
     try:
       let overrides = CallOverrides(blockTag: some BlockTag.pending)
       return some await market.contract.requestState(requestId, overrides)
-    except Marketplace_UnknownRequest:
-      return none RequestState
+    except ProviderError as e:
+      if e.msgDetail.contains("Unknown request"):
+        return none RequestState
+      raise e

-method slotState*(
-    market: OnChainMarket, slotId: SlotId
-): Future[SlotState] {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to fetch the slot state from the Marketplace contract"):
+method slotState*(market: OnChainMarket,
+                  slotId: SlotId): Future[SlotState] {.async.} =
+  convertEthersError:
     let overrides = CallOverrides(blockTag: some BlockTag.pending)
     return await market.contract.slotState(slotId, overrides)

-method getRequestEnd*(
-    market: OnChainMarket, id: RequestId
-): Future[SecondsSince1970] {.async.} =
-  convertEthersError("Failed to get request end"):
+method getRequestEnd*(market: OnChainMarket,
+                      id: RequestId): Future[SecondsSince1970] {.async.} =
+  convertEthersError:
     return await market.contract.requestEnd(id)

-method requestExpiresAt*(
-    market: OnChainMarket, id: RequestId
-): Future[SecondsSince1970] {.async.} =
-  convertEthersError("Failed to get request expiry"):
+method requestExpiresAt*(market: OnChainMarket,
+                         id: RequestId): Future[SecondsSince1970] {.async.} =
+  convertEthersError:
     return await market.contract.requestExpiry(id)

-method getHost(
-    market: OnChainMarket, requestId: RequestId, slotIndex: uint64
-): Future[?Address] {.async: (raises: [CancelledError, MarketError]).} =
-  convertEthersError("Failed to get slot's host"):
+method getHost(market: OnChainMarket,
+               requestId: RequestId,
+               slotIndex: UInt256): Future[?Address] {.async.} =
+  convertEthersError:
     let slotId = slotId(requestId, slotIndex)
     let address = await market.contract.getHost(slotId)
     if address != Address.default:
@ -246,435 +149,266 @@ method getHost(
else: else:
return none Address return none Address
method currentCollateral*( method getActiveSlot*(market: OnChainMarket,
market: OnChainMarket, slotId: SlotId slotId: SlotId): Future[?Slot] {.async.} =
): Future[UInt256] {.async: (raises: [MarketError, CancelledError]).} = convertEthersError:
convertEthersError("Failed to get slot's current collateral"):
return await market.contract.currentCollateral(slotId)
method getActiveSlot*(market: OnChainMarket, slotId: SlotId): Future[?Slot] {.async.} =
convertEthersError("Failed to get active slot"):
try: try:
return some await market.contract.getActiveSlot(slotId) return some await market.contract.getActiveSlot(slotId)
except Marketplace_SlotIsFree: except ProviderError as e:
return none Slot if e.msgDetail.contains("Slot is free"):
return none Slot
raise e
method fillSlot( method fillSlot(market: OnChainMarket,
market: OnChainMarket, requestId: RequestId,
requestId: RequestId, slotIndex: UInt256,
slotIndex: uint64, proof: Groth16Proof,
proof: Groth16Proof, collateral: UInt256) {.async.} =
collateral: UInt256, convertEthersError:
) {.async: (raises: [CancelledError, MarketError]).} = await market.approveFunds(collateral)
convertEthersError("Failed to fill slot"): discard await market.contract.fillSlot(requestId, slotIndex, proof).confirm(0)
logScope:
requestId
slotIndex
try: method freeSlot*(market: OnChainMarket, slotId: SlotId) {.async.} =
await market.approveFunds(collateral) convertEthersError:
var freeSlot: Future[?TransactionResponse]
if rewardRecipient =? market.rewardRecipient:
# If --reward-recipient specified, use it as the reward recipient, and use
# the SP's address as the collateral recipient
let collateralRecipient = await market.getSigner()
freeSlot = market.contract.freeSlot(
slotId,
rewardRecipient, # --reward-recipient
collateralRecipient) # SP's address
# Add 10% to gas estimate to deal with different evm code flow when we else:
# happen to be the last one to fill a slot in this request # Otherwise, use the SP's address as both the reward and collateral
trace "estimating gas for fillSlot" # recipient (the contract will use msg.sender for both)
let gas = await market.contract.estimateGas.fillSlot(requestId, slotIndex, proof) freeSlot = market.contract.freeSlot(slotId)
let gasLimit = (gas * 110) div 100
let overrides = TransactionOverrides(gasLimit: some gasLimit)
trace "calling fillSlot on contract", estimatedGas = gas, gasLimit = gasLimit discard await freeSlot.confirm(0)
discard await market.contract
.fillSlot(requestId, slotIndex, proof, overrides)
.confirm(1)
trace "fillSlot transaction completed"
except Marketplace_SlotNotFree as parent:
raise newException(
SlotStateMismatchError, "Failed to fill slot because the slot is not free",
parent,
)
method freeSlot*(
market: OnChainMarket, slotId: SlotId
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to free slot"):
try:
var freeSlot: Future[Confirmable]
if rewardRecipient =? market.rewardRecipient:
# If --reward-recipient specified, use it as the reward recipient, and use
# the SP's address as the collateral recipient
let collateralRecipient = await market.getSigner()
# Add 200% to gas estimate to deal with different evm code flow when we method withdrawFunds(market: OnChainMarket,
# happen to be the one to make the request fail requestId: RequestId) {.async.} =
let gas = await market.contract.estimateGas.freeSlot( convertEthersError:
slotId, rewardRecipient, collateralRecipient discard await market.contract.withdrawFunds(requestId).confirm(0)
)
let gasLimit = gas * 3
let overrides = TransactionOverrides(gasLimit: some gasLimit)
trace "calling freeSlot on contract", estimatedGas = gas, gasLimit = gasLimit method isProofRequired*(market: OnChainMarket,
id: SlotId): Future[bool] {.async.} =
freeSlot = market.contract.freeSlot( convertEthersError:
slotId,
rewardRecipient, # --reward-recipient
collateralRecipient, # SP's address
overrides,
)
else:
# Otherwise, use the SP's address as both the reward and collateral
# recipient (the contract will use msg.sender for both)
# Add 200% to gas estimate to deal with different evm code flow when we
# happen to be the one to make the request fail
let gas = await market.contract.estimateGas.freeSlot(slotId)
let gasLimit = gas * 3
let overrides = TransactionOverrides(gasLimit: some (gasLimit))
trace "calling freeSlot on contract", estimatedGas = gas, gasLimit = gasLimit
freeSlot = market.contract.freeSlot(slotId, overrides)
discard await freeSlot.confirm(1)
except Marketplace_SlotIsFree as parent:
raise newException(
SlotStateMismatchError, "Failed to free slot, slot is already free", parent
)
method withdrawFunds(
market: OnChainMarket, requestId: RequestId
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to withdraw funds"):
discard await market.contract.withdrawFunds(requestId).confirm(1)
method isProofRequired*(market: OnChainMarket, id: SlotId): Future[bool] {.async.} =
convertEthersError("Failed to get proof requirement"):
try: try:
let overrides = CallOverrides(blockTag: some BlockTag.pending) let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.isProofRequired(id, overrides) return await market.contract.isProofRequired(id, overrides)
except Marketplace_SlotIsFree: except ProviderError as e:
return false if e.msgDetail.contains("Slot is free"):
return false
raise e
method willProofBeRequired*(market: OnChainMarket, id: SlotId): Future[bool] {.async.} = method willProofBeRequired*(market: OnChainMarket,
convertEthersError("Failed to get future proof requirement"): id: SlotId): Future[bool] {.async.} =
convertEthersError:
try: try:
let overrides = CallOverrides(blockTag: some BlockTag.pending) let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.willProofBeRequired(id, overrides) return await market.contract.willProofBeRequired(id, overrides)
except Marketplace_SlotIsFree: except ProviderError as e:
return false if e.msgDetail.contains("Slot is free"):
return false
raise e
method getChallenge*( method getChallenge*(market: OnChainMarket, id: SlotId): Future[ProofChallenge] {.async.} =
market: OnChainMarket, id: SlotId convertEthersError:
): Future[ProofChallenge] {.async.} =
convertEthersError("Failed to get proof challenge"):
let overrides = CallOverrides(blockTag: some BlockTag.pending) let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.getChallenge(id, overrides) return await market.contract.getChallenge(id, overrides)
method submitProof*( method submitProof*(market: OnChainMarket,
market: OnChainMarket, id: SlotId, proof: Groth16Proof id: SlotId,
) {.async: (raises: [CancelledError, MarketError]).} = proof: Groth16Proof) {.async.} =
convertEthersError("Failed to submit proof"): convertEthersError:
try: discard await market.contract.submitProof(id, proof).confirm(0)
discard await market.contract.submitProof(id, proof).confirm(1)
except Proofs_InvalidProof as parent:
raise newException(
ProofInvalidError, "Failed to submit proof because the proof is invalid", parent
)
method markProofAsMissing*( method markProofAsMissing*(market: OnChainMarket,
market: OnChainMarket, id: SlotId, period: Period id: SlotId,
) {.async: (raises: [CancelledError, MarketError]).} = period: Period) {.async.} =
convertEthersError("Failed to mark proof as missing"): convertEthersError:
# Add 50% to gas estimate to deal with different evm code flow when we discard await market.contract.markProofAsMissing(id, period).confirm(0)
# happen to be the one to make the request fail
let gas = await market.contract.estimateGas.markProofAsMissing(id, period)
let gasLimit = (gas * 150) div 100
let overrides = TransactionOverrides(gasLimit: some gasLimit)
trace "calling markProofAsMissing on contract", method canProofBeMarkedAsMissing*(
estimatedGas = gas, gasLimit = gasLimit market: OnChainMarket,
id: SlotId,
discard await market.contract.markProofAsMissing(id, period, overrides).confirm(1) period: Period
): Future[bool] {.async.} =
method canMarkProofAsMissing*( let provider = market.contract.provider
market: OnChainMarket, id: SlotId, period: Period let contractWithoutSigner = market.contract.connect(provider)
): Future[bool] {.async: (raises: [CancelledError]).} = let overrides = CallOverrides(blockTag: some BlockTag.pending)
try: try:
let overrides = CallOverrides(blockTag: some BlockTag.pending) discard await contractWithoutSigner.markProofAsMissing(id, period, overrides)
discard await market.contract.canMarkProofAsMissing(id, period, overrides)
return true return true
except EthersError as e: except EthersError as e:
trace "Proof cannot be marked as missing", msg = e.msg trace "Proof cannot be marked as missing", msg = e.msg
return false return false
method reserveSlot*( method reserveSlot*(
market: OnChainMarket, requestId: RequestId, slotIndex: uint64 market: OnChainMarket,
) {.async: (raises: [CancelledError, MarketError]).} = requestId: RequestId,
convertEthersError("Failed to reserve slot"): slotIndex: UInt256) {.async.} =
try:
# Add 25% to gas estimate to deal with different evm code flow when we
# happen to be the last one that is allowed to reserve the slot
let gas = await market.contract.estimateGas.reserveSlot(requestId, slotIndex)
let gasLimit = (gas * 125) div 100
let overrides = TransactionOverrides(gasLimit: some gasLimit)
trace "calling reserveSlot on contract", estimatedGas = gas, gasLimit = gasLimit convertEthersError:
discard await market.contract.reserveSlot(requestId, slotIndex).confirm(0)
discard
await market.contract.reserveSlot(requestId, slotIndex, overrides).confirm(1)
except SlotReservations_ReservationNotAllowed:
raise newException(
SlotReservationNotAllowedError,
"Failed to reserve slot because reservation is not allowed",
)
method canReserveSlot*( method canReserveSlot*(
market: OnChainMarket, requestId: RequestId, slotIndex: uint64 market: OnChainMarket,
): Future[bool] {.async.} = requestId: RequestId,
convertEthersError("Unable to determine if slot can be reserved"): slotIndex: UInt256): Future[bool] {.async.} =
convertEthersError:
return await market.contract.canReserveSlot(requestId, slotIndex) return await market.contract.canReserveSlot(requestId, slotIndex)
method subscribeRequests*( method subscribeRequests*(market: OnChainMarket,
market: OnChainMarket, callback: OnRequest callback: OnRequest):
): Future[MarketSubscription] {.async.} = Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!StorageRequested) {.raises: [].} = proc onEvent(event: StorageRequested) {.upraises:[].} =
without event =? eventResult, eventErr: callback(event.requestId,
error "There was an error in Request subscription", msg = eventErr.msg event.ask,
return event.expiry)
callback(event.requestId, event.ask, event.expiry) convertEthersError:
convertEthersError("Failed to subscribe to StorageRequested events"):
let subscription = await market.contract.subscribe(StorageRequested, onEvent) let subscription = await market.contract.subscribe(StorageRequested, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription) return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeSlotFilled*( method subscribeSlotFilled*(market: OnChainMarket,
market: OnChainMarket, callback: OnSlotFilled callback: OnSlotFilled):
): Future[MarketSubscription] {.async.} = Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!SlotFilled) {.raises: [].} = proc onEvent(event: SlotFilled) {.upraises:[].} =
without event =? eventResult, eventErr:
error "There was an error in SlotFilled subscription", msg = eventErr.msg
return
callback(event.requestId, event.slotIndex) callback(event.requestId, event.slotIndex)
convertEthersError("Failed to subscribe to SlotFilled events"): convertEthersError:
let subscription = await market.contract.subscribe(SlotFilled, onEvent) let subscription = await market.contract.subscribe(SlotFilled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription) return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeSlotFilled*(
-    market: OnChainMarket,
-    requestId: RequestId,
-    slotIndex: uint64,
-    callback: OnSlotFilled,
-): Future[MarketSubscription] {.async.} =
-  proc onSlotFilled(eventRequestId: RequestId, eventSlotIndex: uint64) =
+method subscribeSlotFilled*(market: OnChainMarket,
+                            requestId: RequestId,
+                            slotIndex: UInt256,
+                            callback: OnSlotFilled):
+                           Future[MarketSubscription] {.async.} =
+  proc onSlotFilled(eventRequestId: RequestId, eventSlotIndex: UInt256) =
    if eventRequestId == requestId and eventSlotIndex == slotIndex:
      callback(requestId, slotIndex)
-  convertEthersError("Failed to subscribe to SlotFilled events"):
+  convertEthersError:
    return await market.subscribeSlotFilled(onSlotFilled)
-method subscribeSlotFreed*(
-    market: OnChainMarket, callback: OnSlotFreed
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!SlotFreed) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in SlotFreed subscription", msg = eventErr.msg
-      return
+method subscribeSlotFreed*(market: OnChainMarket,
+                           callback: OnSlotFreed):
+                          Future[MarketSubscription] {.async.} =
+  proc onEvent(event: SlotFreed) {.upraises:[].} =
    callback(event.requestId, event.slotIndex)
-  convertEthersError("Failed to subscribe to SlotFreed events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(SlotFreed, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeSlotReservationsFull*(
-    market: OnChainMarket, callback: OnSlotReservationsFull
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!SlotReservationsFull) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in SlotReservationsFull subscription",
-        msg = eventErr.msg
-      return
+method subscribeSlotReservationsFull*(
+  market: OnChainMarket,
+  callback: OnSlotReservationsFull): Future[MarketSubscription] {.async.} =
+  proc onEvent(event: SlotReservationsFull) {.upraises:[].} =
    callback(event.requestId, event.slotIndex)
-  convertEthersError("Failed to subscribe to SlotReservationsFull events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(SlotReservationsFull, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeFulfillment(
-    market: OnChainMarket, callback: OnFulfillment
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!RequestFulfilled) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in RequestFulfillment subscription", msg = eventErr.msg
-      return
+method subscribeFulfillment(market: OnChainMarket,
+                            callback: OnFulfillment):
+                           Future[MarketSubscription] {.async.} =
+  proc onEvent(event: RequestFulfilled) {.upraises:[].} =
    callback(event.requestId)
-  convertEthersError("Failed to subscribe to RequestFulfilled events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(RequestFulfilled, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeFulfillment(
-    market: OnChainMarket, requestId: RequestId, callback: OnFulfillment
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!RequestFulfilled) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in RequestFulfillment subscription", msg = eventErr.msg
-      return
+method subscribeFulfillment(market: OnChainMarket,
+                            requestId: RequestId,
+                            callback: OnFulfillment):
+                           Future[MarketSubscription] {.async.} =
+  proc onEvent(event: RequestFulfilled) {.upraises:[].} =
    if event.requestId == requestId:
      callback(event.requestId)
-  convertEthersError("Failed to subscribe to RequestFulfilled events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(RequestFulfilled, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeRequestCancelled*(
-    market: OnChainMarket, callback: OnRequestCancelled
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!RequestCancelled) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in RequestCancelled subscription", msg = eventErr.msg
-      return
+method subscribeRequestCancelled*(market: OnChainMarket,
+                                  callback: OnRequestCancelled):
+                                 Future[MarketSubscription] {.async.} =
+  proc onEvent(event: RequestCancelled) {.upraises:[].} =
    callback(event.requestId)
-  convertEthersError("Failed to subscribe to RequestCancelled events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(RequestCancelled, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeRequestCancelled*(
-    market: OnChainMarket, requestId: RequestId, callback: OnRequestCancelled
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!RequestCancelled) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in RequestCancelled subscription", msg = eventErr.msg
-      return
+method subscribeRequestCancelled*(market: OnChainMarket,
+                                  requestId: RequestId,
+                                  callback: OnRequestCancelled):
+                                 Future[MarketSubscription] {.async.} =
+  proc onEvent(event: RequestCancelled) {.upraises:[].} =
    if event.requestId == requestId:
      callback(event.requestId)
-  convertEthersError("Failed to subscribe to RequestCancelled events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(RequestCancelled, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeRequestFailed*(
-    market: OnChainMarket, callback: OnRequestFailed
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!RequestFailed) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in RequestFailed subscription", msg = eventErr.msg
-      return
+method subscribeRequestFailed*(market: OnChainMarket,
+                               callback: OnRequestFailed):
+                              Future[MarketSubscription] {.async.} =
+  proc onEvent(event: RequestFailed) {.upraises:[]} =
    callback(event.requestId)
-  convertEthersError("Failed to subscribe to RequestFailed events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(RequestFailed, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeRequestFailed*(
-    market: OnChainMarket, requestId: RequestId, callback: OnRequestFailed
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!RequestFailed) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in RequestFailed subscription", msg = eventErr.msg
-      return
+method subscribeRequestFailed*(market: OnChainMarket,
+                               requestId: RequestId,
+                               callback: OnRequestFailed):
+                              Future[MarketSubscription] {.async.} =
+  proc onEvent(event: RequestFailed) {.upraises:[]} =
    if event.requestId == requestId:
      callback(event.requestId)
-  convertEthersError("Failed to subscribe to RequestFailed events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(RequestFailed, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
-method subscribeProofSubmission*(
-    market: OnChainMarket, callback: OnProofSubmitted
-): Future[MarketSubscription] {.async.} =
-  proc onEvent(eventResult: ?!ProofSubmitted) {.raises: [].} =
-    without event =? eventResult, eventErr:
-      error "There was an error in ProofSubmitted subscription", msg = eventErr.msg
-      return
+method subscribeProofSubmission*(market: OnChainMarket,
+                                 callback: OnProofSubmitted):
+                                Future[MarketSubscription] {.async.} =
+  proc onEvent(event: ProofSubmitted) {.upraises: [].} =
    callback(event.id)
-  convertEthersError("Failed to subscribe to ProofSubmitted events"):
+  convertEthersError:
    let subscription = await market.contract.subscribe(ProofSubmitted, onEvent)
    return OnChainMarketSubscription(eventSubscription: subscription)
method unsubscribe*(subscription: OnChainMarketSubscription) {.async.} =
  await subscription.eventSubscription.unsubscribe()
-method queryPastSlotFilledEvents*(
-    market: OnChainMarket, fromBlock: BlockTag
-): Future[seq[SlotFilled]] {.async.} =
-  convertEthersError("Failed to get past SlotFilled events from block"):
-    return await market.contract.queryFilter(SlotFilled, fromBlock, BlockTag.latest)
-
-method queryPastSlotFilledEvents*(
-    market: OnChainMarket, blocksAgo: int
-): Future[seq[SlotFilled]] {.async.} =
-  convertEthersError("Failed to get past SlotFilled events"):
-    let fromBlock = await market.contract.provider.pastBlockTag(blocksAgo)
-    return await market.queryPastSlotFilledEvents(fromBlock)
-
-method queryPastSlotFilledEvents*(
-    market: OnChainMarket, fromTime: SecondsSince1970
-): Future[seq[SlotFilled]] {.async.} =
-  convertEthersError("Failed to get past SlotFilled events from time"):
-    let fromBlock = await market.contract.provider.blockNumberForEpoch(fromTime)
-    return await market.queryPastSlotFilledEvents(BlockTag.init(fromBlock))
-
-method queryPastStorageRequestedEvents*(
-    market: OnChainMarket, fromBlock: BlockTag
-): Future[seq[StorageRequested]] {.async.} =
-  convertEthersError("Failed to get past StorageRequested events from block"):
-    return
-      await market.contract.queryFilter(StorageRequested, fromBlock, BlockTag.latest)
-
-method queryPastStorageRequestedEvents*(
-    market: OnChainMarket, blocksAgo: int
-): Future[seq[StorageRequested]] {.async.} =
-  convertEthersError("Failed to get past StorageRequested events"):
-    let fromBlock = await market.contract.provider.pastBlockTag(blocksAgo)
-    return await market.queryPastStorageRequestedEvents(fromBlock)
+method queryPastEvents*[T: MarketplaceEvent](
+  market: OnChainMarket,
+  _: type T,
+  blocksAgo: int): Future[seq[T]] {.async.} =
+
+  convertEthersError:
+    let contract = market.contract
+    let provider = contract.provider
+
+    let head = await provider.getBlockNumber()
+    let fromBlock = BlockTag.init(head - blocksAgo.abs.u256)
+
+    return await contract.queryFilter(T,
+                                      fromBlock,
+                                      BlockTag.latest)
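A sketch of how the base-side overloads above might be driven, assuming an initialized OnChainMarket; the look-back depth and timestamp are placeholders, not values from the diff:

# Hypothetical caller of the overloads above; not part of the diff.
proc dumpRecentFills(market: OnChainMarket) {.async.} =
  # look back a fixed number of blocks
  let byDepth = await market.queryPastSlotFilledEvents(blocksAgo = 100)
  # look back to a unix timestamp (resolved via blockNumberForEpoch)
  let byTime =
    await market.queryPastSlotFilledEvents(fromTime = SecondsSince1970(1_700_000_000))
  trace "past SlotFilled events", byDepth = byDepth.len, byTime = byTime.len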
method slotCollateral*(
    market: OnChainMarket, requestId: RequestId, slotIndex: uint64
): Future[?!UInt256] {.async: (raises: [CancelledError]).} =
  let slotid = slotId(requestId, slotIndex)
  try:
    let slotState = await market.slotState(slotid)

    without request =? await market.getRequest(requestId):
      return failure newException(
        MarketError, "Failure calculating the slotCollateral, cannot get the request"
      )

    return market.slotCollateral(request.ask.collateralPerSlot, slotState)
  except MarketError as error:
    error "Error when trying to calculate the slotCollateral", error = error.msg
    return failure error

method slotCollateral*(
    market: OnChainMarket, collateralPerSlot: UInt256, slotState: SlotState
): ?!UInt256 {.raises: [].} =
  if slotState == SlotState.Repair:
    without repairRewardPercentage =?
      market.configuration .? collateral .? repairRewardPercentage:
      return failure newException(
        MarketError,
        "Failure calculating the slotCollateral, cannot get the reward percentage",
      )

    return success (
      collateralPerSlot - (collateralPerSlot * repairRewardPercentage.u256).div(
        100.u256
      )
    )

  return success(collateralPerSlot)
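A worked example of the Repair discount above, with made-up numbers (collateralPerSlot = 1000, repairRewardPercentage = 10): the repairing host posts 10% less collateral.

# 1000 - (1000 * 10) div 100 = 900
let collateralPerSlot = 1000.u256
let repairRewardPercentage = 10'u8
let discounted =
  collateralPerSlot - (collateralPerSlot * repairRewardPercentage.u256).div(100.u256)
assert discounted == 900.u256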


@@ -17,182 +17,40 @@ export requests

type
  Marketplace* = ref object of Contract
-  Marketplace_RepairRewardPercentageTooHigh* = object of SolidityError
-  Marketplace_SlashPercentageTooHigh* = object of SolidityError
-  Marketplace_MaximumSlashingTooHigh* = object of SolidityError
-  Marketplace_InvalidExpiry* = object of SolidityError
-  Marketplace_InvalidMaxSlotLoss* = object of SolidityError
-  Marketplace_InsufficientSlots* = object of SolidityError
-  Marketplace_InvalidClientAddress* = object of SolidityError
-  Marketplace_RequestAlreadyExists* = object of SolidityError
-  Marketplace_InvalidSlot* = object of SolidityError
-  Marketplace_SlotNotFree* = object of SolidityError
-  Marketplace_InvalidSlotHost* = object of SolidityError
-  Marketplace_AlreadyPaid* = object of SolidityError
-  Marketplace_TransferFailed* = object of SolidityError
-  Marketplace_UnknownRequest* = object of SolidityError
-  Marketplace_InvalidState* = object of SolidityError
-  Marketplace_StartNotBeforeExpiry* = object of SolidityError
-  Marketplace_SlotNotAcceptingProofs* = object of SolidityError
-  Marketplace_SlotIsFree* = object of SolidityError
-  Marketplace_ReservationRequired* = object of SolidityError
-  Marketplace_NothingToWithdraw* = object of SolidityError
-  Marketplace_InsufficientDuration* = object of SolidityError
-  Marketplace_InsufficientProofProbability* = object of SolidityError
-  Marketplace_InsufficientCollateral* = object of SolidityError
-  Marketplace_InsufficientReward* = object of SolidityError
-  Marketplace_InvalidCid* = object of SolidityError
-  Marketplace_DurationExceedsLimit* = object of SolidityError
-  Proofs_InsufficientBlockHeight* = object of SolidityError
-  Proofs_InvalidProof* = object of SolidityError
-  Proofs_ProofAlreadySubmitted* = object of SolidityError
-  Proofs_PeriodNotEnded* = object of SolidityError
-  Proofs_ValidationTimedOut* = object of SolidityError
-  Proofs_ProofNotMissing* = object of SolidityError
-  Proofs_ProofNotRequired* = object of SolidityError
-  Proofs_ProofAlreadyMarkedMissing* = object of SolidityError
-  Periods_InvalidSecondsPerPeriod* = object of SolidityError
-  SlotReservations_ReservationNotAllowed* = object of SolidityError

-proc configuration*(marketplace: Marketplace): MarketplaceConfig {.contract, view.}
+proc config*(marketplace: Marketplace): MarketplaceConfig {.contract, view.}
proc token*(marketplace: Marketplace): Address {.contract, view.}
-proc currentCollateral*(
-  marketplace: Marketplace, id: SlotId
-): UInt256 {.contract, view.}
+proc slashMisses*(marketplace: Marketplace): UInt256 {.contract, view.}
+proc slashPercentage*(marketplace: Marketplace): UInt256 {.contract, view.}
+proc minCollateralThreshold*(marketplace: Marketplace): UInt256 {.contract, view.}
-proc requestStorage*(
-  marketplace: Marketplace, request: StorageRequest
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_InvalidClientAddress, Marketplace_RequestAlreadyExists,
-    Marketplace_InvalidExpiry, Marketplace_InsufficientSlots,
-    Marketplace_InvalidMaxSlotLoss, Marketplace_InsufficientDuration,
-    Marketplace_InsufficientProofProbability, Marketplace_InsufficientCollateral,
-    Marketplace_InsufficientReward, Marketplace_InvalidCid,
-  ]
-.}
-proc fillSlot*(
-  marketplace: Marketplace, requestId: RequestId, slotIndex: uint64, proof: Groth16Proof
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_InvalidSlot, Marketplace_ReservationRequired, Marketplace_SlotNotFree,
-    Marketplace_StartNotBeforeExpiry, Marketplace_UnknownRequest,
-  ]
-.}
-proc withdrawFunds*(
-  marketplace: Marketplace, requestId: RequestId
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_InvalidClientAddress, Marketplace_InvalidState,
-    Marketplace_NothingToWithdraw, Marketplace_UnknownRequest,
-  ]
-.}
-proc withdrawFunds*(
-  marketplace: Marketplace, requestId: RequestId, withdrawAddress: Address
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_InvalidClientAddress, Marketplace_InvalidState,
-    Marketplace_NothingToWithdraw, Marketplace_UnknownRequest,
-  ]
-.}
-proc freeSlot*(
-  marketplace: Marketplace, id: SlotId
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_InvalidSlotHost, Marketplace_AlreadyPaid,
-    Marketplace_StartNotBeforeExpiry, Marketplace_UnknownRequest, Marketplace_SlotIsFree,
-  ]
-.}
-proc freeSlot*(
-  marketplace: Marketplace,
-  id: SlotId,
-  rewardRecipient: Address,
-  collateralRecipient: Address,
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_InvalidSlotHost, Marketplace_AlreadyPaid,
-    Marketplace_StartNotBeforeExpiry, Marketplace_UnknownRequest, Marketplace_SlotIsFree,
-  ]
-.}
-proc getRequest*(
-  marketplace: Marketplace, id: RequestId
-): StorageRequest {.contract, view, errors: [Marketplace_UnknownRequest].}
+proc requestStorage*(marketplace: Marketplace, request: StorageRequest): ?TransactionResponse {.contract.}
+proc fillSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256, proof: Groth16Proof): ?TransactionResponse {.contract.}
+proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId): ?TransactionResponse {.contract.}
+proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId, withdrawAddress: Address): ?TransactionResponse {.contract.}
+proc freeSlot*(marketplace: Marketplace, id: SlotId): ?TransactionResponse {.contract.}
+proc freeSlot*(marketplace: Marketplace, id: SlotId, rewardRecipient: Address, collateralRecipient: Address): ?TransactionResponse {.contract.}
+proc getRequest*(marketplace: Marketplace, id: RequestId): StorageRequest {.contract, view.}
proc getHost*(marketplace: Marketplace, id: SlotId): Address {.contract, view.}
-proc getActiveSlot*(
-  marketplace: Marketplace, id: SlotId
-): Slot {.contract, view, errors: [Marketplace_SlotIsFree].}
+proc getActiveSlot*(marketplace: Marketplace, id: SlotId): Slot {.contract, view.}
proc myRequests*(marketplace: Marketplace): seq[RequestId] {.contract, view.}
proc mySlots*(marketplace: Marketplace): seq[SlotId] {.contract, view.}
-proc requestState*(
-  marketplace: Marketplace, requestId: RequestId
-): RequestState {.contract, view, errors: [Marketplace_UnknownRequest].}
+proc requestState*(marketplace: Marketplace, requestId: RequestId): RequestState {.contract, view.}
proc slotState*(marketplace: Marketplace, slotId: SlotId): SlotState {.contract, view.}
-proc requestEnd*(
-  marketplace: Marketplace, requestId: RequestId
-): SecondsSince1970 {.contract, view.}
-proc requestExpiry*(
-  marketplace: Marketplace, requestId: RequestId
-): SecondsSince1970 {.contract, view.}
-proc proofEnd*(marketplace: Marketplace, id: SlotId): UInt256 {.contract, view.}
+proc requestEnd*(marketplace: Marketplace, requestId: RequestId): SecondsSince1970 {.contract, view.}
+proc requestExpiry*(marketplace: Marketplace, requestId: RequestId): SecondsSince1970 {.contract, view.}
+proc proofTimeout*(marketplace: Marketplace): UInt256 {.contract, view.}
proc missingProofs*(marketplace: Marketplace, id: SlotId): UInt256 {.contract, view.}
proc isProofRequired*(marketplace: Marketplace, id: SlotId): bool {.contract, view.}
proc willProofBeRequired*(marketplace: Marketplace, id: SlotId): bool {.contract, view.}
-proc getChallenge*(
-  marketplace: Marketplace, id: SlotId
-): array[32, byte] {.contract, view.}
+proc getChallenge*(marketplace: Marketplace, id: SlotId): array[32, byte] {.contract, view.}
proc getPointer*(marketplace: Marketplace, id: SlotId): uint8 {.contract, view.}
-proc submitProof*(
-  marketplace: Marketplace, id: SlotId, proof: Groth16Proof
-): Confirmable {.
-  contract,
-  errors:
-    [Proofs_ProofAlreadySubmitted, Proofs_InvalidProof, Marketplace_UnknownRequest]
-.}
-proc markProofAsMissing*(
-  marketplace: Marketplace, id: SlotId, period: uint64
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_SlotNotAcceptingProofs, Marketplace_StartNotBeforeExpiry,
-    Proofs_PeriodNotEnded, Proofs_ValidationTimedOut, Proofs_ProofNotMissing,
-    Proofs_ProofNotRequired, Proofs_ProofAlreadyMarkedMissing,
-  ]
-.}
-proc canMarkProofAsMissing*(
-  marketplace: Marketplace, id: SlotId, period: uint64
-): Confirmable {.
-  contract,
-  errors: [
-    Marketplace_SlotNotAcceptingProofs, Proofs_PeriodNotEnded,
-    Proofs_ValidationTimedOut, Proofs_ProofNotMissing, Proofs_ProofNotRequired,
-    Proofs_ProofAlreadyMarkedMissing,
-  ]
-.}
-proc reserveSlot*(
-  marketplace: Marketplace, requestId: RequestId, slotIndex: uint64
-): Confirmable {.contract.}
-proc canReserveSlot*(
-  marketplace: Marketplace, requestId: RequestId, slotIndex: uint64
-): bool {.contract, view.}
+proc submitProof*(marketplace: Marketplace, id: SlotId, proof: Groth16Proof): ?TransactionResponse {.contract.}
+proc markProofAsMissing*(marketplace: Marketplace, id: SlotId, period: UInt256): ?TransactionResponse {.contract.}
+proc reserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): ?TransactionResponse {.contract.}
+proc canReserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): bool {.contract, view.}
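On the base side, each errors list maps the contract's Solidity custom errors onto the exception types declared at the top of this file, so a revert surfaces as a typed Nim exception, as the reserveSlot wrapper earlier in this diff shows. A sketch of a hypothetical call site (the surrounding setup is assumed, not shown in the diff):

# Hypothetical call site; the error types come from the declarations above.
try:
  discard await marketplace.withdrawFunds(requestId).confirm(1)
except Marketplace_NothingToWithdraw:
  trace "nothing to withdraw yet"
except Marketplace_UnknownRequest:
  raise newException(MarketError, "unknown request")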


@@ -1,22 +1,19 @@
import pkg/stint
import pkg/contractabi
-import pkg/ethers/contracts/fields
+import pkg/ethers/fields

type
  Groth16Proof* = object
    a*: G1Point
    b*: G2Point
    c*: G1Point

  G1Point* = object
    x*: UInt256
    y*: UInt256

  # A field element F_{p^2} encoded as `real + i * imag`
  Fp2Element* = object
    real*: UInt256
    imag*: UInt256

  G2Point* = object
    x*: Fp2Element
    y*: Fp2Element
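Both sides share this plain value layout; a dummy literal showing the nesting, with placeholder numbers:

let proof = Groth16Proof(
  a: G1Point(x: 1.u256, y: 2.u256),
  b: G2Point(
    x: Fp2Element(real: 3.u256, imag: 4.u256),
    y: Fp2Element(real: 5.u256, imag: 6.u256)),
  c: G1Point(x: 7.u256, y: 8.u256))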


@@ -1,123 +0,0 @@
import pkg/ethers/provider
import pkg/chronos
import pkg/questionable

import ../logutils

from ../clock import SecondsSince1970

logScope:
  topics = "marketplace onchain provider"

proc raiseProviderError(message: string) {.raises: [ProviderError].} =
  raise newException(ProviderError, message)

proc blockNumberAndTimestamp*(
    provider: Provider, blockTag: BlockTag
): Future[(UInt256, UInt256)] {.async: (raises: [ProviderError, CancelledError]).} =
  without latestBlock =? await provider.getBlock(blockTag):
    raiseProviderError("Could not get latest block")

  without latestBlockNumber =? latestBlock.number:
    raiseProviderError("Could not get latest block number")

  return (latestBlockNumber, latestBlock.timestamp)

proc binarySearchFindClosestBlock(
    provider: Provider, epochTime: int, low: UInt256, high: UInt256
): Future[UInt256] {.async: (raises: [ProviderError, CancelledError]).} =
  let (_, lowTimestamp) = await provider.blockNumberAndTimestamp(BlockTag.init(low))
  let (_, highTimestamp) = await provider.blockNumberAndTimestamp(BlockTag.init(high))
  if abs(lowTimestamp.truncate(int) - epochTime) <
      abs(highTimestamp.truncate(int) - epochTime):
    return low
  else:
    return high

proc binarySearchBlockNumberForEpoch(
    provider: Provider,
    epochTime: UInt256,
    latestBlockNumber: UInt256,
    earliestBlockNumber: UInt256,
): Future[UInt256] {.async: (raises: [ProviderError, CancelledError]).} =
  var low = earliestBlockNumber
  var high = latestBlockNumber

  while low <= high:
    if low == 0 and high == 0:
      return low

    let mid = (low + high) div 2
    let (midBlockNumber, midBlockTimestamp) =
      await provider.blockNumberAndTimestamp(BlockTag.init(mid))

    if midBlockTimestamp < epochTime:
      low = mid + 1
    elif midBlockTimestamp > epochTime:
      high = mid - 1
    else:
      return midBlockNumber
  # NOTICE that because of how the binary search is implemented, when it
  # finishes low is always greater than high - this is why we use high, where
  # intuitively we would use low:
  await provider.binarySearchFindClosestBlock(
    epochTime.truncate(int), low = high, high = low
  )

proc blockNumberForEpoch*(
    provider: Provider, epochTime: SecondsSince1970
): Future[UInt256] {.async: (raises: [ProviderError, CancelledError]).} =
  let epochTimeUInt256 = epochTime.u256
  let (latestBlockNumber, latestBlockTimestamp) =
    await provider.blockNumberAndTimestamp(BlockTag.latest)
  let (earliestBlockNumber, earliestBlockTimestamp) =
    await provider.blockNumberAndTimestamp(BlockTag.earliest)

  # Initially we used the average block time to predict
  # the number of blocks we need to look back in order to find
  # the block number corresponding to the given epoch time.
  # This estimation can be highly inaccurate if block time
  # was changing in the past or is fluctuating and therefore
  # we used that information initially only to find out
  # if the available history is long enough to perform effective search.
  # It turns out we do not have to do that. There is an easier way.
  #
  # First we check if the given epoch time equals the timestamp of either
  # the earliest or the latest block. If it does, we just return the
  # block number of that block.
  #
  # Otherwise, if the earliest available block is not the genesis block,
  # we should check the timestamp of that earliest block and if it is greater
  # than the epoch time, we should issue a warning and return
  # that earliest block number.
  # In all other cases, thus when the earliest block is not the genesis
  # block but its timestamp is not greater than the requested epoch time, or
  # if the earliest available block is the genesis block,
  # (which means we have the whole history available), we should proceed with
  # the binary search.
  #
  # An additional benefit of this method is that we do not have to rely
  # on the average block time, which not only makes the whole thing
  # more reliable, but also easier to test.

  # Are we lucky today?
  if earliestBlockTimestamp == epochTimeUInt256:
    return earliestBlockNumber
  if latestBlockTimestamp == epochTimeUInt256:
    return latestBlockNumber

  if earliestBlockNumber > 0 and earliestBlockTimestamp > epochTimeUInt256:
    let availableHistoryInDays =
      (latestBlockTimestamp - earliestBlockTimestamp) div 1.days.secs.u256
    warn "Short block history detected.",
      earliestBlockTimestamp = earliestBlockTimestamp, days = availableHistoryInDays
    return earliestBlockNumber

  return await provider.binarySearchBlockNumberForEpoch(
    epochTimeUInt256, latestBlockNumber, earliestBlockNumber
  )

proc pastBlockTag*(
    provider: Provider, blocksAgo: int
): Future[BlockTag] {.async: (raises: [ProviderError, CancelledError]).} =
  let head = await provider.getBlockNumber()
  return BlockTag.init(head - blocksAgo.abs.u256)
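A sketch of how the two helpers above combine, assuming an ethers Provider instance; the look-back depth and timestamp are placeholders:

# Hypothetical caller; not part of the deleted file.
proc lookback(provider: Provider) {.async.} =
  let tag = await provider.pastBlockTag(1000)    # a tag 1000 blocks back
  let number =                                   # closest block to a unix time
    await provider.blockNumberForEpoch(SecondsSince1970(1_700_000_000))
  trace "resolved history anchors", tag = $tag, number = $number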


@@ -2,14 +2,13 @@ import std/hashes
import std/sequtils
import std/typetraits

import pkg/contractabi
-import pkg/nimcrypto/keccak
-import pkg/ethers/contracts/fields
+import pkg/nimcrypto
+import pkg/ethers/fields
import pkg/questionable/results
import pkg/stew/byteutils
-import pkg/libp2p/[cid, multicodec]
+import pkg/upraises

import ../logutils
import ../utils/json
-from ../errors import mapFailure

export contractabi
@@ -18,26 +17,22 @@ type
    client* {.serialize.}: Address
    ask* {.serialize.}: StorageAsk
    content* {.serialize.}: StorageContent
-    expiry* {.serialize.}: uint64
+    expiry* {.serialize.}: UInt256
    nonce*: Nonce

  StorageAsk* = object
-    proofProbability* {.serialize.}: UInt256
-    pricePerBytePerSecond* {.serialize.}: UInt256
-    collateralPerByte* {.serialize.}: UInt256
    slots* {.serialize.}: uint64
-    slotSize* {.serialize.}: uint64
-    duration* {.serialize.}: uint64
+    slotSize* {.serialize.}: UInt256
+    duration* {.serialize.}: UInt256
+    proofProbability* {.serialize.}: UInt256
+    reward* {.serialize.}: UInt256
+    collateral* {.serialize.}: UInt256
    maxSlotLoss* {.serialize.}: uint64

  StorageContent* = object
-    cid* {.serialize.}: Cid
+    cid* {.serialize.}: string
    merkleRoot*: array[32, byte]

  Slot* = object
    request* {.serialize.}: StorageRequest
-    slotIndex* {.serialize.}: uint64
+    slotIndex* {.serialize.}: UInt256

  SlotId* = distinct array[32, byte]
  RequestId* = distinct array[32, byte]
  Nonce* = distinct array[32, byte]
@@ -47,7 +42,6 @@ type
    Cancelled
    Finished
    Failed

  SlotState* {.pure.} = enum
    Free
    Filled
@@ -55,7 +49,6 @@ type
    Failed
    Paid
    Cancelled
-    Repair

proc `==`*(x, y: Nonce): bool {.borrow.}
proc `==`*(x, y: RequestId): bool {.borrow.}
@@ -87,43 +80,44 @@ proc toHex*[T: distinct](id: T): string =
  type baseType = T.distinctBase
  baseType(id).toHex

-logutils.formatIt(LogFormat.textLines, Nonce):
-  it.short0xHexLog
-logutils.formatIt(LogFormat.textLines, RequestId):
-  it.short0xHexLog
-logutils.formatIt(LogFormat.textLines, SlotId):
-  it.short0xHexLog
-logutils.formatIt(LogFormat.json, Nonce):
-  it.to0xHexLog
-logutils.formatIt(LogFormat.json, RequestId):
-  it.to0xHexLog
-logutils.formatIt(LogFormat.json, SlotId):
-  it.to0xHexLog
+logutils.formatIt(LogFormat.textLines, Nonce): it.short0xHexLog
+logutils.formatIt(LogFormat.textLines, RequestId): it.short0xHexLog
+logutils.formatIt(LogFormat.textLines, SlotId): it.short0xHexLog
+logutils.formatIt(LogFormat.json, Nonce): it.to0xHexLog
+logutils.formatIt(LogFormat.json, RequestId): it.to0xHexLog
+logutils.formatIt(LogFormat.json, SlotId): it.to0xHexLog

func fromTuple(_: type StorageRequest, tupl: tuple): StorageRequest =
  StorageRequest(
-    client: tupl[0], ask: tupl[1], content: tupl[2], expiry: tupl[3], nonce: tupl[4]
+    client: tupl[0],
+    ask: tupl[1],
+    content: tupl[2],
+    expiry: tupl[3],
+    nonce: tupl[4]
  )

func fromTuple(_: type Slot, tupl: tuple): Slot =
-  Slot(request: tupl[0], slotIndex: tupl[1])
+  Slot(
+    request: tupl[0],
+    slotIndex: tupl[1]
+  )

func fromTuple(_: type StorageAsk, tupl: tuple): StorageAsk =
  StorageAsk(
-    proofProbability: tupl[0],
-    pricePerBytePerSecond: tupl[1],
-    collateralPerByte: tupl[2],
-    slots: tupl[3],
-    slotSize: tupl[4],
-    duration: tupl[5],
-    maxSlotLoss: tupl[6],
+    slots: tupl[0],
+    slotSize: tupl[1],
+    duration: tupl[2],
+    proofProbability: tupl[3],
+    reward: tupl[4],
+    collateral: tupl[5],
+    maxSlotLoss: tupl[6]
  )

func fromTuple(_: type StorageContent, tupl: tuple): StorageContent =
-  StorageContent(cid: tupl[0], merkleRoot: tupl[1])
-
-func solidityType*(_: type Cid): string =
-  solidityType(seq[byte])
+  StorageContent(
+    cid: tupl[0],
+    merkleRoot: tupl[1]
+  )

func solidityType*(_: type StorageContent): string =
  solidityType(StorageContent.fieldTypes)
@@ -134,10 +128,6 @@ func solidityType*(_: type StorageAsk): string =
func solidityType*(_: type StorageRequest): string =
  solidityType(StorageRequest.fieldTypes)

-# Note: it seems to be ok to ignore the vbuffer offset for now
-func encode*(encoder: var AbiEncoder, cid: Cid) =
-  encoder.write(cid.data.buffer)
-
func encode*(encoder: var AbiEncoder, content: StorageContent) =
  encoder.write(content.fieldValues)
@@ -150,12 +140,8 @@ func encode*(encoder: var AbiEncoder, id: RequestId | SlotId | Nonce) =
func encode*(encoder: var AbiEncoder, request: StorageRequest) =
  encoder.write(request.fieldValues)

-func encode*(encoder: var AbiEncoder, slot: Slot) =
-  encoder.write(slot.fieldValues)
-
-func decode*(decoder: var AbiDecoder, T: type Cid): ?!T =
-  let data = ?decoder.read(seq[byte])
-  Cid.init(data).mapFailure
+func encode*(encoder: var AbiEncoder, request: Slot) =
+  encoder.write(request.fieldValues)

func decode*(decoder: var AbiDecoder, T: type StorageContent): ?!T =
  let tupl = ?decoder.read(StorageContent.fieldTypes)
@@ -174,33 +160,27 @@ func decode*(decoder: var AbiDecoder, T: type Slot): ?!T =
  success Slot.fromTuple(tupl)
func id*(request: StorageRequest): RequestId =
-  let encoding = AbiEncoder.encode((request,))
+  let encoding = AbiEncoder.encode((request, ))
  RequestId(keccak256.digest(encoding).data)

-func slotId*(requestId: RequestId, slotIndex: uint64): SlotId =
+func slotId*(requestId: RequestId, slotIndex: UInt256): SlotId =
  let encoding = AbiEncoder.encode((requestId, slotIndex))
  SlotId(keccak256.digest(encoding).data)

-func slotId*(request: StorageRequest, slotIndex: uint64): SlotId =
+func slotId*(request: StorageRequest, slotIndex: UInt256): SlotId =
  slotId(request.id, slotIndex)

func id*(slot: Slot): SlotId =
  slotId(slot.request, slot.slotIndex)
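Both sides derive ids the same way, keccak-256 over the ABI encoding of the inputs; only the slot-index type differs. A minimal determinism check against the base-side uint64 variant (the request-id bytes are placeholders):

# Sketch: identical inputs always give the identical SlotId.
var raw: array[32, byte]
raw[31] = 1'u8                       # placeholder request id bytes
let requestId = RequestId(raw)
let slot0 = slotId(requestId, 0'u64) # keccak256 over the ABI-encoded pair
assert slot0 == slotId(requestId, 0'u64)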
-func pricePerSlotPerSecond*(ask: StorageAsk): UInt256 =
-  ask.pricePerBytePerSecond * ask.slotSize.u256
-
func pricePerSlot*(ask: StorageAsk): UInt256 =
-  ask.duration.u256 * ask.pricePerSlotPerSecond
+  ask.duration * ask.reward

-func totalPrice*(ask: StorageAsk): UInt256 =
+func price*(ask: StorageAsk): UInt256 =
  ask.slots.u256 * ask.pricePerSlot

-func totalPrice*(request: StorageRequest): UInt256 =
-  request.ask.totalPrice
+func price*(request: StorageRequest): UInt256 =
+  request.ask.price

-func collateralPerSlot*(ask: StorageAsk): UInt256 =
-  ask.collateralPerByte * ask.slotSize.u256
-
-func size*(ask: StorageAsk): uint64 =
-  ask.slots * ask.slotSize
+func size*(ask: StorageAsk): UInt256 =
+  ask.slots.u256 * ask.slotSize
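A worked example of the base-side pricing chain with made-up numbers (pricePerBytePerSecond = 2, slotSize = 1024, duration = 60, slots = 3):

# pricePerSlotPerSecond = 2 * 1024 = 2048
# pricePerSlot          = 60 * 2048 = 122_880
# totalPrice            = 3 * 122_880 = 368_640
let ask = StorageAsk(
  proofProbability: 1.u256, pricePerBytePerSecond: 2.u256,
  collateralPerByte: 1.u256, slots: 3, slotSize: 1024,
  duration: 60, maxSlotLoss: 1)
assert ask.pricePerSlot == 122_880.u256
assert ask.totalPrice == 368_640.u256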


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,19 +7,16 @@
## This file may not be copied, modified, or distributed except according to
## those terms.

-{.push raises: [].}

import std/algorithm
-import std/net
import std/sequtils

import pkg/chronos
import pkg/libp2p/[cid, multicodec, routing_record, signed_envelope]
import pkg/questionable
import pkg/questionable/results
+import pkg/stew/shims/net
import pkg/contractabi/address as ca
import pkg/codexdht/discv5/[routing_table, protocol as discv5]
-from pkg/nimcrypto import keccak256

import ./rng
import ./errors
@@ -34,16 +31,15 @@
logScope:
  topics = "codex discovery"

-type Discovery* = ref object of RootObj
-  protocol*: discv5.Protocol # dht protocol
-  key: PrivateKey # private key
-  peerId: PeerId # the peer id of the local node
-  announceAddrs*: seq[MultiAddress] # addresses announced as part of the provider records
-  providerRecord*: ?SignedPeerRecord
-    # record to advertise node connection information, this carries any
-    # address that the node can be connected on
-  dhtRecord*: ?SignedPeerRecord # record to advertise DHT connection information
-  isStarted: bool
+type
+  Discovery* = ref object of RootObj
+    protocol*: discv5.Protocol # dht protocol
+    key: PrivateKey # private key
+    peerId: PeerId # the peer id of the local node
+    announceAddrs*: seq[MultiAddress] # addresses announced as part of the provider records
+    providerRecord*: ?SignedPeerRecord # record to advertise node connection information, this carries any
+                                       # address that the node can be connected on
+    dhtRecord*: ?SignedPeerRecord # record to advertise DHT connection information
proc toNodeId*(cid: Cid): NodeId =
  ## Cid to discovery id

@@ -58,121 +54,82 @@
  readUintBE[256](keccak256.digest(host.toArray).data)
-proc findPeer*(
-    d: Discovery, peerId: PeerId
-): Future[?PeerRecord] {.async: (raises: [CancelledError]).} =
+proc findPeer*(
+  d: Discovery,
+  peerId: PeerId): Future[?PeerRecord] {.async.} =
  trace "protocol.resolve..."
  ## Find peer using the given Discovery object
  ##
-  try:
-    let node = await d.protocol.resolve(toNodeId(peerId))
-
-    return
-      if node.isSome():
-        node.get().record.data.some
-      else:
-        PeerRecord.none
-  except CancelledError as exc:
-    warn "Error finding peer", peerId = peerId, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error finding peer", peerId = peerId, exc = exc.msg
-    return PeerRecord.none
+  let
+    node = await d.protocol.resolve(toNodeId(peerId))
+
+  return
+    if node.isSome():
+      node.get().record.data.some
+    else:
+      PeerRecord.none
-method find*(
-    d: Discovery, cid: Cid
-): Future[seq[SignedPeerRecord]] {.async: (raises: [CancelledError]), base.} =
+method find*(
+  d: Discovery,
+  cid: Cid): Future[seq[SignedPeerRecord]] {.async, base.} =
  ## Find block providers
  ##
-  try:
-    without providers =? (await d.protocol.getProviders(cid.toNodeId())).mapFailure,
-      error:
-      warn "Error finding providers for block", cid, error = error.msg
-
-    return providers.filterIt(not (it.data.peerId == d.peerId))
-  except CancelledError as exc:
-    warn "Error finding providers for block", cid, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error finding providers for block", cid, exc = exc.msg
+  without providers =?
+    (await d.protocol.getProviders(cid.toNodeId())).mapFailure, error:
+    warn "Error finding providers for block", cid, error = error.msg
+
+  return providers.filterIt( not (it.data.peerId == d.peerId) )
-method provide*(d: Discovery, cid: Cid) {.async: (raises: [CancelledError]), base.} =
+method provide*(d: Discovery, cid: Cid) {.async, base.} =
  ## Provide a block Cid
  ##
-  try:
-    let nodes = await d.protocol.addProvider(cid.toNodeId(), d.providerRecord.get)
-
-    if nodes.len <= 0:
-      warn "Couldn't provide to any nodes!"
-  except CancelledError as exc:
-    warn "Error providing block", cid, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error providing block", cid, exc = exc.msg
+  let
+    nodes = await d.protocol.addProvider(
+      cid.toNodeId(), d.providerRecord.get)
+
+  if nodes.len <= 0:
+    warn "Couldn't provide to any nodes!"
-method find*(
-    d: Discovery, host: ca.Address
-): Future[seq[SignedPeerRecord]] {.async: (raises: [CancelledError]), base.} =
+method find*(
+  d: Discovery,
+  host: ca.Address): Future[seq[SignedPeerRecord]] {.async, base.} =
  ## Find host providers
  ##
-  try:
-    trace "Finding providers for host", host = $host
-    without var providers =? (await d.protocol.getProviders(host.toNodeId())).mapFailure,
-      error:
-      trace "Error finding providers for host", host = $host, exc = error.msg
-      return
-
-    if providers.len <= 0:
-      trace "No providers found", host = $host
-      return
-
-    providers.sort do(a, b: SignedPeerRecord) -> int:
-      system.cmp[uint64](a.data.seqNo, b.data.seqNo)
-
-    return providers
-  except CancelledError as exc:
-    warn "Error finding providers for host", host = $host, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error finding providers for host", host = $host, exc = exc.msg
+  trace "Finding providers for host", host = $host
+  without var providers =?
+    (await d.protocol.getProviders(host.toNodeId())).mapFailure, error:
+    trace "Error finding providers for host", host = $host, exc = error.msg
+    return
+
+  if providers.len <= 0:
+    trace "No providers found", host = $host
+    return
+
+  providers.sort do(a, b: SignedPeerRecord) -> int:
+    system.cmp[uint64](a.data.seqNo, b.data.seqNo)
+
+  return providers
-method provide*(
-    d: Discovery, host: ca.Address
-) {.async: (raises: [CancelledError]), base.} =
+method provide*(d: Discovery, host: ca.Address) {.async, base.} =
  ## Provide hosts
  ##
-  try:
-    trace "Providing host", host = $host
-    let nodes = await d.protocol.addProvider(host.toNodeId(), d.providerRecord.get)
-    if nodes.len > 0:
-      trace "Provided to nodes", nodes = nodes.len
-  except CancelledError as exc:
-    warn "Error providing host", host = $host, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error providing host", host = $host, exc = exc.msg
+  trace "Providing host", host = $host
+  let
+    nodes = await d.protocol.addProvider(
+      host.toNodeId(), d.providerRecord.get)
+  if nodes.len > 0:
+    trace "Provided to nodes", nodes = nodes.len
-method removeProvider*(
-    d: Discovery, peerId: PeerId
-): Future[void] {.base, async: (raises: [CancelledError]).} =
+method removeProvider*(
+  d: Discovery,
+  peerId: PeerId): Future[void] {.base.} =
  ## Remove provider from providers table
  ##
  trace "Removing provider", peerId
-  try:
-    await d.protocol.removeProvidersLocal(peerId)
-  except CancelledError as exc:
-    warn "Error removing provider", peerId = peerId, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error removing provider", peerId = peerId, exc = exc.msg
-  except Exception as exc: # Something in discv5 is raising Exception
-    warn "Error removing provider", peerId = peerId, exc = exc.msg
-    raiseAssert("Unexpected Exception in removeProvider")
+  d.protocol.removeProvidersLocal(peerId)
proc updateAnnounceRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
  ## Update providers record
  ##
@@ -180,58 +137,54 @@ proc updateAnnounceRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
  d.announceAddrs = @addrs

-  info "Updating announce record", addrs = d.announceAddrs
-  d.providerRecord = SignedPeerRecord
-    .init(d.key, PeerRecord.init(d.peerId, d.announceAddrs))
-    .expect("Should construct signed record").some
-  if not d.protocol.isNil:
-    d.protocol.updateRecord(d.providerRecord).expect("Should update SPR")
+  trace "Updating announce record", addrs = d.announceAddrs
+  d.providerRecord = SignedPeerRecord.init(
+    d.key, PeerRecord.init(d.peerId, d.announceAddrs))
+    .expect("Should construct signed record").some
+  if not d.protocol.isNil:
+    d.protocol.updateRecord(d.providerRecord)
+      .expect("Should update SPR")

-proc updateDhtRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
+proc updateDhtRecord*(d: Discovery, ip: ValidIpAddress, port: Port) =
  ## Update providers record
  ##
-  info "Updating Dht record", addrs = addrs
-  d.dhtRecord = SignedPeerRecord
-    .init(d.key, PeerRecord.init(d.peerId, @addrs))
-    .expect("Should construct signed record").some
+  trace "Updating Dht record", ip, port = $port
+  d.dhtRecord = SignedPeerRecord.init(
+    d.key, PeerRecord.init(d.peerId, @[
+      MultiAddress.init(
+        ip,
+        IpTransportProtocol.udpProtocol,
+        port)])).expect("Should construct signed record").some
  if not d.protocol.isNil:
-    d.protocol.updateRecord(d.dhtRecord).expect("Should update SPR")
+    d.protocol.updateRecord(d.dhtRecord)
+      .expect("Should update SPR")

-proc start*(d: Discovery) {.async: (raises: []).} =
-  try:
-    d.protocol.open()
-    await d.protocol.start()
-    d.isStarted = true
-  except CatchableError as exc:
-    error "Error starting discovery", exc = exc.msg
+proc start*(d: Discovery) {.async.} =
+  d.protocol.open()
+  await d.protocol.start()

-proc stop*(d: Discovery) {.async: (raises: []).} =
-  if not d.isStarted:
-    warn "Discovery not started, skipping stop"
-    return
-  try:
-    await noCancel d.protocol.closeWait()
-  except CatchableError as exc:
-    error "Error stopping discovery", exc = exc.msg
+proc stop*(d: Discovery) {.async.} =
+  await d.protocol.closeWait()

proc new*(
-    T: type Discovery,
-    key: PrivateKey,
-    bindIp = IPv4_any(),
-    bindPort = 0.Port,
-    announceAddrs: openArray[MultiAddress],
-    bootstrapNodes: openArray[SignedPeerRecord] = [],
-    store: Datastore = SQLiteDatastore.new(Memory).expect("Should not fail!"),
+  T: type Discovery,
+  key: PrivateKey,
+  bindIp = ValidIpAddress.init(IPv4_any()),
+  bindPort = 0.Port,
+  announceAddrs: openArray[MultiAddress],
+  bootstrapNodes: openArray[SignedPeerRecord] = [],
+  store: Datastore = SQLiteDatastore.new(Memory).expect("Should not fail!")
): Discovery =
  ## Create a new Discovery node instance for the given key and datastore
  ##
-  var self =
-    Discovery(key: key, peerId: PeerId.init(key).expect("Should construct PeerId"))
+  var
+    self = Discovery(
+      key: key,
+      peerId: PeerId.init(key).expect("Should construct PeerId"))

  self.updateAnnounceRecord(announceAddrs)

@@ -239,20 +192,22 @@ proc new*(
  # FIXME disable IP limits temporarily so we can run our workshop. Re-enable
  # and figure out proper solution.
  let discoveryConfig = DiscoveryConfig(
-    tableIpLimits: TableIpLimits(tableIpLimit: high(uint), bucketIpLimit: high(uint)),
-    bitsPerHop: DefaultBitsPerHop,
+    tableIpLimits: TableIpLimits(
+      tableIpLimit: high(uint),
+      bucketIpLimit:high(uint)
+    ),
+    bitsPerHop: DefaultBitsPerHop
  )
  # --------------------------------------------------------------------------

  self.protocol = newProtocol(
    key,
-    bindIp = bindIp,
+    bindIp = bindIp.toNormalIp,
    bindPort = bindPort,
    record = self.providerRecord.get,
    bootstrapRecords = bootstrapNodes,
    rng = Rng.instance(),
    providers = ProvidersManager.new(store),
-    config = discoveryConfig,
-  )
+    config = discoveryConfig)

  self
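A sketch of bringing a node up against the base-side constructor; the key and announce addresses are assumed to come from the caller:

# Hypothetical bring-up; not part of the diff.
proc startDiscovery(
    key: PrivateKey, announce: seq[MultiAddress]
): Future[Discovery] {.async.} =
  let disco = Discovery.new(
    key,
    bindPort = 8090.Port,
    announceAddrs = announce)
  await disco.start()
  return disco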


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))


@@ -0,0 +1,225 @@
## Nim-Codex
## Copyright (c) 2024 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import std/sequtils
import pkg/taskpools
import pkg/taskpools/flowvars
import pkg/chronos
import pkg/chronos/threadsync
import pkg/questionable/results
import ./backend
import ../errors
import ../logutils
logScope:
  topics = "codex asyncerasure"

const
  CompletitionTimeout = 1.seconds # Maximum await time for completition after receiving a signal
  CompletitionRetryDelay = 10.millis

type
  EncoderBackendPtr = ptr EncoderBackend
  DecoderBackendPtr = ptr DecoderBackend

  # Args objects are missing seq[seq[byte]] field, to avoid unnecessary data copy
  EncodeTaskArgs = object
    signal: ThreadSignalPtr
    backend: EncoderBackendPtr
    blockSize: int
    ecM: int

  DecodeTaskArgs = object
    signal: ThreadSignalPtr
    backend: DecoderBackendPtr
    blockSize: int
    ecK: int

  SharedArrayHolder*[T] = object
    data: ptr UncheckedArray[T]
    size: int

  EncodeTaskResult = Result[SharedArrayHolder[byte], cstring]
  DecodeTaskResult = Result[SharedArrayHolder[byte], cstring]

proc encodeTask(args: EncodeTaskArgs, data: seq[seq[byte]]): EncodeTaskResult =
  var
    data = data.unsafeAddr
    parity = newSeqWith[seq[byte]](args.ecM, newSeq[byte](args.blockSize))

  try:
    let res = args.backend[].encode(data[], parity)

    if res.isOk:
      let
        resDataSize = parity.len * args.blockSize
        resData = cast[ptr UncheckedArray[byte]](allocShared0(resDataSize))
        arrHolder = SharedArrayHolder[byte](
          data: resData,
          size: resDataSize
        )

      for i in 0..<parity.len:
        copyMem(addr resData[i * args.blockSize], addr parity[i][0], args.blockSize)

      return ok(arrHolder)
    else:
      return err(res.error)
  except CatchableError as exception:
    return err(exception.msg.cstring)
  finally:
    if err =? args.signal.fireSync().mapFailure.errorOption():
      error "Error firing signal", msg = err.msg

proc decodeTask(args: DecodeTaskArgs, data: seq[seq[byte]], parity: seq[seq[byte]]): DecodeTaskResult =
  var
    data = data.unsafeAddr
    parity = parity.unsafeAddr
    recovered = newSeqWith[seq[byte]](args.ecK, newSeq[byte](args.blockSize))

  try:
    let res = args.backend[].decode(data[], parity[], recovered)

    if res.isOk:
      let
        resDataSize = recovered.len * args.blockSize
        resData = cast[ptr UncheckedArray[byte]](allocShared0(resDataSize))
        arrHolder = SharedArrayHolder[byte](
          data: resData,
          size: resDataSize
        )

      for i in 0..<recovered.len:
        copyMem(addr resData[i * args.blockSize], addr recovered[i][0], args.blockSize)

      return ok(arrHolder)
    else:
      return err(res.error)
  except CatchableError as exception:
    return err(exception.msg.cstring)
  finally:
    if err =? args.signal.fireSync().mapFailure.errorOption():
      error "Error firing signal", msg = err.msg

proc proxySpawnEncodeTask(
  tp: Taskpool,
  args: EncodeTaskArgs,
  data: ref seq[seq[byte]]
): Flowvar[EncodeTaskResult] =
  # FIXME Uncomment the code below after addressing an issue:
  # https://github.com/codex-storage/nim-codex/issues/854
  # tp.spawn encodeTask(args, data[])

  let fv = EncodeTaskResult.newFlowVar
  fv.readyWith(encodeTask(args, data[]))
  return fv

proc proxySpawnDecodeTask(
  tp: Taskpool,
  args: DecodeTaskArgs,
  data: ref seq[seq[byte]],
  parity: ref seq[seq[byte]]
): Flowvar[DecodeTaskResult] =
  # FIXME Uncomment the code below after addressing an issue:
  # https://github.com/codex-storage/nim-codex/issues/854
  # tp.spawn decodeTask(args, data[], parity[])

  let fv = DecodeTaskResult.newFlowVar
  fv.readyWith(decodeTask(args, data[], parity[]))
  return fv

proc awaitResult[T](signal: ThreadSignalPtr, handle: Flowvar[T]): Future[?!T] {.async.} =
  await wait(signal)

  var
    res: T
    awaitTotal: Duration
  while awaitTotal < CompletitionTimeout:
    if handle.tryComplete(res):
      return success(res)
    else:
      awaitTotal += CompletitionRetryDelay
      await sleepAsync(CompletitionRetryDelay)

  return failure("Task signaled finish but didn't return any result within " & $CompletitionRetryDelay)

proc asyncEncode*(
  tp: Taskpool,
  backend: EncoderBackend,
  data: ref seq[seq[byte]],
  blockSize: int,
  ecM: int
): Future[?!ref seq[seq[byte]]] {.async.} =
  without signal =? ThreadSignalPtr.new().mapFailure, err:
    return failure(err)

  try:
    let
      blockSize = data[0].len
      args = EncodeTaskArgs(signal: signal, backend: unsafeAddr backend, blockSize: blockSize, ecM: ecM)
      handle = proxySpawnEncodeTask(tp, args, data)

    without res =? await awaitResult(signal, handle), err:
      return failure(err)

    if res.isOk:
      var parity = seq[seq[byte]].new()
      parity[].setLen(ecM)

      for i in 0..<parity[].len:
        parity[i] = newSeq[byte](blockSize)
        copyMem(addr parity[i][0], addr res.value.data[i * blockSize], blockSize)

      deallocShared(res.value.data)

      return success(parity)
    else:
      return failure($res.error)
  finally:
    if err =? signal.close().mapFailure.errorOption():
      error "Error closing signal", msg = $err.msg

proc asyncDecode*(
  tp: Taskpool,
  backend: DecoderBackend,
  data, parity: ref seq[seq[byte]],
  blockSize: int
): Future[?!ref seq[seq[byte]]] {.async.} =
  without signal =? ThreadSignalPtr.new().mapFailure, err:
    return failure(err)

  try:
    let
      ecK = data[].len
      args = DecodeTaskArgs(signal: signal, backend: unsafeAddr backend, blockSize: blockSize, ecK: ecK)
      handle = proxySpawnDecodeTask(tp, args, data, parity)

    without res =? await awaitResult(signal, handle), err:
      return failure(err)

    if res.isOk:
      var recovered = seq[seq[byte]].new()
      recovered[].setLen(ecK)

      for i in 0..<recovered[].len:
        recovered[i] = newSeq[byte](blockSize)
        copyMem(addr recovered[i][0], addr res.value.data[i * blockSize], blockSize)

      deallocShared(res.value.data)

      return success(recovered)
    else:
      return failure($res.error)
  finally:
    if err =? signal.close().mapFailure.errorOption():
      error "Error closing signal", msg = $err.msg


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,38 +7,41 @@
## This file may not be copied, modified, or distributed except according to
## those terms.

-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

import ../stores

type
  ErasureBackend* = ref object of RootObj
    blockSize*: int # block size in bytes
    buffers*: int # number of original pieces
    parity*: int # number of redundancy pieces

  EncoderBackend* = ref object of ErasureBackend
  DecoderBackend* = ref object of ErasureBackend

-method release*(self: ErasureBackend) {.base, gcsafe.} =
+method release*(self: ErasureBackend) {.base.} =
  ## release the backend
  ##
  raiseAssert("not implemented!")

method encode*(
  self: EncoderBackend,
-  buffers, parity: ptr UncheckedArray[ptr UncheckedArray[byte]],
-  dataLen, parityLen: int,
-): Result[void, cstring] {.base, gcsafe.} =
+  buffers,
+  parity: var openArray[seq[byte]]
+): Result[void, cstring] {.base.} =
  ## encode buffers using a backend
  ##
  raiseAssert("not implemented!")

method decode*(
  self: DecoderBackend,
-  buffers, parity, recovered: ptr UncheckedArray[ptr UncheckedArray[byte]],
-  dataLen, parityLen, recoveredLen: int,
-): Result[void, cstring] {.base, gcsafe.} =
+  buffers,
+  parity,
+  recovered: var openArray[seq[byte]]
+): Result[void, cstring] {.base.} =
  ## decode buffers using a backend
  ##
  raiseAssert("not implemented!")


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -10,7 +10,7 @@
import std/options

import pkg/leopard
-import pkg/results
+import pkg/stew/results

import ../backend
@@ -22,39 +22,43 @@
    decoder*: Option[LeoDecoder]

method encode*(
-    self: LeoEncoderBackend,
-    data, parity: ptr UncheckedArray[ptr UncheckedArray[byte]],
-    dataLen, parityLen: int,
-): Result[void, cstring] =
+  self: LeoEncoderBackend,
+  data,
+  parity: var openArray[seq[byte]]): Result[void, cstring] =
  ## Encode data using Leopard backend
-  if parityLen == 0:
+  if parity.len == 0:
    return ok()

-  var encoder =
-    if self.encoder.isNone:
-      self.encoder = (?LeoEncoder.init(self.blockSize, self.buffers, self.parity)).some
-      self.encoder.get()
-    else:
-      self.encoder.get()
+  var encoder = if self.encoder.isNone:
+    self.encoder = (? LeoEncoder.init(
+      self.blockSize,
+      self.buffers,
+      self.parity)).some
+    self.encoder.get()
+  else:
+    self.encoder.get()

-  encoder.encode(data, parity, dataLen, parityLen)
+  encoder.encode(data, parity)

method decode*(
-    self: LeoDecoderBackend,
-    data, parity, recovered: ptr UncheckedArray[ptr UncheckedArray[byte]],
-    dataLen, parityLen, recoveredLen: int,
-): Result[void, cstring] =
+  self: LeoDecoderBackend,
+  data,
+  parity,
+  recovered: var openArray[seq[byte]]): Result[void, cstring] =
  ## Decode data using given Leopard backend
  var decoder =
    if self.decoder.isNone:
-      self.decoder = (?LeoDecoder.init(self.blockSize, self.buffers, self.parity)).some
+      self.decoder = (? LeoDecoder.init(
+        self.blockSize,
+        self.buffers,
+        self.parity)).some
      self.decoder.get()
    else:
      self.decoder.get()

-  decoder.decode(data, parity, recovered, dataLen, parityLen, recoveredLen)
+  decoder.decode(data, parity, recovered)
method release*(self: LeoEncoderBackend) = method release*(self: LeoEncoderBackend) =
if self.encoder.isSome: if self.encoder.isSome:
@ -65,15 +69,25 @@ method release*(self: LeoDecoderBackend) =
self.decoder.get().free() self.decoder.get().free()
proc new*( proc new*(
T: type LeoEncoderBackend, blockSize, buffers, parity: int T: type LeoEncoderBackend,
): LeoEncoderBackend = blockSize,
buffers,
parity: int): LeoEncoderBackend =
## Create an instance of an Leopard Encoder backend ## Create an instance of an Leopard Encoder backend
## ##
LeoEncoderBackend(blockSize: blockSize, buffers: buffers, parity: parity) LeoEncoderBackend(
blockSize: blockSize,
buffers: buffers,
parity: parity)
proc new*( proc new*(
T: type LeoDecoderBackend, blockSize, buffers, parity: int T: type LeoDecoderBackend,
): LeoDecoderBackend = blockSize,
buffers,
parity: int): LeoDecoderBackend =
## Create an instance of an Leopard Decoder backend ## Create an instance of an Leopard Decoder backend
## ##
LeoDecoderBackend(blockSize: blockSize, buffers: buffers, parity: parity) LeoDecoderBackend(
blockSize: blockSize,
buffers: buffers,
parity: parity)
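
As a usage note, and only a sketch (the import path and the numbers are assumed, not taken from this diff): a backend is tied to one (blockSize, K, M) geometry, creates its underlying Leopard codec lazily on the first encode, and must be released explicitly:

import pkg/codex/erasure/backends/leopard # import path assumed

proc encodeOnce() =
  # 4 data blocks (K) and 2 parity blocks (M) of 64 KiB each
  let backend = LeoEncoderBackend.new(65536, 4, 2)
  defer:
    backend.release() # frees the lazily-created LeoEncoder, if any
  # ... hand `backend` and pre-allocated buffers to encode() here ...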


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,13 +7,14 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [], gcsafe.}
-
-import std/[sugar, atomics, sequtils]
+import pkg/upraises
+
+push: {.upraises: [].}
+
+import std/sequtils
+import std/sugar

 import pkg/chronos
-import pkg/chronos/threadsync
-import pkg/chronicles
 import pkg/libp2p/[multicodec, cid, multihash]
 import pkg/libp2p/protobuf/minprotobuf
 import pkg/taskpools
@@ -22,17 +23,16 @@ import ../logutils
 import ../manifest
 import ../merkletree
 import ../stores
-import ../clock
 import ../blocktype as bt
 import ../utils
 import ../utils/asynciter
 import ../indexingstrategy
 import ../errors
-import ../utils/arrayutils
 import pkg/stew/byteutils

 import ./backend
+import ./asyncbackend

 export backend
@@ -62,17 +62,18 @@ type
     ## columns (with up to M blocks missing per column),
     ## or any combination there of.
     ##

-  EncoderProvider* =
-    proc(size, blocks, parity: int): EncoderBackend {.raises: [Defect], noSideEffect.}
-  DecoderProvider* =
-    proc(size, blocks, parity: int): DecoderBackend {.raises: [Defect], noSideEffect.}
+  EncoderProvider* = proc(size, blocks, parity: int): EncoderBackend
+    {.raises: [Defect], noSideEffect.}
+  DecoderProvider* = proc(size, blocks, parity: int): DecoderBackend
+    {.raises: [Defect], noSideEffect.}

   Erasure* = ref object
-    taskPool: Taskpool
     encoderProvider*: EncoderProvider
     decoderProvider*: DecoderProvider
     store*: BlockStore
+    taskpool: Taskpool

   EncodingParams = object
     ecK: Natural
@@ -89,24 +90,6 @@ type
     # provided.
     minSize*: NBytes

-  EncodeTask = object
-    success: Atomic[bool]
-    erasure: ptr Erasure
-    blocks: ptr UncheckedArray[ptr UncheckedArray[byte]]
-    parity: ptr UncheckedArray[ptr UncheckedArray[byte]]
-    blockSize, blocksLen, parityLen: int
-    signal: ThreadSignalPtr
-
-  DecodeTask = object
-    success: Atomic[bool]
-    erasure: ptr Erasure
-    blocks: ptr UncheckedArray[ptr UncheckedArray[byte]]
-    parity: ptr UncheckedArray[ptr UncheckedArray[byte]]
-    recovered: ptr UncheckedArray[ptr UncheckedArray[byte]]
-    blockSize, blocksLen: int
-    parityLen, recoveredLen: int
-    signal: ThreadSignalPtr

 func indexToPos(steps, idx, step: int): int {.inline.} =
   ## Convert an index to a position in the encoded
   ## dataset
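
The task objects above follow a standard chronos-plus-taskpools pattern: a plain-data task is handed to a worker thread, and completion is signalled back to the event loop through a ThreadSignalPtr. A self-contained sketch of that pattern (SquareTask and asyncSquare are illustrative names, not from this codebase):

import pkg/chronos
import pkg/chronos/threadsync
import pkg/taskpools

type SquareTask = object
  input, output: int
  signal: ThreadSignalPtr

proc squareTask(task: ptr SquareTask) {.gcsafe.} =
  # runs on a taskpool worker thread; touches no GC-managed memory
  task[].output = task[].input * task[].input
  discard task[].signal.fireSync()

proc asyncSquare(tp: Taskpool, x: int): Future[int] {.async.} =
  let signal = ThreadSignalPtr.new().expect("signal created")
  defer:
    signal.close().expect("closing once works")
  var task = SquareTask(input: x, signal: signal)
  tp.spawn squareTask(addr task)
  await signal.wait() # suspend until the worker fires the signal
  return task.output

when isMainModule:
  let tp = Taskpool.new(numThreads = 2) # more than 1, or the signal may never fire
  echo waitFor asyncSquare(tp, 12)      # 144
  tp.shutdown()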
@@ -118,25 +101,21 @@ func indexToPos(steps, idx, step: int): int {.inline.} =
   (idx - step) div steps

 proc getPendingBlocks(
-    self: Erasure, manifest: Manifest, indices: seq[int]
-): AsyncIter[(?!bt.Block, int)] =
+    self: Erasure,
+    manifest: Manifest,
+    indicies: seq[int]): AsyncIter[(?!bt.Block, int)] =
   ## Get pending blocks iterator
   ##

-  var pendingBlocks: seq[Future[(?!bt.Block, int)]] = @[]
-
-  proc attachIndex(
-      fut: Future[?!bt.Block], i: int
-  ): Future[(?!bt.Block, int)] {.async.} =
-    ## avoids closure capture issues
-    return (await fut, i)
-
-  for blockIndex in indices:
-    # request blocks from the store
-    let fut = self.store.getBlock(BlockAddress.init(manifest.treeCid, blockIndex))
-    pendingBlocks.add(attachIndex(fut, blockIndex))
+  var
+    # request blocks from the store
+    pendingBlocks = indicies.map( (i: int) =>
+      self.store.getBlock(
+        BlockAddress.init(manifest.treeCid, i)
+      ).map((r: ?!bt.Block) => (r, i)) # Get the data blocks (first K)
+    )

-  proc isFinished(): bool =
-    pendingBlocks.len == 0
+  proc isFinished(): bool = pendingBlocks.len == 0

   proc genNext(): Future[(?!bt.Block, int)] {.async.} =
     let completedFut = await one(pendingBlocks)
@@ -147,38 +126,36 @@ proc getPendingBlocks(
     let (_, index) = await completedFut
     raise newException(
       CatchableError,
-      "Future for block id not found, tree cid: " & $manifest.treeCid & ", index: " &
-        $index,
-    )
+      "Future for block id not found, tree cid: " & $manifest.treeCid & ", index: " & $index)

   AsyncIter[(?!bt.Block, int)].new(genNext, isFinished)

 proc prepareEncodingData(
     self: Erasure,
     manifest: Manifest,
     params: EncodingParams,
     step: Natural,
     data: ref seq[seq[byte]],
     cids: ref seq[Cid],
-    emptyBlock: seq[byte],
-): Future[?!Natural] {.async.} =
+    emptyBlock: seq[byte]): Future[?!Natural] {.async.} =
   ## Prepare data for encoding
   ##

   let
     strategy = params.strategy.init(
-      firstIndex = 0, lastIndex = params.rounded - 1, iterations = params.steps
+      firstIndex = 0,
+      lastIndex = params.rounded - 1,
+      iterations = params.steps
     )
-    indices = toSeq(strategy.getIndices(step))
-    pendingBlocksIter =
-      self.getPendingBlocks(manifest, indices.filterIt(it < manifest.blocksCount))
+    indicies = toSeq(strategy.getIndicies(step))
+    pendingBlocksIter = self.getPendingBlocks(manifest, indicies.filterIt(it < manifest.blocksCount))

   var resolved = 0
   for fut in pendingBlocksIter:
     let (blkOrErr, idx) = await fut
     without blk =? blkOrErr, err:
-      warn "Failed retrieving a block", treeCid = manifest.treeCid, idx, msg = err.msg
-      return failure(err)
+      warn "Failed retreiving a block", treeCid = manifest.treeCid, idx, msg = err.msg
+      continue

     let pos = indexToPos(params.steps, idx, step)
     shallowCopy(data[pos], if blk.isEmpty: emptyBlock else: blk.data)
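
Both versions of getPendingBlocks drain a seq of in-flight futures through chronos' one(), which completes with whichever future finishes first. A standalone sketch of that draining pattern (toy delays, not from the codebase):

import pkg/chronos

proc delayed(ms: int): Future[int] {.async.} =
  await sleepAsync(ms.milliseconds)
  return ms

proc drain() {.async.} =
  var pending = @[delayed(30), delayed(10), delayed(20)]
  while pending.len > 0:
    let fut = await one(pending)  # first future to finish
    pending.del(pending.find(fut))
    echo "completed: ", await fut # prints 10, then 20, then 30

waitFor drain()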
@@ -186,26 +163,24 @@ proc prepareEncodingData(
   resolved.inc()

-  for idx in indices.filterIt(it >= manifest.blocksCount):
+  for idx in indicies.filterIt(it >= manifest.blocksCount):
     let pos = indexToPos(params.steps, idx, step)
     trace "Padding with empty block", idx
     shallowCopy(data[pos], emptyBlock)
-    without emptyBlockCid =? emptyCid(manifest.version, manifest.hcodec, manifest.codec),
-      err:
+    without emptyBlockCid =? emptyCid(manifest.version, manifest.hcodec, manifest.codec), err:
       return failure(err)
     cids[idx] = emptyBlockCid

   success(resolved.Natural)

 proc prepareDecodingData(
     self: Erasure,
     encoded: Manifest,
     step: Natural,
     data: ref seq[seq[byte]],
     parityData: ref seq[seq[byte]],
     cids: ref seq[Cid],
-    emptyBlock: seq[byte],
-): Future[?!(Natural, Natural)] {.async.} =
+    emptyBlock: seq[byte]): Future[?!(Natural, Natural)] {.async.} =
   ## Prepare data for decoding
   ## `encoded` - the encoded manifest
   ## `step` - the current step
@@ -217,10 +192,12 @@ proc prepareDecodingData(
   let
     strategy = encoded.protectedStrategy.init(
-      firstIndex = 0, lastIndex = encoded.blocksCount - 1, iterations = encoded.steps
+      firstIndex = 0,
+      lastIndex = encoded.blocksCount - 1,
+      iterations = encoded.steps
     )
-    indices = toSeq(strategy.getIndices(step))
-    pendingBlocksIter = self.getPendingBlocks(encoded, indices)
+    indicies = toSeq(strategy.getIndicies(step))
+    pendingBlocksIter = self.getPendingBlocks(encoded, indicies)

   var
     dataPieces = 0
@@ -234,24 +211,23 @@ proc prepareDecodingData(
     let (blkOrErr, idx) = await fut

     without blk =? blkOrErr, err:
-      trace "Failed retrieving a block", idx, treeCid = encoded.treeCid, msg = err.msg
+      trace "Failed retreiving a block", idx, treeCid = encoded.treeCid, msg = err.msg
       continue

-    let pos = indexToPos(encoded.steps, idx, step)
+    let
+      pos = indexToPos(encoded.steps, idx, step)

     logScope:
       cid = blk.cid
       idx = idx
       pos = pos
       step = step
       empty = blk.isEmpty

     cids[idx] = blk.cid

     if idx >= encoded.rounded:
       trace "Retrieved parity block"
-      shallowCopy(
-        parityData[pos - encoded.ecK], if blk.isEmpty: emptyBlock else: blk.data
-      )
+      shallowCopy(parityData[pos - encoded.ecK], if blk.isEmpty: emptyBlock else: blk.data)
       parityPieces.inc
     else:
       trace "Retrieved data block"
@@ -263,19 +239,17 @@ proc prepareDecodingData(
   return success (dataPieces.Natural, parityPieces.Natural)

 proc init*(
     _: type EncodingParams,
     manifest: Manifest,
-    ecK: Natural,
-    ecM: Natural,
-    strategy: StrategyType,
-): ?!EncodingParams =
+    ecK: Natural, ecM: Natural,
+    strategy: StrategyType): ?!EncodingParams =
   if ecK > manifest.blocksCount:
     let exc = (ref InsufficientBlocksError)(
-      msg:
-        "Unable to encode manifest, not enough blocks, ecK = " & $ecK &
-        ", blocksCount = " & $manifest.blocksCount,
-      minSize: ecK.NBytes * manifest.blockSize,
-    )
+      msg: "Unable to encode manifest, not enough blocks, ecK = " &
+        $ecK &
+        ", blocksCount = " &
+        $manifest.blocksCount,
+      minSize: ecK.NBytes * manifest.blockSize)
     return failure(exc)

   let
@@ -289,139 +263,62 @@ proc init*(
     rounded: rounded,
     steps: steps,
     blocksCount: blocksCount,
-    strategy: strategy,
+    strategy: strategy
   )

-proc leopardEncodeTask(tp: Taskpool, task: ptr EncodeTask) {.gcsafe.} =
-  # Task suitable for running in taskpools - look, no GC!
-  let encoder =
-    task[].erasure.encoderProvider(task[].blockSize, task[].blocksLen, task[].parityLen)
-  defer:
-    encoder.release()
-    discard task[].signal.fireSync()
-
-  if (
-    let res =
-      encoder.encode(task[].blocks, task[].parity, task[].blocksLen, task[].parityLen)
-    res.isErr
-  ):
-    warn "Error from leopard encoder backend!", error = $res.error
-    task[].success.store(false)
-  else:
-    task[].success.store(true)
-
-proc asyncEncode*(
-    self: Erasure,
-    blockSize, blocksLen, parityLen: int,
-    blocks: ref seq[seq[byte]],
-    parity: ptr UncheckedArray[ptr UncheckedArray[byte]],
-): Future[?!void] {.async: (raises: [CancelledError]).} =
-  without threadPtr =? ThreadSignalPtr.new():
-    return failure("Unable to create thread signal")
-
-  defer:
-    threadPtr.close().expect("closing once works")
-
-  var data = makeUncheckedArray(blocks)
-
-  defer:
-    dealloc(data)
-
-  ## Create an ecode task with block data
-  var task = EncodeTask(
-    erasure: addr self,
-    blockSize: blockSize,
-    blocksLen: blocksLen,
-    parityLen: parityLen,
-    blocks: data,
-    parity: parity,
-    signal: threadPtr,
-  )
-
-  doAssert self.taskPool.numThreads > 1,
-    "Must have at least one separate thread or signal will never be fired"
-
-  self.taskPool.spawn leopardEncodeTask(self.taskPool, addr task)
-  let threadFut = threadPtr.wait()
-
-  if joinErr =? catch(await threadFut.join()).errorOption:
-    if err =? catch(await noCancel threadFut).errorOption:
-      return failure(err)
-    if joinErr of CancelledError:
-      raise (ref CancelledError) joinErr
-    else:
-      return failure(joinErr)
-
-  if not task.success.load():
-    return failure("Leopard encoding task failed")
-
-  success()
-
 proc encodeData(
-    self: Erasure, manifest: Manifest, params: EncodingParams
-): Future[?!Manifest] {.async.} =
+    self: Erasure,
+    manifest: Manifest,
+    params: EncodingParams
+  ): Future[?!Manifest] {.async.} =
   ## Encode blocks pointed to by the protected manifest
   ##
   ## `manifest` - the manifest to encode
   ##

   logScope:
     steps = params.steps
     rounded_blocks = params.rounded
     blocks_count = params.blocksCount
     ecK = params.ecK
     ecM = params.ecM

   var
     cids = seq[Cid].new()
+    encoder = self.encoderProvider(manifest.blockSize.int, params.ecK, params.ecM)
     emptyBlock = newSeq[byte](manifest.blockSize.int)

   cids[].setLen(params.blocksCount)

   try:
-    for step in 0 ..< params.steps:
+    for step in 0..<params.steps:
       # TODO: Don't allocate a new seq every time, allocate once and zero out
       var
         data = seq[seq[byte]].new() # number of blocks to encode
-        parity = createDoubleArray(params.ecM, manifest.blockSize.int)
-
-      defer:
-        freeDoubleArray(parity, params.ecM)

       data[].setLen(params.ecK)
+      # TODO: this is a tight blocking loop so we sleep here to allow
+      # other events to be processed, this should be addressed
+      # by threading
+      await sleepAsync(10.millis)

       without resolved =?
-        (await self.prepareEncodingData(manifest, params, step, data, cids, emptyBlock)),
-        err:
+        (await self.prepareEncodingData(manifest, params, step, data, cids, emptyBlock)), err:
         trace "Unable to prepare data", error = err.msg
         return failure(err)

-      trace "Erasure coding data", data = data[].len, parity = params.ecM
-
-      try:
-        if err =? (
-          await self.asyncEncode(
-            manifest.blockSize.int, params.ecK, params.ecM, data, parity
-          )
-        ).errorOption:
-          return failure(err)
-      except CancelledError as exc:
-        raise exc
+      trace "Erasure coding data", data = data[].len
+      without parity =? await asyncEncode(self.taskpool, encoder, data, manifest.blockSize.int, params.ecM), err:
+        trace "Error encoding data", err = err.msg
+        return failure(err)

       var idx = params.rounded + step
-      for j in 0 ..< params.ecM:
-        var innerPtr: ptr UncheckedArray[byte] = parity[][j]
-        without blk =? bt.Block.new(innerPtr.toOpenArray(0, manifest.blockSize.int - 1)),
-          error:
+      for j in 0..<params.ecM:
+        without blk =? bt.Block.new(parity[j]), error:
           trace "Unable to create parity block", err = error.msg
           return failure(error)

         trace "Adding parity block", cid = blk.cid, idx
         cids[idx] = blk.cid
-        if error =? (await self.store.putBlock(blk)).errorOption:
-          warn "Unable to store block!", cid = blk.cid, msg = error.msg
+        if isErr (await self.store.putBlock(blk)):
+          trace "Unable to store block!", cid = blk.cid
           return failure("Unable to store block!")
         idx.inc(params.steps)
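
To make the index arithmetic concrete: parity blocks are appended after the rounded data region and interleaved by step. A worked toy example (the rounding formulas are inferred from context, since EncodingParams.init's body is elided above):

let
  ecK = 2          # data blocks per step
  ecM = 1          # parity blocks per step
  blocksCount = 4  # original dataset blocks
  rounded = ((blocksCount + ecK - 1) div ecK) * ecK # 4: data padded up to a multiple of K
  steps = rounded div ecK                           # 2 encoding steps

for step in 0 ..< steps:
  var idx = rounded + step
  for j in 0 ..< ecM:
    echo "step ", step, ": parity block ", j, " -> index ", idx
    idx.inc(steps)
# step 0: parity block 0 -> index 4
# step 1: parity block 0 -> index 5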
@@ -440,7 +337,7 @@ proc encodeData(
       datasetSize = (manifest.blockSize.int * params.blocksCount).NBytes,
       ecK = params.ecK,
       ecM = params.ecM,
-      strategy = params.strategy,
+      strategy = params.strategy
     )

     trace "Encoded data successfully", treeCid, blocksCount = params.blocksCount
@@ -451,14 +348,15 @@ proc encodeData(
   except CatchableError as exc:
     trace "Erasure coding encoding error", exc = exc.msg
     return failure(exc)
+  finally:
+    encoder.release()

 proc encode*(
     self: Erasure,
     manifest: Manifest,
     blocks: Natural,
     parity: Natural,
-    strategy = SteppedStrategy,
-): Future[?!Manifest] {.async.} =
+    strategy = SteppedStrategy): Future[?!Manifest] {.async.} =
   ## Encode a manifest into one that is erasure protected.
   ##
   ## `manifest` - the original manifest to be encoded
@@ -474,88 +372,20 @@ proc encode*(
   return success encodedManifest

-proc leopardDecodeTask(tp: Taskpool, task: ptr DecodeTask) {.gcsafe.} =
-  # Task suitable for running in taskpools - look, no GC!
-  let decoder =
-    task[].erasure.decoderProvider(task[].blockSize, task[].blocksLen, task[].parityLen)
-  defer:
-    decoder.release()
-    discard task[].signal.fireSync()
-
-  if (
-    let res = decoder.decode(
-      task[].blocks,
-      task[].parity,
-      task[].recovered,
-      task[].blocksLen,
-      task[].parityLen,
-      task[].recoveredLen,
-    )
-    res.isErr
-  ):
-    warn "Error from leopard decoder backend!", error = $res.error
-    task[].success.store(false)
-  else:
-    task[].success.store(true)
-
-proc asyncDecode*(
-    self: Erasure,
-    blockSize, blocksLen, parityLen: int,
-    blocks, parity: ref seq[seq[byte]],
-    recovered: ptr UncheckedArray[ptr UncheckedArray[byte]],
-): Future[?!void] {.async: (raises: [CancelledError]).} =
-  without threadPtr =? ThreadSignalPtr.new():
-    return failure("Unable to create thread signal")
-
-  defer:
-    threadPtr.close().expect("closing once works")
-
-  var
-    blockData = makeUncheckedArray(blocks)
-    parityData = makeUncheckedArray(parity)
-
-  defer:
-    dealloc(blockData)
-    dealloc(parityData)
-
-  ## Create an decode task with block data
-  var task = DecodeTask(
-    erasure: addr self,
-    blockSize: blockSize,
-    blocksLen: blocksLen,
-    parityLen: parityLen,
-    recoveredLen: blocksLen,
-    blocks: blockData,
-    parity: parityData,
-    recovered: recovered,
-    signal: threadPtr,
-  )
-
-  doAssert self.taskPool.numThreads > 1,
-    "Must have at least one separate thread or signal will never be fired"
-
-  self.taskPool.spawn leopardDecodeTask(self.taskPool, addr task)
-  let threadFut = threadPtr.wait()
-
-  if joinErr =? catch(await threadFut.join()).errorOption:
-    if err =? catch(await noCancel threadFut).errorOption:
-      return failure(err)
-    if joinErr of CancelledError:
-      raise (ref CancelledError) joinErr
-    else:
-      return failure(joinErr)
-
-  if not task.success.load():
-    return failure("Leopard decoding task failed")
-
-  success()
-
-proc decodeInternal(
-    self: Erasure, encoded: Manifest
-): Future[?!(ref seq[Cid], seq[Natural])] {.async.} =
+proc decode*(
+    self: Erasure,
+    encoded: Manifest): Future[?!Manifest] {.async.} =
+  ## Decode a protected manifest into it's original
+  ## manifest
+  ##
+  ## `encoded` - the encoded (protected) manifest to
+  ## be recovered
+  ##
   logScope:
     steps = encoded.steps
     rounded_blocks = encoded.rounded
     new_manifest = encoded.blocksCount

   var
     cids = seq[Cid].new()
@@ -565,27 +395,16 @@ proc decodeInternal(
   cids[].setLen(encoded.blocksCount)

   try:
-    for step in 0 ..< encoded.steps:
+    for step in 0..<encoded.steps:
+      # TODO: this is a tight blocking loop so we sleep here to allow
+      # other events to be processed, this should be addressed
+      # by threading
+      await sleepAsync(10.millis)
+
       var
         data = seq[seq[byte]].new()
-        parityData = seq[seq[byte]].new()
-        recovered = createDoubleArray(encoded.ecK, encoded.blockSize.int)
-
-      defer:
-        freeDoubleArray(recovered, encoded.ecK)
+        parity = seq[seq[byte]].new()

       data[].setLen(encoded.ecK) # set len to K
-      parityData[].setLen(encoded.ecM) # set len to M
+      parity[].setLen(encoded.ecM) # set len to M

-      without (dataPieces, _) =? (
-        await self.prepareDecodingData(
-          encoded, step, data, parityData, cids, emptyBlock
-        )
-      ), err:
+      without (dataPieces, _) =?
+        (await self.prepareDecodingData(encoded, step, data, parity, cids, emptyBlock)), err:
         trace "Unable to prepare data", error = err.msg
         return failure(err)
@@ -594,34 +413,23 @@ proc decodeInternal(
       continue

       trace "Erasure decoding data"
-      try:
-        if err =? (
-          await self.asyncDecode(
-            encoded.blockSize.int, encoded.ecK, encoded.ecM, data, parityData, recovered
-          )
-        ).errorOption:
-          return failure(err)
-      except CancelledError as exc:
-        raise exc
+      without recovered =? await asyncDecode(self.taskpool, decoder, data, parity, encoded.blockSize.int), err:
+        trace "Error decoding data", err = err.msg
+        return failure(err)

-      for i in 0 ..< encoded.ecK:
+      for i in 0..<encoded.ecK:
         let idx = i * encoded.steps + step
         if data[i].len <= 0 and not cids[idx].isEmpty:
-          var innerPtr: ptr UncheckedArray[byte] = recovered[][i]
-
-          without blk =? bt.Block.new(
-            innerPtr.toOpenArray(0, encoded.blockSize.int - 1)
-          ), error:
+          without blk =? bt.Block.new(recovered[i]), error:
             trace "Unable to create block!", exc = error.msg
             return failure(error)

           trace "Recovered block", cid = blk.cid, index = i
-          if error =? (await self.store.putBlock(blk)).errorOption:
-            warn "Unable to store block!", cid = blk.cid, msg = error.msg
+          if isErr (await self.store.putBlock(blk)):
+            trace "Unable to store block!", cid = blk.cid
             return failure("Unable to store block!")

-          self.store.completeBlock(BlockAddress.init(encoded.treeCid, idx), blk)
           cids[idx] = blk.cid
           recoveredIndices.add(idx)
     except CancelledError as exc:
@@ -633,78 +441,25 @@ proc decodeInternal(
   finally:
     decoder.release()

-  return (cids, recoveredIndices).success
-
-proc decode*(self: Erasure, encoded: Manifest): Future[?!Manifest] {.async.} =
-  ## Decode a protected manifest into it's original
-  ## manifest
-  ##
-  ## `encoded` - the encoded (protected) manifest to
-  ## be recovered
-  ##
-  without (cids, recoveredIndices) =? (await self.decodeInternal(encoded)), err:
-    return failure(err)
-
-  without tree =? CodexTree.init(cids[0 ..< encoded.originalBlocksCount]), err:
+  without tree =? CodexTree.init(cids[0..<encoded.originalBlocksCount]), err:
     return failure(err)

   without treeCid =? tree.rootCid, err:
     return failure(err)

   if treeCid != encoded.originalTreeCid:
-    return failure(
-      "Original tree root differs from the tree root computed out of recovered data"
-    )
+    return failure("Original tree root differs from the tree root computed out of recovered data")

-  let idxIter =
-    Iter[Natural].new(recoveredIndices).filter((i: Natural) => i < tree.leavesCount)
+  let idxIter = Iter[Natural].new(recoveredIndices)
+    .filter((i: Natural) => i < tree.leavesCount)

   if err =? (await self.store.putSomeProofs(tree, idxIter)).errorOption:
     return failure(err)

   let decoded = Manifest.new(encoded)

   return decoded.success

-proc repair*(self: Erasure, encoded: Manifest): Future[?!void] {.async.} =
-  ## Repair a protected manifest by reconstructing the full dataset
-  ##
-  ## `encoded` - the encoded (protected) manifest to
-  ## be repaired
-  ##
-  without (cids, _) =? (await self.decodeInternal(encoded)), err:
-    return failure(err)
-
-  without tree =? CodexTree.init(cids[0 ..< encoded.originalBlocksCount]), err:
-    return failure(err)
-
-  without treeCid =? tree.rootCid, err:
-    return failure(err)
-
-  if treeCid != encoded.originalTreeCid:
-    return failure(
-      "Original tree root differs from the tree root computed out of recovered data"
-    )
-
-  if err =? (await self.store.putAllProofs(tree)).errorOption:
-    return failure(err)
-
-  without repaired =? (
-    await self.encode(
-      Manifest.new(encoded), encoded.ecK, encoded.ecM, encoded.protectedStrategy
-    )
-  ), err:
-    return failure(err)
-
-  if repaired.treeCid != encoded.treeCid:
-    return failure(
-      "Original tree root differs from the repaired tree root encoded out of recovered data"
-    )
-
-  return success()
-
 proc start*(self: Erasure) {.async.} =
   return
@@ -712,17 +467,16 @@ proc stop*(self: Erasure) {.async.} =
   return

 proc new*(
     T: type Erasure,
     store: BlockStore,
     encoderProvider: EncoderProvider,
     decoderProvider: DecoderProvider,
-    taskPool: Taskpool,
-): Erasure =
+    taskpool: Taskpool): Erasure =
   ## Create a new Erasure instance for encoding and decoding manifests
   ##
   Erasure(
     store: store,
     encoderProvider: encoderProvider,
     decoderProvider: decoderProvider,
-    taskPool: taskPool,
-  )
+    taskpool: taskpool)
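
Wiring it together, as a sketch only (the provider helpers are assumed, and the store comes from elsewhere; imports from the modules above are implied):

proc leoEncoderProvider(size, blocks, parity: int): EncoderBackend {.raises: [Defect], noSideEffect.} =
  LeoEncoderBackend.new(size, blocks, parity)

proc leoDecoderProvider(size, blocks, parity: int): DecoderBackend {.raises: [Defect], noSideEffect.} =
  LeoDecoderBackend.new(size, blocks, parity)

proc makeErasure(store: BlockStore): Erasure =
  Erasure.new(
    store, # any BlockStore implementation
    leoEncoderProvider,
    leoDecoderProvider,
    Taskpool.new(numThreads = 4),
  )

Passing providers rather than concrete backends lets the erasure coder construct a fresh encoder or decoder per geometry (block size, K, M) and release it when done.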


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,13 +7,9 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [].}
-
 import std/options
-import std/sugar
-import std/sequtils
-import pkg/results
+import pkg/stew/results
 import pkg/chronos
 import pkg/questionable/results
@@ -23,18 +19,14 @@ type
   CodexError* = object of CatchableError # base codex error
   CodexResult*[T] = Result[T, ref CodexError]

-  FinishedFailed*[T] = tuple[success: seq[Future[T]], failure: seq[Future[T]]]
-
 template mapFailure*[T, V, E](
-    exp: Result[T, V], exc: typedesc[E]
+    exp: Result[T, V],
+    exc: typedesc[E],
 ): Result[T, ref CatchableError] =
   ## Convert `Result[T, E]` to `Result[E, ref CatchableError]`
   ##
-  exp.mapErr(
-    proc(e: V): ref CatchableError =
-      (ref exc)(msg: $e)
-  )
+  exp.mapErr(proc (e: V): ref CatchableError = (ref exc)(msg: $e))

 template mapFailure*[T, V](exp: Result[T, V]): Result[T, ref CatchableError] =
   mapFailure(exp, CodexError)
@@ -46,43 +38,12 @@ func toFailure*[T](exp: Option[T]): Result[T, ref CatchableError] {.inline.} =
   else:
     T.failure("Option is None")

-proc allFinishedFailed*[T](
-    futs: auto
-): Future[FinishedFailed[T]] {.async: (raises: [CancelledError]).} =
-  ## Check if all futures have finished or failed
-  ##
-  ## TODO: wip, not sure if we want this - at the minimum,
-  ## we should probably avoid the async transform
-  var res: FinishedFailed[T] = (@[], @[])
-  await allFutures(futs)
-  for f in futs:
-    if f.failed:
-      res.failure.add f
-    else:
-      res.success.add f
-  return res
-
-proc allFinishedValues*[T](
-    futs: auto
-): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
-  ## If all futures have finished, return corresponding values,
-  ## otherwise return failure
-  ##
-  # wait for all futures to be either completed, failed or canceled
-  await allFutures(futs)
-  let numOfFailed = futs.countIt(it.failed)
-  if numOfFailed > 0:
-    return failure "Some futures failed (" & $numOfFailed & "))"
-
-  # here, we know there are no failed futures in "futs"
-  # and we are only interested in those that completed successfully
-  let values = collect:
-    for b in futs:
-      if b.finished:
-        b.value
-  return success values
+proc allFutureResult*[T](fut: seq[Future[T]]): Future[?!void] {.async.} =
+  try:
+    await allFuturesThrowing(fut)
+  except CancelledError as exc:
+    raise exc
+  except CatchableError as exc:
+    return failure(exc.msg)
+  return success()
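
Both sides keep the same mapFailure/toFailure helpers. A small usage sketch (the codex/errors import path is assumed):

import std/options
import pkg/results
import pkg/questionable/results
import pkg/codex/errors # path assumed

# a Result carrying a non-exception error type...
let raw = Result[int, cstring].err("boom")
# ...mapped into the `?!T` shape used across the codebase
let mapped = raw.mapFailure(CodexError)
assert mapped.isErr and mapped.error.msg == "boom"

# toFailure lifts Option[T] into the same shape
assert 42.some.toFailure.get == 42
assert int.none.toFailure.isErr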


@@ -10,7 +10,7 @@ type
   # 0 => 0, 1, 2
   # 1 => 3, 4, 5
   # 2 => 6, 7, 8
-  LinearStrategy
+  LinearStrategy,

   # Stepped indexing:
   # 0 => 0, 3, 6
@@ -21,106 +21,77 @@ type
   # Representing a strategy for grouping indices (of blocks usually)
   # Given an interation-count as input, will produce a seq of
   # selected indices.

   IndexingError* = object of CodexError
   IndexingWrongIndexError* = object of IndexingError
   IndexingWrongIterationsError* = object of IndexingError
-  IndexingWrongGroupCountError* = object of IndexingError
-  IndexingWrongPadBlockCountError* = object of IndexingError

   IndexingStrategy* = object
-    strategyType*: StrategyType # Indexing strategy algorithm
+    strategyType*: StrategyType
     firstIndex*: int # Lowest index that can be returned
     lastIndex*: int # Highest index that can be returned
-    iterations*: int # Number of iteration steps (0 ..< iterations)
-    step*: int # Step size between generated indices
-    groupCount*: int # Number of groups to partition indices into
-    padBlockCount*: int # Number of padding blocks to append per group
+    iterations*: int # getIndices(iteration) will run from 0 ..< iterations
+    step*: int

-func checkIteration(
-    self: IndexingStrategy, iteration: int
-): void {.raises: [IndexingError].} =
+func checkIteration(self: IndexingStrategy, iteration: int): void {.raises: [IndexingError].} =
   if iteration >= self.iterations:
     raise newException(
-      IndexingError, "Indexing iteration can't be greater than or equal to iterations."
-    )
+      IndexingError,
+      "Indexing iteration can't be greater than or equal to iterations.")

 func getIter(first, last, step: int): Iter[int] =
   {.cast(noSideEffect).}:
     Iter[int].new(first, last, step)

-func getLinearIndices(self: IndexingStrategy, iteration: int): Iter[int] =
+func getLinearIndicies(
+  self: IndexingStrategy,
+  iteration: int): Iter[int] {.raises: [IndexingError].} =
+  self.checkIteration(iteration)
+
   let
     first = self.firstIndex + iteration * self.step
     last = min(first + self.step - 1, self.lastIndex)

   getIter(first, last, 1)

-func getSteppedIndices(self: IndexingStrategy, iteration: int): Iter[int] =
+func getSteppedIndicies(
+  self: IndexingStrategy,
+  iteration: int): Iter[int] {.raises: [IndexingError].} =
+  self.checkIteration(iteration)
+
   let
     first = self.firstIndex + iteration
     last = self.lastIndex

   getIter(first, last, self.iterations)

-func getStrategyIndices(self: IndexingStrategy, iteration: int): Iter[int] =
+func getIndicies*(
+  self: IndexingStrategy,
+  iteration: int): Iter[int] {.raises: [IndexingError].} =
   case self.strategyType
   of StrategyType.LinearStrategy:
-    self.getLinearIndices(iteration)
+    self.getLinearIndicies(iteration)
   of StrategyType.SteppedStrategy:
-    self.getSteppedIndices(iteration)
+    self.getSteppedIndicies(iteration)

-func getIndices*(
-    self: IndexingStrategy, iteration: int
-): Iter[int] {.raises: [IndexingError].} =
-  self.checkIteration(iteration)
-  {.cast(noSideEffect).}:
-    Iter[int].new(
-      iterator (): int {.gcsafe.} =
-        for value in self.getStrategyIndices(iteration):
-          yield value
-        for i in 0 ..< self.padBlockCount:
-          yield self.lastIndex + (iteration + 1) + i * self.groupCount
-    )
-
 func init*(
     strategy: StrategyType,
-    firstIndex, lastIndex, iterations: int,
-    groupCount = 0,
-    padBlockCount = 0,
-): IndexingStrategy {.raises: [IndexingError].} =
+    firstIndex, lastIndex, iterations: int): IndexingStrategy {.raises: [IndexingError].} =
   if firstIndex > lastIndex:
     raise newException(
       IndexingWrongIndexError,
-      "firstIndex (" & $firstIndex & ") can't be greater than lastIndex (" & $lastIndex &
-        ")",
-    )
+      "firstIndex (" & $firstIndex & ") can't be greater than lastIndex (" & $lastIndex & ")")

   if iterations <= 0:
     raise newException(
       IndexingWrongIterationsError,
-      "iterations (" & $iterations & ") must be greater than zero.",
-    )
-
-  if padBlockCount < 0:
-    raise newException(
-      IndexingWrongPadBlockCountError,
-      "padBlockCount (" & $padBlockCount & ") must be equal or greater than zero.",
-    )
-
-  if padBlockCount > 0 and groupCount <= 0:
-    raise newException(
-      IndexingWrongGroupCountError,
-      "groupCount (" & $groupCount & ") must be greater than zero.",
-    )
+      "iterations (" & $iterations & ") must be greater than zero.")

   IndexingStrategy(
     strategyType: strategy,
     firstIndex: firstIndex,
     lastIndex: lastIndex,
     iterations: iterations,
-    step: divUp((lastIndex - firstIndex + 1), iterations),
-    groupCount: groupCount,
-    padBlockCount: padBlockCount,
-  )
+    step: divUp((lastIndex - firstIndex + 1), iterations))
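
The two strategies differ only in how an iteration maps to indices. A standalone illustration for firstIndex = 0, lastIndex = 8, iterations = 3, matching the comments at the top of the file:

let
  first = 0
  last = 8
  iterations = 3
  step = (last - first + iterations) div iterations # divUp(last - first + 1, iterations)

for iteration in 0 ..< iterations:
  var linear, stepped: seq[int]
  for i in countup(first + iteration * step, min(first + iteration * step + step - 1, last)):
    linear.add i
  for i in countup(first + iteration, last, iterations):
    stepped.add i
  echo iteration, " linear: ", linear, " stepped: ", stepped
# 0 linear: @[0, 1, 2] stepped: @[0, 3, 6]
# 1 linear: @[3, 4, 5] stepped: @[1, 4, 7]
# 2 linear: @[6, 7, 8] stepped: @[2, 5, 8]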


@@ -11,7 +11,7 @@
 ## 4. Remove usages of `nim-json-serialization` from the codebase
 ## 5. Remove need to declare `writeValue` for new types
 ## 6. Remove need to [avoid importing or exporting `toJson`, `%`, `%*` to prevent
-##    conflicts](https://github.com/logos-storage/logos-storage-nim/pull/645#issuecomment-1838834467)
+##    conflicts](https://github.com/codex-storage/nim-codex/pull/645#issuecomment-1838834467)
 ##
 ## When declaring a new type, one should consider importing the `codex/logutils`
 ## module, and specifying `formatIt`. If textlines log output and json log output
@@ -98,6 +98,7 @@ import pkg/questionable/results
 import ./utils/json except formatIt # TODO: remove exception?
 import pkg/stew/byteutils
 import pkg/stint
+import pkg/upraises

 export byteutils
 export chronicles except toJson, formatIt, `%`
@@ -106,6 +107,7 @@ export sequtils
 export json except formatIt
 export strutils
 export sugar
+export upraises
 export results

 func shortLog*(long: string, ellipses = "*", start = 3, stop = 6): string =
@@ -123,9 +125,8 @@ func shortLog*(long: string, ellipses = "*", start = 3, stop = 6): string =
   short

 func shortHexLog*(long: string): string =
-  if long[0 .. 1] == "0x":
-    result &= "0x"
-  result &= long[2 .. long.high].shortLog("..", 4, 4)
+  if long[0..1] == "0x": result &= "0x"
+  result &= long[2..long.high].shortLog("..", 4, 4)

 func short0xHexLog*[N: static[int], T: array[N, byte]](v: T): string =
   v.to0xHex.shortHexLog
@@ -152,7 +153,7 @@ proc formatTextLineSeq*(val: seq[string]): string =
 template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
   # Provides formatters for logging with Chronicles for the given type and
   # `LogFormat`.
-  # NOTE: `seq[T]`, `Option[T]`, and `seq[Option[T]]` are overridden
+  # NOTE: `seq[T]`, `Option[T]`, and `seq[Option[T]]` are overriddden
   # since the base `setProperty` is generic using `auto` and conflicts with
   # providing a generic `seq` and `Option` override.
   when format == LogFormat.json:
@@ -183,16 +184,12 @@ template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
       let v = opts.map(opt => opt.formatJsonOption)
       setProperty(r, key, json.`%`(v))

-    proc setProperty*(
-        r: var JsonRecord, key: string, val: seq[T]
-    ) {.raises: [ValueError, IOError].} =
+    proc setProperty*(r: var JsonRecord, key: string, val: seq[T]) =
       var it {.inject, used.}: T
       let v = val.map(it => body)
       setProperty(r, key, json.`%`(v))

-    proc setProperty*(
-        r: var JsonRecord, key: string, val: T
-    ) {.raises: [ValueError, IOError].} =
+    proc setProperty*(r: var JsonRecord, key: string, val: T) {.upraises:[ValueError, IOError].} =
       var it {.inject, used.}: T = val
       let v = body
       setProperty(r, key, json.`%`(v))
@@ -223,35 +220,23 @@ template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
       let v = opts.map(opt => opt.formatTextLineOption)
       setProperty(r, key, v.formatTextLineSeq)

-    proc setProperty*(
-        r: var TextLineRecord, key: string, val: seq[T]
-    ) {.raises: [ValueError, IOError].} =
+    proc setProperty*(r: var TextLineRecord, key: string, val: seq[T]) =
       var it {.inject, used.}: T
       let v = val.map(it => body)
       setProperty(r, key, v.formatTextLineSeq)

-    proc setProperty*(
-        r: var TextLineRecord, key: string, val: T
-    ) {.raises: [ValueError, IOError].} =
+    proc setProperty*(r: var TextLineRecord, key: string, val: T) {.upraises:[ValueError, IOError].} =
       var it {.inject, used.}: T = val
       let v = body
       setProperty(r, key, v)

 template formatIt*(T: type, body: untyped) {.dirty.} =
-  formatIt(LogFormat.textLines, T):
-    body
-  formatIt(LogFormat.json, T):
-    body
+  formatIt(LogFormat.textLines, T): body
+  formatIt(LogFormat.json, T): body

-formatIt(LogFormat.textLines, Cid):
-  shortLog($it)
-formatIt(LogFormat.json, Cid):
-  $it
-formatIt(UInt256):
-  $it
-formatIt(MultiAddress):
-  $it
-formatIt(LogFormat.textLines, array[32, byte]):
-  it.short0xHexLog
-formatIt(LogFormat.json, array[32, byte]):
-  it.to0xHex
+formatIt(LogFormat.textLines, Cid): shortLog($it)
+formatIt(LogFormat.json, Cid): $it
+formatIt(UInt256): $it
+formatIt(MultiAddress): $it
+formatIt(LogFormat.textLines, array[32, byte]): it.short0xHexLog
+formatIt(LogFormat.json, array[32, byte]): it.to0xHex
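
Declaring formatters for a new type follows the same pattern as the Cid declarations above. A sketch (Peer is a made-up type and the import path is assumed):

import pkg/codex/logutils # path assumed

type Peer = object
  id: string

# textlines output gets a shortened id, json output the full one
formatIt(LogFormat.textLines, Peer):
  it.id.shortLog("..", 4, 4)
formatIt(LogFormat.json, Peer):
  it.id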


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,9 +9,9 @@
 # This module implements serialization and deserialization of Manifest

-import times
-
-{.push raises: [].}
+import pkg/upraises
+
+push: {.upraises: [].}

 import std/tables
 import std/sequtils
@@ -32,7 +32,7 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
   ## multicodec container (Dag-pb) for now
   ##

-  ?manifest.verify()
+  ? manifest.verify()
   var pbNode = initProtoBuffer()

   # NOTE: The `Data` field in the the `dag-pb`
@@ -59,8 +59,6 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
   #      optional hcodec: MultiCodec = 5    # Multihash codec
   #      optional version: CidVersion = 6;  # Cid version
   #      optional ErasureInfo erasure = 7;  # erasure coding info
-  #      optional filename: ?string = 8;    # original filename
-  #      optional mimetype: ?string = 9;    # original mimetype
   #    }
   # ```
   #
@@ -72,7 +70,6 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
   header.write(4, manifest.codec.uint32)
   header.write(5, manifest.hcodec.uint32)
   header.write(6, manifest.version.uint32)
-
   if manifest.protected:
     var erasureInfo = initProtoBuffer()
     erasureInfo.write(1, manifest.ecK.uint32)
@@ -93,12 +90,6 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
     erasureInfo.finish()
     header.write(7, erasureInfo)

-  if manifest.filename.isSome:
-    header.write(8, manifest.filename.get())
-
-  if manifest.mimetype.isSome:
-    header.write(9, manifest.mimetype.get())
-
   pbNode.write(1, header) # set the treeCid as the data field
   pbNode.finish()
@@ -127,8 +118,6 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
     slotRoots: seq[seq[byte]]
     cellSize: uint32
     verifiableStrategy: uint32
-    filename: string
-    mimetype: string

   # Decode `Header` message
   if pbNode.getField(1, pbHeader).isErr:
@@ -156,12 +145,6 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
     if pbHeader.getField(7, pbErasureInfo).isErr:
       return failure("Unable to decode `erasureInfo` from manifest!")

-  if pbHeader.getField(8, filename).isErr:
-    return failure("Unable to decode `filename` from manifest!")
-
-  if pbHeader.getField(9, mimetype).isErr:
-    return failure("Unable to decode `mimetype` from manifest!")
-
   let protected = pbErasureInfo.buffer.len > 0
   var verifiable = false
   if protected:
@@ -197,13 +180,11 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
     if pbVerificationInfo.getField(4, verifiableStrategy).isErr:
       return failure("Unable to decode `verifiableStrategy` from manifest!")

-  let treeCid = ?Cid.init(treeCidBuf).mapFailure
-
-  var filenameOption = if filename.len == 0: string.none else: filename.some
-  var mimetypeOption = if mimetype.len == 0: string.none else: mimetype.some
-
-  let self =
-    if protected:
+  let
+    treeCid = ? Cid.init(treeCidBuf).mapFailure
+
+  let
+    self = if protected:
       Manifest.new(
         treeCid = treeCid,
         datasetSize = datasetSize.NBytes,
@@ -213,37 +194,31 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
         codec = codec.MultiCodec,
         ecK = ecK.int,
         ecM = ecM.int,
-        originalTreeCid = ?Cid.init(originalTreeCid).mapFailure,
+        originalTreeCid = ? Cid.init(originalTreeCid).mapFailure,
         originalDatasetSize = originalDatasetSize.NBytes,
-        strategy = StrategyType(protectedStrategy),
-        filename = filenameOption,
-        mimetype = mimetypeOption,
-      )
-    else:
-      Manifest.new(
-        treeCid = treeCid,
-        datasetSize = datasetSize.NBytes,
-        blockSize = blockSize.NBytes,
-        version = CidVersion(version),
-        hcodec = hcodec.MultiCodec,
-        codec = codec.MultiCodec,
-        filename = filenameOption,
-        mimetype = mimetypeOption,
-      )
+        strategy = StrategyType(protectedStrategy))
+      else:
+        Manifest.new(
+          treeCid = treeCid,
+          datasetSize = datasetSize.NBytes,
+          blockSize = blockSize.NBytes,
+          version = CidVersion(version),
+          hcodec = hcodec.MultiCodec,
+          codec = codec.MultiCodec)

-  ?self.verify()
+  ? self.verify()

   if verifiable:
     let
-      verifyRootCid = ?Cid.init(verifyRoot).mapFailure
-      slotRootCids = slotRoots.mapIt(?Cid.init(it).mapFailure)
+      verifyRootCid = ? Cid.init(verifyRoot).mapFailure
+      slotRootCids = slotRoots.mapIt(? Cid.init(it).mapFailure)

     return Manifest.new(
       manifest = self,
       verifyRoot = verifyRootCid,
       slotRoots = slotRootCids,
       cellSize = cellSize.NBytes,
-      strategy = StrategyType(verifiableStrategy),
+      strategy = StrategyType(verifiableStrategy)
     )

   self.success
@@ -252,7 +227,7 @@ func decode*(_: type Manifest, blk: Block): ?!Manifest =
   ## Decode a manifest using `decoder`
   ##

-  if not ?blk.cid.isManifest:
+  if not ? blk.cid.isManifest:
     return failure "Cid not a manifest codec"

   Manifest.decode(blk.data)
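
A round-trip sketch of these coders (the import paths and the caller-supplied treeCid are assumptions, not from this diff):

import pkg/libp2p/cid
import pkg/questionable/results
import pkg/codex/manifest # path assumed

proc roundTrip(treeCid: Cid): ?!void =
  let manifest = Manifest.new(
    treeCid = treeCid,
    blockSize = 65536.NBytes,
    datasetSize = 1_048_576.NBytes,
  )
  let bytes = ? manifest.encode()
  let decoded = ? Manifest.decode(bytes)
  assert decoded == manifest # all fields survive the dag-pb round-trip
  success()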


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,7 +9,9 @@
 # This module defines all operations on Manifest

-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

 import pkg/libp2p/protobuf/minprotobuf
 import pkg/libp2p/[cid, multihash, multicodec]
@@ -23,36 +25,34 @@ import ../blocktype
 import ../indexingstrategy
 import ../logutils

 # TODO: Manifest should be reworked to more concrete types,
 # perhaps using inheritance
-type Manifest* = ref object of RootObj
-  treeCid {.serialize.}: Cid # Root of the merkle tree
-  datasetSize {.serialize.}: NBytes # Total size of all blocks
-  blockSize {.serialize.}: NBytes
-    # Size of each contained block (might not be needed if blocks are len-prefixed)
-  codec: MultiCodec # Dataset codec
-  hcodec: MultiCodec # Multihash codec
-  version: CidVersion # Cid version
-  filename {.serialize.}: ?string # The filename of the content uploaded (optional)
-  mimetype {.serialize.}: ?string # The mimetype of the content uploaded (optional)
-  case protected {.serialize.}: bool # Protected datasets have erasure coded info
-  of true:
-    ecK: int # Number of blocks to encode
-    ecM: int # Number of resulting parity blocks
-    originalTreeCid: Cid # The original root of the dataset being erasure coded
-    originalDatasetSize: NBytes
-    protectedStrategy: StrategyType # Indexing strategy used to build the slot roots
-    case verifiable {.serialize.}: bool
-      # Verifiable datasets can be used to generate storage proofs
-    of true:
-      verifyRoot: Cid # Root of the top level merkle tree built from slot roots
-      slotRoots: seq[Cid] # Individual slot root built from the original dataset blocks
-      cellSize: NBytes # Size of each slot cell
-      verifiableStrategy: StrategyType # Indexing strategy used to build the slot roots
-    else:
-      discard
-  else:
-    discard
+type
+  Manifest* = ref object of RootObj
+    treeCid {.serialize.}: Cid # Root of the merkle tree
+    datasetSize {.serialize.}: NBytes # Total size of all blocks
+    blockSize {.serialize.}: NBytes # Size of each contained block (might not be needed if blocks are len-prefixed)
+    codec: MultiCodec # Dataset codec
+    hcodec: MultiCodec # Multihash codec
+    version: CidVersion # Cid version
+    case protected {.serialize.}: bool # Protected datasets have erasure coded info
+    of true:
+      ecK: int # Number of blocks to encode
+      ecM: int # Number of resulting parity blocks
+      originalTreeCid: Cid # The original root of the dataset being erasure coded
+      originalDatasetSize: NBytes
+      protectedStrategy: StrategyType # Indexing strategy used to build the slot roots
+      case verifiable {.serialize.}: bool # Verifiable datasets can be used to generate storage proofs
+      of true:
+        verifyRoot: Cid # Root of the top level merkle tree built from slot roots
+        slotRoots: seq[Cid] # Individual slot root built from the original dataset blocks
+        cellSize: NBytes # Size of each slot cell
+        verifiableStrategy: StrategyType # Indexing strategy used to build the slot roots
+      else:
+        discard
+    else:
+      discard
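
The nested case-object above means field availability is driven by the two discriminators, protected and verifiable. A toy version of the same shape (Dataset is illustrative only):

type Dataset = ref object
  size: int
  case protected: bool
  of true:
    ecK, ecM: int
    case verifiable: bool
    of true:
      slotCount: int
    else:
      discard
  else:
    discard

let d = Dataset(protected: true, ecK: 4, ecM: 2, verifiable: false)
assert d.ecK == 4 # available because protected == true
# accessing d.slotCount would raise a FieldDefect: verifiable == false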
############################################################
# Accessors
############################################################

@@ -121,18 +121,12 @@ func verifiableStrategy*(self: Manifest): StrategyType =
 func numSlotBlocks*(self: Manifest): int =
   divUp(self.blocksCount, self.numSlots)

-func filename*(self: Manifest): ?string =
-  self.filename
-
-func mimetype*(self: Manifest): ?string =
-  self.mimetype
-
 ############################################################
 # Operations on block list
 ############################################################

 func isManifest*(cid: Cid): ?!bool =
-  success (ManifestCodec == ?cid.contentType().mapFailure(CodexError))
+  success (ManifestCodec == ? cid.contentType().mapFailure(CodexError))

 func isManifest*(mc: MultiCodec): ?!bool =
   success mc == ManifestCodec
@@ -154,77 +148,74 @@ func verify*(self: Manifest): ?!void =
   ##

   if self.protected and (self.blocksCount != self.steps * (self.ecK + self.ecM)):
-    return
-      failure newException(CodexError, "Broken manifest: wrong originalBlocksCount")
+    return failure newException(CodexError, "Broken manifest: wrong originalBlocksCount")

   return success()

+func cid*(self: Manifest): ?!Cid {.deprecated: "use treeCid instead".} =
+  self.treeCid.success
+
 func `==`*(a, b: Manifest): bool =
-  (a.treeCid == b.treeCid) and (a.datasetSize == b.datasetSize) and
-    (a.blockSize == b.blockSize) and (a.version == b.version) and (a.hcodec == b.hcodec) and
-    (a.codec == b.codec) and (a.protected == b.protected) and (a.filename == b.filename) and
-    (a.mimetype == b.mimetype) and (
-      if a.protected:
-        (a.ecK == b.ecK) and (a.ecM == b.ecM) and (a.originalTreeCid == b.originalTreeCid) and
-          (a.originalDatasetSize == b.originalDatasetSize) and
-          (a.protectedStrategy == b.protectedStrategy) and (a.verifiable == b.verifiable) and
-          (
-            if a.verifiable:
-              (a.verifyRoot == b.verifyRoot) and (a.slotRoots == b.slotRoots) and
-                (a.cellSize == b.cellSize) and (
-                  a.verifiableStrategy == b.verifiableStrategy
-                )
-            else:
-              true
-          )
-      else:
-        true
-    )
+  (a.treeCid == b.treeCid) and
+  (a.datasetSize == b.datasetSize) and
+  (a.blockSize == b.blockSize) and
+  (a.version == b.version) and
+  (a.hcodec == b.hcodec) and
+  (a.codec == b.codec) and
+  (a.protected == b.protected) and
+    (if a.protected:
+      (a.ecK == b.ecK) and
+      (a.ecM == b.ecM) and
+      (a.originalTreeCid == b.originalTreeCid) and
+      (a.originalDatasetSize == b.originalDatasetSize) and
+      (a.protectedStrategy == b.protectedStrategy) and
+      (a.verifiable == b.verifiable) and
+        (if a.verifiable:
+          (a.verifyRoot == b.verifyRoot) and
+          (a.slotRoots == b.slotRoots) and
+          (a.cellSize == b.cellSize) and
+          (a.verifiableStrategy == b.verifiableStrategy)
+        else:
+          true)
+    else:
+      true)

 func `$`*(self: Manifest): string =
-  result =
-    "treeCid: " & $self.treeCid & ", datasetSize: " & $self.datasetSize & ", blockSize: " &
-    $self.blockSize & ", version: " & $self.version & ", hcodec: " & $self.hcodec &
-    ", codec: " & $self.codec & ", protected: " & $self.protected
-
-  if self.filename.isSome:
-    result &= ", filename: " & $self.filename
-
-  if self.mimetype.isSome:
-    result &= ", mimetype: " & $self.mimetype
-
-  result &= (
-    if self.protected:
-      ", ecK: " & $self.ecK & ", ecM: " & $self.ecM & ", originalTreeCid: " &
-      $self.originalTreeCid & ", originalDatasetSize: " & $self.originalDatasetSize &
-      ", verifiable: " & $self.verifiable & (
-        if self.verifiable:
-          ", verifyRoot: " & $self.verifyRoot & ", slotRoots: " & $self.slotRoots
-        else:
-          ""
-      )
-    else:
-      ""
-  )
-
-  return result
+  "treeCid: " & $self.treeCid &
+    ", datasetSize: " & $self.datasetSize &
+    ", blockSize: " & $self.blockSize &
+    ", version: " & $self.version &
+    ", hcodec: " & $self.hcodec &
+    ", codec: " & $self.codec &
+    ", protected: " & $self.protected &
+    (if self.protected:
+      ", ecK: " & $self.ecK &
+      ", ecM: " & $self.ecM &
+      ", originalTreeCid: " & $self.originalTreeCid &
+      ", originalDatasetSize: " & $self.originalDatasetSize &
+      ", verifiable: " & $self.verifiable &
+      (if self.verifiable:
+        ", verifyRoot: " & $self.verifyRoot &
+        ", slotRoots: " & $self.slotRoots
      else:
        "")
    else:
      "")

############################################################
# Constructors
############################################################

 func new*(
     T: type Manifest,
     treeCid: Cid,
     blockSize: NBytes,
     datasetSize: NBytes,
     version: CidVersion = CIDv1,
     hcodec = Sha256HashCodec,
     codec = BlockCodec,
-    protected = false,
-    filename: ?string = string.none,
-    mimetype: ?string = string.none,
-): Manifest =
+    protected = false): Manifest =
   T(
     treeCid: treeCid,
     blockSize: blockSize,
@@ -232,19 +223,15 @@ func new*(
     version: version,
     codec: codec,
     hcodec: hcodec,
-    protected: protected,
-    filename: filename,
-    mimetype: mimetype,
-  )
+    protected: protected)

 func new*(
     T: type Manifest,
     manifest: Manifest,
     treeCid: Cid,
     datasetSize: NBytes,
     ecK, ecM: int,
-    strategy = SteppedStrategy,
-): Manifest =
+    strategy = SteppedStrategy): Manifest =
   ## Create an erasure protected dataset from an
   ## unprotected one
   ##
@@ -257,16 +244,14 @@ func new*(
     hcodec: manifest.hcodec,
     blockSize: manifest.blockSize,
     protected: true,
-    ecK: ecK,
-    ecM: ecM,
+    ecK: ecK, ecM: ecM,
     originalTreeCid: manifest.treeCid,
     originalDatasetSize: manifest.datasetSize,
-    protectedStrategy: strategy,
-    filename: manifest.filename,
-    mimetype: manifest.mimetype,
-  )
+    protectedStrategy: strategy)

-func new*(T: type Manifest, manifest: Manifest): Manifest =
+func new*(
+    T: type Manifest,
+    manifest: Manifest): Manifest =
   ## Create an unprotected dataset from an
   ## erasure protected one
   ##
@@ -278,27 +263,22 @@ func new*(T: type Manifest, manifest: Manifest): Manifest =
     codec: manifest.codec,
     hcodec: manifest.hcodec,
     blockSize: manifest.blockSize,
-    protected: false,
-    filename: manifest.filename,
-    mimetype: manifest.mimetype,
-  )
+    protected: false)

 func new*(
     T: type Manifest,
     treeCid: Cid,
     datasetSize: NBytes,
     blockSize: NBytes,
     version: CidVersion,
     hcodec: MultiCodec,
     codec: MultiCodec,
     ecK: int,
     ecM: int,
     originalTreeCid: Cid,
     originalDatasetSize: NBytes,
-    strategy = SteppedStrategy,
-    filename: ?string = string.none,
-    mimetype: ?string = string.none,
-): Manifest =
+    strategy = SteppedStrategy): Manifest =
   Manifest(
     treeCid: treeCid,
     datasetSize: datasetSize,
@@ -311,30 +291,26 @@ func new*(
     ecM: ecM,
     originalTreeCid: originalTreeCid,
     originalDatasetSize: originalDatasetSize,
-    protectedStrategy: strategy,
-    filename: filename,
-    mimetype: mimetype,
-  )
+    protectedStrategy: strategy)

 func new*(
     T: type Manifest,
     manifest: Manifest,
     verifyRoot: Cid,
     slotRoots: openArray[Cid],
cellSize = DefaultCellSize, cellSize = DefaultCellSize,
strategy = LinearStrategy, strategy = LinearStrategy): ?!Manifest =
): ?!Manifest =
## Create a verifiable dataset from an ## Create a verifiable dataset from an
## protected one ## protected one
## ##
if not manifest.protected: if not manifest.protected:
return failure newException( return failure newException(
CodexError, "Can create verifiable manifest only from protected manifest." CodexError, "Can create verifiable manifest only from protected manifest.")
)
if slotRoots.len != manifest.numSlots: if slotRoots.len != manifest.numSlots:
return failure newException(CodexError, "Wrong number of slot roots.") return failure newException(
CodexError, "Wrong number of slot roots.")
success Manifest( success Manifest(
treeCid: manifest.treeCid, treeCid: manifest.treeCid,
@ -353,12 +329,11 @@ func new*(
verifyRoot: verifyRoot, verifyRoot: verifyRoot,
slotRoots: @slotRoots, slotRoots: @slotRoots,
cellSize: cellSize, cellSize: cellSize,
verifiableStrategy: strategy, verifiableStrategy: strategy)
filename: manifest.filename,
mimetype: manifest.mimetype,
)
func new*(T: type Manifest, data: openArray[byte]): ?!Manifest = func new*(
T: type Manifest,
data: openArray[byte]): ?!Manifest =
## Create a manifest instance from given data ## Create a manifest instance from given data
## ##
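A note for readers skimming this hunk: most of the churn comes from Nim's expression `if`, which lets fields that only apply to protected manifests fold into one boolean chain instead of early returns. A standalone sketch of that pattern, using a hypothetical `Thing` type that is not part of the codebase:

# Sketch: conditional equality over optional field groups (hypothetical type).
type Thing = ref object
  size: int
  protected: bool
  ecK, ecM: int # only meaningful when protected

func `==`(a, b: Thing): bool =
  (a.size == b.size) and (a.protected == b.protected) and
    (if a.protected: (a.ecK == b.ecK) and (a.ecM == b.ecM) else: true)

when isMainModule:
  # ecK is ignored while protected is false
  doAssert Thing(size: 1, protected: false) == Thing(size: 1, protected: false, ecK: 9)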
@ -1,4 +1,5 @@
import pkg/chronos import pkg/chronos
import pkg/upraises
import pkg/questionable import pkg/questionable
import pkg/ethers/erc20 import pkg/ethers/erc20
import ./contracts/requests import ./contracts/requests
@ -17,20 +18,17 @@ export periods
type type
Market* = ref object of RootObj Market* = ref object of RootObj
MarketError* = object of CodexError MarketError* = object of CodexError
SlotStateMismatchError* = object of MarketError
SlotReservationNotAllowedError* = object of MarketError
ProofInvalidError* = object of MarketError
Subscription* = ref object of RootObj Subscription* = ref object of RootObj
OnRequest* = OnRequest* = proc(id: RequestId,
proc(id: RequestId, ask: StorageAsk, expiry: uint64) {.gcsafe, raises: [].} ask: StorageAsk,
OnFulfillment* = proc(requestId: RequestId) {.gcsafe, raises: [].} expiry: UInt256) {.gcsafe, upraises:[].}
OnSlotFilled* = proc(requestId: RequestId, slotIndex: uint64) {.gcsafe, raises: [].} OnFulfillment* = proc(requestId: RequestId) {.gcsafe, upraises: [].}
OnSlotFreed* = proc(requestId: RequestId, slotIndex: uint64) {.gcsafe, raises: [].} OnSlotFilled* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises:[].}
OnSlotReservationsFull* = OnSlotFreed* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
proc(requestId: RequestId, slotIndex: uint64) {.gcsafe, raises: [].} OnSlotReservationsFull* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
OnRequestCancelled* = proc(requestId: RequestId) {.gcsafe, raises: [].} OnRequestCancelled* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
OnRequestFailed* = proc(requestId: RequestId) {.gcsafe, raises: [].} OnRequestFailed* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
OnProofSubmitted* = proc(id: SlotId) {.gcsafe, raises: [].} OnProofSubmitted* = proc(id: SlotId) {.gcsafe, upraises:[].}
ProofChallenge* = array[32, byte] ProofChallenge* = array[32, byte]
# Marketplace events -- located here due to the Market abstraction # Marketplace events -- located here due to the Market abstraction
@ -38,68 +36,38 @@ type
StorageRequested* = object of MarketplaceEvent StorageRequested* = object of MarketplaceEvent
requestId*: RequestId requestId*: RequestId
ask*: StorageAsk ask*: StorageAsk
expiry*: uint64 expiry*: UInt256
SlotFilled* = object of MarketplaceEvent SlotFilled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId requestId* {.indexed.}: RequestId
slotIndex*: uint64 slotIndex*: UInt256
SlotFreed* = object of MarketplaceEvent SlotFreed* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId requestId* {.indexed.}: RequestId
slotIndex*: uint64 slotIndex*: UInt256
SlotReservationsFull* = object of MarketplaceEvent SlotReservationsFull* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId requestId* {.indexed.}: RequestId
slotIndex*: uint64 slotIndex*: UInt256
RequestFulfilled* = object of MarketplaceEvent RequestFulfilled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId requestId* {.indexed.}: RequestId
RequestCancelled* = object of MarketplaceEvent RequestCancelled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId requestId* {.indexed.}: RequestId
RequestFailed* = object of MarketplaceEvent RequestFailed* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId requestId* {.indexed.}: RequestId
ProofSubmitted* = object of MarketplaceEvent ProofSubmitted* = object of MarketplaceEvent
id*: SlotId id*: SlotId
method loadConfig*( method getZkeyHash*(market: Market): Future[?string] {.base, async.} =
market: Market
): Future[?!void] {.base, async: (raises: [CancelledError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getZkeyHash*( method getSigner*(market: Market): Future[Address] {.base, async.} =
market: Market
): Future[?string] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getSigner*( method periodicity*(market: Market): Future[Periodicity] {.base, async.} =
market: Market
): Future[Address] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method periodicity*( method proofTimeout*(market: Market): Future[UInt256] {.base, async.} =
market: Market
): Future[Periodicity] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method proofTimeout*( method proofDowntime*(market: Market): Future[uint8] {.base, async.} =
market: Market
): Future[uint64] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented")
method repairRewardPercentage*(
market: Market
): Future[uint8] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented")
method requestDurationLimit*(market: Market): Future[uint64] {.base, async.} =
raiseAssert("not implemented")
method proofDowntime*(
market: Market
): Future[uint8] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getPointer*(market: Market, slotId: SlotId): Future[uint8] {.base, async.} = method getPointer*(market: Market, slotId: SlotId): Future[uint8] {.base, async.} =
@ -110,9 +78,8 @@ proc inDowntime*(market: Market, slotId: SlotId): Future[bool] {.async.} =
let pntr = await market.getPointer(slotId) let pntr = await market.getPointer(slotId)
return pntr < downtime return pntr < downtime
method requestStorage*( method requestStorage*(market: Market,
market: Market, request: StorageRequest request: StorageRequest) {.base, async.} =
) {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method myRequests*(market: Market): Future[seq[RequestId]] {.base, async.} = method myRequests*(market: Market): Future[seq[RequestId]] {.base, async.} =
@ -121,193 +88,163 @@ method myRequests*(market: Market): Future[seq[RequestId]] {.base, async.} =
method mySlots*(market: Market): Future[seq[SlotId]] {.base, async.} = method mySlots*(market: Market): Future[seq[SlotId]] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getRequest*( method getRequest*(market: Market,
market: Market, id: RequestId id: RequestId):
): Future[?StorageRequest] {.base, async: (raises: [CancelledError]).} = Future[?StorageRequest] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method requestState*( method requestState*(market: Market,
market: Market, requestId: RequestId requestId: RequestId): Future[?RequestState] {.base, async.} =
): Future[?RequestState] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method slotState*( method slotState*(market: Market,
market: Market, slotId: SlotId slotId: SlotId): Future[SlotState] {.base, async.} =
): Future[SlotState] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getRequestEnd*( method getRequestEnd*(market: Market,
market: Market, id: RequestId id: RequestId): Future[SecondsSince1970] {.base, async.} =
): Future[SecondsSince1970] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method requestExpiresAt*( method requestExpiresAt*(market: Market,
market: Market, id: RequestId id: RequestId): Future[SecondsSince1970] {.base, async.} =
): Future[SecondsSince1970] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getHost*( method getHost*(market: Market,
market: Market, requestId: RequestId, slotIndex: uint64 requestId: RequestId,
): Future[?Address] {.base, async: (raises: [CancelledError, MarketError]).} = slotIndex: UInt256): Future[?Address] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method currentCollateral*( method getActiveSlot*(
market: Market, slotId: SlotId market: Market,
): Future[UInt256] {.base, async: (raises: [MarketError, CancelledError]).} = slotId: SlotId): Future[?Slot] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getActiveSlot*(market: Market, slotId: SlotId): Future[?Slot] {.base, async.} = method fillSlot*(market: Market,
requestId: RequestId,
slotIndex: UInt256,
proof: Groth16Proof,
collateral: UInt256) {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method fillSlot*( method freeSlot*(market: Market, slotId: SlotId) {.base, async.} =
market: Market,
requestId: RequestId,
slotIndex: uint64,
proof: Groth16Proof,
collateral: UInt256,
) {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method freeSlot*( method withdrawFunds*(market: Market,
market: Market, slotId: SlotId requestId: RequestId) {.base, async.} =
) {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method withdrawFunds*( method subscribeRequests*(market: Market,
market: Market, requestId: RequestId callback: OnRequest):
) {.base, async: (raises: [CancelledError, MarketError]).} = Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeRequests*( method isProofRequired*(market: Market,
market: Market, callback: OnRequest id: SlotId): Future[bool] {.base, async.} =
): Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method isProofRequired*(market: Market, id: SlotId): Future[bool] {.base, async.} = method willProofBeRequired*(market: Market,
id: SlotId): Future[bool] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method willProofBeRequired*(market: Market, id: SlotId): Future[bool] {.base, async.} = method getChallenge*(market: Market, id: SlotId): Future[ProofChallenge] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method getChallenge*( method submitProof*(market: Market,
market: Market, id: SlotId id: SlotId,
): Future[ProofChallenge] {.base, async.} = proof: Groth16Proof) {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method submitProof*( method markProofAsMissing*(market: Market,
market: Market, id: SlotId, proof: Groth16Proof id: SlotId,
) {.base, async: (raises: [CancelledError, MarketError]).} = period: Period) {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method markProofAsMissing*( method canProofBeMarkedAsMissing*(market: Market,
market: Market, id: SlotId, period: Period id: SlotId,
) {.base, async: (raises: [CancelledError, MarketError]).} = period: Period): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method canMarkProofAsMissing*(
market: Market, id: SlotId, period: Period
): Future[bool] {.base, async: (raises: [CancelledError]).} =
raiseAssert("not implemented") raiseAssert("not implemented")
method reserveSlot*( method reserveSlot*(
market: Market, requestId: RequestId, slotIndex: uint64 market: Market,
) {.base, async: (raises: [CancelledError, MarketError]).} = requestId: RequestId,
slotIndex: UInt256) {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method canReserveSlot*( method canReserveSlot*(
market: Market, requestId: RequestId, slotIndex: uint64 market: Market,
): Future[bool] {.base, async.} = requestId: RequestId,
slotIndex: UInt256): Future[bool] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeFulfillment*( method subscribeFulfillment*(market: Market,
market: Market, callback: OnFulfillment callback: OnFulfillment):
): Future[Subscription] {.base, async.} = Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeFulfillment*( method subscribeFulfillment*(market: Market,
market: Market, requestId: RequestId, callback: OnFulfillment requestId: RequestId,
): Future[Subscription] {.base, async.} = callback: OnFulfillment):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeSlotFilled*( method subscribeSlotFilled*(market: Market,
market: Market, callback: OnSlotFilled callback: OnSlotFilled):
): Future[Subscription] {.base, async.} = Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeSlotFilled*( method subscribeSlotFilled*(market: Market,
market: Market, requestId: RequestId, slotIndex: uint64, callback: OnSlotFilled requestId: RequestId,
): Future[Subscription] {.base, async.} = slotIndex: UInt256,
callback: OnSlotFilled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeSlotFreed*( method subscribeSlotFreed*(market: Market,
market: Market, callback: OnSlotFreed callback: OnSlotFreed):
): Future[Subscription] {.base, async.} = Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeSlotReservationsFull*( method subscribeSlotReservationsFull*(
market: Market, callback: OnSlotReservationsFull market: Market,
): Future[Subscription] {.base, async.} = callback: OnSlotReservationsFull): Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeRequestCancelled*( method subscribeRequestCancelled*(market: Market,
market: Market, callback: OnRequestCancelled callback: OnRequestCancelled):
): Future[Subscription] {.base, async.} = Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeRequestCancelled*( method subscribeRequestCancelled*(market: Market,
market: Market, requestId: RequestId, callback: OnRequestCancelled requestId: RequestId,
): Future[Subscription] {.base, async.} = callback: OnRequestCancelled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeRequestFailed*( method subscribeRequestFailed*(market: Market,
market: Market, callback: OnRequestFailed callback: OnRequestFailed):
): Future[Subscription] {.base, async.} = Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeRequestFailed*( method subscribeRequestFailed*(market: Market,
market: Market, requestId: RequestId, callback: OnRequestFailed requestId: RequestId,
): Future[Subscription] {.base, async.} = callback: OnRequestFailed):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method subscribeProofSubmission*( method subscribeProofSubmission*(market: Market,
market: Market, callback: OnProofSubmitted callback: OnProofSubmitted):
): Future[Subscription] {.base, async.} = Future[Subscription] {.base, async.} =
raiseAssert("not implemented") raiseAssert("not implemented")
method unsubscribe*(subscription: Subscription) {.base, async.} = method unsubscribe*(subscription: Subscription) {.base, async, upraises:[].} =
raiseAssert("not implemented") raiseAssert("not implemented")
method queryPastSlotFilledEvents*( method queryPastEvents*[T: MarketplaceEvent](
market: Market, fromBlock: BlockTag market: Market,
): Future[seq[SlotFilled]] {.base, async.} = _: type T,
raiseAssert("not implemented") blocksAgo: int): Future[seq[T]] {.base, async.} =
method queryPastSlotFilledEvents*(
market: Market, blocksAgo: int
): Future[seq[SlotFilled]] {.base, async.} =
raiseAssert("not implemented")
method queryPastSlotFilledEvents*(
market: Market, fromTime: SecondsSince1970
): Future[seq[SlotFilled]] {.base, async.} =
raiseAssert("not implemented")
method queryPastStorageRequestedEvents*(
market: Market, fromBlock: BlockTag
): Future[seq[StorageRequested]] {.base, async.} =
raiseAssert("not implemented")
method queryPastStorageRequestedEvents*(
market: Market, blocksAgo: int
): Future[seq[StorageRequested]] {.base, async.} =
raiseAssert("not implemented")
method slotCollateral*(
market: Market, requestId: RequestId, slotIndex: uint64
): Future[?!UInt256] {.base, async: (raises: [CancelledError]).} =
raiseAssert("not implemented")
method slotCollateral*(
market: Market, collateralPerSlot: UInt256, slotState: SlotState
): ?!UInt256 {.base, gcsafe, raises: [].} =
raiseAssert("not implemented") raiseAssert("not implemented")
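Every method in this file follows one pattern: the abstract `Market` declares a `{.base.}` method that raises, a concrete market overrides it, and callers dispatch dynamically through the base type. A minimal, self-contained sketch of that pattern with toy types (no chronos/async, not the real API):

# Sketch of the {.base.} method pattern behind Market.
type
  ToyMarket = ref object of RootObj
  ToyMockMarket = ref object of ToyMarket
    reserved: seq[uint64]

method reserveSlot(m: ToyMarket, slotIndex: uint64) {.base.} =
  raiseAssert("not implemented")

method reserveSlot(m: ToyMockMarket, slotIndex: uint64) =
  m.reserved.add slotIndex

when isMainModule:
  let market: ToyMarket = ToyMockMarket()
  market.reserveSlot(3) # dispatches to the ToyMockMarket override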
@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH ## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,7 +7,9 @@
## This file may not be copied, modified, or distributed except according to ## This file may not be copied, modified, or distributed except according to
## those terms. ## those terms.
{.push raises: [], gcsafe.} import pkg/upraises
push: {.upraises: [].}
import pkg/libp2p import pkg/libp2p
import pkg/questionable import pkg/questionable
@ -24,11 +26,11 @@ const MaxMerkleTreeSize = 100.MiBs.uint
const MaxMerkleProofSize = 1.MiBs.uint const MaxMerkleProofSize = 1.MiBs.uint
proc encode*(self: CodexTree): seq[byte] = proc encode*(self: CodexTree): seq[byte] =
var pb = initProtoBuffer() var pb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
pb.write(1, self.mcodec.uint64) pb.write(1, self.mcodec.uint64)
pb.write(2, self.leavesCount.uint64) pb.write(2, self.leavesCount.uint64)
for node in self.nodes: for node in self.nodes:
var nodesPb = initProtoBuffer() var nodesPb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
nodesPb.write(1, node) nodesPb.write(1, node)
nodesPb.finish() nodesPb.finish()
pb.write(3, nodesPb) pb.write(3, nodesPb)
@ -37,11 +39,11 @@ proc encode*(self: CodexTree): seq[byte] =
pb.buffer pb.buffer
proc decode*(_: type CodexTree, data: seq[byte]): ?!CodexTree = proc decode*(_: type CodexTree, data: seq[byte]): ?!CodexTree =
var pb = initProtoBuffer(data) var pb = initProtoBuffer(data, maxSize = MaxMerkleTreeSize)
var mcodecCode: uint64 var mcodecCode: uint64
var leavesCount: uint64 var leavesCount: uint64
discard ?pb.getField(1, mcodecCode).mapFailure discard ? pb.getField(1, mcodecCode).mapFailure
discard ?pb.getField(2, leavesCount).mapFailure discard ? pb.getField(2, leavesCount).mapFailure
let mcodec = MultiCodec.codec(mcodecCode.int) let mcodec = MultiCodec.codec(mcodecCode.int)
if mcodec == InvalidMultiCodec: if mcodec == InvalidMultiCodec:
@ -51,22 +53,22 @@ proc decode*(_: type CodexTree, data: seq[byte]): ?!CodexTree =
nodesBuff: seq[seq[byte]] nodesBuff: seq[seq[byte]]
nodes: seq[ByteHash] nodes: seq[ByteHash]
if ?pb.getRepeatedField(3, nodesBuff).mapFailure: if ? pb.getRepeatedField(3, nodesBuff).mapFailure:
for nodeBuff in nodesBuff: for nodeBuff in nodesBuff:
var node: ByteHash var node: ByteHash
discard ?initProtoBuffer(nodeBuff).getField(1, node).mapFailure discard ? initProtoBuffer(nodeBuff).getField(1, node).mapFailure
nodes.add node nodes.add node
CodexTree.fromNodes(mcodec, nodes, leavesCount.int) CodexTree.fromNodes(mcodec, nodes, leavesCount.int)
proc encode*(self: CodexProof): seq[byte] = proc encode*(self: CodexProof): seq[byte] =
var pb = initProtoBuffer() var pb = initProtoBuffer(maxSize = MaxMerkleProofSize)
pb.write(1, self.mcodec.uint64) pb.write(1, self.mcodec.uint64)
pb.write(2, self.index.uint64) pb.write(2, self.index.uint64)
pb.write(3, self.nleaves.uint64) pb.write(3, self.nleaves.uint64)
for node in self.path: for node in self.path:
var nodesPb = initProtoBuffer() var nodesPb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
nodesPb.write(1, node) nodesPb.write(1, node)
nodesPb.finish() nodesPb.finish()
pb.write(4, nodesPb) pb.write(4, nodesPb)
@ -75,33 +77,36 @@ proc encode*(self: CodexProof): seq[byte] =
pb.buffer pb.buffer
proc decode*(_: type CodexProof, data: seq[byte]): ?!CodexProof = proc decode*(_: type CodexProof, data: seq[byte]): ?!CodexProof =
var pb = initProtoBuffer(data) var pb = initProtoBuffer(data, maxSize = MaxMerkleProofSize)
var mcodecCode: uint64 var mcodecCode: uint64
var index: uint64 var index: uint64
var nleaves: uint64 var nleaves: uint64
discard ?pb.getField(1, mcodecCode).mapFailure discard ? pb.getField(1, mcodecCode).mapFailure
let mcodec = MultiCodec.codec(mcodecCode.int) let mcodec = MultiCodec.codec(mcodecCode.int)
if mcodec == InvalidMultiCodec: if mcodec == InvalidMultiCodec:
return failure("Invalid MultiCodec code " & $mcodecCode) return failure("Invalid MultiCodec code " & $mcodecCode)
discard ?pb.getField(2, index).mapFailure discard ? pb.getField(2, index).mapFailure
discard ?pb.getField(3, nleaves).mapFailure discard ? pb.getField(3, nleaves).mapFailure
var var
nodesBuff: seq[seq[byte]] nodesBuff: seq[seq[byte]]
nodes: seq[ByteHash] nodes: seq[ByteHash]
if ?pb.getRepeatedField(4, nodesBuff).mapFailure: if ? pb.getRepeatedField(4, nodesBuff).mapFailure:
for nodeBuff in nodesBuff: for nodeBuff in nodesBuff:
var node: ByteHash var node: ByteHash
let nodePb = initProtoBuffer(nodeBuff) let nodePb = initProtoBuffer(nodeBuff)
discard ?nodePb.getField(1, node).mapFailure discard ? nodePb.getField(1, node).mapFailure
nodes.add node nodes.add node
CodexProof.init(mcodec, index.int, nleaves.int, nodes) CodexProof.init(mcodec, index.int, nleaves.int, nodes)
proc fromJson*(_: type CodexProof, json: JsonNode): ?!CodexProof = proc fromJson*(
_: type CodexProof,
json: JsonNode
): ?!CodexProof =
expectJsonKind(Cid, JString, json) expectJsonKind(Cid, JString, json)
var bytes: seq[byte] var bytes: seq[byte]
try: try:
@ -111,5 +116,4 @@ proc fromJson*(_: type CodexProof, json: JsonNode): ?!CodexProof =
CodexProof.decode(bytes) CodexProof.decode(bytes)
func `%`*(proof: CodexProof): JsonNode = func `%`*(proof: CodexProof): JsonNode = % byteutils.toHex(proof.encode())
%byteutils.toHex(proof.encode())
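The encode/decode pair above gains `maxSize` bounds so a hostile message cannot force unbounded buffering. For readers unfamiliar with the wire format underneath, here is a stdlib-only sketch of the LEB128 varints that protobuf fields are built from, with the same kind of size guard (toy cap and names, not the libp2p API):

const MaxMsgSize = 1024 # hypothetical cap, in the spirit of MaxMerkleProofSize

proc putUvarint(buf: var seq[byte], x: uint64) =
  # emit 7 bits per byte, high bit set on all but the last byte
  var v = x
  while v >= 0x80'u64:
    buf.add byte((v and 0x7f'u64) or 0x80'u64)
    v = v shr 7
  buf.add byte(v)

proc getUvarint(buf: openArray[byte], pos: var int): uint64 =
  var shift = 0
  while pos < buf.len:
    let b = buf[pos]
    inc pos
    result = result or (uint64(b and 0x7f) shl shift)
    if (b and 0x80) == 0:
      return
    shift += 7
  raise newException(ValueError, "truncated varint")

when isMainModule:
  var buf: seq[byte]
  putUvarint(buf, 300)
  doAssert buf.len <= MaxMsgSize # reject oversized input before parsing
  var pos = 0
  doAssert getUvarint(buf, pos) == 300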
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2023 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -15,7 +15,7 @@ import std/sequtils
 import pkg/questionable
 import pkg/questionable/results
 import pkg/libp2p/[cid, multicodec, multihash]
-import pkg/constantine/hashes
+
 import ../../utils
 import ../../rng
 import ../../errors
@@ -32,10 +32,10 @@ logScope:
 type
   ByteTreeKey* {.pure.} = enum
     KeyNone = 0x0.byte
    KeyBottomLayer = 0x1.byte
    KeyOdd = 0x2.byte
    KeyOddAndBottomLayer = 0x3.byte

   ByteHash* = seq[byte]
   ByteTree* = MerkleTree[ByteHash, ByteTreeKey]
@@ -47,10 +47,26 @@ type
   CodexProof* = ref object of ByteProof
     mcodec*: MultiCodec

-func getProof*(self: CodexTree, index: int): ?!CodexProof =
-  var proof = CodexProof(mcodec: self.mcodec)
-
-  ?self.getProof(index, proof)
+func mhash*(mcodec: MultiCodec): ?!MHash =
+  let
+    mhash = CodeHashes.getOrDefault(mcodec)
+
+  if isNil(mhash.coder):
+    return failure "Invalid multihash codec"
+
+  success mhash
+
+func digestSize*(self: (CodexTree or CodexProof)): int =
+  ## Number of leaves
+  ##
+  self.mhash.size
+
+func getProof*(self: CodexTree, index: int): ?!CodexProof =
+  var
+    proof = CodexProof(mcodec: self.mcodec)
+
+  ? self.getProof(index, proof)

   success proof
@@ -62,113 +78,137 @@ func verify*(self: CodexProof, leaf: MultiHash, root: MultiHash): ?!bool =
     rootBytes = root.digestBytes
     leafBytes = leaf.digestBytes

-  if self.mcodec != root.mcodec or self.mcodec != leaf.mcodec:
+  if self.mcodec != root.mcodec or
+    self.mcodec != leaf.mcodec:
     return failure "Hash codec mismatch"

-  if rootBytes.len != root.size and leafBytes.len != leaf.size:
+  if rootBytes.len != root.size and
+    leafBytes.len != leaf.size:
     return failure "Invalid hash length"

   self.verify(leafBytes, rootBytes)

 func verify*(self: CodexProof, leaf: Cid, root: Cid): ?!bool =
-  self.verify(?leaf.mhash.mapFailure, ?leaf.mhash.mapFailure)
+  self.verify(? leaf.mhash.mapFailure, ? leaf.mhash.mapFailure)

-proc rootCid*(self: CodexTree, version = CIDv1, dataCodec = DatasetRootCodec): ?!Cid =
-  if (?self.root).len == 0:
+proc rootCid*(
+  self: CodexTree,
+  version = CIDv1,
+  dataCodec = DatasetRootCodec): ?!Cid =
+  if (? self.root).len == 0:
     return failure "Empty root"

-  let mhash = ?MultiHash.init(self.mcodec, ?self.root).mapFailure
+  let
+    mhash = ? MultiHash.init(self.mcodec, ? self.root).mapFailure

   Cid.init(version, DatasetRootCodec, mhash).mapFailure

 func getLeafCid*(
-    self: CodexTree, i: Natural, version = CIDv1, dataCodec = BlockCodec
-): ?!Cid =
+  self: CodexTree,
+  i: Natural,
+  version = CIDv1,
+  dataCodec = BlockCodec): ?!Cid =
   if i >= self.leavesCount:
     return failure "Invalid leaf index " & $i

   let
     leaf = self.leaves[i]
-    mhash = ?MultiHash.init($self.mcodec, leaf).mapFailure
+    mhash = ? MultiHash.init($self.mcodec, leaf).mapFailure

   Cid.init(version, dataCodec, mhash).mapFailure

 proc `$`*(self: CodexTree): string =
-  let root =
-    if self.root.isOk:
-      byteutils.toHex(self.root.get)
-    else:
-      "none"
-  "CodexTree(" & " root: " & root & ", leavesCount: " & $self.leavesCount & ", levels: " &
-    $self.levels & ", mcodec: " & $self.mcodec & " )"
+  let root = if self.root.isOk: byteutils.toHex(self.root.get) else: "none"
+  "CodexTree(" &
+    " root: " & root &
+    ", leavesCount: " & $self.leavesCount &
+    ", levels: " & $self.levels &
+    ", mcodec: " & $self.mcodec & " )"

 proc `$`*(self: CodexProof): string =
-  "CodexProof(" & " nleaves: " & $self.nleaves & ", index: " & $self.index & ", path: " &
-    $self.path.mapIt(byteutils.toHex(it)) & ", mcodec: " & $self.mcodec & " )"
+  "CodexProof(" &
+    " nleaves: " & $self.nleaves &
+    ", index: " & $self.index &
+    ", path: " & $self.path.mapIt( byteutils.toHex(it) ) &
+    ", mcodec: " & $self.mcodec & " )"

-func compress*(x, y: openArray[byte], key: ByteTreeKey, codec: MultiCodec): ?!ByteHash =
+func compress*(
+  x, y: openArray[byte],
+  key: ByteTreeKey,
+  mhash: MHash): ?!ByteHash =
   ## Compress two hashes
   ##

-  let input = @x & @y & @[key.byte]
-  let digest = ?MultiHash.digest(codec, input).mapFailure
-  success digest.digestBytes
+  var digest = newSeq[byte](mhash.size)
+  mhash.coder(@x & @y & @[ key.byte ], digest)
+
+  success digest

 func init*(
-    _: type CodexTree, mcodec: MultiCodec = Sha256HashCodec, leaves: openArray[ByteHash]
-): ?!CodexTree =
+  _: type CodexTree,
+  mcodec: MultiCodec = Sha256HashCodec,
+  leaves: openArray[ByteHash]): ?!CodexTree =
   if leaves.len == 0:
     return failure "Empty leaves"

   let
+    mhash = ? mcodec.mhash()
     compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} =
-      compress(x, y, key, mcodec)
-    digestSize = ?mcodec.digestSize.mapFailure
-    Zero: ByteHash = newSeq[byte](digestSize)
+      compress(x, y, key, mhash)
+    Zero: ByteHash = newSeq[byte](mhash.size)

-  if digestSize != leaves[0].len:
+  if mhash.size != leaves[0].len:
     return failure "Invalid hash length"

-  var self = CodexTree(mcodec: mcodec, compress: compressor, zero: Zero)
+  var
+    self = CodexTree(mcodec: mcodec, compress: compressor, zero: Zero)

-  self.layers = ?merkleTreeWorker(self, leaves, isBottomLayer = true)
+  self.layers = ? merkleTreeWorker(self, leaves, isBottomLayer = true)
   success self

-func init*(_: type CodexTree, leaves: openArray[MultiHash]): ?!CodexTree =
+func init*(
+  _: type CodexTree,
+  leaves: openArray[MultiHash]): ?!CodexTree =
   if leaves.len == 0:
     return failure "Empty leaves"

   let
     mcodec = leaves[0].mcodec
-    leaves = leaves.mapIt(it.digestBytes)
+    leaves = leaves.mapIt( it.digestBytes )

   CodexTree.init(mcodec, leaves)

-func init*(_: type CodexTree, leaves: openArray[Cid]): ?!CodexTree =
+func init*(
+  _: type CodexTree,
+  leaves: openArray[Cid]): ?!CodexTree =
   if leaves.len == 0:
     return failure "Empty leaves"

   let
-    mcodec = (?leaves[0].mhash.mapFailure).mcodec
-    leaves = leaves.mapIt((?it.mhash.mapFailure).digestBytes)
+    mcodec = (? leaves[0].mhash.mapFailure).mcodec
+    leaves = leaves.mapIt( (? it.mhash.mapFailure).digestBytes )

   CodexTree.init(mcodec, leaves)

 proc fromNodes*(
-    _: type CodexTree,
-    mcodec: MultiCodec = Sha256HashCodec,
-    nodes: openArray[ByteHash],
-    nleaves: int,
-): ?!CodexTree =
+  _: type CodexTree,
+  mcodec: MultiCodec = Sha256HashCodec,
+  nodes: openArray[ByteHash],
+  nleaves: int): ?!CodexTree =
   if nodes.len == 0:
     return failure "Empty nodes"

   let
-    digestSize = ?mcodec.digestSize.mapFailure
-    Zero = newSeq[byte](digestSize)
+    mhash = ? mcodec.mhash()
+    Zero = newSeq[byte](mhash.size)
     compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} =
-      compress(x, y, key, mcodec)
+      compress(x, y, key, mhash)

-  if digestSize != nodes[0].len:
+  if mhash.size != nodes[0].len:
     return failure "Invalid hash length"

   var
@@ -177,34 +217,34 @@ proc fromNodes*(
     pos = 0

   while pos < nodes.len:
-    self.layers.add(nodes[pos ..< (pos + layer)])
+    self.layers.add( nodes[pos..<(pos + layer)] )
     pos += layer
     layer = divUp(layer, 2)

   let
     index = Rng.instance.rand(nleaves - 1)
-    proof = ?self.getProof(index)
+    proof = ? self.getProof(index)

-  if not ?proof.verify(self.leaves[index], ?self.root): # sanity check
+  if not ? proof.verify(self.leaves[index], ? self.root): # sanity check
     return failure "Unable to verify tree built from nodes"

   success self

 func init*(
-    _: type CodexProof,
-    mcodec: MultiCodec = Sha256HashCodec,
-    index: int,
-    nleaves: int,
-    nodes: openArray[ByteHash],
-): ?!CodexProof =
+  _: type CodexProof,
+  mcodec: MultiCodec = Sha256HashCodec,
+  index: int,
+  nleaves: int,
+  nodes: openArray[ByteHash]): ?!CodexProof =
   if nodes.len == 0:
     return failure "Empty nodes"

   let
-    digestSize = ?mcodec.digestSize.mapFailure
-    Zero = newSeq[byte](digestSize)
+    mhash = ? mcodec.mhash()
+    Zero = newSeq[byte](mhash.size)
     compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!seq[byte] {.noSideEffect.} =
-      compress(x, y, key, mcodec)
+      compress(x, y, key, mhash)

   success CodexProof(
     compress: compressor,
@@ -212,5 +252,4 @@ func init*(
     mcodec: mcodec,
     index: index,
     nleaves: nleaves,
-    path: @nodes,
-  )
+    path: @nodes)
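The `fromNodes` constructors above rebuild a tree from a flat node buffer: the bottom layer is `nleaves` wide and each layer above is `divUp(n, 2)` wide. A standalone sketch of just that slicing logic (toy names, generic element type):

# Sketch of the layer-slicing step in fromNodes.
proc divUp(a, b: int): int = (a + b - 1) div b

proc sliceLayers[T](nodes: seq[T], nleaves: int): seq[seq[T]] =
  var width = nleaves
  var pos = 0
  while pos < nodes.len:
    result.add nodes[pos ..< pos + width]
    pos += width
    width = divUp(width, 2)

when isMainModule:
  # 4 leaves -> layer widths 4, 2, 1: seven nodes in total
  doAssert sliceLayers(@[1, 2, 3, 4, 5, 6, 7], 4) == @[@[1, 2, 3, 4], @[5, 6], @[7]]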
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2023 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -16,19 +16,19 @@ import pkg/questionable/results
 import ../errors

 type
-  CompressFn*[H, K] = proc(x, y: H, key: K): ?!H {.noSideEffect, raises: [].}
+  CompressFn*[H, K] = proc (x, y: H, key: K): ?!H {.noSideEffect, raises: [].}

   MerkleTree*[H, K] = ref object of RootObj
-    layers*: seq[seq[H]]
+    layers* : seq[seq[H]]
     compress*: CompressFn[H, K]
-    zero*: H
+    zero* : H

   MerkleProof*[H, K] = ref object of RootObj
-    index*: int # linear index of the leaf, starting from 0
-    path*: seq[H] # order: from the bottom to the top
-    nleaves*: int # number of leaves in the tree (=size of input)
+    index* : int # linear index of the leaf, starting from 0
+    path* : seq[H] # order: from the bottom to the top
+    nleaves* : int # number of leaves in the tree (=size of input)
     compress*: CompressFn[H, K] # compress function
-    zero*: H # zero value
+    zero* : H # zero value

 func depth*[H, K](self: MerkleTree[H, K]): int =
   return self.layers.len - 1
@@ -59,38 +59,36 @@ func root*[H, K](self: MerkleTree[H, K]): ?!H =
   return success last[0]

 func getProof*[H, K](
-    self: MerkleTree[H, K], index: int, proof: MerkleProof[H, K]
-): ?!void =
+  self: MerkleTree[H, K],
+  index: int,
+  proof: MerkleProof[H, K]): ?!void =
   let depth = self.depth
   let nleaves = self.leavesCount

   if not (index >= 0 and index < nleaves):
     return failure "index out of bounds"

-  var path: seq[H] = newSeq[H](depth)
+  var path : seq[H] = newSeq[H](depth)
   var k = index
   var m = nleaves

-  for i in 0 ..< depth:
+  for i in 0..<depth:
     let j = k xor 1
-    path[i] =
-      if (j < m):
-        self.layers[i][j]
-      else:
-        self.zero
-    k = k shr 1
+    path[i] = if (j < m): self.layers[i][j] else: self.zero
+    k = k shr 1
     m = (m + 1) shr 1

   proof.index = index
   proof.path = path
   proof.nleaves = nleaves
   proof.compress = self.compress

   success()

 func getProof*[H, K](self: MerkleTree[H, K], index: int): ?!MerkleProof[H, K] =
-  var proof = MerkleProof[H, K]()
+  var
+    proof = MerkleProof[H, K]()

-  ?self.getProof(index, proof)
+  ? self.getProof(index, proof)

   success proof
@@ -102,39 +100,41 @@ func reconstructRoot*[H, K](proof: MerkleProof[H, K], leaf: H): ?!H =
     bottomFlag = K.KeyBottomLayer

   for p in proof.path:
-    let oddIndex: bool = (bitand(j, 1) != 0)
+    let oddIndex : bool = (bitand(j,1) != 0)

     if oddIndex:
       # the index of the child is odd, so the node itself can't be odd (a bit counterintuitive, yeah :)
-      h = ?proof.compress(p, h, bottomFlag)
+      h = ? proof.compress( p, h, bottomFlag )
     else:
       if j == m - 1:
         # single child => odd node
-        h = ?proof.compress(h, p, K(bottomFlag.ord + 2))
+        h = ? proof.compress( h, p, K(bottomFlag.ord + 2) )
       else:
         # even node
-        h = ?proof.compress(h, p, bottomFlag)
+        h = ? proof.compress( h , p, bottomFlag )

     bottomFlag = K.KeyNone
     j = j shr 1
-    m = (m + 1) shr 1
+    m = (m+1) shr 1

   return success h

 func verify*[H, K](proof: MerkleProof[H, K], leaf: H, root: H): ?!bool =
-  success bool(root == ?proof.reconstructRoot(leaf))
+  success bool(root == ? proof.reconstructRoot(leaf))

 func merkleTreeWorker*[H, K](
-    self: MerkleTree[H, K], xs: openArray[H], isBottomLayer: static bool
-): ?!seq[seq[H]] =
+  self: MerkleTree[H, K],
+  xs: openArray[H],
+  isBottomLayer: static bool): ?!seq[seq[H]] =
   let a = low(xs)
   let b = high(xs)
   let m = b - a + 1

   when not isBottomLayer:
     if m == 1:
-      return success @[@xs]
+      return success @[ @xs ]

   let halfn: int = m div 2
-  let n: int = 2 * halfn
+  let n    : int = 2 * halfn
   let isOdd: bool = (n != m)

   var ys: seq[H]
@@ -143,11 +143,11 @@ func merkleTreeWorker*[H, K](
   else:
     ys = newSeq[H](halfn + 1)

-  for i in 0 ..< halfn:
+  for i in 0..<halfn:
     const key = when isBottomLayer: K.KeyBottomLayer else: K.KeyNone
-    ys[i] = ?self.compress(xs[a + 2 * i], xs[a + 2 * i + 1], key = key)
+    ys[i] = ? self.compress( xs[a + 2 * i], xs[a + 2 * i + 1], key = key )

   if isOdd:
     const key = when isBottomLayer: K.KeyOddAndBottomLayer else: K.KeyOdd
-    ys[halfn] = ?self.compress(xs[n], self.zero, key = key)
+    ys[halfn] = ? self.compress( xs[n], self.zero, key = key )

-  success @[@xs] & ?self.merkleTreeWorker(ys, isBottomLayer = false)
+  success @[ @xs ] & ? self.merkleTreeWorker(ys, isBottomLayer = false)
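To see what `merkleTreeWorker` is doing with its bottom-layer and odd-node keys, here is a standalone sketch under toy assumptions: string concatenation stands in for the compress function, so the tree shape and the domain-separation keys (B for bottom, O for odd, OB for both, N for inner) become visible. None of these names are from the codebase.

# Sketch of the layer-by-layer build with odd-node handling.
proc compress(x, y, key: string): string =
  "(" & x & "|" & y & ":" & key & ")"

proc buildLayers(leaves: seq[string]): seq[seq[string]] =
  result = @[leaves]
  var layer = leaves
  var bottom = true
  while layer.len > 1:
    var next: seq[string]
    for i in countup(0, layer.len - 2, 2):
      next.add compress(layer[i], layer[i + 1], if bottom: "B" else: "N")
    if layer.len mod 2 == 1: # odd node: pair the last child with the zero value
      next.add compress(layer[^1], "0", if bottom: "OB" else: "O")
    result.add next
    layer = next
    bottom = false

when isMainModule:
  let layers = buildLayers(@["a", "b", "c"])
  doAssert layers[^1][0] == "((a|b:B)|(c|0:OB):N)"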
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2023 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -24,10 +24,10 @@ import ./merkletree
 export merkletree, poseidon2

 const
-  KeyNoneF = F.fromHex("0x0")
-  KeyBottomLayerF = F.fromHex("0x1")
-  KeyOddF = F.fromHex("0x2")
-  KeyOddAndBottomLayerF = F.fromHex("0x3")
+  KeyNoneF = F.fromhex("0x0")
+  KeyBottomLayerF = F.fromhex("0x1")
+  KeyOddF = F.fromhex("0x2")
+  KeyOddAndBottomLayerF = F.fromhex("0x3")

   Poseidon2Zero* = zero
@@ -35,7 +35,7 @@ type
   Bn254Fr* = F
   Poseidon2Hash* = Bn254Fr

   PoseidonKeysEnum* = enum # can't use non-ordinals as enum values
     KeyNone
     KeyBottomLayer
     KeyOdd
@@ -46,50 +46,65 @@ type
 proc `$`*(self: Poseidon2Tree): string =
   let root = if self.root.isOk: self.root.get.toHex else: "none"
-  "Poseidon2Tree(" & " root: " & root & ", leavesCount: " & $self.leavesCount &
+  "Poseidon2Tree(" &
+    " root: " & root &
+    ", leavesCount: " & $self.leavesCount &
     ", levels: " & $self.levels & " )"

 proc `$`*(self: Poseidon2Proof): string =
-  "Poseidon2Proof(" & " nleaves: " & $self.nleaves & ", index: " & $self.index &
-    ", path: " & $self.path.mapIt(it.toHex) & " )"
+  "Poseidon2Proof(" &
+    " nleaves: " & $self.nleaves &
+    ", index: " & $self.index &
+    ", path: " & $self.path.mapIt( it.toHex ) & " )"

 func toArray32*(bytes: openArray[byte]): array[32, byte] =
-  result[0 ..< bytes.len] = bytes[0 ..< bytes.len]
+  result[0..<bytes.len] = bytes[0..<bytes.len]

 converter toKey*(key: PoseidonKeysEnum): Poseidon2Hash =
-  case key
+  case key:
   of KeyNone: KeyNoneF
   of KeyBottomLayer: KeyBottomLayerF
   of KeyOdd: KeyOddF
   of KeyOddAndBottomLayer: KeyOddAndBottomLayerF

-func init*(_: type Poseidon2Tree, leaves: openArray[Poseidon2Hash]): ?!Poseidon2Tree =
+func init*(
+  _: type Poseidon2Tree,
+  leaves: openArray[Poseidon2Hash]): ?!Poseidon2Tree =
   if leaves.len == 0:
     return failure "Empty leaves"

-  let compressor = proc(
-      x, y: Poseidon2Hash, key: PoseidonKeysEnum
-  ): ?!Poseidon2Hash {.noSideEffect.} =
-    success compress(x, y, key.toKey)
+  let
+    compressor = proc(
+      x, y: Poseidon2Hash,
+      key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
+      success compress( x, y, key.toKey )

-  var self = Poseidon2Tree(compress: compressor, zero: Poseidon2Zero)
+  var
+    self = Poseidon2Tree(compress: compressor, zero: Poseidon2Zero)

-  self.layers = ?merkleTreeWorker(self, leaves, isBottomLayer = true)
+  self.layers = ? merkleTreeWorker(self, leaves, isBottomLayer = true)
   success self

-func init*(_: type Poseidon2Tree, leaves: openArray[array[31, byte]]): ?!Poseidon2Tree =
-  Poseidon2Tree.init(leaves.mapIt(Poseidon2Hash.fromBytes(it)))
+func init*(
+  _: type Poseidon2Tree,
+  leaves: openArray[array[31, byte]]): ?!Poseidon2Tree =
+  Poseidon2Tree.init(
+    leaves.mapIt( Poseidon2Hash.fromBytes(it) ))

 proc fromNodes*(
-    _: type Poseidon2Tree, nodes: openArray[Poseidon2Hash], nleaves: int
-): ?!Poseidon2Tree =
+  _: type Poseidon2Tree,
+  nodes: openArray[Poseidon2Hash],
+  nleaves: int): ?!Poseidon2Tree =
   if nodes.len == 0:
     return failure "Empty nodes"

-  let compressor = proc(
-      x, y: Poseidon2Hash, key: PoseidonKeysEnum
-  ): ?!Poseidon2Hash {.noSideEffect.} =
-    success compress(x, y, key.toKey)
+  let
+    compressor = proc(
+      x, y: Poseidon2Hash,
+      key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
+      success compress( x, y, key.toKey )

   var
     self = Poseidon2Tree(compress: compressor, zero: zero)
@@ -97,34 +112,37 @@ proc fromNodes*(
     pos = 0

   while pos < nodes.len:
-    self.layers.add(nodes[pos ..< (pos + layer)])
+    self.layers.add( nodes[pos..<(pos + layer)] )
     pos += layer
     layer = divUp(layer, 2)

   let
     index = Rng.instance.rand(nleaves - 1)
-    proof = ?self.getProof(index)
+    proof = ? self.getProof(index)

-  if not ?proof.verify(self.leaves[index], ?self.root): # sanity check
+  if not ? proof.verify(self.leaves[index], ? self.root): # sanity check
     return failure "Unable to verify tree built from nodes"

   success self

 func init*(
-    _: type Poseidon2Proof, index: int, nleaves: int, nodes: openArray[Poseidon2Hash]
-): ?!Poseidon2Proof =
+  _: type Poseidon2Proof,
+  index: int,
+  nleaves: int,
+  nodes: openArray[Poseidon2Hash]): ?!Poseidon2Proof =
   if nodes.len == 0:
     return failure "Empty nodes"

-  let compressor = proc(
-      x, y: Poseidon2Hash, key: PoseidonKeysEnum
-  ): ?!Poseidon2Hash {.noSideEffect.} =
-    success compress(x, y, key.toKey)
+  let
+    compressor = proc(
+      x, y: Poseidon2Hash,
+      key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
+      success compress( x, y, key.toKey )

   success Poseidon2Proof(
     compress: compressor,
     zero: Poseidon2Zero,
     index: index,
     nleaves: nleaves,
-    path: @nodes,
-  )
+    path: @nodes)
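The `converter toKey*` above exists because field elements are not ordinal values, so they cannot be enum values directly; the enum names the key and a converter supplies the constant implicitly at call sites. A tiny sketch of the same trick with plain ints standing in for field elements (all names hypothetical):

# Sketch: enum keys bridged to non-ordinal constants via a converter.
type DemoKey = enum dkNone, dkBottom

const DemoConsts: array[DemoKey, int] = [0, 1] # stand-ins for F.fromHex values

converter toConst(key: DemoKey): int =
  DemoConsts[key]

proc demoCompress(x, y, key: int): int = x + y + key

when isMainModule:
  doAssert demoCompress(1, 2, dkBottom) == 4 # converter applied implicitly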
@@ -1,11 +0,0 @@
const CodecExts = [
("poseidon2-alt_bn_128-sponge-r2", 0xCD10), # bn128 rate 2 sponge
("poseidon2-alt_bn_128-merkle-2kb", 0xCD11), # bn128 2kb compress & merkleize
("poseidon2-alt_bn_128-keyed-compress", 0xCD12), # bn128 keyed compress]
("codex-manifest", 0xCD01),
("codex-block", 0xCD02),
("codex-root", 0xCD03),
("codex-slot-root", 0xCD04),
("codex-proving-root", 0xCD05),
("codex-slot-cell", 0xCD06),
]
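A table like `CodecExts` is typically consumed as a name-to-code lookup when registering custom multicodecs. A toy sketch of that consumption (hypothetical table and helper, not the real registry API):

const DemoCodecs = [("codex-manifest", 0xCD01), ("codex-block", 0xCD02)]

proc codeOf(name: string): int =
  for (n, code) in DemoCodecs:
    if n == name:
      return code
  return -1

when isMainModule:
  doAssert codeOf("codex-block") == 0xCD02
  doAssert codeOf("unknown") == -1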
@@ -1,40 +0,0 @@
import blscurve/bls_public_exports
import pkg/constantine/hashes
import poseidon2
proc sha2_256hash_constantine(data: openArray[byte], output: var openArray[byte]) =
# Using Constantine's SHA256 instead of mhash for optimal performance on 32-byte merkle node hashing
# See: https://github.com/logos-storage/logos-storage-nim/issues/1162
if len(output) > 0:
let digest = hashes.sha256.hash(data)
copyMem(addr output[0], addr digest[0], 32)
proc poseidon2_sponge_rate2(data: openArray[byte], output: var openArray[byte]) =
if len(output) > 0:
var digest = poseidon2.Sponge.digest(data).toBytes()
copyMem(addr output[0], addr digest[0], uint(len(output)))
proc poseidon2_merkle_2kb_sponge(data: openArray[byte], output: var openArray[byte]) =
if len(output) > 0:
var digest = poseidon2.SpongeMerkle.digest(data, 2048).toBytes()
copyMem(addr output[0], addr digest[0], uint(len(output)))
const Sha2256MultiHash* = MHash(
mcodec: multiCodec("sha2-256"),
size: sha256.sizeDigest,
coder: sha2_256hash_constantine,
)
const HashExts = [
# override sha2-256 hash function
Sha2256MultiHash,
MHash(
mcodec: multiCodec("poseidon2-alt_bn_128-sponge-r2"),
size: 32,
coder: poseidon2_sponge_rate2,
),
MHash(
mcodec: multiCodec("poseidon2-alt_bn_128-merkle-2kb"),
size: 32,
coder: poseidon2_merkle_2kb_sponge,
),
]
@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,17 +9,16 @@
 const
   # Namespaces
   CodexMetaNamespace* = "meta" # meta info stored here
   CodexRepoNamespace* = "repo" # repository namespace, blocks and manifests are subkeys
-  CodexBlockTotalNamespace* = CodexMetaNamespace & "/total"
-    # number of blocks in the repo
+  CodexBlockTotalNamespace* = CodexMetaNamespace & "/total" # number of blocks in the repo
   CodexBlocksNamespace* = CodexRepoNamespace & "/blocks" # blocks namespace
   CodexManifestNamespace* = CodexRepoNamespace & "/manifests" # manifest namespace
   CodexBlocksTtlNamespace* = # Cid TTL
     CodexMetaNamespace & "/ttl"
   CodexBlockProofNamespace* = # Cid and Proof
     CodexMetaNamespace & "/proof"
   CodexDhtNamespace* = "dht" # Dht namespace
   CodexDhtProvidersNamespace* = # Dht providers namespace
     CodexDhtNamespace & "/providers"
   CodexQuotaNamespace* = CodexMetaNamespace & "/quota" # quota's namespace
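These constants compose into hierarchical datastore keys, so, for example, a block TTL entry would live under "meta/ttl/<cid>". A trivial sketch of the composition (hypothetical helper, not the real key API):

const
  DemoMeta = "meta"
  DemoTtl = DemoMeta & "/ttl"

proc ttlKey(cid: string): string =
  DemoTtl & "/" & cid

when isMainModule:
  doAssert ttlKey("someCid") == "meta/ttl/someCid"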
@@ -1,432 +0,0 @@
# Copyright (c) 2019-2023 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
{.push raises: [].}
import
std/[options, os, strutils, times, net, atomics],
stew/[objects],
nat_traversal/[miniupnpc, natpmp],
json_serialization/std/net,
results
import pkg/chronos
import pkg/chronicles
import pkg/libp2p
import ./utils
import ./utils/natutils
import ./utils/addrutils
const
UPNP_TIMEOUT = 200 # ms
PORT_MAPPING_INTERVAL = 20 * 60 # seconds
NATPMP_LIFETIME = 60 * 60 # in seconds, must be longer than PORT_MAPPING_INTERVAL
type PortMappings* = object
internalTcpPort: Port
externalTcpPort: Port
internalUdpPort: Port
externalUdpPort: Port
description: string
type PortMappingArgs =
tuple[strategy: NatStrategy, tcpPort, udpPort: Port, description: string]
type NatConfig* = object
case hasExtIp*: bool
of true: extIp*: IpAddress
of false: nat*: NatStrategy
var
upnp {.threadvar.}: Miniupnp
npmp {.threadvar.}: NatPmp
strategy = NatStrategy.NatNone
  natClosed: Atomic[bool]
  extIp: Option[IpAddress]
  activeMappings: seq[PortMappings]
  natThreads: seq[Thread[PortMappingArgs]] = @[]

logScope:
  topics = "nat"

type PrefSrcStatus = enum
  NoRoutingInfo
  PrefSrcIsPublic
  PrefSrcIsPrivate
  BindAddressIsPublic
  BindAddressIsPrivate

## Also does threadvar initialisation.
## Must be called before redirectPorts() in each thread.
proc getExternalIP*(natStrategy: NatStrategy, quiet = false): Option[IpAddress] =
  var externalIP: IpAddress

  if natStrategy == NatStrategy.NatAny or natStrategy == NatStrategy.NatUpnp:
    if upnp == nil:
      upnp = newMiniupnp()

    upnp.discoverDelay = UPNP_TIMEOUT
    let dres = upnp.discover()
    if dres.isErr:
      debug "UPnP", msg = dres.error
    else:
      var
        msg: cstring
        canContinue = true
      case upnp.selectIGD()
      of IGDNotFound:
        msg = "Internet Gateway Device not found. Giving up."
        canContinue = false
      of IGDFound:
        msg = "Internet Gateway Device found."
      of IGDNotConnected:
        msg = "Internet Gateway Device found but it's not connected. Trying anyway."
      of NotAnIGD:
        msg =
          "Some device found, but it's not recognised as an Internet Gateway Device. Trying anyway."
      of IGDIpNotRoutable:
        msg =
          "Internet Gateway Device found and is connected, but with a reserved or non-routable IP. Trying anyway."
      if not quiet:
        debug "UPnP", msg
      if canContinue:
        let ires = upnp.externalIPAddress()
        if ires.isErr:
          debug "UPnP", msg = ires.error
        else:
          # if we got this far, UPnP is working and we don't need to try NAT-PMP
          try:
            externalIP = parseIpAddress(ires.value)
            strategy = NatStrategy.NatUpnp
            return some(externalIP)
          except ValueError as e:
            error "parseIpAddress() exception", err = e.msg
            return

  if natStrategy == NatStrategy.NatAny or natStrategy == NatStrategy.NatPmp:
    if npmp == nil:
      npmp = newNatPmp()

    let nres = npmp.init()
    if nres.isErr:
      debug "NAT-PMP", msg = nres.error
    else:
      let nires = npmp.externalIPAddress()
      if nires.isErr:
        debug "NAT-PMP", msg = nires.error
      else:
        try:
          externalIP = parseIpAddress($(nires.value))
          strategy = NatStrategy.NatPmp
          return some(externalIP)
        except ValueError as e:
          error "parseIpAddress() exception", err = e.msg
          return

# This queries the routing table to get the "preferred source" attribute and
# checks if it's a public IP. If so, then it's our public IP.
#
# Furthermore, we check if the bind address (user provided, or a "0.0.0.0"
# default) is a public IP. That's a long shot, because code paths involving a
# user-provided bind address are not supposed to get here.
proc getRoutePrefSrc(bindIp: IpAddress): (Option[IpAddress], PrefSrcStatus) =
  let bindAddress = initTAddress(bindIp, Port(0))

  if bindAddress.isAnyLocal():
    let ip = getRouteIpv4()
    if ip.isErr():
      # No route was found, log error and continue without IP.
      error "No routable IP address found, check your network connection",
        error = ip.error
      return (none(IpAddress), NoRoutingInfo)
    elif ip.get().isGlobalUnicast():
      return (some(ip.get()), PrefSrcIsPublic)
    else:
      return (none(IpAddress), PrefSrcIsPrivate)
  elif bindAddress.isGlobalUnicast():
    return (some(bindIp), BindAddressIsPublic)
  else:
    return (none(IpAddress), BindAddressIsPrivate)

# Try to detect a public IP assigned to this host, before trying NAT traversal.
proc getPublicRoutePrefSrcOrExternalIP*(
    natStrategy: NatStrategy, bindIp: IpAddress, quiet = true
): Option[IpAddress] =
  let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)

  case prefSrcStatus
  of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
    return prefSrcIp
  of PrefSrcIsPrivate, BindAddressIsPrivate:
    let extIp = getExternalIP(natStrategy, quiet)
    if extIp.isSome:
      return some(extIp.get)

proc doPortMapping(
    strategy: NatStrategy, tcpPort, udpPort: Port, description: string
): Option[(Port, Port)] {.gcsafe.} =
  var
    extTcpPort: Port
    extUdpPort: Port

  if strategy == NatStrategy.NatUpnp:
    for t in [(tcpPort, UPNPProtocol.TCP), (udpPort, UPNPProtocol.UDP)]:
      let
        (port, protocol) = t
        pmres = upnp.addPortMapping(
          externalPort = $port,
          protocol = protocol,
          internalHost = upnp.lanAddr,
          internalPort = $port,
          desc = description,
          leaseDuration = 0,
        )
      if pmres.isErr:
        error "UPnP port mapping", msg = pmres.error, port
        return
      else:
        # let's check it
        let cres =
          upnp.getSpecificPortMapping(externalPort = $port, protocol = protocol)
        if cres.isErr:
          warn "UPnP port mapping check failed. Assuming the check itself is broken and the port mapping was done.",
            msg = cres.error
        info "UPnP: added port mapping",
          externalPort = port, internalPort = port, protocol = protocol
        case protocol
        of UPNPProtocol.TCP:
          extTcpPort = port
        of UPNPProtocol.UDP:
          extUdpPort = port
  elif strategy == NatStrategy.NatPmp:
    for t in [(tcpPort, NatPmpProtocol.TCP), (udpPort, NatPmpProtocol.UDP)]:
      let
        (port, protocol) = t
        pmres = npmp.addPortMapping(
          eport = port.cushort,
          iport = port.cushort,
          protocol = protocol,
          lifetime = NATPMP_LIFETIME,
        )
      if pmres.isErr:
        error "NAT-PMP port mapping", msg = pmres.error, port
        return
      else:
        let extPort = Port(pmres.value)
        info "NAT-PMP: added port mapping",
          externalPort = extPort, internalPort = port, protocol = protocol
        case protocol
        of NatPmpProtocol.TCP:
          extTcpPort = extPort
        of NatPmpProtocol.UDP:
          extUdpPort = extPort
  return some((extTcpPort, extUdpPort))

proc repeatPortMapping(args: PortMappingArgs) {.thread, raises: [ValueError].} =
  ignoreSignalsInThread()
  let
    (strategy, tcpPort, udpPort, description) = args
    interval = initDuration(seconds = PORT_MAPPING_INTERVAL)
    sleepDuration = 1_000 # in ms, also the maximum delay after pressing Ctrl-C

  var lastUpdate = now()

  # We can't use copies of Miniupnp and NatPmp objects in this thread, because they share
  # C pointers with other instances that have already been garbage collected, so
  # we use threadvars instead and initialise them again with getExternalIP(),
  # even though we don't need the external IP's value.
  let ipres = getExternalIP(strategy, quiet = true)
  if ipres.isSome:
    while natClosed.load() == false:
      let
        # we're being silly here with this channel polling because we can't
        # select on Nim channels like on Go ones
        currTime = now()
      if currTime >= (lastUpdate + interval):
        discard doPortMapping(strategy, tcpPort, udpPort, description)
        lastUpdate = currTime
      sleep(sleepDuration)

proc stopNatThreads() {.noconv.} =
  # stop the threads
  debug "Stopping NAT port mapping renewal threads"
  try:
    natClosed.store(true)
    joinThreads(natThreads)
  except Exception as exc:
    warn "Failed to stop NAT port mapping renewal thread", exc = exc.msg

  # delete our port mappings
  # FIXME: if the initial port mapping failed because it already existed for the
  # required external port, we should not delete it. It might have been set up
  # by another program.

  # In Windows, a new thread is created for the signal handler, so we need to
  # initialise our threadvars again.
  let ipres = getExternalIP(strategy, quiet = true)
  if ipres.isSome:
    if strategy == NatStrategy.NatUpnp:
      for entry in activeMappings:
        for t in [
          (entry.externalTcpPort, entry.internalTcpPort, UPNPProtocol.TCP),
          (entry.externalUdpPort, entry.internalUdpPort, UPNPProtocol.UDP),
        ]:
          let
            (eport, iport, protocol) = t
            pmres = upnp.deletePortMapping(externalPort = $eport, protocol = protocol)
          if pmres.isErr:
            error "UPnP port mapping deletion", msg = pmres.error
          else:
            debug "UPnP: deleted port mapping",
              externalPort = eport, internalPort = iport, protocol = protocol
    elif strategy == NatStrategy.NatPmp:
      for entry in activeMappings:
        for t in [
          (entry.externalTcpPort, entry.internalTcpPort, NatPmpProtocol.TCP),
          (entry.externalUdpPort, entry.internalUdpPort, NatPmpProtocol.UDP),
        ]:
          let
            (eport, iport, protocol) = t
            pmres = npmp.deletePortMapping(
              eport = eport.cushort, iport = iport.cushort, protocol = protocol
            )
          if pmres.isErr:
            error "NAT-PMP port mapping deletion", msg = pmres.error
          else:
            debug "NAT-PMP: deleted port mapping",
              externalPort = eport, internalPort = iport, protocol = protocol

proc redirectPorts*(
    strategy: NatStrategy, tcpPort, udpPort: Port, description: string
): Option[(Port, Port)] =
  result = doPortMapping(strategy, tcpPort, udpPort, description)
  if result.isSome:
    let (externalTcpPort, externalUdpPort) = result.get()
    # needed by NAT-PMP on port mapping deletion
    # Port mapping works. Let's launch a thread that repeats it, in case the
    # NAT-PMP lease expires or the router is rebooted and forgets all about
    # these mappings.
    activeMappings.add(
      PortMappings(
        internalTcpPort: tcpPort,
        externalTcpPort: externalTcpPort,
        internalUdpPort: udpPort,
        externalUdpPort: externalUdpPort,
        description: description,
      )
    )
    try:
      natThreads.add(Thread[PortMappingArgs]())
      natThreads[^1].createThread(
        repeatPortMapping, (strategy, externalTcpPort, externalUdpPort, description)
      )
      # atexit() in disguise
      if natThreads.len == 1:
        # we should register the thread termination function only once
        addQuitProc(stopNatThreads)
    except Exception as exc:
      warn "Failed to create NAT port mapping renewal thread", exc = exc.msg

proc setupNat*(
    natStrategy: NatStrategy, tcpPort, udpPort: Port, clientId: string
): tuple[ip: Option[IpAddress], tcpPort, udpPort: Option[Port]] =
  ## Setup NAT port mapping and get external IP address.
  ## If any of this fails, we don't return any IP address but do return the
  ## original ports as best effort.
  ## TODO: Allow for tcp or udp port mapping to be optional.
  if extIp.isNone:
    extIp = getExternalIP(natStrategy)
  if extIp.isSome:
    let ip = extIp.get
    let extPorts = (
      {.gcsafe.}:
        redirectPorts(
          strategy, tcpPort = tcpPort, udpPort = udpPort, description = clientId
        )
    )
    if extPorts.isSome:
      let (extTcpPort, extUdpPort) = extPorts.get()
      (ip: some(ip), tcpPort: some(extTcpPort), udpPort: some(extUdpPort))
    else:
      warn "UPnP/NAT-PMP available but port forwarding failed"
      (ip: none(IpAddress), tcpPort: some(tcpPort), udpPort: some(udpPort))
  else:
    warn "UPnP/NAT-PMP not available"
    (ip: none(IpAddress), tcpPort: some(tcpPort), udpPort: some(udpPort))

proc setupAddress*(
    natConfig: NatConfig, bindIp: IpAddress, tcpPort, udpPort: Port, clientId: string
): tuple[ip: Option[IpAddress], tcpPort, udpPort: Option[Port]] {.gcsafe.} =
  ## Set-up of the external address via any of the ways as configured in
  ## `NatConfig`. In case all fails an error is logged and the bind ports are
  ## selected also as external ports, as best effort and in hope that the
  ## external IP can be figured out by other means at a later stage.
  ## TODO: Allow for tcp or udp bind ports to be optional.

  if natConfig.hasExtIp:
    # any required port redirection must be done by hand
    return (some(natConfig.extIp), some(tcpPort), some(udpPort))

  case natConfig.nat
  of NatStrategy.NatAny:
    let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)

    case prefSrcStatus
    of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
      return (prefSrcIp, some(tcpPort), some(udpPort))
    of PrefSrcIsPrivate, BindAddressIsPrivate:
      return setupNat(natConfig.nat, tcpPort, udpPort, clientId)
  of NatStrategy.NatNone:
    let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)

    case prefSrcStatus
    of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
      return (prefSrcIp, some(tcpPort), some(udpPort))
    of PrefSrcIsPrivate:
      error "No public IP address found. Should not use --nat:none option"
      return (none(IpAddress), some(tcpPort), some(udpPort))
    of BindAddressIsPrivate:
      error "Bind IP is not a public IP address. Should not use --nat:none option"
      return (none(IpAddress), some(tcpPort), some(udpPort))
  of NatStrategy.NatUpnp, NatStrategy.NatPmp:
    return setupNat(natConfig.nat, tcpPort, udpPort, clientId)

proc nattedAddress*(
    natConfig: NatConfig, addrs: seq[MultiAddress], udpPort: Port
): tuple[libp2p, discovery: seq[MultiAddress]] =
  ## Takes a NAT configuration, sequence of multiaddresses and UDP port and returns:
  ## - Modified multiaddresses with NAT-mapped addresses for libp2p
  ## - Discovery addresses with NAT-mapped UDP ports
  var discoveryAddrs = newSeq[MultiAddress](0)
  let newAddrs = addrs.mapIt:
    block:
      # Extract IP address and port from the multiaddress
      let (ipPart, port) = getAddressAndPort(it)
      if ipPart.isSome and port.isSome:
        # Try to setup NAT mapping for the address
        let (newIP, tcp, udp) =
          setupAddress(natConfig, ipPart.get, port.get, udpPort, "codex")
        if newIP.isSome:
          # NAT mapping successful - add discovery address with mapped UDP port
          discoveryAddrs.add(getMultiAddrWithIPAndUDPPort(newIP.get, udp.get))
          # Remap original address with NAT IP and TCP port
          it.remapAddr(ip = newIP, port = tcp)
        else:
          # NAT mapping failed - use original address
          echo "Failed to get external IP, using original address", it
          discoveryAddrs.add(getMultiAddrWithIPAndUDPPort(ipPart.get, udpPort))
          it
      else:
        # Invalid multiaddress format - return as is
        it
  (newAddrs, discoveryAddrs)
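A hedged sketch of how these entry points compose, for orientation only: the `NatConfig` values, bind address, and port numbers below are illustrative, and `myAddrs` stands in for the node's announced multiaddresses; none of this is code from the changeset itself.

# Sketch only: resolve an external address, then NAT-map announced addresses.
let natConfig = NatConfig(hasExtIp: false, nat: NatStrategy.NatAny)

# Best-effort external IP plus mapped ports (falls back to the original ports
# when traversal fails).
let (ip, tcp, udp) =
  setupAddress(natConfig, parseIpAddress("0.0.0.0"), Port(8070), Port(8090), "codex")

# Multiaddress variant: libp2p addresses are remapped in place, and discovery
# addresses are derived with the NAT-mapped UDP port.
let (libp2pAddrs, discoveryAddrs) = nattedAddress(natConfig, myAddrs, Port(8090))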

File diff suppressed because it is too large

View File

@@ -2,10 +2,9 @@ import pkg/stint

 type
   Periodicity* = object
-    seconds*: uint64
-
-  Period* = uint64
-  Timestamp* = uint64
+    seconds*: UInt256
+
+  Period* = UInt256
+  Timestamp* = UInt256

 func periodOf*(periodicity: Periodicity, timestamp: Timestamp): Period =
   timestamp div periodicity.seconds
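For context, `periodOf` maps a wall-clock timestamp onto a proof-period index by integer division. A minimal sketch using the `uint64` variant on the left-hand side of this diff, with made-up numbers:

let periodicity = Periodicity(seconds: 10)
let ts: Timestamp = 42
assert periodicity.periodOf(ts) == 4  # 42 div 10, integer division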

View File

@@ -14,17 +14,20 @@ export purchase

 type
   Purchasing* = ref object
-    market*: Market
+    market: Market
     clock: Clock
     purchases: Table[PurchaseId, Purchase]
     proofProbability*: UInt256

   PurchaseTimeout* = Timeout

 const DefaultProofProbability = 100.u256

 proc new*(_: type Purchasing, market: Market, clock: Clock): Purchasing =
-  Purchasing(market: market, clock: clock, proofProbability: DefaultProofProbability)
+  Purchasing(
+    market: market,
+    clock: clock,
+    proofProbability: DefaultProofProbability,
+  )

 proc load*(purchasing: Purchasing) {.async.} =
   let market = purchasing.market

@@ -40,9 +43,9 @@ proc start*(purchasing: Purchasing) {.async.} =
 proc stop*(purchasing: Purchasing) {.async.} =
   discard

-proc populate*(
-    purchasing: Purchasing, request: StorageRequest
+proc populate*(purchasing: Purchasing,
+               request: StorageRequest
 ): Future[StorageRequest] {.async.} =
   result = request
   if result.ask.proofProbability == 0.u256:
     result.ask.proofProbability = purchasing.proofProbability

@@ -52,9 +55,9 @@ proc populate*(
   result.nonce = Nonce(id)
   result.client = await purchasing.market.getSigner()

-proc purchase*(
-    purchasing: Purchasing, request: StorageRequest
+proc purchase*(purchasing: Purchasing,
+               request: StorageRequest
 ): Future[Purchase] {.async.} =
   let request = await purchasing.populate(request)
   let purchase = Purchase.new(request, purchasing.market, purchasing.clock)
   purchase.start()

@@ -72,3 +75,4 @@ func getPurchaseIds*(purchasing: Purchasing): seq[PurchaseId] =
   for key in purchasing.purchases.keys:
     pIds.add(key)
   return pIds

View File

@@ -25,7 +25,10 @@ export purchaseid
 export statemachine

 func new*(
-    _: type Purchase, requestId: RequestId, market: Market, clock: Clock
+    _: type Purchase,
+    requestId: RequestId,
+    market: Market,
+    clock: Clock
 ): Purchase =
   ## create a new instance of a Purchase
   ##

@@ -39,7 +42,10 @@ func new*(
   return purchase

 func new*(
-    _: type Purchase, request: StorageRequest, market: Market, clock: Clock
+    _: type Purchase,
+    request: StorageRequest,
+    market: Market,
+    clock: Clock
 ): Purchase =
   ## Create a new purchase using the given market and clock
   let purchase = Purchase.new(request.id, market, clock)

@@ -70,5 +76,4 @@ func error*(purchase: Purchase): ?(ref CatchableError) =
 func state*(purchase: Purchase): ?string =
   proc description(state: State): string =
     $state
-
   purchase.query(description)

View File

@@ -1,14 +1,12 @@
 import std/hashes
+import pkg/nimcrypto
 import ../logutils

 type PurchaseId* = distinct array[32, byte]

-logutils.formatIt(LogFormat.textLines, PurchaseId):
-  it.short0xHexLog
-logutils.formatIt(LogFormat.json, PurchaseId):
-  it.to0xHexLog
+logutils.formatIt(LogFormat.textLines, PurchaseId): it.short0xHexLog
+logutils.formatIt(LogFormat.json, PurchaseId): it.to0xHexLog

 proc hash*(x: PurchaseId): Hash {.borrow.}
 proc `==`*(x, y: PurchaseId): bool {.borrow.}

-proc toHex*(x: PurchaseId): string =
-  array[32, byte](x).toHex
+proc toHex*(x: PurchaseId): string = array[32, byte](x).toHex

View File

@@ -14,6 +14,5 @@ type
     clock*: Clock
     requestId*: RequestId
     request*: ?StorageRequest
-
   PurchaseState* = ref object of State
   PurchaseError* = object of CodexError

View File

@@ -1,35 +1,25 @@
 import pkg/metrics

 import ../../logutils
-import ../../utils/exceptions
 import ../statemachine
-import ./error
+import ./errorhandling

 declareCounter(codex_purchases_cancelled, "codex purchases cancelled")

 logScope:
   topics = "marketplace purchases cancelled"

-type PurchaseCancelled* = ref object of PurchaseState
+type PurchaseCancelled* = ref object of ErrorHandlingState

 method `$`*(state: PurchaseCancelled): string =
   "cancelled"

-method run*(
-    state: PurchaseCancelled, machine: Machine
-): Future[?State] {.async: (raises: []).} =
+method run*(state: PurchaseCancelled, machine: Machine): Future[?State] {.async.} =
   codex_purchases_cancelled.inc()
   let purchase = Purchase(machine)
-  try:
-    warn "Request cancelled, withdrawing remaining funds",
-      requestId = purchase.requestId
-    await purchase.market.withdrawFunds(purchase.requestId)
-
-    let error = newException(Timeout, "Purchase cancelled due to timeout")
-    purchase.future.fail(error)
-  except CancelledError as e:
-    trace "PurchaseCancelled.run was cancelled", error = e.msgDetail
-  except CatchableError as e:
-    error "Error during PurchaseCancelled.run", error = e.msgDetail
-    return some State(PurchaseErrored(error: e))
+  warn "Request cancelled, withdrawing remaining funds", requestId = purchase.requestId
+  await purchase.market.withdrawFunds(purchase.requestId)
+
+  let error = newException(Timeout, "Purchase cancelled due to timeout")
+  purchase.future.fail(error)

View File

@@ -14,13 +14,10 @@ type PurchaseErrored* = ref object of PurchaseState
 method `$`*(state: PurchaseErrored): string =
   "errored"

-method run*(
-    state: PurchaseErrored, machine: Machine
-): Future[?State] {.async: (raises: []).} =
+method run*(state: PurchaseErrored, machine: Machine): Future[?State] {.async.} =
   codex_purchases_error.inc()
   let purchase = Purchase(machine)

-  error "Purchasing error",
-    error = state.error.msgDetail, requestId = purchase.requestId
+  error "Purchasing error", error=state.error.msgDetail, requestId = purchase.requestId

   purchase.future.fail(state.error)

View File

@@ -0,0 +1,9 @@
+import pkg/questionable
+import ../statemachine
+import ./error
+
+type
+  ErrorHandlingState* = ref object of PurchaseState
+
+method onError*(state: ErrorHandlingState, error: ref CatchableError): ?State =
+  some State(PurchaseErrored(error: error))
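This new module gives purchase states a shared error transition: any state deriving from `ErrorHandlingState` can let a `CatchableError` escape its `run`, and the state machine's `onError` hook turns it into a `PurchaseErrored` transition. A minimal sketch under that assumption (the `PurchaseExample` state is hypothetical, not part of this changeset):

type PurchaseExample = ref object of ErrorHandlingState

method run*(state: PurchaseExample, machine: Machine): Future[?State] {.async.} =
  # Any raised CatchableError propagates to the state machine, which
  # consults onError and moves to PurchaseErrored(error: ...).
  raise newException(PurchaseError, "something went wrong")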

View File

@@ -1,30 +1,21 @@
 import pkg/metrics
 import ../statemachine
 import ../../logutils
-import ../../utils/exceptions
 import ./error

 declareCounter(codex_purchases_failed, "codex purchases failed")

-type PurchaseFailed* = ref object of PurchaseState
+type
+  PurchaseFailed* = ref object of PurchaseState

 method `$`*(state: PurchaseFailed): string =
   "failed"

-method run*(
-    state: PurchaseFailed, machine: Machine
-): Future[?State] {.async: (raises: []).} =
+method run*(state: PurchaseFailed, machine: Machine): Future[?State] {.async.} =
   codex_purchases_failed.inc()
   let purchase = Purchase(machine)
-
-  try:
-    warn "Request failed, withdrawing remaining funds", requestId = purchase.requestId
-    await purchase.market.withdrawFunds(purchase.requestId)
-  except CancelledError as e:
-    trace "PurchaseFailed.run was cancelled", error = e.msgDetail
-  except CatchableError as e:
-    error "Error during PurchaseFailed.run", error = e.msgDetail
-    return some State(PurchaseErrored(error: e))
+  warn "Request failed, withdrawing remaining funds", requestId = purchase.requestId
+  await purchase.market.withdrawFunds(purchase.requestId)

   let error = newException(PurchaseError, "Purchase failed")
   return some State(PurchaseErrored(error: error))

View File

@@ -1,9 +1,7 @@
 import pkg/metrics

 import ../statemachine
-import ../../utils/exceptions
 import ../../logutils
-import ./error

 declareCounter(codex_purchases_finished, "codex purchases finished")

@@ -15,19 +13,10 @@ type PurchaseFinished* = ref object of PurchaseState
 method `$`*(state: PurchaseFinished): string =
   "finished"

-method run*(
-    state: PurchaseFinished, machine: Machine
-): Future[?State] {.async: (raises: []).} =
+method run*(state: PurchaseFinished, machine: Machine): Future[?State] {.async.} =
   codex_purchases_finished.inc()
   let purchase = Purchase(machine)
-  try:
-    info "Purchase finished, withdrawing remaining funds",
-      requestId = purchase.requestId
-    await purchase.market.withdrawFunds(purchase.requestId)
-    purchase.future.complete()
-  except CancelledError as e:
-    trace "PurchaseFinished.run was cancelled", error = e.msgDetail
-  except CatchableError as e:
-    error "Error during PurchaseFinished.run", error = e.msgDetail
-    return some State(PurchaseErrored(error: e))
+  info "Purchase finished, withdrawing remaining funds", requestId = purchase.requestId
+  await purchase.market.withdrawFunds(purchase.requestId)
+  purchase.future.complete()

View File

@@ -1,28 +1,18 @@
 import pkg/metrics

-import ../../logutils
-import ../../utils/exceptions
 import ../statemachine
+import ./errorhandling
 import ./submitted
-import ./error

 declareCounter(codex_purchases_pending, "codex purchases pending")

-type PurchasePending* = ref object of PurchaseState
+type PurchasePending* = ref object of ErrorHandlingState

 method `$`*(state: PurchasePending): string =
   "pending"

-method run*(
-    state: PurchasePending, machine: Machine
-): Future[?State] {.async: (raises: []).} =
+method run*(state: PurchasePending, machine: Machine): Future[?State] {.async.} =
   codex_purchases_pending.inc()
   let purchase = Purchase(machine)
-  try:
-    let request = !purchase.request
-    await purchase.market.requestStorage(request)
-    return some State(PurchaseSubmitted())
-  except CancelledError as e:
-    trace "PurchasePending.run was cancelled", error = e.msgDetail
-  except CatchableError as e:
-    error "Error during PurchasePending.run", error = e.msgDetail
-    return some State(PurchaseErrored(error: e))
+  let request = !purchase.request
+  await purchase.market.requestStorage(request)
+  return some State(PurchaseSubmitted())

View File

@@ -1,25 +1,22 @@
 import pkg/metrics

 import ../../logutils
-import ../../utils/exceptions
 import ../statemachine
+import ./errorhandling
 import ./finished
 import ./failed
-import ./error

 declareCounter(codex_purchases_started, "codex purchases started")

 logScope:
   topics = "marketplace purchases started"

-type PurchaseStarted* = ref object of PurchaseState
+type PurchaseStarted* = ref object of ErrorHandlingState

 method `$`*(state: PurchaseStarted): string =
   "started"

-method run*(
-    state: PurchaseStarted, machine: Machine
-): Future[?State] {.async: (raises: []).} =
+method run*(state: PurchaseStarted, machine: Machine): Future[?State] {.async.} =
   codex_purchases_started.inc()
   let purchase = Purchase(machine)

@@ -30,25 +27,15 @@ method run*(
   let failed = newFuture[void]()
   proc callback(_: RequestId) =
     failed.complete()
+  let subscription = await market.subscribeRequestFailed(purchase.requestId, callback)

-  var ended: Future[void]
-  try:
-    let subscription = await market.subscribeRequestFailed(purchase.requestId, callback)
-    # Ensure that we're past the request end by waiting an additional second
-    ended = clock.waitUntil((await market.getRequestEnd(purchase.requestId)) + 1)
-    let fut = await one(ended, failed)
-    await subscription.unsubscribe()
-    if fut.id == failed.id:
-      ended.cancelSoon()
-      return some State(PurchaseFailed())
-    else:
-      failed.cancelSoon()
-      return some State(PurchaseFinished())
-  except CancelledError as e:
-    ended.cancelSoon()
-    failed.cancelSoon()
-    trace "PurchaseStarted.run was cancelled", error = e.msgDetail
-  except CatchableError as e:
-    error "Error during PurchaseStarted.run", error = e.msgDetail
-    return some State(PurchaseErrored(error: e))
+  # Ensure that we're past the request end by waiting an additional second
+  let ended = clock.waitUntil((await market.getRequestEnd(purchase.requestId)) + 1)
+  let fut = await one(ended, failed)
+  await subscription.unsubscribe()
+  if fut.id == failed.id:
+    ended.cancel()
+    return some State(PurchaseFailed())
+  else:
+    failed.cancel()
+    return some State(PurchaseFinished())

View File

@@ -1,41 +1,36 @@
 import pkg/metrics

 import ../../logutils
-import ../../utils/exceptions
 import ../statemachine
+import ./errorhandling
 import ./started
 import ./cancelled
-import ./error

 logScope:
   topics = "marketplace purchases submitted"

 declareCounter(codex_purchases_submitted, "codex purchases submitted")

-type PurchaseSubmitted* = ref object of PurchaseState
+type PurchaseSubmitted* = ref object of ErrorHandlingState

 method `$`*(state: PurchaseSubmitted): string =
   "submitted"

-method run*(
-    state: PurchaseSubmitted, machine: Machine
-): Future[?State] {.async: (raises: []).} =
+method run*(state: PurchaseSubmitted, machine: Machine): Future[?State] {.async.} =
   codex_purchases_submitted.inc()
   let purchase = Purchase(machine)
   let request = !purchase.request
   let market = purchase.market
   let clock = purchase.clock

-  info "Request submitted, waiting for slots to be filled",
-    requestId = purchase.requestId
+  info "Request submitted, waiting for slots to be filled", requestId = purchase.requestId

-  proc wait() {.async.} =
-    let done = newAsyncEvent()
+  proc wait {.async.} =
+    let done = newFuture[void]()
     proc callback(_: RequestId) =
-      done.fire()
+      done.complete()
     let subscription = await market.subscribeFulfillment(request.id, callback)
-    await done.wait()
+    await done
     await subscription.unsubscribe()

   proc withTimeout(future: Future[void]) {.async.} =

@@ -47,10 +42,5 @@ method run*(
     await wait().withTimeout()
   except Timeout:
     return some State(PurchaseCancelled())
-  except CancelledError as e:
-    trace "PurchaseSubmitted.run was cancelled", error = e.msgDetail
-  except CatchableError as e:
-    error "Error during PurchaseSubmitted.run", error = e.msgDetail
-    return some State(PurchaseErrored(error: e))

   return some State(PurchaseStarted())

View File

@@ -1,44 +1,35 @@
 import pkg/metrics

-import ../../utils/exceptions
-import ../../logutils
 import ../statemachine
+import ./errorhandling
 import ./submitted
 import ./started
 import ./cancelled
 import ./finished
 import ./failed
-import ./error

 declareCounter(codex_purchases_unknown, "codex purchases unknown")

-type PurchaseUnknown* = ref object of PurchaseState
+type PurchaseUnknown* = ref object of ErrorHandlingState

 method `$`*(state: PurchaseUnknown): string =
   "unknown"

-method run*(
-    state: PurchaseUnknown, machine: Machine
-): Future[?State] {.async: (raises: []).} =
-  try:
-    codex_purchases_unknown.inc()
-    let purchase = Purchase(machine)
-    if (request =? await purchase.market.getRequest(purchase.requestId)) and
-        (requestState =? await purchase.market.requestState(purchase.requestId)):
-      purchase.request = some request
-
-      case requestState
-      of RequestState.New:
-        return some State(PurchaseSubmitted())
-      of RequestState.Started:
-        return some State(PurchaseStarted())
-      of RequestState.Cancelled:
-        return some State(PurchaseCancelled())
-      of RequestState.Finished:
-        return some State(PurchaseFinished())
-      of RequestState.Failed:
-        return some State(PurchaseFailed())
-  except CancelledError as e:
-    trace "PurchaseUnknown.run was cancelled", error = e.msgDetail
-  except CatchableError as e:
-    error "Error during PurchaseUnknown.run", error = e.msgDetail
-    return some State(PurchaseErrored(error: e))
+method run*(state: PurchaseUnknown, machine: Machine): Future[?State] {.async.} =
+  codex_purchases_unknown.inc()
+  let purchase = Purchase(machine)
+  if (request =? await purchase.market.getRequest(purchase.requestId)) and
+     (requestState =? await purchase.market.requestState(purchase.requestId)):
+
+    purchase.request = some request
+
+    case requestState
+    of RequestState.New:
+      return some State(PurchaseSubmitted())
+    of RequestState.Started:
+      return some State(PurchaseStarted())
+    of RequestState.Cancelled:
+      return some State(PurchaseCancelled())
+    of RequestState.Finished:
+      return some State(PurchaseFinished())
+    of RequestState.Failed:
+      return some State(PurchaseFailed())

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -14,7 +14,7 @@ import pkg/chronos
 import pkg/libp2p
 import pkg/stew/base10
 import pkg/stew/byteutils
-import pkg/results
+import pkg/stew/results
 import pkg/stint

 import ../sales

@@ -25,7 +25,9 @@ proc encodeString*(cid: type Cid): Result[string, cstring] =
   ok($cid)

 proc decodeString*(T: type Cid, value: string): Result[Cid, cstring] =
-  Cid.init(value).mapErr do(e: CidError) -> cstring:
+  Cid
+  .init(value)
+  .mapErr do(e: CidError) -> cstring:
     case e
     of CidError.Incorrect: "Incorrect Cid".cstring
     of CidError.Unsupported: "Unsupported Cid".cstring

@@ -42,8 +44,9 @@ proc encodeString*(address: MultiAddress): Result[string, cstring] =
   ok($address)

 proc decodeString*(T: type MultiAddress, value: string): Result[MultiAddress, cstring] =
-  MultiAddress.init(value).mapErr do(e: string) -> cstring:
-    cstring(e)
+  MultiAddress
+  .init(value)
+  .mapErr do(e: string) -> cstring: cstring(e)

 proc decodeString*(T: type SomeUnsignedInt, value: string): Result[T, cstring] =
   Base10.decode(T, value)

@@ -52,7 +55,7 @@ proc encodeString*(value: SomeUnsignedInt): Result[string, cstring] =
   ok(Base10.toString(value))

 proc decodeString*(T: type Duration, value: string): Result[T, cstring] =
-  let v = ?Base10.decode(uint32, value)
+  let v = ? Base10.decode(uint32, value)
   ok(v.minutes)

 proc encodeString*(value: Duration): Result[string, cstring] =

@@ -74,20 +77,19 @@ proc decodeString*(_: type UInt256, value: string): Result[UInt256, cstring] =
   except ValueError as e:
     err e.msg.cstring

-proc decodeString*(
-    _: type array[32, byte], value: string
-): Result[array[32, byte], cstring] =
+proc decodeString*(_: type array[32, byte],
+                   value: string): Result[array[32, byte], cstring] =
   try:
     ok array[32, byte].fromHex(value)
   except ValueError as e:
     err e.msg.cstring

-proc decodeString*[T: PurchaseId | RequestId | Nonce | SlotId | AvailabilityId](
-    _: type T, value: string
-): Result[T, cstring] =
+proc decodeString*[T: PurchaseId | RequestId | Nonce | SlotId | AvailabilityId](_: type T,
+  value: string): Result[T, cstring] =
   array[32, byte].decodeString(value).map(id => T(id))

-proc decodeString*(t: typedesc[string], value: string): Result[string, cstring] =
+proc decodeString*(t: typedesc[string],
+                   value: string): Result[string, cstring] =
   ok(value)

 proc encodeString*(value: string): RestResult[string] =
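These Result-based coders are what the REST layer uses to parse path and query parameters; for instance, the `Duration` decoder above reads a base-10 value interpreted as minutes. A small hedged usage sketch (`someCid` is an assumed string variable holding a CID):

# Decoders return Result values rather than raising.
let cidRes = Cid.decodeString(someCid)   # err "Incorrect Cid" on malformed input
let durRes = Duration.decodeString("5")  # ok(5.minutes): base-10, minutes
let intRes = uint64.decodeString("1024") # via Base10.decode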

View File

@@ -13,11 +13,11 @@ export json
 type
   StorageRequestParams* = object
-    duration* {.serialize.}: uint64
+    duration* {.serialize.}: UInt256
     proofProbability* {.serialize.}: UInt256
-    pricePerBytePerSecond* {.serialize.}: UInt256
-    collateralPerByte* {.serialize.}: UInt256
-    expiry* {.serialize.}: uint64
+    reward* {.serialize.}: UInt256
+    collateral* {.serialize.}: UInt256
+    expiry* {.serialize.}: ?UInt256
     nodes* {.serialize.}: ?uint
     tolerance* {.serialize.}: ?uint

@@ -28,18 +28,16 @@ type
     error* {.serialize.}: ?string

   RestAvailability* = object
-    totalSize* {.serialize.}: uint64
-    duration* {.serialize.}: uint64
-    minPricePerBytePerSecond* {.serialize.}: UInt256
-    totalCollateral* {.serialize.}: UInt256
-    freeSize* {.serialize.}: ?uint64
-    enabled* {.serialize.}: ?bool
-    until* {.serialize.}: ?SecondsSince1970
+    totalSize* {.serialize.}: UInt256
+    duration* {.serialize.}: UInt256
+    minPrice* {.serialize.}: UInt256
+    maxCollateral* {.serialize.}: UInt256
+    freeSize* {.serialize.}: ?UInt256

   RestSalesAgent* = object
     state* {.serialize.}: string
     requestId* {.serialize.}: RequestId
-    slotIndex* {.serialize.}: uint64
+    slotIndex* {.serialize.}: UInt256
     request* {.serialize.}: ?StorageRequest
     reservation* {.serialize.}: ?Reservation

@@ -76,10 +74,15 @@ type
     quotaReservedBytes* {.serialize.}: NBytes

 proc init*(_: type RestContentList, content: seq[RestContent]): RestContentList =
-  RestContentList(content: content)
+  RestContentList(
+    content: content
+  )

 proc init*(_: type RestContent, cid: Cid, manifest: Manifest): RestContent =
-  RestContent(cid: cid, manifest: manifest)
+  RestContent(
+    cid: cid,
+    manifest: manifest
+  )

 proc init*(_: type RestNode, node: dn.Node): RestNode =
   RestNode(

@@ -87,7 +90,7 @@ proc init*(_: type RestNode, node: dn.Node): RestNode =
     peerId: node.record.data.peerId,
     record: node.record,
     address: node.address,
-    seen: node.seen > 0.5,
+    seen: node.seen
   )

 proc init*(_: type RestRoutingTable, routingTable: rt.RoutingTable): RestRoutingTable =

@@ -96,23 +99,28 @@ proc init*(_: type RestRoutingTable, routingTable: rt.RoutingTable): RestRouting
     for node in bucket.nodes:
       nodes.add(RestNode.init(node))

-  RestRoutingTable(localNode: RestNode.init(routingTable.localNode), nodes: nodes)
+  RestRoutingTable(
+    localNode: RestNode.init(routingTable.localNode),
+    nodes: nodes
+  )

 proc init*(_: type RestPeerRecord, peerRecord: PeerRecord): RestPeerRecord =
   RestPeerRecord(
-    peerId: peerRecord.peerId, seqNo: peerRecord.seqNo, addresses: peerRecord.addresses
+    peerId: peerRecord.peerId,
+    seqNo: peerRecord.seqNo,
+    addresses: peerRecord.addresses
   )

 proc init*(_: type RestNodeId, id: NodeId): RestNodeId =
-  RestNodeId(id: id)
+  RestNodeId(
+    id: id
+  )

 proc `%`*(obj: StorageRequest | Slot): JsonNode =
   let jsonObj = newJObject()
-  for k, v in obj.fieldPairs:
-    jsonObj[k] = %v
+  for k, v in obj.fieldPairs: jsonObj[k] = %v
   jsonObj["id"] = %(obj.id)
   return jsonObj

-proc `%`*(obj: RestNodeId): JsonNode =
-  % $obj.id
+proc `%`*(obj: RestNodeId): JsonNode = % $obj.id
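One behavioural detail worth noting in the `%` override above: the derived `id` is injected alongside the serialized fields, so REST clients always see it even though it is not a stored field. A sketch of that effect (assuming a `slot: Slot` value is in scope):

let jsonObj = %slot          # serializes every field of the Slot
assert jsonObj.hasKey("id")  # plus the derived id, added by the override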

View File

@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))

@@ -7,7 +7,9 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

 import pkg/libp2p/crypto/crypto
 import pkg/bearssl/rand

@@ -28,8 +30,7 @@ proc instance*(t: type Rng): Rng =
 const randMax = 18_446_744_073_709_551_615'u64

 proc rand*(rng: Rng, max: Natural): int =
-  if max == 0:
-    return 0
+  if max == 0: return 0

   while true:
     let x = rng[].generate(uint64)

@@ -40,8 +41,8 @@ proc sample*[T](rng: Rng, a: openArray[T]): T =
   result = a[rng.rand(a.high)]

 proc sample*[T](
-    rng: Rng, sample, exclude: openArray[T]
-): T {.raises: [Defect, RngSampleError].} =
+  rng: Rng, sample, exclude: openArray[T]): T
+  {.raises: [Defect, RngSampleError].} =
   if sample == exclude:
     raise newException(RngSampleError, "Sample and exclude arrays are the same!")

@@ -52,15 +53,6 @@ proc sample*[T](
       break

-proc sample*[T](
-    rng: Rng, sample: openArray[T], limit: int
-): seq[T] {.raises: [Defect, RngSampleError].} =
-  if limit > sample.len:
-    raise newException(RngSampleError, "Limit cannot be larger than sample!")
-
-  for _ in 0 ..< min(sample.len, limit):
-    result.add(rng.sample(sample, result))
-
 proc shuffle*[T](rng: Rng, a: var openArray[T]) =
   for i in countdown(a.high, 1):
     let j = rng.rand(i)
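For reference, the `sample` overload with an `exclude` array (present on both sides of this diff) re-rolls until it draws a value outside the exclusion list. A small sketch of its use:

let rng = Rng.instance()
let pick = rng.sample([1, 2, 3, 4], [2, 4])  # always yields 1 or 3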

View File

@ -16,13 +16,13 @@ import ./sales/statemachine
import ./sales/slotqueue import ./sales/slotqueue
import ./sales/states/preparing import ./sales/states/preparing
import ./sales/states/unknown import ./sales/states/unknown
import ./utils/then
import ./utils/trackedfutures import ./utils/trackedfutures
import ./utils/exceptions
## Sales holds a list of available storage that it may sell. ## Sales holds a list of available storage that it may sell.
## ##
## When storage is requested on the market that matches availability, the Sales ## When storage is requested on the market that matches availability, the Sales
## object will instruct the Logos Storage node to persist the requested data. Once the ## object will instruct the Codex node to persist the requested data. Once the
## data has been persisted, it uploads a proof of storage to the market in an ## data has been persisted, it uploads a proof of storage to the market in an
## attempt to win a storage contract. ## attempt to win a storage contract.
## ##
@ -45,12 +45,13 @@ export salescontext
logScope: logScope:
topics = "sales marketplace" topics = "sales marketplace"
type Sales* = ref object type
context*: SalesContext Sales* = ref object
agents*: seq[SalesAgent] context*: SalesContext
running: bool agents*: seq[SalesAgent]
subscriptions: seq[market.Subscription] running: bool
trackedFutures: TrackedFutures subscriptions: seq[market.Subscription]
trackedFutures: TrackedFutures
proc `onStore=`*(sales: Sales, onStore: OnStore) = proc `onStore=`*(sales: Sales, onStore: OnStore) =
sales.context.onStore = some onStore sales.context.onStore = some onStore
@ -67,31 +68,28 @@ proc `onProve=`*(sales: Sales, callback: OnProve) =
proc `onExpiryUpdate=`*(sales: Sales, callback: OnExpiryUpdate) = proc `onExpiryUpdate=`*(sales: Sales, callback: OnExpiryUpdate) =
sales.context.onExpiryUpdate = some callback sales.context.onExpiryUpdate = some callback
proc onStore*(sales: Sales): ?OnStore = proc onStore*(sales: Sales): ?OnStore = sales.context.onStore
sales.context.onStore
proc onClear*(sales: Sales): ?OnClear = proc onClear*(sales: Sales): ?OnClear = sales.context.onClear
sales.context.onClear
proc onSale*(sales: Sales): ?OnSale = proc onSale*(sales: Sales): ?OnSale = sales.context.onSale
sales.context.onSale
proc onProve*(sales: Sales): ?OnProve = proc onProve*(sales: Sales): ?OnProve = sales.context.onProve
sales.context.onProve
proc onExpiryUpdate*(sales: Sales): ?OnExpiryUpdate = proc onExpiryUpdate*(sales: Sales): ?OnExpiryUpdate = sales.context.onExpiryUpdate
sales.context.onExpiryUpdate
proc new*(_: type Sales, market: Market, clock: Clock, repo: RepoStore): Sales = proc new*(_: type Sales,
market: Market,
clock: Clock,
repo: RepoStore): Sales =
Sales.new(market, clock, repo, 0) Sales.new(market, clock, repo, 0)
proc new*( proc new*(_: type Sales,
_: type Sales, market: Market,
market: Market, clock: Clock,
clock: Clock, repo: RepoStore,
repo: RepoStore, simulateProofFailures: int): Sales =
simulateProofFailures: int,
): Sales =
let reservations = Reservations.new(repo) let reservations = Reservations.new(repo)
Sales( Sales(
context: SalesContext( context: SalesContext(
@ -99,110 +97,117 @@ proc new*(
clock: clock, clock: clock,
reservations: reservations, reservations: reservations,
slotQueue: SlotQueue.new(), slotQueue: SlotQueue.new(),
simulateProofFailures: simulateProofFailures, simulateProofFailures: simulateProofFailures
), ),
trackedFutures: TrackedFutures.new(), trackedFutures: TrackedFutures.new(),
subscriptions: @[], subscriptions: @[]
) )
proc remove(sales: Sales, agent: SalesAgent) {.async: (raises: []).} = proc remove(sales: Sales, agent: SalesAgent) {.async.} =
await agent.stop() await agent.stop()
if sales.running: if sales.running:
sales.agents.keepItIf(it != agent) sales.agents.keepItIf(it != agent)
proc cleanUp( proc cleanUp(sales: Sales,
sales: Sales, agent: SalesAgent, reprocessSlot: bool, returnedCollateral: ?UInt256 agent: SalesAgent,
) {.async: (raises: []).} = returnBytes: bool,
reprocessSlot: bool,
processing: Future[void]) {.async.} =
let data = agent.data let data = agent.data
logScope: logScope:
topics = "sales cleanUp" topics = "sales cleanUp"
requestId = data.requestId requestId = data.requestId
slotIndex = data.slotIndex slotIndex = data.slotIndex
reservationId = data.reservation .? id |? ReservationId.default reservationId = data.reservation.?id |? ReservationId.default
availabilityId = data.reservation .? availabilityId |? AvailabilityId.default availabilityId = data.reservation.?availabilityId |? AvailabilityId.default
trace "cleaning up sales agent" trace "cleaning up sales agent"
# if reservation for the SalesAgent was not created, then it means # if reservation for the SalesAgent was not created, then it means
# that the cleanUp was called before the sales process really started, so # that the cleanUp was called before the sales process really started, so
# there are not really any bytes to be returned # there are not really any bytes to be returned
if request =? data.request and reservation =? data.reservation: if returnBytes and request =? data.request and reservation =? data.reservation:
if returnErr =? ( if returnErr =? (await sales.context.reservations.returnBytesToAvailability(
await noCancel sales.context.reservations.returnBytesToAvailability( reservation.availabilityId,
reservation.availabilityId, reservation.id, request.ask.slotSize reservation.id,
) request.ask.slotSize
).errorOption: )).errorOption:
error "failure returning bytes", error "failure returning bytes",
error = returnErr.msg, bytes = request.ask.slotSize error = returnErr.msg,
bytes = request.ask.slotSize
# delete reservation and return reservation bytes back to the availability # delete reservation and return reservation bytes back to the availability
if reservation =? data.reservation and if reservation =? data.reservation and
deleteErr =? ( deleteErr =? (await sales.context.reservations.deleteReservation(
await noCancel sales.context.reservations.deleteReservation( reservation.id,
reservation.id, reservation.availabilityId, returnedCollateral reservation.availabilityId
) )).errorOption:
).errorOption:
error "failure deleting reservation", error = deleteErr.msg error "failure deleting reservation", error = deleteErr.msg
# Re-add items back into the queue to prevent small availabilities from # Re-add items back into the queue to prevent small availabilities from
# draining the queue. Seen items will be ordered last. # draining the queue. Seen items will be ordered last.
if reprocessSlot and request =? data.request and var item =? agent.data.slotQueueItem: if reprocessSlot and request =? data.request:
let queue = sales.context.slotQueue let queue = sales.context.slotQueue
item.seen = true var seenItem = SlotQueueItem.init(data.requestId,
data.slotIndex.truncate(uint16),
data.ask,
request.expiry,
seen = true)
trace "pushing ignored item to queue, marked as seen" trace "pushing ignored item to queue, marked as seen"
if err =? queue.push(item).errorOption: if err =? queue.push(seenItem).errorOption:
error "failed to readd slot to queue", errorType = $(type err), error = err.msg error "failed to readd slot to queue",
errorType = $(type err), error = err.msg
let fut = sales.remove(agent) await sales.remove(agent)
sales.trackedFutures.track(fut)
# signal back to the slot queue to cycle a worker
if not processing.isNil and not processing.finished():
processing.complete()
proc filled(
sales: Sales,
request: StorageRequest,
slotIndex: UInt256,
processing: Future[void]) =
proc filled(sales: Sales, request: StorageRequest, slotIndex: uint64) =
if onSale =? sales.context.onSale: if onSale =? sales.context.onSale:
onSale(request, slotIndex) onSale(request, slotIndex)
proc processSlot( # signal back to the slot queue to cycle a worker
sales: Sales, item: SlotQueueItem if not processing.isNil and not processing.finished():
) {.async: (raises: [CancelledError]).} = processing.complete()
debug "Processing slot from queue", requestId = item.requestId, slot = item.slotIndex
proc processSlot(sales: Sales, item: SlotQueueItem, done: Future[void]) =
debug "Processing slot from queue", requestId = item.requestId,
slot = item.slotIndex
let agent = newSalesAgent( let agent = newSalesAgent(
sales.context, item.requestId, item.slotIndex, none StorageRequest, some item sales.context,
item.requestId,
item.slotIndex.u256,
none StorageRequest
) )
let completed = newAsyncEvent() agent.onCleanUp = proc (returnBytes = false, reprocessSlot = false) {.async.} =
await sales.cleanUp(agent, returnBytes, reprocessSlot, done)
agent.onCleanUp = proc( agent.onFilled = some proc(request: StorageRequest, slotIndex: UInt256) =
reprocessSlot = false, returnedCollateral = UInt256.none sales.filled(request, slotIndex, done)
) {.async: (raises: []).} =
trace "slot cleanup"
await sales.cleanUp(agent, reprocessSlot, returnedCollateral)
completed.fire()
agent.onFilled = some proc(request: StorageRequest, slotIndex: uint64) =
trace "slot filled"
sales.filled(request, slotIndex)
completed.fire()
agent.start(SalePreparing()) agent.start(SalePreparing())
sales.agents.add agent sales.agents.add agent
trace "waiting for slot processing to complete"
await completed.wait()
trace "slot processing completed"
proc deleteInactiveReservations(sales: Sales, activeSlots: seq[Slot]) {.async.} = proc deleteInactiveReservations(sales: Sales, activeSlots: seq[Slot]) {.async.} =
let reservations = sales.context.reservations let reservations = sales.context.reservations
without reservs =? await reservations.all(Reservation): without reservs =? await reservations.all(Reservation):
return return
let unused = reservs.filter( let unused = reservs.filter(r => (
r => ( let slotId = slotId(r.requestId, r.slotIndex)
let slotId = slotId(r.requestId, r.slotIndex) not activeSlots.any(slot => slot.id == slotId)
not activeSlots.any(slot => slot.id == slotId) ))
)
)
if unused.len == 0: if unused.len == 0:
return return
@ -210,13 +215,14 @@ proc deleteInactiveReservations(sales: Sales, activeSlots: seq[Slot]) {.async.}
info "Found unused reservations for deletion", unused = unused.len info "Found unused reservations for deletion", unused = unused.len
for reservation in unused: for reservation in unused:
logScope: logScope:
reservationId = reservation.id reservationId = reservation.id
availabilityId = reservation.availabilityId availabilityId = reservation.availabilityId
if err =? ( if err =? (await reservations.deleteReservation(
await reservations.deleteReservation(reservation.id, reservation.availabilityId) reservation.id, reservation.availabilityId
).errorOption: )).errorOption:
error "Failed to delete unused reservation", error = err.msg error "Failed to delete unused reservation", error = err.msg
else: else:
trace "Deleted unused reservation" trace "Deleted unused reservation"
@ -246,13 +252,17 @@ proc load*(sales: Sales) {.async.} =
await sales.deleteInactiveReservations(activeSlots) await sales.deleteInactiveReservations(activeSlots)
for slot in activeSlots: for slot in activeSlots:
let agent = let agent = newSalesAgent(
newSalesAgent(sales.context, slot.request.id, slot.slotIndex, some slot.request) sales.context,
slot.request.id,
slot.slotIndex,
some slot.request)
agent.onCleanUp = proc( agent.onCleanUp = proc(returnBytes = false, reprocessSlot = false) {.async.} =
reprocessSlot = false, returnedCollateral = UInt256.none # since workers are not being dispatched, this future has not been created
) {.async: (raises: []).} = # by a worker. Create a dummy one here so we can call sales.cleanUp
await sales.cleanUp(agent, reprocessSlot, returnedCollateral) let done: Future[void] = nil
await sales.cleanUp(agent, returnBytes, reprocessSlot, done)
# There is no need to assign agent.onFilled as slots loaded from `mySlots` # There is no need to assign agent.onFilled as slots loaded from `mySlots`
# are inherently already filled and so assigning agent.onFilled would be # are inherently already filled and so assigning agent.onFilled would be
@ -261,9 +271,7 @@ proc load*(sales: Sales) {.async.} =
agent.start(SaleUnknown()) agent.start(SaleUnknown())
sales.agents.add agent sales.agents.add agent
proc OnAvailabilitySaved( proc onAvailabilityAdded(sales: Sales, availability: Availability) {.async.} =
sales: Sales, availability: Availability
) {.async: (raises: []).} =
## When availabilities are modified or added, the queue should be unpaused if ## When availabilities are modified or added, the queue should be unpaused if
## it was paused and any slots in the queue should have their `seen` flag ## it was paused and any slots in the queue should have their `seen` flag
## cleared. ## cleared.
@ -274,9 +282,11 @@ proc OnAvailabilitySaved(
trace "unpausing queue after new availability added" trace "unpausing queue after new availability added"
queue.unpause() queue.unpause()
proc onStorageRequested( proc onStorageRequested(sales: Sales,
sales: Sales, requestId: RequestId, ask: StorageAsk, expiry: uint64 requestId: RequestId,
) {.raises: [].} = ask: StorageAsk,
expiry: UInt256) =
logScope: logScope:
topics = "marketplace sales onStorageRequested" topics = "marketplace sales onStorageRequested"
requestId requestId
@ -287,14 +297,7 @@ proc onStorageRequested(
trace "storage requested, adding slots to queue" trace "storage requested, adding slots to queue"
let market = sales.context.market without items =? SlotQueueItem.init(requestId, ask, expiry).catch, err:
without collateral =? market.slotCollateral(ask.collateralPerSlot, SlotState.Free),
err:
error "Request failure, unable to calculate collateral", error = err.msg
return
without items =? SlotQueueItem.init(requestId, ask, expiry, collateral).catch, err:
if err of SlotsOutOfRangeError: if err of SlotsOutOfRangeError:
warn "Too many slots, cannot add to queue" warn "Too many slots, cannot add to queue"
else: else:
@ -311,7 +314,10 @@ proc onStorageRequested(
else: else:
warn "Error adding request to SlotQueue", error = err.msg warn "Error adding request to SlotQueue", error = err.msg
proc onSlotFreed(sales: Sales, requestId: RequestId, slotIndex: uint64) = proc onSlotFreed(sales: Sales,
requestId: RequestId,
slotIndex: UInt256) =
logScope: logScope:
topics = "marketplace sales onSlotFreed" topics = "marketplace sales onSlotFreed"
requestId requestId
@ -319,59 +325,44 @@ proc onSlotFreed(sales: Sales, requestId: RequestId, slotIndex: uint64) =
trace "slot freed, adding to queue" trace "slot freed, adding to queue"
proc addSlotToQueue() {.async: (raises: []).} = proc addSlotToQueue() {.async.} =
let context = sales.context let context = sales.context
let market = context.market let market = context.market
let queue = context.slotQueue let queue = context.slotQueue
try: # first attempt to populate request using existing slot metadata in queue
without request =? (await market.getRequest(requestId)), err: without var found =? queue.populateItem(requestId,
error "unknown request in contract", error = err.msgDetail slotIndex.truncate(uint16)):
trace "no existing request metadata, getting request info from contract"
# if there's no existing slot for that request, retrieve the request
# from the contract.
without request =? await market.getRequest(requestId):
error "unknown request in contract"
return return
# Take the repairing state into consideration to calculate the collateral. found = SlotQueueItem.init(request, slotIndex.truncate(uint16))
# This is particularly needed because it will affect the priority in the queue
# and we want to give the user the ability to tweak the parameters.
# Adding the repairing state directly in the queue priority calculation
# would not allow this flexibility.
without collateral =?
market.slotCollateral(request.ask.collateralPerSlot, SlotState.Repair), err:
error "Failed to add freed slot to queue: unable to calculate collateral",
error = err.msg
return
if slotIndex > uint16.high.uint64: if err =? queue.push(found).errorOption:
error "Cannot cast slot index to uint16, value = ", slotIndex raise err
return
without slotQueueItem =? addSlotToQueue()
SlotQueueItem.init(request, slotIndex.uint16, collateral = collateral).catch, .track(sales)
err: .catch(proc(err: ref CatchableError) =
warn "Too many slots, cannot add to queue", error = err.msgDetail if err of SlotQueueItemExistsError:
return error "Failed to push item to queue becaue it already exists"
elif err of QueueNotRunningError:
if err =? queue.push(slotQueueItem).errorOption: warn "Failed to push item to queue becaue queue is not running"
if err of SlotQueueItemExistsError: else:
error "Failed to push item to queue because it already exists", warn "Error adding request to SlotQueue", error = err.msg
error = err.msgDetail )
elif err of QueueNotRunningError:
warn "Failed to push item to queue because queue is not running",
error = err.msgDetail
except CancelledError as e:
trace "sales.addSlotToQueue was cancelled"
# We could get rid of this by adding the storage ask in the SlotFreed event,
# so we would not need to call getRequest to get the collateralPerSlot.
let fut = addSlotToQueue()
sales.trackedFutures.track(fut)
proc subscribeRequested(sales: Sales) {.async.} = proc subscribeRequested(sales: Sales) {.async.} =
let context = sales.context let context = sales.context
let market = context.market let market = context.market
proc onStorageRequested( proc onStorageRequested(requestId: RequestId,
requestId: RequestId, ask: StorageAsk, expiry: uint64 ask: StorageAsk,
) {.raises: [].} = expiry: UInt256) =
sales.onStorageRequested(requestId, ask, expiry) sales.onStorageRequested(requestId, ask, expiry)
try: try:
@ -444,13 +435,9 @@ proc subscribeSlotFilled(sales: Sales) {.async.} =
let market = context.market let market = context.market
let queue = context.slotQueue let queue = context.slotQueue
proc onSlotFilled(requestId: RequestId, slotIndex: uint64) = proc onSlotFilled(requestId: RequestId, slotIndex: UInt256) =
if slotIndex > uint16.high.uint64:
error "Cannot cast slot index to uint16, value = ", slotIndex
return
trace "slot filled, removing from slot queue", requestId, slotIndex trace "slot filled, removing from slot queue", requestId, slotIndex
queue.delete(requestId, slotIndex.uint16) queue.delete(requestId, slotIndex.truncate(uint16))
for agent in sales.agents: for agent in sales.agents:
agent.onSlotFilled(requestId, slotIndex) agent.onSlotFilled(requestId, slotIndex)
@ -467,7 +454,7 @@ proc subscribeSlotFreed(sales: Sales) {.async.} =
  let context = sales.context
  let market = context.market

-  proc onSlotFreed(requestId: RequestId, slotIndex: uint64) =
+  proc onSlotFreed(requestId: RequestId, slotIndex: UInt256) =
    sales.onSlotFreed(requestId, slotIndex)

  try:
@ -483,13 +470,9 @@ proc subscribeSlotReservationsFull(sales: Sales) {.async.} =
  let market = context.market
  let queue = context.slotQueue

-  proc onSlotReservationsFull(requestId: RequestId, slotIndex: uint64) =
-    if slotIndex > uint16.high.uint64:
-      error "Cannot cast slot index to uint16, value = ", slotIndex
-      return
-
+  proc onSlotReservationsFull(requestId: RequestId, slotIndex: UInt256) =
    trace "reservations for slot full, removing from slot queue", requestId, slotIndex
-    queue.delete(requestId, slotIndex.uint16)
+    queue.delete(requestId, slotIndex.truncate(uint16))

  try:
    let sub = await market.subscribeSlotReservationsFull(onSlotReservationsFull)
@ -499,24 +482,21 @@ proc subscribeSlotReservationsFull(sales: Sales) {.async.} =
  except CatchableError as e:
    error "Unable to subscribe to slot filled events", msg = e.msg

-proc startSlotQueue(sales: Sales) =
+proc startSlotQueue(sales: Sales) {.async.} =
  let slotQueue = sales.context.slotQueue
  let reservations = sales.context.reservations

-  slotQueue.onProcessSlot = proc(item: SlotQueueItem) {.async: (raises: []).} =
-    trace "processing slot queue item", reqId = item.requestId, slotIdx = item.slotIndex
-    try:
-      await sales.processSlot(item)
-    except CancelledError:
-      discard
+  slotQueue.onProcessSlot =
+    proc(item: SlotQueueItem, done: Future[void]) {.async.} =
+      trace "processing slot queue item", reqId = item.requestId, slotIdx = item.slotIndex
+      sales.processSlot(item, done)

-  slotQueue.start()
+  asyncSpawn slotQueue.start()

-  proc OnAvailabilitySaved(availability: Availability) {.async: (raises: []).} =
-    if availability.enabled:
-      await sales.OnAvailabilitySaved(availability)
+  proc onAvailabilityAdded(availability: Availability) {.async.} =
+    await sales.onAvailabilityAdded(availability)

-  reservations.OnAvailabilitySaved = OnAvailabilitySaved
+  reservations.onAvailabilityAdded = onAvailabilityAdded

proc subscribe(sales: Sales) {.async.} =
  await sales.subscribeRequested()

@ -538,7 +518,7 @@ proc unsubscribe(sales: Sales) {.async.} =

proc start*(sales: Sales) {.async.} =
  await sales.load()
-  sales.startSlotQueue()
+  await sales.startSlotQueue()
  await sales.subscribe()
  sales.running = true
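The two branches disagree on the shape of the processing callback: the head branch hands the worker an explicit done future, while the base branch awaits the handler and swallows cancellation. An illustrative sketch of the two shapes, using std/asyncdispatch rather than chronos and hypothetical names:

    import std/asyncdispatch

    type Item = object
      id: int

    proc processWithDone(item: Item, done: Future[void]) {.async.} =
      # head-branch style: the worker signals completion through `done`
      echo "processing ", item.id
      done.complete()

    proc processAwaited(item: Item) {.async.} =
      # base-branch style: the caller awaits the handler directly
      echo "processing ", item.id

    proc run() {.async.} =
      let done = newFuture[void]("done")
      asyncCheck processWithDone(Item(id: 1), done)
      await done
      await processAwaited(Item(id: 2))

    waitFor run()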


@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,35 +7,34 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
##

-##                                                       +--------------------------------------+
-##                                                       |              RESERVATION             |
-## +---------------------------------------------------+ |--------------------------------------|
-## |                 AVAILABILITY                      | | ReservationId  | id             | PK |
-## |---------------------------------------------------| |--------------------------------------|
-## | AvailabilityId | id                          | PK |<-||-------o<-| AvailabilityId | availabilityId | FK |
-## |---------------------------------------------------| |--------------------------------------|
-## | UInt256        | totalSize                   |    | | UInt256        | size           |    |
-## |---------------------------------------------------| |--------------------------------------|
-## | UInt256        | freeSize                    |    | | UInt256        | slotIndex      |    |
-## |---------------------------------------------------| +--------------------------------------+
-## | UInt256        | duration                    |    |
-## |---------------------------------------------------|
-## | UInt256        | minPricePerBytePerSecond    |    |
-## |---------------------------------------------------|
-## | UInt256        | totalCollateral             |    |
-## |---------------------------------------------------|
-## | UInt256        | totalRemainingCollateral    |    |
-## +---------------------------------------------------+
+##                                            +--------------------------------------+
+##                                            |              RESERVATION             |
+## +----------------------------------------+ |--------------------------------------|
+## |             AVAILABILITY               | | ReservationId  | id             | PK |
+## |----------------------------------------| |--------------------------------------|
+## | AvailabilityId | id               | PK |<-||-------o<-| AvailabilityId | availabilityId | FK |
+## |----------------------------------------| |--------------------------------------|
+## | UInt256        | totalSize        |    | | UInt256        | size           |    |
+## |----------------------------------------| |--------------------------------------|
+## | UInt256        | freeSize         |    | | UInt256        | slotIndex      |    |
+## |----------------------------------------| +--------------------------------------+
+## | UInt256        | duration         |    |
+## |----------------------------------------|
+## | UInt256        | minPrice         |    |
+## |----------------------------------------|
+## | UInt256        | maxCollateral    |    |
+## +----------------------------------------+
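The diagram encodes a one-to-many relationship: each Reservation carries an availabilityId foreign key back to its Availability. A dependency-free sketch of that lookup, with deliberately simplified types (int ids instead of 32-byte ids):

    type
      Availability = object
        id: int
        totalSize, freeSize: uint64
      Reservation = object
        id: int
        availabilityId: int
        size: uint64

    proc reservationsFor(all: seq[Reservation], availabilityId: int): seq[Reservation] =
      # FK lookup: every reservation whose availabilityId matches
      for r in all:
        if r.availabilityId == availabilityId:
          result.add r

    let avail = Availability(id: 1, totalSize: 100, freeSize: 40)
    let reservations = @[
      Reservation(id: 10, availabilityId: 1, size: 30),
      Reservation(id: 11, availabilityId: 1, size: 30),
      Reservation(id: 12, availabilityId: 2, size: 5),
    ]
    assert reservationsFor(reservations, avail.id).len == 2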
-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

import std/sequtils
import std/sugar
import std/typetraits
import std/sequtils
-import std/times

import pkg/chronos
import pkg/datastore
+import pkg/nimcrypto
import pkg/questionable
import pkg/questionable/results
import pkg/stint

@ -52,10 +51,9 @@ import ../units

export requests
export logutils

-from nimcrypto import randomBytes
-
logScope:
-  topics = "marketplace sales reservations"
+  topics = "sales reservations"
type
  AvailabilityId* = distinct array[32, byte]

@ -64,42 +62,28 @@ type
  SomeStorableId = AvailabilityId | ReservationId

  Availability* = ref object
    id* {.serialize.}: AvailabilityId
-    totalSize* {.serialize.}: uint64
-    freeSize* {.serialize.}: uint64
-    duration* {.serialize.}: uint64
-    minPricePerBytePerSecond* {.serialize.}: UInt256
-    totalCollateral {.serialize.}: UInt256
-    totalRemainingCollateral* {.serialize.}: UInt256
-    # If set to false, the availability will not accept new slots.
-    # If enabled, it will not impact any existing slots that are already being hosted.
-    enabled* {.serialize.}: bool
-    # Specifies the latest timestamp after which the availability will no longer host any slots.
-    # If set to 0, there will be no restrictions.
-    until* {.serialize.}: SecondsSince1970
+    totalSize* {.serialize.}: UInt256
+    freeSize* {.serialize.}: UInt256
+    duration* {.serialize.}: UInt256
+    minPrice* {.serialize.}: UInt256 # minimal price paid for the whole hosted slot for the request's duration
+    maxCollateral* {.serialize.}: UInt256

  Reservation* = ref object
    id* {.serialize.}: ReservationId
    availabilityId* {.serialize.}: AvailabilityId
-    size* {.serialize.}: uint64
+    size* {.serialize.}: UInt256
    requestId* {.serialize.}: RequestId
-    slotIndex* {.serialize.}: uint64
-    validUntil* {.serialize.}: SecondsSince1970
+    slotIndex* {.serialize.}: UInt256

  Reservations* = ref object of RootObj
-    availabilityLock: AsyncLock
-    # Lock for protecting assertions of availability's sizes when searching for matching availability
+    availabilityLock: AsyncLock # Lock for protecting assertions of availability's sizes when searching for matching availability
    repo: RepoStore
-    OnAvailabilitySaved: ?OnAvailabilitySaved
+    onAvailabilityAdded: ?OnAvailabilityAdded

-  GetNext* = proc(): Future[?seq[byte]] {.async: (raises: [CancelledError]), closure.}
-  IterDispose* = proc(): Future[?!void] {.async: (raises: [CancelledError]), closure.}
-  OnAvailabilitySaved* =
-    proc(availability: Availability): Future[void] {.async: (raises: []).}
+  GetNext* = proc(): Future[?seq[byte]] {.upraises: [], gcsafe, closure.}
+  IterDispose* = proc(): Future[?!void] {.gcsafe, closure.}
+  OnAvailabilityAdded* = proc(availability: Availability): Future[void] {.upraises: [], gcsafe.}

  StorableIter* = ref object
    finished*: bool
    next*: GetNext
    dispose*: IterDispose

  ReservationsError* = object of CodexError
  ReserveFailedError* = object of ReservationsError
  ReleaseFailedError* = object of ReservationsError
@ -109,20 +93,13 @@ type
  SerializationError* = object of ReservationsError
  UpdateFailedError* = object of ReservationsError
  BytesOutOfBoundsError* = object of ReservationsError
-  UntilOutOfBoundsError* = object of ReservationsError

const
  SalesKey = (CodexMetaKey / "sales").tryGet # TODO: move to sales module
  ReservationsKey = (SalesKey / "reservations").tryGet

proc hash*(x: AvailabilityId): Hash {.borrow.}
-proc all*(
-  self: Reservations, T: type SomeStorableObject
-): Future[?!seq[T]] {.async: (raises: [CancelledError]).}
-proc all*(
-  self: Reservations, T: type SomeStorableObject, availabilityId: AvailabilityId
-): Future[?!seq[T]] {.async: (raises: [CancelledError]).}
+proc all*(self: Reservations, T: type SomeStorableObject): Future[?!seq[T]] {.async.}
template withLock(lock, body) =
  try:

@ -132,58 +109,35 @@ template withLock(lock, body) =
    if lock.locked:
      lock.release()

-proc new*(T: type Reservations, repo: RepoStore): Reservations =
-  T(availabilityLock: newAsyncLock(), repo: repo)
+proc new*(T: type Reservations,
+          repo: RepoStore): Reservations =
+  T(availabilityLock: newAsyncLock(),repo: repo)
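withLock acquires the availability lock around its body and releases it in a finally clause. A self-contained approximation using std/locks instead of chronos' AsyncLock, so it runs without dependencies:

    import std/locks

    template withLock(lock, body: untyped) =
      acquire(lock)        # the real template awaits an AsyncLock instead
      try:
        body
      finally:
        release(lock)      # released even on early return or raise

    var l: Lock
    initLock(l)
    var counter = 0
    withLock(l):
      inc counter
    assert counter == 1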
proc init*(
  _: type Availability,
-  totalSize: uint64,
-  freeSize: uint64,
-  duration: uint64,
-  minPricePerBytePerSecond: UInt256,
-  totalCollateral: UInt256,
-  enabled: bool,
-  until: SecondsSince1970,
-): Availability =
+  totalSize: UInt256,
+  freeSize: UInt256,
+  duration: UInt256,
+  minPrice: UInt256,
+  maxCollateral: UInt256): Availability =
  var id: array[32, byte]
  doAssert randomBytes(id) == 32
-  Availability(
-    id: AvailabilityId(id),
-    totalSize: totalSize,
-    freeSize: freeSize,
-    duration: duration,
-    minPricePerBytePerSecond: minPricePerBytePerSecond,
-    totalCollateral: totalCollateral,
-    totalRemainingCollateral: totalCollateral,
-    enabled: enabled,
-    until: until,
-  )
-
-func totalCollateral*(self: Availability): UInt256 {.inline.} =
-  return self.totalCollateral
-
-proc `totalCollateral=`*(self: Availability, value: UInt256) {.inline.} =
-  self.totalCollateral = value
-  self.totalRemainingCollateral = value
+  Availability(id: AvailabilityId(id), totalSize:totalSize, freeSize: freeSize, duration: duration, minPrice: minPrice, maxCollateral: maxCollateral)
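In the base branch, assigning totalCollateral also resets totalRemainingCollateral, so the two counters start out equal. A reduced sketch of that coupling, with plain uint64 in place of UInt256:

    type Availability = object
      totalCollateral: uint64
      totalRemainingCollateral: uint64

    proc setTotalCollateral(a: var Availability, value: uint64) =
      # mirrors the base branch: assigning total collateral also resets
      # the remaining budget to the full amount
      a.totalCollateral = value
      a.totalRemainingCollateral = value

    var a = Availability()
    a.setTotalCollateral(500)
    assert a.totalRemainingCollateral == 500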
proc init*(
  _: type Reservation,
  availabilityId: AvailabilityId,
-  size: uint64,
+  size: UInt256,
  requestId: RequestId,
-  slotIndex: uint64,
-  validUntil: SecondsSince1970,
+  slotIndex: UInt256
): Reservation =
  var id: array[32, byte]
  doAssert randomBytes(id) == 32
-  Reservation(
-    id: ReservationId(id),
-    availabilityId: availabilityId,
-    size: size,
-    requestId: requestId,
-    slotIndex: slotIndex,
-    validUntil: validUntil,
-  )
+  Reservation(id: ReservationId(id), availabilityId: availabilityId, size: size, requestId: requestId, slotIndex: slotIndex)

func toArray(id: SomeStorableId): array[32, byte] =
  array[32, byte](id)
@ -192,27 +146,24 @@ proc `==`*(x, y: AvailabilityId): bool {.borrow.}
proc `==`*(x, y: ReservationId): bool {.borrow.}
proc `==`*(x, y: Reservation): bool =
  x.id == y.id

proc `==`*(x, y: Availability): bool =
  x.id == y.id

-proc `$`*(id: SomeStorableId): string =
-  id.toArray.toHex
+proc `$`*(id: SomeStorableId): string = id.toArray.toHex

proc toErr[E1: ref CatchableError, E2: ReservationsError](
-  e1: E1, _: type E2, msg: string = e1.msg
-): ref E2 =
+  e1: E1,
+  _: type E2,
+  msg: string = e1.msg): ref E2 =
  return newException(E2, msg, e1)

-logutils.formatIt(LogFormat.textLines, SomeStorableId):
-  it.short0xHexLog
-logutils.formatIt(LogFormat.json, SomeStorableId):
-  it.to0xHexLog
+logutils.formatIt(LogFormat.textLines, SomeStorableId): it.short0xHexLog
+logutils.formatIt(LogFormat.json, SomeStorableId): it.to0xHexLog

-proc `OnAvailabilitySaved=`*(
-    self: Reservations, OnAvailabilitySaved: OnAvailabilitySaved
-) =
-  self.OnAvailabilitySaved = some OnAvailabilitySaved
+proc `onAvailabilityAdded=`*(self: Reservations,
+                            onAvailabilityAdded: OnAvailabilityAdded) =
+  self.onAvailabilityAdded = some onAvailabilityAdded
func key*(id: AvailabilityId): ?!Key =
  ## sales / reservations / <availabilityId>

@ -225,39 +176,27 @@ func key*(reservationId: ReservationId, availabilityId: AvailabilityId): ?!Key =

func key*(availability: Availability): ?!Key =
  return availability.id.key

-func maxCollateralPerByte*(availability: Availability): UInt256 =
-  # If freeSize happens to be zero, we convention that the maxCollateralPerByte
-  # should be equal to totalRemainingCollateral.
-  if availability.freeSize == 0.uint64:
-    return availability.totalRemainingCollateral
-  return availability.totalRemainingCollateral div availability.freeSize.stuint(256)
-
func key*(reservation: Reservation): ?!Key =
  return key(reservation.id, reservation.availabilityId)

-func available*(self: Reservations): uint =
-  self.repo.available.uint
+func available*(self: Reservations): uint = self.repo.available.uint

func hasAvailable*(self: Reservations, bytes: uint): bool =
  self.repo.available(bytes.NBytes)

proc exists*(
-  self: Reservations, key: Key
-): Future[bool] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  key: Key): Future[bool] {.async.} =
  let exists = await self.repo.metaDs.ds.contains(key)
  return exists
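A worked example of the removed maxCollateralPerByte convention, with plain integers standing in for UInt256: the remaining collateral is spread across the remaining free bytes, and a zero freeSize falls back to the whole remaining amount:

    proc maxCollateralPerByte(totalRemainingCollateral, freeSize: uint64): uint64 =
      if freeSize == 0:
        return totalRemainingCollateral
      totalRemainingCollateral div freeSize

    assert maxCollateralPerByte(1000, 250) == 4   # 4 units of collateral per byte
    assert maxCollateralPerByte(1000, 0) == 1000  # zero-size fallback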
-iterator items(self: StorableIter): auto =
-  while not self.finished:
-    yield self.next()
-
proc getImpl(
-  self: Reservations, key: Key
-): Future[?!seq[byte]] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  key: Key): Future[?!seq[byte]] {.async.} =
  if not await self.exists(key):
-    let err =
-      newException(NotExistsError, "object with key " & $key & " does not exist")
+    let err = newException(NotExistsError, "object with key " & $key & " does not exist")
    return failure(err)

  without serialized =? await self.repo.metaDs.ds.get(key), error:

@ -266,8 +205,10 @@ proc getImpl(
  return success serialized

proc get*(
-  self: Reservations, key: Key, T: type SomeStorableObject
-): Future[?!T] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  key: Key,
+  T: type SomeStorableObject): Future[?!T] {.async.} =
  without serialized =? await self.getImpl(key), error:
    return failure(error)
@ -277,29 +218,29 @@ proc get*(
  return success obj

proc updateImpl(
-  self: Reservations, obj: SomeStorableObject
-): Future[?!void] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  obj: SomeStorableObject): Future[?!void] {.async.} =
  trace "updating " & $(obj.type), id = obj.id

  without key =? obj.key, error:
    return failure(error)

-  if err =? (await self.repo.metaDs.ds.put(key, @(obj.toJson.toBytes))).errorOption:
+  if err =? (await self.repo.metaDs.ds.put(
+    key,
+    @(obj.toJson.toBytes)
+  )).errorOption:
    return failure(err.toErr(UpdateFailedError))

  return success()

proc updateAvailability(
-  self: Reservations, obj: Availability
-): Future[?!void] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  obj: Availability): Future[?!void] {.async.} =
  logScope:
    availabilityId = obj.id

-  if obj.until < 0:
-    let error =
-      newException(UntilOutOfBoundsError, "Cannot set until to a negative value")
-    return failure(error)
-
  without key =? obj.key, error:
    return failure(error)
@ -308,70 +249,68 @@ proc updateAvailability(
      trace "Creating new Availability"
      let res = await self.updateImpl(obj)
      # inform subscribers that Availability has been added
-      if OnAvailabilitySaved =? self.OnAvailabilitySaved:
-        await OnAvailabilitySaved(obj)
+      if onAvailabilityAdded =? self.onAvailabilityAdded:
+        # when chronos v4 is implemented, and OnAvailabilityAdded is annotated
+        # with async:(raises:[]), we can remove this try/catch as we know, with
+        # certainty, that nothing will be raised
+        try:
+          await onAvailabilityAdded(obj)
+        except CancelledError as e:
+          raise e
+        except CatchableError as e:
+          # we don't have any insight into types of exceptions that
+          # `onAvailabilityAdded` can raise because it is caller-defined
+          warn "Unknown error during 'onAvailabilityAdded' callback", error = e.msg
      return res
    else:
      return failure(err)

-  if obj.until > 0:
-    without allReservations =? await self.all(Reservation, obj.id), error:
-      error.msg = "Error updating reservation: " & error.msg
-      return failure(error)
-
-    let requestEnds = allReservations.mapIt(it.validUntil)
-
-    if requestEnds.len > 0 and requestEnds.max > obj.until:
-      let error = newException(
-        UntilOutOfBoundsError,
-        "Until parameter must be greater or equal to the longest currently hosted slot",
-      )
-      return failure(error)
-
  # Sizing of the availability changed, we need to adjust the repo reservation accordingly
  if oldAvailability.totalSize != obj.totalSize:
    trace "totalSize changed, updating repo reservation"
    if oldAvailability.totalSize < obj.totalSize: # storage added
-      if reserveErr =? (
-        await self.repo.reserve((obj.totalSize - oldAvailability.totalSize).NBytes)
-      ).errorOption:
+      if reserveErr =? (await self.repo.reserve((obj.totalSize - oldAvailability.totalSize).truncate(uint).NBytes)).errorOption:
        return failure(reserveErr.toErr(ReserveFailedError))
    elif oldAvailability.totalSize > obj.totalSize: # storage removed
-      if reserveErr =? (
-        await self.repo.release((oldAvailability.totalSize - obj.totalSize).NBytes)
-      ).errorOption:
+      if reserveErr =? (await self.repo.release((oldAvailability.totalSize - obj.totalSize).truncate(uint).NBytes)).errorOption:
        return failure(reserveErr.toErr(ReleaseFailedError))

  let res = await self.updateImpl(obj)

-  if oldAvailability.freeSize < obj.freeSize or oldAvailability.duration < obj.duration or
-      oldAvailability.minPricePerBytePerSecond < obj.minPricePerBytePerSecond or
-      oldAvailability.totalRemainingCollateral < obj.totalRemainingCollateral:
-    # availability updated
+  if oldAvailability.freeSize < obj.freeSize: # availability added
    # inform subscribers that Availability has been modified (with increased
    # size)
-    if OnAvailabilitySaved =? self.OnAvailabilitySaved:
-      await OnAvailabilitySaved(obj)
+    if onAvailabilityAdded =? self.onAvailabilityAdded:
+      # when chronos v4 is implemented, and OnAvailabilityAdded is annotated
+      # with async:(raises:[]), we can remove this try/catch as we know, with
+      # certainty, that nothing will be raised
+      try:
+        await onAvailabilityAdded(obj)
+      except CancelledError as e:
+        raise e
+      except CatchableError as e:
+        # we don't have any insight into types of exceptions that
+        # `onAvailabilityAdded` can raise because it is caller-defined
+        warn "Unknown error during 'onAvailabilityAdded' callback", error = e.msg

  return res
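Both variants notify subscribers after persisting, but the head branch wraps the caller-defined callback so that only cancellation propagates. A synchronous, dependency-free sketch of that isolation pattern (the CancelledError type here is a stand-in for chronos'):

    type
      CancelledError = object of CatchableError   # stand-in for chronos' type
      Callback = proc() {.raises: [CatchableError].}

    proc notify(cb: Callback) =
      try:
        cb()
      except CancelledError as e:
        raise e                                   # cancellation must propagate
      except CatchableError as e:
        echo "callback failed, ignoring: ", e.msg

    proc flaky() {.raises: [CatchableError].} =
      raise newException(ValueError, "boom")

    notify(flaky)   # prints the warning instead of crashing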
proc update*(
-  self: Reservations, obj: Reservation
-): Future[?!void] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  obj: Reservation): Future[?!void] {.async.} =
  return await self.updateImpl(obj)

proc update*(
-  self: Reservations, obj: Availability
-): Future[?!void] {.async: (raises: [CancelledError]).} =
-  try:
-    withLock(self.availabilityLock):
-      return await self.updateAvailability(obj)
-  except AsyncLockError as e:
-    error "Lock error when trying to update the availability", err = e.msg
-    return failure(e)
+  self: Reservations,
+  obj: Availability): Future[?!void] {.async.} =
+  withLock(self.availabilityLock):
+    return await self.updateAvailability(obj)
proc delete(
-  self: Reservations, key: Key
-): Future[?!void] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  key: Key): Future[?!void] {.async.} =
  trace "deleting object", key

  if not await self.exists(key):

@ -383,27 +322,28 @@ proc delete(
  return success()

proc deleteReservation*(
  self: Reservations,
  reservationId: ReservationId,
-  availabilityId: AvailabilityId,
-  returnedCollateral: ?UInt256 = UInt256.none,
-): Future[?!void] {.async: (raises: [CancelledError]).} =
+  availabilityId: AvailabilityId): Future[?!void] {.async.} =
  logScope:
    reservationId
    availabilityId

  trace "deleting reservation"
  without key =? key(reservationId, availabilityId), error:
    return failure(error)

-  try:
-    withLock(self.availabilityLock):
-      without reservation =? (await self.get(key, Reservation)), error:
-        if error of NotExistsError:
-          return success()
-        else:
-          return failure(error)
+  withLock(self.availabilityLock):
+    without reservation =? (await self.get(key, Reservation)), error:
+      if error of NotExistsError:
+        return success()
+      else:
+        return failure(error)
+
+    if reservation.size > 0.u256:
+      trace "returning remaining reservation bytes to availability",
+        size = reservation.size

      without availabilityKey =? availabilityId.key, error:
        return failure(error)

@ -411,54 +351,38 @@ proc deleteReservation*(
      without var availability =? await self.get(availabilityKey, Availability), error:
        return failure(error)

-      if reservation.size > 0.uint64:
-        trace "returning remaining reservation bytes to availability",
-          size = reservation.size
-        availability.freeSize += reservation.size
-
-      if collateral =? returnedCollateral:
-        availability.totalRemainingCollateral += collateral
+      availability.freeSize += reservation.size

      if updateErr =? (await self.updateAvailability(availability)).errorOption:
        return failure(updateErr)

    if err =? (await self.repo.metaDs.ds.delete(key)).errorOption:
      return failure(err.toErr(DeleteFailedError))

    return success()
-  except AsyncLockError as e:
-    error "Lock error when trying to delete the availability", err = e.msg
-    return failure(e)
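Accounting-wise, deleteReservation returns whatever the reservation still holds to the availability before deleting the record. A dependency-free sketch of that flow:

    type
      Availability = object
        freeSize: uint64
      Reservation = object
        size: uint64

    proc deleteReservation(avail: var Availability, res: Reservation) =
      if res.size > 0:
        avail.freeSize += res.size   # return unused bytes
      # ...then the persisted reservation record would be deleted

    var avail = Availability(freeSize: 10)
    deleteReservation(avail, Reservation(size: 5))
    assert avail.freeSize == 15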
# TODO: add support for deleting availabilities
# To delete, must not have any active sales.

proc createAvailability*(
  self: Reservations,
-  size: uint64,
-  duration: uint64,
-  minPricePerBytePerSecond: UInt256,
-  totalCollateral: UInt256,
-  enabled: bool,
-  until: SecondsSince1970,
-): Future[?!Availability] {.async: (raises: [CancelledError]).} =
-  trace "creating availability",
-    size, duration, minPricePerBytePerSecond, totalCollateral, enabled, until
-
-  if until < 0:
-    let error =
-      newException(UntilOutOfBoundsError, "Cannot set until to a negative value")
-    return failure(error)
+  size: UInt256,
+  duration: UInt256,
+  minPrice: UInt256,
+  maxCollateral: UInt256): Future[?!Availability] {.async.} =
+
+  trace "creating availability", size, duration, minPrice, maxCollateral

  let availability = Availability.init(
-    size, size, duration, minPricePerBytePerSecond, totalCollateral, enabled, until
+    size, size, duration, minPrice, maxCollateral
  )
-  let bytes = availability.freeSize
+  let bytes = availability.freeSize.truncate(uint)

  if reserveErr =? (await self.repo.reserve(bytes.NBytes)).errorOption:
    return failure(reserveErr.toErr(ReserveFailedError))

  if updateErr =? (await self.update(availability)).errorOption:
    # rollback the reserve
    trace "rolling back reserve"
    if rollbackErr =? (await self.repo.release(bytes.NBytes)).errorOption:

@ -470,130 +394,115 @@ proc createAvailability*(

  return success(availability)
method createReservation*(
  self: Reservations,
  availabilityId: AvailabilityId,
-  slotSize: uint64,
+  slotSize: UInt256,
  requestId: RequestId,
-  slotIndex: uint64,
-  collateralPerByte: UInt256,
-  validUntil: SecondsSince1970,
-): Future[?!Reservation] {.async: (raises: [CancelledError]), base.} =
-  try:
-    withLock(self.availabilityLock):
-      without availabilityKey =? availabilityId.key, error:
-        return failure(error)
+  slotIndex: UInt256
+): Future[?!Reservation] {.async, base.} =
+
+  withLock(self.availabilityLock):
+    without availabilityKey =? availabilityId.key, error:
+      return failure(error)

      without availability =? await self.get(availabilityKey, Availability), error:
        return failure(error)

      # Check that the found availability has enough free space after the lock has been acquired, to prevent asynchronous Availiability modifications
      if availability.freeSize < slotSize:
        let error = newException(
          BytesOutOfBoundsError,
-          "trying to reserve an amount of bytes that is greater than the free size of the Availability",
-        )
+          "trying to reserve an amount of bytes that is greater than the total size of the Availability")
        return failure(error)

-      trace "Creating reservation",
-        availabilityId, slotSize, requestId, slotIndex, validUntil = validUntil
-
-      let reservation =
-        Reservation.init(availabilityId, slotSize, requestId, slotIndex, validUntil)
+      trace "Creating reservation", availabilityId, slotSize, requestId, slotIndex
+
+      let reservation = Reservation.init(availabilityId, slotSize, requestId, slotIndex)

      if createResErr =? (await self.update(reservation)).errorOption:
        return failure(createResErr)

      # reduce availability freeSize by the slot size, which is now accounted for in
      # the newly created Reservation
      availability.freeSize -= slotSize

-      # adjust the remaining totalRemainingCollateral
-      availability.totalRemainingCollateral -= slotSize.u256 * collateralPerByte
-
      # update availability with reduced size
-      trace "Updating availability with reduced size", freeSize = availability.freeSize
+      trace "Updating availability with reduced size"
      if updateErr =? (await self.updateAvailability(availability)).errorOption:
        trace "Updating availability failed, rolling back reservation creation"

        without key =? reservation.key, keyError:
          keyError.parent = updateErr
          return failure(keyError)

        # rollback the reservation creation
        if rollbackErr =? (await self.delete(key)).errorOption:
          rollbackErr.parent = updateErr
          return failure(rollbackErr)

        return failure(updateErr)

      trace "Reservation succesfully created"
      return success(reservation)
-  except AsyncLockError as e:
-    error "Lock error when trying to delete the availability", err = e.msg
-    return failure(e)
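createReservation persists the reservation first and shrinks the availability second; if the second write fails, the first is rolled back so both records stay consistent. A sketch of that pattern against a hypothetical in-memory store:

    import std/tables

    # hypothetical in-memory stand-in for the metadata store
    var store = initTable[string, uint64]()

    proc createReservation(slotSize: uint64, availFree: var uint64,
                           simulateUpdateFailure: bool): bool =
      store["reservation"] = slotSize      # 1. persist the reservation
      availFree -= slotSize                # 2. shrink the availability
      if simulateUpdateFailure:            # 3. persisting availability failed:
        store.del("reservation")           #    roll back the reservation write
        availFree += slotSize              #    and restore the free size
        return false
      true

    var free = 100'u64
    assert createReservation(40, free, simulateUpdateFailure = false)
    assert free == 60
    assert not createReservation(40, free, simulateUpdateFailure = true)
    assert free == 60                      # rollback left accounting unchanged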
proc returnBytesToAvailability*(
  self: Reservations,
  availabilityId: AvailabilityId,
  reservationId: ReservationId,
-  bytes: uint64,
-): Future[?!void] {.async: (raises: [CancelledError]).} =
+  bytes: UInt256): Future[?!void] {.async.} =
  logScope:
    reservationId
    availabilityId

-  try:
-    withLock(self.availabilityLock):
-      without key =? key(reservationId, availabilityId), error:
-        return failure(error)
+  withLock(self.availabilityLock):
+    without key =? key(reservationId, availabilityId), error:
+      return failure(error)

      without var reservation =? (await self.get(key, Reservation)), error:
        return failure(error)

      # We are ignoring bytes that are still present in the Reservation because
      # they will be returned to Availability through `deleteReservation`.
      let bytesToBeReturned = bytes - reservation.size

      if bytesToBeReturned == 0:
-        trace "No bytes are returned",
-          requestSizeBytes = bytes, returningBytes = bytesToBeReturned
+        trace "No bytes are returned", requestSizeBytes = bytes, returningBytes = bytesToBeReturned
        return success()

-      trace "Returning bytes",
-        requestSizeBytes = bytes, returningBytes = bytesToBeReturned
+      trace "Returning bytes", requestSizeBytes = bytes, returningBytes = bytesToBeReturned

      # First lets see if we can re-reserve the bytes, if the Repo's quota
      # is depleted then we will fail-fast as there is nothing to be done atm.
-      if reserveErr =? (await self.repo.reserve(bytesToBeReturned.NBytes)).errorOption:
+      if reserveErr =? (await self.repo.reserve(bytesToBeReturned.truncate(uint).NBytes)).errorOption:
        return failure(reserveErr.toErr(ReserveFailedError))

      without availabilityKey =? availabilityId.key, error:
        return failure(error)

      without var availability =? await self.get(availabilityKey, Availability), error:
        return failure(error)

      availability.freeSize += bytesToBeReturned

      # Update availability with returned size
      if updateErr =? (await self.updateAvailability(availability)).errorOption:
        trace "Rolling back returning bytes"
-        if rollbackErr =? (await self.repo.release(bytesToBeReturned.NBytes)).errorOption:
+        if rollbackErr =? (await self.repo.release(bytesToBeReturned.truncate(uint).NBytes)).errorOption:
          rollbackErr.parent = updateErr
          return failure(rollbackErr)
        return failure(updateErr)

      return success()
-  except AsyncLockError as e:
-    error "Lock error when returning bytes to the availability", err = e.msg
-    return failure(e)
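A worked example of the bytesToBeReturned computation above: bytes still held by the Reservation are excluded, because deleteReservation returns those separately:

    let requestSize = 100'u64     # bytes originally reserved for the request
    let reservationLeft = 30'u64  # bytes still sitting in the Reservation
    let bytesToBeReturned = requestSize - reservationLeft
    assert bytesToBeReturned == 70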
proc release*(
  self: Reservations,
  reservationId: ReservationId,
  availabilityId: AvailabilityId,
-  bytes: uint,
-): Future[?!void] {.async: (raises: [CancelledError]).} =
+  bytes: uint): Future[?!void] {.async.} =
  logScope:
    topics = "release"
    bytes

@ -608,20 +517,20 @@ proc release*(
  without var reservation =? (await self.get(key, Reservation)), error:
    return failure(error)

-  if reservation.size < bytes:
+  if reservation.size < bytes.u256:
    let error = newException(
      BytesOutOfBoundsError,
-      "trying to release an amount of bytes that is greater than the total size of the Reservation",
-    )
+      "trying to release an amount of bytes that is greater than the total size of the Reservation")
    return failure(error)

  if releaseErr =? (await self.repo.release(bytes.NBytes)).errorOption:
    return failure(releaseErr.toErr(ReleaseFailedError))

-  reservation.size -= bytes
+  reservation.size -= bytes.u256

  # persist partially used Reservation with updated size
  if err =? (await self.update(reservation)).errorOption:
    # rollback release if an update error encountered
    trace "rolling back release"
    if rollbackErr =? (await self.repo.reserve(bytes.NBytes)).errorOption:
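A minimal model of the partial release above: the reservation shrinks by the released amount, and releasing more than it holds is rejected, mirroring BytesOutOfBoundsError:

    proc release(size: var uint64, bytes: uint64): bool =
      if size < bytes:
        return false   # "greater than the total size of the Reservation"
      size -= bytes
      true

    var size = 10'u64
    assert release(size, 4) and size == 6
    assert not release(size, 100)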
@ -631,9 +540,16 @@ proc release*(
  return success()

+iterator items(self: StorableIter): Future[?seq[byte]] =
+  while not self.finished:
+    yield self.next()
+
proc storables(
-  self: Reservations, T: type SomeStorableObject, queryKey: Key = ReservationsKey
-): Future[?!StorableIter] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  T: type SomeStorableObject,
+  queryKey: Key = ReservationsKey
+): Future[?!StorableIter] {.async.} =
  var iter = StorableIter()
  let query = Query.init(queryKey)
  when T is Availability:

@ -651,16 +567,20 @@ proc storables(
    return failure(error)

  # /sales/reservations
-  proc next(): Future[?seq[byte]] {.async: (raises: [CancelledError]).} =
+  proc next(): Future[?seq[byte]] {.async.} =
    await idleAsync()
    iter.finished = results.finished
-    if not results.finished and res =? (await results.next()) and res.data.len > 0 and
-        key =? res.key and key.namespaces.len == defaultKey.namespaces.len:
+    if not results.finished and
+      res =? (await results.next()) and
+      res.data.len > 0 and
+      key =? res.key and
+      key.namespaces.len == defaultKey.namespaces.len:
      return some res.data
    return none seq[byte]

-  proc dispose(): Future[?!void] {.async: (raises: [CancelledError]).} =
+  proc dispose(): Future[?!void] {.async.} =
    return await results.dispose()

  iter.next = next
@ -668,74 +588,70 @@ proc storables(
  return success iter

proc allImpl(
-  self: Reservations, T: type SomeStorableObject, queryKey: Key = ReservationsKey
-): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  T: type SomeStorableObject,
+  queryKey: Key = ReservationsKey
+): Future[?!seq[T]] {.async.} =
  var ret: seq[T] = @[]

  without storables =? (await self.storables(T, queryKey)), error:
    return failure(error)

  for storable in storables.items:
-    try:
-      without bytes =? (await storable):
-        continue
-
-      without obj =? T.fromJson(bytes), error:
-        error "json deserialization error",
-          json = string.fromBytes(bytes), error = error.msg
-        continue
-
-      ret.add obj
-    except CancelledError as err:
-      raise err
-    except CatchableError as err:
-      error "Error when retrieving storable", error = err.msg
-      continue
+    without bytes =? (await storable):
+      continue
+
+    without obj =? T.fromJson(bytes), error:
+      error "json deserialization error",
+        json = string.fromBytes(bytes),
+        error = error.msg
+      continue
+
+    ret.add obj

  return success(ret)

proc all*(
-  self: Reservations, T: type SomeStorableObject
-): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
+  self: Reservations,
+  T: type SomeStorableObject
+): Future[?!seq[T]] {.async.} =
  return await self.allImpl(T)

proc all*(
-  self: Reservations, T: type SomeStorableObject, availabilityId: AvailabilityId
-): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
-  without key =? key(availabilityId):
+  self: Reservations,
+  T: type SomeStorableObject,
+  availabilityId: AvailabilityId
+): Future[?!seq[T]] {.async.} =
+  without key =? (ReservationsKey / $availabilityId):
    return failure("no key")

  return await self.allImpl(T, key)
proc findAvailability*(
  self: Reservations,
-  size, duration: uint64,
-  pricePerBytePerSecond, collateralPerByte: UInt256,
-  validUntil: SecondsSince1970,
-): Future[?Availability] {.async: (raises: [CancelledError]).} =
+  size, duration, minPrice, collateral: UInt256
+): Future[?Availability] {.async.} =
  without storables =? (await self.storables(Availability)), e:
    error "failed to get all storables", error = e.msg
    return none Availability

  for item in storables.items:
-    if bytes =? (await item) and availability =? Availability.fromJson(bytes):
-      if availability.enabled and size <= availability.freeSize and
-          duration <= availability.duration and
-          collateralPerByte <= availability.maxCollateralPerByte and
-          pricePerBytePerSecond >= availability.minPricePerBytePerSecond and
-          (availability.until == 0 or availability.until >= validUntil):
+    if bytes =? (await item) and
+      availability =? Availability.fromJson(bytes):
+
+      if size <= availability.freeSize and
+        duration <= availability.duration and
+        collateral <= availability.maxCollateral and
+        minPrice >= availability.minPrice:

        trace "availability matched",
-          id = availability.id,
-          enabled = availability.enabled,
-          size,
-          availFreeSize = availability.freeSize,
-          duration,
-          availDuration = availability.duration,
-          pricePerBytePerSecond,
-          availMinPricePerBytePerSecond = availability.minPricePerBytePerSecond,
-          collateralPerByte,
-          availMaxCollateralPerByte = availability.maxCollateralPerByte,
-          until = availability.until
+          id = availability.id,
+          size, availFreeSize = availability.freeSize,
+          duration, availDuration = availability.duration,
+          minPrice, availMinPrice = availability.minPrice,
+          collateral, availMaxCollateral = availability.maxCollateral

        # TODO: As soon as we're on ARC-ORC, we can use destructors
        # to automatically dispose our iterators when they fall out of scope.

@ -747,13 +663,7 @@ proc findAvailability*(
        trace "availability did not match",
-          id = availability.id,
-          enabled = availability.enabled,
-          size,
-          availFreeSize = availability.freeSize,
-          duration,
-          availDuration = availability.duration,
-          pricePerBytePerSecond,
-          availMinPricePerBytePerSecond = availability.minPricePerBytePerSecond,
-          collateralPerByte,
-          availMaxCollateralPerByte = availability.maxCollateralPerByte,
-          until = availability.until
+          id = availability.id,
+          size, availFreeSize = availability.freeSize,
+          duration, availDuration = availability.duration,
+          minPrice, availMinPrice = availability.minPrice,
+          collateral, availMaxCollateral = availability.maxCollateral
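The head branch's matching predicate reads directly off the four comparisons above. A dependency-free restatement with plain uint64 fields:

    type Availability = object
      freeSize, duration: uint64
      minPrice, maxCollateral: uint64

    proc matches(a: Availability; size, duration, collateral, price: uint64): bool =
      size <= a.freeSize and
        duration <= a.duration and
        collateral <= a.maxCollateral and
        price >= a.minPrice

    let a = Availability(freeSize: 100, duration: 1000, minPrice: 5, maxCollateral: 50)
    assert a.matches(80, 500, 20, 10)
    assert not a.matches(80, 500, 20, 1)   # pays below minPrice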

Some files were not shown because too many files have changed in this diff.