Compare commits

..

25 Commits

Author SHA1 Message Date
Slava
ad28204bad
chore: update testnet marketplace address (#983) (#984)
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-11-04 07:32:01 +02:00
Slava
2a4a37934c
Add ETH_PRIVATE_KEY to Docker entrypoint (#982)
* Add ETH_PRIVATE_KEY to Docker entrypoint

* Add deprecation warning for PRIV_KEY variable

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-11-04 07:31:57 +02:00
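The entrypoint change above lives in a shell script, but the fallback it implements is simple. A minimal Nim sketch of the same logic (the proc name `resolvePrivateKey` is illustrative, not from the repo):

```nim
import std/os

# Prefer the new variable; fall back to the deprecated one with a warning.
proc resolvePrivateKey(): string =
  if existsEnv("ETH_PRIVATE_KEY"):
    return getEnv("ETH_PRIVATE_KEY")
  if existsEnv("PRIV_KEY"):
    stderr.writeLine "warning: PRIV_KEY is deprecated, use ETH_PRIVATE_KEY"
    return getEnv("PRIV_KEY")
  ""
```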
Ben Bierens
3a734aff59
fix: bumps ethers to fix missing nonce error (#980)
* fix: bumps ethers to fix missing nonce error

* fix was merged in nim-ethers

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
2024-11-04 07:31:51 +02:00
Eric
20a63ce7d7
chore: update dependencies, especially nim-ethers to chronos v4 compatible version (#968)
* chore: bump dependencies, including nim-ethers with chronos v4 support

Bumps the following dependencies:
- nim-ethers to commit 507ac6a4cc71cec9be7693fa393db4a49b52baf9, which contains a pinned nim-eth version. This will be replaced by a versioned release once one is available. Crucially, this version of ethers fixes the nonce management that was causing issues in the Codex testnet.
- nim-json-rpc to v0.4.4
- nim-json-serialization to v0.2.8
- nim-serde to v1.2.2
- nim-serialization to v0.2.4

Currently, one of the integration tests is failing.

* fix integration test

- When a state's run was cancelled, the cancellation was caught as an error because all CatchableErrors were being caught, causing a transition to SaleErrored even though cancellation of run is not actually an error. Handling cancellation separately fixed the issue (a sketch of the pattern follows this entry).
- Stopping of the clock was moved to after `HostInteractions` (sales) which avoided an assertion around getting time when the clock was not started.

* bump ethers to include nonce fix and filter not found fix

* bump ethers: fixes a missing symbol that was not exported by ethers

* Fix cirdl test imports/exports

* Debugging in ci

* Handle CancelledErrors for state.run in one place only

* Rename `config` to `configuration`

There was a symbol clash preventing compilation and it was easiest to rename `config` to `configuration` in the contracts. Not even remotely ideal, but it was the only way.

* bump ethers to latest

Prevents an issue where the `JsonNode.items` symbol could not be found

* More changes to support `config` > `configuration`

* cleanup

* testing to see if this fixes failure in ci

* bumps contracts

- ensures slot is free before allowing reservation
- renames config to configuration to avoid symbol clash
2024-11-04 07:31:43 +02:00
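The cancellation fix described in this entry boils down to not treating `CancelledError` as a sale failure. A hedged sketch of the pattern (type and proc names are illustrative; the actual sales state machine differs):

```nim
import pkg/chronos

type
  SaleState = ref object of RootObj
    run: proc(): Future[SaleState] {.gcsafe.}
  SaleErrored = ref object of SaleState
    error: ref CatchableError

proc step(state: SaleState): Future[SaleState] {.async.} =
  try:
    return await state.run()
  except CancelledError as e:
    # cancellation of `run` is not an error: re-raise so the state
    # machine winds down cleanly instead of transitioning to SaleErrored
    raise e
  except CatchableError as e:
    # genuine failures still transition the sale to an error state
    return SaleErrored(error: e)
```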
Arnaud
cf0d28130a
Move the upload headers to the POST method (#978) 2024-11-04 07:31:37 +02:00
Slava
35c32844f8
Build Postman Collection (#973)
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-11-04 07:31:30 +02:00
Arnaud
e86589b79c
feat: add metadata to the manifest (#960)
* Add metadata to the manifest

* Remove useless import

* Fix the openapi documentation

* Use optional fields instead of default values

* Remove testRestApi target

* Return failure when the protobuf cannot get the field

* Set download headers and fix cors headers when an error is returned

* Add tests to verify the download headers

* Try to adjust the content length header

* Fix conversion to string

* Remove the content length header

* Remove testRestApi target

* Removing debug messages
2024-11-04 07:31:24 +02:00
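Since #960 stores manifest metadata as optional fields rather than default values, a download handler can emit headers only when the values exist. A rough sketch (field and proc names assumed, not taken from the codebase):

```nim
import std/options

type Manifest = object
  filename: Option[string]   # only serialized when present
  mimetype: Option[string]

# Build the Content-Disposition download header from optional metadata.
proc contentDisposition(m: Manifest): string =
  if m.filename.isSome:
    "attachment; filename=\"" & m.filename.get & "\""
  else:
    "attachment"

let m = Manifest(filename: some("report.pdf"), mimetype: some("application/pdf"))
assert m.contentDisposition == "attachment; filename=\"report.pdf\""
```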
Arnaud
6d04ae42c9
Remove duplicated header (#970) 2024-11-04 07:31:18 +02:00
Arnaud
0582611077
Complete documentation for debug endpoint (#969) 2024-11-04 07:31:13 +02:00
Eric
96a9c687c3
fix(slot-reservations): Avoid slot filled cancellations (#963)
* Avoid cancelling states when slot is filled

* improve logging

Improves logging for situations where a Sale should be ignored instead of being considered an error, including when reservation is not allowed and when a slot was filled by another host.

* remove onSlotFilled unit tests from states
2024-11-04 07:31:01 +02:00
Slava
0c647d8337
chore: new marketplace address for testnet (#961)
https://github.com/codex-storage/infra-codex/issues/248

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-10-21 13:31:54 +03:00
Ben Bierens
f196caf8cb
Download API upgrade (#955)
* Adds API for fetching manifest only and downloading dataset without stream

* Updates openapi.yaml

* Adds tests for downloading manifest-only and without stream.

* review comments by Giuliano

* updates test clients
2024-10-21 13:25:19 +03:00
Adam Uhlíř
bf1434d192
docs: openapi node fix (#950) 2024-10-21 13:25:15 +03:00
Adam Uhlíř
00ab8d712e
ci: linux ci runs on ubuntu-20.04 (#953)
* ci: linux ci runs on ubuntu-20.04

* ci: use ubuntu-20.04 for nim-matrix

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-10-21 13:25:10 +03:00
Ben Bierens
21d996ab3f
Adds log for cirdl download URL (#948) 2024-10-21 13:24:52 +03:00
Adam Uhlíř
eff0d8cd18
feat: partial rewards and withdraws (#880)
* feat: partial rewards and withdraws

* test: missing reserve slot

* test: fix contracts
2024-10-21 13:24:47 +03:00
Ben Bierens
b0607d3fdb
Handles LPStreamError in chunker (#947)
* Handles LPStreamError in chunker

* Adds test for lpstream exception

* Adds tests for other stream exceptions. Cleanup.
2024-10-21 13:24:38 +03:00
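The chunker fix above is about containing stream failures at the read boundary. A minimal sketch of the idea, assuming nim-libp2p's `LPStream` API (`readOnce`, `LPStreamEOFError`); the real chunker code is more involved:

```nim
import pkg/chronos
import pkg/libp2p/stream/lpstream

proc nextChunk(stream: LPStream, chunkSize: int): Future[seq[byte]] {.async.} =
  var buf = newSeq[byte](chunkSize)  # assumes chunkSize > 0
  try:
    let read = await stream.readOnce(addr buf[0], buf.len)
    buf.setLen(read)
  except LPStreamEOFError:
    buf.setLen(0)  # clean end of stream: an empty chunk signals EOF
  except LPStreamError as e:
    # surface stream failures as a catchable chunking error rather
    # than letting them crash the node
    raise newException(CatchableError, "stream error while chunking: " & e.msg)
  return buf
```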
Arnaud
859b7ea0e5
fix(restapi): Add cors headers when the request is returning errors (#942)
* Add cors headers when the request is returning errors

* Prevent nim presto to send multiple cors headers
2024-10-21 13:24:32 +03:00
Eric
29549935ad
Support enforcement of slot reservations before filling slot (#934) 2024-10-21 13:22:55 +03:00
Slava
47061bf29b
Release v0.1.6 (#945)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header
exec git commit --amend --no-edit -S

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request api returns an error  (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests had to defer building the cli params until just before starting the node; when params were added ad-hoc, adding a non-required parameter before a required one caused an error.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of withdrawn client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* remove erasure and por parameters from openapi spec (#915)

* Move Building Codex guide to the main docs site (#893)

* updates Marketplace tutorial documentation (#888)

* updates Marketplace tutorial documentation

* Applies review comments to marketplace-tutorial

* Final formatting touches

* moved `Prerequisites` around

* Fixes indentation in one JSON snippet

* Use CLI args when passed for cirdl in Docker entrypoint (#927)

* Use CLI args when passed for cirdl in Docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase CI timeout

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Validator - support partitioning of the  slot id space (#890)

* Adds validatorPartitionSize and validatorPartitionIndex config options

* adds partitioning options to the validation type

* adds partitioning logic to the validator

* ignores partitionIndex when partitionSize is either 0 or 1

* clips the partition index to <<partitionIndex mod partitionSize>> (sketched in code after this release entry)

* handles negative values for the validation partition index

* updates long description of the new validator cli options

* makes default partitionSize to be 0 for better backward compatibility

* Improving formatting on validator CLI

* refactors validation params into a separate type and simplifies their validation

* removes suspected duplication

* fixes typo in validator CLI help

* updates README

* Applies review comments - using optionals and range types to handle validation params

* Adds initializer to the configFactory for validatorMaxSlots

* [Review] update validator CLI description and README

* [Review]: renaming validationParams to validationConfig (config)

* [Review]: move validationconfig.nim to a higher level (next to validation.nim)

* changes backing type of MaxSlots to be int and makes sure slots are validated without limit when maxSlots is set to 0

* adds more end-to-end test for the validator and the groups

* fixes typo in README and conf.nim

* makes `maxSlotsConstraintRespected` and `shouldValidateSlot` private + updates the tests

* fixes public address of the signer account in the marketplace tutorial

* applies review comments - removes two tests

* Remove moved docs (#930)

* Remove moved document

* Update main Readme and point links to the documentation site

* feat(slot-reservations): Support reserving slots (#907)

* feat(slot-reservations): Support reserving slots

Closes #898.

Wire up reserveSlot and canReserveSlot contract calls, but don't call them

* Remove return value from `reserveSlot`

* convert EthersError to MarketError

* Move convertEthersError to reserveSlot

* bump codex-contracts-eth after rebase

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* bump codex-contracts-eth after rebase

* bump codex-contracts-eth to master after codex-contracts-eth/pull/177 merged

* feat(slot-reservations): Add SaleSlotReserving state (#917)

* convert EthersError to MarketError

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* Add SaleSlotReserving

Adds a new state, SaleSlotReserving, that attempts to reserve a slot before downloading.
If the slot cannot be reserved, the state moves to SaleIgnored.
On error, the state moves to SaleErrored.

SaleIgnored is also updated to pass in `reprocessSlot` and `returnBytes`, controlling the behaviour in the Sales module after the slot is ignored. This is because previously it was assumed that SaleIgnored was only reached when there was no Availability. This is no longer the case, since SaleIgnored can now be reached when a slot cannot be reserved.

* Update SalePreparing

Specify `reprocessSlot` and `returnBytes` when moving to `SaleIgnored` from `SalePreparing`.

Update tests to include test for a raised CatchableError.

* Fix unit test

* Modify `canReserveSlot` and `reserveSlot` params after rebase

* Update MockMarket with new `canReserveSlot` and `reserveSlot` params

* fix after rebase

also bump codex-contracts-eth to master

* Use Ubuntu 20.04 for Linux amd64 releases (#939)

* Use Ubuntu 20.04 for Linux amd64 releases (#932)

* Accept branches with a slash in the name for release workflow (#932)

* Increase artifacts retention-days for release workflow (#932)

* feat(slot-reservations): support SlotReservationsFull event (#926)

* Remove moved docs (#935)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fix: null-ref in networkPeer (#937)

* Fixes nullref in networkPeer

* Removes inflight semaphore

* Revert "Removes inflight semaphore"

This reverts commit 26ec15c6f788df3adb6ff3b912a0c4b5d3139358.

* docs(openapi): provide better documentation for space endpoint parameters (#921)

* Trying to improve documentation

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

---------

Signed-off-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>

* Update Codex Testnet marketplace contract address (#944)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Signed-off-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Co-authored-by: Marcin Czenko <marcin.czenko@pm.me>
2024-10-08 12:22:12 +03:00
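The validator partitioning added in #890 (see the bullet list in the release notes above) amounts to a modulus check over the slot id space. A hedged sketch with assumed names; the actual validation config type differs:

```nim
import std/math

type ValidationConfig = object
  partitionSize: int   # 0 or 1 disables partitioning
  partitionIndex: int  # may be negative; clipped modulo partitionSize

proc shouldValidateSlot(slotIdValue: uint64, config: ValidationConfig): bool =
  if config.partitionSize <= 1:
    return true  # backward-compatible default: validate every slot
  # euclMod keeps the clipped index non-negative even for negative inputs
  let index = euclMod(config.partitionIndex, config.partitionSize)
  int(slotIdValue mod uint64(config.partitionSize)) == index
```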
Slava
7ba5e8c13a
Release v0.1.5 (#941)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header
exec git commit --amend --no-edit -S

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request api returns an error  (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests had to defer building the cli params until just before starting the node; when params were added ad-hoc, adding a non-required parameter before a required one caused an error.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of withdrawn client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* remove erasure and por parameters from openapi spec (#915)

* Move Building Codex guide to the main docs site (#893)

* updates Marketplace tutorial documentation (#888)

* updates Marketplace tutorial documentation

* Applies review comments to marketplace-tutorial

* Final formatting touches

* moved `Prerequisites` around

* Fixes indentation in one JSON snippet

* Use CLI args when passed for cirdl in Docker entrypoint (#927)

* Use CLI args when passed for cirdl in Docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase CI timeout

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Validator - support partitioning of the  slot id space (#890)

* Adds validatorPartitionSize and validatorPartitionIndex config options

* adds partitioning options to the validation type

* adds partitioning logic to the validator

* ignores partitionIndex when partitionSize is either 0 or 1

* clips the partition index to <<partitionIndex mod partitionSize>>

* handles negative values for the validation partition index

* updates long description of the new validator cli options

* makes default partitionSize to be 0 for better backward compatibility

* Improving formatting on validator CLI

* refactors validation params into a separate type and simplifies their validation

* removes suspected duplication

* fixes typo in validator CLI help

* updates README

* Applies review comments - using optionals and range types to handle validation params

* Adds initializer to the configFactory for validatorMaxSlots

* [Review] update validator CLI description and README

* [Review]: renaming validationParams to validationConfig (config)

* [Review]: move validationconfig.nim to a higher level (next to validation.nim)

* changes backing type of MaxSlots to be int and makes sure slots are validated without limit when maxSlots is set to 0

* adds more end-to-end test for the validator and the groups

* fixes typo in README and conf.nim

* makes `maxSlotsConstraintRespected` and `shouldValidateSlot` private + updates the tests

* fixes public address of the signer account in the marketplace tutorial

* applies review comments - removes two tests

* Remove moved docs (#930)

* Remove moved document

* Update main Readme and point links to the documentation site

* feat(slot-reservations): Support reserving slots (#907)

* feat(slot-reservations): Support reserving slots

Closes #898.

Wire up reserveSlot and canReserveSlot contract calls, but don't call them

* Remove return value from `reserveSlot`

* convert EthersError to MarketError

* Move convertEthersError to reserveSlot

* bump codex-contracts-eth after rebase

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* bump codex-contracts-eth after rebase

* bump codex-contracts-eth to master after codex-contracts-eth/pull/177 merged

* feat(slot-reservations): Add SaleSlotReserving state (#917)

* convert EthersError to MarketError

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* Add SaleSlotReserving

Adds a new state, SaleSlotReserving, that attempts to reserve a slot before downloading.
If the slot cannot be reserved, the state moves to SaleIgnored.
On error, the state moves to SaleErrored.

SaleIgnored is also updated to pass in `reprocessSlot` and `returnBytes`, controlling the behaviour in the Sales module after the slot is ignored. This is because previously it was assumed that SaleIgnored was only reached when there was no Availability. This is no longer the case, since SaleIgnored can now be reached when a slot cannot be reserved.

* Update SalePreparing

Specify `reprocessSlot` and `returnBytes` when moving to `SaleIgnored` from `SalePreparing`.

Update tests to include test for a raised CatchableError.

* Fix unit test

* Modify `canReserveSlot` and `reserveSlot` params after rebase

* Update MockMarket with new `canReserveSlot` and `reserveSlot` params

* fix after rebase

also bump codex-contracts-eth to master

* Use Ubuntu 20.04 for Linux amd64 releases (#939)

* Use Ubuntu 20.04 for Linux amd64 releases (#932)

* Accept branches with a slash in the name for release workflow (#932)

* Increase artifacts retention-days for release workflow (#932)

* feat(slot-reservations): support SlotReservationsFull event (#926)

* Remove moved docs (#935)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fix: null-ref in networkPeer (#937)

* Fixes nullref in networkPeer

* Removes inflight semaphore

* Revert "Removes inflight semaphore"

This reverts commit 26ec15c6f788df3adb6ff3b912a0c4b5d3139358.

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Co-authored-by: Marcin Czenko <marcin.czenko@pm.me>
2024-10-07 15:27:25 +03:00
Slava
484124db09
Release v0.1.4 (#912)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header
exec git commit --amend --no-edit -S

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request api returns an error  (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests had to defer building the cli params until just before starting the node; when params were added ad-hoc, adding a non-required parameter before a required one caused an error.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of withdrawn client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
2024-09-24 13:19:58 +03:00
Slava
89917d4bb6
Release v0.1.3 (#856) 2024-07-03 20:20:53 +03:00
Slava
7602adc0df
Release v0.1.2 (#847)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for testing download of verifiable dataset after creating request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
2024-06-27 08:51:50 +03:00
Slava
15ff87a8bb
Merge latest master into release (#842)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bits for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
2024-06-26 05:38:04 +03:00
425 changed files with 12775 additions and 24351 deletions

View File

@ -1,2 +0,0 @@
# Formatted with nph v0.6.1-0-g0d8000e
e5df8c50d3b6e70e6eec1ff031657d2b7bb6fe63

View File

@ -11,16 +11,13 @@ inputs:
default: "amd64"
nim_version:
description: "Nim version"
default: "v2.0.14"
default: "version-1-6"
rust_version:
description: "Rust version"
default: "1.79.0"
default: "1.78.0"
shell:
description: "Shell to run commands in"
default: "bash --noprofile --norc -e -o pipefail"
coverage:
description: "True if the process is used for coverage"
default: false
runs:
using: "composite"
steps:
@ -34,8 +31,8 @@ runs:
if: inputs.os == 'linux' && (inputs.cpu == 'amd64' || inputs.cpu == 'arm64')
shell: ${{ inputs.shell }} {0}
run: |
sudo apt-get update -qq
sudo DEBIAN_FRONTEND='noninteractive' apt-get install \
sudo apt-fast update -qq
sudo DEBIAN_FRONTEND='noninteractive' apt-fast install \
--no-install-recommends -yq lcov
- name: APT (Linux i386)
@ -43,8 +40,8 @@ runs:
shell: ${{ inputs.shell }} {0}
run: |
sudo dpkg --add-architecture i386
sudo apt-get update -qq
sudo DEBIAN_FRONTEND='noninteractive' apt-get install \
sudo apt-fast update -qq
sudo DEBIAN_FRONTEND='noninteractive' apt-fast install \
--no-install-recommends -yq gcc-multilib g++-multilib
- name: Homebrew (macOS)
@ -81,48 +78,11 @@ runs:
mingw-w64-i686-ntldd-git
mingw-w64-i686-rust
- name: Install gcc 14 on Linux
# We don't want to install gcc 14 for coverage (Ubuntu 20.04)
if : ${{ inputs.os == 'linux' && inputs.coverage != 'true' }}
shell: ${{ inputs.shell }} {0}
run: |
# Skip for older Ubuntu versions
if [[ $(lsb_release -r | awk -F '[^0-9]+' '{print $2}') -ge 24 ]]; then
# Install GCC-14
sudo apt-get update -qq
sudo apt-get install -yq gcc-14
# Add GCC-14 to alternatives
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-14 14
# Set GCC-14 as the default
sudo update-alternatives --set gcc /usr/bin/gcc-14
fi
- name: Install ccache on Linux/Mac
if: inputs.os == 'linux' || inputs.os == 'macos'
uses: hendrikmuhs/ccache-action@v1.2
with:
create-symlink: true
key: ${{ inputs.os }}-${{ inputs.builder }}-${{ inputs.cpu }}-${{ inputs.tests }}-${{ inputs.nim_version }}
evict-old-files: 7d
- name: Install ccache on Windows
if: inputs.os == 'windows'
uses: hendrikmuhs/ccache-action@v1.2
with:
key: ${{ inputs.os }}-${{ inputs.builder }}-${{ inputs.cpu }}-${{ inputs.tests }}-${{ inputs.nim_version }}
evict-old-files: 7d
- name: Enable ccache on Windows
- name: MSYS2 (Windows All) - Downgrade to gcc 13
if: inputs.os == 'windows'
shell: ${{ inputs.shell }} {0}
run: |
CCACHE_DIR=$(dirname $(which ccache))/ccached
mkdir ${CCACHE_DIR}
ln -s $(which ccache) ${CCACHE_DIR}/gcc.exe
ln -s $(which ccache) ${CCACHE_DIR}/g++.exe
ln -s $(which ccache) ${CCACHE_DIR}/cc.exe
ln -s $(which ccache) ${CCACHE_DIR}/c++.exe
echo "export PATH=${CCACHE_DIR}:\$PATH" >> $HOME/.bash_profile # prefix path in MSYS2
pacman -U --noconfirm https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-13.2.0-6-any.pkg.tar.zst https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-libs-13.2.0-6-any.pkg.tar.zst
- name: Derive environment variables
shell: ${{ inputs.shell }} {0}
@ -181,11 +141,8 @@ runs:
llvm_bin_dir="${llvm_dir}/bin"
llvm_lib_dir="${llvm_dir}/lib"
echo "${llvm_bin_dir}" >> ${GITHUB_PATH}
# Make sure ccache has precedence (GITHUB_PATH is appending before)
echo "$(brew --prefix)/opt/ccache/libexec" >> ${GITHUB_PATH}
echo $PATH
echo "LDFLAGS=${LDFLAGS} -L${libomp_lib_dir} -L${llvm_lib_dir} -Wl,-rpath,${llvm_lib_dir}" >> ${GITHUB_ENV}
NIMFLAGS="${NIMFLAGS} $(quote "-d:LeopardCmakeFlags='-DCMAKE_BUILD_TYPE=Release' -d:LeopardExtraCompilerFlags='-fopenmp' -d:LeopardExtraLinkerFlags='-fopenmp -L${libomp_lib_dir}'")"
NIMFLAGS="${NIMFLAGS} $(quote "-d:LeopardCmakeFlags='-DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=${llvm_bin_dir}/clang -DCMAKE_CXX_COMPILER=${llvm_bin_dir}/clang++' -d:LeopardExtraCompilerlags='-fopenmp' -d:LeopardExtraLinkerFlags='-fopenmp -L${libomp_lib_dir}'")"
echo "NIMFLAGS=${NIMFLAGS}" >> $GITHUB_ENV
fi
@ -202,7 +159,6 @@ runs:
- name: Restore Nim toolchain binaries from cache
id: nim-cache
uses: actions/cache@v4
if : ${{ inputs.coverage != 'true' }}
with:
path: NimBinaries
key: ${{ inputs.os }}-${{ inputs.cpu }}-nim-${{ inputs.nim_version }}-cache-${{ env.cache_nonce }}-${{ github.run_id }}
@ -212,17 +168,9 @@ runs:
shell: ${{ inputs.shell }} {0}
run: echo "NIM_COMMIT=${{ inputs.nim_version }}" >> ${GITHUB_ENV}
- name: MSYS2 (Windows All) - Disable git symbolic links (since miniupnp 2.2.5)
if: inputs.os == 'windows'
- name: Build Nim and Codex dependencies
shell: ${{ inputs.shell }} {0}
run: |
git config --global core.symlinks false
- name: Build Nim and Logos Storage dependencies
shell: ${{ inputs.shell }} {0}
run: |
which gcc
gcc --version
make -j${ncpu} CI_CACHE=NimBinaries ${ARCH_OVERRIDE} QUICK_AND_DIRTY_COMPILER=1 update
echo
./env.sh nim --version

View File

@ -3,14 +3,12 @@ Tips for shorter build times
### Runner availability ###
When running on the Github free, pro or team plan, the bottleneck when
optimizing workflows is the availability of macOS runners. Therefore, anything
that reduces the time spent in macOS jobs will have a positive impact on the
time waiting for runners to become available. On the Github enterprise plan,
this is not the case and you can more freely use parallelization on multiple
runners. The usage limits for Github Actions are [described here][limits]. You
can see a breakdown of runner usage for your jobs in the Github Actions tab
([example][usage]).
Currently, the biggest bottleneck when optimizing workflows is the availability
of Windows and macOS runners. Therefore, anything that reduces the time spent in
Windows or macOS jobs will have a positive impact on the time waiting for
runners to become available. The usage limits for Github Actions are [described
here][limits]. You can see a breakdown of runner usage for your jobs in the
Github Actions tab ([example][usage]).
### Windows is slow ###
@ -24,10 +22,11 @@ analysis, etc. are therefore better performed on a Linux runner.
Breaking up a long build job into several jobs that you run in parallel can have
a positive impact on the wall clock time that a workflow runs. For instance, you
might consider running unit tests and integration tests in parallel. When
running on the Github free, pro or team plan, keep in mind that availability of
macOS runners is a bottleneck. If you split a macOS job into two jobs, you now
need to wait for two macOS runners to become available.
might consider running unit tests and integration tests in parallel. Keep in
mind however that availability of macOS and Windows runners is the biggest
bottleneck. If you split a Windows job into two jobs, you now need to wait for
two Windows runners to become available! Therefore parallelization often only
makes sense for Linux jobs.
### Refactoring ###
@ -67,10 +66,9 @@ might seem inconvenient, because when you're debugging an issue you often want
to know whether you introduced a failure on all platforms, or only on a single
one. You might be tempted to disable fail-fast, but keep in mind that this keeps
runners busy for longer on a workflow that you know is going to fail anyway.
Consequent runs will therefore take longer to start. Fail fast is most likely
better for overall development speed.
Consequent runs will therefore take longer to start. Fail fast is most likely better for overall development speed.
[usage]: https://github.com/logos-storage/logos-storage-nim/actions/runs/3462031231/usage
[usage]: https://github.com/codex-storage/nim-codex/actions/runs/3462031231/usage
[composite]: https://docs.github.com/en/actions/creating-actions/creating-a-composite-action
[reusable]: https://docs.github.com/en/actions/using-workflows/reusing-workflows
[cache]: https://github.com/actions/cache/blob/main/workarounds.md#update-a-cache

View File

@ -24,9 +24,9 @@ jobs:
run:
shell: ${{ matrix.shell }} {0}
name: ${{ matrix.os }}-${{ matrix.tests }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.job_number }}
name: '${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.tests }}'
runs-on: ${{ matrix.builder }}
timeout-minutes: 90
timeout-minutes: 100
steps:
- name: Checkout sources
uses: actions/checkout@v4
@ -38,32 +38,28 @@ jobs:
uses: ./.github/actions/nimbus-build-system
with:
os: ${{ matrix.os }}
cpu: ${{ matrix.cpu }}
shell: ${{ matrix.shell }}
nim_version: ${{ matrix.nim_version }}
coverage: false
## Part 1 Tests ##
- name: Unit tests
if: matrix.tests == 'unittest' || matrix.tests == 'all'
run: make -j${ncpu} test
# workaround for https://github.com/NomicFoundation/hardhat/issues/3877
- name: Setup Node.js
if: matrix.tests == 'contract' || matrix.tests == 'integration' || matrix.tests == 'tools' || matrix.tests == 'all'
uses: actions/setup-node@v4
with:
node-version: 22
node-version: 18.15
- name: Start Ethereum node with Logos Storage contracts
- name: Start Ethereum node with Codex contracts
if: matrix.tests == 'contract' || matrix.tests == 'integration' || matrix.tests == 'tools' || matrix.tests == 'all'
working-directory: vendor/logos-storage-contracts-eth
working-directory: vendor/codex-contracts-eth
env:
MSYS2_PATH_TYPE: inherit
run: |
npm ci
npm install
npm start &
# Wait for the contracts to be deployed
sleep 5
## Part 2 Tests ##
- name: Contract tests
@ -73,15 +69,13 @@ jobs:
## Part 3 Tests ##
- name: Integration tests
if: matrix.tests == 'integration' || matrix.tests == 'all'
env:
CODEX_INTEGRATION_TEST_INCLUDES: ${{ matrix.includes }}
run: make -j${ncpu} testIntegration
- name: Upload integration tests log files
uses: actions/upload-artifact@v4
if: (matrix.tests == 'integration' || matrix.tests == 'all') && always()
with:
name: ${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.job_number }}-integration-tests-logs
name: ${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-integration-tests-logs
path: tests/integration/logs/
retention-days: 1

View File

@ -9,28 +9,31 @@ on:
env:
cache_nonce: 0 # Allows for easily busting actions/cache caches
nim_version: v2.2.4
nim_version: pinned
concurrency:
group: ${{ github.workflow }}-${{ github.ref || github.run_id }}
cancel-in-progress: true
jobs:
matrix:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.matrix.outputs.matrix }}
cache_nonce: ${{ env.cache_nonce }}
steps:
- name: Checkout sources
uses: actions/checkout@v4
- name: Compute matrix
id: matrix
run: |
echo 'matrix<<EOF' >> $GITHUB_OUTPUT
tools/scripts/ci-job-matrix.sh >> $GITHUB_OUTPUT
echo 'EOF' >> $GITHUB_OUTPUT
- name: Compute matrix
id: matrix
uses: fabiocaccamo/create-matrix-action@v4
with:
matrix: |
os {linux}, cpu {amd64}, builder {ubuntu-20.04}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {macos}, cpu {amd64}, builder {macos-13}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {windows}, cpu {amd64}, builder {windows-latest}, tests {unittest}, nim_version {${{ env.nim_version }}}, shell {msys2}
os {windows}, cpu {amd64}, builder {windows-latest}, tests {contract}, nim_version {${{ env.nim_version }}}, shell {msys2}
os {windows}, cpu {amd64}, builder {windows-latest}, tests {integration}, nim_version {${{ env.nim_version }}}, shell {msys2}
os {windows}, cpu {amd64}, builder {windows-latest}, tests {tools}, nim_version {${{ env.nim_version }}}, shell {msys2}
build:
needs: matrix
@ -39,21 +42,8 @@ jobs:
matrix: ${{ needs.matrix.outputs.matrix }}
cache_nonce: ${{ needs.matrix.outputs.cache_nonce }}
linting:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
steps:
- uses: actions/checkout@v4
- name: Check `nph` formatting
uses: arnetheduck/nph-action@v1
with:
version: 0.6.1
options: "codex/ tests/"
fail: true
suggest: true
coverage:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: Checkout sources
uses: actions/checkout@v4
@ -66,7 +56,6 @@ jobs:
with:
os: linux
nim_version: ${{ env.nim_version }}
coverage: true
- name: Generate coverage data
run: |

View File

@ -1,19 +0,0 @@
name: Conventional Commits Linting
on:
push:
branches:
- master
pull_request:
workflow_dispatch:
merge_group:
jobs:
pr-title:
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
steps:
- name: PR Conventional Commit Validation
uses: ytanikin/pr-conventional-commits@1.4.1
with:
task_types: '["feat","fix","docs","test","ci","build","refactor","style","perf","chore","revert"]'

33
.github/workflows/docker-dist-tests.yml
View File

@ -0,0 +1,33 @@
name: Docker - Dist-Tests
on:
push:
branches:
- master
tags:
- 'v*.*.*'
paths-ignore:
- '**/*.md'
- '.gitignore'
- '.github/**'
- '!.github/workflows/docker-dist-tests.yml'
- '!.github/workflows/docker-reusable.yml'
- 'docker/**'
- '!docker/codex.Dockerfile'
- '!docker/docker-entrypoint.sh'
workflow_dispatch:
jobs:
build-and-push:
name: Build and Push
uses: ./.github/workflows/docker-reusable.yml
with:
nimflags: '-d:disableMarchNative -d:codex_enable_api_debug_peers=true -d:codex_enable_proof_failures=true -d:codex_enable_log_counter=true -d:verify_circuit=true'
nat_ip_auto: true
tag_latest: ${{ github.ref_name == github.event.repository.default_branch || startsWith(github.ref, 'refs/tags/') }}
tag_suffix: dist-tests
continuous_tests_list: PeersTest HoldMyBeerTest
continuous_tests_duration: 12h
secrets: inherit

View File

@ -34,11 +34,6 @@ on:
description: Set latest tag for Docker images
required: false
type: boolean
tag_stable:
default: false
description: Set stable tag for Docker images
required: false
type: boolean
tag_sha:
default: true
description: Set Git short commit as Docker tag
@ -59,19 +54,6 @@ on:
description: Continuous Tests duration
required: false
type: string
run_release_tests:
description: Run Release tests
required: false
type: string
default: false
contract_image:
description: Specifies compatible smart contract image
required: false
type: string
outputs:
codex_image:
description: Logos Storage Docker image tag
value: ${{ jobs.publish.outputs.codex_image }}
env:
@ -82,33 +64,19 @@ env:
NIMFLAGS: ${{ inputs.nimflags }}
NAT_IP_AUTO: ${{ inputs.nat_ip_auto }}
TAG_LATEST: ${{ inputs.tag_latest }}
TAG_STABLE: ${{ inputs.tag_stable }}
TAG_SHA: ${{ inputs.tag_sha }}
TAG_SUFFIX: ${{ inputs.tag_suffix }}
CONTRACT_IMAGE: ${{ inputs.contract_image }}
# Tests
TESTS_SOURCE: logos-storage/logos-storage-nim-cs-dist-tests
TESTS_BRANCH: master
CONTINUOUS_TESTS_SOURCE: codex-storage/cs-codex-dist-tests
CONTINUOUS_TESTS_BRANCH: master
CONTINUOUS_TESTS_LIST: ${{ inputs.continuous_tests_list }}
CONTINUOUS_TESTS_DURATION: ${{ inputs.continuous_tests_duration }}
CONTINUOUS_TESTS_NAMEPREFIX: c-tests-ci
jobs:
# Compute variables
compute:
name: Compute build ID
runs-on: ubuntu-latest
outputs:
build_id: ${{ steps.build_id.outputs.build_id }}
steps:
- name: Generate unique build id
id: build_id
run: echo "build_id=$(openssl rand -hex 5)" >> $GITHUB_OUTPUT
# Build platform specific image
build:
needs: compute
strategy:
fail-fast: true
matrix:
@ -121,11 +89,11 @@ jobs:
- target:
os: linux
arch: amd64
builder: ubuntu-24.04
builder: ubuntu-22.04
- target:
os: linux
arch: arm64
builder: ubuntu-24.04-arm
builder: buildjet-4vcpu-ubuntu-2204-arm
name: Build ${{ matrix.target.os }}/${{ matrix.target.arch }}
runs-on: ${{ matrix.builder }}
@ -135,19 +103,11 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
- name: Docker - Variables
run: |
# Create contract label for compatible contract image if specified
if [[ -n "${{ env.CONTRACT_IMAGE }}" ]]; then
echo "CONTRACT_LABEL=storage.codex.nim-codex.blockchain-image=${{ env.CONTRACT_IMAGE }}" >> $GITHUB_ENV
fi
- name: Docker - Meta
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.DOCKER_REPO }}
labels: ${{ env.CONTRACT_LABEL }}
- name: Docker - Set up Buildx
uses: docker/setup-buildx-action@v3
@ -182,7 +142,7 @@ jobs:
- name: Docker - Upload digest
uses: actions/upload-artifact@v4
with:
name: digests-${{ needs.compute.outputs.build_id }}-${{ matrix.target.arch }}
name: digests-${{ matrix.target.arch }}
path: /tmp/digests/*
if-no-files-found: error
retention-days: 1
@ -194,41 +154,35 @@ jobs:
runs-on: ubuntu-latest
outputs:
version: ${{ steps.meta.outputs.version }}
codex_image: ${{ steps.image_tag.outputs.codex_image }}
needs: [build, compute]
needs: build
steps:
- name: Docker - Variables
run: |
# Adjust custom suffix when set
# Adjust custom suffix when set and
if [[ -n "${{ env.TAG_SUFFIX }}" ]]; then
echo "TAG_SUFFIX=-${{ env.TAG_SUFFIX }}" >> $GITHUB_ENV
echo "TAG_SUFFIX=-${{ env.TAG_SUFFIX }}" >>$GITHUB_ENV
fi
# Disable SHA tags on tagged release
if [[ ${{ startsWith(github.ref, 'refs/tags/') }} == "true" ]]; then
echo "TAG_SHA=false" >> $GITHUB_ENV
echo "TAG_SHA=false" >>$GITHUB_ENV
fi
# Handle latest and latest-custom using raw
if [[ ${{ env.TAG_SHA }} == "false" ]]; then
echo "TAG_LATEST=false" >> $GITHUB_ENV
echo "TAG_RAW=true" >> $GITHUB_ENV
echo "TAG_LATEST=false" >>$GITHUB_ENV
echo "TAG_RAW=true" >>$GITHUB_ENV
if [[ -z "${{ env.TAG_SUFFIX }}" ]]; then
echo "TAG_RAW_VALUE=latest" >> $GITHUB_ENV
echo "TAG_RAW_VALUE=latest" >>$GITHUB_ENV
else
echo "TAG_RAW_VALUE=latest-{{ env.TAG_SUFFIX }}" >> $GITHUB_ENV
echo "TAG_RAW_VALUE=latest-{{ env.TAG_SUFFIX }}" >>$GITHUB_ENV
fi
else
echo "TAG_RAW=false" >> $GITHUB_ENV
fi
# Create contract label for compatible contract image if specified
if [[ -n "${{ env.CONTRACT_IMAGE }}" ]]; then
echo "CONTRACT_LABEL=storage.codex.nim-codex.blockchain-image=${{ env.CONTRACT_IMAGE }}" >> $GITHUB_ENV
echo "TAG_RAW=false" >>$GITHUB_ENV
fi
- name: Docker - Download digests
uses: actions/download-artifact@v4
with:
pattern: digests-${{ needs.compute.outputs.build_id }}-*
pattern: digests-*
merge-multiple: true
path: /tmp/digests
@ -240,14 +194,12 @@ jobs:
uses: docker/metadata-action@v5
with:
images: ${{ env.DOCKER_REPO }}
labels: ${{ env.CONTRACT_LABEL }}
flavor: |
latest=${{ env.TAG_LATEST }}
suffix=${{ env.TAG_SUFFIX }},onlatest=true
tags: |
type=semver,pattern={{version}}
type=raw,enable=${{ env.TAG_RAW }},value=latest
type=raw,enable=${{ env.TAG_STABLE }},value=stable
type=sha,enable=${{ env.TAG_SHA }}
- name: Docker - Login to Docker Hub
@ -262,81 +214,54 @@ jobs:
docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
$(printf '${{ env.DOCKER_REPO }}@sha256:%s ' *)
- name: Docker - Image tag
id: image_tag
run: echo "codex_image=${{ env.DOCKER_REPO }}:${{ steps.meta.outputs.version }}" >> "$GITHUB_OUTPUT"
- name: Docker - Inspect image
run: docker buildx imagetools inspect ${{ steps.image_tag.outputs.codex_image }}
run: |
docker buildx imagetools inspect ${{ env.DOCKER_REPO }}:${{ steps.meta.outputs.version }}
# Compute Tests inputs
# Compute Continuous Tests inputs
compute-tests-inputs:
name: Compute Tests inputs
if: ${{ inputs.continuous_tests_list != '' || inputs.run_release_tests == 'true' }}
name: Compute Continuous Tests list
if: ${{ inputs.continuous_tests_list != '' && github.ref_name == github.event.repository.default_branch }}
runs-on: ubuntu-latest
needs: publish
outputs:
source: ${{ steps.compute.outputs.source }}
branch: ${{ env.TESTS_BRANCH }}
workflow_source: ${{ env.TESTS_SOURCE }}
branch: ${{ steps.compute.outputs.branch }}
codexdockerimage: ${{ steps.compute.outputs.codexdockerimage }}
steps:
- name: Compute Tests inputs
id: compute
run: |
echo "source=${{ format('{0}/{1}', github.server_url, env.TESTS_SOURCE) }}" >> "$GITHUB_OUTPUT"
echo "codexdockerimage=${{ inputs.docker_repo }}:${{ needs.publish.outputs.version }}" >> "$GITHUB_OUTPUT"
# Compute Continuous Tests inputs
compute-continuous-tests-inputs:
name: Compute Continuous Tests inputs
if: ${{ inputs.continuous_tests_list != '' && github.ref_name == github.event.repository.default_branch }}
runs-on: ubuntu-latest
needs: compute-tests-inputs
outputs:
nameprefix: ${{ steps.compute.outputs.nameprefix }}
continuous_tests_list: ${{ steps.compute.outputs.continuous_tests_list }}
continuous_tests_duration: ${{ env.CONTINUOUS_TESTS_DURATION }}
continuous_tests_duration: ${{ steps.compute.outputs.continuous_tests_duration }}
continuous_tests_workflow: ${{ steps.compute.outputs.continuous_tests_workflow }}
workflow_source: ${{ steps.compute.outputs.workflow_source }}
steps:
- name: Compute Continuous Tests inputs
- name: Compute Continuous Tests list
id: compute
run: |
echo "source=${{ format('{0}/{1}', github.server_url, env.CONTINUOUS_TESTS_SOURCE) }}" >> "$GITHUB_OUTPUT"
echo "branch=${{ env.CONTINUOUS_TESTS_BRANCH }}" >> "$GITHUB_OUTPUT"
echo "codexdockerimage=${{ inputs.docker_repo }}:${{ needs.publish.outputs.version }}" >> "$GITHUB_OUTPUT"
echo "nameprefix=$(awk '{ print tolower($0) }' <<< ${{ env.CONTINUOUS_TESTS_NAMEPREFIX }})" >> "$GITHUB_OUTPUT"
echo "continuous_tests_list=$(jq -cR 'split(" ")' <<< '${{ env.CONTINUOUS_TESTS_LIST }}')" >> "$GITHUB_OUTPUT"
echo "continuous_tests_duration=${{ env.CONTINUOUS_TESTS_DURATION }}" >> "$GITHUB_OUTPUT"
echo "workflow_source=${{ env.CONTINUOUS_TESTS_SOURCE }}" >> "$GITHUB_OUTPUT"
# Run Continuous Tests
run-continuous-tests:
run-tests:
name: Run Continuous Tests
needs: [compute-tests-inputs, compute-continuous-tests-inputs]
needs: [publish, compute-tests-inputs]
strategy:
max-parallel: 1
matrix:
tests: ${{ fromJSON(needs.compute-continuous-tests-inputs.outputs.continuous_tests_list) }}
uses: logos-storage/logos-storage-nim-cs-dist-tests/.github/workflows/run-continuous-tests.yaml@master
tests: ${{ fromJSON(needs.compute-tests-inputs.outputs.continuous_tests_list) }}
uses: codex-storage/cs-codex-dist-tests/.github/workflows/run-continuous-tests.yaml@master
with:
source: ${{ needs.compute-tests-inputs.outputs.source }}
branch: ${{ needs.compute-tests-inputs.outputs.branch }}
codexdockerimage: ${{ needs.compute-tests-inputs.outputs.codexdockerimage }}
nameprefix: ${{ needs.compute-continuous-tests-inputs.outputs.nameprefix }}-${{ matrix.tests }}-${{ needs.compute-continuous-tests-inputs.outputs.continuous_tests_duration }}
nameprefix: ${{ needs.compute-tests-inputs.outputs.nameprefix }}-${{ matrix.tests }}-${{ needs.compute-tests-inputs.outputs.continuous_tests_duration }}
tests_filter: ${{ matrix.tests }}
tests_target_duration: ${{ needs.compute-tests-inputs.outputs.continuous_tests_duration }}
workflow_source: ${{ needs.compute-tests-inputs.outputs.workflow_source }}
secrets: inherit
# Run Release Tests
run-release-tests:
name: Run Release Tests
needs: [compute-tests-inputs]
if: ${{ inputs.run_release_tests == 'true' }}
uses: logos-storage/logos-storage-nim-cs-dist-tests/.github/workflows/run-release-tests.yaml@master
with:
source: ${{ needs.compute-tests-inputs.outputs.source }}
branch: ${{ needs.compute-tests-inputs.outputs.branch }}
codexdockerimage: ${{ needs.compute-tests-inputs.outputs.codexdockerimage }}
workflow_source: ${{ needs.compute-tests-inputs.outputs.workflow_source }}
secrets: inherit

View File

@ -18,27 +18,11 @@ on:
- '!docker/docker-entrypoint.sh'
workflow_dispatch:
jobs:
get-contracts-hash:
runs-on: ubuntu-latest
outputs:
hash: ${{ steps.get-hash.outputs.hash }}
steps:
- uses: actions/checkout@v4
with:
submodules: true
- name: Get submodule short hash
id: get-hash
run: |
hash=$(git rev-parse --short HEAD:vendor/logos-storage-contracts-eth)
echo "hash=$hash" >> $GITHUB_OUTPUT
jobs:
build-and-push:
name: Build and Push
uses: ./.github/workflows/docker-reusable.yml
needs: get-contracts-hash
with:
tag_latest: ${{ github.ref_name == github.event.repository.default_branch || startsWith(github.ref, 'refs/tags/') }}
tag_stable: ${{ startsWith(github.ref, 'refs/tags/') }}
contract_image: "codexstorage/codex-contracts-eth:sha-${{ needs.get-contracts-hash.outputs.hash }}"
secrets: inherit

View File

@ -2,17 +2,17 @@ name: OpenAPI
on:
push:
tags:
- "v*.*.*"
branches:
- 'master'
paths:
- "openapi.yaml"
- ".github/workflows/docs.yml"
- 'openapi.yaml'
- '.github/workflows/docs.yml'
pull_request:
branches:
- "**"
- '**'
paths:
- "openapi.yaml"
- ".github/workflows/docs.yml"
- 'openapi.yaml'
- '.github/workflows/docs.yml'
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
@ -40,7 +40,7 @@ jobs:
deploy:
name: Deploy
runs-on: ubuntu-latest
if: startsWith(github.ref, 'refs/tags/')
if: github.ref == 'refs/heads/master'
steps:
- name: Checkout
uses: actions/checkout@v4
@ -52,7 +52,7 @@ jobs:
node-version: 18
- name: Build OpenAPI
run: npx @redocly/cli build-docs openapi.yaml --output openapi/index.html --title "Logos Storage API"
run: npx @redocly/cli build-docs openapi.yaml --output openapi/index.html --title "Codex API"
- name: Build Postman Collection
run: npx -y openapi-to-postmanv2 -s openapi.yaml -o openapi/postman.json -p -O folderStrategy=Tags,includeAuthInfoInExample=false

View File

@ -15,14 +15,12 @@ jobs:
matrix: ${{ steps.matrix.outputs.matrix }}
cache_nonce: ${{ env.cache_nonce }}
steps:
- name: Checkout sources
uses: actions/checkout@v4
- name: Compute matrix
id: matrix
run: |
echo 'matrix<<EOF' >> $GITHUB_OUTPUT
tools/scripts/ci-job-matrix.sh linux >> $GITHUB_OUTPUT
echo 'EOF' >> $GITHUB_OUTPUT
- name: Compute matrix
id: matrix
uses: fabiocaccamo/create-matrix-action@v4
with:
matrix: |
os {linux}, cpu {amd64}, builder {ubuntu-20.04}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
build:
needs: matrix

View File

@ -4,15 +4,13 @@ on:
push:
tags:
- 'v*.*.*'
branches:
- master
workflow_dispatch:
env:
cache_nonce: 0 # Allows for easily busting actions/cache caches
nim_version: pinned
rust_version: 1.79.0
storage_binary_base: storage
rust_version: 1.78.0
codex_binary_base: codex
cirdl_binary_base: cirdl
build_dir: build
nim_flags: ''
@ -27,13 +25,14 @@ jobs:
steps:
- name: Compute matrix
id: matrix
uses: fabiocaccamo/create-matrix-action@v5
uses: fabiocaccamo/create-matrix-action@v4
with:
matrix: |
os {linux}, cpu {amd64}, builder {ubuntu-22.04}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {linux}, cpu {arm64}, builder {ubuntu-22.04-arm}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {macos}, cpu {arm64}, builder {macos-14}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {windows}, cpu {amd64}, builder {windows-latest}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {msys2}
os {linux}, cpu {amd64}, builder {ubuntu-20.04}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {linux}, cpu {arm64}, builder {buildjet-4vcpu-ubuntu-2204-arm}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {macos}, cpu {amd64}, builder {macos-13}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {macos}, cpu {arm64}, builder {macos-14}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
os {windows}, cpu {amd64}, builder {windows-latest}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {msys2}
# Build
build:
@ -73,18 +72,18 @@ jobs:
windows*) os_name="windows" ;;
esac
github_ref_name="${GITHUB_REF_NAME/\//-}"
storage_binary="${{ env.storage_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
codex_binary="${{ env.codex_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
cirdl_binary="${{ env.cirdl_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
if [[ ${os_name} == "windows" ]]; then
storage_binary="${storage_binary}.exe"
codex_binary="${codex_binary}.exe"
cirdl_binary="${cirdl_binary}.exe"
fi
echo "storage_binary=${storage_binary}" >>$GITHUB_ENV
echo "codex_binary=${codex_binary}" >>$GITHUB_ENV
echo "cirdl_binary=${cirdl_binary}" >>$GITHUB_ENV
- name: Release - Build
run: |
make NIMFLAGS="--out:${{ env.build_dir }}/${{ env.storage_binary }} ${{ env.nim_flags }}"
make NIMFLAGS="--out:${{ env.build_dir }}/${{ env.codex_binary }} ${{ env.nim_flags }}"
make cirdl NIMFLAGS="--out:${{ env.build_dir }}/${{ env.cirdl_binary }} ${{ env.nim_flags }}"
- name: Release - Libraries
@ -95,11 +94,11 @@ jobs:
done
fi
- name: Release - Upload Logos Storage build artifacts
- name: Release - Upload codex build artifacts
uses: actions/upload-artifact@v4
with:
name: release-${{ env.storage_binary }}
path: ${{ env.build_dir }}/${{ env.storage_binary_base }}*
name: release-${{ env.codex_binary }}
path: ${{ env.build_dir }}/${{ env.codex_binary_base }}*
retention-days: 30
- name: Release - Upload cirdl build artifacts
@ -139,7 +138,7 @@ jobs:
}
# Compress and prepare
for file in ${{ env.storage_binary_base }}* ${{ env.cirdl_binary_base }}*; do
for file in ${{ env.codex_binary_base }}* ${{ env.cirdl_binary_base }}*; do
if [[ "${file}" == *".exe"* ]]; then
# Windows - binary only
@ -171,34 +170,6 @@ jobs:
path: /tmp/release/
retention-days: 30
- name: Release - Upload to the cloud
env:
s3_endpoint: ${{ secrets.S3_ENDPOINT }}
s3_bucket: ${{ secrets.S3_BUCKET }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
run: |
# Variables
branch="${GITHUB_REF_NAME/\//-}"
folder="/tmp/release"
# Tagged releases
if [[ "${{ github.ref }}" == *"refs/tags/"* ]]; then
aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/releases/${branch} --endpoint-url ${{ env.s3_endpoint }}
echo "${branch}" > "${folder}"/latest
aws s3 cp "${folder}"/latest s3://${{ env.s3_bucket }}/releases/latest --endpoint-url ${{ env.s3_endpoint }}
rm -f "${folder}"/latest
# master branch
elif [[ "${branch}" == "${{ github.event.repository.default_branch }}" ]]; then
aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/${branch} --endpoint-url ${{ env.s3_endpoint }}
# Custom branch
else
aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/branches/${branch} --endpoint-url ${{ env.s3_endpoint }}
fi
- name: Release
uses: softprops/action-gh-release@v2
if: startsWith(github.ref, 'refs/tags/')
@ -206,12 +177,3 @@ jobs:
files: |
/tmp/release/*
make_latest: true
- name: Generate Python SDK
uses: peter-evans/repository-dispatch@v3
if: startsWith(github.ref, 'refs/tags/')
with:
token: ${{ secrets.DISPATCH_PAT }}
repository: logos-storage/logos-storage-py-api-client
event-type: generate
client-payload: '{"openapi_url": "https://raw.githubusercontent.com/logos-storage/logos-storage-nim/${{ github.ref }}/openapi.yaml"}'

6
.gitignore
View File

@ -5,13 +5,9 @@
!LICENSE*
!Makefile
!Jenkinsfile
nimcache/
# Executables when using nix will be stored in result/ directory
result/
# Executables shall be put in an ignored build/ directory
build/
@ -45,5 +41,3 @@ docker/prometheus-data
.DS_Store
nim.cfg
tests/integration/logs
data/

58
.gitmodules
View File

@ -37,17 +37,22 @@
path = vendor/nim-nitro
url = https://github.com/status-im/nim-nitro.git
ignore = untracked
branch = main
branch = master
[submodule "vendor/questionable"]
path = vendor/questionable
url = https://github.com/status-im/questionable.git
ignore = untracked
branch = main
branch = master
[submodule "vendor/upraises"]
path = vendor/upraises
url = https://github.com/markspanbroek/upraises.git
ignore = untracked
branch = master
[submodule "vendor/asynctest"]
path = vendor/asynctest
url = https://github.com/status-im/asynctest.git
ignore = untracked
branch = main
branch = master
[submodule "vendor/nim-presto"]
path = vendor/nim-presto
url = https://github.com/status-im/nim-presto.git
@ -127,7 +132,7 @@
path = vendor/nim-websock
url = https://github.com/status-im/nim-websock.git
ignore = untracked
branch = main
branch = master
[submodule "vendor/nim-contract-abi"]
path = vendor/nim-contract-abi
url = https://github.com/status-im/nim-contract-abi
@ -155,13 +160,13 @@
path = vendor/nim-taskpools
url = https://github.com/status-im/nim-taskpools.git
ignore = untracked
branch = stable
branch = master
[submodule "vendor/nim-leopard"]
path = vendor/nim-leopard
url = https://github.com/status-im/nim-leopard.git
[submodule "vendor/logos-storage-nim-dht"]
path = vendor/logos-storage-nim-dht
url = https://github.com/logos-storage/logos-storage-nim-dht.git
[submodule "vendor/nim-codex-dht"]
path = vendor/nim-codex-dht
url = https://github.com/codex-storage/nim-codex-dht.git
ignore = untracked
branch = master
[submodule "vendor/nim-datastore"]
@ -173,11 +178,9 @@
[submodule "vendor/nim-eth"]
path = vendor/nim-eth
url = https://github.com/status-im/nim-eth
[submodule "vendor/logos-storage-contracts-eth"]
path = vendor/logos-storage-contracts-eth
url = https://github.com/logos-storage/logos-storage-contracts-eth.git
ignore = untracked
branch = master
[submodule "vendor/codex-contracts-eth"]
path = vendor/codex-contracts-eth
url = https://github.com/status-im/codex-contracts-eth
[submodule "vendor/nim-protobuf-serialization"]
path = vendor/nim-protobuf-serialization
url = https://github.com/status-im/nim-protobuf-serialization
@ -192,41 +195,26 @@
url = https://github.com/zevv/npeg
[submodule "vendor/nim-poseidon2"]
path = vendor/nim-poseidon2
url = https://github.com/logos-storage/nim-poseidon2.git
ignore = untracked
branch = master
url = https://github.com/codex-storage/nim-poseidon2.git
[submodule "vendor/constantine"]
path = vendor/constantine
url = https://github.com/mratsim/constantine.git
[submodule "vendor/nim-circom-compat"]
path = vendor/nim-circom-compat
url = https://github.com/logos-storage/nim-circom-compat.git
url = https://github.com/codex-storage/nim-circom-compat.git
ignore = untracked
branch = master
[submodule "vendor/logos-storage-proofs-circuits"]
path = vendor/logos-storage-proofs-circuits
url = https://github.com/logos-storage/logos-storage-proofs-circuits.git
[submodule "vendor/codex-storage-proofs-circuits"]
path = vendor/codex-storage-proofs-circuits
url = https://github.com/codex-storage/codex-storage-proofs-circuits.git
ignore = untracked
branch = master
[submodule "vendor/nim-serde"]
path = vendor/nim-serde
url = https://github.com/logos-storage/nim-serde.git
url = https://github.com/codex-storage/nim-serde.git
[submodule "vendor/nim-leveldbstatic"]
path = vendor/nim-leveldbstatic
url = https://github.com/logos-storage/nim-leveldb.git
url = https://github.com/codex-storage/nim-leveldb.git
[submodule "vendor/nim-zippy"]
path = vendor/nim-zippy
url = https://github.com/status-im/nim-zippy.git
[submodule "vendor/nph"]
path = vendor/nph
url = https://github.com/arnetheduck/nph.git
[submodule "vendor/nim-quic"]
path = vendor/nim-quic
url = https://github.com/vacp2p/nim-quic.git
ignore = untracked
branch = main
[submodule "vendor/nim-ngtcp2"]
path = vendor/nim-ngtcp2
url = https://github.com/vacp2p/nim-ngtcp2.git
ignore = untracked
branch = main

37
Jenkinsfile
View File

@ -1,37 +0,0 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.13'
pipeline {
agent { label 'linux && x86_64 && nix-2.24' }
options {
disableConcurrentBuilds()
/* manage how many builds we keep */
buildDiscarder(logRotator(
numToKeepStr: '20',
daysToKeepStr: '30',
))
}
stages {
stage('Build') {
steps {
script {
nix.flake("default")
}
}
}
stage('Check') {
steps {
script {
sh './result/bin/storage --version'
}
}
}
}
post {
cleanup { cleanWs() }
}
}

112
Makefile
View File

@ -15,7 +15,7 @@
#
# If NIM_COMMIT is set to "nimbusbuild", this will use the
# version pinned by nimbus-build-system.
PINNED_NIM_VERSION := v2.2.4
PINNED_NIM_VERSION := 38640664088251bbc88917b4bacfd86ec53014b8 # 1.6.21
ifeq ($(NIM_COMMIT),)
NIM_COMMIT := $(PINNED_NIM_VERSION)
@ -40,30 +40,6 @@ DOCKER_IMAGE_NIM_PARAMS ?= -d:chronicles_colors:none -d:insecure
LINK_PCRE := 0
ifeq ($(OS),Windows_NT)
ifeq ($(PROCESSOR_ARCHITECTURE), AMD64)
ARCH = x86_64
endif
ifeq ($(PROCESSOR_ARCHITECTURE), ARM64)
ARCH = arm64
endif
else
UNAME_P := $(shell uname -m)
ifneq ($(filter $(UNAME_P), i686 i386 x86_64),)
ARCH = x86_64
endif
ifneq ($(filter $(UNAME_P), aarch64 arm),)
ARCH = arm64
endif
endif
ifeq ($(ARCH), x86_64)
CXXFLAGS ?= -std=c++17 -mssse3
else
CXXFLAGS ?= -std=c++17
endif
export CXXFLAGS
# we don't want an error here, so we can handle things later, in the ".DEFAULT" target
-include $(BUILD_SYSTEM_DIR)/makefiles/variables.mk
@ -93,10 +69,10 @@ else # "variables.mk" was included. Business as usual until the end of this file
# default target, because it's the first one that doesn't start with '.'
# Builds the Logos Storage binary
# Builds the codex binary
all: | build deps
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim storage $(NIM_PARAMS) build.nims
$(ENV_SCRIPT) nim codex $(NIM_PARAMS) build.nims
# Build tools/cirdl
cirdl: | deps
@ -138,12 +114,12 @@ test: | build deps
# Builds and runs the smart contract tests
testContracts: | build deps
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim testContracts $(NIM_PARAMS) --define:ws_resubscribe=240 build.nims
$(ENV_SCRIPT) nim testContracts $(NIM_PARAMS) build.nims
# Builds and runs the integration tests
testIntegration: | build deps
echo -e $(BUILD_MSG) "build/$@" && \
$(ENV_SCRIPT) nim testIntegration $(NIM_PARAMS) --define:ws_resubscribe=240 build.nims
$(ENV_SCRIPT) nim testIntegration $(NIM_PARAMS) build.nims
# Builds and runs all tests (except for Taiko L2 tests)
testAll: | build deps
@ -178,11 +154,11 @@ coverage:
$(MAKE) NIMFLAGS="$(NIMFLAGS) --lineDir:on --passC:-fprofile-arcs --passC:-ftest-coverage --passL:-fprofile-arcs --passL:-ftest-coverage" test
cd nimcache/release/testCodex && rm -f *.c
mkdir -p coverage
lcov --capture --keep-going --directory nimcache/release/testCodex --output-file coverage/coverage.info
lcov --capture --directory nimcache/release/testCodex --output-file coverage/coverage.info
shopt -s globstar && ls $$(pwd)/codex/{*,**/*}.nim
shopt -s globstar && lcov --extract coverage/coverage.info --keep-going $$(pwd)/codex/{*,**/*}.nim --output-file coverage/coverage.f.info
shopt -s globstar && lcov --extract coverage/coverage.info $$(pwd)/codex/{*,**/*}.nim --output-file coverage/coverage.f.info
echo -e $(BUILD_MSG) "coverage/report/index.html"
genhtml coverage/coverage.f.info --keep-going --output-directory coverage/report
genhtml coverage/coverage.f.info --output-directory coverage/report
show-coverage:
if which open >/dev/null; then (echo -e "\e[92mOpening\e[39m HTML coverage report in browser..." && open coverage/report/index.html) || true; fi
@ -199,76 +175,4 @@ ifneq ($(USE_LIBBACKTRACE), 0)
+ $(MAKE) -C vendor/nim-libbacktrace clean $(HANDLE_OUTPUT)
endif
############
## Format ##
############
.PHONY: build-nph install-nph-hook clean-nph print-nph-path
# Default location for nph binary shall be next to nim binary to make it available on the path.
NPH:=$(shell dirname $(NIM_BINARY))/nph
build-nph:
ifeq ("$(wildcard $(NPH))","")
$(ENV_SCRIPT) nim c vendor/nph/src/nph.nim && \
mv vendor/nph/src/nph $(shell dirname $(NPH))
echo "nph utility is available at " $(NPH)
endif
GIT_PRE_COMMIT_HOOK := .git/hooks/pre-commit
install-nph-hook: build-nph
ifeq ("$(wildcard $(GIT_PRE_COMMIT_HOOK))","")
cp ./tools/scripts/git_pre_commit_format.sh $(GIT_PRE_COMMIT_HOOK)
else
echo "$(GIT_PRE_COMMIT_HOOK) already present, will NOT override"
exit 1
endif
nph/%: build-nph
echo -e $(FORMAT_MSG) "nph/$*" && \
$(NPH) $*
format:
$(NPH) *.nim
$(NPH) codex/
$(NPH) tests/
$(NPH) library/
clean-nph:
rm -f $(NPH)
# To avoid hardcoding nph binary location in several places
print-nph-path:
echo "$(NPH)"
clean: | clean-nph
################
## C Bindings ##
################
.PHONY: libstorage
STATIC ?= 0
ifneq ($(strip $(STORAGE_LIB_PARAMS)),)
NIM_PARAMS := $(NIM_PARAMS) $(STORAGE_LIB_PARAMS)
endif
libstorage:
$(MAKE) deps
rm -f build/libstorage*
ifeq ($(STATIC), 1)
echo -e $(BUILD_MSG) "build/$@.a" && \
$(ENV_SCRIPT) nim libstorageStatic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Release\"" codex.nims
else ifeq ($(detected_OS),Windows)
echo -e $(BUILD_MSG) "build/$@.dll" && \
$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-G \\\"MSYS Makefiles\\\" -DCMAKE_BUILD_TYPE=Release\"" codex.nims
else ifeq ($(detected_OS),macOS)
echo -e $(BUILD_MSG) "build/$@.dylib" && \
$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Release\"" codex.nims
else
echo -e $(BUILD_MSG) "build/$@.so" && \
$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) -d:LeopardCmakeFlags="\"-DCMAKE_POSITION_INDEPENDENT_CODE=ON -DCMAKE_BUILD_TYPE=Release\"" codex.nims
endif
endif # "variables.mk" was not included

View File

@ -1,22 +1,22 @@
# Logos Storage Decentralized Engine
# Codex Decentralized Durability Engine
> The Logos Storage project aims to create a decentralized engine that allows persisting data in p2p networks.
> The Codex project aims to create a decentralized durability engine that allows persisting data in p2p networks. In other words, it allows storing files and data with predictable durability guarantees for later retrieval.
> WARNING: This project is under active development and is considered pre-alpha.
[![License: Apache](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Stability: experimental](https://img.shields.io/badge/stability-experimental-orange.svg)](#stability)
[![CI](https://github.com/logos-storage/logos-storage-nim/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/logos-storage/logos-storage-nim/actions/workflows/ci.yml?query=branch%3Amaster)
[![Docker](https://github.com/logos-storage/logos-storage-nim/actions/workflows/docker.yml/badge.svg?branch=master)](https://github.com/logos-storage/logos-storage-nim/actions/workflows/docker.yml?query=branch%3Amaster)
[![Codecov](https://codecov.io/gh/logos-storage/logos-storage-nim/branch/master/graph/badge.svg?token=XFmCyPSNzW)](https://codecov.io/gh/logos-storage/logos-storage-nim)
[![CI](https://github.com/codex-storage/nim-codex/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/codex-storage/nim-codex/actions/workflows/ci.yml?query=branch%3Amaster)
[![Docker](https://github.com/codex-storage/nim-codex/actions/workflows/docker.yml/badge.svg?branch=master)](https://github.com/codex-storage/nim-codex/actions/workflows/docker.yml?query=branch%3Amaster)
[![Codecov](https://codecov.io/gh/codex-storage/nim-codex/branch/master/graph/badge.svg?token=XFmCyPSNzW)](https://codecov.io/gh/codex-storage/nim-codex)
[![Discord](https://img.shields.io/discord/895609329053474826)](https://discord.gg/CaJTh24ddQ)
![Docker Pulls](https://img.shields.io/docker/pulls/codexstorage/nim-codex)
## Build and Run
For detailed instructions on preparing to build logos-storage-nim see [*Build Logos Storage*](https://docs.codex.storage/learn/build).
For detailed instructions on preparing to build nim-codex see [*Build Codex*](https://docs.codex.storage/learn/build).
To build the project, clone it and run:
@ -29,12 +29,11 @@ The executable will be placed under the `build` directory under the project root
Run the client with:
```bash
build/storage
build/codex
```
## Configuration
It is possible to configure a Logos Storage node in several ways:
It is possible to configure a Codex node in several ways:
1. CLI options
2. Environment variables
3. Configuration file
@ -45,72 +44,10 @@ Please check [documentation](https://docs.codex.storage/learn/run#configuration)
## Guides
To get acquainted with Logos Storage, consider:
* running the simple [Logos Storage Two-Client Test](https://docs.codex.storage/learn/local-two-client-test) for a start, and;
* if you are feeling more adventurous, try [Running a Local Logos Storage Network with Marketplace Support](https://docs.codex.storage/learn/local-marketplace) using a local blockchain as well.
To get acquainted with Codex, consider:
* running the simple [Codex Two-Client Test](https://docs.codex.storage/learn/local-two-client-test) for a start, and;
* if you are feeling more adventurous, try [Running a Local Codex Network with Marketplace Support](https://docs.codex.storage/learn/local-marketplace) using a local blockchain as well.
## API
The client exposes a REST API that can be used to interact with it. An overview of the API can be found on [api.codex.storage](https://api.codex.storage).
## Bindings
Logos Storage provides a C API that can be wrapped by other languages. The bindings are located in the `library` folder.
Currently, only a Go binding is included.
### Build the C library
```bash
make libstorage
```
This produces the shared library under `build/`.
### Run the Go example
Build the Go example:
```bash
go build -o storage-go examples/golang/storage.go
```
Export the library path:
```bash
export LD_LIBRARY_PATH=build
```
Run the example:
```bash
./storage-go
```
### Static vs Dynamic build
By default, Logos Storage builds a dynamic library (`libstorage.so`), which you can load at runtime.
If you prefer a static library (`libstorage.a`), set the `STATIC` flag:
```bash
# Build dynamic (default)
make libstorage
# Build static
make STATIC=1 libstorage
```
### Limitation
Callbacks must be fast and non-blocking; otherwise, the working thread will hang and prevent other requests from being processed.
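Because a callback executes on the library's working thread, a common discipline is to do no real work inside it: enqueue the event and return immediately, then let an application-owned thread drain the queue. A minimal sketch of that pattern (illustrative names only, not the binding's actual API):

```nim
import std/[locks, deques]

var
  qLock: Lock
  pending: Deque[string]

initLock(qLock)

proc onEvent(msg: string) =
  # Called from the library's working thread: enqueue and return at once.
  withLock qLock:
    pending.addLast(msg)

proc drainPending() =
  # Runs on the application's own thread, where slow work is safe.
  while true:
    var msg: string
    withLock qLock:
      if pending.len == 0:
        break
      msg = pending.popFirst()
    echo "handling: ", msg # any slow or blocking work belongs here

onEvent("download complete") # simulate a callback from the library
drainPending()
```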
## Contributing and development
Feel free to dive in, contributions are welcomed! Open an issue or submit PRs.
### Linting and formatting
`logos-storage-nim` uses [nph](https://github.com/arnetheduck/nph) to format its code, and contributions are required to adhere to its styling.
On a fresh setup, run `make build-nph` to get `nph`.
To format files, run `make nph/<file/folder you want to format>`.
Optionally, install the Git pre-commit hook with `make install-nph-hook`, which formats modified files before they are committed.
If you are using VSCode and the [NimLang](https://marketplace.visualstudio.com/items?itemName=NimLang.nimlang) extension, you can enable "Format On Save" (i.e. the `nim.formatOnSave` property), which will format the files using `nph`.

View File

@ -10,17 +10,17 @@ nim c -r run_benchmarks
```
By default all circuit files for each combinations of circuit args will be generated in a unique folder named like:
logos-storage-nim/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3
nim-codex/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3
Generating the circuit files often takes longer than running benchmarks, so caching the results allows re-running the benchmark as needed.
You can modify the `CircuitArgs` and `CircuitEnv` objects in `runAllBenchMarks` to suit your needs. See `create_circuits.nim` for their definition.
The runner executes all commands relative to the `logos-storage-nim` repo. This simplifies finding the correct circuit includes paths, etc. `CircuitEnv` sets all of this.
The runner executes all commands relative to the `nim-codex` repo. This simplifies finding the correct circuit includes paths, etc. `CircuitEnv` sets all of this.
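For illustration, a hypothetical override along those lines; the field names below are guessed from the components of the cache-folder name above, so check `create_circuits.nim` for the real definition:

```nim
# Hypothetical stand-in for the real CircuitArgs defined in
# create_circuits.nim; field names are inferred from the folder-name
# components (depth, maxslots, cellsize, blocksize, nsamples, ...).
type CircuitArgs = object
  depth, maxslots, cellsize, blocksize: int
  nsamples, nslots, ncells, index: int

let args = CircuitArgs(
  depth: 32, maxslots: 256, cellsize: 2048, blocksize: 65536,
  nsamples: 9, nslots: 11, ncells: 512, index: 3,
)
echo args # plug an instance like this into runAllBenchMarks
```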
## Logos Storage Ark Circom CLI
## Codex Ark Circom CLI
Runs Logos Storage's prover setup with Ark / Circom.
Runs Codex's prover setup with Ark / Circom.
Compile:
```sh

View File

@ -29,10 +29,10 @@ proc findCodexProjectDir(): string =
func default*(tp: typedesc[CircuitEnv]): CircuitEnv =
let codexDir = findCodexProjectDir()
result.nimCircuitCli =
codexDir / "vendor" / "logos-storage-proofs-circuits" / "reference" / "nim" /
codexDir / "vendor" / "codex-storage-proofs-circuits" / "reference" / "nim" /
"proof_input" / "cli"
result.circuitDirIncludes =
codexDir / "vendor" / "logos-storage-proofs-circuits" / "circuit"
codexDir / "vendor" / "codex-storage-proofs-circuits" / "circuit"
result.ptauPath =
codexDir / "benchmarks" / "ceremony" / "powersOfTau28_hez_final_23.ptau"
result.ptauUrl = "https://storage.googleapis.com/zkevm/ptau".parseUri
@ -118,7 +118,7 @@ proc createCircuit*(
##
## All needed circuit files will be generated as needed.
## They will be located in `circBenchDir` which defaults to a folder like:
## `logos-storage-nim/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3`
## `nim-codex/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3`
## with all the given CircuitArgs.
##
let circdir = circBenchDir

View File

@ -41,7 +41,7 @@ template benchmark*(name: untyped, count: int, blk: untyped) =
)
benchRuns[benchmarkName] = (runs.avg(), count)
template printBenchMarkSummaries*(printRegular = true, printTsv = true) =
template printBenchMarkSummaries*(printRegular=true, printTsv=true) =
if printRegular:
echo ""
for k, v in benchRuns:
@ -53,6 +53,7 @@ template printBenchMarkSummaries*(printRegular = true, printTsv = true) =
for k, v in benchRuns:
echo k, "\t", v.avgTimeSec, "\t", v.count
import std/math
func floorLog2*(x: int): int =

View File

@ -3,97 +3,63 @@ mode = ScriptMode.Verbose
import std/os except commandLineParams
### Helper functions
proc buildBinary(srcName: string, outName = os.lastPathPart(srcName), srcDir = "./", params = "", lang = "c") =
proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") =
if not dirExists "build":
mkDir "build"
# allow something like "nim nimbus --verbosity:0 --hints:off nimbus.nims"
var extra_params = params
when compiles(commandLineParams):
for param in commandLineParams():
extra_params &= " " & param
else:
for i in 2 ..< paramCount():
for i in 2..<paramCount():
extra_params &= " " & paramStr(i)
let
# Place build output in 'build' folder, even if name includes a longer path.
cmd =
"nim " & lang & " --out:build/" & outName & " " & extra_params & " " & srcDir &
srcName & ".nim"
outName = os.lastPathPart(name)
cmd = "nim " & lang & " --out:build/" & outName & " " & extra_params & " " & srcDir & name & ".nim"
exec(cmd)
proc buildLibrary(name: string, srcDir = "./", params = "", `type` = "dynamic") =
if not dirExists "build":
mkDir "build"
proc test(name: string, srcDir = "tests/", params = "", lang = "c") =
buildBinary name, srcDir, params
exec "build/" & name
if `type` == "dynamic":
let lib_name = (
when defined(windows): name & ".dll"
elif defined(macosx): name & ".dylib"
else: name & ".so"
)
exec "nim c" & " --out:build/" & lib_name &
" --threads:on --app:lib --opt:size --noMain --mm:refc --header --d:metrics " &
"--nimMainPrefix:libstorage -d:noSignalHandler " &
"-d:LeopardExtraCompilerFlags=-fPIC " & "-d:chronicles_runtime_filtering " &
"-d:chronicles_log_level=TRACE " & params & " " & srcDir & name & ".nim"
else:
exec "nim c" & " --out:build/" & name &
".a --threads:on --app:staticlib --opt:size --noMain --mm:refc --header --d:metrics " &
"--nimMainPrefix:libstorage -d:noSignalHandler " &
"-d:LeopardExtraCompilerFlags=-fPIC " &
"-d:chronicles_runtime_filtering " &
"-d:chronicles_log_level=TRACE " &
params & " " & srcDir & name & ".nim"
proc test(name: string, outName = name, srcDir = "tests/", params = "", lang = "c") =
buildBinary name, outName, srcDir, params
exec "build/" & outName
task storage, "build logos storage binary":
buildBinary "codex",
outname = "storage",
params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE"
task codex, "build codex binary":
buildBinary "codex", params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE"
task toolsCirdl, "build tools/cirdl binary":
buildBinary "tools/cirdl/cirdl"
task testStorage, "Build & run Logos Storage tests":
test "testCodex", outName = "testStorage", params = "-d:storage_enable_proof_failures=true"
task testCodex, "Build & run Codex tests":
test "testCodex", params = "-d:codex_enable_proof_failures=true"
task testContracts, "Build & run Logos Storage Contract tests":
task testContracts, "Build & run Codex Contract tests":
test "testContracts"
task testIntegration, "Run integration tests":
buildBinary "codex",
outName = "storage",
params =
"-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE -d:storage_enable_proof_failures=true"
buildBinary "codex", params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE -d:codex_enable_proof_failures=true"
test "testIntegration"
# use params to enable logging from the integration test executable
# test "testIntegration", params = "-d:chronicles_sinks=textlines[notimestamps,stdout],textlines[dynamic] " &
# "-d:chronicles_enabled_topics:integration:TRACE"
task build, "build Logos Storage binary":
storageTask()
task build, "build codex binary":
codexTask()
task test, "Run tests":
testStorageTask()
testCodexTask()
task testTools, "Run Tools tests":
toolsCirdlTask()
test "testTools"
task testAll, "Run all tests (except for Taiko L2 tests)":
testStorageTask()
testCodexTask()
testContractsTask()
testIntegrationTask()
testToolsTask()
task testTaiko, "Run Taiko L2 tests":
storageTask()
codexTask()
test "testTaiko"
import strutils
@ -119,50 +85,20 @@ task coverage, "generates code coverage report":
var nimSrcs = " "
for f in walkDirRec("codex", {pcFile}):
if f.endswith(".nim"):
nimSrcs.add " " & f.absolutePath.quoteShell()
if f.endswith(".nim"): nimSrcs.add " " & f.absolutePath.quoteShell()
echo "======== Running Tests ======== "
test "coverage",
srcDir = "tests/",
params =
" --nimcache:nimcache/coverage -d:release -d:storage_enable_proof_failures=true"
test "coverage", srcDir = "tests/", params = " --nimcache:nimcache/coverage -d:release -d:codex_enable_proof_failures=true"
exec("rm nimcache/coverage/*.c")
rmDir("coverage")
mkDir("coverage")
rmDir("coverage"); mkDir("coverage")
echo " ======== Running LCOV ======== "
exec(
"lcov --capture --keep-going --directory nimcache/coverage --output-file coverage/coverage.info"
)
exec(
"lcov --extract coverage/coverage.info --keep-going --output-file coverage/coverage.f.info " &
nimSrcs
)
exec("lcov --capture --directory nimcache/coverage --output-file coverage/coverage.info")
exec("lcov --extract coverage/coverage.info --output-file coverage/coverage.f.info " & nimSrcs)
echo " ======== Generating HTML coverage report ======== "
exec("genhtml coverage/coverage.f.info --keep-going --output-directory coverage/report ")
exec("genhtml coverage/coverage.f.info --output-directory coverage/report ")
echo " ======== Coverage report Done ======== "
task showCoverage, "open coverage html":
echo " ======== Opening HTML coverage report in browser... ======== "
if findExe("open") != "":
exec("open coverage/report/index.html")
task libstorageDynamic, "Generate bindings":
var params = ""
when compiles(commandLineParams):
for param in commandLineParams():
if param.len > 0 and param.startsWith("-"):
params.add " " & param
let name = "libstorage"
buildLibrary name, "library/", params, "dynamic"
task libstorageStatic, "Generate bindings":
var params = ""
when compiles(commandLineParams):
for param in commandLineParams():
if param.len > 0 and param.startsWith("-"):
params.add " " & param
let name = "libstorage"
buildLibrary name, "library/", params, "static"

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -28,6 +28,7 @@ import ./codex/codextypes
export codex, conf, libp2p, chronos, logutils
when isMainModule:
import std/sequtils
import std/os
import pkg/confutils/defs
import ./codex/utils/fileutils
@@ -38,45 +39,40 @@ when isMainModule:
when defined(posix):
import system/ansi_c
type CodexStatus {.pure.} = enum
Stopped
Stopping
Running
type
CodexStatus {.pure.} = enum
Stopped,
Stopping,
Running
let config = CodexConf.load(
version = codexFullVersion,
envVarsPrefix = "storage",
secondarySources = proc(
config: CodexConf, sources: auto
) {.gcsafe, raises: [ConfigurationError].} =
if configFile =? config.configFile:
sources.addConfigFile(Toml, configFile)
,
envVarsPrefix = "codex",
secondarySources = proc (config: CodexConf, sources: auto) =
if configFile =? config.configFile:
sources.addConfigFile(Toml, configFile)
)
config.setupLogging()
try:
updateLogLevel(config.logLevel)
except ValueError as err:
try:
stderr.write "Invalid value for --log-level. " & err.msg & "\n"
except IOError:
echo "Invalid value for --log-level. " & err.msg
quit QuitFailure
config.setupMetrics()
if not (checkAndCreateDataDir((config.dataDir).string)):
if config.nat == ValidIpAddress.init(IPv4_any()):
error "`--nat` cannot be set to the any (`0.0.0.0`) address"
quit QuitFailure
if config.nat == ValidIpAddress.init("127.0.0.1"):
warn "`--nat` is set to loopback, your node won't properly announce over the DHT"
if not(checkAndCreateDataDir((config.dataDir).string)):
# We are unable to access/create data folder or data folder's
# permissions are insecure.
quit QuitFailure
if config.prover() and not (checkAndCreateDataDir((config.circuitDir).string)):
if config.prover() and not(checkAndCreateDataDir((config.circuitDir).string)):
quit QuitFailure
trace "Data dir initialized", dir = $config.dataDir
if not (checkAndCreateDataDir((config.dataDir / "repo"))):
if not(checkAndCreateDataDir((config.dataDir / "repo"))):
# We are unable to access/create data folder or data folder's
# permissions are insecure.
quit QuitFailure
@@ -95,28 +91,25 @@ when isMainModule:
config.dataDir / config.netPrivKeyFile
privateKey = setupKey(keyPath).expect("Should setup private key!")
server =
try:
CodexServer.new(config, privateKey)
except Exception as exc:
error "Failed to start Logos Storage", msg = exc.msg
quit QuitFailure
server = try:
CodexServer.new(config, privateKey)
except Exception as exc:
error "Failed to start Codex", msg = exc.msg
quit QuitFailure
## Ctrl+C handling
proc doShutdown() =
shutdown = server.shutdown()
shutdown = server.stop()
state = CodexStatus.Stopping
notice "Stopping Logos Storage"
notice "Stopping Codex"
proc controlCHandler() {.noconv.} =
when defined(windows):
# workaround for https://github.com/nim-lang/Nim/issues/4057
try:
setupForeignThreadGc()
except Exception as exc:
raiseAssert exc.msg
# shouldn't happen
except Exception as exc: raiseAssert exc.msg # shouldn't happen
notice "Shutting down after having received SIGINT"
doShutdown()
@@ -138,7 +131,7 @@ when isMainModule:
try:
waitFor server.start()
except CatchableError as error:
error "Logos Storage failed to start", error = error.msg
error "Codex failed to start", error = error.msg
# XXX ideally we'd like to issue a stop instead of quitting cold turkey,
# but this would mean we'd have to fix the implementation of all
# services so they won't crash if we attempt to stop them before they
@@ -159,7 +152,7 @@ when isMainModule:
# be assigned before state switches to Stopping
waitFor shutdown
except CatchableError as error:
error "Logos Storage didn't shutdown correctly", error = error.msg
error "Codex didn't shutdown correctly", error = error.msg
quit QuitFailure
notice "Exited Storage"
notice "Exited codex"
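The startup and shutdown changes above reduce to a small status machine: a SIGINT hook only flips a flag, and the main loop notices the change and awaits the stored shutdown future instead of quitting cold. A minimal, self-contained sketch of that flow, with illustrative names (Status, controlCHandler and main are stand-ins, not the node's actual API), assuming pkg/chronos:

import pkg/chronos

type Status {.pure.} = enum
  Running
  Stopping
  Stopped

var state = Status.Running

proc controlCHandler() {.noconv.} =
  # Signal handlers should do almost nothing: just flip the flag.
  state = Status.Stopping

when isMainModule:
  setControlCHook(controlCHandler)
  proc main() {.async.} =
    while state == Status.Running:
      await sleepAsync(100.millis)  # the real node services chronos here
    # a real node would now await its server's stop()/shutdown() future
    state = Status.Stopped
  waitFor main()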

View File

@@ -1,5 +1,5 @@
version = "0.1.0"
author = "Logos Storage Team"
author = "Codex Team"
description = "p2p data durability engine"
license = "MIT"
binDir = "build"

View File

@@ -1,5 +1,10 @@
import ./blockexchange/[network, engine, peers]
import ./blockexchange/[
network,
engine,
peers]
import ./blockexchange/protobuf/[blockexc, presence]
import ./blockexchange/protobuf/[
blockexc,
presence]
export network, engine, blockexc, presence, peers

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,8 +7,6 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import pkg/chronos
import pkg/libp2p/cid
import pkg/libp2p/multicodec
@@ -20,8 +18,6 @@ import ../protobuf/presence
import ../peers
import ../../utils
import ../../utils/exceptions
import ../../utils/trackedfutures
import ../../discovery
import ../../stores/blockstore
import ../../logutils
@@ -30,122 +26,114 @@ import ../../manifest
logScope:
topics = "codex discoveryengine advertiser"
declareGauge(codex_inflight_advertise, "inflight advertise requests")
declareGauge(codexInflightAdvertise, "inflight advertise requests")
const
DefaultConcurrentAdvertRequests = 10
DefaultAdvertiseLoopSleep = 30.minutes
type Advertiser* = ref object of RootObj
localStore*: BlockStore # Local block store for this instance
discovery*: Discovery # Discovery interface
type
Advertiser* = ref object of RootObj
localStore*: BlockStore # Local block store for this instance
discovery*: Discovery # Discovery interface
advertiserRunning*: bool # Indicates if discovery is running
concurrentAdvReqs: int # Concurrent advertise requests
advertiserRunning*: bool # Indicates if discovery is running
concurrentAdvReqs: int # Concurrent advertise requests
advertiseLocalStoreLoop*: Future[void].Raising([]) # Advertise loop task handle
advertiseQueue*: AsyncQueue[Cid] # Advertise queue
trackedFutures*: TrackedFutures # Advertise tasks futures
advertiseLocalStoreLoop*: Future[void] # Advertise loop task handle
advertiseQueue*: AsyncQueue[Cid] # Advertise queue
advertiseTasks*: seq[Future[void]] # Advertise tasks
advertiseLocalStoreLoopSleep: Duration # Advertise loop sleep
inFlightAdvReqs*: Table[Cid, Future[void]] # Inflight advertise requests
advertiseLocalStoreLoopSleep: Duration # Advertise loop sleep
inFlightAdvReqs*: Table[Cid, Future[void]] # Inflight advertise requests
proc addCidToQueue(b: Advertiser, cid: Cid) {.async: (raises: [CancelledError]).} =
proc addCidToQueue(b: Advertiser, cid: Cid) {.async.} =
if cid notin b.advertiseQueue:
await b.advertiseQueue.put(cid)
trace "Advertising", cid
proc advertiseBlock(b: Advertiser, cid: Cid) {.async: (raises: [CancelledError]).} =
proc advertiseBlock(b: Advertiser, cid: Cid) {.async.} =
without isM =? cid.isManifest, err:
warn "Unable to determine if cid is manifest"
return
try:
if isM:
without blk =? await b.localStore.getBlock(cid), err:
error "Error retrieving manifest block", cid, err = err.msg
return
if isM:
without blk =? await b.localStore.getBlock(cid), err:
error "Error retrieving manifest block", cid, err = err.msg
return
without manifest =? Manifest.decode(blk), err:
error "Unable to decode as manifest", err = err.msg
return
without manifest =? Manifest.decode(blk), err:
error "Unable to decode as manifest", err = err.msg
return
# announce manifest cid and tree cid
await b.addCidToQueue(cid)
await b.addCidToQueue(manifest.treeCid)
except CancelledError as exc:
trace "Cancelled advertise block", cid
raise exc
except CatchableError as e:
error "failed to advertise block", cid, error = e.msgDetail
# announce manifest cid and tree cid
await b.addCidToQueue(cid)
await b.addCidToQueue(manifest.treeCid)
proc advertiseLocalStoreLoop(b: Advertiser) {.async: (raises: []).} =
try:
while b.advertiserRunning:
if cidsIter =? await b.localStore.listBlocks(blockType = BlockType.Manifest):
trace "Advertiser begins iterating blocks..."
for c in cidsIter:
if cid =? await c:
await b.advertiseBlock(cid)
trace "Advertiser iterating blocks finished."
proc advertiseLocalStoreLoop(b: Advertiser) {.async.} =
while b.advertiserRunning:
if cids =? await b.localStore.listBlocks(blockType = BlockType.Manifest):
trace "Advertiser begins iterating blocks..."
for c in cids:
if cid =? await c:
await b.advertiseBlock(cid)
trace "Advertiser iterating blocks finished."
await sleepAsync(b.advertiseLocalStoreLoopSleep)
except CancelledError:
warn "Cancelled advertise local store loop"
await sleepAsync(b.advertiseLocalStoreLoopSleep)
info "Exiting advertise task loop"
proc processQueueLoop(b: Advertiser) {.async: (raises: []).} =
try:
while b.advertiserRunning:
let cid = await b.advertiseQueue.get()
proc processQueueLoop(b: Advertiser) {.async.} =
while b.advertiserRunning:
try:
let
cid = await b.advertiseQueue.get()
if cid in b.inFlightAdvReqs:
continue
let request = b.discovery.provide(cid)
b.inFlightAdvReqs[cid] = request
codex_inflight_advertise.set(b.inFlightAdvReqs.len.int64)
try:
let
request = b.discovery.provide(cid)
defer:
b.inFlightAdvReqs[cid] = request
codexInflightAdvertise.set(b.inFlightAdvReqs.len.int64)
await request
finally:
b.inFlightAdvReqs.del(cid)
codex_inflight_advertise.set(b.inFlightAdvReqs.len.int64)
await request
except CancelledError:
warn "Cancelled advertise task runner"
codexInflightAdvertise.set(b.inFlightAdvReqs.len.int64)
except CancelledError:
trace "Advertise task cancelled"
return
except CatchableError as exc:
warn "Exception in advertise task runner", exc = exc.msg
info "Exiting advertise task runner"
proc start*(b: Advertiser) {.async: (raises: []).} =
proc start*(b: Advertiser) {.async.} =
## Start the advertiser
##
trace "Advertiser start"
# The advertiser is expected to be started only once.
if b.advertiserRunning:
raiseAssert "Advertiser can only be started once — this should not happen"
proc onBlock(cid: Cid) {.async: (raises: []).} =
try:
await b.advertiseBlock(cid)
except CancelledError:
trace "Cancelled advertise block", cid
proc onBlock(cid: Cid) {.async.} =
await b.advertiseBlock(cid)
doAssert(b.localStore.onBlockStored.isNone())
b.localStore.onBlockStored = onBlock.some
if b.advertiserRunning:
warn "Starting advertiser twice"
return
b.advertiserRunning = true
for i in 0 ..< b.concurrentAdvReqs:
let fut = b.processQueueLoop()
b.trackedFutures.track(fut)
for i in 0..<b.concurrentAdvReqs:
b.advertiseTasks.add(processQueueLoop(b))
b.advertiseLocalStoreLoop = advertiseLocalStoreLoop(b)
b.trackedFutures.track(b.advertiseLocalStoreLoop)
proc stop*(b: Advertiser) {.async: (raises: []).} =
proc stop*(b: Advertiser) {.async.} =
## Stop the advertiser
##
@@ -157,16 +145,26 @@ proc stop*(b: Advertiser) {.async: (raises: []).} =
b.advertiserRunning = false
# Stop incoming tasks from callback and localStore loop
b.localStore.onBlockStored = CidCallback.none
trace "Stopping advertise loop and tasks"
await b.trackedFutures.cancelTracked()
trace "Advertiser loop and tasks stopped"
if not b.advertiseLocalStoreLoop.isNil and not b.advertiseLocalStoreLoop.finished:
trace "Awaiting advertise loop to stop"
await b.advertiseLocalStoreLoop.cancelAndWait()
trace "Advertise loop stopped"
# Clear up remaining tasks
for task in b.advertiseTasks:
if not task.finished:
trace "Awaiting advertise task to stop"
await task.cancelAndWait()
trace "Advertise task stopped"
trace "Advertiser stopped"
proc new*(
T: type Advertiser,
localStore: BlockStore,
discovery: Discovery,
concurrentAdvReqs = DefaultConcurrentAdvertRequests,
advertiseLocalStoreLoopSleep = DefaultAdvertiseLoopSleep,
advertiseLocalStoreLoopSleep = DefaultAdvertiseLoopSleep
): Advertiser =
## Create an advertiser instance
##
@@ -175,7 +173,5 @@ proc new*(
discovery: discovery,
concurrentAdvReqs: concurrentAdvReqs,
advertiseQueue: newAsyncQueue[Cid](concurrentAdvReqs),
trackedFutures: TrackedFutures.new(),
inFlightAdvReqs: initTable[Cid, Future[void]](),
advertiseLocalStoreLoopSleep: advertiseLocalStoreLoopSleep,
)
advertiseLocalStoreLoopSleep: advertiseLocalStoreLoopSleep)
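The advertiser now hands every spawned loop to a TrackedFutures container and cancels them all in stop, instead of keeping a bare seq[Future[void]]. A reduced sketch of that pattern, assuming pkg/chronos; the types below are simplified stand-ins and the real codex/utils/trackedfutures differs in detail:

import std/[sequtils, tables]
import pkg/chronos

type TrackedFutures = ref object
  futures: Table[uint, FutureBase]

proc track(t: TrackedFutures, fut: FutureBase) =
  let id = fut.id
  t.futures[id] = fut
  proc cleanup(data: pointer) {.gcsafe, raises: [].} =
    t.futures.del(id)  # a settled future removes itself
  fut.addCallback(cleanup)

proc cancelTracked(t: TrackedFutures) {.async.} =
  # Snapshot first: cleanup callbacks mutate the table while we cancel.
  for fut in toSeq(t.futures.values):
    if not fut.finished:
      await fut.cancelAndWait()

The payoff shows in stop above: a single cancelTracked call replaces the old per-task cancelAndWait loops.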

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -8,7 +8,6 @@
## those terms.
import std/sequtils
import std/algorithm
import pkg/chronos
import pkg/libp2p/cid
@@ -24,7 +23,6 @@ import ../network
import ../peers
import ../../utils
import ../../utils/trackedfutures
import ../../discovery
import ../../stores/blockstore
import ../../logutils
@@ -33,107 +31,95 @@ import ../../manifest
logScope:
topics = "codex discoveryengine"
declareGauge(codex_inflight_discovery, "inflight discovery requests")
declareGauge(codexInflightDiscovery, "inflight discovery requests")
const
DefaultConcurrentDiscRequests = 10
DefaultDiscoveryTimeout = 1.minutes
DefaultMinPeersPerBlock = 3
DefaultMaxPeersPerBlock = 8
DefaultDiscoveryLoopSleep = 3.seconds
type DiscoveryEngine* = ref object of RootObj
localStore*: BlockStore # Local block store for this instance
peers*: PeerCtxStore # Peer context store
network*: BlockExcNetwork # Network interface
discovery*: Discovery # Discovery interface
pendingBlocks*: PendingBlocksManager # Blocks we're awaiting to be resolved
discEngineRunning*: bool # Indicates if discovery is running
concurrentDiscReqs: int # Concurrent discovery requests
discoveryLoop*: Future[void].Raising([]) # Discovery loop task handle
discoveryQueue*: AsyncQueue[Cid] # Discovery queue
trackedFutures*: TrackedFutures # Tracked Discovery tasks futures
minPeersPerBlock*: int # Min number of peers with block
maxPeersPerBlock*: int # Max number of peers with block
discoveryLoopSleep: Duration # Discovery loop sleep
inFlightDiscReqs*: Table[Cid, Future[seq[SignedPeerRecord]]]
# Inflight discovery requests
type
DiscoveryEngine* = ref object of RootObj
localStore*: BlockStore # Local block store for this instance
peers*: PeerCtxStore # Peer context store
network*: BlockExcNetwork # Network interface
discovery*: Discovery # Discovery interface
pendingBlocks*: PendingBlocksManager # Blocks we're awaiting to be resolved
discEngineRunning*: bool # Indicates if discovery is running
concurrentDiscReqs: int # Concurrent discovery requests
discoveryLoop*: Future[void] # Discovery loop task handle
discoveryQueue*: AsyncQueue[Cid] # Discovery queue
discoveryTasks*: seq[Future[void]] # Discovery tasks
minPeersPerBlock*: int # Max number of peers with block
discoveryLoopSleep: Duration # Discovery loop sleep
inFlightDiscReqs*: Table[Cid, Future[seq[SignedPeerRecord]]] # Inflight discovery requests
proc cleanupExcessPeers(b: DiscoveryEngine, cid: Cid) {.gcsafe, raises: [].} =
var haves = b.peers.peersHave(cid)
let count = haves.len - b.maxPeersPerBlock
if count <= 0:
return
haves.sort(
proc(a, b: BlockExcPeerCtx): int =
cmp(a.lastExchange, b.lastExchange)
)
let toRemove = haves[0 ..< count]
for peer in toRemove:
try:
peer.cleanPresence(BlockAddress.init(cid))
trace "Removed block presence from peer", cid, peer = peer.id
except CatchableError as exc:
error "Failed to clean presence for peer",
cid, peer = peer.id, error = exc.msg, name = exc.name
proc discoveryQueueLoop(b: DiscoveryEngine) {.async: (raises: []).} =
try:
while b.discEngineRunning:
for cid in toSeq(b.pendingBlocks.wantListBlockCids):
proc discoveryQueueLoop(b: DiscoveryEngine) {.async.} =
while b.discEngineRunning:
for cid in toSeq(b.pendingBlocks.wantListBlockCids):
try:
await b.discoveryQueue.put(cid)
except CancelledError:
trace "Discovery loop cancelled"
return
except CatchableError as exc:
warn "Exception in discovery loop", exc = exc.msg
await sleepAsync(b.discoveryLoopSleep)
except CancelledError:
trace "Discovery loop cancelled"
logScope:
sleep = b.discoveryLoopSleep
wanted = b.pendingBlocks.len
proc discoveryTaskLoop(b: DiscoveryEngine) {.async: (raises: []).} =
await sleepAsync(b.discoveryLoopSleep)
proc discoveryTaskLoop(b: DiscoveryEngine) {.async.} =
## Run discovery tasks
##
try:
while b.discEngineRunning:
let cid = await b.discoveryQueue.get()
while b.discEngineRunning:
try:
let
cid = await b.discoveryQueue.get()
if cid in b.inFlightDiscReqs:
trace "Discovery request already in progress", cid
continue
trace "Running discovery task for cid", cid
let haves = b.peers.peersHave(cid)
if haves.len > b.maxPeersPerBlock:
trace "Cleaning up excess peers",
cid, peers = haves.len, max = b.maxPeersPerBlock
b.cleanupExcessPeers(cid)
continue
let
haves = b.peers.peersHave(cid)
if haves.len < b.minPeersPerBlock:
let request = b.discovery.find(cid)
b.inFlightDiscReqs[cid] = request
codex_inflight_discovery.set(b.inFlightDiscReqs.len.int64)
try:
let
request = b.discovery
.find(cid)
.wait(DefaultDiscoveryTimeout)
defer:
b.inFlightDiscReqs.del(cid)
codex_inflight_discovery.set(b.inFlightDiscReqs.len.int64)
b.inFlightDiscReqs[cid] = request
codexInflightDiscovery.set(b.inFlightDiscReqs.len.int64)
let
peers = await request
if (await request.withTimeout(DefaultDiscoveryTimeout)) and
peers =? (await request).catch:
let dialed = await allFinished(peers.mapIt(b.network.dialPeer(it.data)))
let
dialed = await allFinished(
peers.mapIt( b.network.dialPeer(it.data) ))
for i, f in dialed:
if f.failed:
await b.discovery.removeProvider(peers[i].data.peerId)
except CancelledError:
trace "Discovery task cancelled"
return
finally:
b.inFlightDiscReqs.del(cid)
codexInflightDiscovery.set(b.inFlightDiscReqs.len.int64)
except CancelledError:
trace "Discovery task cancelled"
return
except CatchableError as exc:
warn "Exception in discovery task runner", exc = exc.msg
info "Exiting discovery task runner"
proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) =
proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) {.inline.} =
for cid in cids:
if cid notin b.discoveryQueue:
try:
@@ -141,27 +127,23 @@ proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) =
except CatchableError as exc:
warn "Exception queueing discovery request", exc = exc.msg
proc start*(b: DiscoveryEngine) {.async: (raises: []).} =
proc start*(b: DiscoveryEngine) {.async.} =
## Start the discengine task
##
trace "Discovery engine starting"
trace "Discovery engine start"
if b.discEngineRunning:
warn "Starting discovery engine twice"
return
b.discEngineRunning = true
for i in 0 ..< b.concurrentDiscReqs:
let fut = b.discoveryTaskLoop()
b.trackedFutures.track(fut)
for i in 0..<b.concurrentDiscReqs:
b.discoveryTasks.add(discoveryTaskLoop(b))
b.discoveryLoop = b.discoveryQueueLoop()
b.trackedFutures.track(b.discoveryLoop)
b.discoveryLoop = discoveryQueueLoop(b)
trace "Discovery engine started"
proc stop*(b: DiscoveryEngine) {.async: (raises: []).} =
proc stop*(b: DiscoveryEngine) {.async.} =
## Stop the discovery engine
##
@@ -171,9 +153,16 @@ proc stop*(b: DiscoveryEngine) {.async: (raises: []).} =
return
b.discEngineRunning = false
trace "Stopping discovery loop and tasks"
await b.trackedFutures.cancelTracked()
trace "Discovery loop and tasks stopped"
for task in b.discoveryTasks:
if not task.finished:
trace "Awaiting discovery task to stop"
await task.cancelAndWait()
trace "Discovery task stopped"
if not b.discoveryLoop.isNil and not b.discoveryLoop.finished:
trace "Awaiting discovery loop to stop"
await b.discoveryLoop.cancelAndWait()
trace "Discovery loop stopped"
trace "Discovery engine stopped"
@@ -186,8 +175,7 @@ proc new*(
pendingBlocks: PendingBlocksManager,
concurrentDiscReqs = DefaultConcurrentDiscRequests,
discoveryLoopSleep = DefaultDiscoveryLoopSleep,
minPeersPerBlock = DefaultMinPeersPerBlock,
maxPeersPerBlock = DefaultMaxPeersPerBlock,
minPeersPerBlock = DefaultMinPeersPerBlock
): DiscoveryEngine =
## Create a discovery engine instance for advertising services
##
@@ -199,9 +187,6 @@ proc new*(
pendingBlocks: pendingBlocks,
concurrentDiscReqs: concurrentDiscReqs,
discoveryQueue: newAsyncQueue[Cid](concurrentDiscReqs),
trackedFutures: TrackedFutures.new(),
inFlightDiscReqs: initTable[Cid, Future[seq[SignedPeerRecord]]](),
discoveryLoopSleep: discoveryLoopSleep,
minPeersPerBlock: minPeersPerBlock,
maxPeersPerBlock: maxPeersPerBlock,
)
minPeersPerBlock: minPeersPerBlock)
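New in this version are maxPeersPerBlock and cleanupExcessPeers: once more than maxPeersPerBlock peers advertise a block, presence for the least recently active ones is dropped. The selection rule in isolation, as a standalone sketch (Peer and excessPeers are stand-ins, not the repo's types):

import std/[algorithm, sequtils]

type Peer = object
  id: int
  lastExchange: int64  # larger = exchanged with us more recently

proc excessPeers(haves: seq[Peer], maxPeersPerBlock: int): seq[Peer] =
  ## Peers that would be cleaned up, least recently active first.
  let count = haves.len - maxPeersPerBlock
  if count <= 0:
    return @[]
  var peers = haves
  peers.sort(proc(a, b: Peer): int = cmp(a.lastExchange, b.lastExchange))
  peers[0 ..< count]

when isMainModule:
  let haves = @[Peer(id: 1, lastExchange: 30),
                Peer(id: 2, lastExchange: 10),
                Peer(id: 3, lastExchange: 20)]
  doAssert excessPeers(haves, 2).mapIt(it.id) == @[2]  # stalest peer goes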

File diff suppressed because it is too large

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,8 +7,6 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import std/math
import pkg/nitro
import pkg/questionable/results
@@ -17,13 +15,15 @@ import ../peers
export nitro
export results
push: {.upraises: [].}
const ChainId* = 0.u256 # invalid chain id for now
const Asset* = EthAddress.zero # invalid ERC20 asset address for now
const AmountPerChannel = (10'u64 ^ 18).u256 # 1 asset, ERC20 default is 18 decimals
const AmountPerChannel = (10'u64^18).u256 # 1 asset, ERC20 default is 18 decimals
func openLedgerChannel*(
wallet: WalletRef, hub: EthAddress, asset: EthAddress
): ?!ChannelId =
func openLedgerChannel*(wallet: WalletRef,
hub: EthAddress,
asset: EthAddress): ?!ChannelId =
wallet.openLedgerChannel(hub, ChainId, asset, AmountPerChannel)
func getOrOpenChannel(wallet: WalletRef, peer: BlockExcPeerCtx): ?!ChannelId =
@@ -36,7 +36,9 @@ func getOrOpenChannel(wallet: WalletRef, peer: BlockExcPeerCtx): ?!ChannelId =
else:
failure "no account set for peer"
func pay*(wallet: WalletRef, peer: BlockExcPeerCtx, amount: UInt256): ?!SignedState =
func pay*(wallet: WalletRef,
peer: BlockExcPeerCtx,
amount: UInt256): ?!SignedState =
if account =? peer.account:
let asset = Asset
let receiver = account.address
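AmountPerChannel above encodes one whole asset unit: ERC-20 balances are plain integers, so one token at the default 18 decimals is 10^18 base units. A tiny check of that arithmetic, assuming pkg/stint supplies UInt256 (as it does in this codebase):

import std/math
import pkg/stint

const AmountPerChannel = (10'u64 ^ 18).u256  # 1 token at 18 decimals

when isMainModule:
  doAssert AmountPerChannel == parse("1000000000000000000", UInt256)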

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,11 +7,12 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import std/tables
import std/monotimes
import std/strutils
import pkg/upraises
push: {.upraises: [].}
import pkg/chronos
import pkg/libp2p
@@ -24,194 +25,133 @@ import ../../logutils
logScope:
topics = "codex pendingblocks"
declareGauge(
codex_block_exchange_pending_block_requests,
"codex blockexchange pending block requests",
)
declareGauge(
codex_block_exchange_retrieval_time_us, "codex blockexchange block retrieval time us"
)
declareGauge(codex_block_exchange_pending_block_requests, "codex blockexchange pending block requests")
declareGauge(codex_block_exchange_retrieval_time_us, "codex blockexchange block retrieval time us")
const
DefaultBlockRetries* = 3000
DefaultRetryInterval* = 2.seconds
DefaultBlockTimeout* = 10.minutes
type
RetriesExhaustedError* = object of CatchableError
BlockHandle* = Future[Block].Raising([CancelledError, RetriesExhaustedError])
BlockReq* = object
handle*: BlockHandle
requested*: ?PeerId
blockRetries*: int
handle*: Future[Block]
inFlight*: bool
startTime*: int64
PendingBlocksManager* = ref object of RootObj
blockRetries*: int = DefaultBlockRetries
retryInterval*: Duration = DefaultRetryInterval
blocks*: Table[BlockAddress, BlockReq] # pending Block requests
lastInclusion*: Moment # time at which we last included a block into our wantlist
proc updatePendingBlockGauge(p: PendingBlocksManager) =
codex_block_exchange_pending_block_requests.set(p.blocks.len.int64)
proc getWantHandle*(
self: PendingBlocksManager, address: BlockAddress, requested: ?PeerId = PeerId.none
): Future[Block] {.async: (raw: true, raises: [CancelledError, RetriesExhaustedError]).} =
p: PendingBlocksManager,
address: BlockAddress,
timeout = DefaultBlockTimeout,
inFlight = false): Future[Block] {.async.} =
## Add an event for a block
##
self.blocks.withValue(address, blk):
return blk[].handle
do:
let blk = BlockReq(
handle: newFuture[Block]("pendingBlocks.getWantHandle"),
requested: requested,
blockRetries: self.blockRetries,
startTime: getMonoTime().ticks,
)
self.blocks[address] = blk
self.lastInclusion = Moment.now()
try:
if address notin p.blocks:
p.blocks[address] = BlockReq(
handle: newFuture[Block]("pendingBlocks.getWantHandle"),
inFlight: inFlight,
startTime: getMonoTime().ticks)
let handle = blk.handle
proc cleanUpBlock(data: pointer) {.raises: [].} =
self.blocks.del(address)
self.updatePendingBlockGauge()
handle.addCallback(cleanUpBlock)
handle.cancelCallback = proc(data: pointer) {.raises: [].} =
if not handle.finished:
handle.removeCallback(cleanUpBlock)
cleanUpBlock(nil)
self.updatePendingBlockGauge()
return handle
p.updatePendingBlockGauge()
return await p.blocks[address].handle.wait(timeout)
except CancelledError as exc:
trace "Blocks cancelled", exc = exc.msg, address
raise exc
except CatchableError as exc:
error "Pending WANT failed or expired", exc = exc.msg
# no need to cancel, it is already cancelled by wait()
raise exc
finally:
p.blocks.del(address)
p.updatePendingBlockGauge()
proc getWantHandle*(
self: PendingBlocksManager, cid: Cid, requested: ?PeerId = PeerId.none
): Future[Block] {.async: (raw: true, raises: [CancelledError, RetriesExhaustedError]).} =
self.getWantHandle(BlockAddress.init(cid), requested)
proc completeWantHandle*(
self: PendingBlocksManager, address: BlockAddress, blk: Block
) {.raises: [].} =
## Complete a pending want handle
self.blocks.withValue(address, blockReq):
if not blockReq[].handle.finished:
trace "Completing want handle from provided block", address
blockReq[].handle.complete(blk)
else:
trace "Want handle already completed", address
do:
trace "No pending want handle found for address", address
p: PendingBlocksManager,
cid: Cid,
timeout = DefaultBlockTimeout,
inFlight = false): Future[Block] =
p.getWantHandle(BlockAddress.init(cid), timeout, inFlight)
proc resolve*(
self: PendingBlocksManager, blocksDelivery: seq[BlockDelivery]
) {.gcsafe, raises: [].} =
p: PendingBlocksManager,
blocksDelivery: seq[BlockDelivery]) {.gcsafe, raises: [].} =
## Resolve pending blocks
##
for bd in blocksDelivery:
self.blocks.withValue(bd.address, blockReq):
if not blockReq[].handle.finished:
trace "Resolving pending block", address = bd.address
p.blocks.withValue(bd.address, blockReq):
if not blockReq.handle.finished:
let
startTime = blockReq[].startTime
startTime = blockReq.startTime
stopTime = getMonoTime().ticks
retrievalDurationUs = (stopTime - startTime) div 1000
blockReq.handle.complete(bd.blk)
codex_block_exchange_retrieval_time_us.set(retrievalDurationUs)
if retrievalDurationUs > 500000:
warn "High block retrieval time", retrievalDurationUs, address = bd.address
else:
trace "Block handle already finished", address = bd.address
func retries*(self: PendingBlocksManager, address: BlockAddress): int =
self.blocks.withValue(address, pending):
result = pending[].blockRetries
do:
result = 0
func decRetries*(self: PendingBlocksManager, address: BlockAddress) =
self.blocks.withValue(address, pending):
pending[].blockRetries -= 1
func retriesExhausted*(self: PendingBlocksManager, address: BlockAddress): bool =
self.blocks.withValue(address, pending):
result = pending[].blockRetries <= 0
func isRequested*(self: PendingBlocksManager, address: BlockAddress): bool =
## Check if a block has been requested to a peer
##
result = false
self.blocks.withValue(address, pending):
result = pending[].requested.isSome
func getRequestPeer*(self: PendingBlocksManager, address: BlockAddress): ?PeerId =
## Returns the peer that requested this block
##
result = PeerId.none
self.blocks.withValue(address, pending):
result = pending[].requested
proc markRequested*(
self: PendingBlocksManager, address: BlockAddress, peer: PeerId
): bool =
## Marks this block as having been requested to a peer
proc setInFlight*(
p: PendingBlocksManager,
address: BlockAddress,
inFlight = true) =
## Set inflight status for a block
##
if self.isRequested(address):
return false
p.blocks.withValue(address, pending):
pending[].inFlight = inFlight
self.blocks.withValue(address, pending):
pending[].requested = peer.some
return true
proc isInFlight*(
p: PendingBlocksManager,
address: BlockAddress): bool =
## Check if a block is in flight
##
proc clearRequest*(
self: PendingBlocksManager, address: BlockAddress, peer: ?PeerId = PeerId.none
) =
self.blocks.withValue(address, pending):
if peer.isSome:
assert peer == pending[].requested
pending[].requested = PeerId.none
p.blocks.withValue(address, pending):
result = pending[].inFlight
func contains*(self: PendingBlocksManager, cid: Cid): bool =
BlockAddress.init(cid) in self.blocks
proc contains*(p: PendingBlocksManager, cid: Cid): bool =
BlockAddress.init(cid) in p.blocks
func contains*(self: PendingBlocksManager, address: BlockAddress): bool =
address in self.blocks
proc contains*(p: PendingBlocksManager, address: BlockAddress): bool =
address in p.blocks
iterator wantList*(self: PendingBlocksManager): BlockAddress =
for a in self.blocks.keys:
iterator wantList*(p: PendingBlocksManager): BlockAddress =
for a in p.blocks.keys:
yield a
iterator wantListBlockCids*(self: PendingBlocksManager): Cid =
for a in self.blocks.keys:
iterator wantListBlockCids*(p: PendingBlocksManager): Cid =
for a in p.blocks.keys:
if not a.leaf:
yield a.cid
iterator wantListCids*(self: PendingBlocksManager): Cid =
iterator wantListCids*(p: PendingBlocksManager): Cid =
var yieldedCids = initHashSet[Cid]()
for a in self.blocks.keys:
for a in p.blocks.keys:
let cid = a.cidOrTreeCid
if cid notin yieldedCids:
yieldedCids.incl(cid)
yield cid
iterator wantHandles*(self: PendingBlocksManager): Future[Block] =
for v in self.blocks.values:
iterator wantHandles*(p: PendingBlocksManager): Future[Block] =
for v in p.blocks.values:
yield v.handle
proc wantListLen*(self: PendingBlocksManager): int =
self.blocks.len
proc wantListLen*(p: PendingBlocksManager): int =
p.blocks.len
func len*(self: PendingBlocksManager): int =
self.blocks.len
func len*(p: PendingBlocksManager): int =
p.blocks.len
func new*(
T: type PendingBlocksManager,
retries = DefaultBlockRetries,
interval = DefaultRetryInterval,
): PendingBlocksManager =
PendingBlocksManager(blockRetries: retries, retryInterval: interval)
func new*(T: type PendingBlocksManager): PendingBlocksManager =
PendingBlocksManager()
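The rewritten getWantHandle drops the old wait(timeout)/finally bookkeeping: each address maps to exactly one shared future, and callbacks on that future remove it from the table whether it completes, fails, or is cancelled. The pattern in miniature, assuming pkg/chronos (Pending and getHandle are placeholders, not the actual PendingBlocksManager API):

import std/tables
import pkg/chronos

type Pending = ref object
  blocks: Table[string, Future[int]]

proc getHandle(p: Pending, key: string): Future[int] =
  p.blocks.withValue(key, existing):
    return existing[]  # a request is already pending: share it
  do:
    let handle = newFuture[int]("pending.getHandle")
    p.blocks[key] = handle
    proc cleanUp(data: pointer) {.gcsafe, raises: [].} =
      p.blocks.del(key)  # self-removal on completion or cancellation
    handle.addCallback(cleanUp)
    return handle

when isMainModule:
  let p = Pending()
  let h = p.getHandle("cid-1")
  doAssert p.getHandle("cid-1") == h  # deduplicated while pending
  h.complete(42)
  doAssert waitFor(h) == 42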

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -21,28 +21,24 @@ import ../../blocktype as bt
import ../../logutils
import ../protobuf/blockexc as pb
import ../protobuf/payments
import ../../utils/trackedfutures
import ./networkpeer
export networkpeer, payments
export network, payments
logScope:
topics = "codex blockexcnetwork"
const
Codec* = "/codex/blockexc/1.0.0"
DefaultMaxInflight* = 100
MaxInflight* = 100
type
WantListHandler* = proc(peer: PeerId, wantList: WantList) {.async: (raises: []).}
BlocksDeliveryHandler* =
proc(peer: PeerId, blocks: seq[BlockDelivery]) {.async: (raises: []).}
BlockPresenceHandler* =
proc(peer: PeerId, presence: seq[BlockPresence]) {.async: (raises: []).}
AccountHandler* = proc(peer: PeerId, account: Account) {.async: (raises: []).}
PaymentHandler* = proc(peer: PeerId, payment: SignedState) {.async: (raises: []).}
PeerEventHandler* = proc(peer: PeerId) {.async: (raises: [CancelledError]).}
WantListHandler* = proc(peer: PeerId, wantList: WantList): Future[void] {.gcsafe.}
BlocksDeliveryHandler* = proc(peer: PeerId, blocks: seq[BlockDelivery]): Future[void] {.gcsafe.}
BlockPresenceHandler* = proc(peer: PeerId, presence: seq[BlockPresence]): Future[void] {.gcsafe.}
AccountHandler* = proc(peer: PeerId, account: Account): Future[void] {.gcsafe.}
PaymentHandler* = proc(peer: PeerId, payment: SignedState): Future[void] {.gcsafe.}
BlockExcHandlers* = object
onWantList*: WantListHandler
@@ -50,9 +46,6 @@ type
onPresence*: BlockPresenceHandler
onAccount*: AccountHandler
onPayment*: PaymentHandler
onPeerJoined*: PeerEventHandler
onPeerDeparted*: PeerEventHandler
onPeerDropped*: PeerEventHandler
WantListSender* = proc(
id: PeerId,
@@ -61,21 +54,12 @@ type
cancel: bool = false,
wantType: WantType = WantType.WantHave,
full: bool = false,
sendDontHave: bool = false,
) {.async: (raises: [CancelledError]).}
WantCancellationSender* = proc(peer: PeerId, addresses: seq[BlockAddress]) {.
async: (raises: [CancelledError])
.}
BlocksDeliverySender* = proc(peer: PeerId, blocksDelivery: seq[BlockDelivery]) {.
async: (raises: [CancelledError])
.}
PresenceSender* = proc(peer: PeerId, presence: seq[BlockPresence]) {.
async: (raises: [CancelledError])
.}
AccountSender* =
proc(peer: PeerId, account: Account) {.async: (raises: [CancelledError]).}
PaymentSender* =
proc(peer: PeerId, payment: SignedState) {.async: (raises: [CancelledError]).}
sendDontHave: bool = false): Future[void] {.gcsafe.}
WantCancellationSender* = proc(peer: PeerId, addresses: seq[BlockAddress]): Future[void] {.gcsafe.}
BlocksDeliverySender* = proc(peer: PeerId, blocksDelivery: seq[BlockDelivery]): Future[void] {.gcsafe.}
PresenceSender* = proc(peer: PeerId, presence: seq[BlockPresence]): Future[void] {.gcsafe.}
AccountSender* = proc(peer: PeerId, account: Account): Future[void] {.gcsafe.}
PaymentSender* = proc(peer: PeerId, payment: SignedState): Future[void] {.gcsafe.}
BlockExcRequest* = object
sendWantList*: WantListSender
@@ -92,8 +76,6 @@ type
request*: BlockExcRequest
getConn: ConnProvider
inflightSema: AsyncSemaphore
maxInflight: int = DefaultMaxInflight
trackedFutures*: TrackedFutures = TrackedFutures()
proc peerId*(b: BlockExcNetwork): PeerId =
## Return peer id
@@ -107,9 +89,7 @@ proc isSelf*(b: BlockExcNetwork, peer: PeerId): bool =
return b.peerId == peer
proc send*(
b: BlockExcNetwork, id: PeerId, msg: pb.Message
) {.async: (raises: [CancelledError]).} =
proc send*(b: BlockExcNetwork, id: PeerId, msg: pb.Message) {.async.} =
## Send message to peer
##
@@ -117,9 +97,8 @@ proc send*(
trace "Unable to send, peer not found", peerId = id
return
let peer = b.peers[id]
try:
let peer = b.peers[id]
await b.inflightSema.acquire()
await peer.send(msg)
except CancelledError as error:
@@ -130,8 +109,9 @@ proc send*(
b.inflightSema.release()
proc handleWantList(
b: BlockExcNetwork, peer: NetworkPeer, list: WantList
) {.async: (raises: []).} =
b: BlockExcNetwork,
peer: NetworkPeer,
list: WantList) {.async.} =
## Handle incoming want list
##
@@ -139,15 +119,14 @@ proc handleWantList(
await b.handlers.onWantList(peer.id, list)
proc sendWantList*(
b: BlockExcNetwork,
id: PeerId,
addresses: seq[BlockAddress],
priority: int32 = 0,
cancel: bool = false,
wantType: WantType = WantType.WantHave,
full: bool = false,
sendDontHave: bool = false,
) {.async: (raw: true, raises: [CancelledError]).} =
b: BlockExcNetwork,
id: PeerId,
addresses: seq[BlockAddress],
priority: int32 = 0,
cancel: bool = false,
wantType: WantType = WantType.WantHave,
full: bool = false,
sendDontHave: bool = false): Future[void] =
## Send a want message to peer
##
@@ -158,41 +137,43 @@ proc sendWantList*(
priority: priority,
cancel: cancel,
wantType: wantType,
sendDontHave: sendDontHave,
)
),
full: full,
)
sendDontHave: sendDontHave) ),
full: full)
b.send(id, Message(wantlist: msg))
proc sendWantCancellations*(
b: BlockExcNetwork, id: PeerId, addresses: seq[BlockAddress]
): Future[void] {.async: (raises: [CancelledError]).} =
b: BlockExcNetwork,
id: PeerId,
addresses: seq[BlockAddress]): Future[void] {.async.} =
## Informs a remote peer that we're no longer interested in a set of blocks
##
await b.sendWantList(id = id, addresses = addresses, cancel = true)
proc handleBlocksDelivery(
b: BlockExcNetwork, peer: NetworkPeer, blocksDelivery: seq[BlockDelivery]
) {.async: (raises: []).} =
b: BlockExcNetwork,
peer: NetworkPeer,
blocksDelivery: seq[BlockDelivery]) {.async.} =
## Handle incoming blocks
##
if not b.handlers.onBlocksDelivery.isNil:
await b.handlers.onBlocksDelivery(peer.id, blocksDelivery)
proc sendBlocksDelivery*(
b: BlockExcNetwork, id: PeerId, blocksDelivery: seq[BlockDelivery]
) {.async: (raw: true, raises: [CancelledError]).} =
b: BlockExcNetwork,
id: PeerId,
blocksDelivery: seq[BlockDelivery]): Future[void] =
## Send blocks to remote
##
b.send(id, pb.Message(payload: blocksDelivery))
proc handleBlockPresence(
b: BlockExcNetwork, peer: NetworkPeer, presence: seq[BlockPresence]
) {.async: (raises: []).} =
b: BlockExcNetwork,
peer: NetworkPeer,
presence: seq[BlockPresence]) {.async.} =
## Handle block presence
##
@@ -200,16 +181,18 @@ proc handleBlockPresence(
await b.handlers.onPresence(peer.id, presence)
proc sendBlockPresence*(
b: BlockExcNetwork, id: PeerId, presence: seq[BlockPresence]
) {.async: (raw: true, raises: [CancelledError]).} =
b: BlockExcNetwork,
id: PeerId,
presence: seq[BlockPresence]): Future[void] =
## Send presence to remote
##
b.send(id, Message(blockPresences: @presence))
proc handleAccount(
network: BlockExcNetwork, peer: NetworkPeer, account: Account
) {.async: (raises: []).} =
network: BlockExcNetwork,
peer: NetworkPeer,
account: Account) {.async.} =
## Handle account info
##
@@ -217,24 +200,27 @@ proc handleAccount(
await network.handlers.onAccount(peer.id, account)
proc sendAccount*(
b: BlockExcNetwork, id: PeerId, account: Account
) {.async: (raw: true, raises: [CancelledError]).} =
b: BlockExcNetwork,
id: PeerId,
account: Account): Future[void] =
## Send account info to remote
##
b.send(id, Message(account: AccountMessage.init(account)))
proc sendPayment*(
b: BlockExcNetwork, id: PeerId, payment: SignedState
) {.async: (raw: true, raises: [CancelledError]).} =
b: BlockExcNetwork,
id: PeerId,
payment: SignedState): Future[void] =
## Send payment to remote
##
b.send(id, Message(payment: StateChannelUpdate.init(payment)))
proc handlePayment(
network: BlockExcNetwork, peer: NetworkPeer, payment: SignedState
) {.async: (raises: []).} =
network: BlockExcNetwork,
peer: NetworkPeer,
payment: SignedState) {.async.} =
## Handle payment
##
@@ -242,185 +228,138 @@ proc handlePayment(
await network.handlers.onPayment(peer.id, payment)
proc rpcHandler(
self: BlockExcNetwork, peer: NetworkPeer, msg: Message
) {.async: (raises: []).} =
b: BlockExcNetwork,
peer: NetworkPeer,
msg: Message) {.raises: [].} =
## handle rpc messages
##
if msg.wantList.entries.len > 0:
self.trackedFutures.track(self.handleWantList(peer, msg.wantList))
asyncSpawn b.handleWantList(peer, msg.wantList)
if msg.payload.len > 0:
self.trackedFutures.track(self.handleBlocksDelivery(peer, msg.payload))
asyncSpawn b.handleBlocksDelivery(peer, msg.payload)
if msg.blockPresences.len > 0:
self.trackedFutures.track(self.handleBlockPresence(peer, msg.blockPresences))
asyncSpawn b.handleBlockPresence(peer, msg.blockPresences)
if account =? Account.init(msg.account):
self.trackedFutures.track(self.handleAccount(peer, account))
asyncSpawn b.handleAccount(peer, account)
if payment =? SignedState.init(msg.payment):
self.trackedFutures.track(self.handlePayment(peer, payment))
asyncSpawn b.handlePayment(peer, payment)
proc getOrCreatePeer(self: BlockExcNetwork, peer: PeerId): NetworkPeer =
proc getOrCreatePeer(b: BlockExcNetwork, peer: PeerId): NetworkPeer =
## Creates or retrieves a BlockExcNetwork Peer
##
if peer in self.peers:
return self.peers.getOrDefault(peer, nil)
if peer in b.peers:
return b.peers.getOrDefault(peer, nil)
var getConn: ConnProvider = proc(): Future[Connection] {.
async: (raises: [CancelledError])
.} =
var getConn: ConnProvider = proc(): Future[Connection] {.async, gcsafe, closure.} =
try:
trace "Getting new connection stream", peer
return await self.switch.dial(peer, Codec)
return await b.switch.dial(peer, Codec)
except CancelledError as error:
raise error
except CatchableError as exc:
trace "Unable to connect to blockexc peer", exc = exc.msg
if not isNil(self.getConn):
getConn = self.getConn
if not isNil(b.getConn):
getConn = b.getConn
let rpcHandler = proc(p: NetworkPeer, msg: Message) {.async: (raises: []).} =
await self.rpcHandler(p, msg)
let rpcHandler = proc (p: NetworkPeer, msg: Message) {.async.} =
b.rpcHandler(p, msg)
# create new pubsub peer
let blockExcPeer = NetworkPeer.new(peer, getConn, rpcHandler)
debug "Created new blockexc peer", peer
self.peers[peer] = blockExcPeer
b.peers[peer] = blockExcPeer
return blockExcPeer
proc dialPeer*(self: BlockExcNetwork, peer: PeerRecord) {.async.} =
proc setupPeer*(b: BlockExcNetwork, peer: PeerId) =
## Perform initial setup, such as want
## list exchange
##
discard b.getOrCreatePeer(peer)
proc dialPeer*(b: BlockExcNetwork, peer: PeerRecord) {.async.} =
## Dial a peer
##
if self.isSelf(peer.peerId):
if b.isSelf(peer.peerId):
trace "Skipping dialing self", peer = peer.peerId
return
if peer.peerId in self.peers:
trace "Already connected to peer", peer = peer.peerId
return
await b.switch.connect(peer.peerId, peer.addresses.mapIt(it.address))
await self.switch.connect(peer.peerId, peer.addresses.mapIt(it.address))
proc dropPeer*(
self: BlockExcNetwork, peer: PeerId
) {.async: (raises: [CancelledError]).} =
trace "Dropping peer", peer
try:
if not self.switch.isNil:
await self.switch.disconnect(peer)
except CatchableError as error:
warn "Error attempting to disconnect from peer", peer = peer, error = error.msg
if not self.handlers.onPeerDropped.isNil:
await self.handlers.onPeerDropped(peer)
proc handlePeerJoined*(
self: BlockExcNetwork, peer: PeerId
) {.async: (raises: [CancelledError]).} =
discard self.getOrCreatePeer(peer)
if not self.handlers.onPeerJoined.isNil:
await self.handlers.onPeerJoined(peer)
proc handlePeerDeparted*(
self: BlockExcNetwork, peer: PeerId
) {.async: (raises: [CancelledError]).} =
proc dropPeer*(b: BlockExcNetwork, peer: PeerId) =
## Cleanup disconnected peer
##
trace "Cleaning up departed peer", peer
self.peers.del(peer)
if not self.handlers.onPeerDeparted.isNil:
await self.handlers.onPeerDeparted(peer)
b.peers.del(peer)
method init*(self: BlockExcNetwork) {.raises: [].} =
method init*(b: BlockExcNetwork) =
## Perform protocol initialization
##
proc peerEventHandler(
peerId: PeerId, event: PeerEvent
): Future[void] {.async: (raises: [CancelledError]).} =
proc peerEventHandler(peerId: PeerId, event: PeerEvent) {.async.} =
if event.kind == PeerEventKind.Joined:
await self.handlePeerJoined(peerId)
elif event.kind == PeerEventKind.Left:
await self.handlePeerDeparted(peerId)
b.setupPeer(peerId)
else:
warn "Unknown peer event", event
b.dropPeer(peerId)
self.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined)
self.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left)
b.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined)
b.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left)
proc handler(
conn: Connection, proto: string
): Future[void] {.async: (raises: [CancelledError]).} =
proc handle(conn: Connection, proto: string) {.async, gcsafe, closure.} =
let peerId = conn.peerId
let blockexcPeer = self.getOrCreatePeer(peerId)
await blockexcPeer.readLoop(conn) # attach read loop
let blockexcPeer = b.getOrCreatePeer(peerId)
await blockexcPeer.readLoop(conn) # attach read loop
self.handler = handler
self.codec = Codec
proc stop*(self: BlockExcNetwork) {.async: (raises: []).} =
await self.trackedFutures.cancelTracked()
b.handler = handle
b.codec = Codec
proc new*(
T: type BlockExcNetwork,
switch: Switch,
connProvider: ConnProvider = nil,
maxInflight = DefaultMaxInflight,
): BlockExcNetwork =
T: type BlockExcNetwork,
switch: Switch,
connProvider: ConnProvider = nil,
maxInflight = MaxInflight): BlockExcNetwork =
## Create a new BlockExcNetwork instance
##
let self = BlockExcNetwork(
switch: switch,
getConn: connProvider,
inflightSema: newAsyncSemaphore(maxInflight),
maxInflight: maxInflight,
)
self.maxIncomingStreams = self.maxInflight
let
self = BlockExcNetwork(
switch: switch,
getConn: connProvider,
inflightSema: newAsyncSemaphore(maxInflight))
proc sendWantList(
id: PeerId,
cids: seq[BlockAddress],
priority: int32 = 0,
cancel: bool = false,
wantType: WantType = WantType.WantHave,
full: bool = false,
sendDontHave: bool = false,
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
self.sendWantList(id, cids, priority, cancel, wantType, full, sendDontHave)
id: PeerId,
cids: seq[BlockAddress],
priority: int32 = 0,
cancel: bool = false,
wantType: WantType = WantType.WantHave,
full: bool = false,
sendDontHave: bool = false): Future[void] {.gcsafe.} =
self.sendWantList(
id, cids, priority, cancel,
wantType, full, sendDontHave)
proc sendWantCancellations(
id: PeerId, addresses: seq[BlockAddress]
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
proc sendWantCancellations(id: PeerId, addresses: seq[BlockAddress]): Future[void] {.gcsafe.} =
self.sendWantCancellations(id, addresses)
proc sendBlocksDelivery(
id: PeerId, blocksDelivery: seq[BlockDelivery]
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
proc sendBlocksDelivery(id: PeerId, blocksDelivery: seq[BlockDelivery]): Future[void] {.gcsafe.} =
self.sendBlocksDelivery(id, blocksDelivery)
proc sendPresence(
id: PeerId, presence: seq[BlockPresence]
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
proc sendPresence(id: PeerId, presence: seq[BlockPresence]): Future[void] {.gcsafe.} =
self.sendBlockPresence(id, presence)
proc sendAccount(
id: PeerId, account: Account
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
proc sendAccount(id: PeerId, account: Account): Future[void] {.gcsafe.} =
self.sendAccount(id, account)
proc sendPayment(
id: PeerId, payment: SignedState
): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
proc sendPayment(id: PeerId, payment: SignedState): Future[void] {.gcsafe.} =
self.sendPayment(id, payment)
self.request = BlockExcRequest(
@@ -429,8 +368,7 @@ proc new*(
sendBlocksDelivery: sendBlocksDelivery,
sendPresence: sendPresence,
sendAccount: sendAccount,
sendPayment: sendPayment,
)
sendPayment: sendPayment)
self.init()
return self
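Most of the churn in this file is mechanical: plain {.async.} procs, which may raise any CatchableError, become chronos v4 typed-raises procs, and raw: true marks thin wrappers that return an existing future rather than adding another await layer. A minimal illustration of both annotations (sketch only; ping and pingRaw are made-up names):

import pkg/chronos

proc ping(): Future[string] {.async: (raises: [CancelledError]).} =
  # Any exception other than CancelledError escaping here is a compile error.
  await sleepAsync(1.millis)
  return "pong"

proc pingRaw(): Future[string] {.async: (raw: true, raises: [CancelledError]).} =
  # raw: true: the body must produce the future itself; no implicit wrapper.
  ping()

when isMainModule:
  echo waitFor pingRaw()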

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,7 +7,8 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import pkg/upraises
push: {.upraises: [].}
import pkg/chronos
import pkg/libp2p
@@ -16,98 +17,78 @@ import ../protobuf/blockexc
import ../protobuf/message
import ../../errors
import ../../logutils
import ../../utils/trackedfutures
logScope:
topics = "codex blockexcnetworkpeer"
const DefaultYieldInterval = 50.millis
type
ConnProvider* = proc(): Future[Connection] {.async: (raises: [CancelledError]).}
ConnProvider* = proc(): Future[Connection] {.gcsafe, closure.}
RPCHandler* = proc(peer: NetworkPeer, msg: Message) {.async: (raises: []).}
RPCHandler* = proc(peer: NetworkPeer, msg: Message): Future[void] {.gcsafe.}
NetworkPeer* = ref object of RootObj
id*: PeerId
handler*: RPCHandler
sendConn: Connection
getConn: ConnProvider
yieldInterval*: Duration = DefaultYieldInterval
trackedFutures: TrackedFutures
proc connected*(self: NetworkPeer): bool =
not (isNil(self.sendConn)) and not (self.sendConn.closed or self.sendConn.atEof)
proc connected*(b: NetworkPeer): bool =
not(isNil(b.sendConn)) and
not(b.sendConn.closed or b.sendConn.atEof)
proc readLoop*(self: NetworkPeer, conn: Connection) {.async: (raises: []).} =
proc readLoop*(b: NetworkPeer, conn: Connection) {.async.} =
if isNil(conn):
trace "No connection to read from", peer = self.id
return
trace "Attaching read loop", peer = self.id, connId = conn.oid
try:
var nextYield = Moment.now() + self.yieldInterval
while not conn.atEof or not conn.closed:
if Moment.now() > nextYield:
nextYield = Moment.now() + self.yieldInterval
trace "Yielding in read loop",
peer = self.id, nextYield = nextYield, interval = self.yieldInterval
await sleepAsync(10.millis)
let
data = await conn.readLp(MaxMessageSize.int)
msg = Message.protobufDecode(data).mapFailure().tryGet()
trace "Received message", peer = self.id, connId = conn.oid
await self.handler(self, msg)
await b.handler(b, msg)
except CancelledError:
trace "Read loop cancelled"
except CatchableError as err:
warn "Exception in blockexc read loop", msg = err.msg
finally:
warn "Detaching read loop", peer = self.id, connId = conn.oid
if self.sendConn == conn:
self.sendConn = nil
await conn.close()
proc connect*(
self: NetworkPeer
): Future[Connection] {.async: (raises: [CancelledError]).} =
if self.connected:
trace "Already connected", peer = self.id, connId = self.sendConn.oid
return self.sendConn
proc connect*(b: NetworkPeer): Future[Connection] {.async.} =
if b.connected:
return b.sendConn
self.sendConn = await self.getConn()
self.trackedFutures.track(self.readLoop(self.sendConn))
return self.sendConn
b.sendConn = await b.getConn()
asyncSpawn b.readLoop(b.sendConn)
return b.sendConn
proc send*(
self: NetworkPeer, msg: Message
) {.async: (raises: [CancelledError, LPStreamError]).} =
let conn = await self.connect()
proc send*(b: NetworkPeer, msg: Message) {.async.} =
let conn = await b.connect()
if isNil(conn):
warn "Unable to get send connection for peer, message not sent", peer = self.id
warn "Unable to get send connection for peer, message not sent", peer = b.id
return
trace "Sending message", peer = self.id, connId = conn.oid
try:
await conn.writeLp(protobufEncode(msg))
except CatchableError as err:
if self.sendConn == conn:
self.sendConn = nil
raise newException(LPStreamError, "Failed to send message: " & err.msg)
await conn.writeLp(protobufEncode(msg))
proc broadcast*(b: NetworkPeer, msg: Message) =
proc sendAwaiter() {.async.} =
try:
await b.send(msg)
except CatchableError as exc:
warn "Exception broadcasting message to peer", peer = b.id, exc = exc.msg
asyncSpawn sendAwaiter()
func new*(
T: type NetworkPeer,
peer: PeerId,
connProvider: ConnProvider,
rpcHandler: RPCHandler,
): NetworkPeer =
doAssert(not isNil(connProvider), "should supply connection provider")
T: type NetworkPeer,
peer: PeerId,
connProvider: ConnProvider,
rpcHandler: RPCHandler): NetworkPeer =
doAssert(not isNil(connProvider),
"should supply connection provider")
NetworkPeer(
id: peer,
getConn: connProvider,
handler: rpcHandler,
trackedFutures: TrackedFutures(),
)
handler: rpcHandler)
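The new read loop also gains a yieldInterval: a peer streaming messages back to back would otherwise monopolize the dispatcher, so the loop sleeps briefly once it has run past the interval. Stripped to its essentials (sketch, assuming pkg/chronos; processMany stands in for the readLp loop):

import pkg/chronos

const YieldInterval = 50.millis

proc processMany(n: int) {.async.} =
  var nextYield = Moment.now() + YieldInterval
  for i in 0 ..< n:
    if Moment.now() > nextYield:
      nextYield = Moment.now() + YieldInterval
      await sleepAsync(10.millis)  # give other tasks a turn
    discard i * i  # stand-in for decoding and handling one message

when isMainModule:
  waitFor processMany(1_000_000)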

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -25,77 +25,29 @@ import ../../logutils
export payments, nitro
const
MinRefreshInterval = 1.seconds
MaxRefreshBackoff = 36 # 36 seconds
MaxWantListBatchSize* = 1024 # Maximum blocks to send per WantList message
type
BlockExcPeerCtx* = ref object of RootObj
id*: PeerId
blocks*: Table[BlockAddress, Presence] # remote peer have list including price
peerWants*: seq[WantListEntry] # remote peers want lists
exchanged*: int # times peer has exchanged with us
lastExchange*: Moment # last time peer has exchanged with us
account*: ?Account # ethereum account of this peer
paymentChannel*: ?ChannelId # payment channel id
type BlockExcPeerCtx* = ref object of RootObj
id*: PeerId
blocks*: Table[BlockAddress, Presence] # remote peer have list including price
wantedBlocks*: HashSet[BlockAddress] # blocks that the peer wants
exchanged*: int # times peer has exchanged with us
refreshInProgress*: bool # indicates if a refresh is in progress
lastRefresh*: Moment # last time we refreshed our knowledge of the blocks this peer has
refreshBackoff*: int = 1 # backoff factor for refresh requests
account*: ?Account # ethereum account of this peer
paymentChannel*: ?ChannelId # payment channel id
blocksSent*: HashSet[BlockAddress] # blocks sent to peer
blocksRequested*: HashSet[BlockAddress] # pending block requests to this peer
lastExchange*: Moment # last time peer has sent us a block
activityTimeout*: Duration
lastSentWants*: HashSet[BlockAddress]
# track what wantList we last sent for delta updates
proc peerHave*(self: BlockExcPeerCtx): seq[BlockAddress] =
toSeq(self.blocks.keys)
proc isKnowledgeStale*(self: BlockExcPeerCtx): bool =
let staleness =
self.lastRefresh + self.refreshBackoff * MinRefreshInterval < Moment.now()
proc peerHaveCids*(self: BlockExcPeerCtx): HashSet[Cid] =
self.blocks.keys.toSeq.mapIt(it.cidOrTreeCid).toHashSet
if staleness and self.refreshInProgress:
trace "Cleaning up refresh state", peer = self.id
self.refreshInProgress = false
self.refreshBackoff = 1
staleness
proc isBlockSent*(self: BlockExcPeerCtx, address: BlockAddress): bool =
address in self.blocksSent
proc markBlockAsSent*(self: BlockExcPeerCtx, address: BlockAddress) =
self.blocksSent.incl(address)
proc markBlockAsNotSent*(self: BlockExcPeerCtx, address: BlockAddress) =
self.blocksSent.excl(address)
proc refreshRequested*(self: BlockExcPeerCtx) =
trace "Refresh requested for peer", peer = self.id, backoff = self.refreshBackoff
self.refreshInProgress = true
self.lastRefresh = Moment.now()
proc refreshReplied*(self: BlockExcPeerCtx) =
self.refreshInProgress = false
self.lastRefresh = Moment.now()
self.refreshBackoff = min(self.refreshBackoff * 2, MaxRefreshBackoff)
proc havesUpdated(self: BlockExcPeerCtx) =
self.refreshBackoff = 1
proc wantsUpdated*(self: BlockExcPeerCtx) =
self.refreshBackoff = 1
proc peerHave*(self: BlockExcPeerCtx): HashSet[BlockAddress] =
# XXX: this is ugly and inefficient, but since those will typically
# be used in "joins", it's better to pay the price here and have
# a linear join than to not do it and have a quadratic join.
toHashSet(self.blocks.keys.toSeq)
proc peerWantsCids*(self: BlockExcPeerCtx): HashSet[Cid] =
self.peerWants.mapIt(it.address.cidOrTreeCid).toHashSet
proc contains*(self: BlockExcPeerCtx, address: BlockAddress): bool =
address in self.blocks
func setPresence*(self: BlockExcPeerCtx, presence: Presence) =
if presence.address notin self.blocks:
self.havesUpdated()
self.blocks[presence.address] = presence
func cleanPresence*(self: BlockExcPeerCtx, addresses: seq[BlockAddress]) =
@@ -112,36 +64,3 @@ func price*(self: BlockExcPeerCtx, addresses: seq[BlockAddress]): UInt256 =
price += precense[].price
price
proc blockRequestScheduled*(self: BlockExcPeerCtx, address: BlockAddress) =
## Adds a block to the set of blocks that have been requested to this peer
## (its request schedule).
if self.blocksRequested.len == 0:
self.lastExchange = Moment.now()
self.blocksRequested.incl(address)
proc blockRequestCancelled*(self: BlockExcPeerCtx, address: BlockAddress) =
## Removes a block from the set of blocks that have been requested to this peer
## (its request schedule).
self.blocksRequested.excl(address)
proc blockReceived*(self: BlockExcPeerCtx, address: BlockAddress): bool =
let wasRequested = address in self.blocksRequested
self.blocksRequested.excl(address)
self.lastExchange = Moment.now()
wasRequested
proc activityTimer*(
self: BlockExcPeerCtx
): Future[void] {.async: (raises: [CancelledError]).} =
## This is called by the block exchange when a block is scheduled for this peer.
## If the peer sends no blocks for a while, it is considered inactive/uncooperative
## and the peer is dropped. Note that ANY block that the peer sends will reset this
## timer for all blocks.
##
while true:
let idleTime = Moment.now() - self.lastExchange
if idleTime > self.activityTimeout:
return
await sleepAsync(self.activityTimeout - idleTime)

View File

@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,12 +7,13 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import std/sequtils
import std/tables
import std/algorithm
import std/sequtils
import pkg/upraises
push: {.upraises: [].}
import pkg/chronos
import pkg/libp2p
@@ -21,6 +22,7 @@ import ../protobuf/blockexc
import ../../blocktype
import ../../logutils
import ./peercontext
export peercontext
@@ -31,8 +33,6 @@ type
PeerCtxStore* = ref object of RootObj
peers*: OrderedTable[PeerId, BlockExcPeerCtx]
PeersForBlock* = tuple[with: seq[BlockExcPeerCtx], without: seq[BlockExcPeerCtx]]
iterator items*(self: PeerCtxStore): BlockExcPeerCtx =
for p in self.peers.values:
yield p
@@ -41,10 +41,7 @@ proc contains*(a: openArray[BlockExcPeerCtx], b: PeerId): bool =
## Convenience method to check for peer presence
##
a.anyIt(it.id == b)
func peerIds*(self: PeerCtxStore): seq[PeerId] =
toSeq(self.peers.keys)
a.anyIt( it.id == b )
func contains*(self: PeerCtxStore, peerId: PeerId): bool =
peerId in self.peers
@@ -62,27 +59,43 @@ func len*(self: PeerCtxStore): int =
self.peers.len
func peersHave*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
toSeq(self.peers.values).filterIt(address in it.peerHave)
toSeq(self.peers.values).filterIt( it.peerHave.anyIt( it == address ) )
func peersHave*(self: PeerCtxStore, cid: Cid): seq[BlockExcPeerCtx] =
# FIXME: this is much slower and can lead to unexpected performance loss.
toSeq(self.peers.values).filterIt(it.peerHave.anyIt(it.cidOrTreeCid == cid))
toSeq(self.peers.values).filterIt( it.peerHave.anyIt( it.cidOrTreeCid == cid ) )
func peersWant*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
toSeq(self.peers.values).filterIt(address in it.wantedBlocks)
toSeq(self.peers.values).filterIt( it.peerWants.anyIt( it == address ) )
func peersWant*(self: PeerCtxStore, cid: Cid): seq[BlockExcPeerCtx] =
# FIXME: this is much slower and can lead to unexpected performance loss.
toSeq(self.peers.values).filterIt(it.wantedBlocks.anyIt(it.cidOrTreeCid == cid))
toSeq(self.peers.values).filterIt( it.peerWants.anyIt( it.address.cidOrTreeCid == cid ) )
proc getPeersForBlock*(self: PeerCtxStore, address: BlockAddress): PeersForBlock =
var res: PeersForBlock = (@[], @[])
for peer in self:
if address in peer:
res.with.add(peer)
func selectCheapest*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
# assume that the price for all leaves in a tree is the same
let rootAddress = BlockAddress(leaf: false, cid: address.cidOrTreeCid)
var peers = self.peersHave(rootAddress)
func cmp(a, b: BlockExcPeerCtx): int =
var
priceA = 0.u256
priceB = 0.u256
a.blocks.withValue(rootAddress, presence):
priceA = presence[].price
b.blocks.withValue(rootAddress, presence):
priceB = presence[].price
if priceA == priceB:
0
elif priceA > priceB:
1
else:
res.without.add(peer)
res
-1
peers.sort(cmp)
trace "Selected cheapest peers", peers = peers.len
return peers
proc new*(T: type PeerCtxStore): PeerCtxStore =
## create a new instance of a peer context store
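A quick usage sketch for `selectCheapest` above; `store` and `someTreeCid` are assumed to exist, and the returned peers come back sorted ascending by the price advertised for the dataset root.

```nim
# Usage sketch (store and someTreeCid are assumed).
let address = BlockAddress(leaf: true, treeCid: someTreeCid, index: 0)
let ranked = store.selectCheapest(address)
if ranked.len > 0:
  let cheapest = ranked[0] # lowest advertised price for the root address
  echo "cheapest peer: ", cheapest.id
```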


@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,6 +9,7 @@
import std/hashes
import std/sequtils
import pkg/stew/endians2
import message
@ -19,6 +20,13 @@ export Wantlist, WantType, WantListEntry
export BlockDelivery, BlockPresenceType, BlockPresence
export AccountMessage, StateChannelUpdate
proc hash*(a: BlockAddress): Hash =
if a.leaf:
let data = a.treeCid.data.buffer & @(a.index.uint64.toBytesBE)
hash(data)
else:
hash(a.cid.data.buffer)
proc hash*(e: WantListEntry): Hash =
hash(e.address)
@ -34,6 +42,7 @@ proc `==`*(a: WantListEntry, b: BlockAddress): bool =
proc `<`*(a, b: WantListEntry): bool =
a.priority < b.priority
proc `==`*(a: BlockPresence, b: BlockAddress): bool =
return a.address == b


@ -1,4 +1,4 @@
# Protocol of data exchange between Logos Storage nodes
# Protocol of data exchange between Codex nodes
# and Protobuf encoder/decoder for these messages.
#
# Eventually all this code should be auto-generated from message.proto.
@ -20,44 +20,40 @@ const
type
WantType* = enum
WantBlock = 0
WantBlock = 0,
WantHave = 1
WantListEntry* = object
address*: BlockAddress
# XXX: I think explicit priority is pointless, as the peer will request
# blocks in the order it wants to receive them, and all we have to do is
# process those requests in that same order when sending blocks back. It
# also complicates things for no reason at the moment, as the priority is
# always set to 0.
priority*: int32 # The priority (normalized). default to 1
cancel*: bool # Whether this revokes an entry
wantType*: WantType # Note: defaults to enum 0, ie Block
sendDontHave*: bool # Note: defaults to false
priority*: int32 # The priority (normalized). default to 1
cancel*: bool # Whether this revokes an entry
wantType*: WantType # Note: defaults to enum 0, ie Block
sendDontHave*: bool # Note: defaults to false
inFlight*: bool # Whether block sending is in progress. Not serialized.
WantList* = object
entries*: seq[WantListEntry] # A list of wantList entries
full*: bool # Whether this is the full wantList. default to false
entries*: seq[WantListEntry] # A list of wantList entries
full*: bool # Whether this is the full wantList. default to false
BlockDelivery* = object
blk*: Block
address*: BlockAddress
proof*: ?CodexProof # Present only if `address.leaf` is true
proof*: ?CodexProof # Present only if `address.leaf` is true
BlockPresenceType* = enum
Have = 0
Have = 0,
DontHave = 1
BlockPresence* = object
address*: BlockAddress
`type`*: BlockPresenceType
price*: seq[byte] # Amount of assets to pay for the block (UInt256)
price*: seq[byte] # Amount of assets to pay for the block (UInt256)
AccountMessage* = object
address*: seq[byte] # Ethereum address to which payments should be made
address*: seq[byte] # Ethereum address to which payments should be made
StateChannelUpdate* = object
update*: seq[byte] # Signed Nitro state, serialized as JSON
update*: seq[byte] # Signed Nitro state, serialized as JSON
Message* = object
wantList*: WantList
@ -101,7 +97,7 @@ proc write*(pb: var ProtoBuffer, field: int, value: WantList) =
pb.write(field, ipb)
proc write*(pb: var ProtoBuffer, field: int, value: BlockDelivery) =
var ipb = initProtoBuffer()
var ipb = initProtoBuffer(maxSize = MaxBlockSize)
ipb.write(1, value.blk.cid.data.buffer)
ipb.write(2, value.blk.data)
ipb.write(3, value.address)
@ -132,7 +128,7 @@ proc write*(pb: var ProtoBuffer, field: int, value: StateChannelUpdate) =
pb.write(field, ipb)
proc protobufEncode*(value: Message): seq[byte] =
var ipb = initProtoBuffer()
var ipb = initProtoBuffer(maxSize = MaxMessageSize)
ipb.write(1, value.wantList)
for v in value.payload:
ipb.write(3, v)
@ -144,6 +140,7 @@ proc protobufEncode*(value: Message): seq[byte] =
ipb.finish()
ipb.buffer
#
# Decoding Message from seq[byte] in Protobuf format
#
@ -154,22 +151,22 @@ proc decode*(_: type BlockAddress, pb: ProtoBuffer): ProtoResult[BlockAddress] =
field: uint64
cidBuf = newSeq[byte]()
if ?pb.getField(1, field):
if ? pb.getField(1, field):
leaf = bool(field)
if leaf:
var
treeCid: Cid
index: Natural
if ?pb.getField(2, cidBuf):
treeCid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
if ?pb.getField(3, field):
if ? pb.getField(2, cidBuf):
treeCid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
if ? pb.getField(3, field):
index = field
value = BlockAddress(leaf: true, treeCid: treeCid, index: index)
else:
var cid: Cid
if ?pb.getField(4, cidBuf):
cid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
if ? pb.getField(4, cidBuf):
cid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
value = BlockAddress(leaf: false, cid: cid)
ok(value)
@ -179,15 +176,15 @@ proc decode*(_: type WantListEntry, pb: ProtoBuffer): ProtoResult[WantListEntry]
value = WantListEntry()
field: uint64
ipb: ProtoBuffer
if ?pb.getField(1, ipb):
value.address = ?BlockAddress.decode(ipb)
if ?pb.getField(2, field):
if ? pb.getField(1, ipb):
value.address = ? BlockAddress.decode(ipb)
if ? pb.getField(2, field):
value.priority = int32(field)
if ?pb.getField(3, field):
if ? pb.getField(3, field):
value.cancel = bool(field)
if ?pb.getField(4, field):
if ? pb.getField(4, field):
value.wantType = WantType(field)
if ?pb.getField(5, field):
if ? pb.getField(5, field):
value.sendDontHave = bool(field)
ok(value)
@ -196,10 +193,10 @@ proc decode*(_: type WantList, pb: ProtoBuffer): ProtoResult[WantList] =
value = WantList()
field: uint64
sublist: seq[seq[byte]]
if ?pb.getRepeatedField(1, sublist):
if ? pb.getRepeatedField(1, sublist):
for item in sublist:
value.entries.add(?WantListEntry.decode(initProtoBuffer(item)))
if ?pb.getField(2, field):
value.entries.add(? WantListEntry.decode(initProtoBuffer(item)))
if ? pb.getField(2, field):
value.full = bool(field)
ok(value)
@ -211,18 +208,17 @@ proc decode*(_: type BlockDelivery, pb: ProtoBuffer): ProtoResult[BlockDelivery]
cid: Cid
ipb: ProtoBuffer
if ?pb.getField(1, cidBuf):
cid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
if ?pb.getField(2, dataBuf):
value.blk =
?Block.new(cid, dataBuf, verify = true).mapErr(x => ProtoError.IncorrectBlob)
if ?pb.getField(3, ipb):
value.address = ?BlockAddress.decode(ipb)
if ? pb.getField(1, cidBuf):
cid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
if ? pb.getField(2, dataBuf):
value.blk = ? Block.new(cid, dataBuf, verify = true).mapErr(x => ProtoError.IncorrectBlob)
if ? pb.getField(3, ipb):
value.address = ? BlockAddress.decode(ipb)
if value.address.leaf:
var proofBuf = newSeq[byte]()
if ?pb.getField(4, proofBuf):
let proof = ?CodexProof.decode(proofBuf).mapErr(x => ProtoError.IncorrectBlob)
if ? pb.getField(4, proofBuf):
let proof = ? CodexProof.decode(proofBuf).mapErr(x => ProtoError.IncorrectBlob)
value.proof = proof.some
else:
value.proof = CodexProof.none
@ -236,42 +232,42 @@ proc decode*(_: type BlockPresence, pb: ProtoBuffer): ProtoResult[BlockPresence]
value = BlockPresence()
field: uint64
ipb: ProtoBuffer
if ?pb.getField(1, ipb):
value.address = ?BlockAddress.decode(ipb)
if ?pb.getField(2, field):
if ? pb.getField(1, ipb):
value.address = ? BlockAddress.decode(ipb)
if ? pb.getField(2, field):
value.`type` = BlockPresenceType(field)
discard ?pb.getField(3, value.price)
discard ? pb.getField(3, value.price)
ok(value)
proc decode*(_: type AccountMessage, pb: ProtoBuffer): ProtoResult[AccountMessage] =
var value = AccountMessage()
discard ?pb.getField(1, value.address)
var
value = AccountMessage()
discard ? pb.getField(1, value.address)
ok(value)
proc decode*(
_: type StateChannelUpdate, pb: ProtoBuffer
): ProtoResult[StateChannelUpdate] =
var value = StateChannelUpdate()
discard ?pb.getField(1, value.update)
proc decode*(_: type StateChannelUpdate, pb: ProtoBuffer): ProtoResult[StateChannelUpdate] =
var
value = StateChannelUpdate()
discard ? pb.getField(1, value.update)
ok(value)
proc protobufDecode*(_: type Message, msg: seq[byte]): ProtoResult[Message] =
var
value = Message()
pb = initProtoBuffer(msg)
pb = initProtoBuffer(msg, maxSize = MaxMessageSize)
ipb: ProtoBuffer
sublist: seq[seq[byte]]
if ?pb.getField(1, ipb):
value.wantList = ?WantList.decode(ipb)
if ?pb.getRepeatedField(3, sublist):
if ? pb.getField(1, ipb):
value.wantList = ? WantList.decode(ipb)
if ? pb.getRepeatedField(3, sublist):
for item in sublist:
value.payload.add(?BlockDelivery.decode(initProtoBuffer(item)))
if ?pb.getRepeatedField(4, sublist):
value.payload.add(? BlockDelivery.decode(initProtoBuffer(item, maxSize = MaxBlockSize)))
if ? pb.getRepeatedField(4, sublist):
for item in sublist:
value.blockPresences.add(?BlockPresence.decode(initProtoBuffer(item)))
discard ?pb.getField(5, value.pendingBytes)
if ?pb.getField(6, ipb):
value.account = ?AccountMessage.decode(ipb)
if ?pb.getField(7, ipb):
value.payment = ?StateChannelUpdate.decode(ipb)
value.blockPresences.add(? BlockPresence.decode(initProtoBuffer(item)))
discard ? pb.getField(5, value.pendingBytes)
if ? pb.getField(6, ipb):
value.account = ? AccountMessage.decode(ipb)
if ? pb.getField(7, ipb):
value.payment = ? StateChannelUpdate.decode(ipb)
ok(value)
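For illustration, a round-trip through the codec above; `someCid` is an assumed CID value, and the point is that both directions are bounded by `MaxMessageSize`.

```nim
# Round-trip sketch; someCid is assumed.
let entry = WantListEntry(
  address: BlockAddress(leaf: false, cid: someCid),
  priority: 0,
  wantType: WantType.WantHave,
  sendDontHave: true,
)
let msg = Message(wantList: WantList(entries: @[entry], full: false))
let wire = protobufEncode(msg) # encode buffer capped at MaxMessageSize
let decoded = Message.protobufDecode(wire) # decode is capped the same way
assert decoded.isOk and decoded.get.wantList.entries.len == 1
```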


@ -1,4 +1,4 @@
// Protocol of data exchange between Logos Storage nodes.
// Protocol of data exchange between Codex nodes.
// Extended version of https://github.com/ipfs/specs/blob/main/BITSWAP.md
syntax = "proto3";


@ -1,9 +1,8 @@
{.push raises: [].}
import pkg/stew/byteutils
import pkg/stint
import pkg/nitro
import pkg/questionable
import pkg/upraises
import ./blockexc
export AccountMessage
@ -12,8 +11,11 @@ export StateChannelUpdate
export stint
export nitro
type Account* = object
address*: EthAddress
push: {.upraises: [].}
type
Account* = object
address*: EthAddress
func init*(_: type AccountMessage, account: Account): AccountMessage =
AccountMessage(address: @(account.address.toArray))
@ -22,7 +24,7 @@ func parse(_: type EthAddress, bytes: seq[byte]): ?EthAddress =
var address: array[20, byte]
if bytes.len != address.len:
return EthAddress.none
for i in 0 ..< address.len:
for i in 0..<address.len:
address[i] = bytes[i]
EthAddress(address).some


@ -1,9 +1,8 @@
{.push raises: [].}
import libp2p
import pkg/stint
import pkg/questionable
import pkg/questionable/results
import pkg/upraises
import ./blockexc
import ../../blocktype
@ -12,6 +11,8 @@ export questionable
export stint
export BlockPresenceType
upraises.push: {.upraises: [].}
type
PresenceMessage* = blockexc.BlockPresence
Presence* = object
@ -31,12 +32,15 @@ func init*(_: type Presence, message: PresenceMessage): ?Presence =
some Presence(
address: message.address,
have: message.`type` == BlockPresenceType.Have,
price: price,
price: price
)
func init*(_: type PresenceMessage, presence: Presence): PresenceMessage =
PresenceMessage(
address: presence.address,
`type`: if presence.have: BlockPresenceType.Have else: BlockPresenceType.DontHave,
price: @(presence.price.toBytesBE),
`type`: if presence.have:
BlockPresenceType.Have
else:
BlockPresenceType.DontHave,
price: @(presence.price.toBytesBE)
)
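A small round-trip sketch for the two `init` overloads above; the `someAddress` value is an assumption.

```nim
# Round-trip sketch; someAddress is assumed.
let presence = Presence(address: someAddress, have: true, price: 42.u256)
let message = PresenceMessage.init(presence) # price serialized big-endian
assert message.`type` == BlockPresenceType.Have
let restored = Presence.init(message) # returns ?Presence
assert restored.isSome and restored.get.price == 42.u256
```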


@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,14 +9,15 @@
import std/tables
import std/sugar
import std/hashes
export tables
{.push raises: [], gcsafe.}
import pkg/upraises
push: {.upraises: [].}
import pkg/libp2p/[cid, multicodec, multihash]
import pkg/stew/[byteutils, endians2]
import pkg/stew/byteutils
import pkg/questionable
import pkg/questionable/results
@ -48,16 +49,16 @@ logutils.formatIt(LogFormat.textLines, BlockAddress):
else:
"cid: " & shortLog($it.cid)
logutils.formatIt(LogFormat.json, BlockAddress):
%it
logutils.formatIt(LogFormat.json, BlockAddress): %it
proc `==`*(a, b: BlockAddress): bool =
a.leaf == b.leaf and (
if a.leaf:
a.treeCid == b.treeCid and a.index == b.index
else:
a.cid == b.cid
)
a.leaf == b.leaf and
(
if a.leaf:
a.treeCid == b.treeCid and a.index == b.index
else:
a.cid == b.cid
)
proc `$`*(a: BlockAddress): string =
if a.leaf:
@ -65,15 +66,11 @@ proc `$`*(a: BlockAddress): string =
else:
"cid: " & $a.cid
proc hash*(a: BlockAddress): Hash =
if a.leaf:
let data = a.treeCid.data.buffer & @(a.index.uint64.toBytesBE)
hash(data)
else:
hash(a.cid.data.buffer)
proc cidOrTreeCid*(a: BlockAddress): Cid =
if a.leaf: a.treeCid else: a.cid
if a.leaf:
a.treeCid
else:
a.cid
proc address*(b: Block): BlockAddress =
BlockAddress(leaf: false, cid: b.cid)
@ -89,55 +86,57 @@ proc `$`*(b: Block): string =
result &= "\ndata: " & string.fromBytes(b.data)
func new*(
T: type Block,
data: openArray[byte] = [],
version = CIDv1,
mcodec = Sha256HashCodec,
codec = BlockCodec,
): ?!Block =
T: type Block,
data: openArray[byte] = [],
version = CIDv1,
mcodec = Sha256HashCodec,
codec = BlockCodec): ?!Block =
## creates a new block for both storage and network IO
##
let
hash = ?MultiHash.digest($mcodec, data).mapFailure
cid = ?Cid.init(version, codec, hash).mapFailure
hash = ? MultiHash.digest($mcodec, data).mapFailure
cid = ? Cid.init(version, codec, hash).mapFailure
# TODO: If the hash is `>=` to the data,
# use the Cid as a container!
Block(cid: cid, data: @data).success
Block(
cid: cid,
data: @data).success
proc new*(
T: type Block, cid: Cid, data: openArray[byte], verify: bool = true
T: type Block,
cid: Cid,
data: openArray[byte],
verify: bool = true
): ?!Block =
## creates a new block for both storage and network IO
##
if verify:
let
mhash = ?cid.mhash.mapFailure
computedMhash = ?MultiHash.digest($mhash.mcodec, data).mapFailure
computedCid = ?Cid.init(cid.cidver, cid.mcodec, computedMhash).mapFailure
mhash = ? cid.mhash.mapFailure
computedMhash = ? MultiHash.digest($mhash.mcodec, data).mapFailure
computedCid = ? Cid.init(cid.cidver, cid.mcodec, computedMhash).mapFailure
if computedCid != cid:
return "Cid doesn't match the data".failure
return Block(cid: cid, data: @data).success
return Block(
cid: cid,
data: @data
).success
proc emptyBlock*(version: CidVersion, hcodec: MultiCodec): ?!Block =
emptyCid(version, hcodec, BlockCodec).flatMap(
(cid: Cid) => Block.new(cid = cid, data = @[])
)
emptyCid(version, hcodec, BlockCodec)
.flatMap((cid: Cid) => Block.new(cid = cid, data = @[]))
proc emptyBlock*(cid: Cid): ?!Block =
cid.mhash.mapFailure.flatMap(
(mhash: MultiHash) => emptyBlock(cid.cidver, mhash.mcodec)
)
cid.mhash.mapFailure.flatMap((mhash: MultiHash) =>
emptyBlock(cid.cidver, mhash.mcodec))
proc isEmpty*(cid: Cid): bool =
success(cid) ==
cid.mhash.mapFailure.flatMap(
(mhash: MultiHash) => emptyCid(cid.cidver, mhash.mcodec, cid.mcodec)
)
success(cid) == cid.mhash.mapFailure.flatMap((mhash: MultiHash) =>
emptyCid(cid.cidver, mhash.mcodec, cid.mcodec))
proc isEmpty*(blk: Block): bool =
blk.cid.isEmpty


@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,7 +9,9 @@
# TODO: This is super inefficient and needs a rewrite, but it'll do for now
{.push raises: [], gcsafe.}
import pkg/upraises
push: {.upraises: [].}
import pkg/questionable
import pkg/questionable/results
@ -21,22 +23,20 @@ import ./logutils
export blocktype
const DefaultChunkSize* = DefaultBlockSize
const
DefaultChunkSize* = DefaultBlockSize
type
# default reader type
ChunkerError* = object of CatchableError
ChunkBuffer* = ptr UncheckedArray[byte]
Reader* = proc(data: ChunkBuffer, len: int): Future[int] {.
async: (raises: [ChunkerError, CancelledError])
.}
Reader* = proc(data: ChunkBuffer, len: int): Future[int] {.gcsafe, raises: [Defect].}
# Reader that splits input data into fixed-size chunks
Chunker* = ref object
reader*: Reader # Procedure called to actually read the data
offset*: int # Bytes read so far (position in the stream)
chunkSize*: NBytes # Size of each chunk
pad*: bool # Pad last chunk to chunkSize?
reader*: Reader # Procedure called to actually read the data
offset*: int # Bytes read so far (position in the stream)
chunkSize*: NBytes # Size of each chunk
pad*: bool # Pad last chunk to chunkSize?
FileChunker* = Chunker
LPStreamChunker* = Chunker
@ -60,21 +60,30 @@ proc getBytes*(c: Chunker): Future[seq[byte]] {.async.} =
return move buff
proc new*(
T: type Chunker, reader: Reader, chunkSize = DefaultChunkSize, pad = true
T: type Chunker,
reader: Reader,
chunkSize = DefaultChunkSize,
pad = true
): Chunker =
## create a new Chunker instance
##
Chunker(reader: reader, offset: 0, chunkSize: chunkSize, pad: pad)
Chunker(
reader: reader,
offset: 0,
chunkSize: chunkSize,
pad: pad)
proc new*(
T: type LPStreamChunker, stream: LPStream, chunkSize = DefaultChunkSize, pad = true
T: type LPStreamChunker,
stream: LPStream,
chunkSize = DefaultChunkSize,
pad = true
): LPStreamChunker =
## create the default LPStream chunker
##
proc reader(
data: ChunkBuffer, len: int
): Future[int] {.async: (raises: [ChunkerError, CancelledError]).} =
proc reader(data: ChunkBuffer, len: int): Future[int]
{.gcsafe, async, raises: [Defect].} =
var res = 0
try:
while res < len:
@ -85,24 +94,29 @@ proc new*(
raise error
except LPStreamError as error:
error "LPStream error", err = error.msg
raise newException(ChunkerError, "LPStream error", error)
raise error
except CatchableError as exc:
error "CatchableError exception", exc = exc.msg
raise newException(Defect, exc.msg)
return res
LPStreamChunker.new(reader = reader, chunkSize = chunkSize, pad = pad)
LPStreamChunker.new(
reader = reader,
chunkSize = chunkSize,
pad = pad)
proc new*(
T: type FileChunker, file: File, chunkSize = DefaultChunkSize, pad = true
T: type FileChunker,
file: File,
chunkSize = DefaultChunkSize,
pad = true
): FileChunker =
## create the default File chunker
##
proc reader(
data: ChunkBuffer, len: int
): Future[int] {.async: (raises: [ChunkerError, CancelledError]).} =
proc reader(data: ChunkBuffer, len: int): Future[int]
{.gcsafe, async, raises: [Defect].} =
var total = 0
try:
while total < len:
@ -121,4 +135,7 @@ proc new*(
return total
FileChunker.new(reader = reader, chunkSize = chunkSize, pad = pad)
FileChunker.new(
reader = reader,
chunkSize = chunkSize,
pad = pad)
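A brief usage sketch for `FileChunker`; the open `file` handle is assumed, and each `getBytes` call yields at most `chunkSize` bytes, with an empty sequence signalling end of input.

```nim
# Usage sketch; `file` is an already-open File (assumed).
proc dumpChunks(file: File) {.async.} =
  let chunker = FileChunker.new(file = file, chunkSize = DefaultChunkSize)
  while true:
    let chunk = await chunker.getBytes()
    if chunk.len == 0:
      break # reader exhausted
    echo "chunk of ", chunk.len, " bytes, stream offset ", chunker.offset
```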


@ -1,7 +1,6 @@
{.push raises: [].}
import pkg/chronos
import pkg/stew/endians2
import pkg/upraises
import pkg/stint
type
@ -9,12 +8,10 @@ type
SecondsSince1970* = int64
Timeout* = object of CatchableError
method now*(clock: Clock): SecondsSince1970 {.base, gcsafe, raises: [].} =
method now*(clock: Clock): SecondsSince1970 {.base, upraises: [].} =
raiseAssert "not implemented"
method waitUntil*(
clock: Clock, time: SecondsSince1970
) {.base, async: (raises: [CancelledError]).} =
method waitUntil*(clock: Clock, time: SecondsSince1970) {.base, async.} =
raiseAssert "not implemented"
method start*(clock: Clock) {.base, async.} =
@ -23,9 +20,9 @@ method start*(clock: Clock) {.base, async.} =
method stop*(clock: Clock) {.base, async.} =
discard
proc withTimeout*(
future: Future[void], clock: Clock, expiry: SecondsSince1970
) {.async.} =
proc withTimeout*(future: Future[void],
clock: Clock,
expiry: SecondsSince1970) {.async.} =
let timeout = clock.waitUntil(expiry)
try:
await future or timeout
@ -43,8 +40,5 @@ proc toSecondsSince1970*(bytes: seq[byte]): SecondsSince1970 =
let asUint = uint64.fromBytes(bytes)
cast[int64](asUint)
proc toSecondsSince1970*(num: uint64): SecondsSince1970 =
cast[int64](num)
proc toSecondsSince1970*(bigint: UInt256): SecondsSince1970 =
bigint.truncate(int64)
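A hedged sketch of `withTimeout` in use, assuming a started `Clock` implementation and that the elided tail of the proc raises `Timeout` when the deadline wins the race.

```nim
# Sketch only: clock is some started Clock; work is any Future[void].
proc boundedWait(
    clock: Clock, work: Future[void], expiry: SecondsSince1970
) {.async.} =
  try:
    await work.withTimeout(clock, expiry)
    echo "work finished before the expiry"
  except Timeout:
    echo "clock reached ", expiry, " before the work completed"
```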


@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -12,23 +12,23 @@ import std/strutils
import std/os
import std/tables
import std/cpuinfo
import std/net
import pkg/chronos
import pkg/taskpools
import pkg/presto
import pkg/libp2p
import pkg/confutils
import pkg/confutils/defs
import pkg/nitro
import pkg/stew/io2
import pkg/stew/shims/net as stewnet
import pkg/datastore
import pkg/ethers except Rng
import pkg/stew/io2
import pkg/taskpools
import ./node
import ./conf
import ./rng as random
import ./rng
import ./rest/api
import ./stores
import ./slots
@ -44,7 +44,6 @@ import ./utils/addrutils
import ./namespaces
import ./codextypes
import ./logutils
import ./nat
logScope:
topics = "codex node"
@ -57,20 +56,10 @@ type
repoStore: RepoStore
maintenance: BlockMaintainer
taskpool: Taskpool
isStarted: bool
CodexPrivateKey* = libp2p.PrivateKey # alias
EthWallet = ethers.Wallet
func config*(self: CodexServer): CodexConf =
return self.config
func node*(self: CodexServer): CodexNodeRef =
return self.codexNode
func repoStore*(self: CodexServer): RepoStore =
return self.repoStore
proc waitForSync(provider: Provider): Future[void] {.async.} =
var sleepTime = 1
trace "Checking sync state of Ethereum provider..."
@ -81,7 +70,8 @@ proc waitForSync(provider: Provider): Future[void] {.async.} =
inc sleepTime
trace "Ethereum provider is synced."
proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
proc bootstrapInteractions(
s: CodexServer): Future[void] {.async.} =
## bootstrap interactions and return contracts
## using client, host and validator pairings
##
@ -94,9 +84,7 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
error "Persistence enabled, but no Ethereum account was set"
quit QuitFailure
let provider = JsonRpcProvider.new(
config.ethProvider, maxPriorityFeePerGas = config.maxPriorityFeePerGas.u256
)
let provider = JsonRpcProvider.new(config.ethProvider)
await waitForSync(provider)
var signer: Signer
if account =? config.ethAccount:
@ -116,15 +104,13 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
quit QuitFailure
signer = wallet
let deploy = Deployment.new(provider, config.marketplaceAddress)
let deploy = Deployment.new(provider, config)
without marketplaceAddress =? await deploy.address(Marketplace):
error "No Marketplace address was specified or there is no known address for the current network"
quit QuitFailure
let marketplace = Marketplace.new(marketplaceAddress, signer)
let market = OnChainMarket.new(
marketplace, config.rewardRecipient, config.marketplaceRequestCacheSize
)
let market = OnChainMarket.new(marketplace, config.rewardRecipient)
let clock = OnChainClock.new(provider)
var client: ?ClientInteractions
@ -138,7 +124,7 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
# This is used for simulation purposes. Normal nodes won't be compiled with this flag
# and hence the proof failure will always be 0.
when storage_enable_proof_failures:
when codex_enable_proof_failures:
let proofFailures = config.simulateProofFailures
if proofFailures > 0:
warn "Enabling proof failure simulation!"
@ -147,232 +133,172 @@ proc bootstrapInteractions(s: CodexServer): Future[void] {.async.} =
if config.simulateProofFailures > 0:
warn "Proof failure simulation is not enabled for this build! Configuration ignored"
if error =? (await market.loadConfig()).errorOption:
fatal "Cannot load market configuration", error = error.msg
quit QuitFailure
let purchasing = Purchasing.new(market, clock)
let sales = Sales.new(market, clock, repo, proofFailures)
client = some ClientInteractions.new(clock, purchasing)
host = some HostInteractions.new(clock, sales)
if config.validator:
without validationConfig =?
ValidationConfig.init(
config.validatorMaxSlots, config.validatorGroups, config.validatorGroupIndex
), err:
error "Invalid validation parameters", err = err.msg
quit QuitFailure
without validationConfig =? ValidationConfig.init(
config.validatorMaxSlots,
config.validatorGroups,
config.validatorGroupIndex), err:
error "Invalid validation parameters", err = err.msg
quit QuitFailure
let validation = Validation.new(clock, market, validationConfig)
validator = some ValidatorInteractions.new(clock, validation)
s.codexNode.contracts = (client, host, validator)
proc start*(s: CodexServer) {.async.} =
if s.isStarted:
warn "Storage server already started, skipping"
return
trace "Starting codex node", config = $s.config
trace "Starting Storage node", config = $s.config
await s.repoStore.start()
s.maintenance.start()
await s.codexNode.switch.start()
let (announceAddrs, discoveryAddrs) = nattedAddress(
s.config.nat, s.codexNode.switch.peerInfo.addrs, s.config.discoveryPort
)
let
# TODO: Can't define these as constants, pity
natIpPart = MultiAddress.init("/ip4/" & $s.config.nat & "/")
.expect("Should create multiaddress")
anyAddrIp = MultiAddress.init("/ip4/0.0.0.0/")
.expect("Should create multiaddress")
loopBackAddrIp = MultiAddress.init("/ip4/127.0.0.1/")
.expect("Should create multiaddress")
# announce addresses should be set to bound addresses,
# but the IP should be mapped to the provided nat ip
announceAddrs = s.codexNode.switch.peerInfo.addrs.mapIt:
block:
let
listenIPPart = it[multiCodec("ip4")].expect("Should get IP")
if listenIPPart == anyAddrIp or
(listenIPPart == loopBackAddrIp and natIpPart != loopBackAddrIp):
it.remapAddr(s.config.nat.some)
else:
it
s.codexNode.discovery.updateAnnounceRecord(announceAddrs)
s.codexNode.discovery.updateDhtRecord(discoveryAddrs)
s.codexNode.discovery.updateDhtRecord(s.config.nat, s.config.discoveryPort)
await s.bootstrapInteractions()
await s.codexNode.start()
if s.restServer != nil:
s.restServer.start()
s.isStarted = true
s.restServer.start()
proc stop*(s: CodexServer) {.async.} =
if not s.isStarted:
warn "Storage is not started"
return
notice "Stopping codex node"
notice "Stopping Storage node"
var futures =
@[
s.codexNode.switch.stop(),
s.codexNode.stop(),
s.repoStore.stop(),
s.maintenance.stop(),
]
s.taskpool.syncAll()
s.taskpool.shutdown()
if s.restServer != nil:
futures.add(s.restServer.stop())
let res = await noCancel allFinishedFailed[void](futures)
if res.failure.len > 0:
error "Failed to stop Storage node", failures = res.failure.len
raiseAssert "Failed to stop Storage node"
proc close*(s: CodexServer) {.async.} =
var futures = @[s.codexNode.close(), s.repoStore.close()]
let res = await noCancel allFinishedFailed[void](futures)
if not s.taskpool.isNil:
try:
s.taskpool.shutdown()
except Exception as exc:
error "Failed to stop the taskpool", failures = res.failure.len
raiseAssert("Failure in taskpool shutdown:" & exc.msg)
if res.failure.len > 0:
error "Failed to close Storage node", failures = res.failure.len
raiseAssert "Failed to close Storage node"
proc shutdown*(server: CodexServer) {.async.} =
await server.stop()
await server.close()
await allFuturesThrowing(
s.restServer.stop(),
s.codexNode.switch.stop(),
s.codexNode.stop(),
s.repoStore.stop(),
s.maintenance.stop())
proc new*(
T: type CodexServer, config: CodexConf, privateKey: CodexPrivateKey
): CodexServer =
T: type CodexServer,
config: CodexConf,
privateKey: CodexPrivateKey): CodexServer =
## create CodexServer including setting up datastore, repostore, etc
let switch = SwitchBuilder
let
switch = SwitchBuilder
.new()
.withPrivateKey(privateKey)
.withAddresses(config.listenAddrs)
.withRng(random.Rng.instance())
.withRng(Rng.instance())
.withNoise()
.withMplex(5.minutes, 5.minutes)
.withMaxConnections(config.maxPeers)
.withAgentVersion(config.agentString)
.withSignedPeerRecord(true)
.withTcpTransport({ServerFlags.ReuseAddr, ServerFlags.TcpNoDelay})
.withTcpTransport({ServerFlags.ReuseAddr})
.build()
var
cache: CacheStore = nil
taskpool: Taskpool
try:
if config.numThreads == ThreadCount(0):
taskpool = Taskpool.new(numThreads = min(countProcessors(), 16))
else:
taskpool = Taskpool.new(numThreads = int(config.numThreads))
info "Threadpool started", numThreads = taskpool.numThreads
except CatchableError as exc:
raiseAssert("Failure in taskpool initialization:" & exc.msg)
if config.cacheSize > 0'nb:
cache = CacheStore.new(cacheSize = config.cacheSize)
## Is unused?
let discoveryDir = config.dataDir / CodexDhtNamespace
let
discoveryDir = config.dataDir / CodexDhtNamespace
if io2.createPath(discoveryDir).isErr:
trace "Unable to create discovery directory for block store",
discoveryDir = discoveryDir
trace "Unable to create discovery directory for block store", discoveryDir = discoveryDir
raise (ref Defect)(
msg: "Unable to create discovery directory for block store: " & discoveryDir
)
msg: "Unable to create discovery directory for block store: " & discoveryDir)
let
discoveryStore = Datastore(
LevelDbDatastore.new(config.dataDir / CodexDhtProvidersNamespace).expect(
"Should create discovery datastore!"
)
)
LevelDbDatastore.new(config.dataDir / CodexDhtProvidersNamespace)
.expect("Should create discovery datastore!"))
discovery = Discovery.new(
switch.peerInfo.privateKey,
announceAddrs = config.listenAddrs,
bindIp = config.discoveryIp,
bindPort = config.discoveryPort,
bootstrapNodes = config.bootstrapNodes,
store = discoveryStore,
)
store = discoveryStore)
wallet = WalletRef.new(EthPrivateKey.random())
network = BlockExcNetwork.new(switch)
repoData =
case config.repoKind
of repoFS:
Datastore(
FSDatastore.new($config.dataDir, depth = 5).expect(
"Should create repo file data store!"
)
)
of repoSQLite:
Datastore(
SQLiteDatastore.new($config.dataDir).expect(
"Should create repo SQLite data store!"
)
)
of repoLevelDb:
Datastore(
LevelDbDatastore.new($config.dataDir).expect(
"Should create repo LevelDB data store!"
)
)
repoData = case config.repoKind
of repoFS: Datastore(FSDatastore.new($config.dataDir, depth = 5)
.expect("Should create repo file data store!"))
of repoSQLite: Datastore(SQLiteDatastore.new($config.dataDir)
.expect("Should create repo SQLite data store!"))
of repoLevelDb: Datastore(LevelDbDatastore.new($config.dataDir)
.expect("Should create repo LevelDB data store!"))
repoStore = RepoStore.new(
repoDs = repoData,
metaDs = LevelDbDatastore.new(config.dataDir / CodexMetaNamespace).expect(
"Should create metadata store!"
),
metaDs = LevelDbDatastore.new(config.dataDir / CodexMetaNamespace)
.expect("Should create metadata store!"),
quotaMaxBytes = config.storageQuota,
blockTtl = config.blockTtl,
)
blockTtl = config.blockTtl)
maintenance = BlockMaintainer.new(
repoStore,
interval = config.blockMaintenanceInterval,
numberOfBlocksPerInterval = config.blockMaintenanceNumberOfBlocks,
)
numberOfBlocksPerInterval = config.blockMaintenanceNumberOfBlocks)
peerStore = PeerCtxStore.new()
pendingBlocks = PendingBlocksManager.new(retries = config.blockRetries)
pendingBlocks = PendingBlocksManager.new()
advertiser = Advertiser.new(repoStore, discovery)
blockDiscovery =
DiscoveryEngine.new(repoStore, peerStore, network, discovery, pendingBlocks)
engine = BlockExcEngine.new(
repoStore, wallet, network, blockDiscovery, advertiser, peerStore, pendingBlocks
)
blockDiscovery = DiscoveryEngine.new(repoStore, peerStore, network, discovery, pendingBlocks)
engine = BlockExcEngine.new(repoStore, wallet, network, blockDiscovery, advertiser, peerStore, pendingBlocks)
store = NetworkStore.new(engine, repoStore)
prover =
if config.prover:
let backend =
config.initializeBackend().expect("Unable to create prover backend.")
some Prover.new(store, backend, config.numProofSamples)
else:
none Prover
prover = if config.prover:
let backend = config.initializeBackend().expect("Unable to create prover backend.")
some Prover.new(store, backend, config.numProofSamples)
else:
none Prover
taskpool = Taskpool.new(num_threads = countProcessors())
codexNode = CodexNodeRef.new(
switch = switch,
networkStore = store,
engine = engine,
discovery = discovery,
prover = prover,
taskPool = taskpool,
)
discovery = discovery,
taskpool = taskpool)
var restServer: RestServerRef = nil
if config.apiBindAddress.isSome:
restServer = RestServerRef
.new(
codexNode.initRestApi(config, repoStore, config.apiCorsAllowedOrigin),
initTAddress(config.apiBindAddress.get(), config.apiPort),
bufferSize = (1024 * 64),
maxRequestBodySize = int.high,
)
.expect("Should create rest server!")
restServer = RestServerRef.new(
codexNode.initRestApi(config, repoStore, config.apiCorsAllowedOrigin),
initTAddress(config.apiBindAddress , config.apiPort),
bufferSize = (1024 * 64),
maxRequestBodySize = int.high)
.expect("Should start rest server!")
switch.mount(network)
@ -382,5 +308,4 @@ proc new*(
restServer: restServer,
repoStore: repoStore,
maintenance: maintenance,
taskpool: taskpool,
)
taskpool: taskpool)


@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -25,15 +25,15 @@ export tables
const
# Size of blocks for storage / network exchange,
DefaultBlockSize* = NBytes 1024 * 64
DefaultBlockSize* = NBytes 1024*64
DefaultCellSize* = NBytes 2048
# Proving defaults
DefaultMaxSlotDepth* = 32
DefaultMaxSlotDepth* = 32
DefaultMaxDatasetDepth* = 8
DefaultBlockDepth* = 5
DefaultCellElms* = 67
DefaultSamplesNum* = 5
DefaultBlockDepth* = 5
DefaultCellElms* = 67
DefaultSamplesNum* = 5
# hashes
Sha256HashCodec* = multiCodec("sha2-256")
@ -48,10 +48,18 @@ const
SlotProvingRootCodec* = multiCodec("codex-proving-root")
CodexSlotCellCodec* = multiCodec("codex-slot-cell")
CodexHashesCodecs* = [Sha256HashCodec, Pos2Bn128SpngCodec, Pos2Bn128MrklCodec]
CodexHashesCodecs* = [
Sha256HashCodec,
Pos2Bn128SpngCodec,
Pos2Bn128MrklCodec
]
CodexPrimitivesCodecs* = [
ManifestCodec, DatasetRootCodec, BlockCodec, SlotRootCodec, SlotProvingRootCodec,
ManifestCodec,
DatasetRootCodec,
BlockCodec,
SlotRootCodec,
SlotProvingRootCodec,
CodexSlotCellCodec,
]
@ -66,34 +74,40 @@ proc initEmptyCidTable(): ?!Table[(CidVersion, MultiCodec, MultiCodec), Cid] =
let
emptyData: seq[byte] = @[]
PadHashes = {
Sha256HashCodec: ?MultiHash.digest($Sha256HashCodec, emptyData).mapFailure,
Sha512HashCodec: ?MultiHash.digest($Sha512HashCodec, emptyData).mapFailure,
Sha256HashCodec: ? MultiHash.digest($Sha256HashCodec, emptyData).mapFailure,
Sha512HashCodec: ? MultiHash.digest($Sha512HashCodec, emptyData).mapFailure,
}.toTable
var table = initTable[(CidVersion, MultiCodec, MultiCodec), Cid]()
var
table = initTable[(CidVersion, MultiCodec, MultiCodec), Cid]()
for hcodec, mhash in PadHashes.pairs:
table[(CIDv1, hcodec, BlockCodec)] = ?Cid.init(CIDv1, BlockCodec, mhash).mapFailure
table[(CIDv1, hcodec, BlockCodec)] = ? Cid.init(CIDv1, BlockCodec, mhash).mapFailure
success table
proc emptyCid*(version: CidVersion, hcodec: MultiCodec, dcodec: MultiCodec): ?!Cid =
proc emptyCid*(
version: CidVersion,
hcodec: MultiCodec,
dcodec: MultiCodec): ?!Cid =
## Returns cid representing empty content,
## given cid version, hash codec and data codec
##
var table {.global, threadvar.}: Table[(CidVersion, MultiCodec, MultiCodec), Cid]
var
table {.global, threadvar.}: Table[(CidVersion, MultiCodec, MultiCodec), Cid]
once:
table = ?initEmptyCidTable()
table = ? initEmptyCidTable()
table[(version, hcodec, dcodec)].catch
proc emptyDigest*(
version: CidVersion, hcodec: MultiCodec, dcodec: MultiCodec
): ?!MultiHash =
version: CidVersion,
hcodec: MultiCodec,
dcodec: MultiCodec): ?!MultiHash =
## Returns hash representing empty content,
## given cid version, hash codec and data codec
##
emptyCid(version, hcodec, dcodec).flatMap((cid: Cid) => cid.mhash.mapFailure)
emptyCid(version, hcodec, dcodec)
.flatMap((cid: Cid) => cid.mhash.mapFailure)
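For illustration, how the memoized table above is exercised; both lookups below hit the same thread-local cache built by `initEmptyCidTable` on first use.

```nim
# Usage sketch: repeated calls reuse the once-initialized threadvar table.
let cidRes = emptyCid(CIDv1, Sha256HashCodec, BlockCodec)
assert cidRes.isOk
let digestRes = emptyDigest(CIDv1, Sha256HashCodec, BlockCodec)
assert digestRes.isOk
```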

File diff suppressed because it is too large


@ -1,8 +0,0 @@
const ContentIdsExts = [
multiCodec("codex-root"),
multiCodec("codex-manifest"),
multiCodec("codex-block"),
multiCodec("codex-slot-root"),
multiCodec("codex-proving-root"),
multiCodec("codex-slot-cell"),
]


@ -2,10 +2,8 @@ import contracts/requests
import contracts/marketplace
import contracts/market
import contracts/interactions
import contracts/provider
export requests
export marketplace
export market
export interactions
export provider


@ -1,13 +1,13 @@
Logos Storage Contracts in Nim
Codex Contracts in Nim
=======================
Nim API for the [Logos Storage smart contracts][1].
Nim API for the [Codex smart contracts][1].
Usage
-----
For a global overview of the steps involved in starting and fulfilling a
storage contract, see [Logos Storage Contracts][1].
storage contract, see [Codex Contracts][1].
Smart contract
--------------
@ -144,5 +144,5 @@ await storage
.markProofAsMissing(id, period)
```
[1]: https://github.com/logos-storage/logos-storage-contracts-eth/
[2]: https://github.com/logos-storage/logos-storage-research/blob/master/design/storage-proof-timing.md
[1]: https://github.com/status-im/codex-contracts-eth/
[2]: https://github.com/status-im/codex-research/blob/main/design/storage-proof-timing.md


@ -1,32 +1,26 @@
{.push raises: [].}
import std/times
import pkg/ethers
import pkg/questionable
import pkg/chronos
import pkg/stint
import ../clock
import ../conf
import ../utils/trackedfutures
export clock
logScope:
topics = "contracts clock"
type OnChainClock* = ref object of Clock
provider: Provider
subscription: Subscription
offset: times.Duration
blockNumber: UInt256
started: bool
newBlock: AsyncEvent
trackedFutures: TrackedFutures
type
OnChainClock* = ref object of Clock
provider: Provider
subscription: Subscription
offset: times.Duration
blockNumber: UInt256
started: bool
newBlock: AsyncEvent
proc new*(_: type OnChainClock, provider: Provider): OnChainClock =
OnChainClock(
provider: provider, newBlock: newAsyncEvent(), trackedFutures: TrackedFutures()
)
OnChainClock(provider: provider, newBlock: newAsyncEvent())
proc update(clock: OnChainClock, blck: Block) =
if number =? blck.number and number > clock.blockNumber:
@ -34,28 +28,26 @@ proc update(clock: OnChainClock, blck: Block) =
let computerTime = getTime()
clock.offset = blockTime - computerTime
clock.blockNumber = number
trace "updated clock",
blockTime = blck.timestamp, blockNumber = number, offset = clock.offset
trace "updated clock", blockTime=blck.timestamp, blockNumber=number, offset=clock.offset
clock.newBlock.fire()
proc update(clock: OnChainClock) {.async: (raises: []).} =
proc update(clock: OnChainClock) {.async.} =
try:
if latest =? (await clock.provider.getBlock(BlockTag.latest)):
clock.update(latest)
except CancelledError as error:
raise error
except CatchableError as error:
debug "error updating clock: ", error = error.msg
debug "error updating clock: ", error=error.msg
discard
method start*(clock: OnChainClock) {.async.} =
if clock.started:
return
proc onBlock(blckResult: ?!Block) =
if eventError =? blckResult.errorOption:
error "There was an error in block subscription", msg = eventError.msg
return
proc onBlock(_: Block) =
# ignore block parameter; hardhat may call this with pending blocks
clock.trackedFutures.track(clock.update())
asyncSpawn clock.update()
await clock.update()
@ -67,16 +59,13 @@ method stop*(clock: OnChainClock) {.async.} =
return
await clock.subscription.unsubscribe()
await clock.trackedFutures.cancelTracked()
clock.started = false
method now*(clock: OnChainClock): SecondsSince1970 =
doAssert clock.started, "clock should be started before calling now()"
return toUnix(getTime() + clock.offset)
method waitUntil*(
clock: OnChainClock, time: SecondsSince1970
) {.async: (raises: [CancelledError]).} =
method waitUntil*(clock: OnChainClock, time: SecondsSince1970) {.async.} =
while (let difference = time - clock.now(); difference > 0):
clock.newBlock.clear()
discard await clock.newBlock.wait().withTimeout(chronos.seconds(difference))


@ -1,71 +1,52 @@
import pkg/contractabi
import pkg/ethers/contracts/fields
import pkg/ethers/fields
import pkg/questionable/results
export contractabi
const DefaultRequestCacheSize* = 128.uint16
const DefaultMaxPriorityFeePerGas* = 1_000_000_000.uint64
type
MarketplaceConfig* = object
collateral*: CollateralConfig
proofs*: ProofConfig
reservations*: SlotReservationsConfig
requestDurationLimit*: uint64
CollateralConfig* = object
repairRewardPercentage*: uint8
# percentage of remaining collateral the slot has after it has been freed
repairRewardPercentage*: uint8 # percentage of remaining collateral the slot has after it has been freed
maxNumberOfSlashes*: uint8 # frees the slot when the number of slashes reaches this value
slashCriterion*: uint16 # number of missed proofs that leads to slashing
slashPercentage*: uint8 # percentage of the collateral that is slashed
validatorRewardPercentage*: uint8
# percentage of the slashed amount going to the validators
ProofConfig* = object
period*: uint64 # proof requirements are calculated per period (in seconds)
timeout*: uint64 # mark proofs as missing before the timeout (in seconds)
period*: UInt256 # proof requirements are calculated per period (in seconds)
timeout*: UInt256 # mark proofs as missing before the timeout (in seconds)
downtime*: uint8 # ignore this many recent blocks for proof requirements
downtimeProduct*: uint8
zkeyHash*: string # hash of the zkey file which is linked to the verifier
# Ensures the pointer does not remain in downtime for many consecutive
# periods. For each period increase, move the pointer `downtimeProduct`
# blocks. Should be a prime number to ensure there are no cycles.
downtimeProduct*: uint8
SlotReservationsConfig* = object
maxReservations*: uint8
func fromTuple(_: type ProofConfig, tupl: tuple): ProofConfig =
ProofConfig(
period: tupl[0],
timeout: tupl[1],
downtime: tupl[2],
downtimeProduct: tupl[3],
zkeyHash: tupl[4],
zkeyHash: tupl[3],
downtimeProduct: tupl[4]
)
func fromTuple(_: type SlotReservationsConfig, tupl: tuple): SlotReservationsConfig =
SlotReservationsConfig(maxReservations: tupl[0])
func fromTuple(_: type CollateralConfig, tupl: tuple): CollateralConfig =
CollateralConfig(
repairRewardPercentage: tupl[0],
maxNumberOfSlashes: tupl[1],
slashPercentage: tupl[2],
validatorRewardPercentage: tupl[3],
slashCriterion: tupl[2],
slashPercentage: tupl[3]
)
func fromTuple(_: type MarketplaceConfig, tupl: tuple): MarketplaceConfig =
MarketplaceConfig(
collateral: tupl[0],
proofs: tupl[1],
reservations: tupl[2],
requestDurationLimit: tupl[3],
proofs: tupl[1]
)
func solidityType*(_: type SlotReservationsConfig): string =
solidityType(SlotReservationsConfig.fieldTypes)
func solidityType*(_: type ProofConfig): string =
solidityType(ProofConfig.fieldTypes)
@ -73,10 +54,7 @@ func solidityType*(_: type CollateralConfig): string =
solidityType(CollateralConfig.fieldTypes)
func solidityType*(_: type MarketplaceConfig): string =
solidityType(MarketplaceConfig.fieldTypes)
func encode*(encoder: var AbiEncoder, slot: SlotReservationsConfig) =
encoder.write(slot.fieldValues)
solidityType(CollateralConfig.fieldTypes)
func encode*(encoder: var AbiEncoder, slot: ProofConfig) =
encoder.write(slot.fieldValues)
@ -91,10 +69,6 @@ func decode*(decoder: var AbiDecoder, T: type ProofConfig): ?!T =
let tupl = ?decoder.read(ProofConfig.fieldTypes)
success ProofConfig.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type SlotReservationsConfig): ?!T =
let tupl = ?decoder.read(SlotReservationsConfig.fieldTypes)
success SlotReservationsConfig.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type CollateralConfig): ?!T =
let tupl = ?decoder.read(CollateralConfig.fieldTypes)
success CollateralConfig.fromTuple(tupl)


@ -9,42 +9,38 @@ import ./marketplace
type Deployment* = ref object
provider: Provider
marketplaceAddressOverride: ?Address
config: CodexConf
const knownAddresses = {
# Hardhat localhost network
"31337":
{"Marketplace": Address.init("0x322813Fd9A801c5507c9de605d63CEA4f2CE6c44")}.toTable,
# Taiko Alpha-3 Testnet
"167005":
{"Marketplace": Address.init("0x948CF9291b77Bd7ad84781b9047129Addf1b894F")}.toTable,
# Codex Testnet - Jun 19 2025 13:11:56 PM (+00:00 UTC)
"789987":
{"Marketplace": Address.init("0x5378a4EA5dA2a548ce22630A3AE74b052000C62D")}.toTable,
# Linea (Status)
"1660990954":
{"Marketplace": Address.init("0x34F606C65869277f236ce07aBe9af0B8c88F486B")}.toTable,
# Hardhat localhost network
"31337": {
"Marketplace": Address.init("0x322813Fd9A801c5507c9de605d63CEA4f2CE6c44"),
}.toTable,
# Taiko Alpha-3 Testnet
"167005": {
"Marketplace": Address.init("0x948CF9291b77Bd7ad84781b9047129Addf1b894F")
}.toTable,
# Codex Testnet - Nov 03 2024 07:30:30 AM (+00:00 UTC)
"789987": {
"Marketplace": Address.init("0x5Bd66fA15Eb0E546cd26808248867a572cFF5706")
}.toTable
}.toTable
proc getKnownAddress(T: type, chainId: UInt256): ?Address =
let id = chainId.toString(10)
notice "Looking for well-known contract address with ChainID ", chainId = id
notice "Looking for well-known contract address with ChainID ", chainId=id
if not (id in knownAddresses):
return none Address
return knownAddresses[id].getOrDefault($T, Address.none)
proc new*(
_: type Deployment,
provider: Provider,
marketplaceAddressOverride: ?Address = none Address,
): Deployment =
Deployment(provider: provider, marketplaceAddressOverride: marketplaceAddressOverride)
proc new*(_: type Deployment, provider: Provider, config: CodexConf): Deployment =
Deployment(provider: provider, config: config)
proc address*(deployment: Deployment, contract: type): Future[?Address] {.async.} =
when contract is Marketplace:
if address =? deployment.marketplaceAddressOverride:
if address =? deployment.config.marketplaceAddress:
return some address
let chainId = await deployment.provider.getChainId()


@ -9,12 +9,13 @@ import ./interactions
export purchasing
export logutils
type ClientInteractions* = ref object of ContractInteractions
purchasing*: Purchasing
type
ClientInteractions* = ref object of ContractInteractions
purchasing*: Purchasing
proc new*(
_: type ClientInteractions, clock: OnChainClock, purchasing: Purchasing
): ClientInteractions =
proc new*(_: type ClientInteractions,
clock: OnChainClock,
purchasing: Purchasing): ClientInteractions =
ClientInteractions(clock: clock, purchasing: purchasing)
proc start*(self: ClientInteractions) {.async.} =


@ -7,10 +7,15 @@ import ./interactions
export sales
export logutils
type HostInteractions* = ref object of ContractInteractions
sales*: Sales
type
HostInteractions* = ref object of ContractInteractions
sales*: Sales
proc new*(_: type HostInteractions, clock: Clock, sales: Sales): HostInteractions =
proc new*(
_: type HostInteractions,
clock: Clock,
sales: Sales
): HostInteractions =
## Create a new HostInteractions instance
##
HostInteractions(clock: clock, sales: sales)


@ -5,8 +5,9 @@ import ../market
export clock
type ContractInteractions* = ref object of RootObj
clock*: Clock
type
ContractInteractions* = ref object of RootObj
clock*: Clock
method start*(self: ContractInteractions) {.async, base.} =
discard


@ -3,12 +3,13 @@ import ../../validation
export validation
type ValidatorInteractions* = ref object of ContractInteractions
validation: Validation
type
ValidatorInteractions* = ref object of ContractInteractions
validation: Validation
proc new*(
_: type ValidatorInteractions, clock: OnChainClock, validation: Validation
): ValidatorInteractions =
proc new*(_: type ValidatorInteractions,
clock: OnChainClock,
validation: Validation): ValidatorInteractions =
ValidatorInteractions(clock: clock, validation: validation)
proc start*(self: ValidatorInteractions) {.async.} =

View File

@ -1,14 +1,14 @@
import std/strformat
import std/sequtils
import std/strutils
import std/sugar
import pkg/ethers
import pkg/upraises
import pkg/questionable
import pkg/lrucache
import ../utils/exceptions
import ../logutils
import ../market
import ./marketplace
import ./proofs
import ./provider
export market
@ -20,225 +20,128 @@ type
contract: Marketplace
signer: Signer
rewardRecipient: ?Address
configuration: ?MarketplaceConfig
requestCache: LruCache[string, StorageRequest]
allowanceLock: AsyncLock
MarketSubscription = market.Subscription
EventSubscription = ethers.Subscription
OnChainMarketSubscription = ref object of MarketSubscription
eventSubscription: EventSubscription
func new*(
_: type OnChainMarket,
contract: Marketplace,
rewardRecipient = Address.none,
requestCacheSize: uint16 = DefaultRequestCacheSize,
): OnChainMarket =
_: type OnChainMarket,
contract: Marketplace,
rewardRecipient = Address.none): OnChainMarket =
without signer =? contract.signer:
raiseAssert("Marketplace contract should have a signer")
var requestCache = newLruCache[string, StorageRequest](int(requestCacheSize))
OnChainMarket(
contract: contract,
signer: signer,
rewardRecipient: rewardRecipient,
requestCache: requestCache,
rewardRecipient: rewardRecipient
)
proc raiseMarketError(message: string) {.raises: [MarketError].} =
raise newException(MarketError, message)
func prefixWith(suffix, prefix: string, separator = ": "): string =
if prefix.len > 0:
return &"{prefix}{separator}{suffix}"
else:
return suffix
template convertEthersError(msg: string = "", body) =
template convertEthersError(body) =
try:
body
except EthersError as error:
raiseMarketError(error.msgDetail.prefixWith(msg))
raiseMarketError(error.msgDetail)
proc config(
market: OnChainMarket
): Future[MarketplaceConfig] {.async: (raises: [CancelledError, MarketError]).} =
without resolvedConfig =? market.configuration:
if err =? (await market.loadConfig()).errorOption:
raiseMarketError(err.msg)
without config =? market.configuration:
raiseMarketError("Failed to access to config from the Marketplace contract")
return config
return resolvedConfig
template withAllowanceLock*(market: OnChainMarket, body: untyped) =
if market.allowanceLock.isNil:
market.allowanceLock = newAsyncLock()
await market.allowanceLock.acquire()
try:
body
finally:
try:
market.allowanceLock.release()
except AsyncLockError as error:
raise newException(Defect, error.msg, error)
proc approveFunds(
market: OnChainMarket, amount: UInt256
) {.async: (raises: [CancelledError, MarketError]).} =
proc approveFunds(market: OnChainMarket, amount: UInt256) {.async.} =
debug "Approving tokens", amount
convertEthersError("Failed to approve funds"):
convertEthersError:
let tokenAddress = await market.contract.token()
let token = Erc20Token.new(tokenAddress, market.signer)
let owner = await market.signer.getAddress()
let spender = market.contract.address
market.withAllowanceLock:
let allowance = await token.allowance(owner, spender)
discard await token.approve(spender, allowance + amount).confirm(1)
discard await token.increaseAllowance(market.contract.address(), amount).confirm(0)
method loadConfig*(
market: OnChainMarket
): Future[?!void] {.async: (raises: [CancelledError]).} =
try:
without config =? market.configuration:
let fetchedConfig = await market.contract.configuration()
market.configuration = some fetchedConfig
return success()
except EthersError as err:
return failure newException(
MarketError,
"Failed to fetch the config from the Marketplace contract: " & err.msg,
)
method getZkeyHash*(
market: OnChainMarket
): Future[?string] {.async: (raises: [CancelledError, MarketError]).} =
let config = await market.config()
method getZkeyHash*(market: OnChainMarket): Future[?string] {.async.} =
let config = await market.contract.configuration()
return some config.proofs.zkeyHash
method getSigner*(
market: OnChainMarket
): Future[Address] {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to get signer address"):
method getSigner*(market: OnChainMarket): Future[Address] {.async.} =
convertEthersError:
return await market.signer.getAddress()
method periodicity*(
market: OnChainMarket
): Future[Periodicity] {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to get Marketplace config"):
let config = await market.config()
method periodicity*(market: OnChainMarket): Future[Periodicity] {.async.} =
convertEthersError:
let config = await market.contract.configuration()
let period = config.proofs.period
return Periodicity(seconds: period)
method proofTimeout*(
market: OnChainMarket
): Future[uint64] {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to get Marketplace config"):
let config = await market.config()
method proofTimeout*(market: OnChainMarket): Future[UInt256] {.async.} =
convertEthersError:
let config = await market.contract.configuration()
return config.proofs.timeout
method repairRewardPercentage*(
market: OnChainMarket
): Future[uint8] {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to get Marketplace config"):
let config = await market.config()
return config.collateral.repairRewardPercentage
method requestDurationLimit*(market: OnChainMarket): Future[uint64] {.async.} =
convertEthersError("Failed to get Marketplace config"):
let config = await market.config()
return config.requestDurationLimit
method proofDowntime*(
market: OnChainMarket
): Future[uint8] {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to get Marketplace config"):
let config = await market.config()
method proofDowntime*(market: OnChainMarket): Future[uint8] {.async.} =
convertEthersError:
let config = await market.contract.configuration()
return config.proofs.downtime
method getPointer*(market: OnChainMarket, slotId: SlotId): Future[uint8] {.async.} =
convertEthersError("Failed to get slot pointer"):
convertEthersError:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.getPointer(slotId, overrides)
method myRequests*(market: OnChainMarket): Future[seq[RequestId]] {.async.} =
convertEthersError("Failed to get my requests"):
convertEthersError:
return await market.contract.myRequests
method mySlots*(market: OnChainMarket): Future[seq[SlotId]] {.async.} =
convertEthersError("Failed to get my slots"):
convertEthersError:
let slots = await market.contract.mySlots()
debug "Fetched my slots", numSlots = len(slots)
debug "Fetched my slots", numSlots=len(slots)
return slots
method requestStorage(
market: OnChainMarket, request: StorageRequest
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to request storage"):
method requestStorage(market: OnChainMarket, request: StorageRequest){.async.} =
convertEthersError:
debug "Requesting storage"
await market.approveFunds(request.totalPrice())
discard await market.contract.requestStorage(request).confirm(1)
await market.approveFunds(request.price())
discard await market.contract.requestStorage(request).confirm(0)
method getRequest*(
market: OnChainMarket, id: RequestId
): Future[?StorageRequest] {.async: (raises: [CancelledError]).} =
try:
let key = $id
method getRequest(market: OnChainMarket,
id: RequestId): Future[?StorageRequest] {.async.} =
convertEthersError:
try:
return some await market.contract.getRequest(id)
except ProviderError as e:
if e.msgDetail.contains("Unknown request"):
return none StorageRequest
raise e
if key in market.requestCache:
return some market.requestCache[key]
let request = await market.contract.getRequest(id)
market.requestCache[key] = request
return some request
except Marketplace_UnknownRequest, KeyError:
warn "Cannot retrieve the request", error = getCurrentExceptionMsg()
return none StorageRequest
except EthersError as e:
error "Cannot retrieve the request", error = e.msg
return none StorageRequest
method requestState*(
market: OnChainMarket, requestId: RequestId
): Future[?RequestState] {.async.} =
convertEthersError("Failed to get request state"):
method requestState*(market: OnChainMarket,
requestId: RequestId): Future[?RequestState] {.async.} =
convertEthersError:
try:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return some await market.contract.requestState(requestId, overrides)
except Marketplace_UnknownRequest:
return none RequestState
except ProviderError as e:
if e.msgDetail.contains("Unknown request"):
return none RequestState
raise e
method slotState*(
market: OnChainMarket, slotId: SlotId
): Future[SlotState] {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to fetch the slot state from the Marketplace contract"):
method slotState*(market: OnChainMarket,
slotId: SlotId): Future[SlotState] {.async.} =
convertEthersError:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.slotState(slotId, overrides)
method getRequestEnd*(
market: OnChainMarket, id: RequestId
): Future[SecondsSince1970] {.async.} =
convertEthersError("Failed to get request end"):
method getRequestEnd*(market: OnChainMarket,
id: RequestId): Future[SecondsSince1970] {.async.} =
convertEthersError:
return await market.contract.requestEnd(id)
method requestExpiresAt*(
market: OnChainMarket, id: RequestId
): Future[SecondsSince1970] {.async.} =
convertEthersError("Failed to get request expiry"):
method requestExpiresAt*(market: OnChainMarket,
id: RequestId): Future[SecondsSince1970] {.async.} =
convertEthersError:
return await market.contract.requestExpiry(id)
method getHost(
market: OnChainMarket, requestId: RequestId, slotIndex: uint64
): Future[?Address] {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to get slot's host"):
method getHost(market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256): Future[?Address] {.async.} =
convertEthersError:
let slotId = slotId(requestId, slotIndex)
let address = await market.contract.getHost(slotId)
if address != Address.default:
return some address
@ -246,435 +149,277 @@ method getHost(
else:
return none Address
method currentCollateral*(
market: OnChainMarket, slotId: SlotId
): Future[UInt256] {.async: (raises: [MarketError, CancelledError]).} =
convertEthersError("Failed to get slot's current collateral"):
return await market.contract.currentCollateral(slotId)
method getActiveSlot*(market: OnChainMarket, slotId: SlotId): Future[?Slot] {.async.} =
convertEthersError("Failed to get active slot"):
method getActiveSlot*(market: OnChainMarket,
slotId: SlotId): Future[?Slot] {.async.} =
convertEthersError:
try:
return some await market.contract.getActiveSlot(slotId)
except Marketplace_SlotIsFree:
return none Slot
except ProviderError as e:
if e.msgDetail.contains("Slot is free"):
return none Slot
raise e
method fillSlot(
market: OnChainMarket,
requestId: RequestId,
slotIndex: uint64,
proof: Groth16Proof,
collateral: UInt256,
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to fill slot"):
method fillSlot(market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256,
proof: Groth16Proof,
collateral: UInt256) {.async.} =
convertEthersError:
logScope:
requestId
slotIndex
try:
await market.approveFunds(collateral)
await market.approveFunds(collateral)
trace "calling fillSlot on contract"
discard await market.contract.fillSlot(requestId, slotIndex, proof).confirm(0)
trace "fillSlot transaction completed"
# Add 10% to gas estimate to deal with different evm code flow when we
# happen to be the last one to fill a slot in this request
trace "estimating gas for fillSlot"
let gas = await market.contract.estimateGas.fillSlot(requestId, slotIndex, proof)
let gasLimit = (gas * 110) div 100
let overrides = TransactionOverrides(gasLimit: some gasLimit)
method freeSlot*(market: OnChainMarket, slotId: SlotId) {.async.} =
convertEthersError:
var freeSlot: Future[Confirmable]
if rewardRecipient =? market.rewardRecipient:
# If --reward-recipient specified, use it as the reward recipient, and use
# the SP's address as the collateral recipient
let collateralRecipient = await market.getSigner()
freeSlot = market.contract.freeSlot(
slotId,
rewardRecipient, # --reward-recipient
collateralRecipient) # SP's address
trace "calling fillSlot on contract", estimatedGas = gas, gasLimit = gasLimit
discard await market.contract
.fillSlot(requestId, slotIndex, proof, overrides)
.confirm(1)
trace "fillSlot transaction completed"
except Marketplace_SlotNotFree as parent:
raise newException(
SlotStateMismatchError, "Failed to fill slot because the slot is not free",
parent,
)
else:
# Otherwise, use the SP's address as both the reward and collateral
# recipient (the contract will use msg.sender for both)
freeSlot = market.contract.freeSlot(slotId)
method freeSlot*(
market: OnChainMarket, slotId: SlotId
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to free slot"):
try:
var freeSlot: Future[Confirmable]
if rewardRecipient =? market.rewardRecipient:
# If --reward-recipient specified, use it as the reward recipient, and use
# the SP's address as the collateral recipient
let collateralRecipient = await market.getSigner()
discard await freeSlot.confirm(0)
# Add 200% to gas estimate to deal with different evm code flow when we
# happen to be the one to make the request fail
let gas = await market.contract.estimateGas.freeSlot(
slotId, rewardRecipient, collateralRecipient
)
let gasLimit = gas * 3
let overrides = TransactionOverrides(gasLimit: some gasLimit)
trace "calling freeSlot on contract", estimatedGas = gas, gasLimit = gasLimit
method withdrawFunds(market: OnChainMarket,
requestId: RequestId) {.async.} =
convertEthersError:
discard await market.contract.withdrawFunds(requestId).confirm(0)
freeSlot = market.contract.freeSlot(
slotId,
rewardRecipient, # --reward-recipient
collateralRecipient, # SP's address
overrides,
)
else:
# Otherwise, use the SP's address as both the reward and collateral
# recipient (the contract will use msg.sender for both)
# Add 200% to gas estimate to deal with different evm code flow when we
# happen to be the one to make the request fail
let gas = await market.contract.estimateGas.freeSlot(slotId)
let gasLimit = gas * 3
let overrides = TransactionOverrides(gasLimit: some (gasLimit))
trace "calling freeSlot on contract", estimatedGas = gas, gasLimit = gasLimit
freeSlot = market.contract.freeSlot(slotId, overrides)
discard await freeSlot.confirm(1)
except Marketplace_SlotIsFree as parent:
raise newException(
SlotStateMismatchError, "Failed to free slot, slot is already free", parent
)
method withdrawFunds(
market: OnChainMarket, requestId: RequestId
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to withdraw funds"):
discard await market.contract.withdrawFunds(requestId).confirm(1)
method isProofRequired*(market: OnChainMarket, id: SlotId): Future[bool] {.async.} =
convertEthersError("Failed to get proof requirement"):
method isProofRequired*(market: OnChainMarket,
id: SlotId): Future[bool] {.async.} =
convertEthersError:
try:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.isProofRequired(id, overrides)
except Marketplace_SlotIsFree:
return false
except ProviderError as e:
if e.msgDetail.contains("Slot is free"):
return false
raise e
method willProofBeRequired*(market: OnChainMarket, id: SlotId): Future[bool] {.async.} =
convertEthersError("Failed to get future proof requirement"):
method willProofBeRequired*(market: OnChainMarket,
id: SlotId): Future[bool] {.async.} =
convertEthersError:
try:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.willProofBeRequired(id, overrides)
except Marketplace_SlotIsFree:
return false
except ProviderError as e:
if e.msgDetail.contains("Slot is free"):
return false
raise e
method getChallenge*(
market: OnChainMarket, id: SlotId
): Future[ProofChallenge] {.async.} =
convertEthersError("Failed to get proof challenge"):
method getChallenge*(market: OnChainMarket, id: SlotId): Future[ProofChallenge] {.async.} =
convertEthersError:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.getChallenge(id, overrides)
method submitProof*(
market: OnChainMarket, id: SlotId, proof: Groth16Proof
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to submit proof"):
try:
discard await market.contract.submitProof(id, proof).confirm(1)
except Proofs_InvalidProof as parent:
raise newException(
ProofInvalidError, "Failed to submit proof because the proof is invalid", parent
)
method submitProof*(market: OnChainMarket,
id: SlotId,
proof: Groth16Proof) {.async.} =
convertEthersError:
discard await market.contract.submitProof(id, proof).confirm(0)
method markProofAsMissing*(
market: OnChainMarket, id: SlotId, period: Period
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to mark proof as missing"):
# Add 50% to gas estimate to deal with different evm code flow when we
# happen to be the one to make the request fail
let gas = await market.contract.estimateGas.markProofAsMissing(id, period)
let gasLimit = (gas * 150) div 100
let overrides = TransactionOverrides(gasLimit: some gasLimit)
method markProofAsMissing*(market: OnChainMarket,
id: SlotId,
period: Period) {.async.} =
convertEthersError:
discard await market.contract.markProofAsMissing(id, period).confirm(0)
trace "calling markProofAsMissing on contract",
estimatedGas = gas, gasLimit = gasLimit
discard await market.contract.markProofAsMissing(id, period, overrides).confirm(1)
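# A standalone sketch of the estimate-then-pad gas pattern used above:
# fillSlot adds 10%, reserveSlot 25%, markProofAsMissing 50%, and freeSlot
# 200% of headroom. `padGasLimit` is a hypothetical helper, not part of the
# source.
import pkg/stint

proc padGasLimit(estimate: UInt256, percent: int): UInt256 =
  # increase a gas estimate by `percent` percent of headroom
  (estimate * (100 + percent).u256) div 100.u256

when isMainModule:
  assert padGasLimit(200_000.u256, 50) == 300_000.u256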
method canMarkProofAsMissing*(
market: OnChainMarket, id: SlotId, period: Period
): Future[bool] {.async: (raises: [CancelledError]).} =
method canProofBeMarkedAsMissing*(
market: OnChainMarket,
id: SlotId,
period: Period
): Future[bool] {.async.} =
let provider = market.contract.provider
let contractWithoutSigner = market.contract.connect(provider)
let overrides = CallOverrides(blockTag: some BlockTag.pending)
try:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
discard await market.contract.canMarkProofAsMissing(id, period, overrides)
discard await contractWithoutSigner.markProofAsMissing(id, period, overrides)
return true
except EthersError as e:
trace "Proof cannot be marked as missing", msg = e.msg
return false
method reserveSlot*(
market: OnChainMarket, requestId: RequestId, slotIndex: uint64
) {.async: (raises: [CancelledError, MarketError]).} =
convertEthersError("Failed to reserve slot"):
try:
# Add 25% to gas estimate to deal with different evm code flow when we
# happen to be the last one that is allowed to reserve the slot
let gas = await market.contract.estimateGas.reserveSlot(requestId, slotIndex)
let gasLimit = (gas * 125) div 100
let overrides = TransactionOverrides(gasLimit: some gasLimit)
market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256) {.async.} =
trace "calling reserveSlot on contract", estimatedGas = gas, gasLimit = gasLimit
discard
await market.contract.reserveSlot(requestId, slotIndex, overrides).confirm(1)
except SlotReservations_ReservationNotAllowed:
raise newException(
SlotReservationNotAllowedError,
"Failed to reserve slot because reservation is not allowed",
)
convertEthersError:
discard await market.contract.reserveSlot(
requestId,
slotIndex,
# reserveSlot runs out of gas for unknown reason, but 100k gas covers it
TransactionOverrides(gasLimit: some 100000.u256)
).confirm(0)
method canReserveSlot*(
market: OnChainMarket, requestId: RequestId, slotIndex: uint64
): Future[bool] {.async.} =
convertEthersError("Unable to determine if slot can be reserved"):
market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256): Future[bool] {.async.} =
convertEthersError:
return await market.contract.canReserveSlot(requestId, slotIndex)
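# A usage sketch (hypothetical call site; error handling elided): check
# whether the reservation would still be allowed before spending gas on it.
#   if await market.canReserveSlot(requestId, slotIndex):
#     await market.reserveSlot(requestId, slotIndex)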
method subscribeRequests*(
market: OnChainMarket, callback: OnRequest
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!StorageRequested) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in Request subscription", msg = eventErr.msg
return
method subscribeRequests*(market: OnChainMarket,
callback: OnRequest):
Future[MarketSubscription] {.async.} =
proc onEvent(event: StorageRequested) {.upraises:[].} =
callback(event.requestId,
event.ask,
event.expiry)
callback(event.requestId, event.ask, event.expiry)
convertEthersError("Failed to subscribe to StorageRequested events"):
convertEthersError:
let subscription = await market.contract.subscribe(StorageRequested, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeSlotFilled*(
market: OnChainMarket, callback: OnSlotFilled
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!SlotFilled) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in SlotFilled subscription", msg = eventErr.msg
return
method subscribeSlotFilled*(market: OnChainMarket,
callback: OnSlotFilled):
Future[MarketSubscription] {.async.} =
proc onEvent(event: SlotFilled) {.upraises:[].} =
callback(event.requestId, event.slotIndex)
convertEthersError("Failed to subscribe to SlotFilled events"):
convertEthersError:
let subscription = await market.contract.subscribe(SlotFilled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeSlotFilled*(
market: OnChainMarket,
requestId: RequestId,
slotIndex: uint64,
callback: OnSlotFilled,
): Future[MarketSubscription] {.async.} =
proc onSlotFilled(eventRequestId: RequestId, eventSlotIndex: uint64) =
method subscribeSlotFilled*(market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256,
callback: OnSlotFilled):
Future[MarketSubscription] {.async.} =
proc onSlotFilled(eventRequestId: RequestId, eventSlotIndex: UInt256) =
if eventRequestId == requestId and eventSlotIndex == slotIndex:
callback(requestId, slotIndex)
convertEthersError("Failed to subscribe to SlotFilled events"):
convertEthersError:
return await market.subscribeSlotFilled(onSlotFilled)
method subscribeSlotFreed*(
market: OnChainMarket, callback: OnSlotFreed
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!SlotFreed) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in SlotFreed subscription", msg = eventErr.msg
return
method subscribeSlotFreed*(market: OnChainMarket,
callback: OnSlotFreed):
Future[MarketSubscription] {.async.} =
proc onEvent(event: SlotFreed) {.upraises:[].} =
callback(event.requestId, event.slotIndex)
convertEthersError("Failed to subscribe to SlotFreed events"):
convertEthersError:
let subscription = await market.contract.subscribe(SlotFreed, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeSlotReservationsFull*(
market: OnChainMarket, callback: OnSlotReservationsFull
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!SlotReservationsFull) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in SlotReservationsFull subscription",
msg = eventErr.msg
return
market: OnChainMarket,
callback: OnSlotReservationsFull): Future[MarketSubscription] {.async.} =
proc onEvent(event: SlotReservationsFull) {.upraises:[].} =
callback(event.requestId, event.slotIndex)
convertEthersError("Failed to subscribe to SlotReservationsFull events"):
convertEthersError:
let subscription = await market.contract.subscribe(SlotReservationsFull, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeFulfillment(
market: OnChainMarket, callback: OnFulfillment
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!RequestFulfilled) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in RequestFulfillment subscription", msg = eventErr.msg
return
method subscribeFulfillment(market: OnChainMarket,
callback: OnFulfillment):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFulfilled) {.upraises:[].} =
callback(event.requestId)
convertEthersError("Failed to subscribe to RequestFulfilled events"):
convertEthersError:
let subscription = await market.contract.subscribe(RequestFulfilled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeFulfillment(
market: OnChainMarket, requestId: RequestId, callback: OnFulfillment
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!RequestFulfilled) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in RequestFulfillment subscription", msg = eventErr.msg
return
method subscribeFulfillment(market: OnChainMarket,
requestId: RequestId,
callback: OnFulfillment):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFulfilled) {.upraises:[].} =
if event.requestId == requestId:
callback(event.requestId)
convertEthersError("Failed to subscribe to RequestFulfilled events"):
convertEthersError:
let subscription = await market.contract.subscribe(RequestFulfilled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestCancelled*(
market: OnChainMarket, callback: OnRequestCancelled
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!RequestCancelled) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in RequestCancelled subscription", msg = eventErr.msg
return
method subscribeRequestCancelled*(market: OnChainMarket,
callback: OnRequestCancelled):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestCancelled) {.upraises:[].} =
callback(event.requestId)
convertEthersError("Failed to subscribe to RequestCancelled events"):
convertEthersError:
let subscription = await market.contract.subscribe(RequestCancelled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestCancelled*(
market: OnChainMarket, requestId: RequestId, callback: OnRequestCancelled
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!RequestCancelled) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in RequestCancelled subscription", msg = eventErr.msg
return
method subscribeRequestCancelled*(market: OnChainMarket,
requestId: RequestId,
callback: OnRequestCancelled):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestCancelled) {.upraises:[].} =
if event.requestId == requestId:
callback(event.requestId)
convertEthersError("Failed to subscribe to RequestCancelled events"):
convertEthersError:
let subscription = await market.contract.subscribe(RequestCancelled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestFailed*(
market: OnChainMarket, callback: OnRequestFailed
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!RequestFailed) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in RequestFailed subscription", msg = eventErr.msg
return
method subscribeRequestFailed*(market: OnChainMarket,
callback: OnRequestFailed):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFailed) {.upraises:[]} =
callback(event.requestId)
convertEthersError("Failed to subscribe to RequestFailed events"):
convertEthersError:
let subscription = await market.contract.subscribe(RequestFailed, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestFailed*(
market: OnChainMarket, requestId: RequestId, callback: OnRequestFailed
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!RequestFailed) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in RequestFailed subscription", msg = eventErr.msg
return
method subscribeRequestFailed*(market: OnChainMarket,
requestId: RequestId,
callback: OnRequestFailed):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFailed) {.upraises:[]} =
if event.requestId == requestId:
callback(event.requestId)
convertEthersError("Failed to subscribe to RequestFailed events"):
convertEthersError:
let subscription = await market.contract.subscribe(RequestFailed, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeProofSubmission*(
market: OnChainMarket, callback: OnProofSubmitted
): Future[MarketSubscription] {.async.} =
proc onEvent(eventResult: ?!ProofSubmitted) {.raises: [].} =
without event =? eventResult, eventErr:
error "There was an error in ProofSubmitted subscription", msg = eventErr.msg
return
method subscribeProofSubmission*(market: OnChainMarket,
callback: OnProofSubmitted):
Future[MarketSubscription] {.async.} =
proc onEvent(event: ProofSubmitted) {.upraises: [].} =
callback(event.id)
convertEthersError("Failed to subscribe to ProofSubmitted events"):
convertEthersError:
let subscription = await market.contract.subscribe(ProofSubmitted, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method unsubscribe*(subscription: OnChainMarketSubscription) {.async.} =
await subscription.eventSubscription.unsubscribe()
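# A usage sketch for the subscription API above (hypothetical caller code;
# the callback signature follows the new uint64-based OnSlotFreed):
#   let subscription = await market.subscribeSlotFreed(
#     proc(requestId: RequestId, slotIndex: uint64) =
#       echo "slot freed: ", slotIndex
#   )
#   # ... later, when no longer interested:
#   await subscription.unsubscribe()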
method queryPastSlotFilledEvents*(
market: OnChainMarket, fromBlock: BlockTag
): Future[seq[SlotFilled]] {.async.} =
convertEthersError("Failed to get past SlotFilled events from block"):
return await market.contract.queryFilter(SlotFilled, fromBlock, BlockTag.latest)
method queryPastEvents*[T: MarketplaceEvent](
market: OnChainMarket,
_: type T,
blocksAgo: int): Future[seq[T]] {.async.} =
method queryPastSlotFilledEvents*(
market: OnChainMarket, blocksAgo: int
): Future[seq[SlotFilled]] {.async.} =
convertEthersError("Failed to get past SlotFilled events"):
let fromBlock = await market.contract.provider.pastBlockTag(blocksAgo)
convertEthersError:
let contract = market.contract
let provider = contract.provider
return await market.queryPastSlotFilledEvents(fromBlock)
let head = await provider.getBlockNumber()
let fromBlock = BlockTag.init(head - blocksAgo.abs.u256)
method queryPastSlotFilledEvents*(
market: OnChainMarket, fromTime: SecondsSince1970
): Future[seq[SlotFilled]] {.async.} =
convertEthersError("Failed to get past SlotFilled events from time"):
let fromBlock = await market.contract.provider.blockNumberForEpoch(fromTime)
return await market.queryPastSlotFilledEvents(BlockTag.init(fromBlock))
method queryPastStorageRequestedEvents*(
market: OnChainMarket, fromBlock: BlockTag
): Future[seq[StorageRequested]] {.async.} =
convertEthersError("Failed to get past StorageRequested events from block"):
return
await market.contract.queryFilter(StorageRequested, fromBlock, BlockTag.latest)
method queryPastStorageRequestedEvents*(
market: OnChainMarket, blocksAgo: int
): Future[seq[StorageRequested]] {.async.} =
convertEthersError("Failed to get past StorageRequested events"):
let fromBlock = await market.contract.provider.pastBlockTag(blocksAgo)
return await market.queryPastStorageRequestedEvents(fromBlock)
method slotCollateral*(
market: OnChainMarket, requestId: RequestId, slotIndex: uint64
): Future[?!UInt256] {.async: (raises: [CancelledError]).} =
let slotid = slotId(requestId, slotIndex)
try:
let slotState = await market.slotState(slotid)
without request =? await market.getRequest(requestId):
return failure newException(
MarketError, "Failure calculating the slotCollateral, cannot get the request"
)
return market.slotCollateral(request.ask.collateralPerSlot, slotState)
except MarketError as error:
error "Error when trying to calculate the slotCollateral", error = error.msg
return failure error
method slotCollateral*(
market: OnChainMarket, collateralPerSlot: UInt256, slotState: SlotState
): ?!UInt256 {.raises: [].} =
if slotState == SlotState.Repair:
without repairRewardPercentage =?
market.configuration .? collateral .? repairRewardPercentage:
return failure newException(
MarketError,
"Failure calculating the slotCollateral, cannot get the reward percentage",
)
return success (
collateralPerSlot - (collateralPerSlot * repairRewardPercentage.u256).div(
100.u256
)
)
return success(collateralPerSlot)
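# A self-contained worked example of the repair discount above (assumed
# numbers, not from the contracts): a host filling a slot in Repair state
# posts the collateral minus the repair reward percentage.
when isMainModule:
  import pkg/stint
  let collateralPerSlot = 100.u256
  let repairRewardPercentage = 10'u8
  let discounted =
    collateralPerSlot - (collateralPerSlot * repairRewardPercentage.u256).div(100.u256)
  assert discounted == 90.u256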
return await contract.queryFilter(T,
fromBlock,
BlockTag.latest)


@ -17,182 +17,40 @@ export requests
type
Marketplace* = ref object of Contract
Marketplace_RepairRewardPercentageTooHigh* = object of SolidityError
Marketplace_SlashPercentageTooHigh* = object of SolidityError
Marketplace_MaximumSlashingTooHigh* = object of SolidityError
Marketplace_InvalidExpiry* = object of SolidityError
Marketplace_InvalidMaxSlotLoss* = object of SolidityError
Marketplace_InsufficientSlots* = object of SolidityError
Marketplace_InvalidClientAddress* = object of SolidityError
Marketplace_RequestAlreadyExists* = object of SolidityError
Marketplace_InvalidSlot* = object of SolidityError
Marketplace_SlotNotFree* = object of SolidityError
Marketplace_InvalidSlotHost* = object of SolidityError
Marketplace_AlreadyPaid* = object of SolidityError
Marketplace_TransferFailed* = object of SolidityError
Marketplace_UnknownRequest* = object of SolidityError
Marketplace_InvalidState* = object of SolidityError
Marketplace_StartNotBeforeExpiry* = object of SolidityError
Marketplace_SlotNotAcceptingProofs* = object of SolidityError
Marketplace_SlotIsFree* = object of SolidityError
Marketplace_ReservationRequired* = object of SolidityError
Marketplace_NothingToWithdraw* = object of SolidityError
Marketplace_InsufficientDuration* = object of SolidityError
Marketplace_InsufficientProofProbability* = object of SolidityError
Marketplace_InsufficientCollateral* = object of SolidityError
Marketplace_InsufficientReward* = object of SolidityError
Marketplace_InvalidCid* = object of SolidityError
Marketplace_DurationExceedsLimit* = object of SolidityError
Proofs_InsufficientBlockHeight* = object of SolidityError
Proofs_InvalidProof* = object of SolidityError
Proofs_ProofAlreadySubmitted* = object of SolidityError
Proofs_PeriodNotEnded* = object of SolidityError
Proofs_ValidationTimedOut* = object of SolidityError
Proofs_ProofNotMissing* = object of SolidityError
Proofs_ProofNotRequired* = object of SolidityError
Proofs_ProofAlreadyMarkedMissing* = object of SolidityError
Periods_InvalidSecondsPerPeriod* = object of SolidityError
SlotReservations_ReservationNotAllowed* = object of SolidityError
proc configuration*(marketplace: Marketplace): MarketplaceConfig {.contract, view.}
proc token*(marketplace: Marketplace): Address {.contract, view.}
proc currentCollateral*(
marketplace: Marketplace, id: SlotId
): UInt256 {.contract, view.}
proc requestStorage*(
marketplace: Marketplace, request: StorageRequest
): Confirmable {.
contract,
errors: [
Marketplace_InvalidClientAddress, Marketplace_RequestAlreadyExists,
Marketplace_InvalidExpiry, Marketplace_InsufficientSlots,
Marketplace_InvalidMaxSlotLoss, Marketplace_InsufficientDuration,
Marketplace_InsufficientProofProbability, Marketplace_InsufficientCollateral,
Marketplace_InsufficientReward, Marketplace_InvalidCid,
]
.}
proc fillSlot*(
marketplace: Marketplace, requestId: RequestId, slotIndex: uint64, proof: Groth16Proof
): Confirmable {.
contract,
errors: [
Marketplace_InvalidSlot, Marketplace_ReservationRequired, Marketplace_SlotNotFree,
Marketplace_StartNotBeforeExpiry, Marketplace_UnknownRequest,
]
.}
proc withdrawFunds*(
marketplace: Marketplace, requestId: RequestId
): Confirmable {.
contract,
errors: [
Marketplace_InvalidClientAddress, Marketplace_InvalidState,
Marketplace_NothingToWithdraw, Marketplace_UnknownRequest,
]
.}
proc withdrawFunds*(
marketplace: Marketplace, requestId: RequestId, withdrawAddress: Address
): Confirmable {.
contract,
errors: [
Marketplace_InvalidClientAddress, Marketplace_InvalidState,
Marketplace_NothingToWithdraw, Marketplace_UnknownRequest,
]
.}
proc freeSlot*(
marketplace: Marketplace, id: SlotId
): Confirmable {.
contract,
errors: [
Marketplace_InvalidSlotHost, Marketplace_AlreadyPaid,
Marketplace_StartNotBeforeExpiry, Marketplace_UnknownRequest, Marketplace_SlotIsFree,
]
.}
proc freeSlot*(
marketplace: Marketplace,
id: SlotId,
rewardRecipient: Address,
collateralRecipient: Address,
): Confirmable {.
contract,
errors: [
Marketplace_InvalidSlotHost, Marketplace_AlreadyPaid,
Marketplace_StartNotBeforeExpiry, Marketplace_UnknownRequest, Marketplace_SlotIsFree,
]
.}
proc getRequest*(
marketplace: Marketplace, id: RequestId
): StorageRequest {.contract, view, errors: [Marketplace_UnknownRequest].}
proc slashMisses*(marketplace: Marketplace): UInt256 {.contract, view.}
proc slashPercentage*(marketplace: Marketplace): UInt256 {.contract, view.}
proc minCollateralThreshold*(marketplace: Marketplace): UInt256 {.contract, view.}
proc requestStorage*(marketplace: Marketplace, request: StorageRequest): Confirmable {.contract.}
proc fillSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256, proof: Groth16Proof): Confirmable {.contract.}
proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId): Confirmable {.contract.}
proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId, withdrawAddress: Address): Confirmable {.contract.}
proc freeSlot*(marketplace: Marketplace, id: SlotId): Confirmable {.contract.}
proc freeSlot*(marketplace: Marketplace, id: SlotId, rewardRecipient: Address, collateralRecipient: Address): Confirmable {.contract.}
proc getRequest*(marketplace: Marketplace, id: RequestId): StorageRequest {.contract, view.}
proc getHost*(marketplace: Marketplace, id: SlotId): Address {.contract, view.}
proc getActiveSlot*(
marketplace: Marketplace, id: SlotId
): Slot {.contract, view, errors: [Marketplace_SlotIsFree].}
proc getActiveSlot*(marketplace: Marketplace, id: SlotId): Slot {.contract, view.}
proc myRequests*(marketplace: Marketplace): seq[RequestId] {.contract, view.}
proc mySlots*(marketplace: Marketplace): seq[SlotId] {.contract, view.}
proc requestState*(
marketplace: Marketplace, requestId: RequestId
): RequestState {.contract, view, errors: [Marketplace_UnknownRequest].}
proc requestState*(marketplace: Marketplace, requestId: RequestId): RequestState {.contract, view.}
proc slotState*(marketplace: Marketplace, slotId: SlotId): SlotState {.contract, view.}
proc requestEnd*(
marketplace: Marketplace, requestId: RequestId
): SecondsSince1970 {.contract, view.}
proc requestEnd*(marketplace: Marketplace, requestId: RequestId): SecondsSince1970 {.contract, view.}
proc requestExpiry*(marketplace: Marketplace, requestId: RequestId): SecondsSince1970 {.contract, view.}
proc requestExpiry*(
marketplace: Marketplace, requestId: RequestId
): SecondsSince1970 {.contract, view.}
proc proofTimeout*(marketplace: Marketplace): UInt256 {.contract, view.}
proc proofEnd*(marketplace: Marketplace, id: SlotId): UInt256 {.contract, view.}
proc missingProofs*(marketplace: Marketplace, id: SlotId): UInt256 {.contract, view.}
proc isProofRequired*(marketplace: Marketplace, id: SlotId): bool {.contract, view.}
proc willProofBeRequired*(marketplace: Marketplace, id: SlotId): bool {.contract, view.}
proc getChallenge*(
marketplace: Marketplace, id: SlotId
): array[32, byte] {.contract, view.}
proc getChallenge*(marketplace: Marketplace, id: SlotId): array[32, byte] {.contract, view.}
proc getPointer*(marketplace: Marketplace, id: SlotId): uint8 {.contract, view.}
proc submitProof*(
marketplace: Marketplace, id: SlotId, proof: Groth16Proof
): Confirmable {.
contract,
errors:
[Proofs_ProofAlreadySubmitted, Proofs_InvalidProof, Marketplace_UnknownRequest]
.}
proc submitProof*(marketplace: Marketplace, id: SlotId, proof: Groth16Proof): Confirmable {.contract.}
proc markProofAsMissing*(marketplace: Marketplace, id: SlotId, period: UInt256): Confirmable {.contract.}
proc markProofAsMissing*(
marketplace: Marketplace, id: SlotId, period: uint64
): Confirmable {.
contract,
errors: [
Marketplace_SlotNotAcceptingProofs, Marketplace_StartNotBeforeExpiry,
Proofs_PeriodNotEnded, Proofs_ValidationTimedOut, Proofs_ProofNotMissing,
Proofs_ProofNotRequired, Proofs_ProofAlreadyMarkedMissing,
]
.}
proc canMarkProofAsMissing*(
marketplace: Marketplace, id: SlotId, period: uint64
): Confirmable {.
contract,
errors: [
Marketplace_SlotNotAcceptingProofs, Proofs_PeriodNotEnded,
Proofs_ValidationTimedOut, Proofs_ProofNotMissing, Proofs_ProofNotRequired,
Proofs_ProofAlreadyMarkedMissing,
]
.}
proc reserveSlot*(
marketplace: Marketplace, requestId: RequestId, slotIndex: uint64
): Confirmable {.contract.}
proc canReserveSlot*(
marketplace: Marketplace, requestId: RequestId, slotIndex: uint64
): bool {.contract, view.}
proc reserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): Confirmable {.contract.}
proc canReserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): bool {.contract, view.}


@ -1,22 +1,19 @@
import pkg/stint
import pkg/contractabi
import pkg/ethers/contracts/fields
import pkg/ethers/fields
type
Groth16Proof* = object
a*: G1Point
b*: G2Point
c*: G1Point
G1Point* = object
x*: UInt256
y*: UInt256
# A field element F_{p^2} encoded as `real + i * imag`
Fp2Element* = object
real*: UInt256
imag*: UInt256
G2Point* = object
x*: Fp2Element
y*: Fp2Element


@ -1,123 +0,0 @@
import pkg/ethers/provider
import pkg/chronos
import pkg/questionable
import ../logutils
from ../clock import SecondsSince1970
logScope:
topics = "marketplace onchain provider"
proc raiseProviderError(message: string) {.raises: [ProviderError].} =
raise newException(ProviderError, message)
proc blockNumberAndTimestamp*(
provider: Provider, blockTag: BlockTag
): Future[(UInt256, UInt256)] {.async: (raises: [ProviderError, CancelledError]).} =
without latestBlock =? await provider.getBlock(blockTag):
raiseProviderError("Could not get latest block")
without latestBlockNumber =? latestBlock.number:
raiseProviderError("Could not get latest block number")
return (latestBlockNumber, latestBlock.timestamp)
proc binarySearchFindClosestBlock(
provider: Provider, epochTime: int, low: UInt256, high: UInt256
): Future[UInt256] {.async: (raises: [ProviderError, CancelledError]).} =
let (_, lowTimestamp) = await provider.blockNumberAndTimestamp(BlockTag.init(low))
let (_, highTimestamp) = await provider.blockNumberAndTimestamp(BlockTag.init(high))
if abs(lowTimestamp.truncate(int) - epochTime) <
abs(highTimestamp.truncate(int) - epochTime):
return low
else:
return high
proc binarySearchBlockNumberForEpoch(
provider: Provider,
epochTime: UInt256,
latestBlockNumber: UInt256,
earliestBlockNumber: UInt256,
): Future[UInt256] {.async: (raises: [ProviderError, CancelledError]).} =
var low = earliestBlockNumber
var high = latestBlockNumber
while low <= high:
if low == 0 and high == 0:
return low
let mid = (low + high) div 2
let (midBlockNumber, midBlockTimestamp) =
await provider.blockNumberAndTimestamp(BlockTag.init(mid))
if midBlockTimestamp < epochTime:
low = mid + 1
elif midBlockTimestamp > epochTime:
high = mid - 1
else:
return midBlockNumber
# NOTICE that because of how the binary search is implemented, when it
# finishes low is always greater than high - this is why we pass high
# where intuitively we would use low:
await provider.binarySearchFindClosestBlock(
epochTime.truncate(int), low = high, high = low
)
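# A worked example of the crossing noted above (assumed timestamps): if
# blocks 10 and 11 have timestamps 100 and 130 and we search for epoch 120,
# the loop ends with low = 11 and high = 10; the closest-block step then
# compares |100 - 120| = 20 against |130 - 120| = 10 and picks block 11.
when isMainModule:
  assert abs(100 - 120) > abs(130 - 120)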
proc blockNumberForEpoch*(
provider: Provider, epochTime: SecondsSince1970
): Future[UInt256] {.async: (raises: [ProviderError, CancelledError]).} =
let epochTimeUInt256 = epochTime.u256
let (latestBlockNumber, latestBlockTimestamp) =
await provider.blockNumberAndTimestamp(BlockTag.latest)
let (earliestBlockNumber, earliestBlockTimestamp) =
await provider.blockNumberAndTimestamp(BlockTag.earliest)
# Initially we used the average block time to predict
# the number of blocks we need to look back in order to find
# the block number corresponding to the given epoch time.
# This estimation can be highly inaccurate if block time
# was changing in the past or is fluctuating and therefore
# we used that information initially only to find out
# if the available history is long enough to perform an effective search.
# It turns out we do not have to do that. There is an easier way.
#
# First we check if the given epoch time equals the timestamp of either
# the earliest or the latest block. If it does, we just return the
# block number of that block.
#
# Otherwise, if the earliest available block is not the genesis block,
# we should check the timestamp of that earliest block and if it is greater
# than the epoch time, we should issue a warning and return
# that earliest block number.
# In all other cases, thus when the earliest block is not the genesis
# block but its timestamp is not greater than the requested epoch time, or
# if the earliest available block is the genesis block,
# (which means we have the whole history available), we should proceed with
# the binary search.
#
# An additional benefit of this method is that we do not have to rely
# on the average block time, which not only makes the whole thing
# more reliable, but also easier to test.
# Are we lucky today?
if earliestBlockTimestamp == epochTimeUInt256:
return earliestBlockNumber
if latestBlockTimestamp == epochTimeUInt256:
return latestBlockNumber
if earliestBlockNumber > 0 and earliestBlockTimestamp > epochTimeUInt256:
let availableHistoryInDays =
(latestBlockTimestamp - earliestBlockTimestamp) div 1.days.secs.u256
warn "Short block history detected.",
earliestBlockTimestamp = earliestBlockTimestamp, days = availableHistoryInDays
return earliestBlockNumber
return await provider.binarySearchBlockNumberForEpoch(
epochTimeUInt256, latestBlockNumber, earliestBlockNumber
)
proc pastBlockTag*(
provider: Provider, blocksAgo: int
): Future[BlockTag] {.async: (raises: [ProviderError, CancelledError]).} =
let head = await provider.getBlockNumber()
return BlockTag.init(head - blocksAgo.abs.u256)
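# A usage sketch (hypothetical values): query SlotFilled events emitted over
# roughly the last 100 blocks.
#   let fromBlock = await provider.pastBlockTag(100)
#   let filled = await market.queryPastSlotFilledEvents(fromBlock)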


@ -2,14 +2,13 @@ import std/hashes
import std/sequtils
import std/typetraits
import pkg/contractabi
import pkg/nimcrypto/keccak
import pkg/ethers/contracts/fields
import pkg/nimcrypto
import pkg/ethers/fields
import pkg/questionable/results
import pkg/stew/byteutils
import pkg/libp2p/[cid, multicodec]
import pkg/upraises
import ../logutils
import ../utils/json
from ../errors import mapFailure
export contractabi
@ -18,26 +17,22 @@ type
client* {.serialize.}: Address
ask* {.serialize.}: StorageAsk
content* {.serialize.}: StorageContent
expiry* {.serialize.}: uint64
expiry* {.serialize.}: UInt256
nonce*: Nonce
StorageAsk* = object
proofProbability* {.serialize.}: UInt256
pricePerBytePerSecond* {.serialize.}: UInt256
collateralPerByte* {.serialize.}: UInt256
slots* {.serialize.}: uint64
slotSize* {.serialize.}: uint64
duration* {.serialize.}: uint64
slotSize* {.serialize.}: UInt256
duration* {.serialize.}: UInt256
proofProbability* {.serialize.}: UInt256
reward* {.serialize.}: UInt256
collateral* {.serialize.}: UInt256
maxSlotLoss* {.serialize.}: uint64
StorageContent* = object
cid* {.serialize.}: Cid
cid* {.serialize.}: string
merkleRoot*: array[32, byte]
Slot* = object
request* {.serialize.}: StorageRequest
slotIndex* {.serialize.}: uint64
slotIndex* {.serialize.}: UInt256
SlotId* = distinct array[32, byte]
RequestId* = distinct array[32, byte]
Nonce* = distinct array[32, byte]
@ -47,7 +42,6 @@ type
Cancelled
Finished
Failed
SlotState* {.pure.} = enum
Free
Filled
@ -55,7 +49,6 @@ type
Failed
Paid
Cancelled
Repair
proc `==`*(x, y: Nonce): bool {.borrow.}
proc `==`*(x, y: RequestId): bool {.borrow.}
@ -87,43 +80,44 @@ proc toHex*[T: distinct](id: T): string =
type baseType = T.distinctBase
baseType(id).toHex
logutils.formatIt(LogFormat.textLines, Nonce):
it.short0xHexLog
logutils.formatIt(LogFormat.textLines, RequestId):
it.short0xHexLog
logutils.formatIt(LogFormat.textLines, SlotId):
it.short0xHexLog
logutils.formatIt(LogFormat.json, Nonce):
it.to0xHexLog
logutils.formatIt(LogFormat.json, RequestId):
it.to0xHexLog
logutils.formatIt(LogFormat.json, SlotId):
it.to0xHexLog
logutils.formatIt(LogFormat.textLines, Nonce): it.short0xHexLog
logutils.formatIt(LogFormat.textLines, RequestId): it.short0xHexLog
logutils.formatIt(LogFormat.textLines, SlotId): it.short0xHexLog
logutils.formatIt(LogFormat.json, Nonce): it.to0xHexLog
logutils.formatIt(LogFormat.json, RequestId): it.to0xHexLog
logutils.formatIt(LogFormat.json, SlotId): it.to0xHexLog
func fromTuple(_: type StorageRequest, tupl: tuple): StorageRequest =
StorageRequest(
client: tupl[0], ask: tupl[1], content: tupl[2], expiry: tupl[3], nonce: tupl[4]
client: tupl[0],
ask: tupl[1],
content: tupl[2],
expiry: tupl[3],
nonce: tupl[4]
)
func fromTuple(_: type Slot, tupl: tuple): Slot =
Slot(request: tupl[0], slotIndex: tupl[1])
Slot(
request: tupl[0],
slotIndex: tupl[1]
)
func fromTuple(_: type StorageAsk, tupl: tuple): StorageAsk =
StorageAsk(
proofProbability: tupl[0],
pricePerBytePerSecond: tupl[1],
collateralPerByte: tupl[2],
slots: tupl[3],
slotSize: tupl[4],
duration: tupl[5],
maxSlotLoss: tupl[6],
slots: tupl[0],
slotSize: tupl[1],
duration: tupl[2],
proofProbability: tupl[3],
reward: tupl[4],
collateral: tupl[5],
maxSlotLoss: tupl[6]
)
func fromTuple(_: type StorageContent, tupl: tuple): StorageContent =
StorageContent(cid: tupl[0], merkleRoot: tupl[1])
func solidityType*(_: type Cid): string =
solidityType(seq[byte])
StorageContent(
cid: tupl[0],
merkleRoot: tupl[1]
)
func solidityType*(_: type StorageContent): string =
solidityType(StorageContent.fieldTypes)
@ -134,10 +128,6 @@ func solidityType*(_: type StorageAsk): string =
func solidityType*(_: type StorageRequest): string =
solidityType(StorageRequest.fieldTypes)
# Note: it seems to be ok to ignore the vbuffer offset for now
func encode*(encoder: var AbiEncoder, cid: Cid) =
encoder.write(cid.data.buffer)
func encode*(encoder: var AbiEncoder, content: StorageContent) =
encoder.write(content.fieldValues)
@ -150,12 +140,8 @@ func encode*(encoder: var AbiEncoder, id: RequestId | SlotId | Nonce) =
func encode*(encoder: var AbiEncoder, request: StorageRequest) =
encoder.write(request.fieldValues)
func encode*(encoder: var AbiEncoder, slot: Slot) =
encoder.write(slot.fieldValues)
func decode*(decoder: var AbiDecoder, T: type Cid): ?!T =
let data = ?decoder.read(seq[byte])
Cid.init(data).mapFailure
func encode*(encoder: var AbiEncoder, request: Slot) =
encoder.write(request.fieldValues)
func decode*(decoder: var AbiDecoder, T: type StorageContent): ?!T =
let tupl = ?decoder.read(StorageContent.fieldTypes)
@ -174,33 +160,27 @@ func decode*(decoder: var AbiDecoder, T: type Slot): ?!T =
success Slot.fromTuple(tupl)
func id*(request: StorageRequest): RequestId =
let encoding = AbiEncoder.encode((request,))
let encoding = AbiEncoder.encode((request, ))
RequestId(keccak256.digest(encoding).data)
func slotId*(requestId: RequestId, slotIndex: uint64): SlotId =
func slotId*(requestId: RequestId, slotIndex: UInt256): SlotId =
let encoding = AbiEncoder.encode((requestId, slotIndex))
SlotId(keccak256.digest(encoding).data)
func slotId*(request: StorageRequest, slotIndex: uint64): SlotId =
func slotId*(request: StorageRequest, slotIndex: UInt256): SlotId =
slotId(request.id, slotIndex)
func id*(slot: Slot): SlotId =
slotId(slot.request, slot.slotIndex)
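# The derivation above, spelled out: a slot id is
# keccak256(abiEncode(requestId, slotIndex)), so the same request id and
# slot index always hash to the same SlotId. A sketch (hypothetical values):
#   let sid = slotId(request.id, 0'u64)
#   assert sid == slot.id # when slot.request == request and slot.slotIndex == 0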
func pricePerSlotPerSecond*(ask: StorageAsk): UInt256 =
ask.pricePerBytePerSecond * ask.slotSize.u256
func pricePerSlot*(ask: StorageAsk): UInt256 =
ask.duration.u256 * ask.pricePerSlotPerSecond
ask.duration * ask.reward
func totalPrice*(ask: StorageAsk): UInt256 =
func price*(ask: StorageAsk): UInt256 =
ask.slots.u256 * ask.pricePerSlot
func totalPrice*(request: StorageRequest): UInt256 =
request.ask.totalPrice
func price*(request: StorageRequest): UInt256 =
request.ask.price
func collateralPerSlot*(ask: StorageAsk): UInt256 =
ask.collateralPerByte * ask.slotSize.u256
func size*(ask: StorageAsk): uint64 =
ask.slots * ask.slotSize
func size*(ask: StorageAsk): UInt256 =
ask.slots.u256 * ask.slotSize
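# A self-contained worked example of the new pricing formulas above
# (assumed numbers): 1 wei per byte per second, 1 MiB slots, 10 slots,
# one hour of duration.
when isMainModule:
  import pkg/stint
  let pricePerBytePerSecond = 1.u256
  let slotSize = 1_048_576'u64
  let duration = 3600'u64
  let slots = 10'u64
  let pricePerSlotPerSecond = pricePerBytePerSecond * slotSize.u256
  let pricePerSlot = duration.u256 * pricePerSlotPerSecond
  assert slots.u256 * pricePerSlot == 37_748_736_000'u64.u256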


@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,19 +7,16 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import std/algorithm
import std/net
import std/sequtils
import pkg/chronos
import pkg/libp2p/[cid, multicodec, routing_record, signed_envelope]
import pkg/questionable
import pkg/questionable/results
import pkg/stew/shims/net
import pkg/contractabi/address as ca
import pkg/codexdht/discv5/[routing_table, protocol as discv5]
from pkg/nimcrypto import keccak256
import ./rng
import ./errors
@ -34,16 +31,15 @@ export discv5
logScope:
topics = "codex discovery"
type Discovery* = ref object of RootObj
protocol*: discv5.Protocol # dht protocol
key: PrivateKey # private key
peerId: PeerId # the peer id of the local node
announceAddrs*: seq[MultiAddress] # addresses announced as part of the provider records
providerRecord*: ?SignedPeerRecord
# record to advertise node connection information; this carries any
# address that the node can be connected on
dhtRecord*: ?SignedPeerRecord # record to advertise DHT connection information
isStarted: bool
type
Discovery* = ref object of RootObj
protocol*: discv5.Protocol # dht protocol
key: PrivateKey # private key
peerId: PeerId # the peer id of the local node
announceAddrs*: seq[MultiAddress] # addresses announced as part of the provider records
providerRecord*: ?SignedPeerRecord # record to advertise node connection information; this carries any
# address that the node can be connected on
dhtRecord*: ?SignedPeerRecord # record to advertise DHT connection information
proc toNodeId*(cid: Cid): NodeId =
## Cid to discovery id
@ -58,121 +54,82 @@ proc toNodeId*(host: ca.Address): NodeId =
readUintBE[256](keccak256.digest(host.toArray).data)
proc findPeer*(
d: Discovery, peerId: PeerId
): Future[?PeerRecord] {.async: (raises: [CancelledError]).} =
d: Discovery,
peerId: PeerId): Future[?PeerRecord] {.async.} =
trace "protocol.resolve..."
## Find peer using the given Discovery object
##
let
node = await d.protocol.resolve(toNodeId(peerId))
try:
let node = await d.protocol.resolve(toNodeId(peerId))
return
if node.isSome():
node.get().record.data.some
else:
PeerRecord.none
except CancelledError as exc:
warn "Error finding peer", peerId = peerId, exc = exc.msg
raise exc
except CatchableError as exc:
warn "Error finding peer", peerId = peerId, exc = exc.msg
return PeerRecord.none
return
if node.isSome():
node.get().record.data.some
else:
PeerRecord.none
method find*(
d: Discovery, cid: Cid
): Future[seq[SignedPeerRecord]] {.async: (raises: [CancelledError]), base.} =
d: Discovery,
cid: Cid): Future[seq[SignedPeerRecord]] {.async, base.} =
## Find block providers
##
without providers =?
(await d.protocol.getProviders(cid.toNodeId())).mapFailure, error:
warn "Error finding providers for block", cid, error = error.msg
try:
without providers =? (await d.protocol.getProviders(cid.toNodeId())).mapFailure,
error:
warn "Error finding providers for block", cid, error = error.msg
return providers.filterIt( not (it.data.peerId == d.peerId) )
return providers.filterIt(not (it.data.peerId == d.peerId))
except CancelledError as exc:
warn "Error finding providers for block", cid, exc = exc.msg
raise exc
except CatchableError as exc:
warn "Error finding providers for block", cid, exc = exc.msg
method provide*(d: Discovery, cid: Cid) {.async: (raises: [CancelledError]), base.} =
method provide*(d: Discovery, cid: Cid) {.async, base.} =
## Provide a block Cid
##
try:
let nodes = await d.protocol.addProvider(cid.toNodeId(), d.providerRecord.get)
let
nodes = await d.protocol.addProvider(
cid.toNodeId(), d.providerRecord.get)
if nodes.len <= 0:
warn "Couldn't provide to any nodes!"
if nodes.len <= 0:
warn "Couldn't provide to any nodes!"
except CancelledError as exc:
warn "Error providing block", cid, exc = exc.msg
raise exc
except CatchableError as exc:
warn "Error providing block", cid, exc = exc.msg
method find*(
d: Discovery, host: ca.Address
): Future[seq[SignedPeerRecord]] {.async: (raises: [CancelledError]), base.} =
d: Discovery,
host: ca.Address): Future[seq[SignedPeerRecord]] {.async, base.} =
## Find host providers
##
try:
trace "Finding providers for host", host = $host
without var providers =? (await d.protocol.getProviders(host.toNodeId())).mapFailure,
error:
trace "Error finding providers for host", host = $host, exc = error.msg
return
trace "Finding providers for host", host = $host
without var providers =?
(await d.protocol.getProviders(host.toNodeId())).mapFailure, error:
trace "Error finding providers for host", host = $host, exc = error.msg
return
if providers.len <= 0:
trace "No providers found", host = $host
return
if providers.len <= 0:
trace "No providers found", host = $host
return
providers.sort do(a, b: SignedPeerRecord) -> int:
system.cmp[uint64](a.data.seqNo, b.data.seqNo)
providers.sort do(a, b: SignedPeerRecord) -> int:
system.cmp[uint64](a.data.seqNo, b.data.seqNo)
return providers
except CancelledError as exc:
warn "Error finding providers for host", host = $host, exc = exc.msg
raise exc
except CatchableError as exc:
warn "Error finding providers for host", host = $host, exc = exc.msg
return providers
method provide*(
d: Discovery, host: ca.Address
) {.async: (raises: [CancelledError]), base.} =
method provide*(d: Discovery, host: ca.Address) {.async, base.} =
## Provide hosts
##
try:
trace "Providing host", host = $host
let nodes = await d.protocol.addProvider(host.toNodeId(), d.providerRecord.get)
if nodes.len > 0:
trace "Provided to nodes", nodes = nodes.len
except CancelledError as exc:
warn "Error providing host", host = $host, exc = exc.msg
raise exc
except CatchableError as exc:
warn "Error providing host", host = $host, exc = exc.msg
trace "Providing host", host = $host
let
nodes = await d.protocol.addProvider(
host.toNodeId(), d.providerRecord.get)
if nodes.len > 0:
trace "Provided to nodes", nodes = nodes.len
method removeProvider*(
d: Discovery, peerId: PeerId
): Future[void] {.base, async: (raises: [CancelledError]).} =
d: Discovery,
peerId: PeerId): Future[void] {.base.} =
## Remove provider from providers table
##
trace "Removing provider", peerId
try:
await d.protocol.removeProvidersLocal(peerId)
except CancelledError as exc:
warn "Error removing provider", peerId = peerId, exc = exc.msg
raise exc
except CatchableError as exc:
warn "Error removing provider", peerId = peerId, exc = exc.msg
except Exception as exc: # Something in discv5 is raising Exception
warn "Error removing provider", peerId = peerId, exc = exc.msg
raiseAssert("Unexpected Exception in removeProvider")
d.protocol.removeProvidersLocal(peerId)
proc updateAnnounceRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
## Update providers record
@ -180,58 +137,54 @@ proc updateAnnounceRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
d.announceAddrs = @addrs
info "Updating announce record", addrs = d.announceAddrs
d.providerRecord = SignedPeerRecord
.init(d.key, PeerRecord.init(d.peerId, d.announceAddrs))
.expect("Should construct signed record").some
trace "Updating announce record", addrs = d.announceAddrs
d.providerRecord = SignedPeerRecord.init(
d.key, PeerRecord.init(d.peerId, d.announceAddrs))
.expect("Should construct signed record").some
if not d.protocol.isNil:
d.protocol.updateRecord(d.providerRecord).expect("Should update SPR")
d.protocol.updateRecord(d.providerRecord)
.expect("Should update SPR")
proc updateDhtRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
proc updateDhtRecord*(d: Discovery, ip: ValidIpAddress, port: Port) =
## Update providers record
##
info "Updating Dht record", addrs = addrs
d.dhtRecord = SignedPeerRecord
.init(d.key, PeerRecord.init(d.peerId, @addrs))
.expect("Should construct signed record").some
trace "Updating Dht record", ip, port = $port
d.dhtRecord = SignedPeerRecord.init(
d.key, PeerRecord.init(d.peerId, @[
MultiAddress.init(
ip,
IpTransportProtocol.udpProtocol,
port)])).expect("Should construct signed record").some
if not d.protocol.isNil:
d.protocol.updateRecord(d.dhtRecord).expect("Should update SPR")
d.protocol.updateRecord(d.dhtRecord)
.expect("Should update SPR")
proc start*(d: Discovery) {.async: (raises: []).} =
try:
d.protocol.open()
await d.protocol.start()
d.isStarted = true
except CatchableError as exc:
error "Error starting discovery", exc = exc.msg
proc start*(d: Discovery) {.async.} =
d.protocol.open()
await d.protocol.start()
proc stop*(d: Discovery) {.async: (raises: []).} =
if not d.isStarted:
warn "Discovery not started, skipping stop"
return
try:
await noCancel d.protocol.closeWait()
except CatchableError as exc:
error "Error stopping discovery", exc = exc.msg
proc stop*(d: Discovery) {.async.} =
await d.protocol.closeWait()
proc new*(
T: type Discovery,
key: PrivateKey,
bindIp = IPv4_any(),
bindIp = ValidIpAddress.init(IPv4_any()),
bindPort = 0.Port,
announceAddrs: openArray[MultiAddress],
bootstrapNodes: openArray[SignedPeerRecord] = [],
store: Datastore = SQLiteDatastore.new(Memory).expect("Should not fail!"),
store: Datastore = SQLiteDatastore.new(Memory).expect("Should not fail!")
): Discovery =
## Create a new Discovery node instance for the given key and datastore
##
var self =
Discovery(key: key, peerId: PeerId.init(key).expect("Should construct PeerId"))
var
self = Discovery(
key: key,
peerId: PeerId.init(key).expect("Should construct PeerId"))
self.updateAnnounceRecord(announceAddrs)
@ -239,20 +192,22 @@ proc new*(
# FIXME disable IP limits temporarily so we can run our workshop. Re-enable
# and figure out proper solution.
let discoveryConfig = DiscoveryConfig(
tableIpLimits: TableIpLimits(tableIpLimit: high(uint), bucketIpLimit: high(uint)),
bitsPerHop: DefaultBitsPerHop,
tableIpLimits: TableIpLimits(
tableIpLimit: high(uint),
bucketIpLimit:high(uint)
),
bitsPerHop: DefaultBitsPerHop
)
# --------------------------------------------------------------------------
self.protocol = newProtocol(
key,
bindIp = bindIp,
bindIp = bindIp.toNormalIp,
bindPort = bindPort,
record = self.providerRecord.get,
bootstrapRecords = bootstrapNodes,
rng = Rng.instance(),
providers = ProvidersManager.new(store),
config = discoveryConfig,
)
config = discoveryConfig)
self


@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))


@ -0,0 +1,225 @@
## Nim-Codex
## Copyright (c) 2024 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import std/sequtils
import pkg/taskpools
import pkg/taskpools/flowvars
import pkg/chronos
import pkg/chronos/threadsync
import pkg/questionable/results
import ./backend
import ../errors
import ../logutils
logScope:
topics = "codex asyncerasure"
const
CompletitionTimeout = 1.seconds # Maximum await time for completion after receiving a signal
CompletitionRetryDelay = 10.millis
type
EncoderBackendPtr = ptr EncoderBackend
DecoderBackendPtr = ptr DecoderBackend
# Args objects intentionally omit the seq[seq[byte]] data fields, to avoid unnecessary data copies
EncodeTaskArgs = object
signal: ThreadSignalPtr
backend: EncoderBackendPtr
blockSize: int
ecM: int
DecodeTaskArgs = object
signal: ThreadSignalPtr
backend: DecoderBackendPtr
blockSize: int
ecK: int
SharedArrayHolder*[T] = object
data: ptr UncheckedArray[T]
size: int
EncodeTaskResult = Result[SharedArrayHolder[byte], cstring]
DecodeTaskResult = Result[SharedArrayHolder[byte], cstring]
proc encodeTask(args: EncodeTaskArgs, data: seq[seq[byte]]): EncodeTaskResult =
var
data = data.unsafeAddr
parity = newSeqWith[seq[byte]](args.ecM, newSeq[byte](args.blockSize))
try:
let res = args.backend[].encode(data[], parity)
if res.isOk:
let
resDataSize = parity.len * args.blockSize
resData = cast[ptr UncheckedArray[byte]](allocShared0(resDataSize))
arrHolder = SharedArrayHolder[byte](
data: resData,
size: resDataSize
)
for i in 0..<parity.len:
copyMem(addr resData[i * args.blockSize], addr parity[i][0], args.blockSize)
return ok(arrHolder)
else:
return err(res.error)
except CatchableError as exception:
return err(exception.msg.cstring)
finally:
if err =? args.signal.fireSync().mapFailure.errorOption():
error "Error firing signal", msg = err.msg
proc decodeTask(args: DecodeTaskArgs, data: seq[seq[byte]], parity: seq[seq[byte]]): DecodeTaskResult =
var
data = data.unsafeAddr
parity = parity.unsafeAddr
recovered = newSeqWith[seq[byte]](args.ecK, newSeq[byte](args.blockSize))
try:
let res = args.backend[].decode(data[], parity[], recovered)
if res.isOk:
let
resDataSize = recovered.len * args.blockSize
resData = cast[ptr UncheckedArray[byte]](allocShared0(resDataSize))
arrHolder = SharedArrayHolder[byte](
data: resData,
size: resDataSize
)
for i in 0..<recovered.len:
copyMem(addr resData[i * args.blockSize], addr recovered[i][0], args.blockSize)
return ok(arrHolder)
else:
return err(res.error)
except CatchableError as exception:
return err(exception.msg.cstring)
finally:
if err =? args.signal.fireSync().mapFailure.errorOption():
error "Error firing signal", msg = err.msg
proc proxySpawnEncodeTask(
tp: Taskpool,
args: EncodeTaskArgs,
data: ref seq[seq[byte]]
): Flowvar[EncodeTaskResult] =
# FIXME Uncomment the code below after addressing an issue:
# https://github.com/codex-storage/nim-codex/issues/854
# tp.spawn encodeTask(args, data[])
let fv = EncodeTaskResult.newFlowVar
fv.readyWith(encodeTask(args, data[]))
return fv
proc proxySpawnDecodeTask(
tp: Taskpool,
args: DecodeTaskArgs,
data: ref seq[seq[byte]],
parity: ref seq[seq[byte]]
): Flowvar[DecodeTaskResult] =
# FIXME Uncomment the code below after addressing an issue:
# https://github.com/codex-storage/nim-codex/issues/854
# tp.spawn decodeTask(args, data[], parity[])
let fv = DecodeTaskResult.newFlowVar
fv.readyWith(decodeTask(args, data[], parity[]))
return fv
proc awaitResult[T](signal: ThreadSignalPtr, handle: Flowvar[T]): Future[?!T] {.async.} =
await wait(signal)
var
res: T
awaitTotal: Duration
while awaitTotal < CompletionTimeout:
if handle.tryComplete(res):
return success(res)
else:
awaitTotal += CompletionRetryDelay
await sleepAsync(CompletionRetryDelay)
return failure("Task signaled finish but didn't return any result within " & $CompletionTimeout)
proc asyncEncode*(
tp: Taskpool,
backend: EncoderBackend,
data: ref seq[seq[byte]],
blockSize: int,
ecM: int
): Future[?!ref seq[seq[byte]]] {.async.} =
without signal =? ThreadSignalPtr.new().mapFailure, err:
return failure(err)
try:
let
blockSize = data[0].len
args = EncodeTaskArgs(signal: signal, backend: unsafeAddr backend, blockSize: blockSize, ecM: ecM)
handle = proxySpawnEncodeTask(tp, args, data)
without res =? await awaitResult(signal, handle), err:
return failure(err)
if res.isOk:
var parity = seq[seq[byte]].new()
parity[].setLen(ecM)
for i in 0..<parity[].len:
parity[i] = newSeq[byte](blockSize)
copyMem(addr parity[i][0], addr res.value.data[i * blockSize], blockSize)
deallocShared(res.value.data)
return success(parity)
else:
return failure($res.error)
finally:
if err =? signal.close().mapFailure.errorOption():
error "Error closing signal", msg = $err.msg
proc asyncDecode*(
tp: Taskpool,
backend: DecoderBackend,
data, parity: ref seq[seq[byte]],
blockSize: int
): Future[?!ref seq[seq[byte]]] {.async.} =
without signal =? ThreadSignalPtr.new().mapFailure, err:
return failure(err)
try:
let
ecK = data[].len
args = DecodeTaskArgs(signal: signal, backend: unsafeAddr backend, blockSize: blockSize, ecK: ecK)
handle = proxySpawnDecodeTask(tp, args, data, parity)
without res =? await awaitResult(signal, handle), err:
return failure(err)
if res.isOk:
var recovered = seq[seq[byte]].new()
recovered[].setLen(ecK)
for i in 0..<recovered[].len:
recovered[i] = newSeq[byte](blockSize)
copyMem(addr recovered[i][0], addr res.value.data[i * blockSize], blockSize)
deallocShared(res.value.data)
return success(recovered)
else:
return failure($res.error)
finally:
if err =? signal.close().mapFailure.errorOption():
error "Error closing signal", msg = $err.msg

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,38 +7,41 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [], gcsafe.}
import pkg/upraises
push: {.upraises: [].}
import ../stores
type
ErasureBackend* = ref object of RootObj
blockSize*: int # block size in bytes
buffers*: int # number of original pieces
parity*: int # number of redundancy pieces
buffers*: int # number of original pieces
parity*: int # number of redundancy pieces
EncoderBackend* = ref object of ErasureBackend
DecoderBackend* = ref object of ErasureBackend
method release*(self: ErasureBackend) {.base, gcsafe.} =
method release*(self: ErasureBackend) {.base.} =
## release the backend
##
raiseAssert("not implemented!")
method encode*(
self: EncoderBackend,
buffers, parity: ptr UncheckedArray[ptr UncheckedArray[byte]],
dataLen, parityLen: int,
): Result[void, cstring] {.base, gcsafe.} =
buffers,
parity: var openArray[seq[byte]]
): Result[void, cstring] {.base.} =
## encode buffers using a backend
##
raiseAssert("not implemented!")
method decode*(
self: DecoderBackend,
buffers, parity, recovered: ptr UncheckedArray[ptr UncheckedArray[byte]],
dataLen, parityLen, recoveredLen: int,
): Result[void, cstring] {.base, gcsafe.} =
buffers,
parity,
recovered: var openArray[seq[byte]]
): Result[void, cstring] {.base.} =
## decode buffers using a backend
##
raiseAssert("not implemented!")

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -10,7 +10,7 @@
import std/options
import pkg/leopard
import pkg/results
import pkg/stew/results
import ../backend
@ -22,39 +22,43 @@ type
decoder*: Option[LeoDecoder]
method encode*(
self: LeoEncoderBackend,
data, parity: ptr UncheckedArray[ptr UncheckedArray[byte]],
dataLen, parityLen: int,
): Result[void, cstring] =
self: LeoEncoderBackend,
data,
parity: var openArray[seq[byte]]): Result[void, cstring] =
## Encode data using Leopard backend
if parityLen == 0:
if parity.len == 0:
return ok()
var encoder =
if self.encoder.isNone:
self.encoder = (?LeoEncoder.init(self.blockSize, self.buffers, self.parity)).some
var encoder = if self.encoder.isNone:
self.encoder = (? LeoEncoder.init(
self.blockSize,
self.buffers,
self.parity)).some
self.encoder.get()
else:
self.encoder.get()
encoder.encode(data, parity, dataLen, parityLen)
encoder.encode(data, parity)
method decode*(
self: LeoDecoderBackend,
data, parity, recovered: ptr UncheckedArray[ptr UncheckedArray[byte]],
dataLen, parityLen, recoveredLen: int,
): Result[void, cstring] =
self: LeoDecoderBackend,
data,
parity,
recovered: var openArray[seq[byte]]): Result[void, cstring] =
## Decode data using the given Leopard backend
var decoder =
if self.decoder.isNone:
self.decoder = (?LeoDecoder.init(self.blockSize, self.buffers, self.parity)).some
self.decoder = (? LeoDecoder.init(
self.blockSize,
self.buffers,
self.parity)).some
self.decoder.get()
else:
self.decoder.get()
decoder.decode(data, parity, recovered, dataLen, parityLen, recoveredLen)
decoder.decode(data, parity, recovered)
method release*(self: LeoEncoderBackend) =
if self.encoder.isSome:
@ -65,15 +69,25 @@ method release*(self: LeoDecoderBackend) =
self.decoder.get().free()
proc new*(
T: type LeoEncoderBackend, blockSize, buffers, parity: int
): LeoEncoderBackend =
T: type LeoEncoderBackend,
blockSize,
buffers,
parity: int): LeoEncoderBackend =
## Create an instance of a Leopard Encoder backend
##
LeoEncoderBackend(blockSize: blockSize, buffers: buffers, parity: parity)
LeoEncoderBackend(
blockSize: blockSize,
buffers: buffers,
parity: parity)
proc new*(
T: type LeoDecoderBackend, blockSize, buffers, parity: int
): LeoDecoderBackend =
T: type LeoDecoderBackend,
blockSize,
buffers,
parity: int): LeoDecoderBackend =
## Create an instance of a Leopard Decoder backend
##
LeoDecoderBackend(blockSize: blockSize, buffers: buffers, parity: parity)
LeoDecoderBackend(
blockSize: blockSize,
buffers: buffers,
parity: parity)
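A hedged, synchronous usage sketch of the Leopard backend itself; shard size and counts are illustrative assumptions:

import std/sequtils
import ./backends/leopard # assumed module path

let enc = LeoEncoderBackend.new(64, 4, 2) # blockSize = 64, K = 4, M = 2
var
  data = newSeqWith(4, newSeq[byte](64))   # data shards (fill before encoding)
  parity = newSeqWith(2, newSeq[byte](64)) # parity shards, written by encode
if (let res = enc.encode(data, parity); res.isErr):
  echo "leopard error: ", res.error
enc.release()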

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,13 +7,14 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [], gcsafe.}
import pkg/upraises
import std/[sugar, atomics, sequtils]
push: {.upraises: [].}
import std/sequtils
import std/sugar
import pkg/chronos
import pkg/chronos/threadsync
import pkg/chronicles
import pkg/libp2p/[multicodec, cid, multihash]
import pkg/libp2p/protobuf/minprotobuf
import pkg/taskpools
@ -22,17 +23,16 @@ import ../logutils
import ../manifest
import ../merkletree
import ../stores
import ../clock
import ../blocktype as bt
import ../utils
import ../utils/asynciter
import ../indexingstrategy
import ../errors
import ../utils/arrayutils
import pkg/stew/byteutils
import ./backend
import ./asyncbackend
export backend
@ -62,17 +62,18 @@ type
## columns (with up to M blocks missing per column),
## or any combination there of.
##
EncoderProvider* =
proc(size, blocks, parity: int): EncoderBackend {.raises: [Defect], noSideEffect.}
DecoderProvider* =
proc(size, blocks, parity: int): DecoderBackend {.raises: [Defect], noSideEffect.}
EncoderProvider* = proc(size, blocks, parity: int): EncoderBackend
{.raises: [Defect], noSideEffect.}
DecoderProvider* = proc(size, blocks, parity: int): DecoderBackend
{.raises: [Defect], noSideEffect.}
Erasure* = ref object
taskPool: Taskpool
encoderProvider*: EncoderProvider
decoderProvider*: DecoderProvider
store*: BlockStore
taskpool: Taskpool
EncodingParams = object
ecK: Natural
@ -89,24 +90,6 @@ type
# provided.
minSize*: NBytes
EncodeTask = object
success: Atomic[bool]
erasure: ptr Erasure
blocks: ptr UncheckedArray[ptr UncheckedArray[byte]]
parity: ptr UncheckedArray[ptr UncheckedArray[byte]]
blockSize, blocksLen, parityLen: int
signal: ThreadSignalPtr
DecodeTask = object
success: Atomic[bool]
erasure: ptr Erasure
blocks: ptr UncheckedArray[ptr UncheckedArray[byte]]
parity: ptr UncheckedArray[ptr UncheckedArray[byte]]
recovered: ptr UncheckedArray[ptr UncheckedArray[byte]]
blockSize, blocksLen: int
parityLen, recoveredLen: int
signal: ThreadSignalPtr
func indexToPos(steps, idx, step: int): int {.inline.} =
## Convert an index to a position in the encoded
## dataset
@ -118,25 +101,21 @@ func indexToPos(steps, idx, step: int): int {.inline.} =
(idx - step) div steps
proc getPendingBlocks(
self: Erasure, manifest: Manifest, indices: seq[int]
): AsyncIter[(?!bt.Block, int)] =
self: Erasure,
manifest: Manifest,
indices: seq[int]): AsyncIter[(?!bt.Block, int)] =
## Get pending blocks iterator
##
var pendingBlocks: seq[Future[(?!bt.Block, int)]] = @[]
proc attachIndex(
fut: Future[?!bt.Block], i: int
): Future[(?!bt.Block, int)] {.async.} =
## avoids closure capture issues
return (await fut, i)
for blockIndex in indices:
var
# request blocks from the store
let fut = self.store.getBlock(BlockAddress.init(manifest.treeCid, blockIndex))
pendingBlocks.add(attachIndex(fut, blockIndex))
pendingBlocks = indices.map((i: int) =>
self.store.getBlock(
BlockAddress.init(manifest.treeCid, i)
).map((r: ?!bt.Block) => (r, i)) # Get the data blocks (first K)
)
proc isFinished(): bool =
pendingBlocks.len == 0
proc isFinished(): bool = pendingBlocks.len == 0
proc genNext(): Future[(?!bt.Block, int)] {.async.} =
let completedFut = await one(pendingBlocks)
@ -147,38 +126,36 @@ proc getPendingBlocks(
let (_, index) = await completedFut
raise newException(
CatchableError,
"Future for block id not found, tree cid: " & $manifest.treeCid & ", index: " &
$index,
)
"Future for block id not found, tree cid: " & $manifest.treeCid & ", index: " & $index)
AsyncIter[(?!bt.Block, int)].new(genNext, isFinished)
proc prepareEncodingData(
self: Erasure,
manifest: Manifest,
params: EncodingParams,
step: Natural,
data: ref seq[seq[byte]],
cids: ref seq[Cid],
emptyBlock: seq[byte],
): Future[?!Natural] {.async.} =
self: Erasure,
manifest: Manifest,
params: EncodingParams,
step: Natural,
data: ref seq[seq[byte]],
cids: ref seq[Cid],
emptyBlock: seq[byte]): Future[?!Natural] {.async.} =
## Prepare data for encoding
##
let
strategy = params.strategy.init(
firstIndex = 0, lastIndex = params.rounded - 1, iterations = params.steps
firstIndex = 0,
lastIndex = params.rounded - 1,
iterations = params.steps
)
indices = toSeq(strategy.getIndices(step))
pendingBlocksIter =
self.getPendingBlocks(manifest, indices.filterIt(it < manifest.blocksCount))
indices = toSeq(strategy.getIndices(step))
pendingBlocksIter = self.getPendingBlocks(manifest, indices.filterIt(it < manifest.blocksCount))
var resolved = 0
for fut in pendingBlocksIter:
let (blkOrErr, idx) = await fut
without blk =? blkOrErr, err:
warn "Failed retrieving a block", treeCid = manifest.treeCid, idx, msg = err.msg
return failure(err)
warn "Failed retreiving a block", treeCid = manifest.treeCid, idx, msg = err.msg
continue
let pos = indexToPos(params.steps, idx, step)
shallowCopy(data[pos], if blk.isEmpty: emptyBlock else: blk.data)
@ -186,26 +163,24 @@ proc prepareEncodingData(
resolved.inc()
for idx in indices.filterIt(it >= manifest.blocksCount):
for idx in indices.filterIt(it >= manifest.blocksCount):
let pos = indexToPos(params.steps, idx, step)
trace "Padding with empty block", idx
shallowCopy(data[pos], emptyBlock)
without emptyBlockCid =? emptyCid(manifest.version, manifest.hcodec, manifest.codec),
err:
without emptyBlockCid =? emptyCid(manifest.version, manifest.hcodec, manifest.codec), err:
return failure(err)
cids[idx] = emptyBlockCid
success(resolved.Natural)
proc prepareDecodingData(
self: Erasure,
encoded: Manifest,
step: Natural,
data: ref seq[seq[byte]],
parityData: ref seq[seq[byte]],
cids: ref seq[Cid],
emptyBlock: seq[byte],
): Future[?!(Natural, Natural)] {.async.} =
self: Erasure,
encoded: Manifest,
step: Natural,
data: ref seq[seq[byte]],
parityData: ref seq[seq[byte]],
cids: ref seq[Cid],
emptyBlock: seq[byte]): Future[?!(Natural, Natural)] {.async.} =
## Prepare data for decoding
## `encoded` - the encoded manifest
## `step` - the current step
@ -217,10 +192,12 @@ proc prepareDecodingData(
let
strategy = encoded.protectedStrategy.init(
firstIndex = 0, lastIndex = encoded.blocksCount - 1, iterations = encoded.steps
firstIndex = 0,
lastIndex = encoded.blocksCount - 1,
iterations = encoded.steps
)
indices = toSeq(strategy.getIndices(step))
pendingBlocksIter = self.getPendingBlocks(encoded, indices)
indices = toSeq(strategy.getIndices(step))
pendingBlocksIter = self.getPendingBlocks(encoded, indices)
var
dataPieces = 0
@ -234,24 +211,23 @@ proc prepareDecodingData(
let (blkOrErr, idx) = await fut
without blk =? blkOrErr, err:
trace "Failed retrieving a block", idx, treeCid = encoded.treeCid, msg = err.msg
trace "Failed retreiving a block", idx, treeCid = encoded.treeCid, msg = err.msg
continue
let pos = indexToPos(encoded.steps, idx, step)
let
pos = indexToPos(encoded.steps, idx, step)
logScope:
cid = blk.cid
idx = idx
pos = pos
step = step
cid = blk.cid
idx = idx
pos = pos
step = step
empty = blk.isEmpty
cids[idx] = blk.cid
if idx >= encoded.rounded:
trace "Retrieved parity block"
shallowCopy(
parityData[pos - encoded.ecK], if blk.isEmpty: emptyBlock else: blk.data
)
shallowCopy(parityData[pos - encoded.ecK], if blk.isEmpty: emptyBlock else: blk.data)
parityPieces.inc
else:
trace "Retrieved data block"
@ -263,19 +239,17 @@ proc prepareDecodingData(
return success (dataPieces.Natural, parityPieces.Natural)
proc init*(
_: type EncodingParams,
manifest: Manifest,
ecK: Natural,
ecM: Natural,
strategy: StrategyType,
): ?!EncodingParams =
_: type EncodingParams,
manifest: Manifest,
ecK: Natural, ecM: Natural,
strategy: StrategyType): ?!EncodingParams =
if ecK > manifest.blocksCount:
let exc = (ref InsufficientBlocksError)(
msg:
"Unable to encode manifest, not enough blocks, ecK = " & $ecK &
", blocksCount = " & $manifest.blocksCount,
minSize: ecK.NBytes * manifest.blockSize,
)
msg: "Unable to encode manifest, not enough blocks, ecK = " &
$ecK &
", blocksCount = " &
$manifest.blocksCount,
minSize: ecK.NBytes * manifest.blockSize)
return failure(exc)
let
@ -289,139 +263,62 @@ proc init*(
rounded: rounded,
steps: steps,
blocksCount: blocksCount,
strategy: strategy,
strategy: strategy
)
proc leopardEncodeTask(tp: Taskpool, task: ptr EncodeTask) {.gcsafe.} =
# Task suitable for running in taskpools - look, no GC!
let encoder =
task[].erasure.encoderProvider(task[].blockSize, task[].blocksLen, task[].parityLen)
defer:
encoder.release()
discard task[].signal.fireSync()
if (
let res =
encoder.encode(task[].blocks, task[].parity, task[].blocksLen, task[].parityLen)
res.isErr
):
warn "Error from leopard encoder backend!", error = $res.error
task[].success.store(false)
else:
task[].success.store(true)
proc asyncEncode*(
self: Erasure,
blockSize, blocksLen, parityLen: int,
blocks: ref seq[seq[byte]],
parity: ptr UncheckedArray[ptr UncheckedArray[byte]],
): Future[?!void] {.async: (raises: [CancelledError]).} =
without threadPtr =? ThreadSignalPtr.new():
return failure("Unable to create thread signal")
defer:
threadPtr.close().expect("closing once works")
var data = makeUncheckedArray(blocks)
defer:
dealloc(data)
## Create an encode task with block data
var task = EncodeTask(
erasure: addr self,
blockSize: blockSize,
blocksLen: blocksLen,
parityLen: parityLen,
blocks: data,
parity: parity,
signal: threadPtr,
)
doAssert self.taskPool.numThreads > 1,
"Must have at least one separate thread or signal will never be fired"
self.taskPool.spawn leopardEncodeTask(self.taskPool, addr task)
let threadFut = threadPtr.wait()
if joinErr =? catch(await threadFut.join()).errorOption:
if err =? catch(await noCancel threadFut).errorOption:
return failure(err)
if joinErr of CancelledError:
raise (ref CancelledError) joinErr
else:
return failure(joinErr)
if not task.success.load():
return failure("Leopard encoding task failed")
success()
proc encodeData(
self: Erasure, manifest: Manifest, params: EncodingParams
): Future[?!Manifest] {.async.} =
self: Erasure,
manifest: Manifest,
params: EncodingParams
): Future[?!Manifest] {.async.} =
## Encode blocks pointed to by the protected manifest
##
## `manifest` - the manifest to encode
##
logScope:
steps = params.steps
rounded_blocks = params.rounded
blocks_count = params.blocksCount
ecK = params.ecK
ecM = params.ecM
steps = params.steps
rounded_blocks = params.rounded
blocks_count = params.blocksCount
ecK = params.ecK
ecM = params.ecM
var
cids = seq[Cid].new()
encoder = self.encoderProvider(manifest.blockSize.int, params.ecK, params.ecM)
emptyBlock = newSeq[byte](manifest.blockSize.int)
cids[].setLen(params.blocksCount)
try:
for step in 0 ..< params.steps:
for step in 0..<params.steps:
# TODO: Don't allocate a new seq every time, allocate once and zero out
var
data = seq[seq[byte]].new() # number of blocks to encode
parity = createDoubleArray(params.ecM, manifest.blockSize.int)
defer:
freeDoubleArray(parity, params.ecM)
data[].setLen(params.ecK)
# TODO: this is a tight blocking loop so we sleep here to allow
# other events to be processed, this should be addressed
# by threading
await sleepAsync(10.millis)
without resolved =?
(await self.prepareEncodingData(manifest, params, step, data, cids, emptyBlock)),
err:
trace "Unable to prepare data", error = err.msg
(await self.prepareEncodingData(manifest, params, step, data, cids, emptyBlock)), err:
trace "Unable to prepare data", error = err.msg
return failure(err)
trace "Erasure coding data", data = data[].len, parity = params.ecM
without parity =? await asyncEncode(self.taskpool, encoder, data, manifest.blockSize.int, params.ecM), err:
trace "Error encoding data", err = err.msg
return failure(err)
trace "Erasure coding data", data = data[].len
try:
if err =? (
await self.asyncEncode(
manifest.blockSize.int, params.ecK, params.ecM, data, parity
)
).errorOption:
return failure(err)
except CancelledError as exc:
raise exc
var idx = params.rounded + step
for j in 0 ..< params.ecM:
var innerPtr: ptr UncheckedArray[byte] = parity[][j]
without blk =? bt.Block.new(innerPtr.toOpenArray(0, manifest.blockSize.int - 1)),
error:
for j in 0..<params.ecM:
without blk =? bt.Block.new(parity[j]), error:
trace "Unable to create parity block", err = error.msg
return failure(error)
trace "Adding parity block", cid = blk.cid, idx
cids[idx] = blk.cid
if error =? (await self.store.putBlock(blk)).errorOption:
warn "Unable to store block!", cid = blk.cid, msg = error.msg
if isErr (await self.store.putBlock(blk)):
trace "Unable to store block!", cid = blk.cid
return failure("Unable to store block!")
idx.inc(params.steps)
@ -440,7 +337,7 @@ proc encodeData(
datasetSize = (manifest.blockSize.int * params.blocksCount).NBytes,
ecK = params.ecK,
ecM = params.ecM,
strategy = params.strategy,
strategy = params.strategy
)
trace "Encoded data successfully", treeCid, blocksCount = params.blocksCount
@ -451,14 +348,15 @@ proc encodeData(
except CatchableError as exc:
trace "Erasure coding encoding error", exc = exc.msg
return failure(exc)
finally:
encoder.release()
proc encode*(
self: Erasure,
manifest: Manifest,
blocks: Natural,
parity: Natural,
strategy = SteppedStrategy,
): Future[?!Manifest] {.async.} =
self: Erasure,
manifest: Manifest,
blocks: Natural,
parity: Natural,
strategy = SteppedStrategy): Future[?!Manifest] {.async.} =
## Encode a manifest into one that is erasure protected.
##
## `manifest` - the original manifest to be encoded
@ -474,88 +372,20 @@ proc encode*(
return success encodedManifest
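For example, a hedged sketch of protecting a dataset through this API; `store` and `manifest` are assumed to be in hand, and the two provider procs are assumptions matching the EncoderProvider/DecoderProvider shapes above:

proc protect(store: BlockStore, manifest: Manifest): Future[?!Manifest] {.async.} =
  let erasure = Erasure.new(
    store,
    leoEncoderProvider, # assumed: proc(size, blocks, parity: int): EncoderBackend
    leoDecoderProvider, # assumed: proc(size, blocks, parity: int): DecoderBackend
    Taskpool.new())
  # 4 data blocks and 2 parity blocks per encoding step
  without protected =? await erasure.encode(manifest, blocks = 4, parity = 2), err:
    return failure(err)
  success(protected)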
proc leopardDecodeTask(tp: Taskpool, task: ptr DecodeTask) {.gcsafe.} =
# Task suitable for running in taskpools - look, no GC!
let decoder =
task[].erasure.decoderProvider(task[].blockSize, task[].blocksLen, task[].parityLen)
defer:
decoder.release()
discard task[].signal.fireSync()
proc decode*(
self: Erasure,
encoded: Manifest): Future[?!Manifest] {.async.} =
## Decode a protected manifest into its original
## manifest
##
## `encoded` - the encoded (protected) manifest to
## be recovered
##
if (
let res = decoder.decode(
task[].blocks,
task[].parity,
task[].recovered,
task[].blocksLen,
task[].parityLen,
task[].recoveredLen,
)
res.isErr
):
warn "Error from leopard decoder backend!", error = $res.error
task[].success.store(false)
else:
task[].success.store(true)
proc asyncDecode*(
self: Erasure,
blockSize, blocksLen, parityLen: int,
blocks, parity: ref seq[seq[byte]],
recovered: ptr UncheckedArray[ptr UncheckedArray[byte]],
): Future[?!void] {.async: (raises: [CancelledError]).} =
without threadPtr =? ThreadSignalPtr.new():
return failure("Unable to create thread signal")
defer:
threadPtr.close().expect("closing once works")
var
blockData = makeUncheckedArray(blocks)
parityData = makeUncheckedArray(parity)
defer:
dealloc(blockData)
dealloc(parityData)
## Create a decode task with block data
var task = DecodeTask(
erasure: addr self,
blockSize: blockSize,
blocksLen: blocksLen,
parityLen: parityLen,
recoveredLen: blocksLen,
blocks: blockData,
parity: parityData,
recovered: recovered,
signal: threadPtr,
)
doAssert self.taskPool.numThreads > 1,
"Must have at least one separate thread or signal will never be fired"
self.taskPool.spawn leopardDecodeTask(self.taskPool, addr task)
let threadFut = threadPtr.wait()
if joinErr =? catch(await threadFut.join()).errorOption:
if err =? catch(await noCancel threadFut).errorOption:
return failure(err)
if joinErr of CancelledError:
raise (ref CancelledError) joinErr
else:
return failure(joinErr)
if not task.success.load():
return failure("Leopard decoding task failed")
success()
proc decodeInternal(
self: Erasure, encoded: Manifest
): Future[?!(ref seq[Cid], seq[Natural])] {.async.} =
logScope:
steps = encoded.steps
rounded_blocks = encoded.rounded
new_manifest = encoded.blocksCount
steps = encoded.steps
rounded_blocks = encoded.rounded
new_manifest = encoded.blocksCount
var
cids = seq[Cid].new()
@ -565,27 +395,16 @@ proc decodeInternal(
cids[].setLen(encoded.blocksCount)
try:
for step in 0 ..< encoded.steps:
# TODO: this is a tight blocking loop so we sleep here to allow
# other events to be processed, this should be addressed
# by threading
await sleepAsync(10.millis)
for step in 0..<encoded.steps:
var
data = seq[seq[byte]].new()
parityData = seq[seq[byte]].new()
recovered = createDoubleArray(encoded.ecK, encoded.blockSize.int)
defer:
freeDoubleArray(recovered, encoded.ecK)
parity = seq[seq[byte]].new()
data[].setLen(encoded.ecK) # set len to K
parityData[].setLen(encoded.ecM) # set len to M
data[].setLen(encoded.ecK) # set len to K
parity[].setLen(encoded.ecM) # set len to M
without (dataPieces, _) =? (
await self.prepareDecodingData(
encoded, step, data, parityData, cids, emptyBlock
)
), err:
without (dataPieces, _) =?
(await self.prepareDecodingData(encoded, step, data, parity, cids, emptyBlock)), err:
trace "Unable to prepare data", error = err.msg
return failure(err)
@ -594,34 +413,23 @@ proc decodeInternal(
continue
trace "Erasure decoding data"
try:
if err =? (
await self.asyncDecode(
encoded.blockSize.int, encoded.ecK, encoded.ecM, data, parityData, recovered
)
).errorOption:
return failure(err)
except CancelledError as exc:
raise exc
for i in 0 ..< encoded.ecK:
without recovered =? await asyncDecode(self.taskpool, decoder, data, parity, encoded.blockSize.int), err:
trace "Error decoding data", err = err.msg
return failure(err)
for i in 0..<encoded.ecK:
let idx = i * encoded.steps + step
if data[i].len <= 0 and not cids[idx].isEmpty:
var innerPtr: ptr UncheckedArray[byte] = recovered[][i]
without blk =? bt.Block.new(
innerPtr.toOpenArray(0, encoded.blockSize.int - 1)
), error:
without blk =? bt.Block.new(recovered[i]), error:
trace "Unable to create block!", exc = error.msg
return failure(error)
trace "Recovered block", cid = blk.cid, index = i
if error =? (await self.store.putBlock(blk)).errorOption:
warn "Unable to store block!", cid = blk.cid, msg = error.msg
if isErr (await self.store.putBlock(blk)):
trace "Unable to store block!", cid = blk.cid
return failure("Unable to store block!")
self.store.completeBlock(BlockAddress.init(encoded.treeCid, idx), blk)
cids[idx] = blk.cid
recoveredIndices.add(idx)
except CancelledError as exc:
@ -633,78 +441,25 @@ proc decodeInternal(
finally:
decoder.release()
return (cids, recoveredIndices).success
proc decode*(self: Erasure, encoded: Manifest): Future[?!Manifest] {.async.} =
## Decode a protected manifest into its original
## manifest
##
## `encoded` - the encoded (protected) manifest to
## be recovered
##
without (cids, recoveredIndices) =? (await self.decodeInternal(encoded)), err:
return failure(err)
without tree =? CodexTree.init(cids[0 ..< encoded.originalBlocksCount]), err:
without tree =? CodexTree.init(cids[0..<encoded.originalBlocksCount]), err:
return failure(err)
without treeCid =? tree.rootCid, err:
return failure(err)
if treeCid != encoded.originalTreeCid:
return failure(
"Original tree root differs from the tree root computed out of recovered data"
)
return failure("Original tree root differs from the tree root computed out of recovered data")
let idxIter =
Iter[Natural].new(recoveredIndices).filter((i: Natural) => i < tree.leavesCount)
let idxIter = Iter[Natural].new(recoveredIndices)
.filter((i: Natural) => i < tree.leavesCount)
if err =? (await self.store.putSomeProofs(tree, idxIter)).errorOption:
return failure(err)
return failure(err)
let decoded = Manifest.new(encoded)
return decoded.success
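Continuing the earlier sketch, the recovery path inside an async proc might look like this (the assertion relies on the root check performed above, so the recovered manifest points back at the original tree):

without original =? await erasure.decode(protected), err:
  return failure(err)
# per the check above, the recovered tree root matches the original
assert original.treeCid == protected.originalTreeCid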
proc repair*(self: Erasure, encoded: Manifest): Future[?!void] {.async.} =
## Repair a protected manifest by reconstructing the full dataset
##
## `encoded` - the encoded (protected) manifest to
## be repaired
##
without (cids, _) =? (await self.decodeInternal(encoded)), err:
return failure(err)
without tree =? CodexTree.init(cids[0 ..< encoded.originalBlocksCount]), err:
return failure(err)
without treeCid =? tree.rootCid, err:
return failure(err)
if treeCid != encoded.originalTreeCid:
return failure(
"Original tree root differs from the tree root computed out of recovered data"
)
if err =? (await self.store.putAllProofs(tree)).errorOption:
return failure(err)
without repaired =? (
await self.encode(
Manifest.new(encoded), encoded.ecK, encoded.ecM, encoded.protectedStrategy
)
), err:
return failure(err)
if repaired.treeCid != encoded.treeCid:
return failure(
"Original tree root differs from the repaired tree root encoded out of recovered data"
)
return success()
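A hedged usage note: unlike decode, repair also re-encodes and verifies that the repaired protected tree matches the original, so a caller only needs to inspect the single ?!void result:

if err =? (await erasure.repair(protected)).errorOption:
  echo "repair failed: ", err.msg
else:
  echo "dataset fully reconstructed and re-protected"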
proc start*(self: Erasure) {.async.} =
return
@ -712,17 +467,16 @@ proc stop*(self: Erasure) {.async.} =
return
proc new*(
T: type Erasure,
store: BlockStore,
encoderProvider: EncoderProvider,
decoderProvider: DecoderProvider,
taskPool: Taskpool,
): Erasure =
T: type Erasure,
store: BlockStore,
encoderProvider: EncoderProvider,
decoderProvider: DecoderProvider,
taskpool: Taskpool): Erasure =
## Create a new Erasure instance for encoding and decoding manifests
##
Erasure(
store: store,
encoderProvider: encoderProvider,
decoderProvider: decoderProvider,
taskPool: taskPool,
)
taskpool: taskpool)

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,13 +7,9 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import std/options
import std/sugar
import std/sequtils
import pkg/results
import pkg/stew/results
import pkg/chronos
import pkg/questionable/results
@ -23,18 +19,14 @@ type
CodexError* = object of CatchableError # base codex error
CodexResult*[T] = Result[T, ref CodexError]
FinishedFailed*[T] = tuple[success: seq[Future[T]], failure: seq[Future[T]]]
template mapFailure*[T, V, E](
exp: Result[T, V], exc: typedesc[E]
exp: Result[T, V],
exc: typedesc[E],
): Result[T, ref CatchableError] =
## Convert `Result[T, E]` to `Result[E, ref CatchableError]`
##
exp.mapErr(
proc(e: V): ref CatchableError =
(ref exc)(msg: $e)
)
exp.mapErr(proc (e: V): ref CatchableError = (ref exc)(msg: $e))
template mapFailure*[T, V](exp: Result[T, V]): Result[T, ref CatchableError] =
mapFailure(exp, CodexError)
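A small illustration of mapFailure: it lifts a foreign error type (here a cstring, as returned by the erasure backends) into a ref CatchableError so it composes with the ?! results used across the codebase:

import pkg/stew/results

let raw = Result[int, cstring].err("bad input".cstring)
let lifted: Result[int, ref CatchableError] = raw.mapFailure
assert lifted.isErr and lifted.error.msg == "bad input"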
@ -46,43 +38,12 @@ func toFailure*[T](exp: Option[T]): Result[T, ref CatchableError] {.inline.} =
else:
T.failure("Option is None")
proc allFinishedFailed*[T](
futs: auto
): Future[FinishedFailed[T]] {.async: (raises: [CancelledError]).} =
## Check if all futures have finished or failed
##
## TODO: wip, not sure if we want this - at the minimum,
## we should probably avoid the async transform
proc allFutureResult*[T](fut: seq[Future[T]]): Future[?!void] {.async.} =
try:
await allFuturesThrowing(fut)
except CancelledError as exc:
raise exc
except CatchableError as exc:
return failure(exc.msg)
var res: FinishedFailed[T] = (@[], @[])
await allFutures(futs)
for f in futs:
if f.failed:
res.failure.add f
else:
res.success.add f
return res
proc allFinishedValues*[T](
futs: auto
): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
## If all futures have finished, return corresponding values,
## otherwise return failure
##
# wait for all futures to be either completed, failed or canceled
await allFutures(futs)
let numOfFailed = futs.countIt(it.failed)
if numOfFailed > 0:
return failure "Some futures failed (" & $numOfFailed & "))"
# here, we know there are no failed futures in "futs"
# and we are only interested in those that completed successfully
let values = collect:
for b in futs:
if b.finished:
b.value
return success values
return success()
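A hedged sketch of using allFutureResult to fan out block reads; the BlockStore.getBlock(BlockAddress) call mirrors its usage in the erasure module above:

import std/sequtils

proc prefetch(store: BlockStore, addrs: seq[BlockAddress]): Future[?!void] {.async.} =
  # start all reads concurrently, then collapse the batch into one result
  let futs = addrs.mapIt(store.getBlock(it))
  return await allFutureResult(futs)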

View File

@ -10,7 +10,7 @@ type
# 0 => 0, 1, 2
# 1 => 3, 4, 5
# 2 => 6, 7, 8
LinearStrategy
LinearStrategy,
# Stepped indexing:
# 0 => 0, 3, 6
@ -21,106 +21,77 @@ type
# Representing a strategy for grouping indices (of blocks usually)
# Given an iteration-count as input, will produce a seq of
# selected indices.
IndexingError* = object of CodexError
IndexingWrongIndexError* = object of IndexingError
IndexingWrongIterationsError* = object of IndexingError
IndexingWrongGroupCountError* = object of IndexingError
IndexingWrongPadBlockCountError* = object of IndexingError
IndexingStrategy* = object
strategyType*: StrategyType # Indexing strategy algorithm
firstIndex*: int # Lowest index that can be returned
lastIndex*: int # Highest index that can be returned
iterations*: int # Number of iteration steps (0 ..< iterations)
step*: int # Step size between generated indices
groupCount*: int # Number of groups to partition indices into
padBlockCount*: int # Number of padding blocks to append per group
strategyType*: StrategyType
firstIndex*: int # Lowest index that can be returned
lastIndex*: int # Highest index that can be returned
iterations*: int # getIndices(iteration) will run from 0 ..< iterations
step*: int
func checkIteration(
self: IndexingStrategy, iteration: int
): void {.raises: [IndexingError].} =
func checkIteration(self: IndexingStrategy, iteration: int): void {.raises: [IndexingError].} =
if iteration >= self.iterations:
raise newException(
IndexingError, "Indexing iteration can't be greater than or equal to iterations."
)
IndexingError,
"Indexing iteration can't be greater than or equal to iterations.")
func getIter(first, last, step: int): Iter[int] =
{.cast(noSideEffect).}:
Iter[int].new(first, last, step)
func getLinearIndices(self: IndexingStrategy, iteration: int): Iter[int] =
func getLinearIndices(
self: IndexingStrategy,
iteration: int): Iter[int] {.raises: [IndexingError].} =
self.checkIteration(iteration)
let
first = self.firstIndex + iteration * self.step
last = min(first + self.step - 1, self.lastIndex)
getIter(first, last, 1)
func getSteppedIndices(self: IndexingStrategy, iteration: int): Iter[int] =
func getSteppedIndices(
self: IndexingStrategy,
iteration: int): Iter[int] {.raises: [IndexingError].} =
self.checkIteration(iteration)
let
first = self.firstIndex + iteration
last = self.lastIndex
getIter(first, last, self.iterations)
func getStrategyIndices(self: IndexingStrategy, iteration: int): Iter[int] =
func getIndices*(
self: IndexingStrategy,
iteration: int): Iter[int] {.raises: [IndexingError].} =
case self.strategyType
of StrategyType.LinearStrategy:
self.getLinearIndices(iteration)
self.getLinearIndices(iteration)
of StrategyType.SteppedStrategy:
self.getSteppedIndices(iteration)
func getIndices*(
self: IndexingStrategy, iteration: int
): Iter[int] {.raises: [IndexingError].} =
self.checkIteration(iteration)
{.cast(noSideEffect).}:
Iter[int].new(
iterator (): int {.gcsafe.} =
for value in self.getStrategyIndices(iteration):
yield value
for i in 0 ..< self.padBlockCount:
yield self.lastIndex + (iteration + 1) + i * self.groupCount
)
self.getSteppedIndices(iteration)
func init*(
strategy: StrategyType,
firstIndex, lastIndex, iterations: int,
groupCount = 0,
padBlockCount = 0,
): IndexingStrategy {.raises: [IndexingError].} =
strategy: StrategyType,
firstIndex, lastIndex, iterations: int): IndexingStrategy {.raises: [IndexingError].} =
if firstIndex > lastIndex:
raise newException(
IndexingWrongIndexError,
"firstIndex (" & $firstIndex & ") can't be greater than lastIndex (" & $lastIndex &
")",
)
"firstIndex (" & $firstIndex & ") can't be greater than lastIndex (" & $lastIndex & ")")
if iterations <= 0:
raise newException(
IndexingWrongIterationsError,
"iterations (" & $iterations & ") must be greater than zero.",
)
if padBlockCount < 0:
raise newException(
IndexingWrongPadBlockCountError,
"padBlockCount (" & $padBlockCount & ") must be equal or greater than zero.",
)
if padBlockCount > 0 and groupCount <= 0:
raise newException(
IndexingWrongGroupCountError,
"groupCount (" & $groupCount & ") must be greater than zero.",
)
"iterations (" & $iterations & ") must be greater than zero.")
IndexingStrategy(
strategyType: strategy,
firstIndex: firstIndex,
lastIndex: lastIndex,
iterations: iterations,
step: divUp((lastIndex - firstIndex + 1), iterations),
groupCount: groupCount,
padBlockCount: padBlockCount,
)
step: divUp((lastIndex - firstIndex + 1), iterations))
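A worked example of the two strategies, matching the index tables in the comment at the top of this file: 9 indices split across 3 iterations, so step = divUp(9, 3) = 3.

import std/sequtils

let
  linear = LinearStrategy.init(firstIndex = 0, lastIndex = 8, iterations = 3)
  stepped = SteppedStrategy.init(firstIndex = 0, lastIndex = 8, iterations = 3)
assert toSeq(linear.getIndices(1)) == @[3, 4, 5]  # contiguous runs
assert toSeq(stepped.getIndices(1)) == @[1, 4, 7] # strided by iteration count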

View File

@ -11,7 +11,7 @@
## 4. Remove usages of `nim-json-serialization` from the codebase
## 5. Remove need to declare `writeValue` for new types
## 6. Remove need to [avoid importing or exporting `toJson`, `%`, `%*` to prevent
## conflicts](https://github.com/logos-storage/logos-storage-nim/pull/645#issuecomment-1838834467)
## conflicts](https://github.com/codex-storage/nim-codex/pull/645#issuecomment-1838834467)
##
## When declaring a new type, one should consider importing the `codex/logutils`
## module, and specifying `formatIt`. If textlines log output and json log output
@ -98,6 +98,7 @@ import pkg/questionable/results
import ./utils/json except formatIt # TODO: remove exception?
import pkg/stew/byteutils
import pkg/stint
import pkg/upraises
export byteutils
export chronicles except toJson, formatIt, `%`
@ -106,6 +107,7 @@ export sequtils
export json except formatIt
export strutils
export sugar
export upraises
export results
func shortLog*(long: string, ellipses = "*", start = 3, stop = 6): string =
@ -123,9 +125,8 @@ func shortLog*(long: string, ellipses = "*", start = 3, stop = 6): string =
short
func shortHexLog*(long: string): string =
if long[0 .. 1] == "0x":
result &= "0x"
result &= long[2 .. long.high].shortLog("..", 4, 4)
if long[0..1] == "0x": result &= "0x"
result &= long[2..long.high].shortLog("..", 4, 4)
func short0xHexLog*[N: static[int], T: array[N, byte]](v: T): string =
v.to0xHex.shortHexLog
@ -152,7 +153,7 @@ proc formatTextLineSeq*(val: seq[string]): string =
template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
# Provides formatters for logging with Chronicles for the given type and
# `LogFormat`.
# NOTE: `seq[T]`, `Option[T]`, and `seq[Option[T]]` are overridden
# NOTE: `seq[T]`, `Option[T]`, and `seq[Option[T]]` are overridden
# since the base `setProperty` is generic using `auto` and conflicts with
# providing a generic `seq` and `Option` override.
when format == LogFormat.json:
@ -183,16 +184,12 @@ template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
let v = opts.map(opt => opt.formatJsonOption)
setProperty(r, key, json.`%`(v))
proc setProperty*(
r: var JsonRecord, key: string, val: seq[T]
) {.raises: [ValueError, IOError].} =
proc setProperty*(r: var JsonRecord, key: string, val: seq[T]) =
var it {.inject, used.}: T
let v = val.map(it => body)
setProperty(r, key, json.`%`(v))
proc setProperty*(
r: var JsonRecord, key: string, val: T
) {.raises: [ValueError, IOError].} =
proc setProperty*(r: var JsonRecord, key: string, val: T) {.upraises:[ValueError, IOError].} =
var it {.inject, used.}: T = val
let v = body
setProperty(r, key, json.`%`(v))
@ -223,35 +220,23 @@ template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
let v = opts.map(opt => opt.formatTextLineOption)
setProperty(r, key, v.formatTextLineSeq)
proc setProperty*(
r: var TextLineRecord, key: string, val: seq[T]
) {.raises: [ValueError, IOError].} =
proc setProperty*(r: var TextLineRecord, key: string, val: seq[T]) =
var it {.inject, used.}: T
let v = val.map(it => body)
setProperty(r, key, v.formatTextLineSeq)
proc setProperty*(
r: var TextLineRecord, key: string, val: T
) {.raises: [ValueError, IOError].} =
proc setProperty*(r: var TextLineRecord, key: string, val: T) {.upraises:[ValueError, IOError].} =
var it {.inject, used.}: T = val
let v = body
setProperty(r, key, v)
template formatIt*(T: type, body: untyped) {.dirty.} =
formatIt(LogFormat.textLines, T):
body
formatIt(LogFormat.json, T):
body
formatIt(LogFormat.textLines, T): body
formatIt(LogFormat.json, T): body
formatIt(LogFormat.textLines, Cid):
shortLog($it)
formatIt(LogFormat.json, Cid):
$it
formatIt(UInt256):
$it
formatIt(MultiAddress):
$it
formatIt(LogFormat.textLines, array[32, byte]):
it.short0xHexLog
formatIt(LogFormat.json, array[32, byte]):
it.to0xHex
formatIt(LogFormat.textLines, Cid): shortLog($it)
formatIt(LogFormat.json, Cid): $it
formatIt(UInt256): $it
formatIt(MultiAddress): $it
formatIt(LogFormat.textLines, array[32, byte]): it.short0xHexLog
formatIt(LogFormat.json, array[32, byte]): it.to0xHex
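As a usage illustration, a formatter declared for a hypothetical custom type (the type and its fields are invented for the example); the dirty formatIt(T, body) template above expands this into both the textlines and json variants:

type SlotKey = object
  requestId: string
  index: int

formatIt(SlotKey):
  shortLog(it.requestId) & "/" & $it.index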

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,9 +9,10 @@
# This module implements serialization and deserialization of Manifest
import pkg/upraises
import times
{.push raises: [].}
push: {.upraises: [].}
import std/tables
import std/sequtils
@ -32,7 +33,7 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
## multicodec container (Dag-pb) for now
##
?manifest.verify()
? manifest.verify()
var pbNode = initProtoBuffer()
# NOTE: The `Data` field in the `dag-pb`
@ -61,6 +62,7 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
# optional ErasureInfo erasure = 7; # erasure coding info
# optional filename: ?string = 8; # original filename
# optional mimetype: ?string = 9; # original mimetype
# optional uploadedAt: ?int64 = 10; # original uploadedAt
# }
# ```
#
@ -99,6 +101,9 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
if manifest.mimetype.isSome:
header.write(9, manifest.mimetype.get())
if manifest.uploadedAt.isSome:
header.write(10, manifest.uploadedAt.get().uint64)
pbNode.write(1, header) # set the treeCid as the data field
pbNode.finish()
@ -129,6 +134,7 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
verifiableStrategy: uint32
filename: string
mimetype: string
uploadedAt: uint64
# Decode `Header` message
if pbNode.getField(1, pbHeader).isErr:
@ -162,6 +168,9 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
if pbHeader.getField(9, mimetype).isErr:
return failure("Unable to decode `mimetype` from manifest!")
if pbHeader.getField(10, uploadedAt).isErr:
return failure("Unable to decode `uploadedAt` from manifest!")
let protected = pbErasureInfo.buffer.len > 0
var verifiable = false
if protected:
@ -197,13 +206,15 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
if pbVerificationInfo.getField(4, verifiableStrategy).isErr:
return failure("Unable to decode `verifiableStrategy` from manifest!")
let treeCid = ?Cid.init(treeCidBuf).mapFailure
let
treeCid = ? Cid.init(treeCidBuf).mapFailure
var filenameOption = if filename.len == 0: string.none else: filename.some
var mimetypeOption = if mimetype.len == 0: string.none else: mimetype.some
var uploadedAtOption = if uploadedAt == 0: int64.none else: uploadedAt.int64.some
let self =
if protected:
let
self = if protected:
Manifest.new(
treeCid = treeCid,
datasetSize = datasetSize.NBytes,
@ -213,37 +224,37 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
codec = codec.MultiCodec,
ecK = ecK.int,
ecM = ecM.int,
originalTreeCid = ?Cid.init(originalTreeCid).mapFailure,
originalTreeCid = ? Cid.init(originalTreeCid).mapFailure,
originalDatasetSize = originalDatasetSize.NBytes,
strategy = StrategyType(protectedStrategy),
filename = filenameOption,
mimetype = mimetypeOption,
)
else:
Manifest.new(
treeCid = treeCid,
datasetSize = datasetSize.NBytes,
blockSize = blockSize.NBytes,
version = CidVersion(version),
hcodec = hcodec.MultiCodec,
codec = codec.MultiCodec,
filename = filenameOption,
mimetype = mimetypeOption,
)
uploadedAt = uploadedAtOption)
else:
Manifest.new(
treeCid = treeCid,
datasetSize = datasetSize.NBytes,
blockSize = blockSize.NBytes,
version = CidVersion(version),
hcodec = hcodec.MultiCodec,
codec = codec.MultiCodec,
filename = filenameOption,
mimetype = mimetypeOption,
uploadedAt = uploadedAtOption)
?self.verify()
? self.verify()
if verifiable:
let
verifyRootCid = ?Cid.init(verifyRoot).mapFailure
slotRootCids = slotRoots.mapIt(?Cid.init(it).mapFailure)
verifyRootCid = ? Cid.init(verifyRoot).mapFailure
slotRootCids = slotRoots.mapIt(? Cid.init(it).mapFailure)
return Manifest.new(
manifest = self,
verifyRoot = verifyRootCid,
slotRoots = slotRootCids,
cellSize = cellSize.NBytes,
strategy = StrategyType(verifiableStrategy),
strategy = StrategyType(verifiableStrategy)
)
self.success
@ -252,7 +263,7 @@ func decode*(_: type Manifest, blk: Block): ?!Manifest =
## Decode a manifest using `decoder`
##
if not ?blk.cid.isManifest:
if not ? blk.cid.isManifest:
return failure "Cid not a manifest codec"
Manifest.decode(blk.data)
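A hedged round-trip sketch covering the new uploadedAt field (field 10 in the header message); `treeCid` is assumed to be a valid CID already in hand, and tryGet is used only to keep the example short:

import std/times

let m = Manifest.new(
  treeCid = treeCid,
  blockSize = 65536.NBytes,
  datasetSize = 1048576.NBytes,
  filename = "report.pdf".some,
  mimetype = "application/pdf".some,
  uploadedAt = getTime().toUnix().some)

let bytes = m.encode().tryGet()
let decoded = Manifest.decode(bytes).tryGet()
assert decoded == m # Manifest `==` includes uploadedAt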

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,7 +9,9 @@
# This module defines all operations on Manifest
{.push raises: [], gcsafe.}
import pkg/upraises
push: {.upraises: [].}
import pkg/libp2p/protobuf/minprotobuf
import pkg/libp2p/[cid, multihash, multicodec]
@ -23,36 +25,37 @@ import ../blocktype
import ../indexingstrategy
import ../logutils
# TODO: Manifest should be reworked to more concrete types,
# perhaps using inheritance
type Manifest* = ref object of RootObj
treeCid {.serialize.}: Cid # Root of the merkle tree
datasetSize {.serialize.}: NBytes # Total size of all blocks
blockSize {.serialize.}: NBytes
# Size of each contained block (might not be needed if blocks are len-prefixed)
codec: MultiCodec # Dataset codec
hcodec: MultiCodec # Multihash codec
version: CidVersion # Cid version
filename {.serialize.}: ?string # The filename of the content uploaded (optional)
mimetype {.serialize.}: ?string # The mimetype of the content uploaded (optional)
case protected {.serialize.}: bool # Protected datasets have erasure coded info
of true:
ecK: int # Number of blocks to encode
ecM: int # Number of resulting parity blocks
originalTreeCid: Cid # The original root of the dataset being erasure coded
originalDatasetSize: NBytes
protectedStrategy: StrategyType # Indexing strategy used to build the slot roots
case verifiable {.serialize.}: bool
# Verifiable datasets can be used to generate storage proofs
type
Manifest* = ref object of RootObj
treeCid {.serialize.}: Cid # Root of the merkle tree
datasetSize {.serialize.}: NBytes # Total size of all blocks
blockSize {.serialize.}: NBytes # Size of each contained block (might not be needed if blocks are len-prefixed)
codec: MultiCodec # Dataset codec
hcodec: MultiCodec # Multihash codec
version: CidVersion # Cid version
filename {.serialize.}: ?string # The filename of the content uploaded (optional)
mimetype {.serialize.}: ?string # The mimetype of the content uploaded (optional)
uploadedAt {.serialize.}: ?int64 # The UTC creation timestamp in seconds
case protected {.serialize.}: bool # Protected datasets have erasure coded info
of true:
verifyRoot: Cid # Root of the top level merkle tree built from slot roots
slotRoots: seq[Cid] # Individual slot root built from the original dataset blocks
cellSize: NBytes # Size of each slot cell
verifiableStrategy: StrategyType # Indexing strategy used to build the slot roots
ecK: int # Number of blocks to encode
ecM: int # Number of resulting parity blocks
originalTreeCid: Cid # The original root of the dataset being erasure coded
originalDatasetSize: NBytes
protectedStrategy: StrategyType # Indexing strategy used to build the slot roots
case verifiable {.serialize.}: bool # Verifiable datasets can be used to generate storage proofs
of true:
verifyRoot: Cid # Root of the top level merkle tree built from slot roots
slotRoots: seq[Cid] # Individual slot root built from the original dataset blocks
cellSize: NBytes # Size of each slot cell
verifiableStrategy: StrategyType # Indexing strategy used to build the slot roots
else:
discard
else:
discard
else:
discard
############################################################
# Accessors
@ -127,12 +130,14 @@ func filename*(self: Manifest): ?string =
func mimetype*(self: Manifest): ?string =
self.mimetype
func uploadedAt*(self: Manifest): ?int64 =
self.uploadedAt
############################################################
# Operations on block list
############################################################
func isManifest*(cid: Cid): ?!bool =
success (ManifestCodec == ?cid.contentType().mapFailure(CodexError))
success (ManifestCodec == ? cid.contentType().mapFailure(CodexError))
func isManifest*(mc: MultiCodec): ?!bool =
success mc == ManifestCodec
@ -154,38 +159,49 @@ func verify*(self: Manifest): ?!void =
##
if self.protected and (self.blocksCount != self.steps * (self.ecK + self.ecM)):
return
failure newException(CodexError, "Broken manifest: wrong originalBlocksCount")
return failure newException(CodexError, "Broken manifest: wrong originalBlocksCount")
return success()
func cid*(self: Manifest): ?!Cid {.deprecated: "use treeCid instead".} =
self.treeCid.success
func `==`*(a, b: Manifest): bool =
(a.treeCid == b.treeCid) and (a.datasetSize == b.datasetSize) and
(a.blockSize == b.blockSize) and (a.version == b.version) and (a.hcodec == b.hcodec) and
(a.codec == b.codec) and (a.protected == b.protected) and (a.filename == b.filename) and
(a.mimetype == b.mimetype) and (
if a.protected:
(a.ecK == b.ecK) and (a.ecM == b.ecM) and (a.originalTreeCid == b.originalTreeCid) and
(a.originalDatasetSize == b.originalDatasetSize) and
(a.protectedStrategy == b.protectedStrategy) and (a.verifiable == b.verifiable) and
(
if a.verifiable:
(a.verifyRoot == b.verifyRoot) and (a.slotRoots == b.slotRoots) and
(a.cellSize == b.cellSize) and (
a.verifiableStrategy == b.verifiableStrategy
)
(a.treeCid == b.treeCid) and
(a.datasetSize == b.datasetSize) and
(a.blockSize == b.blockSize) and
(a.version == b.version) and
(a.hcodec == b.hcodec) and
(a.codec == b.codec) and
(a.protected == b.protected) and
(a.filename == b.filename) and
(a.mimetype == b.mimetype) and
(a.uploadedAt == b.uploadedAt) and
(if a.protected:
(a.ecK == b.ecK) and
(a.ecM == b.ecM) and
(a.originalTreeCid == b.originalTreeCid) and
(a.originalDatasetSize == b.originalDatasetSize) and
(a.protectedStrategy == b.protectedStrategy) and
(a.verifiable == b.verifiable) and
(if a.verifiable:
(a.verifyRoot == b.verifyRoot) and
(a.slotRoots == b.slotRoots) and
(a.cellSize == b.cellSize) and
(a.verifiableStrategy == b.verifiableStrategy)
else:
true
)
true)
else:
true
)
true)
func `$`*(self: Manifest): string =
result =
"treeCid: " & $self.treeCid & ", datasetSize: " & $self.datasetSize & ", blockSize: " &
$self.blockSize & ", version: " & $self.version & ", hcodec: " & $self.hcodec &
", codec: " & $self.codec & ", protected: " & $self.protected
result = "treeCid: " & $self.treeCid &
", datasetSize: " & $self.datasetSize &
", blockSize: " & $self.blockSize &
", version: " & $self.version &
", hcodec: " & $self.hcodec &
", codec: " & $self.codec &
", protected: " & $self.protected
if self.filename.isSome:
result &= ", filename: " & $self.filename
@ -193,19 +209,22 @@ func `$`*(self: Manifest): string =
if self.mimetype.isSome:
result &= ", mimetype: " & $self.mimetype
result &= (
if self.protected:
", ecK: " & $self.ecK & ", ecM: " & $self.ecM & ", originalTreeCid: " &
$self.originalTreeCid & ", originalDatasetSize: " & $self.originalDatasetSize &
", verifiable: " & $self.verifiable & (
if self.verifiable:
", verifyRoot: " & $self.verifyRoot & ", slotRoots: " & $self.slotRoots
else:
""
)
if self.uploadedAt.isSome:
result &= ", uploadedAt: " & $self.uploadedAt
result &= (if self.protected:
", ecK: " & $self.ecK &
", ecM: " & $self.ecM &
", originalTreeCid: " & $self.originalTreeCid &
", originalDatasetSize: " & $self.originalDatasetSize &
", verifiable: " & $self.verifiable &
(if self.verifiable:
", verifyRoot: " & $self.verifyRoot &
", slotRoots: " & $self.slotRoots
else:
""
)
"")
else:
"")
return result
@ -214,17 +233,18 @@ func `$`*(self: Manifest): string =
############################################################
func new*(
T: type Manifest,
treeCid: Cid,
blockSize: NBytes,
datasetSize: NBytes,
version: CidVersion = CIDv1,
hcodec = Sha256HashCodec,
codec = BlockCodec,
protected = false,
filename: ?string = string.none,
mimetype: ?string = string.none,
): Manifest =
T: type Manifest,
treeCid: Cid,
blockSize: NBytes,
datasetSize: NBytes,
version: CidVersion = CIDv1,
hcodec = Sha256HashCodec,
codec = BlockCodec,
protected = false,
filename: ?string = string.none,
mimetype: ?string = string.none,
uploadedAt: ?int64 = int64.none): Manifest =
T(
treeCid: treeCid,
blockSize: blockSize,
@ -235,16 +255,15 @@ func new*(
protected: protected,
filename: filename,
mimetype: mimetype,
)
uploadedAt: uploadedAt)
func new*(
T: type Manifest,
manifest: Manifest,
treeCid: Cid,
datasetSize: NBytes,
ecK, ecM: int,
strategy = SteppedStrategy,
): Manifest =
T: type Manifest,
manifest: Manifest,
treeCid: Cid,
datasetSize: NBytes,
ecK, ecM: int,
strategy = SteppedStrategy): Manifest =
## Create an erasure protected dataset from an
## unprotected one
##
@ -257,16 +276,18 @@ func new*(
hcodec: manifest.hcodec,
blockSize: manifest.blockSize,
protected: true,
ecK: ecK,
ecM: ecM,
ecK: ecK, ecM: ecM,
originalTreeCid: manifest.treeCid,
originalDatasetSize: manifest.datasetSize,
protectedStrategy: strategy,
filename: manifest.filename,
mimetype: manifest.mimetype,
)
uploadedAt: manifest.uploadedAt
)
func new*(T: type Manifest, manifest: Manifest): Manifest =
func new*(
T: type Manifest,
manifest: Manifest): Manifest =
## Create an unprotected dataset from an
## erasure protected one
##
@ -281,24 +302,25 @@ func new*(T: type Manifest, manifest: Manifest): Manifest =
protected: false,
filename: manifest.filename,
mimetype: manifest.mimetype,
)
uploadedAt: manifest.uploadedAt)
func new*(
T: type Manifest,
treeCid: Cid,
datasetSize: NBytes,
blockSize: NBytes,
version: CidVersion,
hcodec: MultiCodec,
codec: MultiCodec,
ecK: int,
ecM: int,
originalTreeCid: Cid,
originalDatasetSize: NBytes,
strategy = SteppedStrategy,
filename: ?string = string.none,
mimetype: ?string = string.none,
): Manifest =
T: type Manifest,
treeCid: Cid,
datasetSize: NBytes,
blockSize: NBytes,
version: CidVersion,
hcodec: MultiCodec,
codec: MultiCodec,
ecK: int,
ecM: int,
originalTreeCid: Cid,
originalDatasetSize: NBytes,
strategy = SteppedStrategy,
filename: ?string = string.none,
mimetype: ?string = string.none,
uploadedAt: ?int64 = int64.none): Manifest =
Manifest(
treeCid: treeCid,
datasetSize: datasetSize,
@ -314,27 +336,26 @@ func new*(
protectedStrategy: strategy,
filename: filename,
mimetype: mimetype,
)
uploadedAt: uploadedAt)
func new*(
T: type Manifest,
manifest: Manifest,
verifyRoot: Cid,
slotRoots: openArray[Cid],
cellSize = DefaultCellSize,
strategy = LinearStrategy,
): ?!Manifest =
T: type Manifest,
manifest: Manifest,
verifyRoot: Cid,
slotRoots: openArray[Cid],
cellSize = DefaultCellSize,
strategy = LinearStrategy): ?!Manifest =
## Create a verifiable dataset from a
## protected one
##
if not manifest.protected:
return failure newException(
CodexError, "Can create verifiable manifest only from protected manifest."
)
CodexError, "Can create verifiable manifest only from protected manifest.")
if slotRoots.len != manifest.numSlots:
return failure newException(CodexError, "Wrong number of slot roots.")
return failure newException(
CodexError, "Wrong number of slot roots.")
success Manifest(
treeCid: manifest.treeCid,
@ -356,9 +377,12 @@ func new*(
verifiableStrategy: strategy,
filename: manifest.filename,
mimetype: manifest.mimetype,
)
uploadedAt: manifest.uploadedAt
)
func new*(T: type Manifest, data: openArray[byte]): ?!Manifest =
func new*(
T: type Manifest,
data: openArray[byte]): ?!Manifest =
## Create a manifest instance from given data
##

View File

@ -1,4 +1,5 @@
import pkg/chronos
import pkg/upraises
import pkg/questionable
import pkg/ethers/erc20
import ./contracts/requests
@ -17,20 +18,17 @@ export periods
type
Market* = ref object of RootObj
MarketError* = object of CodexError
SlotStateMismatchError* = object of MarketError
SlotReservationNotAllowedError* = object of MarketError
ProofInvalidError* = object of MarketError
Subscription* = ref object of RootObj
OnRequest* =
proc(id: RequestId, ask: StorageAsk, expiry: uint64) {.gcsafe, raises: [].}
OnFulfillment* = proc(requestId: RequestId) {.gcsafe, raises: [].}
OnSlotFilled* = proc(requestId: RequestId, slotIndex: uint64) {.gcsafe, raises: [].}
OnSlotFreed* = proc(requestId: RequestId, slotIndex: uint64) {.gcsafe, raises: [].}
OnSlotReservationsFull* =
proc(requestId: RequestId, slotIndex: uint64) {.gcsafe, raises: [].}
OnRequestCancelled* = proc(requestId: RequestId) {.gcsafe, raises: [].}
OnRequestFailed* = proc(requestId: RequestId) {.gcsafe, raises: [].}
OnProofSubmitted* = proc(id: SlotId) {.gcsafe, raises: [].}
OnRequest* = proc(id: RequestId,
ask: StorageAsk,
expiry: UInt256) {.gcsafe, upraises:[].}
OnFulfillment* = proc(requestId: RequestId) {.gcsafe, upraises: [].}
OnSlotFilled* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises:[].}
OnSlotFreed* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
OnSlotReservationsFull* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
OnRequestCancelled* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
OnRequestFailed* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
OnProofSubmitted* = proc(id: SlotId) {.gcsafe, upraises:[].}
ProofChallenge* = array[32, byte]
# Marketplace events -- located here due to the Market abstraction
@ -38,68 +36,38 @@ type
StorageRequested* = object of MarketplaceEvent
requestId*: RequestId
ask*: StorageAsk
expiry*: uint64
expiry*: UInt256
SlotFilled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
slotIndex*: uint64
slotIndex*: UInt256
SlotFreed* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
slotIndex*: uint64
slotIndex*: UInt256
SlotReservationsFull* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
slotIndex*: uint64
slotIndex*: UInt256
RequestFulfilled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
RequestCancelled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
RequestFailed* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
ProofSubmitted* = object of MarketplaceEvent
id*: SlotId
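For orientation, a hedged subscription sketch against this abstraction, assuming the uint64-based callback variant shown above and any concrete `market` implementation:

# Watch SlotFilled events and log them, then tear the subscription down.
proc watchSlotFills(market: Market) {.async.} =
  proc onFilled(requestId: RequestId, slotIndex: uint64) {.gcsafe, raises: [].} =
    echo "slot filled: requestId=", requestId, " slotIndex=", slotIndex

  let subscription = await market.subscribeSlotFilled(onFilled)
  # ... run for as long as the events are needed ...
  await subscription.unsubscribe()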
method loadConfig*(
market: Market
): Future[?!void] {.base, async: (raises: [CancelledError]).} =
method getZkeyHash*(market: Market): Future[?string] {.base, async.} =
raiseAssert("not implemented")
method getZkeyHash*(
market: Market
): Future[?string] {.base, async: (raises: [CancelledError, MarketError]).} =
method getSigner*(market: Market): Future[Address] {.base, async.} =
raiseAssert("not implemented")
method getSigner*(
market: Market
): Future[Address] {.base, async: (raises: [CancelledError, MarketError]).} =
method periodicity*(market: Market): Future[Periodicity] {.base, async.} =
raiseAssert("not implemented")
method periodicity*(
market: Market
): Future[Periodicity] {.base, async: (raises: [CancelledError, MarketError]).} =
method proofTimeout*(market: Market): Future[UInt256] {.base, async.} =
raiseAssert("not implemented")
method proofTimeout*(
market: Market
): Future[uint64] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented")
method repairRewardPercentage*(
market: Market
): Future[uint8] {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented")
method requestDurationLimit*(market: Market): Future[uint64] {.base, async.} =
raiseAssert("not implemented")
method proofDowntime*(
market: Market
): Future[uint8] {.base, async: (raises: [CancelledError, MarketError]).} =
method proofDowntime*(market: Market): Future[uint8] {.base, async.} =
raiseAssert("not implemented")
method getPointer*(market: Market, slotId: SlotId): Future[uint8] {.base, async.} =
@ -110,9 +78,8 @@ proc inDowntime*(market: Market, slotId: SlotId): Future[bool] {.async.} =
let pntr = await market.getPointer(slotId)
return pntr < downtime
method requestStorage*(
market: Market, request: StorageRequest
) {.base, async: (raises: [CancelledError, MarketError]).} =
method requestStorage*(market: Market,
request: StorageRequest) {.base, async.} =
raiseAssert("not implemented")
method myRequests*(market: Market): Future[seq[RequestId]] {.base, async.} =
@ -121,193 +88,163 @@ method myRequests*(market: Market): Future[seq[RequestId]] {.base, async.} =
method mySlots*(market: Market): Future[seq[SlotId]] {.base, async.} =
raiseAssert("not implemented")
method getRequest*(
market: Market, id: RequestId
): Future[?StorageRequest] {.base, async: (raises: [CancelledError]).} =
method getRequest*(market: Market,
id: RequestId):
Future[?StorageRequest] {.base, async.} =
raiseAssert("not implemented")
method requestState*(
market: Market, requestId: RequestId
): Future[?RequestState] {.base, async.} =
method requestState*(market: Market,
requestId: RequestId): Future[?RequestState] {.base, async.} =
raiseAssert("not implemented")
method slotState*(
market: Market, slotId: SlotId
): Future[SlotState] {.base, async: (raises: [CancelledError, MarketError]).} =
method slotState*(market: Market,
slotId: SlotId): Future[SlotState] {.base, async.} =
raiseAssert("not implemented")
method getRequestEnd*(
market: Market, id: RequestId
): Future[SecondsSince1970] {.base, async.} =
method getRequestEnd*(market: Market,
id: RequestId): Future[SecondsSince1970] {.base, async.} =
raiseAssert("not implemented")
method requestExpiresAt*(
market: Market, id: RequestId
): Future[SecondsSince1970] {.base, async.} =
method requestExpiresAt*(market: Market,
id: RequestId): Future[SecondsSince1970] {.base, async.} =
raiseAssert("not implemented")
method getHost*(
market: Market, requestId: RequestId, slotIndex: uint64
): Future[?Address] {.base, async: (raises: [CancelledError, MarketError]).} =
method getHost*(market: Market,
requestId: RequestId,
slotIndex: UInt256): Future[?Address] {.base, async.} =
raiseAssert("not implemented")
method currentCollateral*(
market: Market, slotId: SlotId
): Future[UInt256] {.base, async: (raises: [MarketError, CancelledError]).} =
method getActiveSlot*(
market: Market,
slotId: SlotId): Future[?Slot] {.base, async.} =
raiseAssert("not implemented")
method getActiveSlot*(market: Market, slotId: SlotId): Future[?Slot] {.base, async.} =
method fillSlot*(market: Market,
requestId: RequestId,
slotIndex: UInt256,
proof: Groth16Proof,
collateral: UInt256) {.base, async.} =
raiseAssert("not implemented")
method fillSlot*(
market: Market,
requestId: RequestId,
slotIndex: uint64,
proof: Groth16Proof,
collateral: UInt256,
) {.base, async: (raises: [CancelledError, MarketError]).} =
method freeSlot*(market: Market, slotId: SlotId) {.base, async.} =
raiseAssert("not implemented")
method freeSlot*(
market: Market, slotId: SlotId
) {.base, async: (raises: [CancelledError, MarketError]).} =
method withdrawFunds*(market: Market,
requestId: RequestId) {.base, async.} =
raiseAssert("not implemented")
method withdrawFunds*(
market: Market, requestId: RequestId
) {.base, async: (raises: [CancelledError, MarketError]).} =
method subscribeRequests*(market: Market,
callback: OnRequest):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequests*(
market: Market, callback: OnRequest
): Future[Subscription] {.base, async.} =
method isProofRequired*(market: Market,
id: SlotId): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method isProofRequired*(market: Market, id: SlotId): Future[bool] {.base, async.} =
method willProofBeRequired*(market: Market,
id: SlotId): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method willProofBeRequired*(market: Market, id: SlotId): Future[bool] {.base, async.} =
method getChallenge*(market: Market, id: SlotId): Future[ProofChallenge] {.base, async.} =
raiseAssert("not implemented")
method getChallenge*(
market: Market, id: SlotId
): Future[ProofChallenge] {.base, async.} =
method submitProof*(market: Market,
id: SlotId,
proof: Groth16Proof) {.base, async.} =
raiseAssert("not implemented")
method submitProof*(
market: Market, id: SlotId, proof: Groth16Proof
) {.base, async: (raises: [CancelledError, MarketError]).} =
method markProofAsMissing*(market: Market,
id: SlotId,
period: Period) {.base, async.} =
raiseAssert("not implemented")
method markProofAsMissing*(
market: Market, id: SlotId, period: Period
) {.base, async: (raises: [CancelledError, MarketError]).} =
raiseAssert("not implemented")
method canMarkProofAsMissing*(
market: Market, id: SlotId, period: Period
): Future[bool] {.base, async: (raises: [CancelledError]).} =
method canProofBeMarkedAsMissing*(market: Market,
id: SlotId,
period: Period): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method reserveSlot*(
market: Market, requestId: RequestId, slotIndex: uint64
) {.base, async: (raises: [CancelledError, MarketError]).} =
market: Market,
requestId: RequestId,
slotIndex: UInt256) {.base, async.} =
raiseAssert("not implemented")
method canReserveSlot*(
market: Market, requestId: RequestId, slotIndex: uint64
): Future[bool] {.base, async.} =
market: Market,
requestId: RequestId,
slotIndex: UInt256): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method subscribeFulfillment*(
market: Market, callback: OnFulfillment
): Future[Subscription] {.base, async.} =
method subscribeFulfillment*(market: Market,
callback: OnFulfillment):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeFulfillment*(
market: Market, requestId: RequestId, callback: OnFulfillment
): Future[Subscription] {.base, async.} =
method subscribeFulfillment*(market: Market,
requestId: RequestId,
callback: OnFulfillment):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotFilled*(
market: Market, callback: OnSlotFilled
): Future[Subscription] {.base, async.} =
method subscribeSlotFilled*(market: Market,
callback: OnSlotFilled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotFilled*(
market: Market, requestId: RequestId, slotIndex: uint64, callback: OnSlotFilled
): Future[Subscription] {.base, async.} =
method subscribeSlotFilled*(market: Market,
requestId: RequestId,
slotIndex: UInt256,
callback: OnSlotFilled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotFreed*(
market: Market, callback: OnSlotFreed
): Future[Subscription] {.base, async.} =
method subscribeSlotFreed*(market: Market,
callback: OnSlotFreed):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotReservationsFull*(
market: Market, callback: OnSlotReservationsFull
): Future[Subscription] {.base, async.} =
market: Market,
callback: OnSlotReservationsFull): Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestCancelled*(
market: Market, callback: OnRequestCancelled
): Future[Subscription] {.base, async.} =
method subscribeRequestCancelled*(market: Market,
callback: OnRequestCancelled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestCancelled*(
market: Market, requestId: RequestId, callback: OnRequestCancelled
): Future[Subscription] {.base, async.} =
method subscribeRequestCancelled*(market: Market,
requestId: RequestId,
callback: OnRequestCancelled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestFailed*(
market: Market, callback: OnRequestFailed
): Future[Subscription] {.base, async.} =
method subscribeRequestFailed*(market: Market,
callback: OnRequestFailed):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestFailed*(
market: Market, requestId: RequestId, callback: OnRequestFailed
): Future[Subscription] {.base, async.} =
method subscribeRequestFailed*(market: Market,
requestId: RequestId,
callback: OnRequestFailed):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeProofSubmission*(
market: Market, callback: OnProofSubmitted
): Future[Subscription] {.base, async.} =
method subscribeProofSubmission*(market: Market,
callback: OnProofSubmitted):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method unsubscribe*(subscription: Subscription) {.base, async.} =
method unsubscribe*(subscription: Subscription) {.base, async, upraises:[].} =
raiseAssert("not implemented")
method queryPastSlotFilledEvents*(
market: Market, fromBlock: BlockTag
): Future[seq[SlotFilled]] {.base, async.} =
raiseAssert("not implemented")
method queryPastSlotFilledEvents*(
market: Market, blocksAgo: int
): Future[seq[SlotFilled]] {.base, async.} =
raiseAssert("not implemented")
method queryPastSlotFilledEvents*(
market: Market, fromTime: SecondsSince1970
): Future[seq[SlotFilled]] {.base, async.} =
raiseAssert("not implemented")
method queryPastStorageRequestedEvents*(
market: Market, fromBlock: BlockTag
): Future[seq[StorageRequested]] {.base, async.} =
raiseAssert("not implemented")
method queryPastStorageRequestedEvents*(
market: Market, blocksAgo: int
): Future[seq[StorageRequested]] {.base, async.} =
raiseAssert("not implemented")
method slotCollateral*(
market: Market, requestId: RequestId, slotIndex: uint64
): Future[?!UInt256] {.base, async: (raises: [CancelledError]).} =
raiseAssert("not implemented")
method slotCollateral*(
market: Market, collateralPerSlot: UInt256, slotState: SlotState
): ?!UInt256 {.base, gcsafe, raises: [].} =
method queryPastEvents*[T: MarketplaceEvent](
market: Market,
_: type T,
blocksAgo: int): Future[seq[T]] {.base, async.} =
raiseAssert("not implemented")

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,7 +7,9 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [], gcsafe.}
import pkg/upraises
push: {.upraises: [].}
import pkg/libp2p
import pkg/questionable
@ -24,11 +26,11 @@ const MaxMerkleTreeSize = 100.MiBs.uint
const MaxMerkleProofSize = 1.MiBs.uint
proc encode*(self: CodexTree): seq[byte] =
var pb = initProtoBuffer()
var pb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
pb.write(1, self.mcodec.uint64)
pb.write(2, self.leavesCount.uint64)
for node in self.nodes:
var nodesPb = initProtoBuffer()
var nodesPb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
nodesPb.write(1, node)
nodesPb.finish()
pb.write(3, nodesPb)
@ -37,11 +39,11 @@ proc encode*(self: CodexTree): seq[byte] =
pb.buffer
proc decode*(_: type CodexTree, data: seq[byte]): ?!CodexTree =
var pb = initProtoBuffer(data)
var pb = initProtoBuffer(data, maxSize = MaxMerkleTreeSize)
var mcodecCode: uint64
var leavesCount: uint64
discard ?pb.getField(1, mcodecCode).mapFailure
discard ?pb.getField(2, leavesCount).mapFailure
discard ? pb.getField(1, mcodecCode).mapFailure
discard ? pb.getField(2, leavesCount).mapFailure
let mcodec = MultiCodec.codec(mcodecCode.int)
if mcodec == InvalidMultiCodec:
@ -51,22 +53,22 @@ proc decode*(_: type CodexTree, data: seq[byte]): ?!CodexTree =
nodesBuff: seq[seq[byte]]
nodes: seq[ByteHash]
if ?pb.getRepeatedField(3, nodesBuff).mapFailure:
if ? pb.getRepeatedField(3, nodesBuff).mapFailure:
for nodeBuff in nodesBuff:
var node: ByteHash
discard ?initProtoBuffer(nodeBuff).getField(1, node).mapFailure
discard ? initProtoBuffer(nodeBuff).getField(1, node).mapFailure
nodes.add node
CodexTree.fromNodes(mcodec, nodes, leavesCount.int)
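For orientation, a hedged round-trip sketch over the encode/decode pair above (the `tree` argument is a placeholder for any previously built CodexTree):

# Encode a tree to protobuf bytes and decode it back; both directions
# are bounded by MaxMerkleTreeSize.
proc roundTrip(tree: CodexTree): ?!void =
  let bytes = tree.encode()
  let decoded = ? CodexTree.decode(bytes)
  if decoded.leavesCount != tree.leavesCount:
    return failure "round-trip changed the leaf count"
  success()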
proc encode*(self: CodexProof): seq[byte] =
var pb = initProtoBuffer()
var pb = initProtoBuffer(maxSize = MaxMerkleProofSize)
pb.write(1, self.mcodec.uint64)
pb.write(2, self.index.uint64)
pb.write(3, self.nleaves.uint64)
for node in self.path:
var nodesPb = initProtoBuffer()
var nodesPb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
nodesPb.write(1, node)
nodesPb.finish()
pb.write(4, nodesPb)
@ -75,33 +77,36 @@ proc encode*(self: CodexProof): seq[byte] =
pb.buffer
proc decode*(_: type CodexProof, data: seq[byte]): ?!CodexProof =
var pb = initProtoBuffer(data)
var pb = initProtoBuffer(data, maxSize = MaxMerkleProofSize)
var mcodecCode: uint64
var index: uint64
var nleaves: uint64
discard ?pb.getField(1, mcodecCode).mapFailure
discard ? pb.getField(1, mcodecCode).mapFailure
let mcodec = MultiCodec.codec(mcodecCode.int)
if mcodec == InvalidMultiCodec:
return failure("Invalid MultiCodec code " & $mcodecCode)
discard ?pb.getField(2, index).mapFailure
discard ?pb.getField(3, nleaves).mapFailure
discard ? pb.getField(2, index).mapFailure
discard ? pb.getField(3, nleaves).mapFailure
var
nodesBuff: seq[seq[byte]]
nodes: seq[ByteHash]
if ?pb.getRepeatedField(4, nodesBuff).mapFailure:
if ? pb.getRepeatedField(4, nodesBuff).mapFailure:
for nodeBuff in nodesBuff:
var node: ByteHash
let nodePb = initProtoBuffer(nodeBuff)
discard ?nodePb.getField(1, node).mapFailure
discard ? nodePb.getField(1, node).mapFailure
nodes.add node
CodexProof.init(mcodec, index.int, nleaves.int, nodes)
proc fromJson*(_: type CodexProof, json: JsonNode): ?!CodexProof =
proc fromJson*(
_: type CodexProof,
json: JsonNode
): ?!CodexProof =
expectJsonKind(Cid, JString, json)
var bytes: seq[byte]
try:
@ -111,5 +116,4 @@ proc fromJson*(_: type CodexProof, json: JsonNode): ?!CodexProof =
CodexProof.decode(bytes)
func `%`*(proof: CodexProof): JsonNode =
%byteutils.toHex(proof.encode())
func `%`*(proof: CodexProof): JsonNode = % byteutils.toHex(proof.encode())

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -15,7 +15,7 @@ import std/sequtils
import pkg/questionable
import pkg/questionable/results
import pkg/libp2p/[cid, multicodec, multihash]
import pkg/constantine/hashes
import ../../utils
import ../../rng
import ../../errors
@ -32,10 +32,10 @@ logScope:
type
ByteTreeKey* {.pure.} = enum
KeyNone = 0x0.byte
KeyBottomLayer = 0x1.byte
KeyOdd = 0x2.byte
KeyOddAndBottomLayer = 0x3.byte
KeyNone = 0x0.byte
KeyBottomLayer = 0x1.byte
KeyOdd = 0x2.byte
KeyOddAndBottomLayer = 0x3.byte
ByteHash* = seq[byte]
ByteTree* = MerkleTree[ByteHash, ByteTreeKey]
@ -47,10 +47,26 @@ type
CodexProof* = ref object of ByteProof
mcodec*: MultiCodec
func getProof*(self: CodexTree, index: int): ?!CodexProof =
  var proof = CodexProof(mcodec: self.mcodec)

  ?self.getProof(index, proof)

func mhash*(mcodec: MultiCodec): ?!MHash =
  let
    mhash = CodeHashes.getOrDefault(mcodec)

  if isNil(mhash.coder):
    return failure "Invalid multihash codec"

  success mhash

func digestSize*(self: (CodexTree or CodexProof)): int =
  ## Size of the hash digest in bytes
  ##

  self.mhash.size

func getProof*(self: CodexTree, index: int): ?!CodexProof =
  var
    proof = CodexProof(mcodec: self.mcodec)

  ? self.getProof(index, proof)

  success proof
@ -62,113 +78,137 @@ func verify*(self: CodexProof, leaf: MultiHash, root: MultiHash): ?!bool =
rootBytes = root.digestBytes
leafBytes = leaf.digestBytes
if self.mcodec != root.mcodec or self.mcodec != leaf.mcodec:
if self.mcodec != root.mcodec or
self.mcodec != leaf.mcodec:
return failure "Hash codec mismatch"
if rootBytes.len != root.size and leafBytes.len != leaf.size:
if rootBytes.len != root.size and
leafBytes.len != leaf.size:
return failure "Invalid hash length"
self.verify(leafBytes, rootBytes)
func verify*(self: CodexProof, leaf: Cid, root: Cid): ?!bool =
self.verify(?leaf.mhash.mapFailure, ?leaf.mhash.mapFailure)
self.verify(? leaf.mhash.mapFailure, ? leaf.mhash.mapFailure)
proc rootCid*(self: CodexTree, version = CIDv1, dataCodec = DatasetRootCodec): ?!Cid =
if (?self.root).len == 0:
proc rootCid*(
self: CodexTree,
version = CIDv1,
dataCodec = DatasetRootCodec): ?!Cid =
if (? self.root).len == 0:
return failure "Empty root"
let mhash = ?MultiHash.init(self.mcodec, ?self.root).mapFailure
let
mhash = ? MultiHash.init(self.mcodec, ? self.root).mapFailure
Cid.init(version, DatasetRootCodec, mhash).mapFailure
func getLeafCid*(
self: CodexTree, i: Natural, version = CIDv1, dataCodec = BlockCodec
): ?!Cid =
self: CodexTree,
i: Natural,
version = CIDv1,
dataCodec = BlockCodec): ?!Cid =
if i >= self.leavesCount:
return failure "Invalid leaf index " & $i
let
leaf = self.leaves[i]
mhash = ?MultiHash.init($self.mcodec, leaf).mapFailure
mhash = ? MultiHash.init($self.mcodec, leaf).mapFailure
Cid.init(version, dataCodec, mhash).mapFailure
proc `$`*(self: CodexTree): string =
let root =
if self.root.isOk:
byteutils.toHex(self.root.get)
else:
"none"
"CodexTree(" & " root: " & root & ", leavesCount: " & $self.leavesCount & ", levels: " &
$self.levels & ", mcodec: " & $self.mcodec & " )"
let root = if self.root.isOk: byteutils.toHex(self.root.get) else: "none"
"CodexTree(" &
" root: " & root &
", leavesCount: " & $self.leavesCount &
", levels: " & $self.levels &
", mcodec: " & $self.mcodec & " )"
proc `$`*(self: CodexProof): string =
"CodexProof(" & " nleaves: " & $self.nleaves & ", index: " & $self.index & ", path: " &
$self.path.mapIt(byteutils.toHex(it)) & ", mcodec: " & $self.mcodec & " )"
"CodexProof(" &
" nleaves: " & $self.nleaves &
", index: " & $self.index &
", path: " & $self.path.mapIt( byteutils.toHex(it) ) &
", mcodec: " & $self.mcodec & " )"
func compress*(x, y: openArray[byte], key: ByteTreeKey, codec: MultiCodec): ?!ByteHash =
func compress*(
x, y: openArray[byte],
key: ByteTreeKey,
mhash: MHash): ?!ByteHash =
## Compress two hashes
##
let input = @x & @y & @[key.byte]
let digest = ?MultiHash.digest(codec, input).mapFailure
success digest.digestBytes
var digest = newSeq[byte](mhash.size)
mhash.coder(@x & @y & @[ key.byte ], digest)
success digest
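A hedged sketch of driving the compressor directly, assuming the sha2-256 codec resolves through the `mhash` helper above:

# Combine two child digests into their parent digest with a
# domain-separation key, as the tree builder does layer by layer.
proc parentOf(left, right: ByteHash): ?!ByteHash =
  let mhash = ? multiCodec("sha2-256").mhash()
  compress(left, right, ByteTreeKey.KeyNone, mhash)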
func init*(
_: type CodexTree, mcodec: MultiCodec = Sha256HashCodec, leaves: openArray[ByteHash]
): ?!CodexTree =
_: type CodexTree,
mcodec: MultiCodec = Sha256HashCodec,
leaves: openArray[ByteHash]): ?!CodexTree =
if leaves.len == 0:
return failure "Empty leaves"
let
mhash = ? mcodec.mhash()
compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} =
compress(x, y, key, mcodec)
digestSize = ?mcodec.digestSize.mapFailure
Zero: ByteHash = newSeq[byte](digestSize)
compress(x, y, key, mhash)
Zero: ByteHash = newSeq[byte](mhash.size)
if digestSize != leaves[0].len:
if mhash.size != leaves[0].len:
return failure "Invalid hash length"
var self = CodexTree(mcodec: mcodec, compress: compressor, zero: Zero)
var
self = CodexTree(mcodec: mcodec, compress: compressor, zero: Zero)
self.layers = ?merkleTreeWorker(self, leaves, isBottomLayer = true)
self.layers = ? merkleTreeWorker(self, leaves, isBottomLayer = true)
success self
func init*(_: type CodexTree, leaves: openArray[MultiHash]): ?!CodexTree =
func init*(
_: type CodexTree,
leaves: openArray[MultiHash]): ?!CodexTree =
if leaves.len == 0:
return failure "Empty leaves"
let
mcodec = leaves[0].mcodec
leaves = leaves.mapIt(it.digestBytes)
leaves = leaves.mapIt( it.digestBytes )
CodexTree.init(mcodec, leaves)
func init*(_: type CodexTree, leaves: openArray[Cid]): ?!CodexTree =
func init*(
_: type CodexTree,
leaves: openArray[Cid]): ?!CodexTree =
if leaves.len == 0:
return failure "Empty leaves"
let
mcodec = (?leaves[0].mhash.mapFailure).mcodec
leaves = leaves.mapIt((?it.mhash.mapFailure).digestBytes)
mcodec = (? leaves[0].mhash.mapFailure).mcodec
leaves = leaves.mapIt( (? it.mhash.mapFailure).digestBytes )
CodexTree.init(mcodec, leaves)
proc fromNodes*(
_: type CodexTree,
mcodec: MultiCodec = Sha256HashCodec,
nodes: openArray[ByteHash],
nleaves: int,
): ?!CodexTree =
_: type CodexTree,
mcodec: MultiCodec = Sha256HashCodec,
nodes: openArray[ByteHash],
nleaves: int): ?!CodexTree =
if nodes.len == 0:
return failure "Empty nodes"
let
digestSize = ?mcodec.digestSize.mapFailure
Zero = newSeq[byte](digestSize)
mhash = ? mcodec.mhash()
Zero = newSeq[byte](mhash.size)
compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} =
compress(x, y, key, mcodec)
compress(x, y, key, mhash)
if digestSize != nodes[0].len:
if mhash.size != nodes[0].len:
return failure "Invalid hash length"
var
@ -177,34 +217,34 @@ proc fromNodes*(
pos = 0
while pos < nodes.len:
self.layers.add(nodes[pos ..< (pos + layer)])
self.layers.add( nodes[pos..<(pos + layer)] )
pos += layer
layer = divUp(layer, 2)
let
index = Rng.instance.rand(nleaves - 1)
proof = ?self.getProof(index)
proof = ? self.getProof(index)
if not ?proof.verify(self.leaves[index], ?self.root): # sanity check
if not ? proof.verify(self.leaves[index], ? self.root): # sanity check
return failure "Unable to verify tree built from nodes"
success self
func init*(
_: type CodexProof,
mcodec: MultiCodec = Sha256HashCodec,
index: int,
nleaves: int,
nodes: openArray[ByteHash],
): ?!CodexProof =
_: type CodexProof,
mcodec: MultiCodec = Sha256HashCodec,
index: int,
nleaves: int,
nodes: openArray[ByteHash]): ?!CodexProof =
if nodes.len == 0:
return failure "Empty nodes"
let
digestSize = ?mcodec.digestSize.mapFailure
Zero = newSeq[byte](digestSize)
mhash = ? mcodec.mhash()
Zero = newSeq[byte](mhash.size)
compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!seq[byte] {.noSideEffect.} =
compress(x, y, key, mcodec)
compress(x, y, key, mhash)
success CodexProof(
compress: compressor,
@ -212,5 +252,4 @@ func init*(
mcodec: mcodec,
index: index,
nleaves: nleaves,
path: @nodes,
)
path: @nodes)

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -16,19 +16,19 @@ import pkg/questionable/results
import ../errors
type
CompressFn*[H, K] = proc(x, y: H, key: K): ?!H {.noSideEffect, raises: [].}
CompressFn*[H, K] = proc (x, y: H, key: K): ?!H {.noSideEffect, raises: [].}
MerkleTree*[H, K] = ref object of RootObj
layers*: seq[seq[H]]
layers* : seq[seq[H]]
compress*: CompressFn[H, K]
zero*: H
zero* : H
MerkleProof*[H, K] = ref object of RootObj
index*: int # linear index of the leaf, starting from 0
path*: seq[H] # order: from the bottom to the top
nleaves*: int # number of leaves in the tree (=size of input)
compress*: CompressFn[H, K] # compress function
zero*: H # zero value
index* : int # linear index of the leaf, starting from 0
path* : seq[H] # order: from the bottom to the top
nleaves* : int # number of leaves in the tree (=size of input)
compress*: CompressFn[H, K] # compress function
zero* : H # zero value
func depth*[H, K](self: MerkleTree[H, K]): int =
return self.layers.len - 1
@ -59,38 +59,36 @@ func root*[H, K](self: MerkleTree[H, K]): ?!H =
return success last[0]
func getProof*[H, K](
self: MerkleTree[H, K], index: int, proof: MerkleProof[H, K]
): ?!void =
let depth = self.depth
self: MerkleTree[H, K],
index: int,
proof: MerkleProof[H, K]): ?!void =
let depth = self.depth
let nleaves = self.leavesCount
if not (index >= 0 and index < nleaves):
return failure "index out of bounds"
var path: seq[H] = newSeq[H](depth)
var path : seq[H] = newSeq[H](depth)
var k = index
var m = nleaves
for i in 0 ..< depth:
for i in 0..<depth:
let j = k xor 1
path[i] =
if (j < m):
self.layers[i][j]
else:
self.zero
k = k shr 1
path[i] = if (j < m): self.layers[i][j] else: self.zero
k = k shr 1
m = (m + 1) shr 1
proof.index = index
proof.path = path
proof.path = path
proof.nleaves = nleaves
proof.compress = self.compress
success()
func getProof*[H, K](self: MerkleTree[H, K], index: int): ?!MerkleProof[H, K] =
var proof = MerkleProof[H, K]()
var
proof = MerkleProof[H, K]()
?self.getProof(index, proof)
? self.getProof(index, proof)
success proof
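To make the path layout concrete, a hedged sketch (assumes `tree` has at least three leaves; the 4-leaf numbers in the comment are illustrative):

# For a tree with 4 leaves the depth is 2, so the proof for leaf 2
# carries exactly 2 siblings: path[0] is leaf 3, path[1] is the
# parent of leaves 0 and 1.
proc checkProofShape[H, K](tree: MerkleTree[H, K]): ?!void =
  let proof = ? tree.getProof(2)
  assert proof.path.len == tree.depth
  success()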
@ -102,39 +100,41 @@ func reconstructRoot*[H, K](proof: MerkleProof[H, K], leaf: H): ?!H =
bottomFlag = K.KeyBottomLayer
for p in proof.path:
let oddIndex: bool = (bitand(j, 1) != 0)
let oddIndex : bool = (bitand(j,1) != 0)
if oddIndex:
# the index of the child is odd, so the node itself can't be odd (a bit counterintuitive, yeah :)
h = ?proof.compress(p, h, bottomFlag)
h = ? proof.compress( p, h, bottomFlag )
else:
if j == m - 1:
# single child => odd node
h = ?proof.compress(h, p, K(bottomFlag.ord + 2))
h = ? proof.compress( h, p, K(bottomFlag.ord + 2) )
else:
# even node
h = ?proof.compress(h, p, bottomFlag)
h = ? proof.compress( h , p, bottomFlag )
bottomFlag = K.KeyNone
j = j shr 1
m = (m + 1) shr 1
j = j shr 1
m = (m+1) shr 1
return success h
func verify*[H, K](proof: MerkleProof[H, K], leaf: H, root: H): ?!bool =
success bool(root == ?proof.reconstructRoot(leaf))
success bool(root == ? proof.reconstructRoot(leaf))
func merkleTreeWorker*[H, K](
self: MerkleTree[H, K], xs: openArray[H], isBottomLayer: static bool
): ?!seq[seq[H]] =
self: MerkleTree[H, K],
xs: openArray[H],
isBottomLayer: static bool): ?!seq[seq[H]] =
let a = low(xs)
let b = high(xs)
let m = b - a + 1
when not isBottomLayer:
if m == 1:
return success @[@xs]
return success @[ @xs ]
let halfn: int = m div 2
let n: int = 2 * halfn
let halfn: int = m div 2
let n : int = 2 * halfn
let isOdd: bool = (n != m)
var ys: seq[H]
@ -143,11 +143,11 @@ func merkleTreeWorker*[H, K](
else:
ys = newSeq[H](halfn + 1)
for i in 0 ..< halfn:
for i in 0..<halfn:
const key = when isBottomLayer: K.KeyBottomLayer else: K.KeyNone
ys[i] = ?self.compress(xs[a + 2 * i], xs[a + 2 * i + 1], key = key)
ys[i] = ? self.compress( xs[a + 2 * i], xs[a + 2 * i + 1], key = key )
if isOdd:
const key = when isBottomLayer: K.KeyOddAndBottomLayer else: K.KeyOdd
ys[halfn] = ?self.compress(xs[n], self.zero, key = key)
ys[halfn] = ? self.compress( xs[n], self.zero, key = key )
success @[@xs] & ?self.merkleTreeWorker(ys, isBottomLayer = false)
success @[ @xs ] & ? self.merkleTreeWorker(ys, isBottomLayer = false)

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -24,10 +24,10 @@ import ./merkletree
export merkletree, poseidon2
const
KeyNoneF = F.fromHex("0x0")
KeyBottomLayerF = F.fromHex("0x1")
KeyOddF = F.fromHex("0x2")
KeyOddAndBottomLayerF = F.fromHex("0x3")
KeyNoneF = F.fromhex("0x0")
KeyBottomLayerF = F.fromhex("0x1")
KeyOddF = F.fromhex("0x2")
KeyOddAndBottomLayerF = F.fromhex("0x3")
Poseidon2Zero* = zero
@ -35,7 +35,7 @@ type
Bn254Fr* = F
Poseidon2Hash* = Bn254Fr
PoseidonKeysEnum* = enum # can't use non-ordinals as enum values
PoseidonKeysEnum* = enum # can't use non-ordinals as enum values
KeyNone
KeyBottomLayer
KeyOdd
@ -46,50 +46,65 @@ type
proc `$`*(self: Poseidon2Tree): string =
let root = if self.root.isOk: self.root.get.toHex else: "none"
"Poseidon2Tree(" & " root: " & root & ", leavesCount: " & $self.leavesCount &
"Poseidon2Tree(" &
" root: " & root &
", leavesCount: " & $self.leavesCount &
", levels: " & $self.levels & " )"
proc `$`*(self: Poseidon2Proof): string =
"Poseidon2Proof(" & " nleaves: " & $self.nleaves & ", index: " & $self.index &
", path: " & $self.path.mapIt(it.toHex) & " )"
"Poseidon2Proof(" &
" nleaves: " & $self.nleaves &
", index: " & $self.index &
", path: " & $self.path.mapIt( it.toHex ) & " )"
func toArray32*(bytes: openArray[byte]): array[32, byte] =
result[0 ..< bytes.len] = bytes[0 ..< bytes.len]
result[0..<bytes.len] = bytes[0..<bytes.len]
converter toKey*(key: PoseidonKeysEnum): Poseidon2Hash =
case key
case key:
of KeyNone: KeyNoneF
of KeyBottomLayer: KeyBottomLayerF
of KeyOdd: KeyOddF
of KeyOddAndBottomLayer: KeyOddAndBottomLayerF
func init*(_: type Poseidon2Tree, leaves: openArray[Poseidon2Hash]): ?!Poseidon2Tree =
func init*(
_: type Poseidon2Tree,
leaves: openArray[Poseidon2Hash]): ?!Poseidon2Tree =
if leaves.len == 0:
return failure "Empty leaves"
let compressor = proc(
x, y: Poseidon2Hash, key: PoseidonKeysEnum
): ?!Poseidon2Hash {.noSideEffect.} =
success compress(x, y, key.toKey)
let
compressor = proc(
x, y: Poseidon2Hash,
key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
success compress( x, y, key.toKey )
var self = Poseidon2Tree(compress: compressor, zero: Poseidon2Zero)
var
self = Poseidon2Tree(compress: compressor, zero: Poseidon2Zero)
self.layers = ?merkleTreeWorker(self, leaves, isBottomLayer = true)
self.layers = ? merkleTreeWorker(self, leaves, isBottomLayer = true)
success self
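Putting the pieces together, a minimal hedged sketch of building a tree, extracting a proof and verifying it (field values are arbitrary test inputs):

proc demoPoseidon(): ?!void =
  let leaves = @[F.fromHex("0x1"), F.fromHex("0x2"), F.fromHex("0x3")]
  let tree = ? Poseidon2Tree.init(leaves)
  let root = ? tree.root
  let proof = ? tree.getProof(1)
  if not (? proof.verify(leaves[1], root)):
    return failure "proof did not verify against the root"
  success()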
func init*(_: type Poseidon2Tree, leaves: openArray[array[31, byte]]): ?!Poseidon2Tree =
Poseidon2Tree.init(leaves.mapIt(Poseidon2Hash.fromBytes(it)))
func init*(
_: type Poseidon2Tree,
leaves: openArray[array[31, byte]]): ?!Poseidon2Tree =
Poseidon2Tree.init(
leaves.mapIt( Poseidon2Hash.fromBytes(it) ))
proc fromNodes*(
_: type Poseidon2Tree, nodes: openArray[Poseidon2Hash], nleaves: int
): ?!Poseidon2Tree =
_: type Poseidon2Tree,
nodes: openArray[Poseidon2Hash],
nleaves: int): ?!Poseidon2Tree =
if nodes.len == 0:
return failure "Empty nodes"
let compressor = proc(
x, y: Poseidon2Hash, key: PoseidonKeysEnum
): ?!Poseidon2Hash {.noSideEffect.} =
success compress(x, y, key.toKey)
let
compressor = proc(
x, y: Poseidon2Hash,
key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
success compress( x, y, key.toKey )
var
self = Poseidon2Tree(compress: compressor, zero: zero)
@ -97,34 +112,37 @@ proc fromNodes*(
pos = 0
while pos < nodes.len:
self.layers.add(nodes[pos ..< (pos + layer)])
self.layers.add( nodes[pos..<(pos + layer)] )
pos += layer
layer = divUp(layer, 2)
let
index = Rng.instance.rand(nleaves - 1)
proof = ?self.getProof(index)
proof = ? self.getProof(index)
if not ?proof.verify(self.leaves[index], ?self.root): # sanity check
if not ? proof.verify(self.leaves[index], ? self.root): # sanity check
return failure "Unable to verify tree built from nodes"
success self
func init*(
_: type Poseidon2Proof, index: int, nleaves: int, nodes: openArray[Poseidon2Hash]
): ?!Poseidon2Proof =
_: type Poseidon2Proof,
index: int,
nleaves: int,
nodes: openArray[Poseidon2Hash]): ?!Poseidon2Proof =
if nodes.len == 0:
return failure "Empty nodes"
let compressor = proc(
x, y: Poseidon2Hash, key: PoseidonKeysEnum
): ?!Poseidon2Hash {.noSideEffect.} =
success compress(x, y, key.toKey)
let
compressor = proc(
x, y: Poseidon2Hash,
key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
success compress( x, y, key.toKey )
success Poseidon2Proof(
compress: compressor,
zero: Poseidon2Zero,
index: index,
nleaves: nleaves,
path: @nodes,
)
path: @nodes)

View File

@ -1,11 +0,0 @@
const CodecExts = [
("poseidon2-alt_bn_128-sponge-r2", 0xCD10), # bn128 rate 2 sponge
("poseidon2-alt_bn_128-merkle-2kb", 0xCD11), # bn128 2kb compress & merkleize
("poseidon2-alt_bn_128-keyed-compress", 0xCD12), # bn128 keyed compress]
("codex-manifest", 0xCD01),
("codex-block", 0xCD02),
("codex-root", 0xCD03),
("codex-slot-root", 0xCD04),
("codex-proving-root", 0xCD05),
("codex-slot-cell", 0xCD06),
]

View File

@ -1,40 +0,0 @@
import blscurve/bls_public_exports
import pkg/constantine/hashes
import poseidon2
proc sha2_256hash_constantine(data: openArray[byte], output: var openArray[byte]) =
# Using Constantine's SHA256 instead of mhash for optimal performance on 32-byte merkle node hashing
# See: https://github.com/logos-storage/logos-storage-nim/issues/1162
if len(output) > 0:
let digest = hashes.sha256.hash(data)
copyMem(addr output[0], addr digest[0], 32)
proc poseidon2_sponge_rate2(data: openArray[byte], output: var openArray[byte]) =
if len(output) > 0:
var digest = poseidon2.Sponge.digest(data).toBytes()
copyMem(addr output[0], addr digest[0], uint(len(output)))
proc poseidon2_merkle_2kb_sponge(data: openArray[byte], output: var openArray[byte]) =
if len(output) > 0:
var digest = poseidon2.SpongeMerkle.digest(data, 2048).toBytes()
copyMem(addr output[0], addr digest[0], uint(len(output)))
const Sha2256MultiHash* = MHash(
mcodec: multiCodec("sha2-256"),
size: sha256.sizeDigest,
coder: sha2_256hash_constantine,
)
const HashExts = [
# override sha2-256 hash function
Sha2256MultiHash,
MHash(
mcodec: multiCodec("poseidon2-alt_bn_128-sponge-r2"),
size: 32,
coder: poseidon2_sponge_rate2,
),
MHash(
mcodec: multiCodec("poseidon2-alt_bn_128-merkle-2kb"),
size: 32,
coder: poseidon2_merkle_2kb_sponge,
),
]

View File

@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,17 +9,16 @@
const
# Namespaces
CodexMetaNamespace* = "meta" # meta info stored here
CodexRepoNamespace* = "repo" # repository namespace, blocks and manifests are subkeys
CodexBlockTotalNamespace* = CodexMetaNamespace & "/total"
# number of blocks in the repo
CodexBlocksNamespace* = CodexRepoNamespace & "/blocks" # blocks namespace
CodexMetaNamespace* = "meta" # meta info stored here
CodexRepoNamespace* = "repo" # repository namespace, blocks and manifests are subkeys
CodexBlockTotalNamespace* = CodexMetaNamespace & "/total" # number of blocks in the repo
CodexBlocksNamespace* = CodexRepoNamespace & "/blocks" # blocks namespace
CodexManifestNamespace* = CodexRepoNamespace & "/manifests" # manifest namespace
CodexBlocksTtlNamespace* = # Cid TTL
CodexBlocksTtlNamespace* = # Cid TTL
CodexMetaNamespace & "/ttl"
CodexBlockProofNamespace* = # Cid and Proof
CodexBlockProofNamespace* = # Cid and Proof
CodexMetaNamespace & "/proof"
CodexDhtNamespace* = "dht" # Dht namespace
CodexDhtProvidersNamespace* = # Dht providers namespace
CodexDhtNamespace* = "dht" # Dht namespace
CodexDhtProvidersNamespace* = # Dht providers namespace
CodexDhtNamespace & "/providers"
CodexQuotaNamespace* = CodexMetaNamespace & "/quota" # quota's namespace
CodexQuotaNamespace* = CodexMetaNamespace & "/quota" # quota's namespace
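For orientation, the full key prefixes these constants expand to (derived directly from the definitions above):

assert CodexBlocksNamespace == "repo/blocks"
assert CodexManifestNamespace == "repo/manifests"
assert CodexBlocksTtlNamespace == "meta/ttl"
assert CodexQuotaNamespace == "meta/quota"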

View File

@ -1,432 +0,0 @@
# Copyright (c) 2019-2023 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
{.push raises: [].}
import
std/[options, os, strutils, times, net, atomics],
stew/[objects],
nat_traversal/[miniupnpc, natpmp],
json_serialization/std/net,
results
import pkg/chronos
import pkg/chronicles
import pkg/libp2p
import ./utils
import ./utils/natutils
import ./utils/addrutils
const
UPNP_TIMEOUT = 200 # ms
PORT_MAPPING_INTERVAL = 20 * 60 # seconds
NATPMP_LIFETIME = 60 * 60 # in seconds, must be longer than PORT_MAPPING_INTERVAL
type PortMappings* = object
internalTcpPort: Port
externalTcpPort: Port
internalUdpPort: Port
externalUdpPort: Port
description: string
type PortMappingArgs =
tuple[strategy: NatStrategy, tcpPort, udpPort: Port, description: string]
type NatConfig* = object
case hasExtIp*: bool
of true: extIp*: IpAddress
of false: nat*: NatStrategy
var
upnp {.threadvar.}: Miniupnp
npmp {.threadvar.}: NatPmp
strategy = NatStrategy.NatNone
natClosed: Atomic[bool]
extIp: Option[IpAddress]
activeMappings: seq[PortMappings]
natThreads: seq[Thread[PortMappingArgs]] = @[]
logScope:
topics = "nat"
type PrefSrcStatus = enum
NoRoutingInfo
PrefSrcIsPublic
PrefSrcIsPrivate
BindAddressIsPublic
BindAddressIsPrivate
## Also does threadvar initialisation.
## Must be called before redirectPorts() in each thread.
proc getExternalIP*(natStrategy: NatStrategy, quiet = false): Option[IpAddress] =
var externalIP: IpAddress
if natStrategy == NatStrategy.NatAny or natStrategy == NatStrategy.NatUpnp:
if upnp == nil:
upnp = newMiniupnp()
upnp.discoverDelay = UPNP_TIMEOUT
let dres = upnp.discover()
if dres.isErr:
debug "UPnP", msg = dres.error
else:
var
msg: cstring
canContinue = true
case upnp.selectIGD()
of IGDNotFound:
msg = "Internet Gateway Device not found. Giving up."
canContinue = false
of IGDFound:
msg = "Internet Gateway Device found."
of IGDNotConnected:
msg = "Internet Gateway Device found but it's not connected. Trying anyway."
of NotAnIGD:
msg =
"Some device found, but it's not recognised as an Internet Gateway Device. Trying anyway."
of IGDIpNotRoutable:
msg =
"Internet Gateway Device found and is connected, but with a reserved or non-routable IP. Trying anyway."
if not quiet:
debug "UPnP", msg
if canContinue:
let ires = upnp.externalIPAddress()
if ires.isErr:
debug "UPnP", msg = ires.error
else:
# if we got this far, UPnP is working and we don't need to try NAT-PMP
try:
externalIP = parseIpAddress(ires.value)
strategy = NatStrategy.NatUpnp
return some(externalIP)
except ValueError as e:
error "parseIpAddress() exception", err = e.msg
return
if natStrategy == NatStrategy.NatAny or natStrategy == NatStrategy.NatPmp:
if npmp == nil:
npmp = newNatPmp()
let nres = npmp.init()
if nres.isErr:
debug "NAT-PMP", msg = nres.error
else:
let nires = npmp.externalIPAddress()
if nires.isErr:
debug "NAT-PMP", msg = nires.error
else:
try:
externalIP = parseIpAddress($(nires.value))
strategy = NatStrategy.NatPmp
return some(externalIP)
except ValueError as e:
error "parseIpAddress() exception", err = e.msg
return
# This queries the routing table to get the "preferred source" attribute and
# checks if it's a public IP. If so, then it's our public IP.
#
# Furthermore, we check if the bind address (user-provided, or a "0.0.0.0"
# default) is a public IP. That's a long shot, because code paths involving a
# user-provided bind address are not supposed to get here.
proc getRoutePrefSrc(bindIp: IpAddress): (Option[IpAddress], PrefSrcStatus) =
let bindAddress = initTAddress(bindIp, Port(0))
if bindAddress.isAnyLocal():
let ip = getRouteIpv4()
if ip.isErr():
# No route was found, log error and continue without IP.
error "No routable IP address found, check your network connection",
error = ip.error
return (none(IpAddress), NoRoutingInfo)
elif ip.get().isGlobalUnicast():
return (some(ip.get()), PrefSrcIsPublic)
else:
return (none(IpAddress), PrefSrcIsPrivate)
elif bindAddress.isGlobalUnicast():
return (some(bindIp), BindAddressIsPublic)
else:
return (none(IpAddress), BindAddressIsPrivate)
# Try to detect a public IP assigned to this host, before trying NAT traversal.
proc getPublicRoutePrefSrcOrExternalIP*(
natStrategy: NatStrategy, bindIp: IpAddress, quiet = true
): Option[IpAddress] =
let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)
case prefSrcStatus
of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
return prefSrcIp
of PrefSrcIsPrivate, BindAddressIsPrivate:
let extIp = getExternalIP(natStrategy, quiet)
if extIp.isSome:
return some(extIp.get)
proc doPortMapping(
strategy: NatStrategy, tcpPort, udpPort: Port, description: string
): Option[(Port, Port)] {.gcsafe.} =
var
extTcpPort: Port
extUdpPort: Port
if strategy == NatStrategy.NatUpnp:
for t in [(tcpPort, UPNPProtocol.TCP), (udpPort, UPNPProtocol.UDP)]:
let
(port, protocol) = t
pmres = upnp.addPortMapping(
externalPort = $port,
protocol = protocol,
internalHost = upnp.lanAddr,
internalPort = $port,
desc = description,
leaseDuration = 0,
)
if pmres.isErr:
error "UPnP port mapping", msg = pmres.error, port
return
else:
# let's check it
let cres =
upnp.getSpecificPortMapping(externalPort = $port, protocol = protocol)
if cres.isErr:
warn "UPnP port mapping check failed. Assuming the check itself is broken and the port mapping was done.",
msg = cres.error
info "UPnP: added port mapping",
externalPort = port, internalPort = port, protocol = protocol
case protocol
of UPNPProtocol.TCP:
extTcpPort = port
of UPNPProtocol.UDP:
extUdpPort = port
elif strategy == NatStrategy.NatPmp:
for t in [(tcpPort, NatPmpProtocol.TCP), (udpPort, NatPmpProtocol.UDP)]:
let
(port, protocol) = t
pmres = npmp.addPortMapping(
eport = port.cushort,
iport = port.cushort,
protocol = protocol,
lifetime = NATPMP_LIFETIME,
)
if pmres.isErr:
error "NAT-PMP port mapping", msg = pmres.error, port
return
else:
let extPort = Port(pmres.value)
info "NAT-PMP: added port mapping",
externalPort = extPort, internalPort = port, protocol = protocol
case protocol
of NatPmpProtocol.TCP:
extTcpPort = extPort
of NatPmpProtocol.UDP:
extUdpPort = extPort
return some((extTcpPort, extUdpPort))
proc repeatPortMapping(args: PortMappingArgs) {.thread, raises: [ValueError].} =
ignoreSignalsInThread()
let
(strategy, tcpPort, udpPort, description) = args
interval = initDuration(seconds = PORT_MAPPING_INTERVAL)
sleepDuration = 1_000 # in ms, also the maximum delay after pressing Ctrl-C
var lastUpdate = now()
# We can't use copies of Miniupnp and NatPmp objects in this thread, because they share
# C pointers with other instances that have already been garbage collected, so
# we use threadvars instead and initialise them again with getExternalIP(),
# even though we don't need the external IP's value.
let ipres = getExternalIP(strategy, quiet = true)
if ipres.isSome:
while natClosed.load() == false:
let
# we're being silly here with this channel polling because we can't
# select on Nim channels like on Go ones
currTime = now()
if currTime >= (lastUpdate + interval):
discard doPortMapping(strategy, tcpPort, udpPort, description)
lastUpdate = currTime
sleep(sleepDuration)
proc stopNatThreads() {.noconv.} =
# stop the thread
debug "Stopping NAT port mapping renewal threads"
try:
natClosed.store(true)
joinThreads(natThreads)
except Exception as exc:
warn "Failed to stop NAT port mapping renewal thread", exc = exc.msg
# delete our port mappings
# FIXME: if the initial port mapping failed because it already existed for the
# required external port, we should not delete it. It might have been set up
# by another program.
# In Windows, a new thread is created for the signal handler, so we need to
# initialise our threadvars again.
let ipres = getExternalIP(strategy, quiet = true)
if ipres.isSome:
if strategy == NatStrategy.NatUpnp:
for entry in activeMappings:
for t in [
(entry.externalTcpPort, entry.internalTcpPort, UPNPProtocol.TCP),
(entry.externalUdpPort, entry.internalUdpPort, UPNPProtocol.UDP),
]:
let
(eport, iport, protocol) = t
pmres = upnp.deletePortMapping(externalPort = $eport, protocol = protocol)
if pmres.isErr:
error "UPnP port mapping deletion", msg = pmres.error
else:
debug "UPnP: deleted port mapping",
externalPort = eport, internalPort = iport, protocol = protocol
elif strategy == NatStrategy.NatPmp:
for entry in activeMappings:
for t in [
(entry.externalTcpPort, entry.internalTcpPort, NatPmpProtocol.TCP),
(entry.externalUdpPort, entry.internalUdpPort, NatPmpProtocol.UDP),
]:
let
(eport, iport, protocol) = t
pmres = npmp.deletePortMapping(
eport = eport.cushort, iport = iport.cushort, protocol = protocol
)
if pmres.isErr:
error "NAT-PMP port mapping deletion", msg = pmres.error
else:
debug "NAT-PMP: deleted port mapping",
externalPort = eport, internalPort = iport, protocol = protocol
proc redirectPorts*(
strategy: NatStrategy, tcpPort, udpPort: Port, description: string
): Option[(Port, Port)] =
result = doPortMapping(strategy, tcpPort, udpPort, description)
if result.isSome:
let (externalTcpPort, externalUdpPort) = result.get()
# needed by NAT-PMP on port mapping deletion
# Port mapping works. Let's launch a thread that repeats it, in case the
# NAT-PMP lease expires or the router is rebooted and forgets all about
# these mappings.
activeMappings.add(
PortMappings(
internalTcpPort: tcpPort,
externalTcpPort: externalTcpPort,
internalUdpPort: udpPort,
externalUdpPort: externalUdpPort,
description: description,
)
)
try:
natThreads.add(Thread[PortMappingArgs]())
natThreads[^1].createThread(
repeatPortMapping, (strategy, externalTcpPort, externalUdpPort, description)
)
# atexit() in disguise
if natThreads.len == 1:
# we should register the thread termination function only once
addQuitProc(stopNatThreads)
except Exception as exc:
warn "Failed to create NAT port mapping renewal thread", exc = exc.msg
proc setupNat*(
natStrategy: NatStrategy, tcpPort, udpPort: Port, clientId: string
): tuple[ip: Option[IpAddress], tcpPort, udpPort: Option[Port]] =
## Setup NAT port mapping and get external IP address.
## If any of this fails, we don't return any IP address but do return the
## original ports as best effort.
## TODO: Allow for tcp or udp port mapping to be optional.
if extIp.isNone:
extIp = getExternalIP(natStrategy)
if extIp.isSome:
let ip = extIp.get
let extPorts = (
{.gcsafe.}:
redirectPorts(
strategy, tcpPort = tcpPort, udpPort = udpPort, description = clientId
)
)
if extPorts.isSome:
let (extTcpPort, extUdpPort) = extPorts.get()
(ip: some(ip), tcpPort: some(extTcpPort), udpPort: some(extUdpPort))
else:
warn "UPnP/NAT-PMP available but port forwarding failed"
(ip: none(IpAddress), tcpPort: some(tcpPort), udpPort: some(udpPort))
else:
warn "UPnP/NAT-PMP not available"
(ip: none(IpAddress), tcpPort: some(tcpPort), udpPort: some(udpPort))
proc setupAddress*(
natConfig: NatConfig, bindIp: IpAddress, tcpPort, udpPort: Port, clientId: string
): tuple[ip: Option[IpAddress], tcpPort, udpPort: Option[Port]] {.gcsafe.} =
## Set-up of the external address via any of the ways as configured in
## `NatConfig`. In case all of them fail, an error is logged and the bind
## ports are also used as external ports, as a best effort and in the hope
## that the external IP can be figured out by other means at a later stage.
## TODO: Allow for tcp or udp bind ports to be optional.
if natConfig.hasExtIp:
# any required port redirection must be done by hand
return (some(natConfig.extIp), some(tcpPort), some(udpPort))
case natConfig.nat
of NatStrategy.NatAny:
let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)
case prefSrcStatus
of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
return (prefSrcIp, some(tcpPort), some(udpPort))
of PrefSrcIsPrivate, BindAddressIsPrivate:
return setupNat(natConfig.nat, tcpPort, udpPort, clientId)
of NatStrategy.NatNone:
let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)
case prefSrcStatus
of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
return (prefSrcIp, some(tcpPort), some(udpPort))
of PrefSrcIsPrivate:
error "No public IP address found. Should not use --nat:none option"
return (none(IpAddress), some(tcpPort), some(udpPort))
of BindAddressIsPrivate:
error "Bind IP is not a public IP address. Should not use --nat:none option"
return (none(IpAddress), some(tcpPort), some(udpPort))
of NatStrategy.NatUpnp, NatStrategy.NatPmp:
return setupNat(natConfig.nat, tcpPort, udpPort, clientId)
proc nattedAddress*(
natConfig: NatConfig, addrs: seq[MultiAddress], udpPort: Port
): tuple[libp2p, discovery: seq[MultiAddress]] =
## Takes a NAT configuration, a sequence of multiaddresses and a UDP port, and returns:
## - Modified multiaddresses with NAT-mapped addresses for libp2p
## - Discovery addresses with NAT-mapped UDP ports
var discoveryAddrs = newSeq[MultiAddress](0)
let newAddrs = addrs.mapIt:
block:
# Extract IP address and port from the multiaddress
let (ipPart, port) = getAddressAndPort(it)
if ipPart.isSome and port.isSome:
# Try to setup NAT mapping for the address
let (newIP, tcp, udp) =
setupAddress(natConfig, ipPart.get, port.get, udpPort, "codex")
if newIP.isSome:
# NAT mapping successful - add discovery address with mapped UDP port
discoveryAddrs.add(getMultiAddrWithIPAndUDPPort(newIP.get, udp.get))
# Remap original address with NAT IP and TCP port
it.remapAddr(ip = newIP, port = tcp)
else:
# NAT mapping failed - use original address
echo "Failed to get external IP, using original address", it
discoveryAddrs.add(getMultiAddrWithIPAndUDPPort(ipPart.get, udpPort))
it
else:
# Invalid multiaddress format - return as is
it
(newAddrs, discoveryAddrs)
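A hedged call-site sketch for the helper above (the listen address and port are placeholders):

# Resolve NAT-mapped libp2p and discovery addresses for a node
# listening on TCP/UDP 8070; NatAny tries UPnP first, then NAT-PMP.
let natConfig = NatConfig(hasExtIp: false, nat: NatStrategy.NatAny)
let listenAddrs = @[MultiAddress.init("/ip4/0.0.0.0/tcp/8070").tryGet()]
let (libp2pAddrs, discoveryAddrs) =
  natConfig.nattedAddress(listenAddrs, Port(8070))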

File diff suppressed because it is too large

View File

@ -2,10 +2,9 @@ import pkg/stint
type
Periodicity* = object
seconds*: uint64
Period* = uint64
Timestamp* = uint64
seconds*: UInt256
Period* = UInt256
Timestamp* = UInt256
func periodOf*(periodicity: Periodicity, timestamp: Timestamp): Period =
timestamp div periodicity.seconds
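A worked example, assuming the UInt256-based types on the added side of this hunk (values arbitrary):

# With 10-second periods, timestamp 25 falls into period 2 (25 div 10).
let periodicity = Periodicity(seconds: 10.u256)
assert periodicity.periodOf(25.u256) == 2.u256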

View File

@ -14,17 +14,20 @@ export purchase
type
Purchasing* = ref object
market*: Market
market: Market
clock: Clock
purchases: Table[PurchaseId, Purchase]
proofProbability*: UInt256
PurchaseTimeout* = Timeout
const DefaultProofProbability = 100.u256
proc new*(_: type Purchasing, market: Market, clock: Clock): Purchasing =
Purchasing(market: market, clock: clock, proofProbability: DefaultProofProbability)
Purchasing(
market: market,
clock: clock,
proofProbability: DefaultProofProbability,
)
proc load*(purchasing: Purchasing) {.async.} =
let market = purchasing.market
@ -40,9 +43,9 @@ proc start*(purchasing: Purchasing) {.async.} =
proc stop*(purchasing: Purchasing) {.async.} =
discard
proc populate*(
purchasing: Purchasing, request: StorageRequest
): Future[StorageRequest] {.async.} =
proc populate*(purchasing: Purchasing,
request: StorageRequest
): Future[StorageRequest] {.async.} =
result = request
if result.ask.proofProbability == 0.u256:
result.ask.proofProbability = purchasing.proofProbability
@ -52,9 +55,9 @@ proc populate*(
result.nonce = Nonce(id)
result.client = await purchasing.market.getSigner()
proc purchase*(
purchasing: Purchasing, request: StorageRequest
): Future[Purchase] {.async.} =
proc purchase*(purchasing: Purchasing,
request: StorageRequest
): Future[Purchase] {.async.} =
let request = await purchasing.populate(request)
let purchase = Purchase.new(request, purchasing.market, purchasing.clock)
purchase.start()
@ -72,3 +75,4 @@ func getPurchaseIds*(purchasing: Purchasing): seq[PurchaseId] =
for key in purchasing.purchases.keys:
pIds.add(key)
return pIds

View File

@ -25,7 +25,10 @@ export purchaseid
export statemachine
func new*(
_: type Purchase, requestId: RequestId, market: Market, clock: Clock
_: type Purchase,
requestId: RequestId,
market: Market,
clock: Clock
): Purchase =
## create a new instance of a Purchase
##
@ -39,7 +42,10 @@ func new*(
return purchase
func new*(
_: type Purchase, request: StorageRequest, market: Market, clock: Clock
_: type Purchase,
request: StorageRequest,
market: Market,
clock: Clock
): Purchase =
## Create a new purchase using the given market and clock
let purchase = Purchase.new(request.id, market, clock)
@ -70,5 +76,4 @@ func error*(purchase: Purchase): ?(ref CatchableError) =
func state*(purchase: Purchase): ?string =
proc description(state: State): string =
$state
purchase.query(description)

View File

@ -1,14 +1,12 @@
import std/hashes
import pkg/nimcrypto
import ../logutils
type PurchaseId* = distinct array[32, byte]
logutils.formatIt(LogFormat.textLines, PurchaseId):
it.short0xHexLog
logutils.formatIt(LogFormat.json, PurchaseId):
it.to0xHexLog
logutils.formatIt(LogFormat.textLines, PurchaseId): it.short0xHexLog
logutils.formatIt(LogFormat.json, PurchaseId): it.to0xHexLog
proc hash*(x: PurchaseId): Hash {.borrow.}
proc `==`*(x, y: PurchaseId): bool {.borrow.}
proc toHex*(x: PurchaseId): string =
array[32, byte](x).toHex
proc toHex*(x: PurchaseId): string = array[32, byte](x).toHex

View File

@ -14,6 +14,5 @@ type
clock*: Clock
requestId*: RequestId
request*: ?StorageRequest
PurchaseState* = ref object of State
PurchaseError* = object of CodexError

View File

@ -1,35 +1,25 @@
import pkg/metrics
import ../../logutils
import ../../utils/exceptions
import ../statemachine
import ./error
import ./errorhandling
declareCounter(codex_purchases_cancelled, "codex purchases cancelled")
logScope:
topics = "marketplace purchases cancelled"
type PurchaseCancelled* = ref object of PurchaseState
type PurchaseCancelled* = ref object of ErrorHandlingState
method `$`*(state: PurchaseCancelled): string =
"cancelled"
method run*(
state: PurchaseCancelled, machine: Machine
): Future[?State] {.async: (raises: []).} =
method run*(state: PurchaseCancelled, machine: Machine): Future[?State] {.async.} =
codex_purchases_cancelled.inc()
let purchase = Purchase(machine)
try:
warn "Request cancelled, withdrawing remaining funds",
requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
warn "Request cancelled, withdrawing remaining funds", requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
let error = newException(Timeout, "Purchase cancelled due to timeout")
purchase.future.fail(error)
except CancelledError as e:
trace "PurchaseCancelled.run was cancelled", error = e.msgDetail
except CatchableError as e:
error "Error during PurchaseCancelled.run", error = e.msgDetail
return some State(PurchaseErrored(error: e))
let error = newException(Timeout, "Purchase cancelled due to timeout")
purchase.future.fail(error)

View File

@ -14,13 +14,10 @@ type PurchaseErrored* = ref object of PurchaseState
method `$`*(state: PurchaseErrored): string =
"errored"
method run*(
state: PurchaseErrored, machine: Machine
): Future[?State] {.async: (raises: []).} =
method run*(state: PurchaseErrored, machine: Machine): Future[?State] {.async.} =
codex_purchases_error.inc()
let purchase = Purchase(machine)
error "Purchasing error",
error = state.error.msgDetail, requestId = purchase.requestId
error "Purchasing error", error=state.error.msgDetail, requestId = purchase.requestId
purchase.future.fail(state.error)

View File

@ -0,0 +1,9 @@
import pkg/questionable
import ../statemachine
import ./error
type
ErrorHandlingState* = ref object of PurchaseState
method onError*(state: ErrorHandlingState, error: ref CatchableError): ?State =
some State(PurchaseErrored(error: error))
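A hedged sketch of how a state opts into this handler (the state name and the raised error are illustrative only):

# Any purchase state deriving from ErrorHandlingState has CatchableErrors
# raised in `run` routed to onError, which transitions to PurchaseErrored.
type PurchaseExampleState = ref object of ErrorHandlingState

method run*(state: PurchaseExampleState, machine: Machine): Future[?State] {.async.} =
  raise newException(PurchaseError, "illustrative failure")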

View File

@ -1,30 +1,21 @@
import pkg/metrics
import ../statemachine
import ../../logutils
import ../../utils/exceptions
import ./error
declareCounter(codex_purchases_failed, "codex purchases failed")
type PurchaseFailed* = ref object of PurchaseState
type
PurchaseFailed* = ref object of PurchaseState
method `$`*(state: PurchaseFailed): string =
"failed"
method run*(
state: PurchaseFailed, machine: Machine
): Future[?State] {.async: (raises: []).} =
method run*(state: PurchaseFailed, machine: Machine): Future[?State] {.async.} =
codex_purchases_failed.inc()
let purchase = Purchase(machine)
try:
warn "Request failed, withdrawing remaining funds", requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
except CancelledError as e:
trace "PurchaseFailed.run was cancelled", error = e.msgDetail
except CatchableError as e:
error "Error during PurchaseFailed.run", error = e.msgDetail
return some State(PurchaseErrored(error: e))
warn "Request failed, withdrawing remaining funds", requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
let error = newException(PurchaseError, "Purchase failed")
return some State(PurchaseErrored(error: error))

View File

@ -1,9 +1,7 @@
import pkg/metrics
import ../statemachine
import ../../utils/exceptions
import ../../logutils
import ./error
declareCounter(codex_purchases_finished, "codex purchases finished")
@@ -15,19 +13,10 @@ type PurchaseFinished* = ref object of PurchaseState
method `$`*(state: PurchaseFinished): string =
"finished"
method run*(
state: PurchaseFinished, machine: Machine
): Future[?State] {.async: (raises: []).} =
method run*(state: PurchaseFinished, machine: Machine): Future[?State] {.async.} =
codex_purchases_finished.inc()
let purchase = Purchase(machine)
try:
info "Purchase finished, withdrawing remaining funds",
requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
info "Purchase finished, withdrawing remaining funds", requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
purchase.future.complete()
except CancelledError as e:
trace "PurchaseFinished.run was cancelled", error = e.msgDetail
except CatchableError as e:
error "Error during PurchaseFinished.run", error = e.msgDetail
return some State(PurchaseErrored(error: e))
purchase.future.complete()


@@ -1,28 +1,18 @@
import pkg/metrics
import ../../logutils
import ../../utils/exceptions
import ../statemachine
import ./errorhandling
import ./submitted
import ./error
declareCounter(codex_purchases_pending, "codex purchases pending")
type PurchasePending* = ref object of PurchaseState
type PurchasePending* = ref object of ErrorHandlingState
method `$`*(state: PurchasePending): string =
"pending"
method run*(
state: PurchasePending, machine: Machine
): Future[?State] {.async: (raises: []).} =
method run*(state: PurchasePending, machine: Machine): Future[?State] {.async.} =
codex_purchases_pending.inc()
let purchase = Purchase(machine)
try:
let request = !purchase.request
await purchase.market.requestStorage(request)
return some State(PurchaseSubmitted())
except CancelledError as e:
trace "PurchasePending.run was cancelled", error = e.msgDetail
except CatchableError as e:
error "Error during PurchasePending.run", error = e.msgDetail
return some State(PurchaseErrored(error: e))
let request = !purchase.request
await purchase.market.requestStorage(request)
return some State(PurchaseSubmitted())


@@ -1,25 +1,22 @@
import pkg/metrics
import ../../logutils
import ../../utils/exceptions
import ../statemachine
import ./errorhandling
import ./finished
import ./failed
import ./error
declareCounter(codex_purchases_started, "codex purchases started")
logScope:
topics = "marketplace purchases started"
type PurchaseStarted* = ref object of PurchaseState
type PurchaseStarted* = ref object of ErrorHandlingState
method `$`*(state: PurchaseStarted): string =
"started"
method run*(
state: PurchaseStarted, machine: Machine
): Future[?State] {.async: (raises: []).} =
method run*(state: PurchaseStarted, machine: Machine): Future[?State] {.async.} =
codex_purchases_started.inc()
let purchase = Purchase(machine)
@@ -30,25 +27,15 @@ method run*(
let failed = newFuture[void]()
proc callback(_: RequestId) =
failed.complete()
let subscription = await market.subscribeRequestFailed(purchase.requestId, callback)
var ended: Future[void]
try:
let subscription = await market.subscribeRequestFailed(purchase.requestId, callback)
# Ensure that we're past the request end by waiting an additional second
ended = clock.waitUntil((await market.getRequestEnd(purchase.requestId)) + 1)
let fut = await one(ended, failed)
await subscription.unsubscribe()
if fut.id == failed.id:
ended.cancelSoon()
return some State(PurchaseFailed())
else:
failed.cancelSoon()
return some State(PurchaseFinished())
except CancelledError as e:
ended.cancelSoon()
failed.cancelSoon()
trace "PurchaseStarted.run was cancelled", error = e.msgDetail
except CatchableError as e:
error "Error during PurchaseStarted.run", error = e.msgDetail
return some State(PurchaseErrored(error: e))
# Ensure that we're past the request end by waiting an additional second
let ended = clock.waitUntil((await market.getRequestEnd(purchase.requestId)) + 1)
let fut = await one(ended, failed)
await subscription.unsubscribe()
if fut.id == failed.id:
ended.cancel()
return some State(PurchaseFailed())
else:
failed.cancel()
return some State(PurchaseFinished())
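
Both variants above race request completion against failure, and whichever future loses the race must be cancelled or it leaks. The race in isolation, assuming chronos v4 (the project's async runtime) and the same one/cancelSoon calls already shown:

import pkg/chronos

proc race() {.async.} =
  let failed = newFuture[void]("failed")
  let ended = sleepAsync(10.milliseconds)  # stands in for clock.waitUntil(...)

  let winner = await one(ended, failed)    # first future to finish
  if winner.id == failed.id:
    ended.cancelSoon()                     # cancel the loser to avoid a leak
  else:
    failed.cancelSoon()

when isMainModule:
  waitFor race()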


@@ -1,41 +1,36 @@
import pkg/metrics
import ../../logutils
import ../../utils/exceptions
import ../statemachine
import ./errorhandling
import ./started
import ./cancelled
import ./error
logScope:
topics = "marketplace purchases submitted"
declareCounter(codex_purchases_submitted, "codex purchases submitted")
type PurchaseSubmitted* = ref object of PurchaseState
type PurchaseSubmitted* = ref object of ErrorHandlingState
method `$`*(state: PurchaseSubmitted): string =
"submitted"
method run*(
state: PurchaseSubmitted, machine: Machine
): Future[?State] {.async: (raises: []).} =
method run*(state: PurchaseSubmitted, machine: Machine): Future[?State] {.async.} =
codex_purchases_submitted.inc()
let purchase = Purchase(machine)
let request = !purchase.request
let market = purchase.market
let clock = purchase.clock
info "Request submitted, waiting for slots to be filled",
requestId = purchase.requestId
info "Request submitted, waiting for slots to be filled", requestId = purchase.requestId
proc wait() {.async.} =
let done = newAsyncEvent()
proc wait {.async.} =
let done = newFuture[void]()
proc callback(_: RequestId) =
done.fire()
done.complete()
let subscription = await market.subscribeFulfillment(request.id, callback)
await done.wait()
await done
await subscription.unsubscribe()
proc withTimeout(future: Future[void]) {.async.} =
@@ -47,10 +42,5 @@ method run*(
await wait().withTimeout()
except Timeout:
return some State(PurchaseCancelled())
except CancelledError as e:
trace "PurchaseSubmitted.run was cancelled", error = e.msgDetail
except CatchableError as e:
error "Error during PurchaseSubmitted.run", error = e.msgDetail
return some State(PurchaseErrored(error: e))
return some State(PurchaseStarted())
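
This hunk bridges a subscription callback into awaitable code, on one side with an AsyncEvent (fire/wait) and on the other with a bare Future[void] (complete/await). A small sketch of the event variant, again assuming chronos:

import pkg/chronos

proc waitForFulfillment() {.async.} =
  let done = newAsyncEvent()
  proc callback() =
    done.fire()      # called by the subscription
  callback()         # simulate the fulfillment event arriving
  await done.wait()  # resumes once fired

when isMainModule:
  waitFor waitForFulfillment()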


@@ -1,44 +1,35 @@
import pkg/metrics
import ../../utils/exceptions
import ../../logutils
import ../statemachine
import ./errorhandling
import ./submitted
import ./started
import ./cancelled
import ./finished
import ./failed
import ./error
declareCounter(codex_purchases_unknown, "codex purchases unknown")
type PurchaseUnknown* = ref object of PurchaseState
type PurchaseUnknown* = ref object of ErrorHandlingState
method `$`*(state: PurchaseUnknown): string =
"unknown"
method run*(
state: PurchaseUnknown, machine: Machine
): Future[?State] {.async: (raises: []).} =
try:
codex_purchases_unknown.inc()
let purchase = Purchase(machine)
if (request =? await purchase.market.getRequest(purchase.requestId)) and
(requestState =? await purchase.market.requestState(purchase.requestId)):
purchase.request = some request
method run*(state: PurchaseUnknown, machine: Machine): Future[?State] {.async.} =
codex_purchases_unknown.inc()
let purchase = Purchase(machine)
if (request =? await purchase.market.getRequest(purchase.requestId)) and
(requestState =? await purchase.market.requestState(purchase.requestId)):
case requestState
of RequestState.New:
return some State(PurchaseSubmitted())
of RequestState.Started:
return some State(PurchaseStarted())
of RequestState.Cancelled:
return some State(PurchaseCancelled())
of RequestState.Finished:
return some State(PurchaseFinished())
of RequestState.Failed:
return some State(PurchaseFailed())
except CancelledError as e:
trace "PurchaseUnknown.run was cancelled", error = e.msgDetail
except CatchableError as e:
error "Error during PurchaseUnknown.run", error = e.msgDetail
return some State(PurchaseErrored(error: e))
purchase.request = some request
case requestState
of RequestState.New:
return some State(PurchaseSubmitted())
of RequestState.Started:
return some State(PurchaseStarted())
of RequestState.Cancelled:
return some State(PurchaseCancelled())
of RequestState.Finished:
return some State(PurchaseFinished())
of RequestState.Failed:
return some State(PurchaseFailed())

File diff suppressed because it is too large.


@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -14,7 +14,7 @@ import pkg/chronos
import pkg/libp2p
import pkg/stew/base10
import pkg/stew/byteutils
import pkg/results
import pkg/stew/results
import pkg/stint
import ../sales
@@ -25,7 +25,9 @@ proc encodeString*(cid: type Cid): Result[string, cstring] =
ok($cid)
proc decodeString*(T: type Cid, value: string): Result[Cid, cstring] =
Cid.init(value).mapErr do(e: CidError) -> cstring:
Cid
.init(value)
.mapErr do(e: CidError) -> cstring:
case e
of CidError.Incorrect: "Incorrect Cid".cstring
of CidError.Unsupported: "Unsupported Cid".cstring
@@ -42,8 +44,9 @@ proc encodeString*(address: MultiAddress): Result[string, cstring] =
ok($address)
proc decodeString*(T: type MultiAddress, value: string): Result[MultiAddress, cstring] =
MultiAddress.init(value).mapErr do(e: string) -> cstring:
cstring(e)
MultiAddress
.init(value)
.mapErr do(e: string) -> cstring: cstring(e)
proc decodeString*(T: type SomeUnsignedInt, value: string): Result[T, cstring] =
Base10.decode(T, value)
@@ -52,7 +55,7 @@ proc encodeString*(value: SomeUnsignedInt): Result[string, cstring] =
ok(Base10.toString(value))
proc decodeString*(T: type Duration, value: string): Result[T, cstring] =
let v = ?Base10.decode(uint32, value)
let v = ? Base10.decode(uint32, value)
ok(v.minutes)
proc encodeString*(value: Duration): Result[string, cstring] =
@@ -74,20 +77,19 @@ proc decodeString*(_: type UInt256, value: string): Result[UInt256, cstring] =
except ValueError as e:
err e.msg.cstring
proc decodeString*(
_: type array[32, byte], value: string
): Result[array[32, byte], cstring] =
proc decodeString*(_: type array[32, byte],
value: string): Result[array[32, byte], cstring] =
try:
ok array[32, byte].fromHex(value)
except ValueError as e:
err e.msg.cstring
proc decodeString*[T: PurchaseId | RequestId | Nonce | SlotId | AvailabilityId](
_: type T, value: string
): Result[T, cstring] =
proc decodeString*[T: PurchaseId | RequestId | Nonce | SlotId | AvailabilityId](_: type T,
value: string): Result[T, cstring] =
array[32, byte].decodeString(value).map(id => T(id))
proc decodeString*(t: typedesc[string], value: string): Result[string, cstring] =
proc decodeString*(t: typedesc[string],
value: string): Result[string, cstring] =
ok(value)
proc encodeString*(value: string): RestResult[string] =
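
All converters in this file share one shape: parse the path or query string into T, or map the failure into a cstring for the REST layer. The same shape for a hypothetical Port type, using nim-results' ok/err just as the file does:

import std/strutils
import pkg/results

type Port = distinct uint16  # hypothetical example type

proc decodeString(_: type Port, value: string): Result[Port, cstring] =
  try:
    let v = parseUInt(value)
    if v > uint16.high.uint:
      return err("port out of range".cstring)
    ok(Port(v.uint16))
  except ValueError:
    err("invalid port".cstring)

when isMainModule:
  doAssert Port.decodeString("8080").isOk
  doAssert Port.decodeString("not-a-port").isErr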


@@ -13,11 +13,11 @@ export json
type
StorageRequestParams* = object
duration* {.serialize.}: uint64
duration* {.serialize.}: UInt256
proofProbability* {.serialize.}: UInt256
pricePerBytePerSecond* {.serialize.}: UInt256
collateralPerByte* {.serialize.}: UInt256
expiry* {.serialize.}: uint64
reward* {.serialize.}: UInt256
collateral* {.serialize.}: UInt256
expiry* {.serialize.}: ?UInt256
nodes* {.serialize.}: ?uint
tolerance* {.serialize.}: ?uint
@@ -28,18 +26,16 @@ type
error* {.serialize.}: ?string
RestAvailability* = object
totalSize* {.serialize.}: uint64
duration* {.serialize.}: uint64
minPricePerBytePerSecond* {.serialize.}: UInt256
totalCollateral* {.serialize.}: UInt256
freeSize* {.serialize.}: ?uint64
enabled* {.serialize.}: ?bool
until* {.serialize.}: ?SecondsSince1970
totalSize* {.serialize.}: UInt256
duration* {.serialize.}: UInt256
minPrice* {.serialize.}: UInt256
maxCollateral* {.serialize.}: UInt256
freeSize* {.serialize.}: ?UInt256
RestSalesAgent* = object
state* {.serialize.}: string
requestId* {.serialize.}: RequestId
slotIndex* {.serialize.}: uint64
slotIndex* {.serialize.}: UInt256
request* {.serialize.}: ?StorageRequest
reservation* {.serialize.}: ?Reservation
@@ -76,10 +74,15 @@ type
quotaReservedBytes* {.serialize.}: NBytes
proc init*(_: type RestContentList, content: seq[RestContent]): RestContentList =
RestContentList(content: content)
RestContentList(
content: content
)
proc init*(_: type RestContent, cid: Cid, manifest: Manifest): RestContent =
RestContent(cid: cid, manifest: manifest)
RestContent(
cid: cid,
manifest: manifest
)
proc init*(_: type RestNode, node: dn.Node): RestNode =
RestNode(
@@ -87,7 +90,7 @@ proc init*(_: type RestNode, node: dn.Node): RestNode =
peerId: node.record.data.peerId,
record: node.record,
address: node.address,
seen: node.seen > 0.5,
seen: node.seen
)
proc init*(_: type RestRoutingTable, routingTable: rt.RoutingTable): RestRoutingTable =
@@ -96,23 +99,28 @@ proc init*(_: type RestRoutingTable, routingTable: rt.RoutingTable): RestRouting
for node in bucket.nodes:
nodes.add(RestNode.init(node))
RestRoutingTable(localNode: RestNode.init(routingTable.localNode), nodes: nodes)
RestRoutingTable(
localNode: RestNode.init(routingTable.localNode),
nodes: nodes
)
proc init*(_: type RestPeerRecord, peerRecord: PeerRecord): RestPeerRecord =
RestPeerRecord(
peerId: peerRecord.peerId, seqNo: peerRecord.seqNo, addresses: peerRecord.addresses
peerId: peerRecord.peerId,
seqNo: peerRecord.seqNo,
addresses: peerRecord.addresses
)
proc init*(_: type RestNodeId, id: NodeId): RestNodeId =
RestNodeId(id: id)
RestNodeId(
id: id
)
proc `%`*(obj: StorageRequest | Slot): JsonNode =
let jsonObj = newJObject()
for k, v in obj.fieldPairs:
jsonObj[k] = %v
for k, v in obj.fieldPairs: jsonObj[k] = %v
jsonObj["id"] = %(obj.id)
return jsonObj
proc `%`*(obj: RestNodeId): JsonNode =
% $obj.id
proc `%`*(obj: RestNodeId): JsonNode = % $obj.id
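
The % override above serializes every field via fieldPairs and then appends the computed id. The same shape with std/json and a minimal stand-in type:

import std/json

type Slot = object  # stand-in for StorageRequest/Slot
  index: int
  size: int

proc id(s: Slot): string =
  "slot-" & $s.index

proc `%`(obj: Slot): JsonNode =
  let jsonObj = newJObject()
  for k, v in obj.fieldPairs:
    jsonObj[k] = %v           # serialize each field in declaration order
  jsonObj["id"] = %(obj.id)   # then append the derived id
  jsonObj

when isMainModule:
  echo %Slot(index: 1, size: 2)  # {"index":1,"size":2,"id":"slot-1"}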


@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,7 +7,9 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [], gcsafe.}
import pkg/upraises
push: {.upraises: [].}
import pkg/libp2p/crypto/crypto
import pkg/bearssl/rand
@@ -28,8 +30,7 @@ proc instance*(t: type Rng): Rng =
const randMax = 18_446_744_073_709_551_615'u64
proc rand*(rng: Rng, max: Natural): int =
if max == 0:
return 0
if max == 0: return 0
while true:
let x = rng[].generate(uint64)
@@ -40,8 +41,8 @@ proc sample*[T](rng: Rng, a: openArray[T]): T =
result = a[rng.rand(a.high)]
proc sample*[T](
rng: Rng, sample, exclude: openArray[T]
): T {.raises: [Defect, RngSampleError].} =
rng: Rng, sample, exclude: openArray[T]): T
{.raises: [Defect, RngSampleError].} =
if sample == exclude:
raise newException(RngSampleError, "Sample and exclude arrays are the same!")
@@ -52,15 +53,6 @@ proc sample*[T](
break
proc sample*[T](
rng: Rng, sample: openArray[T], limit: int
): seq[T] {.raises: [Defect, RngSampleError].} =
if limit > sample.len:
raise newException(RngSampleError, "Limit cannot be larger than sample!")
for _ in 0 ..< min(sample.len, limit):
result.add(rng.sample(sample, result))
proc shuffle*[T](rng: Rng, a: var openArray[T]) =
for i in countdown(a.high, 1):
let j = rng.rand(i)
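
The rand shown above (its rejection test is cut off in this view) avoids modulo bias by re-drawing any value that falls into the final partial bucket. A self-contained sketch of the same idea, substituting std/random for the BearSSL-backed generator:

import std/random

const randMax = 18_446_744_073_709_551_615'u64  # same constant as above

proc randBelow(r: var Rand, max: Natural): int =
  ## Uniform int in 0..max via rejection sampling.
  if max == 0: return 0
  let bound = max.uint64 + 1
  while true:
    let x = r.next()  # uniform 64-bit draw
    if x < randMax - (randMax mod bound):
      return int(x mod bound)  # safely inside the complete buckets

when isMainModule:
  var r = initRand(42)
  for _ in 0 ..< 1000:
    doAssert randBelow(r, 10) in 0 .. 10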


@@ -16,13 +16,13 @@ import ./sales/statemachine
import ./sales/slotqueue
import ./sales/states/preparing
import ./sales/states/unknown
import ./utils/then
import ./utils/trackedfutures
import ./utils/exceptions
## Sales holds a list of available storage that it may sell.
##
## When storage is requested on the market that matches availability, the Sales
## object will instruct the Logos Storage node to persist the requested data. Once the
## object will instruct the Codex node to persist the requested data. Once the
## data has been persisted, it uploads a proof of storage to the market in an
## attempt to win a storage contract.
##
@@ -45,12 +45,13 @@ export salescontext
logScope:
topics = "sales marketplace"
type Sales* = ref object
context*: SalesContext
agents*: seq[SalesAgent]
running: bool
subscriptions: seq[market.Subscription]
trackedFutures: TrackedFutures
type
Sales* = ref object
context*: SalesContext
agents*: seq[SalesAgent]
running: bool
subscriptions: seq[market.Subscription]
trackedFutures: TrackedFutures
proc `onStore=`*(sales: Sales, onStore: OnStore) =
sales.context.onStore = some onStore
@@ -67,31 +68,28 @@ proc `onProve=`*(sales: Sales, callback: OnProve) =
proc `onExpiryUpdate=`*(sales: Sales, callback: OnExpiryUpdate) =
sales.context.onExpiryUpdate = some callback
proc onStore*(sales: Sales): ?OnStore =
sales.context.onStore
proc onStore*(sales: Sales): ?OnStore = sales.context.onStore
proc onClear*(sales: Sales): ?OnClear =
sales.context.onClear
proc onClear*(sales: Sales): ?OnClear = sales.context.onClear
proc onSale*(sales: Sales): ?OnSale =
sales.context.onSale
proc onSale*(sales: Sales): ?OnSale = sales.context.onSale
proc onProve*(sales: Sales): ?OnProve =
sales.context.onProve
proc onProve*(sales: Sales): ?OnProve = sales.context.onProve
proc onExpiryUpdate*(sales: Sales): ?OnExpiryUpdate =
sales.context.onExpiryUpdate
proc onExpiryUpdate*(sales: Sales): ?OnExpiryUpdate = sales.context.onExpiryUpdate
proc new*(_: type Sales, market: Market, clock: Clock, repo: RepoStore): Sales =
proc new*(_: type Sales,
market: Market,
clock: Clock,
repo: RepoStore): Sales =
Sales.new(market, clock, repo, 0)
proc new*(
_: type Sales,
market: Market,
clock: Clock,
repo: RepoStore,
simulateProofFailures: int,
): Sales =
proc new*(_: type Sales,
market: Market,
clock: Clock,
repo: RepoStore,
simulateProofFailures: int): Sales =
let reservations = Reservations.new(repo)
Sales(
context: SalesContext(
@@ -99,110 +97,117 @@ proc new*(
clock: clock,
reservations: reservations,
slotQueue: SlotQueue.new(),
simulateProofFailures: simulateProofFailures,
simulateProofFailures: simulateProofFailures
),
trackedFutures: TrackedFutures.new(),
subscriptions: @[],
subscriptions: @[]
)
proc remove(sales: Sales, agent: SalesAgent) {.async: (raises: []).} =
proc remove(sales: Sales, agent: SalesAgent) {.async.} =
await agent.stop()
if sales.running:
sales.agents.keepItIf(it != agent)
proc cleanUp(
sales: Sales, agent: SalesAgent, reprocessSlot: bool, returnedCollateral: ?UInt256
) {.async: (raises: []).} =
proc cleanUp(sales: Sales,
agent: SalesAgent,
returnBytes: bool,
reprocessSlot: bool,
processing: Future[void]) {.async.} =
let data = agent.data
logScope:
topics = "sales cleanUp"
requestId = data.requestId
slotIndex = data.slotIndex
reservationId = data.reservation .? id |? ReservationId.default
availabilityId = data.reservation .? availabilityId |? AvailabilityId.default
reservationId = data.reservation.?id |? ReservationId.default
availabilityId = data.reservation.?availabilityId |? AvailabilityId.default
trace "cleaning up sales agent"
# if a reservation for the SalesAgent was not created, it means that
# cleanUp was called before the sales process really started, so there
# are no bytes to be returned
if request =? data.request and reservation =? data.reservation:
if returnErr =? (
await noCancel sales.context.reservations.returnBytesToAvailability(
reservation.availabilityId, reservation.id, request.ask.slotSize
)
).errorOption:
error "failure returning bytes",
error = returnErr.msg, bytes = request.ask.slotSize
if returnBytes and request =? data.request and reservation =? data.reservation:
if returnErr =? (await sales.context.reservations.returnBytesToAvailability(
reservation.availabilityId,
reservation.id,
request.ask.slotSize
)).errorOption:
error "failure returning bytes",
error = returnErr.msg,
bytes = request.ask.slotSize
# delete reservation and return reservation bytes back to the availability
if reservation =? data.reservation and
deleteErr =? (
await noCancel sales.context.reservations.deleteReservation(
reservation.id, reservation.availabilityId, returnedCollateral
)
).errorOption:
# delete reservation and return reservation bytes back to the availability
if reservation =? data.reservation and
deleteErr =? (await sales.context.reservations.deleteReservation(
reservation.id,
reservation.availabilityId
)).errorOption:
error "failure deleting reservation", error = deleteErr.msg
# Re-add items back into the queue to prevent small availabilities from
# draining the queue. Seen items will be ordered last.
if reprocessSlot and request =? data.request and var item =? agent.data.slotQueueItem:
if reprocessSlot and request =? data.request:
let queue = sales.context.slotQueue
item.seen = true
var seenItem = SlotQueueItem.init(data.requestId,
data.slotIndex.truncate(uint16),
data.ask,
request.expiry,
seen = true)
trace "pushing ignored item to queue, marked as seen"
if err =? queue.push(item).errorOption:
error "failed to readd slot to queue", errorType = $(type err), error = err.msg
if err =? queue.push(seenItem).errorOption:
error "failed to readd slot to queue",
errorType = $(type err), error = err.msg
let fut = sales.remove(agent)
sales.trackedFutures.track(fut)
await sales.remove(agent)
# signal back to the slot queue to cycle a worker
if not processing.isNil and not processing.finished():
processing.complete()
proc filled(
sales: Sales,
request: StorageRequest,
slotIndex: UInt256,
processing: Future[void]) =
proc filled(sales: Sales, request: StorageRequest, slotIndex: uint64) =
if onSale =? sales.context.onSale:
onSale(request, slotIndex)
proc processSlot(
sales: Sales, item: SlotQueueItem
) {.async: (raises: [CancelledError]).} =
debug "Processing slot from queue", requestId = item.requestId, slot = item.slotIndex
# signal back to the slot queue to cycle a worker
if not processing.isNil and not processing.finished():
processing.complete()
proc processSlot(sales: Sales, item: SlotQueueItem, done: Future[void]) =
debug "Processing slot from queue", requestId = item.requestId,
slot = item.slotIndex
let agent = newSalesAgent(
sales.context, item.requestId, item.slotIndex, none StorageRequest, some item
sales.context,
item.requestId,
item.slotIndex.u256,
none StorageRequest
)
let completed = newAsyncEvent()
agent.onCleanUp = proc (returnBytes = false, reprocessSlot = false) {.async.} =
await sales.cleanUp(agent, returnBytes, reprocessSlot, done)
agent.onCleanUp = proc(
reprocessSlot = false, returnedCollateral = UInt256.none
) {.async: (raises: []).} =
trace "slot cleanup"
await sales.cleanUp(agent, reprocessSlot, returnedCollateral)
completed.fire()
agent.onFilled = some proc(request: StorageRequest, slotIndex: uint64) =
trace "slot filled"
sales.filled(request, slotIndex)
completed.fire()
agent.onFilled = some proc(request: StorageRequest, slotIndex: UInt256) =
sales.filled(request, slotIndex, done)
agent.start(SalePreparing())
sales.agents.add agent
trace "waiting for slot processing to complete"
await completed.wait()
trace "slot processing completed"
proc deleteInactiveReservations(sales: Sales, activeSlots: seq[Slot]) {.async.} =
let reservations = sales.context.reservations
without reservs =? await reservations.all(Reservation):
return
let unused = reservs.filter(
r => (
let slotId = slotId(r.requestId, r.slotIndex)
not activeSlots.any(slot => slot.id == slotId)
)
)
let unused = reservs.filter(r => (
let slotId = slotId(r.requestId, r.slotIndex)
not activeSlots.any(slot => slot.id == slotId)
))
if unused.len == 0:
return
@@ -210,13 +215,14 @@ proc deleteInactiveReservations(sales: Sales, activeSlots: seq[Slot]) {.async.}
info "Found unused reservations for deletion", unused = unused.len
for reservation in unused:
logScope:
reservationId = reservation.id
availabilityId = reservation.availabilityId
if err =? (
await reservations.deleteReservation(reservation.id, reservation.availabilityId)
).errorOption:
if err =? (await reservations.deleteReservation(
reservation.id, reservation.availabilityId
)).errorOption:
error "Failed to delete unused reservation", error = err.msg
else:
trace "Deleted unused reservation"
@@ -246,13 +252,17 @@ proc load*(sales: Sales) {.async.} =
await sales.deleteInactiveReservations(activeSlots)
for slot in activeSlots:
let agent =
newSalesAgent(sales.context, slot.request.id, slot.slotIndex, some slot.request)
let agent = newSalesAgent(
sales.context,
slot.request.id,
slot.slotIndex,
some slot.request)
agent.onCleanUp = proc(
reprocessSlot = false, returnedCollateral = UInt256.none
) {.async: (raises: []).} =
await sales.cleanUp(agent, reprocessSlot, returnedCollateral)
agent.onCleanUp = proc(returnBytes = false, reprocessSlot = false) {.async.} =
# since workers are not being dispatched, this future has not been created
# by a worker. Create a dummy one here so we can call sales.cleanUp
let done: Future[void] = nil
await sales.cleanUp(agent, returnBytes, reprocessSlot, done)
# There is no need to assign agent.onFilled as slots loaded from `mySlots`
# are inherently already filled and so assigning agent.onFilled would be
@@ -261,9 +271,7 @@ proc load*(sales: Sales) {.async.} =
agent.start(SaleUnknown())
sales.agents.add agent
proc OnAvailabilitySaved(
sales: Sales, availability: Availability
) {.async: (raises: []).} =
proc onAvailabilityAdded(sales: Sales, availability: Availability) {.async.} =
## When availabilities are modified or added, the queue should be unpaused if
## it was paused and any slots in the queue should have their `seen` flag
## cleared.
@@ -274,9 +282,11 @@ proc OnAvailabilitySaved(
trace "unpausing queue after new availability added"
queue.unpause()
proc onStorageRequested(
sales: Sales, requestId: RequestId, ask: StorageAsk, expiry: uint64
) {.raises: [].} =
proc onStorageRequested(sales: Sales,
requestId: RequestId,
ask: StorageAsk,
expiry: UInt256) =
logScope:
topics = "marketplace sales onStorageRequested"
requestId
@@ -287,14 +297,7 @@ proc onStorageRequested(
trace "storage requested, adding slots to queue"
let market = sales.context.market
without collateral =? market.slotCollateral(ask.collateralPerSlot, SlotState.Free),
err:
error "Request failure, unable to calculate collateral", error = err.msg
return
without items =? SlotQueueItem.init(requestId, ask, expiry, collateral).catch, err:
without items =? SlotQueueItem.init(requestId, ask, expiry).catch, err:
if err of SlotsOutOfRangeError:
warn "Too many slots, cannot add to queue"
else:
@@ -311,7 +314,10 @@
else:
warn "Error adding request to SlotQueue", error = err.msg
proc onSlotFreed(sales: Sales, requestId: RequestId, slotIndex: uint64) =
proc onSlotFreed(sales: Sales,
requestId: RequestId,
slotIndex: UInt256) =
logScope:
topics = "marketplace sales onSlotFreed"
requestId
@@ -319,59 +325,44 @@ proc onSlotFreed(sales: Sales, requestId: RequestId, slotIndex: uint64) =
trace "slot freed, adding to queue"
proc addSlotToQueue() {.async: (raises: []).} =
proc addSlotToQueue() {.async.} =
let context = sales.context
let market = context.market
let queue = context.slotQueue
try:
without request =? (await market.getRequest(requestId)), err:
error "unknown request in contract", error = err.msgDetail
# first attempt to populate request using existing slot metadata in queue
without var found =? queue.populateItem(requestId,
slotIndex.truncate(uint16)):
trace "no existing request metadata, getting request info from contract"
# if there's no existing slot for that request, retrieve the request
# from the contract.
without request =? await market.getRequest(requestId):
error "unknown request in contract"
return
# Take the repairing state into consideration to calculate the collateral.
# This is particularly needed because it will affect the priority in the queue
# and we want to give the user the ability to tweak the parameters.
# Adding the repairing state directly in the queue priority calculation
# would not allow this flexibility.
without collateral =?
market.slotCollateral(request.ask.collateralPerSlot, SlotState.Repair), err:
error "Failed to add freed slot to queue: unable to calculate collateral",
error = err.msg
return
found = SlotQueueItem.init(request, slotIndex.truncate(uint16))
if slotIndex > uint16.high.uint64:
error "Cannot cast slot index to uint16, value = ", slotIndex
return
if err =? queue.push(found).errorOption:
raise err
without slotQueueItem =?
SlotQueueItem.init(request, slotIndex.uint16, collateral = collateral).catch,
err:
warn "Too many slots, cannot add to queue", error = err.msgDetail
return
if err =? queue.push(slotQueueItem).errorOption:
if err of SlotQueueItemExistsError:
error "Failed to push item to queue because it already exists",
error = err.msgDetail
elif err of QueueNotRunningError:
warn "Failed to push item to queue because queue is not running",
error = err.msgDetail
except CancelledError as e:
trace "sales.addSlotToQueue was cancelled"
# We could get rid of this by adding the storage ask in the SlotFreed event,
# so we would not need to call getRequest to get the collateralPerSlot.
let fut = addSlotToQueue()
sales.trackedFutures.track(fut)
addSlotToQueue()
.track(sales)
.catch(proc(err: ref CatchableError) =
if err of SlotQueueItemExistsError:
error "Failed to push item to queue becaue it already exists"
elif err of QueueNotRunningError:
warn "Failed to push item to queue becaue queue is not running"
else:
warn "Error adding request to SlotQueue", error = err.msg
)
proc subscribeRequested(sales: Sales) {.async.} =
let context = sales.context
let market = context.market
proc onStorageRequested(
requestId: RequestId, ask: StorageAsk, expiry: uint64
) {.raises: [].} =
proc onStorageRequested(requestId: RequestId,
ask: StorageAsk,
expiry: UInt256) =
sales.onStorageRequested(requestId, ask, expiry)
try:
@@ -444,13 +435,9 @@ proc subscribeSlotFilled(sales: Sales) {.async.} =
let market = context.market
let queue = context.slotQueue
proc onSlotFilled(requestId: RequestId, slotIndex: uint64) =
if slotIndex > uint16.high.uint64:
error "Cannot cast slot index to uint16, value = ", slotIndex
return
proc onSlotFilled(requestId: RequestId, slotIndex: UInt256) =
trace "slot filled, removing from slot queue", requestId, slotIndex
queue.delete(requestId, slotIndex.uint16)
queue.delete(requestId, slotIndex.truncate(uint16))
for agent in sales.agents:
agent.onSlotFilled(requestId, slotIndex)
@@ -467,7 +454,7 @@ proc subscribeSlotFreed(sales: Sales) {.async.} =
let context = sales.context
let market = context.market
proc onSlotFreed(requestId: RequestId, slotIndex: uint64) =
proc onSlotFreed(requestId: RequestId, slotIndex: UInt256) =
sales.onSlotFreed(requestId, slotIndex)
try:
@@ -483,13 +470,9 @@ proc subscribeSlotReservationsFull(sales: Sales) {.async.} =
let market = context.market
let queue = context.slotQueue
proc onSlotReservationsFull(requestId: RequestId, slotIndex: uint64) =
if slotIndex > uint16.high.uint64:
error "Cannot cast slot index to uint16, value = ", slotIndex
return
proc onSlotReservationsFull(requestId: RequestId, slotIndex: UInt256) =
trace "reservations for slot full, removing from slot queue", requestId, slotIndex
queue.delete(requestId, slotIndex.uint16)
queue.delete(requestId, slotIndex.truncate(uint16))
try:
let sub = await market.subscribeSlotReservationsFull(onSlotReservationsFull)
@@ -499,24 +482,21 @@ proc subscribeSlotReservationsFull(sales: Sales) {.async.} =
except CatchableError as e:
error "Unable to subscribe to slot filled events", msg = e.msg
proc startSlotQueue(sales: Sales) =
proc startSlotQueue(sales: Sales) {.async.} =
let slotQueue = sales.context.slotQueue
let reservations = sales.context.reservations
slotQueue.onProcessSlot = proc(item: SlotQueueItem) {.async: (raises: []).} =
trace "processing slot queue item", reqId = item.requestId, slotIdx = item.slotIndex
try:
await sales.processSlot(item)
except CancelledError:
discard
slotQueue.onProcessSlot =
proc(item: SlotQueueItem, done: Future[void]) {.async.} =
trace "processing slot queue item", reqId = item.requestId, slotIdx = item.slotIndex
sales.processSlot(item, done)
slotQueue.start()
asyncSpawn slotQueue.start()
proc OnAvailabilitySaved(availability: Availability) {.async: (raises: []).} =
if availability.enabled:
await sales.OnAvailabilitySaved(availability)
proc onAvailabilityAdded(availability: Availability) {.async.} =
await sales.onAvailabilityAdded(availability)
reservations.OnAvailabilitySaved = OnAvailabilitySaved
reservations.onAvailabilityAdded = onAvailabilityAdded
proc subscribe(sales: Sales) {.async.} =
await sales.subscribeRequested()
@@ -538,7 +518,7 @@ proc unsubscribe(sales: Sales) {.async.} =
proc start*(sales: Sales) {.async.} =
await sales.load()
sales.startSlotQueue()
await sales.startSlotQueue()
await sales.subscribe()
sales.running = true
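
One detail from this file worth isolating: items pushed back into the slot queue are marked seen and ordered after unseen ones, so a small availability cannot starve the queue by recycling the same item. A minimal ordering sketch with a hypothetical Item type:

import std/heapqueue

type Item = object
  priority: int
  seen: bool  # re-queued items are marked seen

proc `<`(a, b: Item): bool =
  if a.seen != b.seen:
    not a.seen  # unseen items always come first
  else:
    a.priority < b.priority

when isMainModule:
  var q = initHeapQueue[Item]()
  q.push Item(priority: 0, seen: true)   # re-queued item
  q.push Item(priority: 9, seen: false)  # fresh item
  doAssert q.pop().seen == false         # fresh work is served first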


@@ -1,4 +1,4 @@
## Logos Storage
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,35 +7,34 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
##
## +--------------------------------------+
## | RESERVATION |
## +---------------------------------------------------+ |--------------------------------------|
## | AVAILABILITY | | ReservationId | id | PK |
## |---------------------------------------------------| |--------------------------------------|
## | AvailabilityId | id | PK |<-||-------o<-| AvailabilityId | availabilityId | FK |
## |---------------------------------------------------| |--------------------------------------|
## | UInt256 | totalSize | | | UInt256 | size | |
## |---------------------------------------------------| |--------------------------------------|
## | UInt256 | freeSize | | | UInt256 | slotIndex | |
## |---------------------------------------------------| +--------------------------------------+
## | UInt256 | duration | |
## |---------------------------------------------------|
## | UInt256 | minPricePerBytePerSecond | |
## |---------------------------------------------------|
## | UInt256 | totalCollateral | |
## |---------------------------------------------------|
## | UInt256 | totalRemainingCollateral | |
## +---------------------------------------------------+
## +--------------------------------------+
## | RESERVATION |
## +----------------------------------------+ |--------------------------------------|
## | AVAILABILITY | | ReservationId | id | PK |
## |----------------------------------------| |--------------------------------------|
## | AvailabilityId | id | PK |<-||-------o<-| AvailabilityId | availabilityId | FK |
## |----------------------------------------| |--------------------------------------|
## | UInt256 | totalSize | | | UInt256 | size | |
## |----------------------------------------| |--------------------------------------|
## | UInt256 | freeSize | | | UInt256 | slotIndex | |
## |----------------------------------------| +--------------------------------------+
## | UInt256 | duration | |
## |----------------------------------------|
## | UInt256 | minPrice | |
## |----------------------------------------|
## | UInt256 | maxCollateral | |
## +----------------------------------------+
{.push raises: [], gcsafe.}
import pkg/upraises
push: {.upraises: [].}
import std/sequtils
import std/sugar
import std/typetraits
import std/sequtils
import std/times
import pkg/chronos
import pkg/datastore
import pkg/nimcrypto
import pkg/questionable
import pkg/questionable/results
import pkg/stint
@@ -52,10 +51,9 @@ import ../units
export requests
export logutils
from nimcrypto import randomBytes
logScope:
topics = "marketplace sales reservations"
topics = "sales reservations"
type
AvailabilityId* = distinct array[32, byte]
@@ -64,42 +62,28 @@ type
SomeStorableId = AvailabilityId | ReservationId
Availability* = ref object
id* {.serialize.}: AvailabilityId
totalSize* {.serialize.}: uint64
freeSize* {.serialize.}: uint64
duration* {.serialize.}: uint64
minPricePerBytePerSecond* {.serialize.}: UInt256
totalCollateral {.serialize.}: UInt256
totalRemainingCollateral* {.serialize.}: UInt256
# If set to false, the availability will not accept new slots.
# If enabled, it will not impact any existing slots that are already being hosted.
enabled* {.serialize.}: bool
# Specifies the latest timestamp after which the availability will no longer host any slots.
# If set to 0, there will be no restrictions.
until* {.serialize.}: SecondsSince1970
totalSize* {.serialize.}: UInt256
freeSize* {.serialize.}: UInt256
duration* {.serialize.}: UInt256
minPrice* {.serialize.}: UInt256 # minimal price paid for the whole hosted slot for the request's duration
maxCollateral* {.serialize.}: UInt256
Reservation* = ref object
id* {.serialize.}: ReservationId
availabilityId* {.serialize.}: AvailabilityId
size* {.serialize.}: uint64
size* {.serialize.}: UInt256
requestId* {.serialize.}: RequestId
slotIndex* {.serialize.}: uint64
validUntil* {.serialize.}: SecondsSince1970
slotIndex* {.serialize.}: UInt256
Reservations* = ref object of RootObj
availabilityLock: AsyncLock
# Lock protecting the availability's size accounting while searching for a matching availability
availabilityLock: AsyncLock # Lock protecting the availability's size accounting while searching for a matching availability
repo: RepoStore
OnAvailabilitySaved: ?OnAvailabilitySaved
GetNext* = proc(): Future[?seq[byte]] {.async: (raises: [CancelledError]), closure.}
IterDispose* = proc(): Future[?!void] {.async: (raises: [CancelledError]), closure.}
OnAvailabilitySaved* =
proc(availability: Availability): Future[void] {.async: (raises: []).}
onAvailabilityAdded: ?OnAvailabilityAdded
GetNext* = proc(): Future[?seq[byte]] {.upraises: [], gcsafe, closure.}
IterDispose* = proc(): Future[?!void] {.gcsafe, closure.}
OnAvailabilityAdded* = proc(availability: Availability): Future[void] {.upraises: [], gcsafe.}
StorableIter* = ref object
finished*: bool
next*: GetNext
dispose*: IterDispose
ReservationsError* = object of CodexError
ReserveFailedError* = object of ReservationsError
ReleaseFailedError* = object of ReservationsError
@@ -109,20 +93,13 @@ type
SerializationError* = object of ReservationsError
UpdateFailedError* = object of ReservationsError
BytesOutOfBoundsError* = object of ReservationsError
UntilOutOfBoundsError* = object of ReservationsError
const
SalesKey = (CodexMetaKey / "sales").tryGet # TODO: move to sales module
ReservationsKey = (SalesKey / "reservations").tryGet
proc hash*(x: AvailabilityId): Hash {.borrow.}
proc all*(
self: Reservations, T: type SomeStorableObject
): Future[?!seq[T]] {.async: (raises: [CancelledError]).}
proc all*(
self: Reservations, T: type SomeStorableObject, availabilityId: AvailabilityId
): Future[?!seq[T]] {.async: (raises: [CancelledError]).}
proc all*(self: Reservations, T: type SomeStorableObject): Future[?!seq[T]] {.async.}
template withLock(lock, body) =
try:
@@ -132,58 +109,35 @@ template withLock(lock, body) =
if lock.locked:
lock.release()
proc new*(T: type Reservations, repo: RepoStore): Reservations =
T(availabilityLock: newAsyncLock(), repo: repo)
proc new*(T: type Reservations,
repo: RepoStore): Reservations =
T(availabilityLock: newAsyncLock(),repo: repo)
proc init*(
_: type Availability,
totalSize: uint64,
freeSize: uint64,
duration: uint64,
minPricePerBytePerSecond: UInt256,
totalCollateral: UInt256,
enabled: bool,
until: SecondsSince1970,
): Availability =
_: type Availability,
totalSize: UInt256,
freeSize: UInt256,
duration: UInt256,
minPrice: UInt256,
maxCollateral: UInt256): Availability =
var id: array[32, byte]
doAssert randomBytes(id) == 32
Availability(
id: AvailabilityId(id),
totalSize: totalSize,
freeSize: freeSize,
duration: duration,
minPricePerBytePerSecond: minPricePerBytePerSecond,
totalCollateral: totalCollateral,
totalRemainingCollateral: totalCollateral,
enabled: enabled,
until: until,
)
func totalCollateral*(self: Availability): UInt256 {.inline.} =
return self.totalCollateral
proc `totalCollateral=`*(self: Availability, value: UInt256) {.inline.} =
self.totalCollateral = value
self.totalRemainingCollateral = value
Availability(id: AvailabilityId(id), totalSize:totalSize, freeSize: freeSize, duration: duration, minPrice: minPrice, maxCollateral: maxCollateral)
proc init*(
_: type Reservation,
availabilityId: AvailabilityId,
size: uint64,
requestId: RequestId,
slotIndex: uint64,
validUntil: SecondsSince1970,
_: type Reservation,
availabilityId: AvailabilityId,
size: UInt256,
requestId: RequestId,
slotIndex: UInt256
): Reservation =
var id: array[32, byte]
doAssert randomBytes(id) == 32
Reservation(
id: ReservationId(id),
availabilityId: availabilityId,
size: size,
requestId: requestId,
slotIndex: slotIndex,
validUntil: validUntil,
)
Reservation(id: ReservationId(id), availabilityId: availabilityId, size: size, requestId: requestId, slotIndex: slotIndex)
func toArray(id: SomeStorableId): array[32, byte] =
array[32, byte](id)
@@ -192,27 +146,24 @@ proc `==`*(x, y: AvailabilityId): bool {.borrow.}
proc `==`*(x, y: ReservationId): bool {.borrow.}
proc `==`*(x, y: Reservation): bool =
x.id == y.id
proc `==`*(x, y: Availability): bool =
x.id == y.id
proc `$`*(id: SomeStorableId): string =
id.toArray.toHex
proc `$`*(id: SomeStorableId): string = id.toArray.toHex
proc toErr[E1: ref CatchableError, E2: ReservationsError](
e1: E1, _: type E2, msg: string = e1.msg
): ref E2 =
e1: E1,
_: type E2,
msg: string = e1.msg): ref E2 =
return newException(E2, msg, e1)
logutils.formatIt(LogFormat.textLines, SomeStorableId):
it.short0xHexLog
logutils.formatIt(LogFormat.json, SomeStorableId):
it.to0xHexLog
logutils.formatIt(LogFormat.textLines, SomeStorableId): it.short0xHexLog
logutils.formatIt(LogFormat.json, SomeStorableId): it.to0xHexLog
proc `OnAvailabilitySaved=`*(
self: Reservations, OnAvailabilitySaved: OnAvailabilitySaved
) =
self.OnAvailabilitySaved = some OnAvailabilitySaved
proc `onAvailabilityAdded=`*(self: Reservations,
onAvailabilityAdded: OnAvailabilityAdded) =
self.onAvailabilityAdded = some onAvailabilityAdded
func key*(id: AvailabilityId): ?!Key =
## sales / reservations / <availabilityId>
@@ -225,39 +176,27 @@ func key*(reservationId: ReservationId, availabilityId: AvailabilityId): ?!Key =
func key*(availability: Availability): ?!Key =
return availability.id.key
func maxCollateralPerByte*(availability: Availability): UInt256 =
# If freeSize happens to be zero, the convention is that maxCollateralPerByte
# equals totalRemainingCollateral.
if availability.freeSize == 0.uint64:
return availability.totalRemainingCollateral
return availability.totalRemainingCollateral div availability.freeSize.stuint(256)
func key*(reservation: Reservation): ?!Key =
return key(reservation.id, reservation.availabilityId)
func available*(self: Reservations): uint =
self.repo.available.uint
func available*(self: Reservations): uint = self.repo.available.uint
func hasAvailable*(self: Reservations, bytes: uint): bool =
self.repo.available(bytes.NBytes)
proc exists*(
self: Reservations, key: Key
): Future[bool] {.async: (raises: [CancelledError]).} =
self: Reservations,
key: Key): Future[bool] {.async.} =
let exists = await self.repo.metaDs.ds.contains(key)
return exists
iterator items(self: StorableIter): auto =
while not self.finished:
yield self.next()
proc getImpl(
self: Reservations, key: Key
): Future[?!seq[byte]] {.async: (raises: [CancelledError]).} =
self: Reservations,
key: Key): Future[?!seq[byte]] {.async.} =
if not await self.exists(key):
let err =
newException(NotExistsError, "object with key " & $key & " does not exist")
let err = newException(NotExistsError, "object with key " & $key & " does not exist")
return failure(err)
without serialized =? await self.repo.metaDs.ds.get(key), error:
@@ -266,8 +205,10 @@ proc getImpl(
return success serialized
proc get*(
self: Reservations, key: Key, T: type SomeStorableObject
): Future[?!T] {.async: (raises: [CancelledError]).} =
self: Reservations,
key: Key,
T: type SomeStorableObject): Future[?!T] {.async.} =
without serialized =? await self.getImpl(key), error:
return failure(error)
@@ -277,29 +218,29 @@ proc get*(
return success obj
proc updateImpl(
self: Reservations, obj: SomeStorableObject
): Future[?!void] {.async: (raises: [CancelledError]).} =
self: Reservations,
obj: SomeStorableObject): Future[?!void] {.async.} =
trace "updating " & $(obj.type), id = obj.id
without key =? obj.key, error:
return failure(error)
if err =? (await self.repo.metaDs.ds.put(key, @(obj.toJson.toBytes))).errorOption:
if err =? (await self.repo.metaDs.ds.put(
key,
@(obj.toJson.toBytes)
)).errorOption:
return failure(err.toErr(UpdateFailedError))
return success()
proc updateAvailability(
self: Reservations, obj: Availability
): Future[?!void] {.async: (raises: [CancelledError]).} =
self: Reservations,
obj: Availability): Future[?!void] {.async.} =
logScope:
availabilityId = obj.id
if obj.until < 0:
let error =
newException(UntilOutOfBoundsError, "Cannot set until to a negative value")
return failure(error)
without key =? obj.key, error:
return failure(error)
@@ -308,70 +249,68 @@ proc updateAvailability(
trace "Creating new Availability"
let res = await self.updateImpl(obj)
# inform subscribers that Availability has been added
if OnAvailabilitySaved =? self.OnAvailabilitySaved:
await OnAvailabilitySaved(obj)
if onAvailabilityAdded =? self.onAvailabilityAdded:
# when chronos v4 is implemented, and OnAvailabilityAdded is annotated
# with async:(raises:[]), we can remove this try/catch as we know, with
# certainty, that nothing will be raised
try:
await onAvailabilityAdded(obj)
except CancelledError as e:
raise e
except CatchableError as e:
# we don't have any insight into types of exceptions that
# `onAvailabilityAdded` can raise because it is caller-defined
warn "Unknown error during 'onAvailabilityAdded' callback", error = e.msg
return res
else:
return failure(err)
if obj.until > 0:
without allReservations =? await self.all(Reservation, obj.id), error:
error.msg = "Error updating reservation: " & error.msg
return failure(error)
let requestEnds = allReservations.mapIt(it.validUntil)
if requestEnds.len > 0 and requestEnds.max > obj.until:
let error = newException(
UntilOutOfBoundsError,
"Until parameter must be greater or equal to the longest currently hosted slot",
)
return failure(error)
# Sizing of the availability changed, we need to adjust the repo reservation accordingly
if oldAvailability.totalSize != obj.totalSize:
trace "totalSize changed, updating repo reservation"
if oldAvailability.totalSize < obj.totalSize: # storage added
if reserveErr =? (
await self.repo.reserve((obj.totalSize - oldAvailability.totalSize).NBytes)
).errorOption:
if reserveErr =? (await self.repo.reserve((obj.totalSize - oldAvailability.totalSize).truncate(uint).NBytes)).errorOption:
return failure(reserveErr.toErr(ReserveFailedError))
elif oldAvailability.totalSize > obj.totalSize: # storage removed
if reserveErr =? (
await self.repo.release((oldAvailability.totalSize - obj.totalSize).NBytes)
).errorOption:
if reserveErr =? (await self.repo.release((oldAvailability.totalSize - obj.totalSize).truncate(uint).NBytes)).errorOption:
return failure(reserveErr.toErr(ReleaseFailedError))
let res = await self.updateImpl(obj)
if oldAvailability.freeSize < obj.freeSize or oldAvailability.duration < obj.duration or
oldAvailability.minPricePerBytePerSecond < obj.minPricePerBytePerSecond or
oldAvailability.totalRemainingCollateral < obj.totalRemainingCollateral:
# availability updated
if oldAvailability.freeSize < obj.freeSize: # availability added
# inform subscribers that Availability has been modified (with increased
# size)
if OnAvailabilitySaved =? self.OnAvailabilitySaved:
await OnAvailabilitySaved(obj)
if onAvailabilityAdded =? self.onAvailabilityAdded:
# when chronos v4 is implemented, and OnAvailabilityAdded is annotated
# with async:(raises:[]), we can remove this try/catch as we know, with
# certainty, that nothing will be raised
try:
await onAvailabilityAdded(obj)
except CancelledError as e:
raise e
except CatchableError as e:
# we don't have any insight into types of exceptions that
# `onAvailabilityAdded` can raise because it is caller-defined
warn "Unknown error during 'onAvailabilityAdded' callback", error = e.msg
return res
proc update*(
self: Reservations, obj: Reservation
): Future[?!void] {.async: (raises: [CancelledError]).} =
self: Reservations,
obj: Reservation): Future[?!void] {.async.} =
return await self.updateImpl(obj)
proc update*(
self: Reservations, obj: Availability
): Future[?!void] {.async: (raises: [CancelledError]).} =
try:
withLock(self.availabilityLock):
return await self.updateAvailability(obj)
except AsyncLockError as e:
error "Lock error when trying to update the availability", err = e.msg
return failure(e)
self: Reservations,
obj: Availability): Future[?!void] {.async.} =
withLock(self.availabilityLock):
return await self.updateAvailability(obj)
proc delete(
self: Reservations, key: Key
): Future[?!void] {.async: (raises: [CancelledError]).} =
self: Reservations,
key: Key): Future[?!void] {.async.} =
trace "deleting object", key
if not await self.exists(key):
@@ -383,27 +322,28 @@ proc delete(
return success()
proc deleteReservation*(
self: Reservations,
reservationId: ReservationId,
availabilityId: AvailabilityId,
returnedCollateral: ?UInt256 = UInt256.none,
): Future[?!void] {.async: (raises: [CancelledError]).} =
self: Reservations,
reservationId: ReservationId,
availabilityId: AvailabilityId): Future[?!void] {.async.} =
logScope:
reservationId
availabilityId
trace "deleting reservation"
without key =? key(reservationId, availabilityId), error:
return failure(error)
try:
withLock(self.availabilityLock):
without reservation =? (await self.get(key, Reservation)), error:
if error of NotExistsError:
return success()
else:
return failure(error)
withLock(self.availabilityLock):
without reservation =? (await self.get(key, Reservation)), error:
if error of NotExistsError:
return success()
else:
return failure(error)
if reservation.size > 0.u256:
trace "returning remaining reservation bytes to availability",
size = reservation.size
without availabilityKey =? availabilityId.key, error:
return failure(error)
@@ -411,54 +351,38 @@ proc deleteReservation*(
without var availability =? await self.get(availabilityKey, Availability), error:
return failure(error)
if reservation.size > 0.uint64:
trace "returning remaining reservation bytes to availability",
size = reservation.size
availability.freeSize += reservation.size
if collateral =? returnedCollateral:
availability.totalRemainingCollateral += collateral
availability.freeSize += reservation.size
if updateErr =? (await self.updateAvailability(availability)).errorOption:
return failure(updateErr)
if err =? (await self.repo.metaDs.ds.delete(key)).errorOption:
return failure(err.toErr(DeleteFailedError))
if err =? (await self.repo.metaDs.ds.delete(key)).errorOption:
return failure(err.toErr(DeleteFailedError))
return success()
except AsyncLockError as e:
error "Lock error when trying to delete the availability", err = e.msg
return failure(e)
return success()
# TODO: add support for deleting availabilities
# To delete, must not have any active sales.
proc createAvailability*(
self: Reservations,
size: uint64,
duration: uint64,
minPricePerBytePerSecond: UInt256,
totalCollateral: UInt256,
enabled: bool,
until: SecondsSince1970,
): Future[?!Availability] {.async: (raises: [CancelledError]).} =
trace "creating availability",
size, duration, minPricePerBytePerSecond, totalCollateral, enabled, until
self: Reservations,
size: UInt256,
duration: UInt256,
minPrice: UInt256,
maxCollateral: UInt256): Future[?!Availability] {.async.} =
if until < 0:
let error =
newException(UntilOutOfBoundsError, "Cannot set until to a negative value")
return failure(error)
trace "creating availability", size, duration, minPrice, maxCollateral
let availability = Availability.init(
size, size, duration, minPricePerBytePerSecond, totalCollateral, enabled, until
size, size, duration, minPrice, maxCollateral
)
let bytes = availability.freeSize
let bytes = availability.freeSize.truncate(uint)
if reserveErr =? (await self.repo.reserve(bytes.NBytes)).errorOption:
return failure(reserveErr.toErr(ReserveFailedError))
if updateErr =? (await self.update(availability)).errorOption:
# rollback the reserve
trace "rolling back reserve"
if rollbackErr =? (await self.repo.release(bytes.NBytes)).errorOption:
@@ -470,130 +394,115 @@ proc createAvailability*(
return success(availability)
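
As an aside on the pattern in this hunk: createAvailability reserves quota first and must undo that reservation when the subsequent update fails. The same reserve-update-rollback shape reduced to an in-memory quota and nim-results; all names are illustrative:

import pkg/results

var quota = 100  # stand-in for the repo's byte quota

proc reserve(bytes: int): Result[void, string] =
  if bytes > quota:
    return err("quota exceeded")
  quota -= bytes
  ok()

proc release(bytes: int) =
  quota += bytes

proc persist(succeed: bool): Result[void, string] =
  if succeed: ok() else: err("update failed")

proc createWithRollback(bytes: int, succeed: bool): Result[void, string] =
  ? reserve(bytes)  # fail fast if the quota is depleted
  let res = persist(succeed)
  if res.isErr:
    release(bytes)  # roll back the reservation
  res

when isMainModule:
  doAssert createWithRollback(10, succeed = false).isErr
  doAssert quota == 100  # the reservation was rolled back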
method createReservation*(
self: Reservations,
availabilityId: AvailabilityId,
slotSize: uint64,
requestId: RequestId,
slotIndex: uint64,
collateralPerByte: UInt256,
validUntil: SecondsSince1970,
): Future[?!Reservation] {.async: (raises: [CancelledError]), base.} =
try:
withLock(self.availabilityLock):
without availabilityKey =? availabilityId.key, error:
return failure(error)
self: Reservations,
availabilityId: AvailabilityId,
slotSize: UInt256,
requestId: RequestId,
slotIndex: UInt256
): Future[?!Reservation] {.async, base.} =
without availability =? await self.get(availabilityKey, Availability), error:
return failure(error)
withLock(self.availabilityLock):
without availabilityKey =? availabilityId.key, error:
return failure(error)
# Check that the found availability has enough free space after the lock has been acquired, to prevent asynchronous Availability modifications
if availability.freeSize < slotSize:
let error = newException(
BytesOutOfBoundsError,
"trying to reserve an amount of bytes that is greater than the free size of the Availability",
)
return failure(error)
without availability =? await self.get(availabilityKey, Availability), error:
return failure(error)
trace "Creating reservation",
availabilityId, slotSize, requestId, slotIndex, validUntil = validUntil
# Check that the found availability has enough free space after the lock has been acquired, to prevent asynchronous Availability modifications
if availability.freeSize < slotSize:
let error = newException(
BytesOutOfBoundsError,
"trying to reserve an amount of bytes that is greater than the total size of the Availability")
return failure(error)
let reservation =
Reservation.init(availabilityId, slotSize, requestId, slotIndex, validUntil)
trace "Creating reservation", availabilityId, slotSize, requestId, slotIndex
if createResErr =? (await self.update(reservation)).errorOption:
return failure(createResErr)
let reservation = Reservation.init(availabilityId, slotSize, requestId, slotIndex)
# reduce availability freeSize by the slot size, which is now accounted for in
# the newly created Reservation
availability.freeSize -= slotSize
if createResErr =? (await self.update(reservation)).errorOption:
return failure(createResErr)
# adjust the remaining totalRemainingCollateral
availability.totalRemainingCollateral -= slotSize.u256 * collateralPerByte
# reduce availability freeSize by the slot size, which is now accounted for in
# the newly created Reservation
availability.freeSize -= slotSize
# update availability with reduced size
trace "Updating availability with reduced size", freeSize = availability.freeSize
if updateErr =? (await self.updateAvailability(availability)).errorOption:
trace "Updating availability failed, rolling back reservation creation"
# update availability with reduced size
trace "Updating availability with reduced size"
if updateErr =? (await self.updateAvailability(availability)).errorOption:
trace "Updating availability failed, rolling back reservation creation"
without key =? reservation.key, keyError:
keyError.parent = updateErr
return failure(keyError)
without key =? reservation.key, keyError:
keyError.parent = updateErr
return failure(keyError)
# rollback the reservation creation
if rollbackErr =? (await self.delete(key)).errorOption:
rollbackErr.parent = updateErr
return failure(rollbackErr)
# rollback the reservation creation
if rollbackErr =? (await self.delete(key)).errorOption:
rollbackErr.parent = updateErr
return failure(rollbackErr)
return failure(updateErr)
return failure(updateErr)
trace "Reservation succesfully created"
return success(reservation)
except AsyncLockError as e:
error "Lock error when trying to delete the availability", err = e.msg
return failure(e)
trace "Reservation succesfully created"
return success(reservation)
 proc returnBytesToAvailability*(
-    self: Reservations,
-    availabilityId: AvailabilityId,
-    reservationId: ReservationId,
-    bytes: UInt256): Future[?!void] {.async.} =
+    self: Reservations,
+    availabilityId: AvailabilityId,
+    reservationId: ReservationId,
+    bytes: uint64,
+): Future[?!void] {.async: (raises: [CancelledError]).} =
   logScope:
     reservationId
     availabilityId

+  try:
     withLock(self.availabilityLock):
       without key =? key(reservationId, availabilityId), error:
         return failure(error)

       without var reservation =? (await self.get(key, Reservation)), error:
         return failure(error)

       # We are ignoring bytes that are still present in the Reservation because
       # they will be returned to Availability through `deleteReservation`.
       let bytesToBeReturned = bytes - reservation.size

       if bytesToBeReturned == 0:
-        trace "No bytes are returned", requestSizeBytes = bytes, returningBytes = bytesToBeReturned
+        trace "No bytes are returned",
+          requestSizeBytes = bytes, returningBytes = bytesToBeReturned
         return success()

-      trace "Returning bytes", requestSizeBytes = bytes, returningBytes = bytesToBeReturned
+      trace "Returning bytes",
+        requestSizeBytes = bytes, returningBytes = bytesToBeReturned

       # First, let's see if we can re-reserve the bytes; if the Repo's quota
       # is depleted then we fail fast, as there is nothing to be done at the moment.
-      if reserveErr =? (await self.repo.reserve(bytesToBeReturned.truncate(uint).NBytes)).errorOption:
+      if reserveErr =? (await self.repo.reserve(bytesToBeReturned.NBytes)).errorOption:
         return failure(reserveErr.toErr(ReserveFailedError))

       without availabilityKey =? availabilityId.key, error:
         return failure(error)

       without var availability =? await self.get(availabilityKey, Availability), error:
         return failure(error)

       availability.freeSize += bytesToBeReturned

       # Update availability with returned size
       if updateErr =? (await self.updateAvailability(availability)).errorOption:
         trace "Rolling back returning bytes"
-        if rollbackErr =? (await self.repo.release(bytesToBeReturned.truncate(uint).NBytes)).errorOption:
+        if rollbackErr =? (await self.repo.release(bytesToBeReturned.NBytes)).errorOption:
           rollbackErr.parent = updateErr
           return failure(rollbackErr)

         return failure(updateErr)

       return success()
+  except AsyncLockError as e:
+    error "Lock error when returning bytes to the availability", err = e.msg
+    return failure(e)
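Note the accounting in the new version: only the difference between the bytes originally reserved for the request and what the Reservation still holds is returned here; the remainder flows back later through `deleteReservation`. With illustrative numbers (not from the source):

    let requestSizeBytes = 100'u64  # bytes originally reserved for the request
    let reservationSize = 60'u64    # bytes the Reservation still accounts for
    let bytesToBeReturned = requestSizeBytes - reservationSize
    assert bytesToBeReturned == 40'u64  # only these are re-added to freeSize now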
 proc release*(
-    self: Reservations,
-    reservationId: ReservationId,
-    availabilityId: AvailabilityId,
-    bytes: uint): Future[?!void] {.async.} =
+    self: Reservations,
+    reservationId: ReservationId,
+    availabilityId: AvailabilityId,
+    bytes: uint,
+): Future[?!void] {.async: (raises: [CancelledError]).} =
   logScope:
     topics = "release"
     bytes

@@ -608,20 +517,20 @@ proc release*(
   without var reservation =? (await self.get(key, Reservation)), error:
     return failure(error)

-  if reservation.size < bytes.u256:
+  if reservation.size < bytes:
     let error = newException(
       BytesOutOfBoundsError,
-      "trying to release an amount of bytes that is greater than the total size of the Reservation")
+      "trying to release an amount of bytes that is greater than the total size of the Reservation",
+    )
     return failure(error)

   if releaseErr =? (await self.repo.release(bytes.NBytes)).errorOption:
     return failure(releaseErr.toErr(ReleaseFailedError))

-  reservation.size -= bytes.u256
+  reservation.size -= bytes

   # persist partially used Reservation with updated size
   if err =? (await self.update(reservation)).errorOption:
     # rollback release if an update error is encountered
     trace "rolling back release"
     if rollbackErr =? (await self.repo.reserve(bytes.NBytes)).errorOption:
@@ -631,9 +540,16 @@ proc release*(
   return success()
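`release` shrinks a partially used Reservation in place and refuses to release more than it holds. A stand-alone sketch of that bounds check, with the Reservation pared down to the single field the check needs:

    type Reservation = object
      size: uint64

    proc releaseBytes(r: var Reservation, bytes: uint64): bool =
      if r.size < bytes: # cannot release more than the reservation holds
        return false
      r.size -= bytes    # shrink the persisted size
      true

    var r = Reservation(size: 1024)
    assert r.releaseBytes(256) and r.size == 768
    assert not r.releaseBytes(10_000) # out of bounds; size stays 768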
 iterator items(self: StorableIter): Future[?seq[byte]] =
   while not self.finished:
     yield self.next()

 proc storables(
-    self: Reservations,
-    T: type SomeStorableObject,
-    queryKey: Key = ReservationsKey
-): Future[?!StorableIter] {.async.} =
+    self: Reservations, T: type SomeStorableObject, queryKey: Key = ReservationsKey
+): Future[?!StorableIter] {.async: (raises: [CancelledError]).} =
   var iter = StorableIter()
   let query = Query.init(queryKey)
   when T is Availability:
@@ -651,16 +567,20 @@ proc storables(
     return failure(error)

   # /sales/reservations
-  proc next(): Future[?seq[byte]] {.async.} =
+  proc next(): Future[?seq[byte]] {.async: (raises: [CancelledError]).} =
     await idleAsync()
     iter.finished = results.finished
-    if not results.finished and
-       res =? (await results.next()) and
-       res.data.len > 0 and
-       key =? res.key and
-       key.namespaces.len == defaultKey.namespaces.len:
+    if not results.finished and res =? (await results.next()) and res.data.len > 0 and
+        key =? res.key and key.namespaces.len == defaultKey.namespaces.len:
       return some res.data

     return none seq[byte]

-  proc dispose(): Future[?!void] {.async.} =
+  proc dispose(): Future[?!void] {.async: (raises: [CancelledError]).} =
     return await results.dispose()

   iter.next = next
@@ -668,74 +588,70 @@ proc storables(
   return success iter
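`StorableIter` pairs a `finished` flag with `next`/`dispose` closures, and the `items` iterator above simply drains it. A toy synchronous version of the same shape, walking a seq instead of a datastore query (all names here are illustrative):

    type ToyIter = ref object
      finished: bool
      next: proc (): int

    proc toyStorables(xs: seq[int]): ToyIter =
      # Wire `next` up as a closure over the sequence, mirroring how
      # `storables` wires its `next` up to the datastore query results.
      var i = 0
      let it = ToyIter(finished: xs.len == 0)
      it.next = proc (): int =
        result = xs[i]
        inc i
        it.finished = i >= xs.len
      it

    var it = toyStorables(@[1, 2, 3])
    while not it.finished: # the same drain loop as the `items` iterator
      echo it.next()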
 proc allImpl(
-    self: Reservations,
-    T: type SomeStorableObject,
-    queryKey: Key = ReservationsKey
-): Future[?!seq[T]] {.async.} =
+    self: Reservations, T: type SomeStorableObject, queryKey: Key = ReservationsKey
+): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
   var ret: seq[T] = @[]

   without storables =? (await self.storables(T, queryKey)), error:
     return failure(error)

   for storable in storables.items:
-    without bytes =? (await storable):
-      continue
-
-    without obj =? T.fromJson(bytes), error:
-      error "json deserialization error",
-        json = string.fromBytes(bytes),
-        error = error.msg
-      continue
-
-    ret.add obj
+    try:
+      without bytes =? (await storable):
+        continue
+
+      without obj =? T.fromJson(bytes), error:
+        error "json deserialization error",
+          json = string.fromBytes(bytes), error = error.msg
+        continue
+
+      ret.add obj
+    except CancelledError as err:
+      raise err
+    except CatchableError as err:
+      error "Error when retrieving storable", error = err.msg

   return success(ret)

 proc all*(
-    self: Reservations,
-    T: type SomeStorableObject
-): Future[?!seq[T]] {.async.} =
+    self: Reservations, T: type SomeStorableObject
+): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
   return await self.allImpl(T)

 proc all*(
-    self: Reservations,
-    T: type SomeStorableObject,
-    availabilityId: AvailabilityId
-): Future[?!seq[T]] {.async.} =
-  without key =? (ReservationsKey / $availabilityId):
+    self: Reservations, T: type SomeStorableObject, availabilityId: AvailabilityId
+): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
+  without key =? key(availabilityId):
     return failure("no key")

   return await self.allImpl(T, key)
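A hedged usage sketch of the two `all` overloads; it assumes the surrounding module's imports (`without`/`=?` from questionable, `trace`/`error` from chronicles) and a `Reservations` instance in scope, and the `dumpReservations` helper is hypothetical:

    proc dumpReservations(
        reservations: Reservations, availabilityId: AvailabilityId
    ) {.async.} =
      # Every stored reservation, regardless of availability:
      without everything =? (await reservations.all(Reservation)), err:
        error "could not list reservations", error = err.msg
        return

      # Only the reservations scoped under one availability's key:
      without scoped =? (await reservations.all(Reservation, availabilityId)), err:
        error "could not list reservations for availability", error = err.msg
        return

      trace "reservation counts", total = everything.len, forAvailability = scoped.len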
 proc findAvailability*(
-    self: Reservations,
-    size, duration, minPrice, collateral: UInt256
-): Future[?Availability] {.async.} =
+    self: Reservations,
+    size, duration: uint64,
+    pricePerBytePerSecond, collateralPerByte: UInt256,
+    validUntil: SecondsSince1970,
+): Future[?Availability] {.async: (raises: [CancelledError]).} =
   without storables =? (await self.storables(Availability)), e:
     error "failed to get all storables", error = e.msg
     return none Availability

   for item in storables.items:
-    if bytes =? (await item) and
-       availability =? Availability.fromJson(bytes):
-      if size <= availability.freeSize and
-         duration <= availability.duration and
-         collateral <= availability.maxCollateral and
-         minPrice >= availability.minPrice:
+    if bytes =? (await item) and availability =? Availability.fromJson(bytes):
+      if availability.enabled and size <= availability.freeSize and
+          duration <= availability.duration and
+          collateralPerByte <= availability.maxCollateralPerByte and
+          pricePerBytePerSecond >= availability.minPricePerBytePerSecond and
+          (availability.until == 0 or availability.until >= validUntil):
         trace "availability matched",
           id = availability.id,
-          size, availFreeSize = availability.freeSize,
-          duration, availDuration = availability.duration,
-          minPrice, availMinPrice = availability.minPrice,
-          collateral, availMaxCollateral = availability.maxCollateral
+          enabled = availability.enabled,
+          size,
+          availFreeSize = availability.freeSize,
+          duration,
+          availDuration = availability.duration,
+          pricePerBytePerSecond,
+          availMinPricePerBytePerSecond = availability.minPricePerBytePerSecond,
+          collateralPerByte,
+          availMaxCollateralPerByte = availability.maxCollateralPerByte,
+          until = availability.until

         # TODO: As soon as we're on ARC-ORC, we can use destructors
         # to automatically dispose our iterators when they fall out of scope.
@@ -747,13 +663,7 @@ proc findAvailability*(
       trace "availability did not match",
         id = availability.id,
-        size, availFreeSize = availability.freeSize,
-        duration, availDuration = availability.duration,
-        minPrice, availMinPrice = availability.minPrice,
-        collateral, availMaxCollateral = availability.maxCollateral
+        enabled = availability.enabled,
+        size,
+        availFreeSize = availability.freeSize,
+        duration,
+        availDuration = availability.duration,
+        pricePerBytePerSecond,
+        availMinPricePerBytePerSecond = availability.minPricePerBytePerSecond,
+        collateralPerByte,
+        availMaxCollateralPerByte = availability.maxCollateralPerByte,
+        until = availability.until
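The new matching rule, lifted out as a pure predicate for readability. The object is a pared-down stand-in, and uint64 substitutes for the UInt256 price and collateral fields of the real Availability:

    type AvailabilitySketch = object
      enabled: bool
      freeSize, duration: uint64
      minPricePerBytePerSecond, maxCollateralPerByte: uint64 # UInt256 in the source
      until: int64                                           # SecondsSince1970

    proc matches(
        a: AvailabilitySketch,
        size, duration: uint64,
        pricePerBytePerSecond, collateralPerByte: uint64,
        validUntil: int64,
    ): bool =
      a.enabled and size <= a.freeSize and duration <= a.duration and
        collateralPerByte <= a.maxCollateralPerByte and
        pricePerBytePerSecond >= a.minPricePerBytePerSecond and
        (a.until == 0 or a.until >= validUntil) # until == 0 means no deadline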
