Compare commits


15 Commits

Author SHA1 Message Date
Slava
0c647d8337
chore: new marketplace address for testnet (#961)
https://github.com/codex-storage/infra-codex/issues/248

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-10-21 13:31:54 +03:00
Ben Bierens
f196caf8cb
Download API upgrade (#955)
* Adds API for fetching manifest only and downloading dataset without stream

* Updates openapi.yaml

* Adds tests for downloading manifest-only and without stream.

* review comments by Giuliano

* updates test clients
2024-10-21 13:25:19 +03:00
Adam Uhlíř
bf1434d192
docs: openapi node fix (#950) 2024-10-21 13:25:15 +03:00
Adam Uhlíř
00ab8d712e
ci: linux ci runs on ubuntu-20.04 (#953)
* ci: linux ci runs on ubuntu-20.04

* ci: use ubuntu-20.04 for nim-matrix

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
2024-10-21 13:25:10 +03:00
Ben Bierens
21d996ab3f
Adds log for cirdl download URL (#948) 2024-10-21 13:24:52 +03:00
Adam Uhlíř
eff0d8cd18
feat: partial rewards and withdraws (#880)
* feat: partial rewards and withdraws

* test: missing reserve slot

* test: fix contracts
2024-10-21 13:24:47 +03:00
Ben Bierens
b0607d3fdb
Handles LPStreamError in chunker (#947)
* Handles LPStreamError in chunker

* Adds test for lpstream exception

* Adds tests for other stream exceptions. Cleanup.
2024-10-21 13:24:38 +03:00
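
The fix above is about making stream failures during chunking non-fatal. A minimal sketch of the pattern in Nim, assuming nim-libp2p's `LPStream.readOnce`; `ChunkerError` and `readChunk` are illustrative names, not the actual nim-codex chunker API:

```nim
import pkg/chronos
import pkg/libp2p/stream/lpstream

type ChunkerError = object of CatchableError

proc readChunk(stream: LPStream, chunkSize: int): Future[seq[byte]] {.async.} =
  ## Read up to `chunkSize` bytes, turning transport failures into a
  ## chunker-level error instead of letting LPStreamError crash the caller.
  var buf = newSeq[byte](chunkSize)
  var total = 0
  try:
    while total < chunkSize:
      let n = await stream.readOnce(addr buf[total], chunkSize - total)
      if n == 0:
        break # defensive: no progress means end of stream
      total += n
  except LPStreamEOFError:
    discard # normal end of stream: return what we have
  except LPStreamError as e:
    raise newException(ChunkerError, "stream failed while chunking: " & e.msg)
  buf.setLen(total)
  return buf
```
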
Arnaud
859b7ea0e5
fix(restapi): Add CORS headers when the request is returning errors (#942)
* Add CORS headers when the request is returning errors

* Prevent nim-presto from sending multiple CORS headers
2024-10-21 13:24:32 +03:00
Eric
29549935ad
Support enforcement of slot reservations before filling slot (#934) 2024-10-21 13:22:55 +03:00
Slava
47061bf29b
Release v0.1.6 (#945)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bytes for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for downloading a verifiable dataset after creating a request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage (constraints sketched below)

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>
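
The added constraints are roughly of this shape (a hedged sketch; `validateEcParams` and the exact bounds and error strings are assumptions, not the real nim-codex REST checks):

```nim
import pkg/results

proc validateEcParams(nodes, tolerance, blockCount: int): Result[void, string] =
  if nodes < 1 or tolerance < 0:
    return err("nodes must be >= 1 and tolerance >= 0")
  if tolerance >= nodes:
    return err("tolerance must be strictly smaller than nodes")
  if blockCount < nodes:
    return err("dataset too small: need at least one block per node")
  return ok()
```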

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request API returns an error (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests had to avoid building the CLI params until just before starting the node; otherwise, with params added ad hoc, adding a non-required parameter before a required one caused an error.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of withdrawn client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming
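
A minimal sketch of the reworked call shapes described above, with stand-in types; the parameter names come from the commit message, not a verified signature:

```nim
import std/options

type
  Address = string          # stand-in for an EVM address type
  SlotId = array[32, byte]  # stand-in for the contract's slot id
  RequestId = array[32, byte]

proc freeSlot(slotId: SlotId,
              rewardRecipient = none(Address),
              collateralRecipient = none(Address)) =
  # With no recipients given, the contract falls back to the host's
  # address; otherwise reward and collateral can go to distinct accounts.
  discard

proc withdrawFunds(requestId: RequestId,
                   withdrawRecipient = none(Address)) =
  # Client-side analogue: funds from a cancelled request can be
  # directed to a separate withdraw address.
  discard
```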

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool (flow sketched below)

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>
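
In outline, the downloader flow reads like the sketch below. `fetchCircuitHash` and `download` are hypothetical placeholders (per the bullets above, the real cirdl reads the hash from the marketplace contract and fetches over a chronos HTTP session); `extractAll` is zippy's tarball helper:

```nim
import std/os
import pkg/chronos
import pkg/zippy/tarballs

proc fetchCircuitHash(): Future[string] {.async.} =
  return "0x0000" # hypothetical: resolved from the marketplace contract

proc download(url, dest: string) {.async.} =
  discard # hypothetical: fetched via a chronos HTTP session

proc downloadCircuit(repoUrl, circuitDir: string) {.async.} =
  let archive = circuitDir / "circuit.tar.gz"
  let hash = await fetchCircuitHash()
  await download(repoUrl & "/" & hash & ".tar.gz", archive)
  extractAll(archive, circuitDir / "unpacked") # zippy handles .tar.gz; dest must be fresh
```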

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* remove erasure and por parameters from openapi spec (#915)

* Move Building Codex guide to the main docs site (#893)

* updates Marketplace tutorial documentation (#888)

* updates Marketplace tutorial documentation

* Applies review comments to marketplace-tutorial

* Final formatting touches

* moved `Prerequisites` around

* Fixes indentation in one JSON snippet

* Use CLI args when passed for cirdl in Docker entrypoint (#927)

* Use CLI args when passed for cirdl in Docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase CI timeout

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Validator - support partitioning of the slot id space (#890)

* Adds validatorPartitionSize and validatorPartitionIndex config options

* adds partitioning options to the validation type

* adds partitioning logic to the validator

* ignores partitionIndex when partitionSize is either 0 or 1

* clips the partition index to <<partitionIndex mod partitionSize>>

* handles negative values for the validation partition index (rule sketched below)

* updates long description of the new validator cli options

* makes default partitionSize to be 0 for better backward compatibility

* Improving formatting on validator CLI

* refactors validation params into a separate type and simplifies validation of validation params

* removes suspected duplication

* fixes typo in validator CLI help

* updates README

* Applies review comments - using optionals and range types to handle validation params

* Adds initializer to the configFactory for validatorMaxSlots

* [Review] update validator CLI description and README

* [Review]: renaming validationParams to validationConfig (config)

* [Review]: move validationconfig.nim to a higher level (next to validation.nim)

* changes backing type of MaxSlots to be int and makes sure slots are validated without limit when maxSlots is set to 0

* adds more end-to-end tests for the validator and the groups

* fixes typo in README and conf.nim

* makes `maxSlotsConstraintRespected` and `shouldValidateSlot` private + updates the tests

* fixes public address of the signer account in the marketplace tutorial

* applies review comments - removes two tests
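
A sketch of the partitioning rule the bullets above describe; `shouldObserve` and the `uint64` slot representation are illustrative assumptions:

```nim
proc shouldObserve(slotId: uint64, partitionSize, partitionIndex: int): bool =
  if partitionSize <= 1:
    return true # 0 or 1 disables partitioning
  # clip the index via mod, handling negative values as described above
  let idx = ((partitionIndex mod partitionSize) + partitionSize) mod partitionSize
  int(slotId mod uint64(partitionSize)) == idx
```

With partition size s and index i, each validator then watches roughly a 1/s slice of the slot id space.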

* Remove moved docs (#930)

* Remove moved document

* Update main Readme and point links to the documentation site

* feat(slot-reservations): Support reserving slots (#907)

* feat(slot-reservations): Support reserving slots

Closes #898.

Wire up reserveSlot and canReserveSlot contract calls, but don't call them

* Remove return value from `reserveSlot`

* convert EthersError to MarketError

* Move convertEthersError to reserveSlot

* bump codex-contracts-eth after rebase

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.
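
A hedged sketch of the new parameter shapes (the real Market methods are async and live on a Market object; `RequestId` here is a stand-in):

```nim
import pkg/stint

type
  RequestId = array[32, byte] # stand-in for the real distinct id type

proc canReserveSlot(requestId: RequestId, slotIndex: UInt256): bool =
  true # placeholder: the real method queries the marketplace contract

proc reserveSlot(requestId: RequestId, slotIndex: UInt256) =
  discard # placeholder: the real method submits a contract transaction
```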

* bump codex-contracts-eth after rebase

* bump codex-contracts-eth to master after codex-contracts-eth/pull/177 merged

* feat(slot-reservations): Add SaleSlotReserving state (#917)

* convert EthersError to MarketError

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* Add SaleSlotReserving

Adds a new state, SaleSlotReserving, that attempts to reserve a slot before downloading.
If the slot cannot be reserved, the state moves to SaleIgnored.
On error, the state moves to SaleErrored.

SaleIgnored is also updated to pass in `reprocessSlot` and `returnBytes`, controlling the behaviour in the Sales module after the slot is ignored. This is because previously it was assumed that SaleIgnored was only reached when there was no Availability. This is no longer the case, since SaleIgnored can now be reached when a slot cannot be reserved.
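
A hedged sketch of the transition rule just described (the real sales states are objects with async run methods, not an enum):

```nim
type
  SaleState = enum
    saleSlotReserving, saleDownloading, saleIgnored, saleErrored

proc nextState(reserved, errored: bool): SaleState =
  if errored:
    saleErrored     # the reservation call itself failed
  elif reserved:
    saleDownloading # reservation succeeded; proceed to download
  else:
    saleIgnored     # slot could not be reserved; reprocess it later
```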

* Update SalePreparing

Specify `reprocessSlot` and `returnBytes` when moving to `SaleIgnored` from `SalePreparing`.

Update tests to include test for a raised CatchableError.

* Fix unit test

* Modify `canReserveSlot` and `reserveSlot` params after rebase

* Update MockMarket with new `canReserveSlot` and `reserveSlot` params

* fix after rebase

also bump codex-contracts-eth to master

* Use Ubuntu 20.04 for Linux amd64 releases (#939)

* Use Ubuntu 20.04 for Linux amd64 releases (#932)

* Accept branches with a slash in the name for release workflow (#932)

* Increase artifacts retention-days for release workflow (#932)

* feat(slot-reservations): support SlotReservationsFull event (#926)

* Remove moved docs (#935)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fix: null-ref in networkPeer (#937)

* Fixes nullref in networkPeer

* Removes inflight semaphore

* Revert "Removes inflight semaphore"

This reverts commit 26ec15c6f788df3adb6ff3b912a0c4b5d3139358.

* docs(openapi): provide better documentation for space endpoint parameters (#921)

* Trying to improve documentation

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

* Update openapi.yaml

Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Arnaud <arno.deville@gmail.com>

---------

Signed-off-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>

* Update Codex Testnet marketplace contract address (#944)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Signed-off-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Co-authored-by: Marcin Czenko <marcin.czenko@pm.me>
2024-10-08 12:22:12 +03:00
Slava
7ba5e8c13a
Release v0.1.5 (#941)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bytes for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for downloading a verifiable dataset after creating a request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request API returns an error (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests had to avoid building the CLI params until just before starting the node; otherwise, with params added ad hoc, adding a non-required parameter before a required one caused an error.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of withdrawn client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* remove erasure and por parameters from openapi spec (#915)

* Move Building Codex guide to the main docs site (#893)

* updates Marketplace tutorial documentation (#888)

* updates Marketplace tutorial documentation

* Applies review comments to marketplace-tutorial

* Final formatting touches

* moved `Prerequisites` around

* Fixes indentation in one JSON snippet

* Use CLI args when passed for cirdl in Docker entrypoint (#927)

* Use CLI args when passed for cirdl in Docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase CI timeout

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Validator - support partitioning of the slot id space (#890)

* Adds validatorPartitionSize and validatorPartitionIndex config options

* adds partitioning options to the validation type

* adds partitioning logic to the validator

* ignores partitionIndex when partitionSize is either 0 or 1

* clips the partition index to <<partitionIndex mod partitionSize>>

* handles negative values for the validation partition index

* updates long description of the new validator cli options

* makes default partitionSize to be 0 for better backward compatibility

* Improving formatting on validator CLI

* refactors validation params into a separate type and simplifies validation of validation params

* removes suspected duplication

* fixes typo in validator CLI help

* updates README

* Applies review comments - using optionals and range types to handle validation params

* Adds initializer to the configFactory for validatorMaxSlots

* [Review] update validator CLI description and README

* [Review]: renaming validationParams to validationConfig (config)

* [Review]: move validationconfig.nim to a higher level (next to validation.nim)

* changes backing type of MaxSlots to be int and makes sure slots are validated without limit when maxSlots is set to 0

* adds more end-to-end tests for the validator and the groups

* fixes typo in README and conf.nim

* makes `maxSlotsConstraintRespected` and `shouldValidateSlot` private + updates the tests

* fixes public address of the signer account in the marketplace tutorial

* applies review comments - removes two tests

* Remove moved docs (#930)

* Remove moved document

* Update main Readme and point links to the documentation site

* feat(slot-reservations): Support reserving slots (#907)

* feat(slot-reservations): Support reserving slots

Closes #898.

Wire up reserveSlot and canReserveSlot contract calls, but don't call them

* Remove return value from `reserveSlot`

* convert EthersError to MarketError

* Move convertEthersError to reserveSlot

* bump codex-contracts-eth after rebase

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* bump codex-contracts-eth after rebase

* bump codex-contracts-eth to master after codex-contracts-eth/pull/177 merged

* feat(slot-reservations): Add SaleSlotReserving state (#917)

* convert EthersError to MarketError

* change `canReserveSlot` and `reserveSlot` parameters

Parameters for `canReserveSlot` and `reserveSlot` were changed from `SlotId` to `RequestId` and `UInt256 slotIndex`.

* Add SaleSlotReserving

Adds a new state, SaleSlotReserving, that attempts to reserve a slot before downloading.
If the slot cannot be reserved, the state moves to SaleIgnored.
On error, the state moves to SaleErrored.

SaleIgnored is also updated to pass in `reprocessSlot` and `returnBytes`, controlling the behaviour in the Sales module after the slot is ignored. This is because previously it was assumed that SaleIgnored was only reached when there was no Availability. This is no longer the case, since SaleIgnored can now be reached when a slot cannot be reserved.

* Update SalePreparing

Specify `reprocessSlot` and `returnBytes` when moving to `SaleIgnored` from `SalePreparing`.

Update tests to include test for a raised CatchableError.

* Fix unit test

* Modify `canReserveSlot` and `reserveSlot` params after rebase

* Update MockMarket with new `canReserveSlot` and `reserveSlot` params

* fix after rebase

also bump codex-contracts-eth to master

* Use Ubuntu 20.04 for Linux amd64 releases (#939)

* Use Ubuntu 20.04 for Linux amd64 releases (#932)

* Accept branches with a slash in the name for release workflow (#932)

* Increase artifacts retention-days for release workflow (#932)

* feat(slot-reservations): support SlotReservationsFull event (#926)

* Remove moved docs (#935)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fix: null-ref in networkPeer (#937)

* Fixes nullref in networkPeer

* Removes inflight semaphore

* Revert "Removes inflight semaphore"

This reverts commit 26ec15c6f788df3adb6ff3b912a0c4b5d3139358.

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Co-authored-by: Marcin Czenko <marcin.czenko@pm.me>
2024-10-07 15:27:25 +03:00
Slava
484124db09
Release v0.1.4 (#912)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bytes for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for downloading a verifiable dataset after creating a request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

* Bump Nim to 1.6.21 (#851)

* bump Nim to 1.6.21 (range type reset fixes)

* remove incompatible versions from compiler matrix

* feat(rest): adds erasure coding constraints when requesting storage (#848)

* Rest API: add erasure coding constraints when requesting storage

* clean up

* Make error message for "dataset too small" more informative.

* fix API integration test

---------

Co-authored-by: gmega <giuliano.mega@gmail.com>

* Prover workshop band-aid (#853)

* add prover bandaid

* Improve error message text

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Bandaid for failing erasure coding (#855)

* Update Release workflow (#858)

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Fixes prover behavior with singleton proof trees (#859)

* add logs and test

* add Merkle proof checks

* factor out Circom input normalization, fix proof input serialization

* add test and update existing ones

* update circuit assets

* add back trace message

* switch contracts to fix branch

* update codex-contracts-eth to latest

* do not expose prove with prenormalized inputs

* Chronos v4 Update (v3 Compat Mode) (#814)

* add changes to use chronos v4 in compat mode

* switch chronos to compat fix branch

* use nimbus-build-system with configurable Nim repo

* add missing imports

* add missing await

* bump compat

* pin nim version in Makefile

* add await instead of asyncSpawn to advertisement queue loop

* bump DHT to v0.5.0

* allow error state of `onBatch` to propagate upwards in test code

* pin Nim compiler commit to avoid fetching stale branch

* make CI build against branch head instead of merge

* fix handling of return values in testslotqueue

* Downgrade to gcc 13 on Windows (#874)

* Downgrade to gcc 13 on Windows

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Increase build job timeout to 90 minutes

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add MIT/Apache licenses (#861)

* Add MIT/Apache licenses

* Center "Apache License"

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* remove wrong legal entity; rename apache license file

---------

Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>

* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)

* Add OPTIONS endpoint to allow the content-type header

* Remove useless header "Access-Control-Headers" and add cache

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>

* chore: add `downtimeProduct` config parameter (#867)

* chore: add `downtimeProduct` config parameter

* bump codex-contracts-eth to master

* Support CORS preflight requests when the storage request API returns an error (#878)

* Add CORS headers when the REST API is returning an error

* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>

---------

Signed-off-by: Arnaud <arnaud@status.im>

* refactor(marketplace): generic querying of historical marketplace events (#872)

* refactor(marketplace): move marketplace events to the Market abstraction

Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.

* Remove unneeded conversion

* Switch to generic implementation of event querying

* change parent type to MarketplaceEvent

* Remove extra license file (#876)

* remove extra license

* center "apache license"

* Update advertising (#862)

* Setting up advertiser

* Wires up advertiser

* cleanup

* test compiles

* tests pass

* setting up test for advertiser

* Finishes advertiser tests

* fixes commonstore tests

* Review comments by Giuliano

* Race condition found by Giuliano

* Review comment by Dmitriy

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>

* fixes tests

---------

Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>

* feat: add `--payout-address` (#870)

* feat: add `--payout-address`

Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.

* Remove optional payoutAddress

Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.

* Update integration tests to include --payout-address

* move payoutAddress from fillSlot to freeSlot

* Update integration tests to use required payoutAddress

- to make payoutAddress required, the integration tests had to avoid building the CLI params until just before starting the node; otherwise, with params added ad hoc, adding a non-required parameter before a required one caused an error.

* support client payout address

- withdrawFunds requires a withdrawAddress parameter, which directs payouts of withdrawn client funds (for a cancelled request) to that address.

* fix integration test

adds --payout-address to validators

* refactor: support withdrawFunds and freeSlot optional parameters

- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming

* Revert "Update integration tests to include --payout-address"

This reverts commit 8f9535cf35b0f2b183ac4013a7ed11b246486964.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.

* small fix

* bump contracts to fix marketplace spec

* bump codex-contracts-eth, now rebased on master

* bump codex-contracts-eth

now that feat/reward-address has been merged to master

* clean up, comments

* Rework circuit downloader (#882)

* Introduces a start method to prover

* Moves backend creation into start method

* sets up three paths for backend initialization

* Extracts backend initialization to backend-factory

* Implements loading backend from cli files or previously downloaded local files

* Wires up downloading and unzipping

* functional implementation

* Fixes testprover.nim

* Sets up tests for backendfactory

* includes libzip-dev

* pulls in updated contracts

* removes integration cli tests for r1cs, wasm, and zkey file arguments.

* Fixes issue where inner-scope values are lost before returning

* sets local proof verification for dist-test images

* Adds two traces and bumps nim-ethers

* Adds separate path for circuit files

* Create circuit dir if not exists

* fix: make sure requestStorage is mined

* fix: correct place to plug confirm

* test: fixing contracts tests

* Restores gitmodules

* restores nim-datastore reference

* Sets up downloader exe

* sets up tool skeleton

* implements getting of circuit hash

* Implements downloader tool

* sets up test skeleton

* Implements test for cirdl

* includes testTools in testAll

* Cleanup building.md

* cleans up previous downloader implementation

* cleans up testbackendfactory

* moves start of prover into node.nim

* Fills in arguments in example command

* Initializes backend in prover constructor

* Restores tests

* Restores tests for cli instructions

* Review comments by Dmitriy, part 1

* Quotes path in download instruction.

* replaces curl with chronos http session

* Moves cirdl build output to 'build' folder.

* Fixes chronicles log output

* Add cirdl support to the codex Dockerfile

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the docker entrypoint

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Add cirdl support to the release workflow

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Disable verify_circuit flag for releases

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>

* Removes backendFactory placeholder type

* wip

* Replaces zip library with status-im/zippy library (which supports zip and tar)

* Updates cirdl to not change circuitdir folder

* Switches from zip to tar.gz

* Review comments by Dmitriy

* updates codex-contracts-eth

* Adds testTools to CI

* Adds check for access to config.circuitdir

* Update fixture circuit zkey

* Update matrix to run tools tests on Windows

* Adds 'deps' dependency for cirdl

* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var

* Review comments by Giuliano

---------

Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>

* Support CORS for POST and PATCH availability endpoints (#897)

* Adds testnet marketplace address to known deployments (#911)

* API tweaks for OpenAPI, errors and endpoints (#886)

* All sorts of tweaks

* docs: availability's minPrice doc

* Revert changes to the two node test example

* Change default EC params in REST API

Change default EC params in REST API to 3 nodes and 1 tolerance.

Adjust integration tests to honour these settings.

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
2024-09-24 13:19:58 +03:00
Slava
89917d4bb6
Release v0.1.3 (#856) 2024-07-03 20:20:53 +03:00
Slava
7602adc0df
Release v0.1.2 (#847)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bytes for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

* prevent node crashing with `not val.isNil` (#843)

* bump nim-leopard to handle no parity data (#845)

* Fix verifiable manifest constructor (#844)

* Fix verifiable manifest constructor

* Add integration test for verifiable manifest download

Add integration test for downloading a verifiable dataset after creating a request for storage

* add missing import

* add testecbug to integration suite

* Remove hardhat instance from integration test

* change description, drop echo

---------

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
2024-06-27 08:51:50 +03:00
Slava
15ff87a8bb
Merge latest master into release (#842)
* fix: createReservation lock (#825)

* fix: createReservation lock

* fix: additional locking places

* fix: acquire lock

* chore: feedback

Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* feat: withLock template and fixed tests

* fix: use proc for MockReservations constructor

* chore: feedback

Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>

* chore: feedback implementation

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>

* Block deletion with ref count & repostore refactor (#631)

* Fix StoreStream so it doesn't return parity bytes  (#838)

* fix storestream so it doesn't return parity bytes for protected/verifiable manifests

* use Cid.example instead of creating a mock manually

* Fix verifiable manifest initialization (#839)

* fix verifiable manifest initialization

* fix linearstrategy, use verifiableStrategy to select blocks for slots

* check for both strategies in attribute inheritance test

* ci: add verify_circuit=true to the releases (#840)

* provisional fix so EC errors do not crash the node on download (#841)

---------

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
2024-06-26 05:38:04 +03:00
438 changed files with 25222 additions and 15648 deletions

View File

@ -1,2 +0,0 @@
# Formatted with nph v0.6.1-0-g0d8000e
e5df8c50d3b6e70e6eec1ff031657d2b7bb6fe63

View File

@@ -11,22 +11,28 @@ inputs:
    default: "amd64"
  nim_version:
    description: "Nim version"
-    default: "v2.0.14"
+    default: "version-1-6"
+  rust_version:
+    description: "Rust version"
+    default: "1.78.0"
  shell:
    description: "Shell to run commands in"
    default: "bash --noprofile --norc -e -o pipefail"
-  coverage:
-    description: "True if the process is used for coverage"
-    default: false
runs:
  using: "composite"
  steps:
+    - name: Rust (Linux)
+      if: inputs.os == 'linux'
+      shell: ${{ inputs.shell }} {0}
+      run: |
+        curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs/ | sh -s -- --default-toolchain=${{ inputs.rust_version }} -y
    - name: APT (Linux amd64/arm64)
      if: inputs.os == 'linux' && (inputs.cpu == 'amd64' || inputs.cpu == 'arm64')
      shell: ${{ inputs.shell }} {0}
      run: |
-        sudo apt-get update -qq
-        sudo DEBIAN_FRONTEND='noninteractive' apt-get install \
+        sudo apt-fast update -qq
+        sudo DEBIAN_FRONTEND='noninteractive' apt-fast install \
          --no-install-recommends -yq lcov
    - name: APT (Linux i386)
@@ -34,8 +40,8 @@ runs:
      shell: ${{ inputs.shell }} {0}
      run: |
        sudo dpkg --add-architecture i386
-        sudo apt-get update -qq
-        sudo DEBIAN_FRONTEND='noninteractive' apt-get install \
+        sudo apt-fast update -qq
+        sudo DEBIAN_FRONTEND='noninteractive' apt-fast install \
          --no-install-recommends -yq gcc-multilib g++-multilib
    - name: Homebrew (macOS)
@@ -56,6 +62,7 @@ runs:
          mingw-w64-ucrt-x86_64-toolchain
          mingw-w64-ucrt-x86_64-cmake
          mingw-w64-ucrt-x86_64-ntldd-git
+          mingw-w64-ucrt-x86_64-rust
    - name: MSYS2 (Windows i386)
      if: inputs.os == 'windows' && inputs.cpu == 'i386'
@@ -69,56 +76,13 @@ runs:
          mingw-w64-i686-toolchain
          mingw-w64-i686-cmake
          mingw-w64-i686-ntldd-git
+          mingw-w64-i686-rust
-    - name: Install gcc 14 on Linux
-      # We don't want to install gcc 14 for coverage (Ubuntu 20.04)
-      if: ${{ inputs.os == 'linux' && inputs.coverage != 'true' }}
-      shell: ${{ inputs.shell }} {0}
-      run: |
-        # Skip for older Ubuntu versions
-        if [[ $(lsb_release -r | awk -F '[^0-9]+' '{print $2}') -ge 24 ]]; then
-          # Install GCC-14
-          sudo apt-get update -qq
-          sudo apt-get install -yq gcc-14
-          # Add GCC-14 to alternatives
-          sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-14 14
-          # Set GCC-14 as the default
-          sudo update-alternatives --set gcc /usr/bin/gcc-14
-        fi
-    - name: Install ccache on Linux/Mac
-      if: inputs.os == 'linux' || inputs.os == 'macos'
-      uses: hendrikmuhs/ccache-action@v1.2
-      with:
-        create-symlink: false
-        key: ${{ inputs.os }}-${{ inputs.builder }}-${{ inputs.cpu }}-${{ inputs.tests }}-${{ inputs.nim_version }}-${{ github.run_id }}-${{ github.run_number }}-${{ github.run_attempt }}
-        evict-old-files: 7d
-    - name: Add ccache to path on Linux/Mac
-      if: inputs.os == 'linux' || inputs.os == 'macos'
-      shell: ${{ inputs.shell }} {0}
-      run: |
-        echo "/usr/lib/ccache:/usr/local/opt/ccache/libexec" >> "$GITHUB_PATH"
-        echo "/usr/local/opt/ccache/libexec" >> "$GITHUB_PATH"
-    - name: Install ccache on Windows
-      if: inputs.os == 'windows'
-      uses: hendrikmuhs/ccache-action@v1.2
-      with:
-        key: ${{ inputs.os }}-${{ inputs.builder }}-${{ inputs.cpu }}-${{ inputs.tests }}-${{ inputs.nim_version }}-${{ github.run_id }}-${{ github.run_number }}-${{ github.run_attempt }}
-        evict-old-files: 7d
-    - name: Enable ccache on Windows
+    - name: MSYS2 (Windows All) - Downgrade to gcc 13
      if: inputs.os == 'windows'
      shell: ${{ inputs.shell }} {0}
      run: |
-        CCACHE_DIR=$(dirname $(which ccache))/ccached
-        mkdir -p ${CCACHE_DIR}
-        ln -sf $(which ccache) ${CCACHE_DIR}/gcc.exe
-        ln -sf $(which ccache) ${CCACHE_DIR}/g++.exe
-        ln -sf $(which ccache) ${CCACHE_DIR}/cc.exe
-        ln -sf $(which ccache) ${CCACHE_DIR}/c++.exe
-        echo "export PATH=${CCACHE_DIR}:\$PATH" >> $HOME/.bash_profile # prefix path in MSYS2
+        pacman -U --noconfirm https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-13.2.0-6-any.pkg.tar.zst https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-libs-13.2.0-6-any.pkg.tar.zst
    - name: Derive environment variables
      shell: ${{ inputs.shell }} {0}
@@ -177,11 +141,8 @@ runs:
          llvm_bin_dir="${llvm_dir}/bin"
          llvm_lib_dir="${llvm_dir}/lib"
          echo "${llvm_bin_dir}" >> ${GITHUB_PATH}
-          # Make sure ccache has precedence (GITHUB_PATH is appending before)
-          echo "$(brew --prefix)/opt/ccache/libexec" >> ${GITHUB_PATH}
-          echo $PATH
          echo "LDFLAGS=${LDFLAGS} -L${libomp_lib_dir} -L${llvm_lib_dir} -Wl,-rpath,${llvm_lib_dir}" >> ${GITHUB_ENV}
-          NIMFLAGS="${NIMFLAGS} $(quote "-d:LeopardCmakeFlags='-DCMAKE_BUILD_TYPE=Release' -d:LeopardExtraCompilerFlags='-fopenmp' -d:LeopardExtraLinkerFlags='-fopenmp -L${libomp_lib_dir}'")"
+          NIMFLAGS="${NIMFLAGS} $(quote "-d:LeopardCmakeFlags='-DCMAKE_BUILD_TYPE=Release -DCMAKE_C_COMPILER=${llvm_bin_dir}/clang -DCMAKE_CXX_COMPILER=${llvm_bin_dir}/clang++' -d:LeopardExtraCompilerlags='-fopenmp' -d:LeopardExtraLinkerFlags='-fopenmp -L${libomp_lib_dir}'")"
          echo "NIMFLAGS=${NIMFLAGS}" >> $GITHUB_ENV
        fi
@@ -198,27 +159,18 @@ runs:
    - name: Restore Nim toolchain binaries from cache
      id: nim-cache
      uses: actions/cache@v4
-      if: ${{ inputs.coverage != 'true' }}
      with:
        path: NimBinaries
-        key: ${{ inputs.os }}-${{ inputs.cpu }}-nim-${{ inputs.nim_version }}-cache-${{ env.cache_nonce }}-${{ github.run_id }}-${{ github.run_number }}-${{ github.run_attempt }}
+        key: ${{ inputs.os }}-${{ inputs.cpu }}-nim-${{ inputs.nim_version }}-cache-${{ env.cache_nonce }}-${{ github.run_id }}
        restore-keys: ${{ inputs.os }}-${{ inputs.cpu }}-nim-${{ inputs.nim_version }}-cache-${{ env.cache_nonce }}
    - name: Set NIM_COMMIT
      shell: ${{ inputs.shell }} {0}
      run: echo "NIM_COMMIT=${{ inputs.nim_version }}" >> ${GITHUB_ENV}
-    - name: MSYS2 (Windows All) - Disable git symbolic links (since miniupnp 2.2.5)
-      if: inputs.os == 'windows'
-      shell: ${{ inputs.shell }} {0}
-      run: |
-        git config --global core.symlinks false
-    - name: Build Nim and Logos Storage dependencies
+    - name: Build Nim and Codex dependencies
      shell: ${{ inputs.shell }} {0}
      run: |
-        which gcc
-        gcc --version
        make -j${ncpu} CI_CACHE=NimBinaries ${ARCH_OVERRIDE} QUICK_AND_DIRTY_COMPILER=1 update
        echo
        ./env.sh nim --version
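The removed "Enable ccache on Windows" step above relies on a standard ccache trick: symlinks named after the compilers (`gcc.exe`, `g++.exe`, and so on) point at the `ccache` binary, and prepending that directory to `PATH` routes every compile through the cache. A minimal sketch for checking such a setup, assuming the symlink directory from the step above is already first on `PATH`:

```bash
# Verify the masquerade and observe cache effectiveness (standard ccache flags):
which gcc               # expected: .../ccached/gcc.exe, i.e. the ccache symlink
ccache --zero-stats     # reset counters before a measurement
make -j"$(nproc)"       # first build populates the cache
ccache --show-stats     # a second build should then show cache hits
```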


@@ -3,14 +3,12 @@ Tips for shorter build times
### Runner availability ###
-When running on the Github free, pro or team plan, the bottleneck when
-optimizing workflows is the availability of macOS runners. Therefore, anything
-that reduces the time spent in macOS jobs will have a positive impact on the
-time waiting for runners to become available. On the Github enterprise plan,
-this is not the case and you can more freely use parallelization on multiple
-runners. The usage limits for Github Actions are [described here][limits]. You
-can see a breakdown of runner usage for your jobs in the Github Actions tab
-([example][usage]).
+Currently, the biggest bottleneck when optimizing workflows is the availability
+of Windows and macOS runners. Therefore, anything that reduces the time spent in
+Windows or macOS jobs will have a positive impact on the time waiting for
+runners to become available. The usage limits for Github Actions are [described
+here][limits]. You can see a breakdown of runner usage for your jobs in the
+Github Actions tab ([example][usage]).
### Windows is slow ###
@@ -24,10 +22,11 @@ analysis, etc. are therefore better performed on a Linux runner.
Breaking up a long build job into several jobs that you run in parallel can have
a positive impact on the wall clock time that a workflow runs. For instance, you
-might consider running unit tests and integration tests in parallel. When
-running on the Github free, pro or team plan, keep in mind that availability of
-macOS runners is a bottleneck. If you split a macOS job into two jobs, you now
-need to wait for two macOS runners to become available.
+might consider running unit tests and integration tests in parallel. Keep in
+mind however that availability of macOS and Windows runners is the biggest
+bottleneck. If you split a Windows job into two jobs, you now need to wait for
+two Windows runners to become available! Therefore parallelization often only
+makes sense for Linux jobs.
### Refactoring ###
@@ -67,10 +66,9 @@ might seem inconvenient, because when you're debugging an issue you often want
to know whether you introduced a failure on all platforms, or only on a single
one. You might be tempted to disable fail-fast, but keep in mind that this keeps
runners busy for longer on a workflow that you know is going to fail anyway.
-Consequent runs will therefore take longer to start. Fail fast is most likely
-better for overall development speed.
+Consequent runs will therefore take longer to start. Fail fast is most likely better for overall development speed.
-[usage]: https://github.com/logos-storage/logos-storage-nim/actions/runs/3462031231/usage
+[usage]: https://github.com/codex-storage/nim-codex/actions/runs/3462031231/usage
[composite]: https://docs.github.com/en/actions/creating-actions/creating-a-composite-action
[reusable]: https://docs.github.com/en/actions/using-workflows/reusing-workflows
[cache]: https://github.com/actions/cache/blob/main/workarounds.md#update-a-cache
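The parallelization advice in the document above maps directly onto this repository's Makefile targets (shown later in this diff): each parallel CI job runs exactly one of them. A sketch of the split, expressed as the commands the two jobs would execute (assuming `ncpu` is set, as it is in the workflows):

```bash
# Job 1: unit tests only
make -j${ncpu} test
# Job 2: integration tests only, on its own runner
make -j${ncpu} testIntegration
```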


@@ -24,9 +24,9 @@ jobs:
    run:
      shell: ${{ matrix.shell }} {0}
-    name: ${{ matrix.os }}-${{ matrix.tests }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.job_number }}
+    name: '${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.tests }}'
    runs-on: ${{ matrix.builder }}
-    timeout-minutes: 90
+    timeout-minutes: 100
    steps:
      - name: Checkout sources
        uses: actions/checkout@v4
@@ -38,31 +38,52 @@ jobs:
        uses: ./.github/actions/nimbus-build-system
        with:
          os: ${{ matrix.os }}
          cpu: ${{ matrix.cpu }}
          shell: ${{ matrix.shell }}
          nim_version: ${{ matrix.nim_version }}
-          coverage: false
      ## Part 1 Tests ##
      - name: Unit tests
        if: matrix.tests == 'unittest' || matrix.tests == 'all'
        run: make -j${ncpu} test
+      # workaround for https://github.com/NomicFoundation/hardhat/issues/3877
+      - name: Setup Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: 18.15
+      - name: Start Ethereum node with Codex contracts
+        if: matrix.tests == 'contract' || matrix.tests == 'integration' || matrix.tests == 'tools' || matrix.tests == 'all'
+        working-directory: vendor/codex-contracts-eth
+        env:
+          MSYS2_PATH_TYPE: inherit
+        run: |
+          npm install
+          npm start &
      ## Part 2 Tests ##
+      - name: Contract tests
+        if: matrix.tests == 'contract' || matrix.tests == 'all'
+        run: make -j${ncpu} testContracts
+      ## Part 3 Tests ##
      - name: Integration tests
        if: matrix.tests == 'integration' || matrix.tests == 'all'
-        env:
-          CODEX_INTEGRATION_TEST_INCLUDES: ${{ matrix.includes }}
-        run: make -j${ncpu} DEBUG=${{ runner.debug }} testIntegration
+        run: make -j${ncpu} testIntegration
      - name: Upload integration tests log files
        uses: actions/upload-artifact@v4
        if: (matrix.tests == 'integration' || matrix.tests == 'all') && always()
        with:
-          name: ${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.job_number }}-integration-tests-logs
+          name: ${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-integration-tests-logs
          path: tests/integration/logs/
          retention-days: 1
+      ## Part 4 Tools ##
+      - name: Tools tests
+        if: matrix.tests == 'tools' || matrix.tests == 'all'
+        run: make -j${ncpu} testTools
  status:
    if: always()
    needs: [build]
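The "Start Ethereum node with Codex contracts" step added above can be reproduced locally before running contract or integration tests. A sketch, assuming Node.js 18 is installed and submodules are checked out (the commands are taken from the diff; only the `nproc` substitution for `ncpu` is an assumption):

```bash
# Bring up a local EVM node with the Codex contracts deployed, then run the tests:
cd vendor/codex-contracts-eth
npm install
npm start &
cd ../..
make -j"$(nproc)" testContracts    # afterwards: make testIntegration
```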


@@ -9,29 +9,31 @@ on:
env:
  cache_nonce: 0 # Allows for easily busting actions/cache caches
-  nim_version: v2.2.4
+  nim_version: pinned
concurrency:
  group: ${{ github.workflow }}-${{ github.ref || github.run_id }}
  cancel-in-progress: true
jobs:
  matrix:
    name: Compute matrix
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.matrix.outputs.matrix }}
      cache_nonce: ${{ env.cache_nonce }}
    steps:
-      - name: Checkout sources
-        uses: actions/checkout@v4
-      - name: Compute matrix
-        id: matrix
-        run: |
-          echo 'matrix<<EOF' >> $GITHUB_OUTPUT
-          tools/scripts/ci-job-matrix.sh >> $GITHUB_OUTPUT
-          echo 'EOF' >> $GITHUB_OUTPUT
+      - name: Compute matrix
+        id: matrix
+        uses: fabiocaccamo/create-matrix-action@v4
+        with:
+          matrix: |
+            os {linux}, cpu {amd64}, builder {ubuntu-20.04}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {macos}, cpu {amd64}, builder {macos-13}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {unittest}, nim_version {${{ env.nim_version }}}, shell {msys2}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {contract}, nim_version {${{ env.nim_version }}}, shell {msys2}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {integration}, nim_version {${{ env.nim_version }}}, shell {msys2}
+            os {windows}, cpu {amd64}, builder {windows-latest}, tests {tools}, nim_version {${{ env.nim_version }}}, shell {msys2}
  build:
    needs: matrix
@@ -40,21 +42,8 @@ jobs:
      matrix: ${{ needs.matrix.outputs.matrix }}
      cache_nonce: ${{ needs.matrix.outputs.cache_nonce }}
-  linting:
-    runs-on: ubuntu-latest
-    if: github.event_name == 'pull_request'
-    steps:
-      - uses: actions/checkout@v4
-      - name: Check `nph` formatting
-        uses: arnetheduck/nph-action@v1
-        with:
-          version: 0.6.1
-          options: "codex/ tests/"
-          fail: true
-          suggest: true
  coverage:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
    steps:
      - name: Checkout sources
        uses: actions/checkout@v4
@@ -67,7 +56,6 @@ jobs:
        with:
          os: linux
          nim_version: ${{ env.nim_version }}
-          coverage: true
      - name: Generate coverage data
        run: |
@@ -85,29 +73,3 @@ jobs:
          name: codecov-umbrella
          token: ${{ secrets.CODECOV_TOKEN }}
          verbose: true
-  cbinding:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout sources
-        uses: actions/checkout@v4
-        with:
-          submodules: recursive
-          ref: ${{ github.event.pull_request.head.sha }}
-      - name: Setup Nimbus Build System
-        uses: ./.github/actions/nimbus-build-system
-        with:
-          os: linux
-          nim_version: ${{ env.nim_version }}
-      - name: C Binding build
-        run: |
-          make -j${ncpu} update
-          make -j${ncpu} libstorage
-      - name: C Binding test
-        run: |
-          cd examples/c
-          gcc -o storage storage.c -L../../build -lstorage -Wl,-rpath,../../ -pthread
-          LD_LIBRARY_PATH=../../build ./storage
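The removed `cbinding` job above doubles as a recipe for exercising the C bindings locally; the commands below are lifted verbatim from that job and can be run from the repository root:

```bash
make -j"$(nproc)" update
make -j"$(nproc)" libstorage       # produces the shared library under build/
cd examples/c
gcc -o storage storage.c -L../../build -lstorage -Wl,-rpath,../../ -pthread
LD_LIBRARY_PATH=../../build ./storage
```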


@@ -1,19 +0,0 @@
-name: Conventional Commits Linting
-on:
-  push:
-    branches:
-      - master
-  pull_request:
-  workflow_dispatch:
-  merge_group:
-jobs:
-  pr-title:
-    runs-on: ubuntu-latest
-    if: github.event_name == 'pull_request'
-    steps:
-      - name: PR Conventional Commit Validation
-        uses: ytanikin/pr-conventional-commits@1.4.1
-        with:
-          task_types: '["feat","fix","docs","test","ci","build","refactor","style","perf","chore","revert"]'


@@ -17,12 +17,6 @@ on:
      - '!docker/codex.Dockerfile'
      - '!docker/docker-entrypoint.sh'
  workflow_dispatch:
-    inputs:
-      run_release_tests:
-        description: Run Release tests
-        required: false
-        type: boolean
-        default: false
jobs:
@@ -34,6 +28,6 @@ jobs:
      nat_ip_auto: true
      tag_latest: ${{ github.ref_name == github.event.repository.default_branch || startsWith(github.ref, 'refs/tags/') }}
      tag_suffix: dist-tests
-      tag_stable: ${{ startsWith(github.ref, 'refs/tags/') }}
-      run_release_tests: ${{ inputs.run_release_tests }}
+      continuous_tests_list: PeersTest HoldMyBeerTest
+      continuous_tests_duration: 12h
    secrets: inherit


@@ -34,11 +34,6 @@ on:
        description: Set latest tag for Docker images
        required: false
        type: boolean
-      tag_stable:
-        default: false
-        description: Set stable tag for Docker images
-        required: false
-        type: boolean
      tag_sha:
        default: true
        description: Set Git short commit as Docker tag
@@ -59,15 +54,6 @@ on:
        description: Continuous Tests duration
        required: false
        type: string
-      run_release_tests:
-        description: Run Release tests
-        required: false
-        type: string
-        default: false
-    outputs:
-      codex_image:
-        description: Logos Storage Docker image tag
-        value: ${{ jobs.publish.outputs.codex_image }}
env:
@@ -78,32 +64,19 @@ env:
  NIMFLAGS: ${{ inputs.nimflags }}
  NAT_IP_AUTO: ${{ inputs.nat_ip_auto }}
  TAG_LATEST: ${{ inputs.tag_latest }}
-  TAG_STABLE: ${{ inputs.tag_stable }}
  TAG_SHA: ${{ inputs.tag_sha }}
  TAG_SUFFIX: ${{ inputs.tag_suffix }}
  # Tests
-  TESTS_SOURCE: logos-storage/logos-storage-nim-cs-dist-tests
-  TESTS_BRANCH: master
+  CONTINUOUS_TESTS_SOURCE: codex-storage/cs-codex-dist-tests
+  CONTINUOUS_TESTS_BRANCH: master
  CONTINUOUS_TESTS_LIST: ${{ inputs.continuous_tests_list }}
  CONTINUOUS_TESTS_DURATION: ${{ inputs.continuous_tests_duration }}
  CONTINUOUS_TESTS_NAMEPREFIX: c-tests-ci
jobs:
-  # Compute variables
-  compute:
-    name: Compute build ID
-    runs-on: ubuntu-latest
-    outputs:
-      build_id: ${{ steps.build_id.outputs.build_id }}
-    steps:
-      - name: Generate unique build id
-        id: build_id
-        run: echo "build_id=$(openssl rand -hex 5)" >> $GITHUB_OUTPUT
  # Build platform specific image
  build:
-    needs: compute
    strategy:
      fail-fast: true
      matrix:
@@ -116,11 +89,11 @@ jobs:
        - target:
            os: linux
            arch: amd64
-          builder: ubuntu-24.04
+          builder: ubuntu-22.04
        - target:
            os: linux
            arch: arm64
-          builder: ubuntu-24.04-arm
+          builder: buildjet-4vcpu-ubuntu-2204-arm
    name: Build ${{ matrix.target.os }}/${{ matrix.target.arch }}
    runs-on: ${{ matrix.builder }}
@@ -169,7 +142,7 @@ jobs:
      - name: Docker - Upload digest
        uses: actions/upload-artifact@v4
        with:
-          name: digests-${{ needs.compute.outputs.build_id }}-${{ matrix.target.arch }}
+          name: digests-${{ matrix.target.arch }}
          path: /tmp/digests/*
          if-no-files-found: error
          retention-days: 1
@@ -181,36 +154,35 @@ jobs:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.meta.outputs.version }}
-      codex_image: ${{ steps.image_tag.outputs.codex_image }}
-    needs: [build, compute]
+    needs: build
    steps:
      - name: Docker - Variables
        run: |
-          # Adjust custom suffix when set
+          # Adjust custom suffix when set and
          if [[ -n "${{ env.TAG_SUFFIX }}" ]]; then
-            echo "TAG_SUFFIX=-${{ env.TAG_SUFFIX }}" >> $GITHUB_ENV
+            echo "TAG_SUFFIX=-${{ env.TAG_SUFFIX }}" >>$GITHUB_ENV
          fi
          # Disable SHA tags on tagged release
          if [[ ${{ startsWith(github.ref, 'refs/tags/') }} == "true" ]]; then
-            echo "TAG_SHA=false" >> $GITHUB_ENV
+            echo "TAG_SHA=false" >>$GITHUB_ENV
          fi
          # Handle latest and latest-custom using raw
          if [[ ${{ env.TAG_SHA }} == "false" ]]; then
-            echo "TAG_LATEST=false" >> $GITHUB_ENV
-            echo "TAG_RAW=true" >> $GITHUB_ENV
+            echo "TAG_LATEST=false" >>$GITHUB_ENV
+            echo "TAG_RAW=true" >>$GITHUB_ENV
            if [[ -z "${{ env.TAG_SUFFIX }}" ]]; then
-              echo "TAG_RAW_VALUE=latest" >> $GITHUB_ENV
+              echo "TAG_RAW_VALUE=latest" >>$GITHUB_ENV
            else
-              echo "TAG_RAW_VALUE=latest-{{ env.TAG_SUFFIX }}" >> $GITHUB_ENV
+              echo "TAG_RAW_VALUE=latest-{{ env.TAG_SUFFIX }}" >>$GITHUB_ENV
            fi
          else
-            echo "TAG_RAW=false" >> $GITHUB_ENV
+            echo "TAG_RAW=false" >>$GITHUB_ENV
          fi
      - name: Docker - Download digests
        uses: actions/download-artifact@v4
        with:
-          pattern: digests-${{ needs.compute.outputs.build_id }}-*
+          pattern: digests-*
          merge-multiple: true
          path: /tmp/digests
@@ -228,7 +200,6 @@ jobs:
          tags: |
            type=semver,pattern={{version}}
            type=raw,enable=${{ env.TAG_RAW }},value=latest
-            type=raw,enable=${{ env.TAG_STABLE }},value=stable
            type=sha,enable=${{ env.TAG_SHA }}
      - name: Docker - Login to Docker Hub
@@ -243,81 +214,54 @@ jobs:
          docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
            $(printf '${{ env.DOCKER_REPO }}@sha256:%s ' *)
-      - name: Docker - Image tag
-        id: image_tag
-        run: echo "codex_image=${{ env.DOCKER_REPO }}:${{ steps.meta.outputs.version }}" >> "$GITHUB_OUTPUT"
      - name: Docker - Inspect image
-        run: docker buildx imagetools inspect ${{ steps.image_tag.outputs.codex_image }}
+        run: |
+          docker buildx imagetools inspect ${{ env.DOCKER_REPO }}:${{ steps.meta.outputs.version }}
-  # Compute Tests inputs
+  # Compute Continuous Tests inputs
  compute-tests-inputs:
-    name: Compute Tests inputs
-    if: ${{ inputs.continuous_tests_list != '' || inputs.run_release_tests == 'true' }}
+    name: Compute Continuous Tests list
+    if: ${{ inputs.continuous_tests_list != '' && github.ref_name == github.event.repository.default_branch }}
    runs-on: ubuntu-latest
    needs: publish
    outputs:
      source: ${{ steps.compute.outputs.source }}
-      branch: ${{ env.TESTS_BRANCH }}
-      workflow_source: ${{ env.TESTS_SOURCE }}
+      branch: ${{ steps.compute.outputs.branch }}
      codexdockerimage: ${{ steps.compute.outputs.codexdockerimage }}
-    steps:
-      - name: Compute Tests inputs
-        id: compute
-        run: |
-          echo "source=${{ format('{0}/{1}', github.server_url, env.TESTS_SOURCE) }}" >> "$GITHUB_OUTPUT"
-          echo "codexdockerimage=${{ inputs.docker_repo }}:${{ needs.publish.outputs.version }}" >> "$GITHUB_OUTPUT"
-  # Compute Continuous Tests inputs
-  compute-continuous-tests-inputs:
-    name: Compute Continuous Tests inputs
-    if: ${{ inputs.continuous_tests_list != '' && github.ref_name == github.event.repository.default_branch }}
-    runs-on: ubuntu-latest
-    needs: compute-tests-inputs
-    outputs:
      nameprefix: ${{ steps.compute.outputs.nameprefix }}
      continuous_tests_list: ${{ steps.compute.outputs.continuous_tests_list }}
-      continuous_tests_duration: ${{ env.CONTINUOUS_TESTS_DURATION }}
+      continuous_tests_duration: ${{ steps.compute.outputs.continuous_tests_duration }}
      continuous_tests_workflow: ${{ steps.compute.outputs.continuous_tests_workflow }}
+      workflow_source: ${{ steps.compute.outputs.workflow_source }}
    steps:
-      - name: Compute Continuous Tests inputs
+      - name: Compute Continuous Tests list
        id: compute
        run: |
+          echo "source=${{ format('{0}/{1}', github.server_url, env.CONTINUOUS_TESTS_SOURCE) }}" >> "$GITHUB_OUTPUT"
+          echo "branch=${{ env.CONTINUOUS_TESTS_BRANCH }}" >> "$GITHUB_OUTPUT"
+          echo "codexdockerimage=${{ inputs.docker_repo }}:${{ needs.publish.outputs.version }}" >> "$GITHUB_OUTPUT"
          echo "nameprefix=$(awk '{ print tolower($0) }' <<< ${{ env.CONTINUOUS_TESTS_NAMEPREFIX }})" >> "$GITHUB_OUTPUT"
          echo "continuous_tests_list=$(jq -cR 'split(" ")' <<< '${{ env.CONTINUOUS_TESTS_LIST }}')" >> "$GITHUB_OUTPUT"
+          echo "continuous_tests_duration=${{ env.CONTINUOUS_TESTS_DURATION }}" >> "$GITHUB_OUTPUT"
+          echo "workflow_source=${{ env.CONTINUOUS_TESTS_SOURCE }}" >> "$GITHUB_OUTPUT"
  # Run Continuous Tests
-  run-continuous-tests:
+  run-tests:
    name: Run Continuous Tests
-    needs: [compute-tests-inputs, compute-continuous-tests-inputs]
+    needs: [publish, compute-tests-inputs]
    strategy:
      max-parallel: 1
      matrix:
-        tests: ${{ fromJSON(needs.compute-continuous-tests-inputs.outputs.continuous_tests_list) }}
-    uses: logos-storage/logos-storage-nim-cs-dist-tests/.github/workflows/run-continuous-tests.yaml@master
+        tests: ${{ fromJSON(needs.compute-tests-inputs.outputs.continuous_tests_list) }}
+    uses: codex-storage/cs-codex-dist-tests/.github/workflows/run-continuous-tests.yaml@master
    with:
      source: ${{ needs.compute-tests-inputs.outputs.source }}
      branch: ${{ needs.compute-tests-inputs.outputs.branch }}
      codexdockerimage: ${{ needs.compute-tests-inputs.outputs.codexdockerimage }}
-      nameprefix: ${{ needs.compute-continuous-tests-inputs.outputs.nameprefix }}-${{ matrix.tests }}-${{ needs.compute-continuous-tests-inputs.outputs.continuous_tests_duration }}
+      nameprefix: ${{ needs.compute-tests-inputs.outputs.nameprefix }}-${{ matrix.tests }}-${{ needs.compute-tests-inputs.outputs.continuous_tests_duration }}
      tests_filter: ${{ matrix.tests }}
      tests_target_duration: ${{ needs.compute-tests-inputs.outputs.continuous_tests_duration }}
      workflow_source: ${{ needs.compute-tests-inputs.outputs.workflow_source }}
    secrets: inherit
-  # Run Release Tests
-  run-release-tests:
-    name: Run Release Tests
-    needs: [compute-tests-inputs]
-    if: ${{ inputs.run_release_tests == 'true' }}
-    uses: logos-storage/logos-storage-nim-cs-dist-tests/.github/workflows/run-release-tests.yaml@master
-    with:
-      source: ${{ needs.compute-tests-inputs.outputs.source }}
-      branch: ${{ needs.compute-tests-inputs.outputs.branch }}
-      codexdockerimage: ${{ needs.compute-tests-inputs.outputs.codexdockerimage }}
-      workflow_source: ${{ needs.compute-tests-inputs.outputs.workflow_source }}
-    secrets: inherit
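To make the shell logic in the "Docker - Variables" step concrete, here is a worked trace of two common cases (values are illustrative; the suffix is applied to tags elsewhere in the workflow). Note that `TAG_RAW_VALUE=latest-{{ env.TAG_SUFFIX }}` appears to be missing a `$`, so as written it would emit the literal text rather than the suffix value.

```bash
# Case 1: tagged release (github.ref = refs/tags/v0.1.6), no suffix
#   TAG_SHA=false  -> TAG_LATEST=false, TAG_RAW=true, TAG_RAW_VALUE=latest
#   resulting tags: 0.1.6 (semver) and latest (raw)
# Case 2: default-branch build with tag_suffix=dist-tests
#   TAG_SUFFIX becomes "-dist-tests", TAG_SHA stays true -> TAG_RAW=false
#   resulting tags: sha-<short-commit>, with the -dist-tests suffix applied
```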


@@ -18,11 +18,11 @@ on:
      - '!docker/docker-entrypoint.sh'
  workflow_dispatch:
jobs:
  build-and-push:
    name: Build and Push
    uses: ./.github/workflows/docker-reusable.yml
    with:
      tag_latest: ${{ github.ref_name == github.event.repository.default_branch || startsWith(github.ref, 'refs/tags/') }}
-      tag_stable: ${{ startsWith(github.ref, 'refs/tags/') }}
    secrets: inherit


@@ -2,17 +2,17 @@ name: OpenAPI
on:
  push:
-    tags:
-      - "v*.*.*"
+    branches:
+      - 'master'
    paths:
-      - "openapi.yaml"
-      - ".github/workflows/docs.yml"
+      - 'openapi.yaml'
+      - '.github/workflows/docs.yml'
  pull_request:
    branches:
-      - "**"
+      - '**'
    paths:
-      - "openapi.yaml"
-      - ".github/workflows/docs.yml"
+      - 'openapi.yaml'
+      - '.github/workflows/docs.yml'
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
@@ -28,39 +28,38 @@ jobs:
      - name: Checkout
        uses: actions/checkout@v4
        with:
-          fetch-depth: 0
+          fetch-depth: '0'
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Lint OpenAPI
+        shell: bash
        run: npx @redocly/cli lint openapi.yaml
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
-    if: startsWith(github.ref, 'refs/tags/')
+    if: github.ref == 'refs/heads/master'
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
-          fetch-depth: 0
+          fetch-depth: '0'
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - name: Build OpenAPI
-        run: npx @redocly/cli build-docs openapi.yaml --output openapi/index.html --title "Logos Storage API"
+        shell: bash
+        run: npx @redocly/cli build-docs openapi.yaml --output "openapi/index.html" --title "Codex API"
-      - name: Build Postman Collection
-        run: npx -y openapi-to-postmanv2 -s openapi.yaml -o openapi/postman.json -p -O folderStrategy=Tags,includeAuthInfoInExample=false
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
-          path: openapi
+          path: './openapi'
      - name: Deploy to GitHub Pages
        uses: actions/deploy-pages@v4


@@ -15,14 +15,12 @@ jobs:
      matrix: ${{ steps.matrix.outputs.matrix }}
      cache_nonce: ${{ env.cache_nonce }}
    steps:
-      - name: Checkout sources
-        uses: actions/checkout@v4
-      - name: Compute matrix
-        id: matrix
-        run: |
-          echo 'matrix<<EOF' >> $GITHUB_OUTPUT
-          tools/scripts/ci-job-matrix.sh linux >> $GITHUB_OUTPUT
-          echo 'EOF' >> $GITHUB_OUTPUT
+      - name: Compute matrix
+        id: matrix
+        uses: fabiocaccamo/create-matrix-action@v4
+        with:
+          matrix: |
+            os {linux}, cpu {amd64}, builder {ubuntu-20.04}, tests {all}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
  build:
    needs: matrix


@@ -4,14 +4,14 @@ on:
  push:
    tags:
      - 'v*.*.*'
-    branches:
-      - master
  workflow_dispatch:
env:
  cache_nonce: 0 # Allows for easily busting actions/cache caches
  nim_version: pinned
-  storage_binary_base: storage
+  rust_version: 1.78.0
+  codex_binary_base: codex
+  cirdl_binary_base: cirdl
  build_dir: build
  nim_flags: ''
  windows_libs: 'libstdc++-6.dll libgomp-1.dll libgcc_s_seh-1.dll libwinpthread-1.dll'
@@ -25,13 +25,14 @@ jobs:
    steps:
      - name: Compute matrix
        id: matrix
-        uses: fabiocaccamo/create-matrix-action@v5
+        uses: fabiocaccamo/create-matrix-action@v4
        with:
          matrix: |
-            os {linux}, cpu {amd64}, builder {ubuntu-22.04}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
-            os {linux}, cpu {arm64}, builder {ubuntu-22.04-arm}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
-            os {macos}, cpu {arm64}, builder {macos-14}, nim_version {${{ env.nim_version }}}, shell {bash --noprofile --norc -e -o pipefail}
-            os {windows}, cpu {amd64}, builder {windows-latest}, nim_version {${{ env.nim_version }}}, shell {msys2}
+            os {linux}, cpu {amd64}, builder {ubuntu-20.04}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {linux}, cpu {arm64}, builder {buildjet-4vcpu-ubuntu-2204-arm}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {macos}, cpu {amd64}, builder {macos-13}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {macos}, cpu {arm64}, builder {macos-14}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+            os {windows}, cpu {amd64}, builder {windows-latest}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {msys2}
# Build
build:
@@ -61,6 +62,7 @@ jobs:
          cpu: ${{ matrix.cpu }}
          shell: ${{ matrix.shell }}
          nim_version: ${{ matrix.nim_version }}
+          rust_version: ${{ matrix.rust_version }}
      - name: Release - Compute binary name
        run: |
@@ -70,34 +72,19 @@ jobs:
            windows*) os_name="windows" ;;
          esac
          github_ref_name="${GITHUB_REF_NAME/\//-}"
-          storage_binary="${{ env.storage_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
+          codex_binary="${{ env.codex_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
+          cirdl_binary="${{ env.cirdl_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
          if [[ ${os_name} == "windows" ]]; then
-            storage_binary="${storage_binary}.exe"
+            codex_binary="${codex_binary}.exe"
+            cirdl_binary="${cirdl_binary}.exe"
          fi
-          echo "storage_binary=${storage_binary}" >>$GITHUB_ENV
+          echo "codex_binary=${codex_binary}" >>$GITHUB_ENV
+          echo "cirdl_binary=${cirdl_binary}" >>$GITHUB_ENV
      - name: Release - Build
        run: |
-          make NIMFLAGS="--out:${{ env.build_dir }}/${{ env.storage_binary }} ${{ env.nim_flags }}"
+          make NIMFLAGS="--out:${{ env.build_dir }}/${{ env.codex_binary }} ${{ env.nim_flags }}"
+          make cirdl NIMFLAGS="--out:${{ env.build_dir }}/${{ env.cirdl_binary }} ${{ env.nim_flags }}"
-      - name: Release - Build libstorage (Linux)
-        if: matrix.os == 'linux'
-        run: |
-          make -j${ncpu} update
-          make -j${ncpu} libstorage
-      - name: Release - Build libstorage (MacOS)
-        if: matrix.os == 'macos'
-        run: |
-          make -j${ncpu} update
-          STORAGE_LIB_PARAMS="--passL:\"-Wl,-install_name,@rpath/libstorage.dylib\"" make -j${ncpu} libstorage
-      - name: Release - Build libstorage (Windows)
-        if: matrix.os == 'windows'
-        shell: msys2 {0}
-        run: |
-          make -j${ncpu} update
-          make -j${ncpu} libstorage
      - name: Release - Libraries
        run: |
@@ -107,14 +94,21 @@ jobs:
          done
        fi
-      - name: Release - Upload Logos Storage build artifacts
+      - name: Release - Upload codex build artifacts
        uses: actions/upload-artifact@v4
        with:
-          name: release-${{ env.storage_binary }}
-          path: ${{ env.build_dir }}/${{ env.storage_binary_base }}*
+          name: release-${{ env.codex_binary }}
+          path: ${{ env.build_dir }}/${{ env.codex_binary_base }}*
          retention-days: 30
-      - name: Release - Upload Windows libs
+      - name: Release - Upload cirdl build artifacts
+        uses: actions/upload-artifact@v4
+        with:
+          name: release-${{ env.cirdl_binary }}
+          path: ${{ env.build_dir }}/${{ env.cirdl_binary_base }}*
+          retention-days: 30
+      - name: Release - Upload windows libs
        if: matrix.os == 'windows'
        uses: actions/upload-artifact@v4
        with:
@@ -122,37 +116,6 @@ jobs:
          path: ${{ env.build_dir }}/*.dll
          retention-days: 30
-      - name: Release -Package artifacts Linux
-        if: matrix.os == 'linux'
-        run: |
-          sudo apt-get update && sudo apt-get install -y zip
-          ZIPFILE=storage-linux-${{ matrix.cpu }}.zip
-          zip -j $ZIPFILE ./build/libstorage.so ./library/libstorage.h
-          echo "ZIPFILE=$ZIPFILE" >> $GITHUB_ENV
-      - name: Package artifacts MacOS
-        if: matrix.os == 'macos'
-        run: |
-          ZIPFILE=storage-macos-${{ matrix.cpu }}.zip
-          zip -j $ZIPFILE ./build/libstorage.dylib ./library/libstorage.h
-          echo "ZIPFILE=$ZIPFILE" >> $GITHUB_ENV
-      - name: Release - Package artifacts (Windows)
-        if: matrix.os == 'windows'
-        shell: msys2 {0}
-        run: |
-          ZIPFILE=storage-windows-${{ matrix.cpu }}.zip
-          (cd ./build && 7z a -tzip "${GITHUB_WORKSPACE}/${ZIPFILE}" libstorage.dll)
-          (cd ./library && 7z a -tzip "${GITHUB_WORKSPACE}/${ZIPFILE}" libstorage.h)
-          echo "ZIPFILE=$ZIPFILE" >> $GITHUB_ENV
-      - name: Release - Upload artifacts
-        uses: actions/upload-artifact@v4
-        with:
-          name: ${{ env.ZIPFILE }}
-          path: ${{ env.ZIPFILE }}
-          if-no-files-found: error
# Release
release:
  runs-on: ubuntu-latest
@@ -166,12 +129,6 @@ jobs:
          merge-multiple: true
          path: /tmp/release
-      - name: Release - Download artifacts
-        uses: actions/download-artifact@v5
-        with:
-          pattern: libstorage*
-          path: /tmp/release
      - name: Release - Compress and checksum
        run: |
          cd /tmp/release
@@ -181,7 +138,7 @@ jobs:
          }
          # Compress and prepare
-          for file in ${{ env.storage_binary_base }}*; do
+          for file in ${{ env.codex_binary_base }}* ${{ env.cirdl_binary_base }}*; do
            if [[ "${file}" == *".exe"* ]]; then
              # Windows - binary only
@@ -213,34 +170,6 @@ jobs:
          path: /tmp/release/
          retention-days: 30
-      - name: Release - Upload to the cloud
-        env:
-          s3_endpoint: ${{ secrets.S3_ENDPOINT }}
-          s3_bucket: ${{ secrets.S3_BUCKET }}
-          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
-          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
-        run: |
-          # Variables
-          branch="${GITHUB_REF_NAME/\//-}"
-          folder="/tmp/release"
-          # Tagged releases
-          if [[ "${{ github.ref }}" == *"refs/tags/"* ]]; then
-            aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/releases/${branch} --endpoint-url ${{ env.s3_endpoint }}
-            echo "${branch}" > "${folder}"/latest
-            aws s3 cp "${folder}"/latest s3://${{ env.s3_bucket }}/releases/latest --endpoint-url ${{ env.s3_endpoint }}
-            rm -f "${folder}"/latest
-          # master branch
-          elif [[ "${branch}" == "${{ github.event.repository.default_branch }}" ]]; then
-            aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/${branch} --endpoint-url ${{ env.s3_endpoint }}
-          # Custom branch
-          else
-            aws s3 cp --recursive "${folder}" s3://${{ env.s3_bucket }}/branches/${branch} --endpoint-url ${{ env.s3_endpoint }}
-          fi
      - name: Release
        uses: softprops/action-gh-release@v2
        if: startsWith(github.ref, 'refs/tags/')
@@ -248,12 +177,3 @@ jobs:
          files: |
            /tmp/release/*
          make_latest: true
-      - name: Generate Python SDK
-        uses: peter-evans/repository-dispatch@v3
-        if: startsWith(github.ref, 'refs/tags/')
-        with:
-          token: ${{ secrets.DISPATCH_PAT }}
-          repository: logos-storage/logos-storage-py-api-client
-          event-type: generate
-          client-payload: '{"openapi_url": "https://raw.githubusercontent.com/logos-storage/logos-storage-nim/${{ github.ref }}/openapi.yaml"}'
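Two parts of the release pipeline above are easier to follow with concrete values (all values below are illustrative, not taken from a real run). First, the name scheme from "Release - Compute binary name"; second, where the removed "Release - Upload to the cloud" step placed artifacts (bucket and endpoint come from secrets; paths are from the script):

```bash
# Binary names, assuming GITHUB_REF_NAME=v0.1.6:
#   linux/amd64   -> codex-v0.1.6-linux-amd64 and cirdl-v0.1.6-linux-amd64
#   windows/amd64 -> the same names with a ".exe" suffix
# S3 destinations:
#   tag v0.1.6       -> s3://$s3_bucket/releases/v0.1.6/ (plus releases/latest containing "v0.1.6")
#   default branch   -> s3://$s3_bucket/master/
#   branch feature/x -> s3://$s3_bucket/branches/feature-x/ ("/" rewritten to "-")
```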

.gitignore (vendored, 9 changes)

@@ -5,13 +5,9 @@
!LICENSE*
!Makefile
-!Jenkinsfile
nimcache/
-# Executables when using nix will be stored in result/ directory
-result/
# Executables shall be put in an ignored build/ directory
build/
@@ -45,8 +41,3 @@ docker/prometheus-data
.DS_Store
nim.cfg
tests/integration/logs
-data/
-examples/c/data-dir
-examples/c/downloaded_hello.txt

.gitmodules (vendored, 57 changes)

@@ -37,17 +37,22 @@
	path = vendor/nim-nitro
	url = https://github.com/status-im/nim-nitro.git
	ignore = untracked
-	branch = main
+	branch = master
[submodule "vendor/questionable"]
	path = vendor/questionable
	url = https://github.com/status-im/questionable.git
	ignore = untracked
-	branch = main
+	branch = master
+[submodule "vendor/upraises"]
+	path = vendor/upraises
+	url = https://github.com/markspanbroek/upraises.git
+	ignore = untracked
+	branch = master
[submodule "vendor/asynctest"]
	path = vendor/asynctest
	url = https://github.com/status-im/asynctest.git
	ignore = untracked
-	branch = main
+	branch = master
[submodule "vendor/nim-presto"]
	path = vendor/nim-presto
	url = https://github.com/status-im/nim-presto.git
@@ -127,7 +132,7 @@
	path = vendor/nim-websock
	url = https://github.com/status-im/nim-websock.git
	ignore = untracked
-	branch = main
+	branch = master
[submodule "vendor/nim-contract-abi"]
	path = vendor/nim-contract-abi
	url = https://github.com/status-im/nim-contract-abi
@@ -155,10 +160,13 @@
	path = vendor/nim-taskpools
	url = https://github.com/status-im/nim-taskpools.git
	ignore = untracked
-	branch = stable
+	branch = master
-[submodule "vendor/logos-storage-nim-dht"]
-	path = vendor/logos-storage-nim-dht
-	url = https://github.com/logos-storage/logos-storage-nim-dht.git
+[submodule "vendor/nim-leopard"]
+	path = vendor/nim-leopard
+	url = https://github.com/status-im/nim-leopard.git
+[submodule "vendor/nim-codex-dht"]
+	path = vendor/nim-codex-dht
+	url = https://github.com/codex-storage/nim-codex-dht.git
	ignore = untracked
	branch = master
[submodule "vendor/nim-datastore"]
@@ -170,6 +178,9 @@
[submodule "vendor/nim-eth"]
	path = vendor/nim-eth
	url = https://github.com/status-im/nim-eth
+[submodule "vendor/codex-contracts-eth"]
+	path = vendor/codex-contracts-eth
+	url = https://github.com/status-im/codex-contracts-eth
[submodule "vendor/nim-protobuf-serialization"]
	path = vendor/nim-protobuf-serialization
	url = https://github.com/status-im/nim-protobuf-serialization
@@ -182,28 +193,28 @@
[submodule "vendor/npeg"]
	path = vendor/npeg
	url = https://github.com/zevv/npeg
+[submodule "vendor/nim-poseidon2"]
+	path = vendor/nim-poseidon2
+	url = https://github.com/codex-storage/nim-poseidon2.git
[submodule "vendor/constantine"]
	path = vendor/constantine
	url = https://github.com/mratsim/constantine.git
+[submodule "vendor/nim-circom-compat"]
+	path = vendor/nim-circom-compat
+	url = https://github.com/codex-storage/nim-circom-compat.git
+	ignore = untracked
+	branch = master
+[submodule "vendor/codex-storage-proofs-circuits"]
+	path = vendor/codex-storage-proofs-circuits
+	url = https://github.com/codex-storage/codex-storage-proofs-circuits.git
+	ignore = untracked
+	branch = master
[submodule "vendor/nim-serde"]
	path = vendor/nim-serde
-	url = https://github.com/logos-storage/nim-serde.git
+	url = https://github.com/codex-storage/nim-serde.git
[submodule "vendor/nim-leveldbstatic"]
	path = vendor/nim-leveldbstatic
-	url = https://github.com/logos-storage/nim-leveldb.git
+	url = https://github.com/codex-storage/nim-leveldb.git
[submodule "vendor/nim-zippy"]
	path = vendor/nim-zippy
	url = https://github.com/status-im/nim-zippy.git
-[submodule "vendor/nph"]
-	path = vendor/nph
-	url = https://github.com/arnetheduck/nph.git
-[submodule "vendor/nim-quic"]
-	path = vendor/nim-quic
-	url = https://github.com/vacp2p/nim-quic.git
-	ignore = untracked
-	branch = main
-[submodule "vendor/nim-ngtcp2"]
-	path = vendor/nim-ngtcp2
-	url = https://github.com/vacp2p/nim-ngtcp2.git
-	ignore = untracked
-	branch = main
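.gitmodules edits like the ones above only take effect in an existing checkout after a sync. A minimal sketch with plain git (the `make update` claim is an assumption based on the nimbus-build-system targets used throughout this diff):

```bash
git submodule sync --recursive            # re-read URLs/branches from .gitmodules
git submodule update --init --recursive   # fetch the vendored revisions
# or, via the build system target that the CI steps in this diff invoke:
make -j"$(nproc)" update
```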

Makefile (130 changes)

@@ -15,7 +15,7 @@
#
# If NIM_COMMIT is set to "nimbusbuild", this will use the
# version pinned by nimbus-build-system.
-PINNED_NIM_VERSION := v2.2.4
+PINNED_NIM_VERSION := 38640664088251bbc88917b4bacfd86ec53014b8 # 1.6.21
ifeq ($(NIM_COMMIT),)
NIM_COMMIT := $(PINNED_NIM_VERSION)
@@ -40,30 +40,6 @@ DOCKER_IMAGE_NIM_PARAMS ?= -d:chronicles_colors:none -d:insecure
LINK_PCRE := 0
-ifeq ($(OS),Windows_NT)
-  ifeq ($(PROCESSOR_ARCHITECTURE), AMD64)
-    ARCH = x86_64
-  endif
-  ifeq ($(PROCESSOR_ARCHITECTURE), ARM64)
-    ARCH = arm64
-  endif
-else
-  UNAME_P := $(shell uname -m)
-  ifneq ($(filter $(UNAME_P), i686 i386 x86_64),)
-    ARCH = x86_64
-  endif
-  ifneq ($(filter $(UNAME_P), aarch64 arm),)
-    ARCH = arm64
-  endif
-endif
-ifeq ($(ARCH), x86_64)
-  CXXFLAGS ?= -std=c++17 -mssse3
-else
-  CXXFLAGS ?= -std=c++17
-endif
-export CXXFLAGS
# we don't want an error here, so we can handle things later, in the ".DEFAULT" target
-include $(BUILD_SYSTEM_DIR)/makefiles/variables.mk
@@ -93,10 +69,15 @@ else # "variables.mk" was included. Business as usual until the end of this file
# default target, because it's the first one that doesn't start with '.'
-# Builds the Logos Storage binary
+# Builds the codex binary
all: | build deps
	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim storage $(NIM_PARAMS) build.nims
+		$(ENV_SCRIPT) nim codex $(NIM_PARAMS) build.nims
+# Build tools/cirdl
+cirdl: | deps
+	echo -e $(BUILD_MSG) "build/$@" && \
+		$(ENV_SCRIPT) nim toolsCirdl $(NIM_PARAMS) build.nims
# must be included after the default target
-include $(BUILD_SYSTEM_DIR)/makefiles/targets.mk
@@ -130,16 +111,31 @@ test: | build deps
	echo -e $(BUILD_MSG) "build/$@" && \
		$(ENV_SCRIPT) nim test $(NIM_PARAMS) build.nims
+# Builds and runs the smart contract tests
+testContracts: | build deps
+	echo -e $(BUILD_MSG) "build/$@" && \
+		$(ENV_SCRIPT) nim testContracts $(NIM_PARAMS) build.nims
# Builds and runs the integration tests
testIntegration: | build deps
	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim testIntegration $(TEST_PARAMS) $(NIM_PARAMS) --define:ws_resubscribe=240 build.nims
+		$(ENV_SCRIPT) nim testIntegration $(NIM_PARAMS) build.nims
# Builds and runs all tests (except for Taiko L2 tests)
testAll: | build deps
	echo -e $(BUILD_MSG) "build/$@" && \
		$(ENV_SCRIPT) nim testAll $(NIM_PARAMS) build.nims
+# Builds and runs Taiko L2 tests
+testTaiko: | build deps
+	echo -e $(BUILD_MSG) "build/$@" && \
+		$(ENV_SCRIPT) nim testTaiko $(NIM_PARAMS) build.nims
+# Builds and runs tool tests
+testTools: | cirdl
+	echo -e $(BUILD_MSG) "build/$@" && \
+		$(ENV_SCRIPT) nim testTools $(NIM_PARAMS) build.nims
# nim-libbacktrace
LIBBACKTRACE_MAKE_FLAGS := -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0
libbacktrace:
@@ -158,11 +154,11 @@ coverage:
	$(MAKE) NIMFLAGS="$(NIMFLAGS) --lineDir:on --passC:-fprofile-arcs --passC:-ftest-coverage --passL:-fprofile-arcs --passL:-ftest-coverage" test
	cd nimcache/release/testCodex && rm -f *.c
	mkdir -p coverage
-	lcov --capture --keep-going --directory nimcache/release/testCodex --output-file coverage/coverage.info
+	lcov --capture --directory nimcache/release/testCodex --output-file coverage/coverage.info
	shopt -s globstar && ls $$(pwd)/codex/{*,**/*}.nim
-	shopt -s globstar && lcov --extract coverage/coverage.info --keep-going $$(pwd)/codex/{*,**/*}.nim --output-file coverage/coverage.f.info
+	shopt -s globstar && lcov --extract coverage/coverage.info $$(pwd)/codex/{*,**/*}.nim --output-file coverage/coverage.f.info
	echo -e $(BUILD_MSG) "coverage/report/index.html"
-	genhtml coverage/coverage.f.info --keep-going --output-directory coverage/report
+	genhtml coverage/coverage.f.info --output-directory coverage/report
show-coverage:
	if which open >/dev/null; then (echo -e "\e[92mOpening\e[39m HTML coverage report in browser..." && open coverage/report/index.html) || true; fi
@@ -179,76 +175,4 @@ ifneq ($(USE_LIBBACKTRACE), 0)
	+ $(MAKE) -C vendor/nim-libbacktrace clean $(HANDLE_OUTPUT)
endif
-############
-## Format ##
-############
-.PHONY: build-nph install-nph-hook clean-nph print-nph-path
-# Default location for nph binary shall be next to nim binary to make it available on the path.
-NPH:=$(shell dirname $(NIM_BINARY))/nph
-build-nph:
-ifeq ("$(wildcard $(NPH))","")
-	$(ENV_SCRIPT) nim c vendor/nph/src/nph.nim && \
-		mv vendor/nph/src/nph $(shell dirname $(NPH))
-	echo "nph utility is available at " $(NPH)
-endif
-GIT_PRE_COMMIT_HOOK := .git/hooks/pre-commit
-install-nph-hook: build-nph
-ifeq ("$(wildcard $(GIT_PRE_COMMIT_HOOK))","")
-	cp ./tools/scripts/git_pre_commit_format.sh $(GIT_PRE_COMMIT_HOOK)
-else
-	echo "$(GIT_PRE_COMMIT_HOOK) already present, will NOT override"
-	exit 1
-endif
-nph/%: build-nph
-	echo -e $(FORMAT_MSG) "nph/$*" && \
-		$(NPH) $*
-format:
-	$(NPH) *.nim
-	$(NPH) codex/
-	$(NPH) tests/
-	$(NPH) library/
-clean-nph:
-	rm -f $(NPH)
-# To avoid hardcoding nph binary location in several places
-print-nph-path:
-	echo "$(NPH)"
-clean: | clean-nph
-################
-## C Bindings ##
-################
-.PHONY: libstorage
-STATIC ?= 0
-ifneq ($(strip $(STORAGE_LIB_PARAMS)),)
-  NIM_PARAMS := $(NIM_PARAMS) $(STORAGE_LIB_PARAMS)
-endif
-libstorage:
-	$(MAKE) deps
-	rm -f build/libstorage*
-ifeq ($(STATIC), 1)
-	echo -e $(BUILD_MSG) "build/$@.a" && \
-		$(ENV_SCRIPT) nim libstorageStatic $(NIM_PARAMS) codex.nims
-else ifeq ($(detected_OS),Windows)
-	echo -e $(BUILD_MSG) "build/$@.dll" && \
-		$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) codex.nims
-else ifeq ($(detected_OS),macOS)
-	echo -e $(BUILD_MSG) "build/$@.dylib" && \
-		$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) codex.nims
-else
-	echo -e $(BUILD_MSG) "build/$@.so" && \
-		$(ENV_SCRIPT) nim libstorageDynamic $(NIM_PARAMS) codex.nims
-endif
endif # "variables.mk" was not included


@ -1,22 +1,22 @@
# Logos Storage Decentralized Engine # Codex Decentralized Durability Engine
> The Logos Storage project aims to create a decentralized engine that allows persisting data in p2p networks. > The Codex project aims to create a decentralized durability engine that allows persisting data in p2p networks. In other words, it allows storing files and data with predictable durability guarantees for later retrieval.
> WARNING: This project is under active development and is considered pre-alpha. > WARNING: This project is under active development and is considered pre-alpha.
[![License: Apache](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![License: Apache](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![Stability: experimental](https://img.shields.io/badge/stability-experimental-orange.svg)](#stability) [![Stability: experimental](https://img.shields.io/badge/stability-experimental-orange.svg)](#stability)
[![CI](https://github.com/logos-storage/logos-storage-nim/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/logos-storage/logos-storage-nim/actions/workflows/ci.yml?query=branch%3Amaster) [![CI](https://github.com/codex-storage/nim-codex/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/codex-storage/nim-codex/actions/workflows/ci.yml?query=branch%3Amaster)
[![Docker](https://github.com/logos-storage/logos-storage-nim/actions/workflows/docker.yml/badge.svg?branch=master)](https://github.com/logos-storage/logos-storage-nim/actions/workflows/docker.yml?query=branch%3Amaster) [![Docker](https://github.com/codex-storage/nim-codex/actions/workflows/docker.yml/badge.svg?branch=master)](https://github.com/codex-storage/nim-codex/actions/workflows/docker.yml?query=branch%3Amaster)
[![Codecov](https://codecov.io/gh/logos-storage/logos-storage-nim/branch/master/graph/badge.svg?token=XFmCyPSNzW)](https://codecov.io/gh/logos-storage/logos-storage-nim) [![Codecov](https://codecov.io/gh/codex-storage/nim-codex/branch/master/graph/badge.svg?token=XFmCyPSNzW)](https://codecov.io/gh/codex-storage/nim-codex)
[![Discord](https://img.shields.io/discord/895609329053474826)](https://discord.gg/CaJTh24ddQ) [![Discord](https://img.shields.io/discord/895609329053474826)](https://discord.gg/CaJTh24ddQ)
![Docker Pulls](https://img.shields.io/docker/pulls/codexstorage/nim-codex) ![Docker Pulls](https://img.shields.io/docker/pulls/codexstorage/nim-codex)
## Build and Run ## Build and Run
For detailed instructions on preparing to build logos-storagenim see [*Build Logos Storage*](https://docs.codex.storage/learn/build). For detailed instructions on preparing to build nim-codex see [*Build Codex*](https://docs.codex.storage/learn/build).
To build the project, clone it and run: To build the project, clone it and run:
@ -29,12 +29,11 @@ The executable will be placed under the `build` directory under the project root
Run the client with: Run the client with:
```bash ```bash
build/storage build/codex
``` ```
## Configuration ## Configuration
It is possible to configure a Logos Storage node in several ways: It is possible to configure a Codex node in several ways:
1. CLI options 1. CLI options
2. Environment variables 2. Environment variables
3. Configuration file 3. Configuration file
@ -45,71 +44,10 @@ Please check [documentation](https://docs.codex.storage/learn/run#configuration)
## Guides ## Guides
To get acquainted with Logos Storage, consider: To get acquainted with Codex, consider:
* running the simple [Logos Storage Two-Client Test](https://docs.codex.storage/learn/local-two-client-test) for a start, and; * running the simple [Codex Two-Client Test](https://docs.codex.storage/learn/local-two-client-test) for a start, and;
* if you are feeling more adventurous, try [Running a Local Codex Network with Marketplace Support](https://docs.codex.storage/learn/local-marketplace) using a local blockchain as well.
## API ## API
The client exposes a REST API that can be used to interact with the clients. Overview of the API can be found on [api.codex.storage](https://api.codex.storage). The client exposes a REST API that can be used to interact with the clients. Overview of the API can be found on [api.codex.storage](https://api.codex.storage).
## Bindings
Logos Storage provides a C API that can be wrapped by other languages. The bindings are located in the `library` folder.
Currently, only a Go binding is included.
### Build the C library
```bash
make libstorage
```
This produces the shared library under `build/`.
### Run the Go example
Build the Go example:
```bash
go build -o storage-go examples/golang/storage.go
```
Export the library path:
```bash
export LD_LIBRARY_PATH=build
```
Run the example:
```bash
./storage-go
```
### Static vs Dynamic build
By default, Logos Storage builds a dynamic library (`libstorage.so`), which you can load at runtime.
If you prefer a static library (`libstorage.a`), set the `STATIC` flag:
```bash
# Build dynamic (default)
make libstorage
# Build static
make STATIC=1 libstorage
```
### Limitation
Callbacks must be fast and non-blocking; otherwise, the working thread will hang and prevent other requests from being processed.
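To illustrate the intended shape, here is a hedged Nim sketch; the callback name and signature are hypothetical (the actual API lives in the `library` folder), but the pattern is the point: copy the payload, enqueue it, and return immediately.
```nim
# Hypothetical callback sketch; `onResponse` and its signature are
# illustrative, not the actual libstorage API. The callback only copies
# the payload into a queue that the application drains in its own loop
# (single-threaded consumption assumed here).
import std/deques

var pending = initDeque[string]()

proc onResponse(msg: cstring) {.cdecl.} =
  # Fast and non-blocking: copy the C string and return immediately.
  pending.addLast($msg)
```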
## Contributing and development
Feel free to dive in; contributions are welcome! Open an issue or submit PRs.
### Linting and formatting
`logos-storage-nim` uses [nph](https://github.com/arnetheduck/nph) to format our code, and adhering to its styling is required.
If you are starting from a fresh setup, run `make build-nph` to get `nph`.
To format files, run `make nph/<file or folder you want to format>`.
If you want, you can install a Git pre-commit hook using `make install-nph-commit`, which will format modified files prior to committing them.
If you are using VSCode and the [NimLang](https://marketplace.visualstudio.com/items?itemName=NimLang.nimlang) extension, you can enable "Format On Save" (i.e. the `nim.formatOnSave` property) to format files using `nph`.

benchmarks/.gitignore (vendored, new file)
@ -0,0 +1,2 @@
ceremony
circuit_bench_*

benchmarks/README.md (new file)
@ -0,0 +1,33 @@
## Benchmark Runner
Modify the `runAllBenchmarks` proc in `run_benchmarks.nim` to set the desired parameters and variations.
Then run it:
```sh
nim c -r run_benchmarks
```
By default, all circuit files for each combination of circuit args will be generated in a unique folder named like:
nim-codex/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3
Generating the circuit files often takes longer than running benchmarks, so caching the results allows re-running the benchmark as needed.
You can modify the `CircuitArgs` and `CircuitEnv` objects in `runAllBenchmarks` to suit your needs. See `create_circuits.nim` for their definitions.
The runner executes all commands relative to the `nim-codex` repo. This simplifies finding the correct circuit include paths, etc. `CircuitEnv` sets all of this.
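As a sketch of what such a modification might look like (the field values are the defaults used in `run_benchmarks.nim`; the sweep over `nsamples` and the proc name are illustrative):
```nim
# Illustrative variation of the runner: sweep nsamples while reusing
# cached circuit folders. CircuitArgs/CircuitEnv come from create_circuits.nim.
import ./create_circuits

proc runMyBenchmarks() =
  var env = CircuitEnv.default()
  env.check()                        # verifies snarkjs, circom, cli, ptau
  var args = CircuitArgs(
    depth: 32, maxslots: 256, cellsize: 2048, blocksize: 65536,
    nsamples: 1, entropy: 1234567, seed: 12345,
    nslots: 11, ncells: 512, index: 3,
  )
  for n in [1, 5, 10]:               # the variation under test
    args.nsamples = n
    discard createCircuit(args, env) # generates or reuses the cached files
```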
## Codex Ark Circom CLI
Runs Codex's prover setup with Ark / Circom.
Compile:
```sh
nim c codex_ark_prover_cli.nim
```
Run to see usage:
```sh
./codex_ark_prover_cli -h
```

benchmarks/config.nims (new file)
@ -0,0 +1,15 @@
--path:".."
--path:"../tests"
--threads:on
--tlsEmulation:off
--d:release
# when not defined(chronicles_log_level):
# --define:"chronicles_log_level:NONE" # compile all log statements
# --define:"chronicles_sinks:textlines[dynamic]" # allow logs to be filtered at runtime
# --"import":"logging" # ensure that logging is ignored at runtime

benchmarks/create_circuits.nim (new file)
@ -0,0 +1,187 @@
import std/[hashes, json, strutils, strformat, os, osproc, uri]
import ./utils
type
CircuitEnv* = object
nimCircuitCli*: string
circuitDirIncludes*: string
ptauPath*: string
ptauUrl*: Uri
codexProjDir*: string
CircuitArgs* = object
depth*: int
maxslots*: int
cellsize*: int
blocksize*: int
nsamples*: int
entropy*: int
seed*: int
nslots*: int
ncells*: int
index*: int
proc findCodexProjectDir(): string =
## find codex proj dir -- assumes this script is in codex/benchmarks
result = currentSourcePath().parentDir.parentDir
func default*(tp: typedesc[CircuitEnv]): CircuitEnv =
let codexDir = findCodexProjectDir()
result.nimCircuitCli =
codexDir / "vendor" / "codex-storage-proofs-circuits" / "reference" / "nim" /
"proof_input" / "cli"
result.circuitDirIncludes =
codexDir / "vendor" / "codex-storage-proofs-circuits" / "circuit"
result.ptauPath =
codexDir / "benchmarks" / "ceremony" / "powersOfTau28_hez_final_23.ptau"
result.ptauUrl = "https://storage.googleapis.com/zkevm/ptau".parseUri
result.codexProjDir = codexDir
proc check*(env: var CircuitEnv) =
## check that the CWD of script is in the codex parent
let codexProjDir = findCodexProjectDir()
echo "\n\nFound project dir: ", codexProjDir
let snarkjs = findExe("snarkjs")
if snarkjs == "":
echo dedent"""
ERROR: must install snarkjs first
npm install -g snarkjs@latest
"""
let circom = findExe("circom")
if circom == "":
echo dedent"""
ERROR: must install circom first
git clone https://github.com/iden3/circom.git
cargo install --path circom
"""
if snarkjs == "" or circom == "":
quit 2
echo "Found SnarkJS: ", snarkjs
echo "Found Circom: ", circom
if not env.nimCircuitCli.fileExists:
echo "Nim Circuit reference cli not found: ", env.nimCircuitCli
echo "Building Circuit reference cli...\n"
withDir env.nimCircuitCli.parentDir:
runit "nimble build -d:release --styleCheck:off cli"
echo "CWD: ", getCurrentDir()
assert env.nimCircuitCli.fileExists()
echo "Found NimCircuitCli: ", env.nimCircuitCli
echo "Found Circuit Path: ", env.circuitDirIncludes
echo "Found PTAU file: ", env.ptauPath
proc downloadPtau*(ptauPath: string, ptauUrl: Uri) =
## download ptau file using curl if needed
if not ptauPath.fileExists:
echo "Ceremony file not found, downloading..."
createDir ptauPath.parentDir
withDir ptauPath.parentDir:
runit fmt"curl --output '{ptauPath}' '{$ptauUrl}/{ptauPath.splitPath().tail}'"
else:
echo "Found PTAU file at: ", ptauPath
proc getCircuitBenchStr*(args: CircuitArgs): string =
for f, v in fieldPairs(args):
result &= "_" & f & $v
proc getCircuitBenchPath*(args: CircuitArgs, env: CircuitEnv): string =
## generate folder name for unique circuit args
result = env.codexProjDir / "benchmarks/circuit_bench" & getCircuitBenchStr(args)
proc generateCircomAndSamples*(args: CircuitArgs, env: CircuitEnv, name: string) =
## run nim circuit and sample generator
var cliCmd = env.nimCircuitCli
for f, v in fieldPairs(args):
cliCmd &= " --" & f & "=" & $v
if not "input.json".fileExists:
echo "Generating Circom Files..."
runit fmt"{cliCmd} -v --circom={name}.circom --output=input.json"
proc createCircuit*(
args: CircuitArgs,
env: CircuitEnv,
name = "proof_main",
circBenchDir = getCircuitBenchPath(args, env),
someEntropy = "some_entropy_75289v3b7rcawcsyiur",
doGenerateWitness = false,
): tuple[dir: string, name: string] =
## Generates all the files needed to run a proof circuit. Downloads the PTAU file if needed.
##
## All needed circuit files will be generated as needed.
## They will be located in `circBenchDir` which defaults to a folder like:
## `nim-codex/benchmarks/circuit_bench_depth32_maxslots256_cellsize2048_blocksize65536_nsamples9_entropy1234567_seed12345_nslots11_ncells512_index3`
## with all the given CircuitArgs.
##
let circdir = circBenchDir
downloadPtau env.ptauPath, env.ptauUrl
echo "Creating circuit dir: ", circdir
createDir circdir
withDir circdir:
writeFile("circuit_params.json", pretty(%*args))
let
inputs = circdir / "input.json"
zkey = circdir / fmt"{name}.zkey"
wasm = circdir / fmt"{name}.wasm"
r1cs = circdir / fmt"{name}.r1cs"
wtns = circdir / fmt"{name}.wtns"
generateCircomAndSamples(args, env, name)
if not wasm.fileExists or not r1cs.fileExists:
runit fmt"circom --r1cs --wasm --O2 -l{env.circuitDirIncludes} {name}.circom"
moveFile fmt"{name}_js" / fmt"{name}.wasm", fmt"{name}.wasm"
echo "Found wasm: ", wasm
echo "Found r1cs: ", r1cs
if not zkey.fileExists:
echo "ZKey not found, generating..."
putEnv "NODE_OPTIONS", "--max-old-space-size=8192"
if not fmt"{name}_0000.zkey".fileExists:
runit fmt"snarkjs groth16 setup {r1cs} {env.ptauPath} {name}_0000.zkey"
echo fmt"Generated {name}_0000.zkey"
let cmd =
fmt"snarkjs zkey contribute {name}_0000.zkey {name}_0001.zkey --name='1st Contributor Name'"
echo "CMD: ", cmd
let cmdRes = execCmdEx(cmd, options = {}, input = someEntropy & "\n")
assert cmdRes.exitCode == 0
moveFile fmt"{name}_0001.zkey", fmt"{name}.zkey"
removeFile fmt"{name}_0000.zkey"
if not wtns.fileExists and doGenerateWitness:
runit fmt"node generate_witness.js {wtns} ../input.json ../witness.wtns"
return (circdir, name)
when isMainModule:
echo "findCodexProjectDir: ", findCodexProjectDir()
## test run creating a circuit
var env = CircuitEnv.default()
env.check()
let args = CircuitArgs(
depth: 32, # maximum depth of the slot tree
maxslots: 256, # maximum number of slots
cellsize: 2048, # cell size in bytes
blocksize: 65536, # block size in bytes
nsamples: 5, # number of samples to prove
entropy: 1234567, # external randomness
seed: 12345, # seed for creating fake data
nslots: 11, # number of slots in the dataset
index: 3, # which slot we prove (0..NSLOTS-1)
ncells: 512, # number of cells in this slot
)
let benchenv = createCircuit(args, env)
echo "\nBench dir:\n", benchenv

benchmarks/run_benchmarks.nim (new file)
@ -0,0 +1,105 @@
import std/[sequtils, strformat, os, options, importutils]
import std/[times, os, strutils, terminal]
import pkg/questionable
import pkg/questionable/results
import pkg/datastore
import pkg/codex/[rng, stores, merkletree, codextypes, slots]
import pkg/codex/utils/[json, poseidon2digest]
import pkg/codex/slots/[builder, sampler/utils, backends/helpers]
import pkg/constantine/math/[arithmetic, io/io_bigints, io/io_fields]
import ./utils
import ./create_circuits
type CircuitFiles* = object
r1cs*: string
wasm*: string
zkey*: string
inputs*: string
proc runArkCircom(args: CircuitArgs, files: CircuitFiles, benchmarkLoops: int) =
echo "Loading sample proof..."
var
inputData = files.inputs.readFile()
inputJson = !JsonNode.parse(inputData)
proofInputs = Poseidon2Hash.jsonToProofInput(inputJson)
circom = CircomCompat.init(
files.r1cs,
files.wasm,
files.zkey,
slotDepth = args.depth,
numSamples = args.nsamples,
)
defer:
circom.release() # this comes from the rust FFI
echo "Sample proof loaded..."
echo "Proving..."
let nameArgs = getCircuitBenchStr(args)
var proof: CircomProof
benchmark fmt"prover-{nameArgs}", benchmarkLoops:
proof = circom.prove(proofInputs).tryGet
var verRes: bool
benchmark fmt"verify-{nameArgs}", benchmarkLoops:
verRes = circom.verify(proof, proofInputs).tryGet
echo "verify result: ", verRes
proc runRapidSnark(args: CircuitArgs, files: CircuitFiles, benchmarkLoops: int) =
# time rapidsnark ${CIRCUIT_MAIN}.zkey witness.wtns proof.json public.json
echo "generating the witness..."
## TODO
proc runBenchmark(args: CircuitArgs, env: CircuitEnv, benchmarkLoops: int) =
## execute benchmarks given a set of args
## will create a folder in `benchmarks/circuit_bench_$(args)`
##
let env = createCircuit(args, env)
## TODO: copy over testcircomcompat proving
let files = CircuitFiles(
r1cs: env.dir / fmt"{env.name}.r1cs",
wasm: env.dir / fmt"{env.name}.wasm",
zkey: env.dir / fmt"{env.name}.zkey",
inputs: env.dir / fmt"input.json",
)
runArkCircom(args, files, benchmarkLoops)
proc runAllBenchmarks*() =
echo "Running benchmark"
# setup()
var env = CircuitEnv.default()
env.check()
var args = CircuitArgs(
depth: 32, # maximum depth of the slot tree
maxslots: 256, # maximum number of slots
cellsize: 2048, # cell size in bytes
blocksize: 65536, # block size in bytes
nsamples: 1, # number of samples to prove
entropy: 1234567, # external randomness
seed: 12345, # seed for creating fake data
nslots: 11, # number of slots in the dataset
index: 3, # which slot we prove (0..NSLOTS-1)
ncells: 512, # number of cells in this slot
)
let
numberSamples = 3
benchmarkLoops = 5
for i in 1 .. numberSamples:
args.nsamples = i
stdout.styledWriteLine(fgYellow, "\nbenchmarking args: ", $args)
runBenchmark(args, env, benchmarkLoops)
printBenchMarkSummaries()
when isMainModule:
runAllBenchmarks()

benchmarks/utils.nim (new file)
@ -0,0 +1,76 @@
import std/tables
template withDir*(dir: string, blk: untyped) =
## set working dir for duration of blk
let prev = getCurrentDir()
try:
setCurrentDir(dir)
`blk`
finally:
setCurrentDir(prev)
template runit*(cmd: string) =
## run a shell command and verify it exits without an error code
echo "RUNNING: ", cmd
let cmdRes = execShellCmd(cmd)
echo "STATUS: ", cmdRes
assert cmdRes == 0
var benchRuns* = newTable[string, tuple[avgTimeSec: float, count: int]]()
func avg(vals: openArray[float]): float =
for v in vals:
result += v / vals.len().toFloat()
template benchmark*(name: untyped, count: int, blk: untyped) =
let benchmarkName: string = name
## simple benchmarking of a block of code
var runs = newSeqOfCap[float](count)
for i in 1 .. count:
block:
let t0 = epochTime()
`blk`
let elapsed = epochTime() - t0
runs.add elapsed
var elapsedStr = ""
for v in runs:
elapsedStr &= ", " & v.formatFloat(format = ffDecimal, precision = 3)
stdout.styledWriteLine(
fgGreen, "CPU Time [", benchmarkName, "] ", "avg(", $count, "): ", elapsedStr, " s"
)
benchRuns[benchmarkName] = (runs.avg(), count)
template printBenchMarkSummaries*(printRegular=true, printTsv=true) =
if printRegular:
echo ""
for k, v in benchRuns:
echo "Benchmark average run ", v.avgTimeSec, " for ", v.count, " runs ", "for ", k
if printTsv:
echo ""
echo "name", "\t", "avgTimeSec", "\t", "count"
for k, v in benchRuns:
echo k, "\t", v.avgTimeSec, "\t", v.count
import std/math
func floorLog2*(x: int): int =
var k = -1
var y = x
while (y > 0):
k += 1
y = y shr 1
return k
func ceilingLog2*(x: int): int =
if (x == 0):
return -1
else:
return (floorLog2(x - 1) + 1)
func checkPowerOfTwo*(x: int, what: string): int =
let k = ceilingLog2(x)
assert(x == 2 ^ k, ("`" & what & "` is expected to be a power of 2"))
return x
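A hedged usage sketch of the `benchmark` template above; the workload is a stand-in, and the imports mirror what `run_benchmarks.nim` pulls in, since the template body expands at the call site:
```nim
# Illustrative use of benchmarks/utils.nim: time a stand-in workload
# three times, then print the collected averages (regular and TSV).
import std/[os, times, strutils, terminal, tables]
import ./utils

benchmark "sleep-100ms", 3:
  sleep(100)                 # stand-in for the real workload

printBenchMarkSummaries()
```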

build.nims
@ -1,92 +1,66 @@
mode = ScriptMode.Verbose mode = ScriptMode.Verbose
import std/os except commandLineParams import std/os except commandLineParams
import std/strutils
### Helper functions ### Helper functions
proc truthy(val: string): bool = proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") =
const truthySwitches = @["yes", "1", "on", "true"]
return val in truthySwitches
proc buildBinary(
srcName: string,
outName = os.lastPathPart(srcName),
srcDir = "./",
params = "",
lang = "c",
) =
if not dirExists "build": if not dirExists "build":
mkDir "build" mkDir "build"
# allow something like "nim nimbus --verbosity:0 --hints:off nimbus.nims" # allow something like "nim nimbus --verbosity:0 --hints:off nimbus.nims"
var extra_params = params var extra_params = params
when compiles(commandLineParams): when compiles(commandLineParams):
for param in commandLineParams(): for param in commandLineParams():
extra_params &= " " & param extra_params &= " " & param
else: else:
for i in 2 ..< paramCount(): for i in 2..<paramCount():
extra_params &= " " & paramStr(i) extra_params &= " " & paramStr(i)
let let
# Place build output in 'build' folder, even if name includes a longer path. # Place build output in 'build' folder, even if name includes a longer path.
cmd = outName = os.lastPathPart(name)
"nim " & lang & " --out:build/" & outName & " " & extra_params & " " & srcDir & cmd = "nim " & lang & " --out:build/" & outName & " " & extra_params & " " & srcDir & name & ".nim"
srcName & ".nim"
exec(cmd) exec(cmd)
proc buildLibrary(name: string, srcDir = "./", params = "", `type` = "dynamic") = proc test(name: string, srcDir = "tests/", params = "", lang = "c") =
if not dirExists "build": buildBinary name, srcDir, params
mkDir "build" exec "build/" & name
if `type` == "dynamic": task codex, "build codex binary":
let lib_name = ( buildBinary "codex", params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE"
when defined(windows): name & ".dll"
elif defined(macosx): name & ".dylib"
else: name & ".so"
)
exec "nim c" & " --out:build/" & lib_name &
" --threads:on --app:lib --opt:size --noMain --mm:refc --header --d:metrics " &
"--nimMainPrefix:libstorage -d:noSignalHandler " &
"-d:chronicles_runtime_filtering " & "-d:chronicles_log_level=TRACE " & params &
" " & srcDir & name & ".nim"
else:
exec "nim c" & " --out:build/" & name &
".a --threads:on --app:staticlib --opt:size --noMain --mm:refc --header --d:metrics " &
"--nimMainPrefix:libstorage -d:noSignalHandler " &
"-d:chronicles_runtime_filtering " & "-d:chronicles_log_level=TRACE " & params &
" " & srcDir & name & ".nim"
proc test(name: string, outName = name, srcDir = "tests/", params = "", lang = "c") = task toolsCirdl, "build tools/cirdl binary":
buildBinary name, outName, srcDir, params buildBinary "tools/cirdl/cirdl"
exec "build/" & outName
task storage, "build logos storage binary": task testCodex, "Build & run Codex tests":
buildBinary "codex", test "testCodex", params = "-d:codex_enable_proof_failures=true"
outname = "storage",
params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE"
task testStorage, "Build & run Logos Storage tests": task testContracts, "Build & run Codex Contract tests":
test "testCodex", outName = "testStorage" test "testContracts"
task testIntegration, "Run integration tests": task testIntegration, "Run integration tests":
buildBinary "codex", buildBinary "codex", params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE -d:codex_enable_proof_failures=true"
outName = "storage",
params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE"
test "testIntegration" test "testIntegration"
# use params to enable logging from the integration test executable
# test "testIntegration", params = "-d:chronicles_sinks=textlines[notimestamps,stdout],textlines[dynamic] " &
# "-d:chronicles_enabled_topics:integration:TRACE"
task build, "build Logos Storage binary": task build, "build codex binary":
storageTask() codexTask()
task test, "Run tests": task test, "Run tests":
testStorageTask() testCodexTask()
task testTools, "Run Tools tests":
toolsCirdlTask()
test "testTools"
task testAll, "Run all tests (except for Taiko L2 tests)": task testAll, "Run all tests (except for Taiko L2 tests)":
testStorageTask() testCodexTask()
testContractsTask()
testIntegrationTask() testIntegrationTask()
testToolsTask()
task testTaiko, "Run Taiko L2 tests":
codexTask()
test "testTaiko"
import strutils import strutils
import os import os
@ -111,50 +85,20 @@ task coverage, "generates code coverage report":
var nimSrcs = " " var nimSrcs = " "
for f in walkDirRec("codex", {pcFile}): for f in walkDirRec("codex", {pcFile}):
if f.endswith(".nim"): if f.endswith(".nim"): nimSrcs.add " " & f.absolutePath.quoteShell()
nimSrcs.add " " & f.absolutePath.quoteShell()
echo "======== Running Tests ======== " echo "======== Running Tests ======== "
test "coverage", test "coverage", srcDir = "tests/", params = " --nimcache:nimcache/coverage -d:release -d:codex_enable_proof_failures=true"
srcDir = "tests/", params = " --nimcache:nimcache/coverage -d:release"
exec("rm nimcache/coverage/*.c") exec("rm nimcache/coverage/*.c")
rmDir("coverage") rmDir("coverage"); mkDir("coverage")
mkDir("coverage")
echo " ======== Running LCOV ======== " echo " ======== Running LCOV ======== "
exec( exec("lcov --capture --directory nimcache/coverage --output-file coverage/coverage.info")
"lcov --capture --keep-going --directory nimcache/coverage --output-file coverage/coverage.info" exec("lcov --extract coverage/coverage.info --output-file coverage/coverage.f.info " & nimSrcs)
)
exec(
"lcov --extract coverage/coverage.info --keep-going --output-file coverage/coverage.f.info " &
nimSrcs
)
echo " ======== Generating HTML coverage report ======== " echo " ======== Generating HTML coverage report ======== "
exec( exec("genhtml coverage/coverage.f.info --output-directory coverage/report ")
"genhtml coverage/coverage.f.info --keep-going --output-directory coverage/report "
)
echo " ======== Coverage report Done ======== " echo " ======== Coverage report Done ======== "
task showCoverage, "open coverage html": task showCoverage, "open coverage html":
echo " ======== Opening HTML coverage report in browser... ======== " echo " ======== Opening HTML coverage report in browser... ======== "
if findExe("open") != "": if findExe("open") != "":
exec("open coverage/report/index.html") exec("open coverage/report/index.html")
task libstorageDynamic, "Generate bindings":
var params = ""
when compiles(commandLineParams):
for param in commandLineParams():
if param.len > 0 and param.startsWith("-"):
params.add " " & param
let name = "libstorage"
buildLibrary name, "library/", params, "dynamic"
task libstorageStatic, "Generate bindings":
var params = ""
when compiles(commandLineParams):
for param in commandLineParams():
if param.len > 0 and param.startsWith("-"):
params.add " " & param
let name = "libstorage"
buildLibrary name, "library/", params, "static"

Jenkinsfile (Linux CI, deleted file)
@ -1,51 +0,0 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.37'
pipeline {
agent {
docker {
label 'linuxcontainer'
image 'harbor.status.im/infra/ci-build-containers:linux-base-1.0.0'
args '--volume=/nix:/nix ' +
'--volume=/etc/nix:/etc/nix '
}
}
options {
timestamps()
ansiColor('xterm')
timeout(time: 20, unit: 'MINUTES')
disableConcurrentBuilds()
disableRestartFromStage()
/* manage how many builds we keep */
buildDiscarder(logRotator(
numToKeepStr: '20',
daysToKeepStr: '30',
))
}
stages {
stage('Build') {
steps {
script {
nix.flake("default")
}
}
}
stage('Check') {
steps {
script {
sh './result/bin/storage --version'
}
}
}
}
post {
cleanup {
cleanWs()
dir(env.WORKSPACE_TMP) { deleteDir() }
}
}
}

Jenkinsfile (macOS CI, deleted file)
@ -1,44 +0,0 @@
#!/usr/bin/env groovy
library 'status-jenkins-lib@v1.9.37'
pipeline {
agent { label 'macos && aarch64 && nix' }
options {
timestamps()
ansiColor('xterm')
timeout(time: 20, unit: 'MINUTES')
disableConcurrentBuilds()
disableRestartFromStage()
/* manage how many builds we keep */
buildDiscarder(logRotator(
numToKeepStr: '20',
daysToKeepStr: '30',
))
}
stages {
stage('Build') {
steps {
script {
nix.flake("default")
}
}
}
stage('Check') {
steps {
script {
sh './result/bin/storage --version'
}
}
}
}
post {
cleanup {
cleanWs()
dir(env.WORKSPACE_TMP) { deleteDir() }
}
}
}

codex.nim
@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH ## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -28,6 +28,7 @@ import ./codex/codextypes
export codex, conf, libp2p, chronos, logutils export codex, conf, libp2p, chronos, logutils
when isMainModule: when isMainModule:
import std/sequtils
import std/os import std/os
import pkg/confutils/defs import pkg/confutils/defs
import ./codex/utils/fileutils import ./codex/utils/fileutils
@ -38,42 +39,40 @@ when isMainModule:
when defined(posix): when defined(posix):
import system/ansi_c import system/ansi_c
type CodexStatus {.pure.} = enum type
Stopped CodexStatus {.pure.} = enum
Stopping Stopped,
Running Stopping,
Running
let config = CodexConf.load( let config = CodexConf.load(
version = codexFullVersion, version = codexFullVersion,
envVarsPrefix = "storage", envVarsPrefix = "codex",
secondarySources = proc( secondarySources = proc (config: CodexConf, sources: auto) =
config: CodexConf, sources: auto if configFile =? config.configFile:
) {.gcsafe, raises: [ConfigurationError].} = sources.addConfigFile(Toml, configFile)
if configFile =? config.configFile:
sources.addConfigFile(Toml, configFile)
,
) )
config.setupLogging() config.setupLogging()
try:
updateLogLevel(config.logLevel)
except ValueError as err:
try:
stderr.write "Invalid value for --log-level. " & err.msg & "\n"
except IOError:
echo "Invalid value for --log-level. " & err.msg
quit QuitFailure
config.setupMetrics() config.setupMetrics()
if not (checkAndCreateDataDir((config.dataDir).string)): if config.nat == ValidIpAddress.init(IPv4_any()):
error "`--nat` cannot be set to the any (`0.0.0.0`) address"
quit QuitFailure
if config.nat == ValidIpAddress.init("127.0.0.1"):
warn "`--nat` is set to loopback, your node wont properly announce over the DHT"
if not(checkAndCreateDataDir((config.dataDir).string)):
# We are unable to access/create data folder or data folder's # We are unable to access/create data folder or data folder's
# permissions are insecure. # permissions are insecure.
quit QuitFailure quit QuitFailure
if config.prover() and not(checkAndCreateDataDir((config.circuitDir).string)):
quit QuitFailure
trace "Data dir initialized", dir = $config.dataDir trace "Data dir initialized", dir = $config.dataDir
if not (checkAndCreateDataDir((config.dataDir / "repo"))): if not(checkAndCreateDataDir((config.dataDir / "repo"))):
# We are unable to access/create data folder or data folder's # We are unable to access/create data folder or data folder's
# permissions are insecure. # permissions are insecure.
quit QuitFailure quit QuitFailure
@ -92,28 +91,25 @@ when isMainModule:
config.dataDir / config.netPrivKeyFile config.dataDir / config.netPrivKeyFile
privateKey = setupKey(keyPath).expect("Should setup private key!") privateKey = setupKey(keyPath).expect("Should setup private key!")
server = server = try:
try: CodexServer.new(config, privateKey)
CodexServer.new(config, privateKey) except Exception as exc:
except Exception as exc: error "Failed to start Codex", msg = exc.msg
error "Failed to start Logos Storage", msg = exc.msg quit QuitFailure
quit QuitFailure
## Ctrl+C handling ## Ctrl+C handling
proc doShutdown() = proc doShutdown() =
shutdown = server.shutdown() shutdown = server.stop()
state = CodexStatus.Stopping state = CodexStatus.Stopping
notice "Stopping Logos Storage" notice "Stopping Codex"
proc controlCHandler() {.noconv.} = proc controlCHandler() {.noconv.} =
when defined(windows): when defined(windows):
# workaround for https://github.com/nim-lang/Nim/issues/4057 # workaround for https://github.com/nim-lang/Nim/issues/4057
try: try:
setupForeignThreadGc() setupForeignThreadGc()
except Exception as exc: except Exception as exc: raiseAssert exc.msg # shouldn't happen
raiseAssert exc.msg
# shouldn't happen
notice "Shutting down after having received SIGINT" notice "Shutting down after having received SIGINT"
doShutdown() doShutdown()
@ -135,7 +131,7 @@ when isMainModule:
try: try:
waitFor server.start() waitFor server.start()
except CatchableError as error: except CatchableError as error:
error "Logos Storage failed to start", error = error.msg error "Codex failed to start", error = error.msg
# XXX ideally we'd like to issue a stop instead of quitting cold turkey, # XXX ideally we'd like to issue a stop instead of quitting cold turkey,
# but this would mean we'd have to fix the implementation of all # but this would mean we'd have to fix the implementation of all
# services so they won't crash if we attempt to stop them before they # services so they won't crash if we attempt to stop them before they
@ -156,7 +152,7 @@ when isMainModule:
# be assigned before state switches to Stopping # be assigned before state switches to Stopping
waitFor shutdown waitFor shutdown
except CatchableError as error: except CatchableError as error:
error "Logos Storage didn't shutdown correctly", error = error.msg error "Codex didn't shutdown correctly", error = error.msg
quit QuitFailure quit QuitFailure
notice "Exited Storage" notice "Exited codex"

codex.nimble
@ -1,9 +1,9 @@
version = "0.1.0" version = "0.1.0"
author = "Logos Storage Team" author = "Codex Team"
description = "p2p data durability engine" description = "p2p data durability engine"
license = "MIT" license = "MIT"
binDir = "build" binDir = "build"
srcDir = "." srcDir = "."
installFiles = @["build.nims"] installFiles = @["build.nims"]
include "build.nims" include "build.nims"

codex/blockexchange.nim
@ -1,5 +1,10 @@
import ./blockexchange/[network, engine, peers] import ./blockexchange/[
network,
engine,
peers]
import ./blockexchange/protobuf/[blockexc, presence] import ./blockexchange/protobuf/[
blockexc,
presence]
export network, engine, blockexc, presence, peers export network, engine, blockexc, presence, peers

codex/blockexchange/engine.nim
@ -1,5 +1,6 @@
import ./engine/discovery import ./engine/discovery
import ./engine/advertiser import ./engine/advertiser
import ./engine/engine import ./engine/engine
import ./engine/payments
export discovery, advertiser, engine export discovery, advertiser, engine, payments

codex/blockexchange/engine/advertiser.nim
@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH ## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,8 +7,6 @@
## This file may not be copied, modified, or distributed except according to ## This file may not be copied, modified, or distributed except according to
## those terms. ## those terms.
{.push raises: [].}
import pkg/chronos import pkg/chronos
import pkg/libp2p/cid import pkg/libp2p/cid
import pkg/libp2p/multicodec import pkg/libp2p/multicodec
@ -20,8 +18,6 @@ import ../protobuf/presence
import ../peers import ../peers
import ../../utils import ../../utils
import ../../utils/exceptions
import ../../utils/trackedfutures
import ../../discovery import ../../discovery
import ../../stores/blockstore import ../../stores/blockstore
import ../../logutils import ../../logutils
@ -30,122 +26,114 @@ import ../../manifest
logScope: logScope:
topics = "codex discoveryengine advertiser" topics = "codex discoveryengine advertiser"
declareGauge(codex_inflight_advertise, "inflight advertise requests") declareGauge(codexInflightAdvertise, "inflight advertise requests")
const const
DefaultConcurrentAdvertRequests = 10 DefaultConcurrentAdvertRequests = 10
DefaultAdvertiseLoopSleep = 30.minutes DefaultAdvertiseLoopSleep = 30.minutes
type Advertiser* = ref object of RootObj type
localStore*: BlockStore # Local block store for this instance Advertiser* = ref object of RootObj
discovery*: Discovery # Discovery interface localStore*: BlockStore # Local block store for this instance
discovery*: Discovery # Discovery interface
advertiserRunning*: bool # Indicates if discovery is running advertiserRunning*: bool # Indicates if discovery is running
concurrentAdvReqs: int # Concurrent advertise requests concurrentAdvReqs: int # Concurrent advertise requests
advertiseLocalStoreLoop*: Future[void].Raising([]) # Advertise loop task handle advertiseLocalStoreLoop*: Future[void] # Advertise loop task handle
advertiseQueue*: AsyncQueue[Cid] # Advertise queue advertiseQueue*: AsyncQueue[Cid] # Advertise queue
trackedFutures*: TrackedFutures # Advertise tasks futures advertiseTasks*: seq[Future[void]] # Advertise tasks
advertiseLocalStoreLoopSleep: Duration # Advertise loop sleep advertiseLocalStoreLoopSleep: Duration # Advertise loop sleep
inFlightAdvReqs*: Table[Cid, Future[void]] # Inflight advertise requests inFlightAdvReqs*: Table[Cid, Future[void]] # Inflight advertise requests
proc addCidToQueue(b: Advertiser, cid: Cid) {.async: (raises: [CancelledError]).} = proc addCidToQueue(b: Advertiser, cid: Cid) {.async.} =
if cid notin b.advertiseQueue: if cid notin b.advertiseQueue:
await b.advertiseQueue.put(cid) await b.advertiseQueue.put(cid)
trace "Advertising", cid trace "Advertising", cid
proc advertiseBlock(b: Advertiser, cid: Cid) {.async: (raises: [CancelledError]).} = proc advertiseBlock(b: Advertiser, cid: Cid) {.async.} =
without isM =? cid.isManifest, err: without isM =? cid.isManifest, err:
warn "Unable to determine if cid is manifest" warn "Unable to determine if cid is manifest"
return return
try: if isM:
if isM: without blk =? await b.localStore.getBlock(cid), err:
without blk =? await b.localStore.getBlock(cid), err: error "Error retrieving manifest block", cid, err = err.msg
error "Error retrieving manifest block", cid, err = err.msg return
return
without manifest =? Manifest.decode(blk), err: without manifest =? Manifest.decode(blk), err:
error "Unable to decode as manifest", err = err.msg error "Unable to decode as manifest", err = err.msg
return return
# announce manifest cid and tree cid # announce manifest cid and tree cid
await b.addCidToQueue(cid) await b.addCidToQueue(cid)
await b.addCidToQueue(manifest.treeCid) await b.addCidToQueue(manifest.treeCid)
except CancelledError as exc:
trace "Cancelled advertise block", cid
raise exc
except CatchableError as e:
error "failed to advertise block", cid, error = e.msgDetail
proc advertiseLocalStoreLoop(b: Advertiser) {.async: (raises: []).} = proc advertiseLocalStoreLoop(b: Advertiser) {.async.} =
try: while b.advertiserRunning:
while b.advertiserRunning: if cids =? await b.localStore.listBlocks(blockType = BlockType.Manifest):
if cidsIter =? await b.localStore.listBlocks(blockType = BlockType.Manifest): trace "Advertiser begins iterating blocks..."
trace "Advertiser begins iterating blocks..." for c in cids:
for c in cidsIter: if cid =? await c:
if cid =? await c: await b.advertiseBlock(cid)
await b.advertiseBlock(cid) trace "Advertiser iterating blocks finished."
trace "Advertiser iterating blocks finished."
await sleepAsync(b.advertiseLocalStoreLoopSleep) await sleepAsync(b.advertiseLocalStoreLoopSleep)
except CancelledError:
warn "Cancelled advertise local store loop"
info "Exiting advertise task loop" info "Exiting advertise task loop"
proc processQueueLoop(b: Advertiser) {.async: (raises: []).} = proc processQueueLoop(b: Advertiser) {.async.} =
try: while b.advertiserRunning:
while b.advertiserRunning: try:
let cid = await b.advertiseQueue.get() let
cid = await b.advertiseQueue.get()
if cid in b.inFlightAdvReqs: if cid in b.inFlightAdvReqs:
continue continue
let request = b.discovery.provide(cid) try:
b.inFlightAdvReqs[cid] = request let
codex_inflight_advertise.set(b.inFlightAdvReqs.len.int64) request = b.discovery.provide(cid)
defer: b.inFlightAdvReqs[cid] = request
codexInflightAdvertise.set(b.inFlightAdvReqs.len.int64)
await request
finally:
b.inFlightAdvReqs.del(cid) b.inFlightAdvReqs.del(cid)
codex_inflight_advertise.set(b.inFlightAdvReqs.len.int64) codexInflightAdvertise.set(b.inFlightAdvReqs.len.int64)
except CancelledError:
await request trace "Advertise task cancelled"
except CancelledError: return
warn "Cancelled advertise task runner" except CatchableError as exc:
warn "Exception in advertise task runner", exc = exc.msg
info "Exiting advertise task runner" info "Exiting advertise task runner"
proc start*(b: Advertiser) {.async: (raises: []).} = proc start*(b: Advertiser) {.async.} =
## Start the advertiser ## Start the advertiser
## ##
trace "Advertiser start" trace "Advertiser start"
# The advertiser is expected to be started only once. proc onBlock(cid: Cid) {.async.} =
if b.advertiserRunning: await b.advertiseBlock(cid)
raiseAssert "Advertiser can only be started once — this should not happen"
proc onBlock(cid: Cid) {.async: (raises: []).} =
try:
await b.advertiseBlock(cid)
except CancelledError:
trace "Cancelled advertise block", cid
doAssert(b.localStore.onBlockStored.isNone()) doAssert(b.localStore.onBlockStored.isNone())
b.localStore.onBlockStored = onBlock.some b.localStore.onBlockStored = onBlock.some
if b.advertiserRunning:
warn "Starting advertiser twice"
return
b.advertiserRunning = true b.advertiserRunning = true
for i in 0 ..< b.concurrentAdvReqs: for i in 0..<b.concurrentAdvReqs:
let fut = b.processQueueLoop() b.advertiseTasks.add(processQueueLoop(b))
b.trackedFutures.track(fut)
b.advertiseLocalStoreLoop = advertiseLocalStoreLoop(b) b.advertiseLocalStoreLoop = advertiseLocalStoreLoop(b)
b.trackedFutures.track(b.advertiseLocalStoreLoop)
proc stop*(b: Advertiser) {.async: (raises: []).} = proc stop*(b: Advertiser) {.async.} =
## Stop the advertiser ## Stop the advertiser
## ##
@ -157,16 +145,26 @@ proc stop*(b: Advertiser) {.async: (raises: []).} =
b.advertiserRunning = false b.advertiserRunning = false
# Stop incoming tasks from callback and localStore loop # Stop incoming tasks from callback and localStore loop
b.localStore.onBlockStored = CidCallback.none b.localStore.onBlockStored = CidCallback.none
trace "Stopping advertise loop and tasks" if not b.advertiseLocalStoreLoop.isNil and not b.advertiseLocalStoreLoop.finished:
await b.trackedFutures.cancelTracked() trace "Awaiting advertise loop to stop"
trace "Advertiser loop and tasks stopped" await b.advertiseLocalStoreLoop.cancelAndWait()
trace "Advertise loop stopped"
# Clear up remaining tasks
for task in b.advertiseTasks:
if not task.finished:
trace "Awaiting advertise task to stop"
await task.cancelAndWait()
trace "Advertise task stopped"
trace "Advertiser stopped"
proc new*( proc new*(
T: type Advertiser, T: type Advertiser,
localStore: BlockStore, localStore: BlockStore,
discovery: Discovery, discovery: Discovery,
concurrentAdvReqs = DefaultConcurrentAdvertRequests, concurrentAdvReqs = DefaultConcurrentAdvertRequests,
advertiseLocalStoreLoopSleep = DefaultAdvertiseLoopSleep, advertiseLocalStoreLoopSleep = DefaultAdvertiseLoopSleep
): Advertiser = ): Advertiser =
## Create a advertiser instance ## Create a advertiser instance
## ##
@ -175,7 +173,5 @@ proc new*(
discovery: discovery, discovery: discovery,
concurrentAdvReqs: concurrentAdvReqs, concurrentAdvReqs: concurrentAdvReqs,
advertiseQueue: newAsyncQueue[Cid](concurrentAdvReqs), advertiseQueue: newAsyncQueue[Cid](concurrentAdvReqs),
trackedFutures: TrackedFutures.new(),
inFlightAdvReqs: initTable[Cid, Future[void]](), inFlightAdvReqs: initTable[Cid, Future[void]](),
advertiseLocalStoreLoopSleep: advertiseLocalStoreLoopSleep, advertiseLocalStoreLoopSleep: advertiseLocalStoreLoopSleep)
)

codex/blockexchange/engine/discovery.nim
@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH ## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -8,7 +8,6 @@
## those terms. ## those terms.
import std/sequtils import std/sequtils
import std/algorithm
import pkg/chronos import pkg/chronos
import pkg/libp2p/cid import pkg/libp2p/cid
@ -24,7 +23,6 @@ import ../network
import ../peers import ../peers
import ../../utils import ../../utils
import ../../utils/trackedfutures
import ../../discovery import ../../discovery
import ../../stores/blockstore import ../../stores/blockstore
import ../../logutils import ../../logutils
@ -33,107 +31,95 @@ import ../../manifest
logScope: logScope:
topics = "codex discoveryengine" topics = "codex discoveryengine"
declareGauge(codex_inflight_discovery, "inflight discovery requests") declareGauge(codexInflightDiscovery, "inflight discovery requests")
const const
DefaultConcurrentDiscRequests = 10 DefaultConcurrentDiscRequests = 10
DefaultDiscoveryTimeout = 1.minutes DefaultDiscoveryTimeout = 1.minutes
DefaultMinPeersPerBlock = 3 DefaultMinPeersPerBlock = 3
DefaultMaxPeersPerBlock = 8
DefaultDiscoveryLoopSleep = 3.seconds DefaultDiscoveryLoopSleep = 3.seconds
type DiscoveryEngine* = ref object of RootObj type
localStore*: BlockStore # Local block store for this instance DiscoveryEngine* = ref object of RootObj
peers*: PeerCtxStore # Peer context store localStore*: BlockStore # Local block store for this instance
network*: BlockExcNetwork # Network interface peers*: PeerCtxStore # Peer context store
discovery*: Discovery # Discovery interface network*: BlockExcNetwork # Network interface
pendingBlocks*: PendingBlocksManager # Blocks we're awaiting to be resolved discovery*: Discovery # Discovery interface
discEngineRunning*: bool # Indicates if discovery is running pendingBlocks*: PendingBlocksManager # Blocks we're awaiting to be resolved
concurrentDiscReqs: int # Concurrent discovery requests discEngineRunning*: bool # Indicates if discovery is running
discoveryLoop*: Future[void].Raising([]) # Discovery loop task handle concurrentDiscReqs: int # Concurrent discovery requests
discoveryQueue*: AsyncQueue[Cid] # Discovery queue discoveryLoop*: Future[void] # Discovery loop task handle
trackedFutures*: TrackedFutures # Tracked Discovery tasks futures discoveryQueue*: AsyncQueue[Cid] # Discovery queue
minPeersPerBlock*: int # Min number of peers with block discoveryTasks*: seq[Future[void]] # Discovery tasks
maxPeersPerBlock*: int # Max number of peers with block minPeersPerBlock*: int # Max number of peers with block
discoveryLoopSleep: Duration # Discovery loop sleep discoveryLoopSleep: Duration # Discovery loop sleep
inFlightDiscReqs*: Table[Cid, Future[seq[SignedPeerRecord]]] inFlightDiscReqs*: Table[Cid, Future[seq[SignedPeerRecord]]] # Inflight discovery requests
# Inflight discovery requests
proc cleanupExcessPeers(b: DiscoveryEngine, cid: Cid) {.gcsafe, raises: [].} = proc discoveryQueueLoop(b: DiscoveryEngine) {.async.} =
var haves = b.peers.peersHave(cid) while b.discEngineRunning:
let count = haves.len - b.maxPeersPerBlock for cid in toSeq(b.pendingBlocks.wantListBlockCids):
if count <= 0: try:
return
haves.sort(
proc(a, b: BlockExcPeerCtx): int =
cmp(a.lastExchange, b.lastExchange)
)
let toRemove = haves[0 ..< count]
for peer in toRemove:
try:
peer.cleanPresence(BlockAddress.init(cid))
trace "Removed block presence from peer", cid, peer = peer.id
except CatchableError as exc:
error "Failed to clean presence for peer",
cid, peer = peer.id, error = exc.msg, name = exc.name
proc discoveryQueueLoop(b: DiscoveryEngine) {.async: (raises: []).} =
try:
while b.discEngineRunning:
for cid in toSeq(b.pendingBlocks.wantListBlockCids):
await b.discoveryQueue.put(cid) await b.discoveryQueue.put(cid)
except CancelledError:
trace "Discovery loop cancelled"
return
except CatchableError as exc:
warn "Exception in discovery loop", exc = exc.msg
await sleepAsync(b.discoveryLoopSleep) logScope:
except CancelledError: sleep = b.discoveryLoopSleep
trace "Discovery loop cancelled" wanted = b.pendingBlocks.len
proc discoveryTaskLoop(b: DiscoveryEngine) {.async: (raises: []).} = await sleepAsync(b.discoveryLoopSleep)
proc discoveryTaskLoop(b: DiscoveryEngine) {.async.} =
## Run discovery tasks ## Run discovery tasks
## ##
try: while b.discEngineRunning:
while b.discEngineRunning: try:
let cid = await b.discoveryQueue.get() let
cid = await b.discoveryQueue.get()
if cid in b.inFlightDiscReqs: if cid in b.inFlightDiscReqs:
trace "Discovery request already in progress", cid trace "Discovery request already in progress", cid
continue continue
trace "Running discovery task for cid", cid let
haves = b.peers.peersHave(cid)
let haves = b.peers.peersHave(cid)
if haves.len > b.maxPeersPerBlock:
trace "Cleaning up excess peers",
cid, peers = haves.len, max = b.maxPeersPerBlock
b.cleanupExcessPeers(cid)
continue
if haves.len < b.minPeersPerBlock: if haves.len < b.minPeersPerBlock:
let request = b.discovery.find(cid) try:
b.inFlightDiscReqs[cid] = request let
codex_inflight_discovery.set(b.inFlightDiscReqs.len.int64) request = b.discovery
.find(cid)
.wait(DefaultDiscoveryTimeout)
defer: b.inFlightDiscReqs[cid] = request
b.inFlightDiscReqs.del(cid) codexInflightDiscovery.set(b.inFlightDiscReqs.len.int64)
codex_inflight_discovery.set(b.inFlightDiscReqs.len.int64) let
peers = await request
if (await request.withTimeout(DefaultDiscoveryTimeout)) and let
peers =? (await request).catch: dialed = await allFinished(
let dialed = await allFinished(peers.mapIt(b.network.dialPeer(it.data))) peers.mapIt( b.network.dialPeer(it.data) ))
for i, f in dialed: for i, f in dialed:
if f.failed: if f.failed:
await b.discovery.removeProvider(peers[i].data.peerId) await b.discovery.removeProvider(peers[i].data.peerId)
except CancelledError:
trace "Discovery task cancelled" finally:
return b.inFlightDiscReqs.del(cid)
codexInflightDiscovery.set(b.inFlightDiscReqs.len.int64)
except CancelledError:
trace "Discovery task cancelled"
return
except CatchableError as exc:
warn "Exception in discovery task runner", exc = exc.msg
info "Exiting discovery task runner" info "Exiting discovery task runner"
proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) = proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) {.inline.} =
for cid in cids: for cid in cids:
if cid notin b.discoveryQueue: if cid notin b.discoveryQueue:
try: try:
@ -141,27 +127,23 @@ proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) =
except CatchableError as exc: except CatchableError as exc:
warn "Exception queueing discovery request", exc = exc.msg warn "Exception queueing discovery request", exc = exc.msg
proc start*(b: DiscoveryEngine) {.async: (raises: []).} = proc start*(b: DiscoveryEngine) {.async.} =
## Start the discengine task ## Start the discengine task
## ##
trace "Discovery engine starting" trace "Discovery engine start"
if b.discEngineRunning: if b.discEngineRunning:
warn "Starting discovery engine twice" warn "Starting discovery engine twice"
return return
b.discEngineRunning = true b.discEngineRunning = true
for i in 0 ..< b.concurrentDiscReqs: for i in 0..<b.concurrentDiscReqs:
let fut = b.discoveryTaskLoop() b.discoveryTasks.add(discoveryTaskLoop(b))
b.trackedFutures.track(fut)
b.discoveryLoop = b.discoveryQueueLoop() b.discoveryLoop = discoveryQueueLoop(b)
b.trackedFutures.track(b.discoveryLoop)
trace "Discovery engine started" proc stop*(b: DiscoveryEngine) {.async.} =
proc stop*(b: DiscoveryEngine) {.async: (raises: []).} =
## Stop the discovery engine ## Stop the discovery engine
## ##
@ -171,9 +153,16 @@ proc stop*(b: DiscoveryEngine) {.async: (raises: []).} =
return return
b.discEngineRunning = false b.discEngineRunning = false
trace "Stopping discovery loop and tasks" for task in b.discoveryTasks:
await b.trackedFutures.cancelTracked() if not task.finished:
trace "Discovery loop and tasks stopped" trace "Awaiting discovery task to stop"
await task.cancelAndWait()
trace "Discovery task stopped"
if not b.discoveryLoop.isNil and not b.discoveryLoop.finished:
trace "Awaiting discovery loop to stop"
await b.discoveryLoop.cancelAndWait()
trace "Discovery loop stopped"
trace "Discovery engine stopped" trace "Discovery engine stopped"
@ -186,8 +175,7 @@ proc new*(
pendingBlocks: PendingBlocksManager, pendingBlocks: PendingBlocksManager,
concurrentDiscReqs = DefaultConcurrentDiscRequests, concurrentDiscReqs = DefaultConcurrentDiscRequests,
discoveryLoopSleep = DefaultDiscoveryLoopSleep, discoveryLoopSleep = DefaultDiscoveryLoopSleep,
minPeersPerBlock = DefaultMinPeersPerBlock, minPeersPerBlock = DefaultMinPeersPerBlock
maxPeersPerBlock = DefaultMaxPeersPerBlock,
): DiscoveryEngine = ): DiscoveryEngine =
## Create a discovery engine instance for advertising services ## Create a discovery engine instance for advertising services
## ##
@ -199,9 +187,6 @@ proc new*(
pendingBlocks: pendingBlocks, pendingBlocks: pendingBlocks,
concurrentDiscReqs: concurrentDiscReqs, concurrentDiscReqs: concurrentDiscReqs,
discoveryQueue: newAsyncQueue[Cid](concurrentDiscReqs), discoveryQueue: newAsyncQueue[Cid](concurrentDiscReqs),
trackedFutures: TrackedFutures.new(),
inFlightDiscReqs: initTable[Cid, Future[seq[SignedPeerRecord]]](), inFlightDiscReqs: initTable[Cid, Future[seq[SignedPeerRecord]]](),
discoveryLoopSleep: discoveryLoopSleep, discoveryLoopSleep: discoveryLoopSleep,
minPeersPerBlock: minPeersPerBlock, minPeersPerBlock: minPeersPerBlock)
maxPeersPerBlock: maxPeersPerBlock,
)

File diff suppressed because it is too large (codex/blockexchange/engine/engine.nim).
codex/blockexchange/engine/payments.nim (new file)
@ -0,0 +1,48 @@
## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import std/math
import pkg/nitro
import pkg/questionable/results
import ../peers
export nitro
export results
push: {.upraises: [].}
const ChainId* = 0.u256 # invalid chain id for now
const Asset* = EthAddress.zero # invalid ERC20 asset address for now
const AmountPerChannel = (10'u64^18).u256 # 1 asset, ERC20 default is 18 decimals
func openLedgerChannel*(wallet: WalletRef,
hub: EthAddress,
asset: EthAddress): ?!ChannelId =
wallet.openLedgerChannel(hub, ChainId, asset, AmountPerChannel)
func getOrOpenChannel(wallet: WalletRef, peer: BlockExcPeerCtx): ?!ChannelId =
if channel =? peer.paymentChannel:
success channel
elif account =? peer.account:
let channel = ?wallet.openLedgerChannel(account.address, Asset)
peer.paymentChannel = channel.some
success channel
else:
failure "no account set for peer"
func pay*(wallet: WalletRef,
peer: BlockExcPeerCtx,
amount: UInt256): ?!SignedState =
if account =? peer.account:
let asset = Asset
let receiver = account.address
let channel = ?wallet.getOrOpenChannel(peer)
wallet.pay(channel, asset, receiver, amount)
else:
failure "no account set for peer"

codex/blockexchange/engine/pendingblocks.nim
@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH ## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,11 +7,12 @@
## This file may not be copied, modified, or distributed except according to ## This file may not be copied, modified, or distributed except according to
## those terms. ## those terms.
{.push raises: [].}
import std/tables import std/tables
import std/monotimes import std/monotimes
import std/strutils
import pkg/upraises
push: {.upraises: [].}
import pkg/chronos import pkg/chronos
import pkg/libp2p import pkg/libp2p
@ -24,194 +25,133 @@ import ../../logutils
logScope: logScope:
topics = "codex pendingblocks" topics = "codex pendingblocks"
declareGauge( declareGauge(codex_block_exchange_pending_block_requests, "codex blockexchange pending block requests")
codex_block_exchange_pending_block_requests, declareGauge(codex_block_exchange_retrieval_time_us, "codex blockexchange block retrieval time us")
"codex blockexchange pending block requests",
)
declareGauge(
codex_block_exchange_retrieval_time_us, "codex blockexchange block retrieval time us"
)
const const
DefaultBlockRetries* = 3000 DefaultBlockTimeout* = 10.minutes
DefaultRetryInterval* = 2.seconds
type type
RetriesExhaustedError* = object of CatchableError
BlockHandle* = Future[Block].Raising([CancelledError, RetriesExhaustedError])
BlockReq* = object BlockReq* = object
handle*: BlockHandle handle*: Future[Block]
requested*: ?PeerId inFlight*: bool
blockRetries*: int
startTime*: int64 startTime*: int64
PendingBlocksManager* = ref object of RootObj PendingBlocksManager* = ref object of RootObj
blockRetries*: int = DefaultBlockRetries
retryInterval*: Duration = DefaultRetryInterval
blocks*: Table[BlockAddress, BlockReq] # pending Block requests blocks*: Table[BlockAddress, BlockReq] # pending Block requests
lastInclusion*: Moment # time at which we last included a block into our wantlist
proc updatePendingBlockGauge(p: PendingBlocksManager) = proc updatePendingBlockGauge(p: PendingBlocksManager) =
codex_block_exchange_pending_block_requests.set(p.blocks.len.int64) codex_block_exchange_pending_block_requests.set(p.blocks.len.int64)
proc getWantHandle*( proc getWantHandle*(
self: PendingBlocksManager, address: BlockAddress, requested: ?PeerId = PeerId.none p: PendingBlocksManager,
): Future[Block] {.async: (raw: true, raises: [CancelledError, RetriesExhaustedError]).} = address: BlockAddress,
timeout = DefaultBlockTimeout,
inFlight = false): Future[Block] {.async.} =
## Add an event for a block ## Add an event for a block
## ##
self.blocks.withValue(address, blk): try:
return blk[].handle if address notin p.blocks:
do: p.blocks[address] = BlockReq(
let blk = BlockReq( handle: newFuture[Block]("pendingBlocks.getWantHandle"),
handle: newFuture[Block]("pendingBlocks.getWantHandle"), inFlight: inFlight,
requested: requested, startTime: getMonoTime().ticks)
blockRetries: self.blockRetries,
startTime: getMonoTime().ticks,
)
self.blocks[address] = blk
self.lastInclusion = Moment.now()
let handle = blk.handle p.updatePendingBlockGauge()
return await p.blocks[address].handle.wait(timeout)
proc cleanUpBlock(data: pointer) {.raises: [].} = except CancelledError as exc:
self.blocks.del(address) trace "Blocks cancelled", exc = exc.msg, address
self.updatePendingBlockGauge() raise exc
except CatchableError as exc:
handle.addCallback(cleanUpBlock) error "Pending WANT failed or expired", exc = exc.msg
handle.cancelCallback = proc(data: pointer) {.raises: [].} = # no need to cancel, it is already cancelled by wait()
if not handle.finished: raise exc
handle.removeCallback(cleanUpBlock) finally:
cleanUpBlock(nil) p.blocks.del(address)
p.updatePendingBlockGauge()
self.updatePendingBlockGauge()
return handle
proc getWantHandle*( proc getWantHandle*(
self: PendingBlocksManager, cid: Cid, requested: ?PeerId = PeerId.none p: PendingBlocksManager,
): Future[Block] {.async: (raw: true, raises: [CancelledError, RetriesExhaustedError]).} = cid: Cid,
self.getWantHandle(BlockAddress.init(cid), requested) timeout = DefaultBlockTimeout,
inFlight = false): Future[Block] =
-proc completeWantHandle*(
-    self: PendingBlocksManager, address: BlockAddress, blk: Block
-) {.raises: [].} =
-  ## Complete a pending want handle
-  self.blocks.withValue(address, blockReq):
-    if not blockReq[].handle.finished:
-      trace "Completing want handle from provided block", address
-      blockReq[].handle.complete(blk)
-    else:
-      trace "Want handle already completed", address
-  do:
-    trace "No pending want handle found for address", address
+  p.getWantHandle(BlockAddress.init(cid), timeout, inFlight)

 proc resolve*(
-    self: PendingBlocksManager, blocksDelivery: seq[BlockDelivery]
-) {.gcsafe, raises: [].} =
+    p: PendingBlocksManager,
+    blocksDelivery: seq[BlockDelivery]) {.gcsafe, raises: [].} =
   ## Resolve pending blocks
   ##
   for bd in blocksDelivery:
-    self.blocks.withValue(bd.address, blockReq):
-      if not blockReq[].handle.finished:
-        trace "Resolving pending block", address = bd.address
+    p.blocks.withValue(bd.address, blockReq):
+      if not blockReq.handle.finished:
         let
-          startTime = blockReq[].startTime
+          startTime = blockReq.startTime
           stopTime = getMonoTime().ticks
           retrievalDurationUs = (stopTime - startTime) div 1000

         blockReq.handle.complete(bd.blk)

         codex_block_exchange_retrieval_time_us.set(retrievalDurationUs)
-
-        if retrievalDurationUs > 500000:
-          warn "High block retrieval time", retrievalDurationUs, address = bd.address
       else:
         trace "Block handle already finished", address = bd.address

-func retries*(self: PendingBlocksManager, address: BlockAddress): int =
-  self.blocks.withValue(address, pending):
-    result = pending[].blockRetries
-  do:
-    result = 0
-
-func decRetries*(self: PendingBlocksManager, address: BlockAddress) =
-  self.blocks.withValue(address, pending):
-    pending[].blockRetries -= 1
-
-func retriesExhausted*(self: PendingBlocksManager, address: BlockAddress): bool =
-  self.blocks.withValue(address, pending):
-    result = pending[].blockRetries <= 0
-
-func isRequested*(self: PendingBlocksManager, address: BlockAddress): bool =
-  ## Check if a block has been requested to a peer
-  ##
-  result = false
-  self.blocks.withValue(address, pending):
-    result = pending[].requested.isSome
-
-func getRequestPeer*(self: PendingBlocksManager, address: BlockAddress): ?PeerId =
-  ## Returns the peer that requested this block
-  ##
-  result = PeerId.none
-  self.blocks.withValue(address, pending):
-    result = pending[].requested
-
-proc markRequested*(
-    self: PendingBlocksManager, address: BlockAddress, peer: PeerId
-): bool =
-  ## Marks this block as having been requested to a peer
-  ##
-  if self.isRequested(address):
-    return false
-  self.blocks.withValue(address, pending):
-    pending[].requested = peer.some
-    return true
-
-proc clearRequest*(
-    self: PendingBlocksManager, address: BlockAddress, peer: ?PeerId = PeerId.none
-) =
-  self.blocks.withValue(address, pending):
-    if peer.isSome:
-      assert peer == pending[].requested
-    pending[].requested = PeerId.none
+proc setInFlight*(
+    p: PendingBlocksManager,
+    address: BlockAddress,
+    inFlight = true) =
+  ## Set inflight status for a block
+  ##
+  p.blocks.withValue(address, pending):
+    pending[].inFlight = inFlight
+
+proc isInFlight*(
+    p: PendingBlocksManager,
+    address: BlockAddress): bool =
+  ## Check if a block is in flight
+  ##
+  p.blocks.withValue(address, pending):
+    result = pending[].inFlight

-func contains*(self: PendingBlocksManager, cid: Cid): bool =
-  BlockAddress.init(cid) in self.blocks
+proc contains*(p: PendingBlocksManager, cid: Cid): bool =
+  BlockAddress.init(cid) in p.blocks

-func contains*(self: PendingBlocksManager, address: BlockAddress): bool =
-  address in self.blocks
+proc contains*(p: PendingBlocksManager, address: BlockAddress): bool =
+  address in p.blocks

-iterator wantList*(self: PendingBlocksManager): BlockAddress =
-  for a in self.blocks.keys:
+iterator wantList*(p: PendingBlocksManager): BlockAddress =
+  for a in p.blocks.keys:
     yield a

-iterator wantListBlockCids*(self: PendingBlocksManager): Cid =
-  for a in self.blocks.keys:
+iterator wantListBlockCids*(p: PendingBlocksManager): Cid =
+  for a in p.blocks.keys:
     if not a.leaf:
       yield a.cid

-iterator wantListCids*(self: PendingBlocksManager): Cid =
+iterator wantListCids*(p: PendingBlocksManager): Cid =
   var yieldedCids = initHashSet[Cid]()
-  for a in self.blocks.keys:
+  for a in p.blocks.keys:
     let cid = a.cidOrTreeCid
     if cid notin yieldedCids:
       yieldedCids.incl(cid)
       yield cid

-iterator wantHandles*(self: PendingBlocksManager): Future[Block] =
-  for v in self.blocks.values:
+iterator wantHandles*(p: PendingBlocksManager): Future[Block] =
+  for v in p.blocks.values:
     yield v.handle

-proc wantListLen*(self: PendingBlocksManager): int =
-  self.blocks.len
+proc wantListLen*(p: PendingBlocksManager): int =
+  p.blocks.len

-func len*(self: PendingBlocksManager): int =
-  self.blocks.len
+func len*(p: PendingBlocksManager): int =
+  p.blocks.len

-func new*(
-    T: type PendingBlocksManager,
-    retries = DefaultBlockRetries,
-    interval = DefaultRetryInterval,
-): PendingBlocksManager =
-  PendingBlocksManager(blockRetries: retries, retryInterval: interval)
+func new*(T: type PendingBlocksManager): PendingBlocksManager =
+  PendingBlocksManager()
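
For orientation, a minimal usage sketch of the manager above. The import paths, the getWantHandle overload and its defaults are assumptions based on this diff, not guaranteed by it:

  import pkg/chronos
  import pkg/stew/byteutils
  import pkg/questionable/results
  import pkg/codex/blocktype
  import pkg/codex/blockexchange/pendingblocks  # assumed module path

  proc demo() {.async.} =
    let
      pending = PendingBlocksManager.new()
      blk = Block.new("some data".toBytes).tryGet()
      handle = pending.getWantHandle(blk.cid)  # registers a pending want
    # a delivery for the same address completes the outstanding handle
    pending.resolve(@[BlockDelivery(blk: blk, address: blk.address)])
    assert (await handle).cid == blk.cid

  waitFor demo()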


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -20,34 +20,32 @@ import pkg/questionable/results
 import ../../blocktype as bt
 import ../../logutils
 import ../protobuf/blockexc as pb
-import ../../utils/trackedfutures
+import ../protobuf/payments
 import ./networkpeer

-export networkpeer
+export network, payments

 logScope:
   topics = "codex blockexcnetwork"

 const
   Codec* = "/codex/blockexc/1.0.0"
-  DefaultMaxInflight* = 100
+  MaxInflight* = 100

 type
-  WantListHandler* = proc(peer: PeerId, wantList: WantList) {.async: (raises: []).}
-  BlocksDeliveryHandler* =
-    proc(peer: PeerId, blocks: seq[BlockDelivery]) {.async: (raises: []).}
-  BlockPresenceHandler* =
-    proc(peer: PeerId, precense: seq[BlockPresence]) {.async: (raises: []).}
-  PeerEventHandler* = proc(peer: PeerId) {.async: (raises: [CancelledError]).}
+  WantListHandler* = proc(peer: PeerId, wantList: WantList): Future[void] {.gcsafe.}
+  BlocksDeliveryHandler* = proc(peer: PeerId, blocks: seq[BlockDelivery]): Future[void] {.gcsafe.}
+  BlockPresenceHandler* = proc(peer: PeerId, precense: seq[BlockPresence]): Future[void] {.gcsafe.}
+  AccountHandler* = proc(peer: PeerId, account: Account): Future[void] {.gcsafe.}
+  PaymentHandler* = proc(peer: PeerId, payment: SignedState): Future[void] {.gcsafe.}

   BlockExcHandlers* = object
     onWantList*: WantListHandler
     onBlocksDelivery*: BlocksDeliveryHandler
     onPresence*: BlockPresenceHandler
-    onPeerJoined*: PeerEventHandler
-    onPeerDeparted*: PeerEventHandler
-    onPeerDropped*: PeerEventHandler
+    onAccount*: AccountHandler
+    onPayment*: PaymentHandler

   WantListSender* = proc(
     id: PeerId,
@@ -56,23 +54,20 @@ type
     cancel: bool = false,
     wantType: WantType = WantType.WantHave,
     full: bool = false,
-    sendDontHave: bool = false,
-  ) {.async: (raises: [CancelledError]).}
-  WantCancellationSender* = proc(peer: PeerId, addresses: seq[BlockAddress]) {.
-    async: (raises: [CancelledError])
-  .}
-  BlocksDeliverySender* = proc(peer: PeerId, blocksDelivery: seq[BlockDelivery]) {.
-    async: (raises: [CancelledError])
-  .}
-  PresenceSender* = proc(peer: PeerId, presence: seq[BlockPresence]) {.
-    async: (raises: [CancelledError])
-  .}
+    sendDontHave: bool = false): Future[void] {.gcsafe.}
+  WantCancellationSender* = proc(peer: PeerId, addresses: seq[BlockAddress]): Future[void] {.gcsafe.}
+  BlocksDeliverySender* = proc(peer: PeerId, blocksDelivery: seq[BlockDelivery]): Future[void] {.gcsafe.}
+  PresenceSender* = proc(peer: PeerId, presence: seq[BlockPresence]): Future[void] {.gcsafe.}
+  AccountSender* = proc(peer: PeerId, account: Account): Future[void] {.gcsafe.}
+  PaymentSender* = proc(peer: PeerId, payment: SignedState): Future[void] {.gcsafe.}

   BlockExcRequest* = object
     sendWantList*: WantListSender
     sendWantCancellations*: WantCancellationSender
     sendBlocksDelivery*: BlocksDeliverySender
     sendPresence*: PresenceSender
+    sendAccount*: AccountSender
+    sendPayment*: PaymentSender

   BlockExcNetwork* = ref object of LPProtocol
     peers*: Table[PeerId, NetworkPeer]
@@ -81,8 +76,6 @@ type
     request*: BlockExcRequest
     getConn: ConnProvider
     inflightSema: AsyncSemaphore
-    maxInflight: int = DefaultMaxInflight
-    trackedFutures*: TrackedFutures = TrackedFutures()

 proc peerId*(b: BlockExcNetwork): PeerId =
   ## Return peer id
@@ -96,9 +89,7 @@ proc isSelf*(b: BlockExcNetwork, peer: PeerId): bool =
   return b.peerId == peer

-proc send*(
-    b: BlockExcNetwork, id: PeerId, msg: pb.Message
-) {.async: (raises: [CancelledError]).} =
+proc send*(b: BlockExcNetwork, id: PeerId, msg: pb.Message) {.async.} =
   ## Send message to peer
   ##
@@ -106,9 +97,8 @@ proc send*(
     trace "Unable to send, peer not found", peerId = id
     return

+  let peer = b.peers[id]
   try:
-    let peer = b.peers[id]
     await b.inflightSema.acquire()
     await peer.send(msg)
   except CancelledError as error:
@@ -119,8 +109,9 @@
     b.inflightSema.release()

 proc handleWantList(
-    b: BlockExcNetwork, peer: NetworkPeer, list: WantList
-) {.async: (raises: []).} =
+    b: BlockExcNetwork,
+    peer: NetworkPeer,
+    list: WantList) {.async.} =
   ## Handle incoming want list
   ##
@@ -128,15 +119,14 @@ proc handleWantList(
     await b.handlers.onWantList(peer.id, list)

 proc sendWantList*(
     b: BlockExcNetwork,
     id: PeerId,
     addresses: seq[BlockAddress],
     priority: int32 = 0,
     cancel: bool = false,
     wantType: WantType = WantType.WantHave,
     full: bool = false,
-    sendDontHave: bool = false,
-) {.async: (raw: true, raises: [CancelledError]).} =
+    sendDontHave: bool = false): Future[void] =
   ## Send a want message to peer
   ##
@@ -147,41 +137,43 @@ proc sendWantList*(
         priority: priority,
         cancel: cancel,
         wantType: wantType,
-        sendDontHave: sendDontHave,
-      )
-    ),
-    full: full,
-  )
+        sendDontHave: sendDontHave) ),
+    full: full)

   b.send(id, Message(wantlist: msg))

 proc sendWantCancellations*(
-    b: BlockExcNetwork, id: PeerId, addresses: seq[BlockAddress]
-): Future[void] {.async: (raises: [CancelledError]).} =
+    b: BlockExcNetwork,
+    id: PeerId,
+    addresses: seq[BlockAddress]): Future[void] {.async.} =
   ## Informs a remote peer that we're no longer interested in a set of blocks
   ##
   await b.sendWantList(id = id, addresses = addresses, cancel = true)

 proc handleBlocksDelivery(
-    b: BlockExcNetwork, peer: NetworkPeer, blocksDelivery: seq[BlockDelivery]
-) {.async: (raises: []).} =
+    b: BlockExcNetwork,
+    peer: NetworkPeer,
+    blocksDelivery: seq[BlockDelivery]) {.async.} =
   ## Handle incoming blocks
   ##
   if not b.handlers.onBlocksDelivery.isNil:
     await b.handlers.onBlocksDelivery(peer.id, blocksDelivery)

 proc sendBlocksDelivery*(
-    b: BlockExcNetwork, id: PeerId, blocksDelivery: seq[BlockDelivery]
-) {.async: (raw: true, raises: [CancelledError]).} =
+    b: BlockExcNetwork,
+    id: PeerId,
+    blocksDelivery: seq[BlockDelivery]): Future[void] =
   ## Send blocks to remote
   ##
   b.send(id, pb.Message(payload: blocksDelivery))

 proc handleBlockPresence(
-    b: BlockExcNetwork, peer: NetworkPeer, presence: seq[BlockPresence]
-) {.async: (raises: []).} =
+    b: BlockExcNetwork,
+    peer: NetworkPeer,
+    presence: seq[BlockPresence]) {.async.} =
   ## Handle block presence
   ##
@@ -189,185 +181,194 @@ proc handleBlockPresence(
     await b.handlers.onPresence(peer.id, presence)

 proc sendBlockPresence*(
-    b: BlockExcNetwork, id: PeerId, presence: seq[BlockPresence]
-) {.async: (raw: true, raises: [CancelledError]).} =
+    b: BlockExcNetwork,
+    id: PeerId,
+    presence: seq[BlockPresence]): Future[void] =
   ## Send presence to remote
   ##
   b.send(id, Message(blockPresences: @presence))

+proc handleAccount(
+    network: BlockExcNetwork,
+    peer: NetworkPeer,
+    account: Account) {.async.} =
+  ## Handle account info
+  ##
+  if not network.handlers.onAccount.isNil:
+    await network.handlers.onAccount(peer.id, account)
+
+proc sendAccount*(
+    b: BlockExcNetwork,
+    id: PeerId,
+    account: Account): Future[void] =
+  ## Send account info to remote
+  ##
+  b.send(id, Message(account: AccountMessage.init(account)))
+
+proc sendPayment*(
+    b: BlockExcNetwork,
+    id: PeerId,
+    payment: SignedState): Future[void] =
+  ## Send payment to remote
+  ##
+  b.send(id, Message(payment: StateChannelUpdate.init(payment)))
+
+proc handlePayment(
+    network: BlockExcNetwork,
+    peer: NetworkPeer,
+    payment: SignedState) {.async.} =
+  ## Handle payment
+  ##
+  if not network.handlers.onPayment.isNil:
+    await network.handlers.onPayment(peer.id, payment)
+
 proc rpcHandler(
-    self: BlockExcNetwork, peer: NetworkPeer, msg: Message
-) {.async: (raises: []).} =
+    b: BlockExcNetwork,
+    peer: NetworkPeer,
+    msg: Message) {.raises: [].} =
   ## handle rpc messages
   ##
   if msg.wantList.entries.len > 0:
-    self.trackedFutures.track(self.handleWantList(peer, msg.wantList))
+    asyncSpawn b.handleWantList(peer, msg.wantList)

   if msg.payload.len > 0:
-    self.trackedFutures.track(self.handleBlocksDelivery(peer, msg.payload))
+    asyncSpawn b.handleBlocksDelivery(peer, msg.payload)

   if msg.blockPresences.len > 0:
-    self.trackedFutures.track(self.handleBlockPresence(peer, msg.blockPresences))
+    asyncSpawn b.handleBlockPresence(peer, msg.blockPresences)
+
+  if account =? Account.init(msg.account):
+    asyncSpawn b.handleAccount(peer, account)
+
+  if payment =? SignedState.init(msg.payment):
+    asyncSpawn b.handlePayment(peer, payment)

-proc getOrCreatePeer(self: BlockExcNetwork, peer: PeerId): NetworkPeer =
+proc getOrCreatePeer(b: BlockExcNetwork, peer: PeerId): NetworkPeer =
   ## Creates or retrieves a BlockExcNetwork Peer
   ##
-  if peer in self.peers:
-    return self.peers.getOrDefault(peer, nil)
+  if peer in b.peers:
+    return b.peers.getOrDefault(peer, nil)

-  var getConn: ConnProvider = proc(): Future[Connection] {.
-    async: (raises: [CancelledError])
-  .} =
+  var getConn: ConnProvider = proc(): Future[Connection] {.async, gcsafe, closure.} =
     try:
-      trace "Getting new connection stream", peer
-      return await self.switch.dial(peer, Codec)
+      return await b.switch.dial(peer, Codec)
     except CancelledError as error:
       raise error
     except CatchableError as exc:
       trace "Unable to connect to blockexc peer", exc = exc.msg

-  if not isNil(self.getConn):
-    getConn = self.getConn
+  if not isNil(b.getConn):
+    getConn = b.getConn

-  let rpcHandler = proc(p: NetworkPeer, msg: Message) {.async: (raises: []).} =
-    await self.rpcHandler(p, msg)
+  let rpcHandler = proc (p: NetworkPeer, msg: Message) {.async.} =
+    b.rpcHandler(p, msg)

   # create new pubsub peer
   let blockExcPeer = NetworkPeer.new(peer, getConn, rpcHandler)
   debug "Created new blockexc peer", peer

-  self.peers[peer] = blockExcPeer
+  b.peers[peer] = blockExcPeer

   return blockExcPeer

-proc dialPeer*(self: BlockExcNetwork, peer: PeerRecord) {.async.} =
+proc setupPeer*(b: BlockExcNetwork, peer: PeerId) =
+  ## Perform initial setup, such as want
+  ## list exchange
+  ##
+  discard b.getOrCreatePeer(peer)
+
+proc dialPeer*(b: BlockExcNetwork, peer: PeerRecord) {.async.} =
   ## Dial a peer
   ##
-  if self.isSelf(peer.peerId):
+  if b.isSelf(peer.peerId):
     trace "Skipping dialing self", peer = peer.peerId
     return

-  if peer.peerId in self.peers:
-    trace "Already connected to peer", peer = peer.peerId
-    return
-
-  await self.switch.connect(peer.peerId, peer.addresses.mapIt(it.address))
+  await b.switch.connect(peer.peerId, peer.addresses.mapIt(it.address))

-proc dropPeer*(
-    self: BlockExcNetwork, peer: PeerId
-) {.async: (raises: [CancelledError]).} =
-  trace "Dropping peer", peer
-  try:
-    if not self.switch.isNil:
-      await self.switch.disconnect(peer)
-  except CatchableError as error:
-    warn "Error attempting to disconnect from peer", peer = peer, error = error.msg
-
-  if not self.handlers.onPeerDropped.isNil:
-    await self.handlers.onPeerDropped(peer)
-
-proc handlePeerJoined*(
-    self: BlockExcNetwork, peer: PeerId
-) {.async: (raises: [CancelledError]).} =
-  discard self.getOrCreatePeer(peer)
-  if not self.handlers.onPeerJoined.isNil:
-    await self.handlers.onPeerJoined(peer)
-
-proc handlePeerDeparted*(
-    self: BlockExcNetwork, peer: PeerId
-) {.async: (raises: [CancelledError]).} =
+proc dropPeer*(b: BlockExcNetwork, peer: PeerId) =
   ## Cleanup disconnected peer
   ##
-  trace "Cleaning up departed peer", peer
-  self.peers.del(peer)
-  if not self.handlers.onPeerDeparted.isNil:
-    await self.handlers.onPeerDeparted(peer)
+  b.peers.del(peer)

-method init*(self: BlockExcNetwork) {.raises: [].} =
+method init*(b: BlockExcNetwork) =
   ## Perform protocol initialization
   ##
-  proc peerEventHandler(
-      peerId: PeerId, event: PeerEvent
-  ): Future[void] {.async: (raises: [CancelledError]).} =
+  proc peerEventHandler(peerId: PeerId, event: PeerEvent) {.async.} =
     if event.kind == PeerEventKind.Joined:
-      await self.handlePeerJoined(peerId)
-    elif event.kind == PeerEventKind.Left:
-      await self.handlePeerDeparted(peerId)
+      b.setupPeer(peerId)
     else:
-      warn "Unknown peer event", event
+      b.dropPeer(peerId)

-  self.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined)
-  self.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left)
+  b.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Joined)
+  b.switch.addPeerEventHandler(peerEventHandler, PeerEventKind.Left)

-  proc handler(
-      conn: Connection, proto: string
-  ): Future[void] {.async: (raises: [CancelledError]).} =
+  proc handle(conn: Connection, proto: string) {.async, gcsafe, closure.} =
     let peerId = conn.peerId
-    let blockexcPeer = self.getOrCreatePeer(peerId)
+    let blockexcPeer = b.getOrCreatePeer(peerId)
     await blockexcPeer.readLoop(conn) # attach read loop

-  self.handler = handler
-  self.codec = Codec
-
-proc stop*(self: BlockExcNetwork) {.async: (raises: []).} =
-  await self.trackedFutures.cancelTracked()
+  b.handler = handle
+  b.codec = Codec

 proc new*(
     T: type BlockExcNetwork,
     switch: Switch,
     connProvider: ConnProvider = nil,
-    maxInflight = DefaultMaxInflight,
-): BlockExcNetwork =
+    maxInflight = MaxInflight): BlockExcNetwork =
   ## Create a new BlockExcNetwork instance
   ##
-  let self = BlockExcNetwork(
-    switch: switch,
-    getConn: connProvider,
-    inflightSema: newAsyncSemaphore(maxInflight),
-    maxInflight: maxInflight,
-  )
-  self.maxIncomingStreams = self.maxInflight
+  let
+    self = BlockExcNetwork(
+      switch: switch,
+      getConn: connProvider,
+      inflightSema: newAsyncSemaphore(maxInflight))

   proc sendWantList(
       id: PeerId,
       cids: seq[BlockAddress],
       priority: int32 = 0,
       cancel: bool = false,
       wantType: WantType = WantType.WantHave,
       full: bool = false,
-      sendDontHave: bool = false,
-  ): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
-    self.sendWantList(id, cids, priority, cancel, wantType, full, sendDontHave)
+      sendDontHave: bool = false): Future[void] {.gcsafe.} =
+    self.sendWantList(
+      id, cids, priority, cancel,
+      wantType, full, sendDontHave)

-  proc sendWantCancellations(
-      id: PeerId, addresses: seq[BlockAddress]
-  ): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
+  proc sendWantCancellations(id: PeerId, addresses: seq[BlockAddress]): Future[void] {.gcsafe.} =
     self.sendWantCancellations(id, addresses)

-  proc sendBlocksDelivery(
-      id: PeerId, blocksDelivery: seq[BlockDelivery]
-  ): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
+  proc sendBlocksDelivery(id: PeerId, blocksDelivery: seq[BlockDelivery]): Future[void] {.gcsafe.} =
     self.sendBlocksDelivery(id, blocksDelivery)

-  proc sendPresence(
-      id: PeerId, presence: seq[BlockPresence]
-  ): Future[void] {.async: (raw: true, raises: [CancelledError]).} =
+  proc sendPresence(id: PeerId, presence: seq[BlockPresence]): Future[void] {.gcsafe.} =
     self.sendBlockPresence(id, presence)

+  proc sendAccount(id: PeerId, account: Account): Future[void] {.gcsafe.} =
+    self.sendAccount(id, account)
+
+  proc sendPayment(id: PeerId, payment: SignedState): Future[void] {.gcsafe.} =
+    self.sendPayment(id, payment)
+
   self.request = BlockExcRequest(
     sendWantList: sendWantList,
     sendWantCancellations: sendWantCancellations,
     sendBlocksDelivery: sendBlocksDelivery,
     sendPresence: sendPresence,
-  )
+    sendAccount: sendAccount,
+    sendPayment: sendPayment)

   self.init()
   return self
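
As a rough usage sketch, a consumer hooks into the callback table above like this. The handler body is a placeholder, the exported handlers field is an assumption, and the signature matches the right-hand, Future[void]-based variant:

  import pkg/chronos
  import pkg/libp2p
  import pkg/codex/blockexchange/network  # assumed module path

  proc wire(network: BlockExcNetwork) =
    proc onWantList(peer: PeerId, wantList: WantList) {.async.} =
      echo "wantlist from ", peer, ": ", wantList.entries.len, " entries"

    network.handlers = BlockExcHandlers(onWantList: onWantList)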


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,7 +7,8 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [].}
+import pkg/upraises
+push: {.upraises: [].}

 import pkg/chronos
 import pkg/libp2p
@@ -16,98 +17,78 @@ import ../protobuf/blockexc
 import ../protobuf/message
 import ../../errors
 import ../../logutils
-import ../../utils/trackedfutures

 logScope:
   topics = "codex blockexcnetworkpeer"

-const DefaultYieldInterval = 50.millis
-
 type
-  ConnProvider* = proc(): Future[Connection] {.async: (raises: [CancelledError]).}
+  ConnProvider* = proc(): Future[Connection] {.gcsafe, closure.}

-  RPCHandler* = proc(peer: NetworkPeer, msg: Message) {.async: (raises: []).}
+  RPCHandler* = proc(peer: NetworkPeer, msg: Message): Future[void] {.gcsafe.}

   NetworkPeer* = ref object of RootObj
     id*: PeerId
     handler*: RPCHandler
     sendConn: Connection
     getConn: ConnProvider
-    yieldInterval*: Duration = DefaultYieldInterval
-    trackedFutures: TrackedFutures

-proc connected*(self: NetworkPeer): bool =
-  not (isNil(self.sendConn)) and not (self.sendConn.closed or self.sendConn.atEof)
+proc connected*(b: NetworkPeer): bool =
+  not(isNil(b.sendConn)) and
+    not(b.sendConn.closed or b.sendConn.atEof)

-proc readLoop*(self: NetworkPeer, conn: Connection) {.async: (raises: []).} =
+proc readLoop*(b: NetworkPeer, conn: Connection) {.async.} =
   if isNil(conn):
-    trace "No connection to read from", peer = self.id
     return

-  trace "Attaching read loop", peer = self.id, connId = conn.oid
   try:
-    var nextYield = Moment.now() + self.yieldInterval
     while not conn.atEof or not conn.closed:
-      if Moment.now() > nextYield:
-        nextYield = Moment.now() + self.yieldInterval
-        trace "Yielding in read loop",
-          peer = self.id, nextYield = nextYield, interval = self.yieldInterval
-        await sleepAsync(10.millis)
-
       let
         data = await conn.readLp(MaxMessageSize.int)
         msg = Message.protobufDecode(data).mapFailure().tryGet()
-      trace "Received message", peer = self.id, connId = conn.oid
-      await self.handler(self, msg)
+      await b.handler(b, msg)
   except CancelledError:
     trace "Read loop cancelled"
   except CatchableError as err:
     warn "Exception in blockexc read loop", msg = err.msg
   finally:
-    warn "Detaching read loop", peer = self.id, connId = conn.oid
-    if self.sendConn == conn:
-      self.sendConn = nil
     await conn.close()

-proc connect*(
-    self: NetworkPeer
-): Future[Connection] {.async: (raises: [CancelledError]).} =
-  if self.connected:
-    trace "Already connected", peer = self.id, connId = self.sendConn.oid
-    return self.sendConn
+proc connect*(b: NetworkPeer): Future[Connection] {.async.} =
+  if b.connected:
+    return b.sendConn

-  self.sendConn = await self.getConn()
-  self.trackedFutures.track(self.readLoop(self.sendConn))
-  return self.sendConn
+  b.sendConn = await b.getConn()
+  asyncSpawn b.readLoop(b.sendConn)
+  return b.sendConn

-proc send*(
-    self: NetworkPeer, msg: Message
-) {.async: (raises: [CancelledError, LPStreamError]).} =
-  let conn = await self.connect()
+proc send*(b: NetworkPeer, msg: Message) {.async.} =
+  let conn = await b.connect()
   if isNil(conn):
-    warn "Unable to get send connection for peer message not sent", peer = self.id
+    warn "Unable to get send connection for peer message not sent", peer = b.id
     return

-  trace "Sending message", peer = self.id, connId = conn.oid
-  try:
-    await conn.writeLp(protobufEncode(msg))
-  except CatchableError as err:
-    if self.sendConn == conn:
-      self.sendConn = nil
-    raise newException(LPStreamError, "Failed to send message: " & err.msg)
+  await conn.writeLp(protobufEncode(msg))
+
+proc broadcast*(b: NetworkPeer, msg: Message) =
+  proc sendAwaiter() {.async.} =
+    try:
+      await b.send(msg)
+    except CatchableError as exc:
+      warn "Exception broadcasting message to peer", peer = b.id, exc = exc.msg
+
+  asyncSpawn sendAwaiter()

 func new*(
     T: type NetworkPeer,
     peer: PeerId,
     connProvider: ConnProvider,
-    rpcHandler: RPCHandler,
-): NetworkPeer =
-  doAssert(not isNil(connProvider), "should supply connection provider")
+    rpcHandler: RPCHandler): NetworkPeer =
+
+  doAssert(not isNil(connProvider),
+    "should supply connection provider")

   NetworkPeer(
     id: peer,
     getConn: connProvider,
-    handler: rpcHandler,
-    trackedFutures: TrackedFutures(),
-  )
+    handler: rpcHandler)
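
The left-hand read loop adds a periodic yield so one chatty connection cannot monopolize the dispatcher. The same pattern in isolation, plain chronos and nothing Codex-specific:

  import pkg/chronos

  proc busyLoop(yieldInterval = 50.millis) {.async.} =
    var nextYield = Moment.now() + yieldInterval
    for i in 0 ..< 1_000_000:
      if Moment.now() > nextYield:
        nextYield = Moment.now() + yieldInterval
        await sleepAsync(10.millis)  # give other futures a turn

  waitFor busyLoop()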


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -13,83 +13,41 @@ import std/sets
 import pkg/libp2p
 import pkg/chronos
+import pkg/nitro
 import pkg/questionable

 import ../protobuf/blockexc
+import ../protobuf/payments
 import ../protobuf/presence

 import ../../blocktype
 import ../../logutils

-const
-  MinRefreshInterval = 1.seconds
-  MaxRefreshBackoff = 36 # 36 seconds
-  MaxWantListBatchSize* = 1024 # Maximum blocks to send per WantList message
+export payments, nitro

-type BlockExcPeerCtx* = ref object of RootObj
-  id*: PeerId
-  blocks*: Table[BlockAddress, Presence] # remote peer have list
-  wantedBlocks*: HashSet[BlockAddress] # blocks that the peer wants
-  exchanged*: int # times peer has exchanged with us
-  refreshInProgress*: bool # indicates if a refresh is in progress
-  lastRefresh*: Moment # last time we refreshed our knowledge of the blocks this peer has
-  refreshBackoff*: int = 1 # backoff factor for refresh requests
-  blocksSent*: HashSet[BlockAddress] # blocks sent to peer
-  blocksRequested*: HashSet[BlockAddress] # pending block requests to this peer
-  lastExchange*: Moment # last time peer has sent us a block
-  activityTimeout*: Duration
-  lastSentWants*: HashSet[BlockAddress]
-    # track what wantList we last sent for delta updates
+type
+  BlockExcPeerCtx* = ref object of RootObj
+    id*: PeerId
+    blocks*: Table[BlockAddress, Presence] # remote peer have list including price
+    peerWants*: seq[WantListEntry] # remote peers want lists
+    exchanged*: int # times peer has exchanged with us
+    lastExchange*: Moment # last time peer has exchanged with us
+    account*: ?Account # ethereum account of this peer
+    paymentChannel*: ?ChannelId # payment channel id

-proc isKnowledgeStale*(self: BlockExcPeerCtx): bool =
-  let staleness =
-    self.lastRefresh + self.refreshBackoff * MinRefreshInterval < Moment.now()
-  if staleness and self.refreshInProgress:
-    trace "Cleaning up refresh state", peer = self.id
-    self.refreshInProgress = false
-    self.refreshBackoff = 1
-  staleness
-
-proc isBlockSent*(self: BlockExcPeerCtx, address: BlockAddress): bool =
-  address in self.blocksSent
-
-proc markBlockAsSent*(self: BlockExcPeerCtx, address: BlockAddress) =
-  self.blocksSent.incl(address)
-
-proc markBlockAsNotSent*(self: BlockExcPeerCtx, address: BlockAddress) =
-  self.blocksSent.excl(address)
-
-proc refreshRequested*(self: BlockExcPeerCtx) =
-  trace "Refresh requested for peer", peer = self.id, backoff = self.refreshBackoff
-  self.refreshInProgress = true
-  self.lastRefresh = Moment.now()
-
-proc refreshReplied*(self: BlockExcPeerCtx) =
-  self.refreshInProgress = false
-  self.lastRefresh = Moment.now()
-  self.refreshBackoff = min(self.refreshBackoff * 2, MaxRefreshBackoff)
-
-proc havesUpdated(self: BlockExcPeerCtx) =
-  self.refreshBackoff = 1
-
-proc wantsUpdated*(self: BlockExcPeerCtx) =
-  self.refreshBackoff = 1
-
-proc peerHave*(self: BlockExcPeerCtx): HashSet[BlockAddress] =
-  # XXX: this is ugly an inefficient, but since those will typically
-  # be used in "joins", it's better to pay the price here and have
-  # a linear join than to not do it and have a quadratic join.
-  toHashSet(self.blocks.keys.toSeq)
+proc peerHave*(self: BlockExcPeerCtx): seq[BlockAddress] =
+  toSeq(self.blocks.keys)
+
+proc peerHaveCids*(self: BlockExcPeerCtx): HashSet[Cid] =
+  self.blocks.keys.toSeq.mapIt(it.cidOrTreeCid).toHashSet
+
+proc peerWantsCids*(self: BlockExcPeerCtx): HashSet[Cid] =
+  self.peerWants.mapIt(it.address.cidOrTreeCid).toHashSet

 proc contains*(self: BlockExcPeerCtx, address: BlockAddress): bool =
   address in self.blocks

 func setPresence*(self: BlockExcPeerCtx, presence: Presence) =
-  if presence.address notin self.blocks:
-    self.havesUpdated()
   self.blocks[presence.address] = presence

 func cleanPresence*(self: BlockExcPeerCtx, addresses: seq[BlockAddress]) =
@@ -99,35 +57,10 @@ func cleanPresence*(self: BlockExcPeerCtx, addresses: seq[BlockAddress]) =
 func cleanPresence*(self: BlockExcPeerCtx, address: BlockAddress) =
   self.cleanPresence(@[address])

-proc blockRequestScheduled*(self: BlockExcPeerCtx, address: BlockAddress) =
-  ## Adds a block the set of blocks that have been requested to this peer
-  ## (its request schedule).
-  if self.blocksRequested.len == 0:
-    self.lastExchange = Moment.now()
-  self.blocksRequested.incl(address)
-
-proc blockRequestCancelled*(self: BlockExcPeerCtx, address: BlockAddress) =
-  ## Removes a block from the set of blocks that have been requested to this peer
-  ## (its request schedule).
-  self.blocksRequested.excl(address)
-
-proc blockReceived*(self: BlockExcPeerCtx, address: BlockAddress): bool =
-  let wasRequested = address in self.blocksRequested
-  self.blocksRequested.excl(address)
-  self.lastExchange = Moment.now()
-  wasRequested
-
-proc activityTimer*(
-    self: BlockExcPeerCtx
-): Future[void] {.async: (raises: [CancelledError]).} =
-  ## This is called by the block exchange when a block is scheduled for this peer.
-  ## If the peer sends no blocks for a while, it is considered inactive/uncooperative
-  ## and the peer is dropped. Note that ANY block that the peer sends will reset this
-  ## timer for all blocks.
-  ##
-  while true:
-    let idleTime = Moment.now() - self.lastExchange
-    if idleTime > self.activityTimeout:
-      return
-    await sleepAsync(self.activityTimeout - idleTime)
+func price*(self: BlockExcPeerCtx, addresses: seq[BlockAddress]): UInt256 =
+  var price = 0.u256
+  for a in addresses:
+    self.blocks.withValue(a, precense):
+      price += precense[].price
+
+  price
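
The left-hand refresh bookkeeping reduces to a capped exponential backoff: refreshReplied doubles the factor up to MaxRefreshBackoff, and any fresh have/want information resets it to 1. The arithmetic in isolation:

  var backoff = 1
  const maxBackoff = 36  # mirrors MaxRefreshBackoff above

  for _ in 0 ..< 8:
    backoff = min(backoff * 2, maxBackoff)
  assert backoff == maxBackoff  # saturates at the cap

  backoff = 1  # what havesUpdated/wantsUpdated do on new information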


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,12 +7,13 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [].}
-
 import std/sequtils
 import std/tables
 import std/algorithm
+import std/sequtils
+
+import pkg/upraises
+push: {.upraises: [].}

 import pkg/chronos
 import pkg/libp2p
@@ -21,6 +22,7 @@ import ../protobuf/blockexc
 import ../../blocktype
 import ../../logutils

 import ./peercontext
 export peercontext
@@ -31,8 +33,6 @@ type
   PeerCtxStore* = ref object of RootObj
     peers*: OrderedTable[PeerId, BlockExcPeerCtx]

-  PeersForBlock* = tuple[with: seq[BlockExcPeerCtx], without: seq[BlockExcPeerCtx]]
-
 iterator items*(self: PeerCtxStore): BlockExcPeerCtx =
   for p in self.peers.values:
     yield p
@@ -41,10 +41,7 @@ proc contains*(a: openArray[BlockExcPeerCtx], b: PeerId): bool =
   ## Convenience method to check for peer precense
   ##
-  a.anyIt(it.id == b)
-
-func peerIds*(self: PeerCtxStore): seq[PeerId] =
-  toSeq(self.peers.keys)
+  a.anyIt( it.id == b )

 func contains*(self: PeerCtxStore, peerId: PeerId): bool =
   peerId in self.peers
@@ -62,27 +59,43 @@ func len*(self: PeerCtxStore): int =
   self.peers.len

 func peersHave*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
-  toSeq(self.peers.values).filterIt(address in it.peerHave)
+  toSeq(self.peers.values).filterIt( it.peerHave.anyIt( it == address ) )

 func peersHave*(self: PeerCtxStore, cid: Cid): seq[BlockExcPeerCtx] =
-  # FIXME: this is way slower and can end up leading to unexpected performance loss.
-  toSeq(self.peers.values).filterIt(it.peerHave.anyIt(it.cidOrTreeCid == cid))
+  toSeq(self.peers.values).filterIt( it.peerHave.anyIt( it.cidOrTreeCid == cid ) )

 func peersWant*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
-  toSeq(self.peers.values).filterIt(address in it.wantedBlocks)
+  toSeq(self.peers.values).filterIt( it.peerWants.anyIt( it == address ) )

 func peersWant*(self: PeerCtxStore, cid: Cid): seq[BlockExcPeerCtx] =
-  # FIXME: this is way slower and can end up leading to unexpected performance loss.
-  toSeq(self.peers.values).filterIt(it.wantedBlocks.anyIt(it.cidOrTreeCid == cid))
+  toSeq(self.peers.values).filterIt( it.peerWants.anyIt( it.address.cidOrTreeCid == cid ) )

-proc getPeersForBlock*(self: PeerCtxStore, address: BlockAddress): PeersForBlock =
-  var res: PeersForBlock = (@[], @[])
-  for peer in self:
-    if address in peer:
-      res.with.add(peer)
-    else:
-      res.without.add(peer)
-  res
+func selectCheapest*(self: PeerCtxStore, address: BlockAddress): seq[BlockExcPeerCtx] =
+  # assume that the price for all leaves in a tree is the same
+  let rootAddress = BlockAddress(leaf: false, cid: address.cidOrTreeCid)
+  var peers = self.peersHave(rootAddress)
+
+  func cmp(a, b: BlockExcPeerCtx): int =
+    var
+      priceA = 0.u256
+      priceB = 0.u256
+
+    a.blocks.withValue(rootAddress, precense):
+      priceA = precense[].price
+
+    b.blocks.withValue(rootAddress, precense):
+      priceB = precense[].price
+
+    if priceA == priceB:
+      0
+    elif priceA > priceB:
+      1
+    else:
+      -1
+
+  peers.sort(cmp)
+  trace "Selected cheapest peers", peers = peers.len
+  return peers

 proc new*(T: type PeerCtxStore): PeerCtxStore =
   ## create new instance of a peer context store
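
The selectCheapest comparator on the right is a plain three-way compare over the peers' advertised prices. The same shape over ints, as an illustration only:

  import std/algorithm

  proc byPrice(a, b: int): int =
    if a == b: 0 elif a > b: 1 else: -1

  var prices = @[3, 1, 2]
  prices.sort(byPrice)
  assert prices == @[1, 2, 3]  # ascending, cheapest first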


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,6 +9,7 @@
 import std/hashes
 import std/sequtils
+import pkg/stew/endians2

 import message
@@ -17,6 +18,14 @@ import ../../blocktype
 export Message, protobufEncode, protobufDecode
 export Wantlist, WantType, WantListEntry
 export BlockDelivery, BlockPresenceType, BlockPresence
+export AccountMessage, StateChannelUpdate
+
+proc hash*(a: BlockAddress): Hash =
+  if a.leaf:
+    let data = a.treeCid.data.buffer & @(a.index.uint64.toBytesBE)
+    hash(data)
+  else:
+    hash(a.cid.data.buffer)

 proc hash*(e: WantListEntry): Hash =
   hash(e.address)
@@ -33,6 +42,7 @@ proc `==`*(a: WantListEntry, b: BlockAddress): bool =
 proc `<`*(a, b: WantListEntry): bool =
   a.priority < b.priority

 proc `==`*(a: BlockPresence, b: BlockAddress): bool =
   return a.address == b
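
The leaf branch of the added hash proc keys a leaf by its tree identifier concatenated with the big-endian leaf index, so two leaves of the same tree hash differently. The scheme in isolation, with arbitrary stand-in bytes:

  import std/hashes
  import pkg/stew/endians2

  let
    treeBytes = @[0x12'u8, 0x34, 0x56]  # stands in for treeCid.data.buffer
    leaf0 = hash(treeBytes & @(0'u64.toBytesBE))
    leaf1 = hash(treeBytes & @(1'u64.toBytesBE))
  assert leaf0 != leaf1  # same tree, different leaf index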


@@ -1,4 +1,4 @@
-# Protocol of data exchange between Logos Storage nodes
+# Protocol of data exchange between Codex nodes
 # and Protobuf encoder/decoder for these messages.
 #
 # Eventually all this code should be auto-generated from message.proto.
@@ -20,46 +20,48 @@ const
 type
   WantType* = enum
-    WantBlock = 0
+    WantBlock = 0,
     WantHave = 1

   WantListEntry* = object
     address*: BlockAddress
-    # XXX: I think explicit priority is pointless as the peer will request
-    # the blocks in the order it wants to receive them, and all we have to
-    # do is process those in the same order as we send them back. It also
-    # complicates things for no reason at the moment, as the priority is
-    # always set to 0.
     priority*: int32 # The priority (normalized). default to 1
     cancel*: bool # Whether this revokes an entry
     wantType*: WantType # Note: defaults to enum 0, ie Block
     sendDontHave*: bool # Note: defaults to false
+    inFlight*: bool # Whether block sending is in progress. Not serialized.

   WantList* = object
     entries*: seq[WantListEntry] # A list of wantList entries
     full*: bool # Whether this is the full wantList. default to false

   BlockDelivery* = object
     blk*: Block
     address*: BlockAddress
     proof*: ?CodexProof # Present only if `address.leaf` is true

   BlockPresenceType* = enum
-    Have = 0
+    Have = 0,
     DontHave = 1

   BlockPresence* = object
     address*: BlockAddress
     `type`*: BlockPresenceType
+    price*: seq[byte] # Amount of assets to pay for the block (UInt256)
+
+  AccountMessage* = object
+    address*: seq[byte] # Ethereum address to which payments should be made

   StateChannelUpdate* = object
     update*: seq[byte] # Signed Nitro state, serialized as JSON

   Message* = object
     wantList*: WantList
     payload*: seq[BlockDelivery]
     blockPresences*: seq[BlockPresence]
     pendingBytes*: uint
+    account*: AccountMessage
+    payment*: StateChannelUpdate

 #
 # Encoding Message into seq[byte] in Protobuf format
@@ -95,7 +97,7 @@ proc write*(pb: var ProtoBuffer, field: int, value: WantList) =
   pb.write(field, ipb)

 proc write*(pb: var ProtoBuffer, field: int, value: BlockDelivery) =
-  var ipb = initProtoBuffer()
+  var ipb = initProtoBuffer(maxSize = MaxBlockSize)
   ipb.write(1, value.blk.cid.data.buffer)
   ipb.write(2, value.blk.data)
   ipb.write(3, value.address)
@@ -109,20 +111,36 @@ proc write*(pb: var ProtoBuffer, field: int, value: BlockPresence) =
   var ipb = initProtoBuffer()
   ipb.write(1, value.address)
   ipb.write(2, value.`type`.uint)
+  ipb.write(3, value.price)
+  ipb.finish()
+  pb.write(field, ipb)
+
+proc write*(pb: var ProtoBuffer, field: int, value: AccountMessage) =
+  var ipb = initProtoBuffer()
+  ipb.write(1, value.address)
+  ipb.finish()
+  pb.write(field, ipb)
+
+proc write*(pb: var ProtoBuffer, field: int, value: StateChannelUpdate) =
+  var ipb = initProtoBuffer()
+  ipb.write(1, value.update)
   ipb.finish()
   pb.write(field, ipb)

 proc protobufEncode*(value: Message): seq[byte] =
-  var ipb = initProtoBuffer()
+  var ipb = initProtoBuffer(maxSize = MaxMessageSize)
   ipb.write(1, value.wantList)
   for v in value.payload:
-    ipb.write(3, v) # is this meant to be 2?
+    ipb.write(3, v)
   for v in value.blockPresences:
     ipb.write(4, v)
   ipb.write(5, value.pendingBytes)
+  ipb.write(6, value.account)
+  ipb.write(7, value.payment)
   ipb.finish()

   ipb.buffer

 #
 # Decoding Message from seq[byte] in Protobuf format
 #
@@ -133,22 +151,22 @@ proc decode*(_: type BlockAddress, pb: ProtoBuffer): ProtoResult[BlockAddress] =
     field: uint64
     cidBuf = newSeq[byte]()

-  if ?pb.getField(1, field):
+  if ? pb.getField(1, field):
     leaf = bool(field)

   if leaf:
     var
       treeCid: Cid
       index: Natural
-    if ?pb.getField(2, cidBuf):
-      treeCid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
-    if ?pb.getField(3, field):
+    if ? pb.getField(2, cidBuf):
+      treeCid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
+    if ? pb.getField(3, field):
       index = field
     value = BlockAddress(leaf: true, treeCid: treeCid, index: index)
   else:
     var cid: Cid
-    if ?pb.getField(4, cidBuf):
-      cid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
+    if ? pb.getField(4, cidBuf):
+      cid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
     value = BlockAddress(leaf: false, cid: cid)

   ok(value)
@@ -158,15 +176,15 @@ proc decode*(_: type WantListEntry, pb: ProtoBuffer): ProtoResult[WantListEntry]
     value = WantListEntry()
     field: uint64
     ipb: ProtoBuffer

-  if ?pb.getField(1, ipb):
-    value.address = ?BlockAddress.decode(ipb)
-  if ?pb.getField(2, field):
+  if ? pb.getField(1, ipb):
+    value.address = ? BlockAddress.decode(ipb)
+  if ? pb.getField(2, field):
     value.priority = int32(field)
-  if ?pb.getField(3, field):
+  if ? pb.getField(3, field):
     value.cancel = bool(field)
-  if ?pb.getField(4, field):
+  if ? pb.getField(4, field):
     value.wantType = WantType(field)
-  if ?pb.getField(5, field):
+  if ? pb.getField(5, field):
     value.sendDontHave = bool(field)

   ok(value)
@@ -175,10 +193,10 @@ proc decode*(_: type WantList, pb: ProtoBuffer): ProtoResult[WantList] =
     value = WantList()
     field: uint64
     sublist: seq[seq[byte]]

-  if ?pb.getRepeatedField(1, sublist):
+  if ? pb.getRepeatedField(1, sublist):
     for item in sublist:
-      value.entries.add(?WantListEntry.decode(initProtoBuffer(item)))
+      value.entries.add(? WantListEntry.decode(initProtoBuffer(item)))

-  if ?pb.getField(2, field):
+  if ? pb.getField(2, field):
     value.full = bool(field)

   ok(value)
@@ -190,18 +208,17 @@ proc decode*(_: type BlockDelivery, pb: ProtoBuffer): ProtoResult[BlockDelivery]
     cid: Cid
     ipb: ProtoBuffer

-  if ?pb.getField(1, cidBuf):
-    cid = ?Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
-  if ?pb.getField(2, dataBuf):
-    value.blk =
-      ?Block.new(cid, dataBuf, verify = true).mapErr(x => ProtoError.IncorrectBlob)
-  if ?pb.getField(3, ipb):
-    value.address = ?BlockAddress.decode(ipb)
+  if ? pb.getField(1, cidBuf):
+    cid = ? Cid.init(cidBuf).mapErr(x => ProtoError.IncorrectBlob)
+  if ? pb.getField(2, dataBuf):
+    value.blk = ? Block.new(cid, dataBuf, verify = true).mapErr(x => ProtoError.IncorrectBlob)
+  if ? pb.getField(3, ipb):
+    value.address = ? BlockAddress.decode(ipb)

   if value.address.leaf:
     var proofBuf = newSeq[byte]()
-    if ?pb.getField(4, proofBuf):
-      let proof = ?CodexProof.decode(proofBuf).mapErr(x => ProtoError.IncorrectBlob)
+    if ? pb.getField(4, proofBuf):
+      let proof = ? CodexProof.decode(proofBuf).mapErr(x => ProtoError.IncorrectBlob)
       value.proof = proof.some
   else:
     value.proof = CodexProof.none
@@ -215,25 +232,42 @@ proc decode*(_: type BlockPresence, pb: ProtoBuffer): ProtoResult[BlockPresence]
     value = BlockPresence()
     field: uint64
     ipb: ProtoBuffer

-  if ?pb.getField(1, ipb):
-    value.address = ?BlockAddress.decode(ipb)
-  if ?pb.getField(2, field):
+  if ? pb.getField(1, ipb):
+    value.address = ? BlockAddress.decode(ipb)
+  if ? pb.getField(2, field):
     value.`type` = BlockPresenceType(field)
+  discard ? pb.getField(3, value.price)
+
+  ok(value)
+
+proc decode*(_: type AccountMessage, pb: ProtoBuffer): ProtoResult[AccountMessage] =
+  var
+    value = AccountMessage()
+  discard ? pb.getField(1, value.address)
+
+  ok(value)
+
+proc decode*(_: type StateChannelUpdate, pb: ProtoBuffer): ProtoResult[StateChannelUpdate] =
+  var
+    value = StateChannelUpdate()
+  discard ? pb.getField(1, value.update)

   ok(value)

 proc protobufDecode*(_: type Message, msg: seq[byte]): ProtoResult[Message] =
   var
     value = Message()
-    pb = initProtoBuffer(msg)
+    pb = initProtoBuffer(msg, maxSize = MaxMessageSize)
     ipb: ProtoBuffer
     sublist: seq[seq[byte]]

-  if ?pb.getField(1, ipb):
-    value.wantList = ?WantList.decode(ipb)
-  if ?pb.getRepeatedField(3, sublist): # meant to be 2?
+  if ? pb.getField(1, ipb):
+    value.wantList = ? WantList.decode(ipb)
+  if ? pb.getRepeatedField(3, sublist):
     for item in sublist:
-      value.payload.add(?BlockDelivery.decode(initProtoBuffer(item)))
-  if ?pb.getRepeatedField(4, sublist):
+      value.payload.add(? BlockDelivery.decode(initProtoBuffer(item, maxSize = MaxBlockSize)))
+  if ? pb.getRepeatedField(4, sublist):
     for item in sublist:
-      value.blockPresences.add(?BlockPresence.decode(initProtoBuffer(item)))
-  discard ?pb.getField(5, value.pendingBytes)
+      value.blockPresences.add(? BlockPresence.decode(initProtoBuffer(item)))
+  discard ? pb.getField(5, value.pendingBytes)
+
+  if ? pb.getField(6, ipb):
+    value.account = ? AccountMessage.decode(ipb)
+
+  if ? pb.getField(7, ipb):
+    value.payment = ? StateChannelUpdate.decode(ipb)

   ok(value)
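
A round-trip sketch for the codec above; the module path is an assumption, and tryGet is used the same way the read loop elsewhere in this diff uses it:

  import pkg/codex/blockexchange/protobuf/message  # assumed module path

  let
    msg = Message(pendingBytes: 7)
    wire = protobufEncode(msg)
    decoded = Message.protobufDecode(wire).tryGet()
  assert decoded.pendingBytes == 7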


@@ -1,4 +1,4 @@
-// Protocol of data exchange between Logos Storage nodes.
+// Protocol of data exchange between Codex nodes.
 // Extended version of https://github.com/ipfs/specs/blob/main/BITSWAP.md

 syntax = "proto3";
@@ -38,10 +38,21 @@ message Message {
   message BlockPresence {
     bytes cid = 1;
     BlockPresenceType type = 2;
+    bytes price = 3; // Amount of assets to pay for the block (UInt256)
+  }
+
+  message AccountMessage {
+    bytes address = 1; // Ethereum address to which payments should be made
+  }
+
+  message StateChannelUpdate {
+    bytes update = 1; // Signed Nitro state, serialized as JSON
   }

   Wantlist wantlist = 1;
-  repeated Block payload = 3; // what happened to 2?
+  repeated Block payload = 3;
   repeated BlockPresence blockPresences = 4;
   int32 pendingBytes = 5;
+  AccountMessage account = 6;
+  StateChannelUpdate payment = 7;
 }


@@ -0,0 +1,40 @@
import pkg/stew/byteutils
import pkg/stint
import pkg/nitro
import pkg/questionable
import pkg/upraises
import ./blockexc
export AccountMessage
export StateChannelUpdate
export stint
export nitro
push: {.upraises: [].}
type
Account* = object
address*: EthAddress
func init*(_: type AccountMessage, account: Account): AccountMessage =
AccountMessage(address: @(account.address.toArray))
func parse(_: type EthAddress, bytes: seq[byte]): ?EthAddress =
var address: array[20, byte]
if bytes.len != address.len:
return EthAddress.none
for i in 0..<address.len:
address[i] = bytes[i]
EthAddress(address).some
func init*(_: type Account, message: AccountMessage): ?Account =
without address =? EthAddress.parse(message.address):
return none Account
some Account(address: address)
func init*(_: type StateChannelUpdate, state: SignedState): StateChannelUpdate =
StateChannelUpdate(update: state.toJson.toBytes)
proc init*(_: type SignedState, update: StateChannelUpdate): ?SignedState =
SignedState.fromJson(string.fromBytes(update.update))
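
A round-trip sketch for the Account <-> AccountMessage helpers above; the address bytes are arbitrary and the module path is an assumption:

  import pkg/questionable
  import pkg/codex/blockexchange/protobuf/payments  # assumed module path

  var raw: array[20, byte]
  raw[19] = 0x01
  let
    account = Account(address: EthAddress(raw))
    msg = AccountMessage.init(account)
    parsed = Account.init(msg)  # ?Account: fails on malformed addresses
  assert parsed.isSome
  assert parsed.get.address == account.address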


@@ -1,9 +1,8 @@
-{.push raises: [].}
-
 import libp2p
 import pkg/stint
 import pkg/questionable
 import pkg/questionable/results
+import pkg/upraises

 import ./blockexc
 import ../../blocktype
@@ -12,11 +11,14 @@ export questionable
 export stint
 export BlockPresenceType

+upraises.push: {.upraises: [].}
+
 type
   PresenceMessage* = blockexc.BlockPresence

   Presence* = object
     address*: BlockAddress
     have*: bool
+    price*: UInt256

 func parse(_: type UInt256, bytes: seq[byte]): ?UInt256 =
   if bytes.len > 32:
@@ -24,12 +26,21 @@ func parse(_: type UInt256, bytes: seq[byte]): ?UInt256 =
   UInt256.fromBytesBE(bytes).some

 func init*(_: type Presence, message: PresenceMessage): ?Presence =
+  without price =? UInt256.parse(message.price):
+    return none Presence
+
   some Presence(
-    address: message.address, have: message.`type` == BlockPresenceType.Have
+    address: message.address,
+    have: message.`type` == BlockPresenceType.Have,
+    price: price
   )

 func init*(_: type PresenceMessage, presence: Presence): PresenceMessage =
   PresenceMessage(
     address: presence.address,
-    `type`: if presence.have: BlockPresenceType.Have else: BlockPresenceType.DontHave,
+    `type`: if presence.have:
+        BlockPresenceType.Have
+      else:
+        BlockPresenceType.DontHave,
+    price: @(presence.price.toBytesBE)
   )
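
A round-trip sketch for the price-carrying Presence conversion above; the default-initialized BlockAddress is a placeholder and the module path is an assumption:

  import pkg/stint
  import pkg/codex/blockexchange/protobuf/presence  # assumed module path

  let
    presence = Presence(address: BlockAddress(), have: true, price: 1234.u256)
    msg = PresenceMessage.init(presence)
    decoded = Presence.init(msg)
  assert decoded.isSome
  assert decoded.get.price == 1234.u256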


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,14 +9,15 @@
 import std/tables
 import std/sugar
+import std/hashes

 export tables

-{.push raises: [], gcsafe.}
+import pkg/upraises
+push: {.upraises: [].}

 import pkg/libp2p/[cid, multicodec, multihash]
-import pkg/stew/[byteutils, endians2]
+import pkg/stew/byteutils
 import pkg/questionable
 import pkg/questionable/results
@@ -48,16 +49,16 @@ logutils.formatIt(LogFormat.textLines, BlockAddress):
   else:
     "cid: " & shortLog($it.cid)

-logutils.formatIt(LogFormat.json, BlockAddress):
-  %it
+logutils.formatIt(LogFormat.json, BlockAddress): %it

 proc `==`*(a, b: BlockAddress): bool =
-  a.leaf == b.leaf and (
-    if a.leaf:
-      a.treeCid == b.treeCid and a.index == b.index
-    else:
-      a.cid == b.cid
-  )
+  a.leaf == b.leaf and
+    (
+      if a.leaf:
+        a.treeCid == b.treeCid and a.index == b.index
+      else:
+        a.cid == b.cid
+    )

 proc `$`*(a: BlockAddress): string =
   if a.leaf:
@@ -65,15 +66,11 @@ proc `$`*(a: BlockAddress): string =
   else:
     "cid: " & $a.cid

-proc hash*(a: BlockAddress): Hash =
-  if a.leaf:
-    let data = a.treeCid.data.buffer & @(a.index.uint64.toBytesBE)
-    hash(data)
-  else:
-    hash(a.cid.data.buffer)
-
 proc cidOrTreeCid*(a: BlockAddress): Cid =
-  if a.leaf: a.treeCid else: a.cid
+  if a.leaf:
+    a.treeCid
+  else:
+    a.cid

 proc address*(b: Block): BlockAddress =
   BlockAddress(leaf: false, cid: b.cid)
@@ -89,55 +86,57 @@ proc `$`*(b: Block): string =
   result &= "\ndata: " & string.fromBytes(b.data)

 func new*(
     T: type Block,
     data: openArray[byte] = [],
     version = CIDv1,
     mcodec = Sha256HashCodec,
-    codec = BlockCodec,
-): ?!Block =
+    codec = BlockCodec): ?!Block =
   ## creates a new block for both storage and network IO
   ##

   let
-    hash = ?MultiHash.digest($mcodec, data).mapFailure
-    cid = ?Cid.init(version, codec, hash).mapFailure
+    hash = ? MultiHash.digest($mcodec, data).mapFailure
+    cid = ? Cid.init(version, codec, hash).mapFailure

   # TODO: If the hash is `>=` to the data,
   # use the Cid as a container!
-  Block(cid: cid, data: @data).success
+  Block(
+    cid: cid,
+    data: @data).success

 proc new*(
-    T: type Block, cid: Cid, data: openArray[byte], verify: bool = true
+    T: type Block,
+    cid: Cid,
+    data: openArray[byte],
+    verify: bool = true
 ): ?!Block =
   ## creates a new block for both storage and network IO
   ##

   if verify:
     let
-      mhash = ?cid.mhash.mapFailure
-      computedMhash = ?MultiHash.digest($mhash.mcodec, data).mapFailure
-      computedCid = ?Cid.init(cid.cidver, cid.mcodec, computedMhash).mapFailure
+      mhash = ? cid.mhash.mapFailure
+      computedMhash = ? MultiHash.digest($mhash.mcodec, data).mapFailure
+      computedCid = ? Cid.init(cid.cidver, cid.mcodec, computedMhash).mapFailure

     if computedCid != cid:
       return "Cid doesn't match the data".failure

-  return Block(cid: cid, data: @data).success
+  return Block(
+    cid: cid,
+    data: @data
+  ).success

 proc emptyBlock*(version: CidVersion, hcodec: MultiCodec): ?!Block =
-  emptyCid(version, hcodec, BlockCodec).flatMap(
-    (cid: Cid) => Block.new(cid = cid, data = @[])
-  )
+  emptyCid(version, hcodec, BlockCodec)
+    .flatMap((cid: Cid) => Block.new(cid = cid, data = @[]))

 proc emptyBlock*(cid: Cid): ?!Block =
-  cid.mhash.mapFailure.flatMap(
-    (mhash: MultiHash) => emptyBlock(cid.cidver, mhash.mcodec)
-  )
+  cid.mhash.mapFailure.flatMap((mhash: MultiHash) =>
+    emptyBlock(cid.cidver, mhash.mcodec))

 proc isEmpty*(cid: Cid): bool =
-  success(cid) ==
-    cid.mhash.mapFailure.flatMap(
-      (mhash: MultiHash) => emptyCid(cid.cidver, mhash.mcodec, cid.mcodec)
-    )
+  success(cid) == cid.mhash.mapFailure.flatMap((mhash: MultiHash) =>
+    emptyCid(cid.cidver, mhash.mcodec, cid.mcodec))

 proc isEmpty*(blk: Block): bool =
   blk.cid.isEmpty
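
A sketch of the verifying constructor above: rebuilding a block from its cid succeeds only when the data matches, anything else fails. stew/byteutils supplies toBytes; the blocktype import path is an assumption:

  import pkg/stew/byteutils
  import pkg/questionable/results
  import pkg/codex/blocktype  # assumed module path

  let
    blk = Block.new("hello".toBytes).tryGet()
    good = Block.new(blk.cid, "hello".toBytes, verify = true)
    bad = Block.new(blk.cid, "tampered".toBytes, verify = true)
  assert good.isOk
  assert bad.isErr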


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,7 +9,9 @@
 # TODO: This is super inneficient and needs a rewrite, but it'll do for now

-{.push raises: [], gcsafe.}
+import pkg/upraises
+push: {.upraises: [].}

 import pkg/questionable
 import pkg/questionable/results
@@ -21,22 +23,20 @@ import ./logutils

 export blocktype

-const DefaultChunkSize* = DefaultBlockSize
+const
+  DefaultChunkSize* = DefaultBlockSize

 type
   # default reader type
-  ChunkerError* = object of CatchableError
   ChunkBuffer* = ptr UncheckedArray[byte]
-  Reader* = proc(data: ChunkBuffer, len: int): Future[int] {.
-    async: (raises: [ChunkerError, CancelledError])
-  .}
+  Reader* = proc(data: ChunkBuffer, len: int): Future[int] {.gcsafe, raises: [Defect].}

   # Reader that splits input data into fixed-size chunks
   Chunker* = ref object
     reader*: Reader # Procedure called to actually read the data
     offset*: int # Bytes read so far (position in the stream)
     chunkSize*: NBytes # Size of each chunk
     pad*: bool # Pad last chunk to chunkSize?

   FileChunker* = Chunker
   LPStreamChunker* = Chunker

@@ -60,21 +60,30 @@ proc getBytes*(c: Chunker): Future[seq[byte]] {.async.} =
   return move buff

 proc new*(
-  T: type Chunker, reader: Reader, chunkSize = DefaultChunkSize, pad = true
+  T: type Chunker,
+  reader: Reader,
+  chunkSize = DefaultChunkSize,
+  pad = true
 ): Chunker =
   ## create a new Chunker instance
   ##
-  Chunker(reader: reader, offset: 0, chunkSize: chunkSize, pad: pad)
+  Chunker(
+    reader: reader,
+    offset: 0,
+    chunkSize: chunkSize,
+    pad: pad)

 proc new*(
-  T: type LPStreamChunker, stream: LPStream, chunkSize = DefaultChunkSize, pad = true
+  T: type LPStreamChunker,
+  stream: LPStream,
+  chunkSize = DefaultChunkSize,
+  pad = true
 ): LPStreamChunker =
   ## create the default File chunker
   ##
-  proc reader(
-    data: ChunkBuffer, len: int
-  ): Future[int] {.async: (raises: [ChunkerError, CancelledError]).} =
+  proc reader(data: ChunkBuffer, len: int): Future[int]
+      {.gcsafe, async, raises: [Defect].} =
     var res = 0
     try:
       while res < len:
@@ -85,24 +94,29 @@ proc new*(
       raise error
     except LPStreamError as error:
       error "LPStream error", err = error.msg
-      raise newException(ChunkerError, "LPStream error", error)
+      raise error
     except CatchableError as exc:
       error "CatchableError exception", exc = exc.msg
       raise newException(Defect, exc.msg)

     return res

-  LPStreamChunker.new(reader = reader, chunkSize = chunkSize, pad = pad)
+  LPStreamChunker.new(
+    reader = reader,
+    chunkSize = chunkSize,
+    pad = pad)

 proc new*(
-  T: type FileChunker, file: File, chunkSize = DefaultChunkSize, pad = true
+  T: type FileChunker,
+  file: File,
+  chunkSize = DefaultChunkSize,
+  pad = true
 ): FileChunker =
   ## create the default File chunker
   ##
-  proc reader(
-    data: ChunkBuffer, len: int
-  ): Future[int] {.async: (raises: [ChunkerError, CancelledError]).} =
+  proc reader(data: ChunkBuffer, len: int): Future[int]
+      {.gcsafe, async, raises: [Defect].} =
     var total = 0
     try:
       while total < len:
@@ -121,4 +135,7 @@ proc new*(
     return total

-  FileChunker.new(reader = reader, chunkSize = chunkSize, pad = pad)
+  FileChunker.new(
+    reader = reader,
+    chunkSize = chunkSize,
+    pad = pad)
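A minimal sketch of driving the chunker API shown above (illustrative; it assumes a readable file at `data.bin` and that the module builds as `codex/chunker`):

```nim
import pkg/chronos
import codex/chunker

proc consume() {.async.} =
  let file = open("data.bin")
  let chunker = FileChunker.new(file = file, chunkSize = DefaultChunkSize, pad = false)
  # getBytes returns an empty seq once the file is exhausted.
  while (let chunk = await chunker.getBytes(); chunk.len > 0):
    echo "read chunk of ", chunk.len, " bytes"
  file.close()

waitFor consume()
```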


@@ -1,7 +1,6 @@
-{.push raises: [].}
-
 import pkg/chronos
 import pkg/stew/endians2
+import pkg/upraises
 import pkg/stint

 type
@@ -9,12 +8,10 @@ type
   SecondsSince1970* = int64
   Timeout* = object of CatchableError

-method now*(clock: Clock): SecondsSince1970 {.base, gcsafe, raises: [].} =
+method now*(clock: Clock): SecondsSince1970 {.base, upraises: [].} =
   raiseAssert "not implemented"

-method waitUntil*(
-  clock: Clock, time: SecondsSince1970
-) {.base, async: (raises: [CancelledError]).} =
+method waitUntil*(clock: Clock, time: SecondsSince1970) {.base, async.} =
   raiseAssert "not implemented"

 method start*(clock: Clock) {.base, async.} =
@@ -23,9 +20,9 @@ method start*(clock: Clock) {.base, async.} =
 method stop*(clock: Clock) {.base, async.} =
   discard

-proc withTimeout*(
-  future: Future[void], clock: Clock, expiry: SecondsSince1970
-) {.async.} =
+proc withTimeout*(future: Future[void],
+                  clock: Clock,
+                  expiry: SecondsSince1970) {.async.} =
   let timeout = clock.waitUntil(expiry)
   try:
     await future or timeout
@@ -43,8 +40,5 @@ proc toSecondsSince1970*(bytes: seq[byte]): SecondsSince1970 =
   let asUint = uint64.fromBytes(bytes)
   cast[int64](asUint)

-proc toSecondsSince1970*(num: uint64): SecondsSince1970 =
-  cast[int64](num)
-
 proc toSecondsSince1970*(bigint: UInt256): SecondsSince1970 =
   bigint.truncate(int64)
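A quick illustration of the conversion overloads above (illustrative only):

```nim
import pkg/stint

# A block timestamp arrives as a UInt256 and is truncated to unix seconds.
let ts: SecondsSince1970 = toSecondsSince1970(1_700_000_000.u256)
assert ts == 1_700_000_000
```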


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -12,32 +12,38 @@ import std/strutils
 import std/os
 import std/tables
 import std/cpuinfo
-import std/net

 import pkg/chronos
-import pkg/taskpools
 import pkg/presto
 import pkg/libp2p
 import pkg/confutils
 import pkg/confutils/defs
+import pkg/nitro
 import pkg/stew/io2
+import pkg/stew/shims/net as stewnet
 import pkg/datastore
-import pkg/ethers except Rng
 import pkg/stew/io2
+import pkg/taskpools

 import ./node
 import ./conf
-import ./rng as random
+import ./rng
 import ./rest/api
 import ./stores
+import ./slots
 import ./blockexchange
 import ./utils/fileutils
+import ./erasure
 import ./discovery
+import ./contracts
 import ./systemclock
+import ./contracts/clock
+import ./contracts/deployment
 import ./utils/addrutils
 import ./namespaces
 import ./codextypes
 import ./logutils
-import ./nat

 logScope:
   topics = "codex node"

@@ -50,220 +56,249 @@ type
     repoStore: RepoStore
     maintenance: BlockMaintainer
     taskpool: Taskpool
-    isStarted: bool

   CodexPrivateKey* = libp2p.PrivateKey # alias
+  EthWallet = ethers.Wallet

-func config*(self: CodexServer): CodexConf =
-  return self.config
-
-func node*(self: CodexServer): CodexNodeRef =
-  return self.codexNode
-
-func repoStore*(self: CodexServer): RepoStore =
-  return self.repoStore
+proc waitForSync(provider: Provider): Future[void] {.async.} =
+  var sleepTime = 1
+  trace "Checking sync state of Ethereum provider..."
+  while await provider.isSyncing:
+    notice "Waiting for Ethereum provider to sync..."
+    await sleepAsync(sleepTime.seconds)
+    if sleepTime < 10:
+      inc sleepTime
+  trace "Ethereum provider is synced."
+
+proc bootstrapInteractions(
+    s: CodexServer): Future[void] {.async.} =
+  ## bootstrap interactions and return contracts
+  ## using clients, hosts, validators pairings
+  ##
+  let
+    config = s.config
+    repo = s.repoStore
+
+  if config.persistence:
+    if not config.ethAccount.isSome and not config.ethPrivateKey.isSome:
+      error "Persistence enabled, but no Ethereum account was set"
+      quit QuitFailure
+
+    let provider = JsonRpcProvider.new(config.ethProvider)
+    await waitForSync(provider)
+    var signer: Signer
+    if account =? config.ethAccount:
+      signer = provider.getSigner(account)
+    elif keyFile =? config.ethPrivateKey:
+      without isSecure =? checkSecureFile(keyFile):
+        error "Could not check file permissions: does Ethereum private key file exist?"
+        quit QuitFailure
+      if not isSecure:
+        error "Ethereum private key file does not have safe file permissions"
+        quit QuitFailure
+      without key =? keyFile.readAllChars():
+        error "Unable to read Ethereum private key file"
+        quit QuitFailure
+      without wallet =? EthWallet.new(key.strip(), provider):
+        error "Invalid Ethereum private key in file"
+        quit QuitFailure
+      signer = wallet
+
+    let deploy = Deployment.new(provider, config)
+    without marketplaceAddress =? await deploy.address(Marketplace):
+      error "No Marketplace address was specified or there is no known address for the current network"
+      quit QuitFailure
+
+    let marketplace = Marketplace.new(marketplaceAddress, signer)
+    let market = OnChainMarket.new(marketplace, config.rewardRecipient)
+    let clock = OnChainClock.new(provider)
+
+    var client: ?ClientInteractions
+    var host: ?HostInteractions
+    var validator: ?ValidatorInteractions
+
+    if config.validator or config.persistence:
+      s.codexNode.clock = clock
+    else:
+      s.codexNode.clock = SystemClock()
+
+    # This is used for simulation purposes. Normal nodes won't be compiled with this flag
+    # and hence the proof failure will always be 0.
+    when codex_enable_proof_failures:
+      let proofFailures = config.simulateProofFailures
+      if proofFailures > 0:
+        warn "Enabling proof failure simulation!"
+    else:
+      let proofFailures = 0
+      if config.simulateProofFailures > 0:
+        warn "Proof failure simulation is not enabled for this build! Configuration ignored"
+
+    let purchasing = Purchasing.new(market, clock)
+    let sales = Sales.new(market, clock, repo, proofFailures)
+    client = some ClientInteractions.new(clock, purchasing)
+    host = some HostInteractions.new(clock, sales)
+
+    if config.validator:
+      without validationConfig =? ValidationConfig.init(
+          config.validatorMaxSlots,
+          config.validatorGroups,
+          config.validatorGroupIndex), err:
+        error "Invalid validation parameters", err = err.msg
+        quit QuitFailure
+      let validation = Validation.new(clock, market, validationConfig)
+      validator = some ValidatorInteractions.new(clock, validation)
+
+    s.codexNode.contracts = (client, host, validator)

 proc start*(s: CodexServer) {.async.} =
-  if s.isStarted:
-    warn "Storage server already started, skipping"
-    return
-
-  trace "Starting Storage node", config = $s.config
+  trace "Starting codex node", config = $s.config

   await s.repoStore.start()
   s.maintenance.start()

   await s.codexNode.switch.start()

-  let (announceAddrs, discoveryAddrs) = nattedAddress(
-    s.config.nat, s.codexNode.switch.peerInfo.addrs, s.config.discoveryPort
-  )
+  let
+    # TODO: Can't define these as constants, pity
+    natIpPart = MultiAddress.init("/ip4/" & $s.config.nat & "/")
+      .expect("Should create multiaddress")
+    anyAddrIp = MultiAddress.init("/ip4/0.0.0.0/")
+      .expect("Should create multiaddress")
+    loopBackAddrIp = MultiAddress.init("/ip4/127.0.0.1/")
+      .expect("Should create multiaddress")
+
+    # announce addresses should be set to bound addresses,
+    # but the IP should be mapped to the provided nat ip
+    announceAddrs = s.codexNode.switch.peerInfo.addrs.mapIt:
+      block:
+        let
+          listenIPPart = it[multiCodec("ip4")].expect("Should get IP")
+        if listenIPPart == anyAddrIp or
+          (listenIPPart == loopBackAddrIp and natIpPart != loopBackAddrIp):
+          it.remapAddr(s.config.nat.some)
+        else:
+          it

   s.codexNode.discovery.updateAnnounceRecord(announceAddrs)
-  s.codexNode.discovery.updateDhtRecord(discoveryAddrs)
+  s.codexNode.discovery.updateDhtRecord(s.config.nat, s.config.discoveryPort)

+  await s.bootstrapInteractions()
   await s.codexNode.start()
-
-  if s.restServer != nil:
-    s.restServer.start()
-
-  s.isStarted = true
+  s.restServer.start()

 proc stop*(s: CodexServer) {.async.} =
-  if not s.isStarted:
-    warn "Storage is not started"
-    return
-
-  notice "Stopping Storage node"
-
-  var futures =
-    @[
-      s.codexNode.switch.stop(),
-      s.codexNode.stop(),
-      s.repoStore.stop(),
-      s.maintenance.stop(),
-    ]
-
-  if s.restServer != nil:
-    futures.add(s.restServer.stop())
-
-  let res = await noCancel allFinishedFailed[void](futures)
-
-  s.isStarted = false
-
-  if res.failure.len > 0:
-    error "Failed to stop Storage node", failures = res.failure.len
-    raiseAssert "Failed to stop Storage node"
-
-proc close*(s: CodexServer) {.async.} =
-  var futures =
-    @[s.codexNode.close(), s.repoStore.close(), s.codexNode.discovery.close()]
-
-  let res = await noCancel allFinishedFailed[void](futures)
-
-  if not s.taskpool.isNil:
-    try:
-      s.taskpool.shutdown()
-    except Exception as exc:
-      error "Failed to stop the taskpool", failures = res.failure.len
-      raiseAssert("Failure in taskpool shutdown:" & exc.msg)
-
-  if res.failure.len > 0:
-    error "Failed to close Storage node", failures = res.failure.len
-    raiseAssert "Failed to close Storage node"
-
-proc shutdown*(server: CodexServer) {.async.} =
-  await server.stop()
-  await server.close()
+  notice "Stopping codex node"
+
+  s.taskpool.syncAll()
+  s.taskpool.shutdown()
+
+  await allFuturesThrowing(
+    s.restServer.stop(),
+    s.codexNode.switch.stop(),
+    s.codexNode.stop(),
+    s.repoStore.stop(),
+    s.maintenance.stop())

 proc new*(
-  T: type CodexServer, config: CodexConf, privateKey: CodexPrivateKey
-): CodexServer =
+  T: type CodexServer,
+  config: CodexConf,
+  privateKey: CodexPrivateKey): CodexServer =
   ## create CodexServer including setting up datastore, repostore, etc
-  let switch = SwitchBuilder
+  let
+    switch = SwitchBuilder
     .new()
     .withPrivateKey(privateKey)
     .withAddresses(config.listenAddrs)
-    .withRng(random.Rng.instance())
+    .withRng(Rng.instance())
     .withNoise()
     .withMplex(5.minutes, 5.minutes)
     .withMaxConnections(config.maxPeers)
     .withAgentVersion(config.agentString)
     .withSignedPeerRecord(true)
-    .withTcpTransport({ServerFlags.ReuseAddr, ServerFlags.TcpNoDelay})
+    .withTcpTransport({ServerFlags.ReuseAddr})
     .build()

   var
     cache: CacheStore = nil
-    taskPool: Taskpool
-
-  try:
-    if config.numThreads == ThreadCount(0):
-      taskPool = Taskpool.new(numThreads = min(countProcessors(), 16))
-    else:
-      taskPool = Taskpool.new(numThreads = int(config.numThreads))
-    info "Threadpool started", numThreads = taskPool.numThreads
-  except CatchableError as exc:
-    raiseAssert("Failure in taskPool initialization:" & exc.msg)

   if config.cacheSize > 0'nb:
     cache = CacheStore.new(cacheSize = config.cacheSize)
     ## Is unused?

-  let discoveryDir = config.dataDir / CodexDhtNamespace
+  let
+    discoveryDir = config.dataDir / CodexDhtNamespace

   if io2.createPath(discoveryDir).isErr:
-    trace "Unable to create discovery directory for block store",
-      discoveryDir = discoveryDir
+    trace "Unable to create discovery directory for block store", discoveryDir = discoveryDir
     raise (ref Defect)(
-      msg: "Unable to create discovery directory for block store: " & discoveryDir
-    )
-
-  let providersPath = config.dataDir / CodexDhtProvidersNamespace
-  let discoveryStoreRes = LevelDbDatastore.new(providersPath)
-  if discoveryStoreRes.isErr:
-    error "Failed to initialize discovery datastore",
-      path = providersPath, err = discoveryStoreRes.error.msg
+      msg: "Unable to create discovery directory for block store: " & discoveryDir)

   let
-    discoveryStore =
-      Datastore(discoveryStoreRes.expect("Should create discovery datastore!"))
+    discoveryStore = Datastore(
+      LevelDbDatastore.new(config.dataDir / CodexDhtProvidersNamespace)
+        .expect("Should create discovery datastore!"))

     discovery = Discovery.new(
       switch.peerInfo.privateKey,
       announceAddrs = config.listenAddrs,
+      bindIp = config.discoveryIp,
       bindPort = config.discoveryPort,
       bootstrapNodes = config.bootstrapNodes,
-      store = discoveryStore,
-    )
+      store = discoveryStore)

+    wallet = WalletRef.new(EthPrivateKey.random())
     network = BlockExcNetwork.new(switch)

-    repoData =
-      case config.repoKind
-      of repoFS:
-        Datastore(
-          FSDatastore.new($config.dataDir, depth = 5).expect(
-            "Should create repo file data store!"
-          )
-        )
-      of repoSQLite:
-        Datastore(
-          SQLiteDatastore.new($config.dataDir).expect(
-            "Should create repo SQLite data store!"
-          )
-        )
-      of repoLevelDb:
-        Datastore(
-          LevelDbDatastore.new($config.dataDir).expect(
-            "Should create repo LevelDB data store!"
-          )
-        )
+    repoData = case config.repoKind
+      of repoFS: Datastore(FSDatastore.new($config.dataDir, depth = 5)
+        .expect("Should create repo file data store!"))
+      of repoSQLite: Datastore(SQLiteDatastore.new($config.dataDir)
+        .expect("Should create repo SQLite data store!"))
+      of repoLevelDb: Datastore(LevelDbDatastore.new($config.dataDir)
+        .expect("Should create repo LevelDB data store!"))

     repoStore = RepoStore.new(
       repoDs = repoData,
-      metaDs = LevelDbDatastore.new(config.dataDir / CodexMetaNamespace).expect(
-        "Should create metadata store!"
-      ),
+      metaDs = LevelDbDatastore.new(config.dataDir / CodexMetaNamespace)
+        .expect("Should create metadata store!"),
       quotaMaxBytes = config.storageQuota,
-      blockTtl = config.blockTtl,
-    )
+      blockTtl = config.blockTtl)

     maintenance = BlockMaintainer.new(
       repoStore,
       interval = config.blockMaintenanceInterval,
-      numberOfBlocksPerInterval = config.blockMaintenanceNumberOfBlocks,
-    )
+      numberOfBlocksPerInterval = config.blockMaintenanceNumberOfBlocks)

     peerStore = PeerCtxStore.new()
-    pendingBlocks = PendingBlocksManager.new(retries = config.blockRetries)
+    pendingBlocks = PendingBlocksManager.new()
     advertiser = Advertiser.new(repoStore, discovery)
-    blockDiscovery =
-      DiscoveryEngine.new(repoStore, peerStore, network, discovery, pendingBlocks)
-    engine = BlockExcEngine.new(
-      repoStore, network, blockDiscovery, advertiser, peerStore, pendingBlocks
-    )
+    blockDiscovery = DiscoveryEngine.new(repoStore, peerStore, network, discovery, pendingBlocks)
+    engine = BlockExcEngine.new(repoStore, wallet, network, blockDiscovery, advertiser, peerStore, pendingBlocks)
     store = NetworkStore.new(engine, repoStore)

+    prover = if config.prover:
+      let backend = config.initializeBackend().expect("Unable to create prover backend.")
+      some Prover.new(store, backend, config.numProofSamples)
+    else:
+      none Prover
+
+    taskpool = Taskpool.new(num_threads = countProcessors())

     codexNode = CodexNodeRef.new(
       switch = switch,
       networkStore = store,
       engine = engine,
+      prover = prover,
       discovery = discovery,
-      taskPool = taskPool,
-    )
+      taskpool = taskpool)

-  var restServer: RestServerRef = nil
-
-  if config.apiBindAddress.isSome:
-    restServer = RestServerRef
-      .new(
-        codexNode.initRestApi(config, repoStore, config.apiCorsAllowedOrigin),
-        initTAddress(config.apiBindAddress.get(), config.apiPort),
-        bufferSize = (1024 * 64),
-        maxRequestBodySize = int.high,
-      )
-      .expect("Should create rest server!")
+    restServer = RestServerRef.new(
+      codexNode.initRestApi(config, repoStore, config.apiCorsAllowedOrigin),
+      initTAddress(config.apiBindAddress , config.apiPort),
+      bufferSize = (1024 * 64),
+      maxRequestBodySize = int.high)
+      .expect("Should start rest server!")

   switch.mount(network)

@@ -273,5 +308,4 @@ proc new*(
     restServer: restServer,
     repoStore: repoStore,
     maintenance: maintenance,
-    taskPool: taskPool,
-  )
+    taskpool: taskpool)


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2023 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -25,16 +25,43 @@ export tables

 const
   # Size of blocks for storage / network exchange,
-  DefaultBlockSize* = NBytes 1024 * 64
+  DefaultBlockSize* = NBytes 1024*64
+  DefaultCellSize* = NBytes 2048
+
+  # Proving defaults
+  DefaultMaxSlotDepth* = 32
+  DefaultMaxDatasetDepth* = 8
+  DefaultBlockDepth* = 5
+  DefaultCellElms* = 67
+  DefaultSamplesNum* = 5

   # hashes
   Sha256HashCodec* = multiCodec("sha2-256")
+  Sha512HashCodec* = multiCodec("sha2-512")
+  Pos2Bn128SpngCodec* = multiCodec("poseidon2-alt_bn_128-sponge-r2")
+  Pos2Bn128MrklCodec* = multiCodec("poseidon2-alt_bn_128-merkle-2kb")

   ManifestCodec* = multiCodec("codex-manifest")
   DatasetRootCodec* = multiCodec("codex-root")
   BlockCodec* = multiCodec("codex-block")
+  SlotRootCodec* = multiCodec("codex-slot-root")
+  SlotProvingRootCodec* = multiCodec("codex-proving-root")
+  CodexSlotCellCodec* = multiCodec("codex-slot-cell")

-  CodexPrimitivesCodecs* = [ManifestCodec, DatasetRootCodec, BlockCodec]
+  CodexHashesCodecs* = [
+    Sha256HashCodec,
+    Pos2Bn128SpngCodec,
+    Pos2Bn128MrklCodec
+  ]
+
+  CodexPrimitivesCodecs* = [
+    ManifestCodec,
+    DatasetRootCodec,
+    BlockCodec,
+    SlotRootCodec,
+    SlotProvingRootCodec,
+    CodexSlotCellCodec,
+  ]

 proc initEmptyCidTable(): ?!Table[(CidVersion, MultiCodec, MultiCodec), Cid] =
   ## Initialize padding blocks table
@@ -47,33 +74,40 @@ proc initEmptyCidTable(): ?!Table[(CidVersion, MultiCodec, MultiCodec), Cid] =
   let
     emptyData: seq[byte] = @[]
     PadHashes = {
-      Sha256HashCodec: ?MultiHash.digest($Sha256HashCodec, emptyData).mapFailure
+      Sha256HashCodec: ? MultiHash.digest($Sha256HashCodec, emptyData).mapFailure,
+      Sha512HashCodec: ? MultiHash.digest($Sha512HashCodec, emptyData).mapFailure,
     }.toTable

-  var table = initTable[(CidVersion, MultiCodec, MultiCodec), Cid]()
+  var
+    table = initTable[(CidVersion, MultiCodec, MultiCodec), Cid]()

   for hcodec, mhash in PadHashes.pairs:
-    table[(CIDv1, hcodec, BlockCodec)] = ?Cid.init(CIDv1, BlockCodec, mhash).mapFailure
+    table[(CIDv1, hcodec, BlockCodec)] = ? Cid.init(CIDv1, BlockCodec, mhash).mapFailure

   success table

-proc emptyCid*(version: CidVersion, hcodec: MultiCodec, dcodec: MultiCodec): ?!Cid =
+proc emptyCid*(
+    version: CidVersion,
+    hcodec: MultiCodec,
+    dcodec: MultiCodec): ?!Cid =
   ## Returns cid representing empty content,
   ## given cid version, hash codec and data codec
   ##
-  var table {.global, threadvar.}: Table[(CidVersion, MultiCodec, MultiCodec), Cid]
+  var
+    table {.global, threadvar.}: Table[(CidVersion, MultiCodec, MultiCodec), Cid]

   once:
-    table = ?initEmptyCidTable()
+    table = ? initEmptyCidTable()

   table[(version, hcodec, dcodec)].catch

 proc emptyDigest*(
-  version: CidVersion, hcodec: MultiCodec, dcodec: MultiCodec
-): ?!MultiHash =
+    version: CidVersion,
+    hcodec: MultiCodec,
+    dcodec: MultiCodec): ?!MultiHash =
   ## Returns hash representing empty content,
   ## given cid version, hash codec and data codec
   ##
-  emptyCid(version, hcodec, dcodec).flatMap((cid: Cid) => cid.mhash.mapFailure)
+  emptyCid(version, hcodec, dcodec)
+    .flatMap((cid: Cid) => cid.mhash.mapFailure)
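To make the padding table above concrete, a small sketch of querying the empty-content CID (illustrative only; assumes the module builds as `codex/codextypes`):

```nim
import pkg/questionable/results
import codex/codextypes

# Look up the CID that stands for empty content under sha2-256.
let empty = emptyCid(CIDv1, Sha256HashCodec, BlockCodec).tryGet()
echo $empty
```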


@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -10,16 +10,10 @@
 {.push raises: [].}

 import std/os
+import std/terminal
-{.push warning[UnusedImport]: on.}
-import std/terminal # Is not used in tests
-{.pop.}
 import std/options
-import std/parseutils
 import std/strutils
 import std/typetraits
-import std/net

 import pkg/chronos
 import pkg/chronicles/helpers
@@ -29,11 +23,13 @@ import pkg/confutils/std/net
 import pkg/toml_serialization
 import pkg/metrics
 import pkg/metrics/chronos_httpserver
+import pkg/stew/shims/net as stewnet
+import pkg/stew/shims/parseutils
 import pkg/stew/byteutils
 import pkg/libp2p
+import pkg/ethers
 import pkg/questionable
 import pkg/questionable/results
-import pkg/stew/base64

 import ./codextypes
 import ./discovery
@@ -41,43 +37,44 @@ import ./logutils
 import ./stores
 import ./units
 import ./utils
-import ./nat
-import ./utils/natutils
-from ./blockexchange/engine/pendingblocks import DefaultBlockRetries
+from ./validationconfig import MaxSlots, ValidationGroups

-export units, net, codextypes, logutils, completeCmdArg, parseCmdArg, NatConfig
+export units, net, codextypes, logutils
+export ValidationGroups, MaxSlots

 export
-  DefaultQuotaBytes, DefaultBlockTtl, DefaultBlockInterval, DefaultNumBlocksPerInterval,
-  DefaultBlockRetries
-
-type ThreadCount* = distinct Natural
-
-proc `==`*(a, b: ThreadCount): bool {.borrow.}
+  DefaultQuotaBytes,
+  DefaultBlockTtl,
+  DefaultBlockMaintenanceInterval,
+  DefaultNumberOfBlocksToMaintainPerInterval

 proc defaultDataDir*(): string =
-  let dataDir =
-    when defined(windows):
-      "AppData" / "Roaming" / "Storage"
-    elif defined(macosx):
-      "Library" / "Application Support" / "Storage"
-    else:
-      ".cache" / "storage"
+  let dataDir = when defined(windows):
+      "AppData" / "Roaming" / "Codex"
+    elif defined(macosx):
+      "Library" / "Application Support" / "Codex"
+    else:
+      ".cache" / "codex"

   getHomeDir() / dataDir

 const
-  storage_enable_api_debug_peers* {.booldefine.} = false
-  storage_enable_log_counter* {.booldefine.} = false
-
-  DefaultThreadCount* = ThreadCount(0)
+  codex_enable_api_debug_peers* {.booldefine.} = false
+  codex_enable_proof_failures* {.booldefine.} = false
+  codex_enable_log_counter* {.booldefine.} = false
+
+  DefaultDataDir* = defaultDataDir()
+  DefaultCircuitDir* = defaultDataDir() / "circuits"

 type
   StartUpCmd* {.pure.} = enum
     noCmd
     persistence

+  PersistenceCmd* {.pure.} = enum
+    noCmd
+    prover
+
   LogKind* {.pure.} = enum
     Auto = "auto"
     Colors = "colors"
@@ -92,199 +89,338 @@ type
   CodexConf* = object
     configFile* {.
-      desc: "Loads the configuration from a TOML file",
-      defaultValueDesc: "none",
-      defaultValue: InputFile.none,
-      name: "config-file"
-    .}: Option[InputFile]
+      desc: "Loads the configuration from a TOML file"
+      defaultValueDesc: "none"
+      defaultValue: InputFile.none
+      name: "config-file" }: Option[InputFile]

-    logLevel* {.defaultValue: "info", desc: "Sets the log level", name: "log-level".}:
-      string
+    logLevel* {.
+      defaultValue: "info"
+      desc: "Sets the log level",
+      name: "log-level" }: string

     logFormat* {.
-      desc:
-        "Specifies what kind of logs should be written to stdout (auto, " &
-        "colors, nocolors, json)",
-      defaultValueDesc: "auto",
-      defaultValue: LogKind.Auto,
-      name: "log-format"
-    .}: LogKind
+      hidden
+      desc: "Specifies what kind of logs should be written to stdout (auto, " &
+        "colors, nocolors, json)"
+      defaultValueDesc: "auto"
+      defaultValue: LogKind.Auto
+      name: "log-format" }: LogKind

     metricsEnabled* {.
-      desc: "Enable the metrics server", defaultValue: false, name: "metrics"
-    .}: bool
+      desc: "Enable the metrics server"
+      defaultValue: false
+      name: "metrics" }: bool

     metricsAddress* {.
-      desc: "Listening address of the metrics server",
-      defaultValue: defaultAddress(config),
-      defaultValueDesc: "127.0.0.1",
-      name: "metrics-address"
-    .}: IpAddress
+      desc: "Listening address of the metrics server"
+      defaultValue: ValidIpAddress.init("127.0.0.1")
+      defaultValueDesc: "127.0.0.1"
+      name: "metrics-address" }: ValidIpAddress

     metricsPort* {.
-      desc: "Listening HTTP port of the metrics server",
-      defaultValue: 8008,
-      name: "metrics-port"
-    .}: Port
+      desc: "Listening HTTP port of the metrics server"
+      defaultValue: 8008
+      name: "metrics-port" }: Port

     dataDir* {.
-      desc: "The directory where Storage will store configuration and data",
-      defaultValue: defaultDataDir(),
-      defaultValueDesc: "",
-      abbr: "d",
-      name: "data-dir"
-    .}: OutDir
+      desc: "The directory where codex will store configuration and data"
+      defaultValue: DefaultDataDir
+      defaultValueDesc: $DefaultDataDir
+      abbr: "d"
+      name: "data-dir" }: OutDir

     listenAddrs* {.
-      desc: "Multi Addresses to listen on",
-      defaultValue:
-        @[MultiAddress.init("/ip4/0.0.0.0/tcp/0").expect("Should init multiaddress")],
-      defaultValueDesc: "/ip4/0.0.0.0/tcp/0",
-      abbr: "i",
-      name: "listen-addrs"
-    .}: seq[MultiAddress]
+      desc: "Multi Addresses to listen on"
+      defaultValue: @[
+        MultiAddress.init("/ip4/0.0.0.0/tcp/0")
+          .expect("Should init multiaddress")]
+      defaultValueDesc: "/ip4/0.0.0.0/tcp/0"
+      abbr: "i"
+      name: "listen-addrs" }: seq[MultiAddress]

+    # TODO: change this once we integrate nat support
     nat* {.
-      desc:
-        "Specify method to use for determining public address. " &
-        "Must be one of: any, none, upnp, pmp, extip:<IP>",
-      defaultValue: defaultNatConfig(),
-      defaultValueDesc: "any",
-      name: "nat"
-    .}: NatConfig
+      desc: "IP Addresses to announce behind a NAT"
+      defaultValue: ValidIpAddress.init("127.0.0.1")
+      defaultValueDesc: "127.0.0.1"
+      abbr: "a"
+      name: "nat" }: ValidIpAddress
+
+    discoveryIp* {.
+      desc: "Discovery listen address"
+      defaultValue: ValidIpAddress.init(IPv4_any())
+      defaultValueDesc: "0.0.0.0"
+      abbr: "e"
+      name: "disc-ip" }: ValidIpAddress

     discoveryPort* {.
-      desc: "Discovery (UDP) port",
-      defaultValue: 8090.Port,
-      defaultValueDesc: "8090",
-      abbr: "u",
-      name: "disc-port"
-    .}: Port
+      desc: "Discovery (UDP) port"
+      defaultValue: 8090.Port
+      defaultValueDesc: "8090"
+      abbr: "u"
+      name: "disc-port" }: Port

     netPrivKeyFile* {.
-      desc: "Source of network (secp256k1) private key file path or name",
-      defaultValue: "key",
-      name: "net-privkey"
-    .}: string
+      desc: "Source of network (secp256k1) private key file path or name"
+      defaultValue: "key"
+      name: "net-privkey" }: string

     bootstrapNodes* {.
-      desc:
-        "Specifies one or more bootstrap nodes to use when " &
-        "connecting to the network",
-      abbr: "b",
-      name: "bootstrap-node"
-    .}: seq[SignedPeerRecord]
+      desc: "Specifies one or more bootstrap nodes to use when " &
+        "connecting to the network"
+      abbr: "b"
+      name: "bootstrap-node" }: seq[SignedPeerRecord]

     maxPeers* {.
-      desc: "The maximum number of peers to connect to",
-      defaultValue: 160,
-      name: "max-peers"
-    .}: int
-
-    numThreads* {.
-      desc:
-        "Number of worker threads (\"0\" = use as many threads as there are CPU cores available)",
-      defaultValue: DefaultThreadCount,
-      name: "num-threads"
-    .}: ThreadCount
+      desc: "The maximum number of peers to connect to"
+      defaultValue: 160
+      name: "max-peers" }: int

     agentString* {.
-      defaultValue: "Logos Storage",
-      desc: "Node agent string which is used as identifier in network",
-      name: "agent-string"
-    .}: string
+      defaultValue: "Codex"
+      desc: "Node agent string which is used as identifier in network"
+      name: "agent-string" }: string

     apiBindAddress* {.
-      desc: "The REST API bind address",
-      defaultValue: "127.0.0.1".some,
-      name: "api-bindaddr"
-    .}: Option[string]
+      desc: "The REST API bind address"
+      defaultValue: "127.0.0.1"
+      name: "api-bindaddr"
+      }: string

     apiPort* {.
       desc: "The REST Api port",
-      defaultValue: 8080.Port,
-      defaultValueDesc: "8080",
-      name: "api-port",
-      abbr: "p"
-    .}: Port
+      defaultValue: 8080.Port
+      defaultValueDesc: "8080"
+      name: "api-port"
+      abbr: "p" }: Port

     apiCorsAllowedOrigin* {.
-      desc:
-        "The REST Api CORS allowed origin for downloading data. " &
+      desc: "The REST Api CORS allowed origin for downloading data. " &
         "'*' will allow all origins, '' will allow none.",
-      defaultValue: string.none,
-      defaultValueDesc: "Disallow all cross origin requests to download data",
-      name: "api-cors-origin"
-    .}: Option[string]
+      defaultValue: string.none
+      defaultValueDesc: "Disallow all cross origin requests to download data"
+      name: "api-cors-origin" }: Option[string]

     repoKind* {.
-      desc: "Backend for main repo store (fs, sqlite, leveldb)",
-      defaultValueDesc: "fs",
-      defaultValue: repoFS,
-      name: "repo-kind"
-    .}: RepoKind
+      desc: "Backend for main repo store (fs, sqlite, leveldb)"
+      defaultValueDesc: "fs"
+      defaultValue: repoFS
+      name: "repo-kind" }: RepoKind

     storageQuota* {.
-      desc: "The size of the total storage quota dedicated to the node",
-      defaultValue: DefaultQuotaBytes,
-      defaultValueDesc: $DefaultQuotaBytes,
-      name: "storage-quota",
-      abbr: "q"
-    .}: NBytes
+      desc: "The size of the total storage quota dedicated to the node"
+      defaultValue: DefaultQuotaBytes
+      defaultValueDesc: $DefaultQuotaBytes
+      name: "storage-quota"
+      abbr: "q" }: NBytes

     blockTtl* {.
-      desc: "Default block timeout in seconds - 0 disables the ttl",
-      defaultValue: DefaultBlockTtl,
-      defaultValueDesc: $DefaultBlockTtl,
-      name: "block-ttl",
-      abbr: "t"
-    .}: Duration
+      desc: "Default block timeout in seconds - 0 disables the ttl"
+      defaultValue: DefaultBlockTtl
+      defaultValueDesc: $DefaultBlockTtl
+      name: "block-ttl"
+      abbr: "t" }: Duration

     blockMaintenanceInterval* {.
-      desc:
-        "Time interval in seconds - determines frequency of block " &
-        "maintenance cycle: how often blocks are checked " & "for expiration and cleanup",
-      defaultValue: DefaultBlockInterval,
-      defaultValueDesc: $DefaultBlockInterval,
-      name: "block-mi"
-    .}: Duration
+      desc: "Time interval in seconds - determines frequency of block " &
+        "maintenance cycle: how often blocks are checked " &
+        "for expiration and cleanup"
+      defaultValue: DefaultBlockMaintenanceInterval
+      defaultValueDesc: $DefaultBlockMaintenanceInterval
+      name: "block-mi" }: Duration

     blockMaintenanceNumberOfBlocks* {.
-      desc: "Number of blocks to check every maintenance cycle",
-      defaultValue: DefaultNumBlocksPerInterval,
-      defaultValueDesc: $DefaultNumBlocksPerInterval,
-      name: "block-mn"
-    .}: int
-
-    blockRetries* {.
-      desc: "Number of times to retry fetching a block before giving up",
-      defaultValue: DefaultBlockRetries,
-      defaultValueDesc: $DefaultBlockRetries,
-      name: "block-retries"
-    .}: int
+      desc: "Number of blocks to check every maintenance cycle"
+      defaultValue: DefaultNumberOfBlocksToMaintainPerInterval
+      defaultValueDesc: $DefaultNumberOfBlocksToMaintainPerInterval
+      name: "block-mn" }: int

     cacheSize* {.
-      desc:
-        "The size of the block cache, 0 disables the cache - " &
-        "might help on slow hardrives",
-      defaultValue: 0,
-      defaultValueDesc: "0",
-      name: "cache-size",
-      abbr: "c"
-    .}: NBytes
+      desc: "The size of the block cache, 0 disables the cache - " &
+        "might help on slow hardrives"
+      defaultValue: 0
+      defaultValueDesc: "0"
+      name: "cache-size"
+      abbr: "c" }: NBytes

     logFile* {.
-      desc: "Logs to file", defaultValue: string.none, name: "log-file", hidden
-    .}: Option[string]
+      desc: "Logs to file"
+      defaultValue: string.none
+      name: "log-file"
+      hidden
+    .}: Option[string]

-func defaultAddress*(conf: CodexConf): IpAddress =
-  result = static parseIpAddress("127.0.0.1")
-
-func defaultNatConfig*(): NatConfig =
-  result = NatConfig(hasExtIp: false, nat: NatStrategy.NatAny)
+    case cmd* {.
+      defaultValue: noCmd
+      command }: StartUpCmd
+    of persistence:
+      ethProvider* {.
+        desc: "The URL of the JSON-RPC API of the Ethereum node"
+        defaultValue: "ws://localhost:8545"
+        name: "eth-provider"
+      .}: string
+
+      ethAccount* {.
+        desc: "The Ethereum account that is used for storage contracts"
+        defaultValue: EthAddress.none
+        defaultValueDesc: ""
+        name: "eth-account"
+      .}: Option[EthAddress]
+
+      ethPrivateKey* {.
+        desc: "File containing Ethereum private key for storage contracts"
+        defaultValue: string.none
+        defaultValueDesc: ""
+        name: "eth-private-key"
+      .}: Option[string]
+
+      marketplaceAddress* {.
+        desc: "Address of deployed Marketplace contract"
+        defaultValue: EthAddress.none
+        defaultValueDesc: ""
+        name: "marketplace-address"
+      .}: Option[EthAddress]
+
+      # TODO: should go behind a feature flag
+      simulateProofFailures* {.
+        desc: "Simulates proof failures once every N proofs. 0 = disabled."
+        defaultValue: 0
+        name: "simulate-proof-failures"
+        hidden
+      .}: int
+
+      validator* {.
+        desc: "Enables validator, requires an Ethereum node"
+        defaultValue: false
+        name: "validator"
+      .}: bool
+
+      validatorMaxSlots* {.
+        desc: "Maximum number of slots that the validator monitors"
+        longDesc: "If set to 0, the validator will not limit " &
+          "the maximum number of slots it monitors"
+        defaultValue: 1000
+        name: "validator-max-slots"
+      .}: MaxSlots
+
+      validatorGroups* {.
+        desc: "Slot validation groups"
+        longDesc: "A number indicating total number of groups into " &
+          "which the whole slot id space will be divided. " &
+          "The value must be in the range [2, 65535]. " &
+          "If not provided, the validator will observe " &
+          "the whole slot id space and the value of " &
+          "the --validator-group-index parameter will be ignored. " &
+          "Powers of twos are advised for even distribution"
+        defaultValue: ValidationGroups.none
+        name: "validator-groups"
+      .}: Option[ValidationGroups]
+
+      validatorGroupIndex* {.
+        desc: "Slot validation group index"
+        longDesc: "The value provided must be in the range " &
+          "[0, validatorGroups). Ignored when --validator-groups " &
+          "is not provided. Only slot ids satisfying condition " &
+          "[(slotId mod validationGroups) == groupIndex] will be " &
+          "observed by the validator"
+        defaultValue: 0
+        name: "validator-group-index"
+      .}: uint16
+
+      rewardRecipient* {.
+        desc: "Address to send payouts to (eg rewards and refunds)"
+        name: "reward-recipient"
+      .}: Option[EthAddress]
+
+      case persistenceCmd* {.
+        defaultValue: noCmd
+        command }: PersistenceCmd
+      of PersistenceCmd.prover:
+        circuitDir* {.
+          desc: "Directory where Codex will store proof circuit data"
+          defaultValue: DefaultCircuitDir
+          defaultValueDesc: $DefaultCircuitDir
+          abbr: "cd"
+          name: "circuit-dir" }: OutDir
+
+        circomR1cs* {.
+          desc: "The r1cs file for the storage circuit"
+          defaultValue: $DefaultCircuitDir / "proof_main.r1cs"
+          defaultValueDesc: $DefaultCircuitDir & "/proof_main.r1cs"
+          name: "circom-r1cs"
+        .}: InputFile
+
+        circomWasm* {.
+          desc: "The wasm file for the storage circuit"
+          defaultValue: $DefaultCircuitDir / "proof_main.wasm"
+          defaultValueDesc: $DefaultDataDir & "/circuits/proof_main.wasm"
+          name: "circom-wasm"
+        .}: InputFile
+
+        circomZkey* {.
+          desc: "The zkey file for the storage circuit"
+          defaultValue: $DefaultCircuitDir / "proof_main.zkey"
+          defaultValueDesc: $DefaultDataDir & "/circuits/proof_main.zkey"
+          name: "circom-zkey"
+        .}: InputFile
+
+        # TODO: should probably be hidden and behind a feature flag
+        circomNoZkey* {.
+          desc: "Ignore the zkey file - use only for testing!"
+          defaultValue: false
+          name: "circom-no-zkey"
+        .}: bool
+
+        numProofSamples* {.
+          desc: "Number of samples to prove"
+          defaultValue: DefaultSamplesNum
+          defaultValueDesc: $DefaultSamplesNum
+          name: "proof-samples" }: int
+
+        maxSlotDepth* {.
+          desc: "The maximum depth of the slot tree"
+          defaultValue: DefaultMaxSlotDepth
+          defaultValueDesc: $DefaultMaxSlotDepth
+          name: "max-slot-depth" }: int
+
+        maxDatasetDepth* {.
+          desc: "The maximum depth of the dataset tree"
+          defaultValue: DefaultMaxDatasetDepth
+          defaultValueDesc: $DefaultMaxDatasetDepth
+          name: "max-dataset-depth" }: int
+
+        maxBlockDepth* {.
+          desc: "The maximum depth of the network block merkle tree"
+          defaultValue: DefaultBlockDepth
+          defaultValueDesc: $DefaultBlockDepth
+          name: "max-block-depth" }: int
+
+        maxCellElms* {.
+          desc: "The maximum number of elements in a cell"
+          defaultValue: DefaultCellElms
+          defaultValueDesc: $DefaultCellElms
+          name: "max-cell-elements" }: int
+      of PersistenceCmd.noCmd:
+        discard
+
+    of StartUpCmd.noCmd:
+      discard # end of persistence
+
+  EthAddress* = ethers.Address
+
+logutils.formatIt(LogFormat.textLines, EthAddress): it.short0xHexLog
+logutils.formatIt(LogFormat.json, EthAddress): %it
+
+func persistence*(self: CodexConf): bool =
+  self.cmd == StartUpCmd.persistence
+
+func prover*(self: CodexConf): bool =
+  self.persistence and self.persistenceCmd == PersistenceCmd.prover

 proc getCodexVersion(): string =
-  let tag = strip(staticExec("git describe --tags --abbrev=0"))
+  let tag = strip(staticExec("git tag"))
   if tag.isEmptyOrWhitespace:
     return "untagged build"
   return tag

@@ -303,121 +439,62 @@ const
   nimBanner* = getNimBanner()

   codexFullVersion* =
-    "Storage version: " & codexVersion & "\p" & "Storage revision: " & codexRevision &
-    "\p"
+    "Codex version: " & codexVersion & "\p" &
+    "Codex revision: " & codexRevision & "\p" &
+    nimBanner

-proc parseCmdArg*(
-  T: typedesc[MultiAddress], input: string
-): MultiAddress {.raises: [ValueError].} =
+proc parseCmdArg*(T: typedesc[MultiAddress],
+                  input: string): MultiAddress
+                 {.upraises: [ValueError, LPError].} =
   var ma: MultiAddress
-  try:
-    let res = MultiAddress.init(input)
-    if res.isOk:
-      ma = res.get()
-    else:
-      fatal "Invalid MultiAddress", input = input, error = res.error()
-      quit QuitFailure
-  except LPError as exc:
-    fatal "Invalid MultiAddress uri", uri = input, error = exc.msg
+  let res = MultiAddress.init(input)
+  if res.isOk:
+    ma = res.get()
+  else:
+    warn "Invalid MultiAddress", input=input, error = res.error()
     quit QuitFailure
   ma

-proc parse*(T: type ThreadCount, p: string): Result[ThreadCount, string] =
-  try:
-    let count = parseInt(p)
-    if count != 0 and count < 2:
-      return err("Invalid number of threads: " & p)
-    return ok(ThreadCount(count))
-  except ValueError as e:
-    return err("Invalid number of threads: " & p & ", error=" & e.msg)
-
-proc parseCmdArg*(T: type ThreadCount, input: string): T =
-  let val = ThreadCount.parse(input)
-  if val.isErr:
-    fatal "Cannot parse the thread count.", input = input, error = val.error()
-    quit QuitFailure
-  return val.get()
-
-proc parse*(T: type SignedPeerRecord, p: string): Result[SignedPeerRecord, string] =
+proc parseCmdArg*(T: type SignedPeerRecord, uri: string): T =
   var res: SignedPeerRecord
   try:
-    if not res.fromURI(p):
-      return err("The uri is not a valid SignedPeerRecord: " & p)
-    return ok(res)
-  except LPError, Base64Error:
-    let e = getCurrentException()
-    return err(e.msg)
-
-proc parseCmdArg*(T: type SignedPeerRecord, uri: string): T =
-  let res = SignedPeerRecord.parse(uri)
-  if res.isErr:
-    fatal "Cannot parse the signed peer.", error = res.error(), input = uri
-    quit QuitFailure
-  return res.get()
+    if not res.fromURI(uri):
+      warn "Invalid SignedPeerRecord uri", uri = uri
+      quit QuitFailure
+  except CatchableError as exc:
+    warn "Invalid SignedPeerRecord uri", uri = uri, error = exc.msg
+    quit QuitFailure
+  res

-func parse*(T: type NatConfig, p: string): Result[NatConfig, string] =
-  case p.toLowerAscii
-  of "any":
-    return ok(NatConfig(hasExtIp: false, nat: NatStrategy.NatAny))
-  of "none":
-    return ok(NatConfig(hasExtIp: false, nat: NatStrategy.NatNone))
-  of "upnp":
-    return ok(NatConfig(hasExtIp: false, nat: NatStrategy.NatUpnp))
-  of "pmp":
-    return ok(NatConfig(hasExtIp: false, nat: NatStrategy.NatPmp))
-  else:
-    if p.startsWith("extip:"):
-      try:
-        let ip = parseIpAddress(p[6 ..^ 1])
-        return ok(NatConfig(hasExtIp: true, extIp: ip))
-      except ValueError:
-        let error = "Not a valid IP address: " & p[6 ..^ 1]
-        return err(error)
-    else:
-      return err("Not a valid NAT option: " & p)
-
-proc parseCmdArg*(T: type NatConfig, p: string): T =
-  let res = NatConfig.parse(p)
-  if res.isErr:
-    fatal "Cannot parse the NAT config.", error = res.error(), input = p
-    quit QuitFailure
-  return res.get()
-
-proc completeCmdArg*(T: type NatConfig, val: string): seq[string] =
-  return @[]
-
-func parse*(T: type NBytes, p: string): Result[NBytes, string] =
-  var num = 0'i64
-  let count = parseSize(p, num, alwaysBin = true)
-  if count == 0:
-    return err("Invalid number of bytes: " & p)
-  return ok(NBytes(num))
+proc parseCmdArg*(T: type EthAddress, address: string): T =
+  EthAddress.init($address).get()

 proc parseCmdArg*(T: type NBytes, val: string): T =
-  let res = NBytes.parse(val)
-  if res.isErr:
-    fatal "Cannot parse NBytes.", error = res.error(), input = val
-    quit QuitFailure
-  return res.get()
+  var num = 0'i64
+  let count = parseSize(val, num, alwaysBin = true)
+  if count == 0:
+    warn "Invalid number of bytes", nbytes = val
+    quit QuitFailure
+  NBytes(num)

 proc parseCmdArg*(T: type Duration, val: string): T =
   var dur: Duration
   let count = parseDuration(val, dur)
   if count == 0:
-    fatal "Cannot parse duration", dur = dur
+    warn "Cannot parse duration", dur = dur
     quit QuitFailure
   dur

+proc readValue*(r: var TomlReader, val: var EthAddress)
+               {.upraises: [SerializationError, IOError].} =
+  val = EthAddress.init(r.readValue(string)).get()
+
 proc readValue*(r: var TomlReader, val: var SignedPeerRecord) =
   without uri =? r.readValue(string).catch, err:
     error "invalid SignedPeerRecord configuration value", error = err.msg
     quit QuitFailure

-  try:
-    val = SignedPeerRecord.parseCmdArg(uri)
-  except LPError as err:
-    fatal "Invalid SignedPeerRecord uri", uri = uri, error = err.msg
-    quit QuitFailure
+  val = SignedPeerRecord.parseCmdArg(uri)

 proc readValue*(r: var TomlReader, val: var MultiAddress) =
   without input =? r.readValue(string).catch, err:
@@ -428,12 +505,11 @@ proc readValue*(r: var TomlReader, val: var MultiAddress) =
   if res.isOk:
     val = res.get()
   else:
-    fatal "Invalid MultiAddress", input = input, error = res.error()
+    warn "Invalid MultiAddress", input=input, error=res.error()
     quit QuitFailure

-proc readValue*(
-  r: var TomlReader, val: var NBytes
-) {.raises: [SerializationError, IOError].} =
+proc readValue*(r: var TomlReader, val: var NBytes)
+               {.upraises: [SerializationError, IOError].} =
   var value = 0'i64
   var str = r.readValue(string)
   let count = parseSize(str, value, alwaysBin = true)
@@ -442,18 +518,8 @@ proc readValue*(
     quit QuitFailure
   val = NBytes(value)

-proc readValue*(
-  r: var TomlReader, val: var ThreadCount
-) {.raises: [SerializationError, IOError].} =
-  var str = r.readValue(string)
-  try:
-    val = parseCmdArg(ThreadCount, str)
-  except CatchableError as err:
-    raise newException(SerializationError, err.msg)
-
-proc readValue*(
-  r: var TomlReader, val: var Duration
-) {.raises: [SerializationError, IOError].} =
+proc readValue*(r: var TomlReader, val: var Duration)
+               {.upraises: [SerializationError, IOError].} =
   var str = r.readValue(string)
   var dur: Duration
   let count = parseDuration(str, dur)
@@ -462,23 +528,14 @@ proc readValue*(
     quit QuitFailure
   val = dur

-proc readValue*(
-  r: var TomlReader, val: var NatConfig
-) {.raises: [SerializationError].} =
-  val =
-    try:
-      parseCmdArg(NatConfig, r.readValue(string))
-    except CatchableError as err:
-      raise newException(SerializationError, err.msg)
-
 # no idea why confutils needs this:
-proc completeCmdArg*(T: type NBytes, val: string): seq[string] =
+proc completeCmdArg*(T: type EthAddress; val: string): seq[string] =
   discard

-proc completeCmdArg*(T: type Duration, val: string): seq[string] =
+proc completeCmdArg*(T: type NBytes; val: string): seq[string] =
   discard

-proc completeCmdArg*(T: type ThreadCount, val: string): seq[string] =
+proc completeCmdArg*(T: type Duration; val: string): seq[string] =
   discard

 # silly chronicles, colors is a compile-time property
@@ -500,7 +557,7 @@ proc stripAnsi*(v: string): string =
         if c2 != '[':
           break
       else:
-        if c2 in {'0' .. '9'} + {';'}:
+        if c2 in {'0'..'9'} + {';'}:
          discard # keep looking
        elif c2 == 'm':
          i = x + 1
@@ -517,19 +574,19 @@ proc stripAnsi*(v: string): string =

   res

-proc updateLogLevel*(logLevel: string) {.raises: [ValueError].} =
+proc updateLogLevel*(logLevel: string) {.upraises: [ValueError].} =
   # Updates log levels (without clearing old ones)
   let directives = logLevel.split(";")
   try:
     setLogLevel(parseEnum[LogLevel](directives[0].toUpperAscii))
   except ValueError:
     raise (ref ValueError)(
-      msg:
-        "Please specify one of: trace, debug, " & "info, notice, warn, error or fatal"
+      msg: "Please specify one of: trace, debug, " &
+        "info, notice, warn, error or fatal"
     )

   if directives.len > 1:
-    for topicName, settings in parseTopicDirectives(directives[1 ..^ 1]):
+    for topicName, settings in parseTopicDirectives(directives[1..^1]):
       if not setTopicState(topicName, settings.state, settings.logLevel):
         warn "Unrecognized logging topic", topic = topicName

@@ -538,9 +595,7 @@ proc setupLogging*(conf: CodexConf) =
     warn "Logging configuration options not enabled in the current build"
   else:
     var logFile: ?IoHandle
-    proc noOutput(logLevel: LogLevel, msg: LogOutputStr) =
-      discard
+    proc noOutput(logLevel: LogLevel, msg: LogOutputStr) = discard

     proc writeAndFlush(f: File, msg: LogOutputStr) =
       try:
         f.write(msg)
@@ -561,11 +616,14 @@ proc setupLogging*(conf: CodexConf) =
     defaultChroniclesStream.outputs[2].writer = noOutput

     if logFilePath =? conf.logFile and logFilePath.len > 0:
-      let logFileHandle =
-        openFile(logFilePath, {OpenFlags.Write, OpenFlags.Create, OpenFlags.Truncate})
+      let logFileHandle = openFile(
+        logFilePath,
+        {OpenFlags.Write, OpenFlags.Create, OpenFlags.Truncate}
+      )
       if logFileHandle.isErr:
         error "failed to open log file",
-          path = logFilePath, errorCode = $logFileHandle.error
+          path = logFilePath,
+          errorCode = $logFileHandle.error
       else:
         logFile = logFileHandle.option
         defaultChroniclesStream.outputs[2].writer = fileFlush
@@ -573,30 +631,39 @@ proc setupLogging*(conf: CodexConf) =
       defaultChroniclesStream.outputs[1].writer = noOutput

     let writer =
-      case conf.logFormat
+      case conf.logFormat:
       of LogKind.Auto:
-        if isatty(stdout): stdoutFlush else: noColorsFlush
-      of LogKind.Colors:
-        stdoutFlush
-      of LogKind.NoColors:
-        noColorsFlush
+        if isatty(stdout):
+          stdoutFlush
+        else:
+          noColorsFlush
+      of LogKind.Colors: stdoutFlush
+      of LogKind.NoColors: noColorsFlush
       of LogKind.Json:
         defaultChroniclesStream.outputs[1].writer = stdoutFlush
         noOutput
       of LogKind.None:
         noOutput

-    when storage_enable_log_counter:
+    when codex_enable_log_counter:
       var counter = 0.uint64
       proc numberedWriter(logLevel: LogLevel, msg: LogOutputStr) =
         inc(counter)
-        let withoutNewLine = msg[0 ..^ 2]
+        let withoutNewLine = msg[0..^2]
         writer(logLevel, withoutNewLine & " count=" & $counter & "\n")
       defaultChroniclesStream.outputs[0].writer = numberedWriter
     else:
       defaultChroniclesStream.outputs[0].writer = writer

+    try:
+      updateLogLevel(conf.logLevel)
+    except ValueError as err:
+      try:
+        stderr.write "Invalid value for --log-level. " & err.msg & "\n"
+      except IOError:
+        echo "Invalid value for --log-level. " & err.msg
+      quit QuitFailure
+
 proc setupMetrics*(config: CodexConf) =
   if config.metricsEnabled:
     let metricsAddress = config.metricsAddress


@@ -1,2 +0,0 @@
-const ContentIdsExts =
-  [multiCodec("codex-root"), multiCodec("codex-manifest"), multiCodec("codex-block")]

codex/contracts.nim Normal file

@@ -0,0 +1,9 @@
import contracts/requests
import contracts/marketplace
import contracts/market
import contracts/interactions
export requests
export marketplace
export market
export interactions

codex/contracts/Readme.md Normal file

@@ -0,0 +1,148 @@
Codex Contracts in Nim
=======================
Nim API for the [Codex smart contracts][1].
Usage
-----
For a global overview of the steps involved in starting and fulfilling a
storage contract, see [Codex Contracts][1].
Smart contract
--------------
Connecting to the smart contract on an Ethereum node:
```nim
import codex/contracts
import ethers
let address = # fill in address where the contract was deployed
let provider = JsonRpcProvider.new("ws://localhost:8545")
let marketplace = Marketplace.new(address, provider)
```
Setup client and host so that they can sign transactions; here we use the first
two accounts on the Ethereum node:
```nim
let accounts = await provider.listAccounts()
let client = provider.getSigner(accounts[0])
let host = provider.getSigner(accounts[1])
```
Storage requests
----------------
Creating a request for storage:
```nim
let request: StorageRequest = (
  client:           # address of the client requesting storage
  duration:         # duration of the contract in seconds
  size:             # size in bytes
  contentHash:      # SHA256 hash of the content that's going to be stored
  proofProbability: # require a storage proof roughly once every N periods
  maxPrice:         # maximum price the client is willing to pay
  expiry:           # expiration time of the request (in unix time)
  nonce:            # random nonce to differentiate between similar requests
)
When a client wants to submit this request to the network, it needs to pay the
maximum price to the smart contract in advance. The difference between the
maximum price and the offered price will be reimbursed later.
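How the payment is prepared depends on the deployed token. A plausible sketch, assuming the marketplace is funded via an ERC-20 token whose address the contract exposes (the `token` getter and the `Erc20Token` helper from nim-ethers are assumptions here, not the confirmed API):

```nim
import pkg/ethers/erc20

# Assumption: the marketplace exposes the address of its ERC-20 payment token.
let tokenAddress = await marketplace.token()
let token = Erc20Token.new(tokenAddress, client)
# Approve the marketplace to withdraw up to the maximum price.
discard await token.approve(address, request.maxPrice)
```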
Once the payment has been prepared, the client can submit the request to the
network:
```nim
await marketplace
  .connect(client)
  .requestStorage(request)
```
Storage offers
--------------
Creating a storage offer:
```nim
let offer: StorageOffer = (
  host:      # address of the host that is offering storage
  requestId: request.id,
  price:     # offered price (in number of tokens)
  expiry:    # expiration time of the offer (in unix time)
)
```
A host submits an offer:
```nim
await marketplace
  .connect(host)
  .offerStorage(offer)
```
Client selects an offer:
```nim
await marketplace
.connect(client)
.selectOffer(offer.id)
```
Starting and finishing a storage contract
-----------------------------------------
The host whose offer was selected can start the storage contract once it has
received the data that needs to be stored:
```nim
await marketplace
.connect(host)
.startContract(offer.id)
```
Once the storage contract is finished, the host can release payment:
```nim
await marketplace
.connect(host)
.finishContract(id)
```
Storage proofs
--------------
Time is divided into periods, and in each period a storage proof may be required
from the host. The odds of requiring a storage proof are negotiated through the
storage request. For more details about the timing of storage proofs, please
refer to the [design document][2].
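As a rough sketch of the timing arithmetic (assuming the `config()` getter
from `codex/contracts/marketplace.nim` and the `OnChainClock` from
`codex/contracts/clock.nim`, both part of this changeset), the current period
is simply the chain time divided by the period length:
```nim
# Sketch only: config.proofs.period is expressed in seconds, so the
# current period index is the on-chain timestamp div the period length.
let config = await marketplace.config()
let period = clock.now().u256 div config.proofs.period
```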
At the start of each period of time, the host can check whether a storage proof
is required:
```nim
let isProofRequired = await marketplace.isProofRequired(offer.id)
```
If a proof is required, the host can submit it before the end of the period:
```nim
await marketplace
.connect(host)
.submitProof(id, proof)
```
If a proof is not submitted, then a validator can mark a proof as missing:
```nim
await marketplace
.connect(validator)
.markProofAsMissing(id, period)
```
[1]: https://github.com/status-im/codex-contracts-eth/
[2]: https://github.com/status-im/codex-research/blob/main/design/storage-proof-timing.md

codex/contracts/clock.nim Normal file

@ -0,0 +1,71 @@
import std/times
import pkg/ethers
import pkg/chronos
import pkg/stint
import ../clock
import ../conf
export clock
logScope:
topics = "contracts clock"
type
OnChainClock* = ref object of Clock
provider: Provider
subscription: Subscription
offset: times.Duration
blockNumber: UInt256
started: bool
newBlock: AsyncEvent
proc new*(_: type OnChainClock, provider: Provider): OnChainClock =
OnChainClock(provider: provider, newBlock: newAsyncEvent())
proc update(clock: OnChainClock, blck: Block) =
if number =? blck.number and number > clock.blockNumber:
let blockTime = initTime(blck.timestamp.truncate(int64), 0)
let computerTime = getTime()
clock.offset = blockTime - computerTime
clock.blockNumber = number
trace "updated clock", blockTime=blck.timestamp, blockNumber=number, offset=clock.offset
clock.newBlock.fire()
proc update(clock: OnChainClock) {.async.} =
try:
if latest =? (await clock.provider.getBlock(BlockTag.latest)):
clock.update(latest)
except CancelledError as error:
raise error
except CatchableError as error:
debug "error updating clock: ", error=error.msg
discard
method start*(clock: OnChainClock) {.async.} =
if clock.started:
return
proc onBlock(_: Block) =
# ignore block parameter; hardhat may call this with pending blocks
asyncSpawn clock.update()
await clock.update()
clock.subscription = await clock.provider.subscribe(onBlock)
clock.started = true
method stop*(clock: OnChainClock) {.async.} =
if not clock.started:
return
await clock.subscription.unsubscribe()
clock.started = false
method now*(clock: OnChainClock): SecondsSince1970 =
doAssert clock.started, "clock should be started before calling now()"
return toUnix(getTime() + clock.offset)
method waitUntil*(clock: OnChainClock, time: SecondsSince1970) {.async.} =
while (let difference = time - clock.now(); difference > 0):
clock.newBlock.clear()
discard await clock.newBlock.wait().withTimeout(chronos.seconds(difference))

codex/contracts/config.nim Normal file

@ -0,0 +1,78 @@
import pkg/contractabi
import pkg/ethers/fields
import pkg/questionable/results
export contractabi
type
MarketplaceConfig* = object
collateral*: CollateralConfig
proofs*: ProofConfig
CollateralConfig* = object
repairRewardPercentage*: uint8 # percentage of remaining collateral slot has after it has been freed
maxNumberOfSlashes*: uint8 # frees slot when the number of slashes reaches this value
slashCriterion*: uint16 # amount of proofs missed that lead to slashing
slashPercentage*: uint8 # percentage of the collateral that is slashed
ProofConfig* = object
period*: UInt256 # proofs requirements are calculated per period (in seconds)
timeout*: UInt256 # mark proofs as missing before the timeout (in seconds)
downtime*: uint8 # ignore this much recent blocks for proof requirements
zkeyHash*: string # hash of the zkey file which is linked to the verifier
# Ensures the pointer does not remain in downtime for many consecutive
# periods. For each period increase, move the pointer `pointerProduct`
# blocks. Should be a prime number to ensure there are no cycles.
downtimeProduct*: uint8
func fromTuple(_: type ProofConfig, tupl: tuple): ProofConfig =
ProofConfig(
period: tupl[0],
timeout: tupl[1],
downtime: tupl[2],
zkeyHash: tupl[3],
downtimeProduct: tupl[4]
)
func fromTuple(_: type CollateralConfig, tupl: tuple): CollateralConfig =
CollateralConfig(
repairRewardPercentage: tupl[0],
maxNumberOfSlashes: tupl[1],
slashCriterion: tupl[2],
slashPercentage: tupl[3]
)
func fromTuple(_: type MarketplaceConfig, tupl: tuple): MarketplaceConfig =
MarketplaceConfig(
collateral: tupl[0],
proofs: tupl[1]
)
func solidityType*(_: type ProofConfig): string =
solidityType(ProofConfig.fieldTypes)
func solidityType*(_: type CollateralConfig): string =
solidityType(CollateralConfig.fieldTypes)
func solidityType*(_: type MarketplaceConfig): string =
solidityType(MarketplaceConfig.fieldTypes)
func encode*(encoder: var AbiEncoder, slot: ProofConfig) =
encoder.write(slot.fieldValues)
func encode*(encoder: var AbiEncoder, slot: CollateralConfig) =
encoder.write(slot.fieldValues)
func encode*(encoder: var AbiEncoder, slot: MarketplaceConfig) =
encoder.write(slot.fieldValues)
func decode*(decoder: var AbiDecoder, T: type ProofConfig): ?!T =
let tupl = ?decoder.read(ProofConfig.fieldTypes)
success ProofConfig.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type CollateralConfig): ?!T =
let tupl = ?decoder.read(CollateralConfig.fieldTypes)
success CollateralConfig.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type MarketplaceConfig): ?!T =
let tupl = ?decoder.read(MarketplaceConfig.fieldTypes)
success MarketplaceConfig.fromTuple(tupl)

codex/contracts/deployment.nim Normal file

@ -0,0 +1,47 @@
import std/os
import std/tables
import pkg/ethers
import pkg/questionable
import ../conf
import ../logutils
import ./marketplace
type Deployment* = ref object
provider: Provider
config: CodexConf
const knownAddresses = {
# Hardhat localhost network
"31337": {
"Marketplace": Address.init("0x322813Fd9A801c5507c9de605d63CEA4f2CE6c44"),
}.toTable,
# Taiko Alpha-3 Testnet
"167005": {
"Marketplace": Address.init("0x948CF9291b77Bd7ad84781b9047129Addf1b894F")
}.toTable,
# Codex Testnet - Oct 21 2024 07:31:50 AM (+00:00 UTC)
"789987": {
"Marketplace": Address.init("0x3F9Cf3F40F0e87d804B776D8403e3d29F85211f4")
}.toTable
}.toTable
proc getKnownAddress(T: type, chainId: UInt256): ?Address =
let id = chainId.toString(10)
notice "Looking for well-known contract address with ChainID ", chainId=id
if not (id in knownAddresses):
return none Address
return knownAddresses[id].getOrDefault($T, Address.none)
proc new*(_: type Deployment, provider: Provider, config: CodexConf): Deployment =
Deployment(provider: provider, config: config)
proc address*(deployment: Deployment, contract: type): Future[?Address] {.async.} =
when contract is Marketplace:
if address =? deployment.config.marketplaceAddress:
return some address
let chainId = await deployment.provider.getChainId()
return contract.getKnownAddress(chainId)

codex/contracts/interactions.nim Normal file

@ -0,0 +1,9 @@
import ./interactions/interactions
import ./interactions/hostinteractions
import ./interactions/clientinteractions
import ./interactions/validatorinteractions
export interactions
export hostinteractions
export clientinteractions
export validatorinteractions

codex/contracts/interactions/clientinteractions.nim Normal file

@ -0,0 +1,27 @@
import pkg/ethers
import ../../purchasing
import ../../logutils
import ../market
import ../clock
import ./interactions
export purchasing
export logutils
type
ClientInteractions* = ref object of ContractInteractions
purchasing*: Purchasing
proc new*(_: type ClientInteractions,
clock: OnChainClock,
purchasing: Purchasing): ClientInteractions =
ClientInteractions(clock: clock, purchasing: purchasing)
proc start*(self: ClientInteractions) {.async.} =
await procCall ContractInteractions(self).start()
await self.purchasing.start()
proc stop*(self: ClientInteractions) {.async.} =
await self.purchasing.stop()
await procCall ContractInteractions(self).stop()

codex/contracts/interactions/hostinteractions.nim Normal file

@ -0,0 +1,29 @@
import pkg/chronos
import ../../logutils
import ../../sales
import ./interactions
export sales
export logutils
type
HostInteractions* = ref object of ContractInteractions
sales*: Sales
proc new*(
_: type HostInteractions,
clock: Clock,
sales: Sales
): HostInteractions =
## Create a new HostInteractions instance
##
HostInteractions(clock: clock, sales: sales)
method start*(self: HostInteractions) {.async.} =
await procCall ContractInteractions(self).start()
await self.sales.start()
method stop*(self: HostInteractions) {.async.} =
await self.sales.stop()
await procCall ContractInteractions(self).stop()

codex/contracts/interactions/interactions.nim Normal file

@ -0,0 +1,16 @@
import pkg/ethers
import ../clock
import ../marketplace
import ../market
export clock
type
ContractInteractions* = ref object of RootObj
clock*: Clock
method start*(self: ContractInteractions) {.async, base.} =
discard
method stop*(self: ContractInteractions) {.async, base.} =
discard

codex/contracts/interactions/validatorinteractions.nim Normal file

@ -0,0 +1,21 @@
import ./interactions
import ../../validation
export validation
type
ValidatorInteractions* = ref object of ContractInteractions
validation: Validation
proc new*(_: type ValidatorInteractions,
clock: OnChainClock,
validation: Validation): ValidatorInteractions =
ValidatorInteractions(clock: clock, validation: validation)
proc start*(self: ValidatorInteractions) {.async.} =
await procCall ContractInteractions(self).start()
await self.validation.start()
proc stop*(self: ValidatorInteractions) {.async.} =
await self.validation.stop()
await procCall ContractInteractions(self).stop()

codex/contracts/market.nim Normal file

@ -0,0 +1,414 @@
import std/sequtils
import std/strutils
import std/sugar
import pkg/ethers
import pkg/upraises
import pkg/questionable
import ../utils/exceptions
import ../logutils
import ../market
import ./marketplace
import ./proofs
export market
logScope:
topics = "marketplace onchain market"
type
OnChainMarket* = ref object of Market
contract: Marketplace
signer: Signer
rewardRecipient: ?Address
MarketSubscription = market.Subscription
EventSubscription = ethers.Subscription
OnChainMarketSubscription = ref object of MarketSubscription
eventSubscription: EventSubscription
func new*(
_: type OnChainMarket,
contract: Marketplace,
rewardRecipient = Address.none): OnChainMarket =
without signer =? contract.signer:
raiseAssert("Marketplace contract should have a signer")
OnChainMarket(
contract: contract,
signer: signer,
rewardRecipient: rewardRecipient
)
proc raiseMarketError(message: string) {.raises: [MarketError].} =
raise newException(MarketError, message)
template convertEthersError(body) =
try:
body
except EthersError as error:
raiseMarketError(error.msgDetail)
proc approveFunds(market: OnChainMarket, amount: UInt256) {.async.} =
debug "Approving tokens", amount
convertEthersError:
let tokenAddress = await market.contract.token()
let token = Erc20Token.new(tokenAddress, market.signer)
discard await token.increaseAllowance(market.contract.address(), amount).confirm(0)
method getZkeyHash*(market: OnChainMarket): Future[?string] {.async.} =
let config = await market.contract.config()
return some config.proofs.zkeyHash
method getSigner*(market: OnChainMarket): Future[Address] {.async.} =
convertEthersError:
return await market.signer.getAddress()
method periodicity*(market: OnChainMarket): Future[Periodicity] {.async.} =
convertEthersError:
let config = await market.contract.config()
let period = config.proofs.period
return Periodicity(seconds: period)
method proofTimeout*(market: OnChainMarket): Future[UInt256] {.async.} =
convertEthersError:
let config = await market.contract.config()
return config.proofs.timeout
method proofDowntime*(market: OnChainMarket): Future[uint8] {.async.} =
convertEthersError:
let config = await market.contract.config()
return config.proofs.downtime
method getPointer*(market: OnChainMarket, slotId: SlotId): Future[uint8] {.async.} =
convertEthersError:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.getPointer(slotId, overrides)
method myRequests*(market: OnChainMarket): Future[seq[RequestId]] {.async.} =
convertEthersError:
return await market.contract.myRequests
method mySlots*(market: OnChainMarket): Future[seq[SlotId]] {.async.} =
convertEthersError:
let slots = await market.contract.mySlots()
debug "Fetched my slots", numSlots=len(slots)
return slots
method requestStorage(market: OnChainMarket, request: StorageRequest){.async.} =
convertEthersError:
debug "Requesting storage"
await market.approveFunds(request.price())
discard await market.contract.requestStorage(request).confirm(0)
method getRequest(market: OnChainMarket,
id: RequestId): Future[?StorageRequest] {.async.} =
convertEthersError:
try:
return some await market.contract.getRequest(id)
except ProviderError as e:
if e.msgDetail.contains("Unknown request"):
return none StorageRequest
raise e
method requestState*(market: OnChainMarket,
requestId: RequestId): Future[?RequestState] {.async.} =
convertEthersError:
try:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return some await market.contract.requestState(requestId, overrides)
except ProviderError as e:
if e.msgDetail.contains("Unknown request"):
return none RequestState
raise e
method slotState*(market: OnChainMarket,
slotId: SlotId): Future[SlotState] {.async.} =
convertEthersError:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.slotState(slotId, overrides)
method getRequestEnd*(market: OnChainMarket,
id: RequestId): Future[SecondsSince1970] {.async.} =
convertEthersError:
return await market.contract.requestEnd(id)
method requestExpiresAt*(market: OnChainMarket,
id: RequestId): Future[SecondsSince1970] {.async.} =
convertEthersError:
return await market.contract.requestExpiry(id)
method getHost(market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256): Future[?Address] {.async.} =
convertEthersError:
let slotId = slotId(requestId, slotIndex)
let address = await market.contract.getHost(slotId)
if address != Address.default:
return some address
else:
return none Address
method getActiveSlot*(market: OnChainMarket,
slotId: SlotId): Future[?Slot] {.async.} =
convertEthersError:
try:
return some await market.contract.getActiveSlot(slotId)
except ProviderError as e:
if e.msgDetail.contains("Slot is free"):
return none Slot
raise e
method fillSlot(market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256,
proof: Groth16Proof,
collateral: UInt256) {.async.} =
convertEthersError:
await market.approveFunds(collateral)
discard await market.contract.fillSlot(requestId, slotIndex, proof).confirm(0)
method freeSlot*(market: OnChainMarket, slotId: SlotId) {.async.} =
convertEthersError:
var freeSlot: Future[?TransactionResponse]
if rewardRecipient =? market.rewardRecipient:
# If --reward-recipient specified, use it as the reward recipient, and use
# the SP's address as the collateral recipient
let collateralRecipient = await market.getSigner()
freeSlot = market.contract.freeSlot(
slotId,
rewardRecipient, # --reward-recipient
collateralRecipient) # SP's address
else:
# Otherwise, use the SP's address as both the reward and collateral
# recipient (the contract will use msg.sender for both)
freeSlot = market.contract.freeSlot(slotId)
discard await freeSlot.confirm(0)
method withdrawFunds(market: OnChainMarket,
requestId: RequestId) {.async.} =
convertEthersError:
discard await market.contract.withdrawFunds(requestId).confirm(0)
method isProofRequired*(market: OnChainMarket,
id: SlotId): Future[bool] {.async.} =
convertEthersError:
try:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.isProofRequired(id, overrides)
except ProviderError as e:
if e.msgDetail.contains("Slot is free"):
return false
raise e
method willProofBeRequired*(market: OnChainMarket,
id: SlotId): Future[bool] {.async.} =
convertEthersError:
try:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.willProofBeRequired(id, overrides)
except ProviderError as e:
if e.msgDetail.contains("Slot is free"):
return false
raise e
method getChallenge*(market: OnChainMarket, id: SlotId): Future[ProofChallenge] {.async.} =
convertEthersError:
let overrides = CallOverrides(blockTag: some BlockTag.pending)
return await market.contract.getChallenge(id, overrides)
method submitProof*(market: OnChainMarket,
id: SlotId,
proof: Groth16Proof) {.async.} =
convertEthersError:
discard await market.contract.submitProof(id, proof).confirm(0)
method markProofAsMissing*(market: OnChainMarket,
id: SlotId,
period: Period) {.async.} =
convertEthersError:
discard await market.contract.markProofAsMissing(id, period).confirm(0)
method canProofBeMarkedAsMissing*(
market: OnChainMarket,
id: SlotId,
period: Period
): Future[bool] {.async.} =
let provider = market.contract.provider
let contractWithoutSigner = market.contract.connect(provider)
let overrides = CallOverrides(blockTag: some BlockTag.pending)
try:
discard await contractWithoutSigner.markProofAsMissing(id, period, overrides)
return true
except EthersError as e:
trace "Proof cannot be marked as missing", msg = e.msg
return false
method reserveSlot*(
market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256) {.async.} =
convertEthersError:
discard await market.contract.reserveSlot(requestId, slotIndex).confirm(0)
method canReserveSlot*(
market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256): Future[bool] {.async.} =
convertEthersError:
return await market.contract.canReserveSlot(requestId, slotIndex)
method subscribeRequests*(market: OnChainMarket,
callback: OnRequest):
Future[MarketSubscription] {.async.} =
proc onEvent(event: StorageRequested) {.upraises:[].} =
callback(event.requestId,
event.ask,
event.expiry)
convertEthersError:
let subscription = await market.contract.subscribe(StorageRequested, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeSlotFilled*(market: OnChainMarket,
callback: OnSlotFilled):
Future[MarketSubscription] {.async.} =
proc onEvent(event: SlotFilled) {.upraises:[].} =
callback(event.requestId, event.slotIndex)
convertEthersError:
let subscription = await market.contract.subscribe(SlotFilled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeSlotFilled*(market: OnChainMarket,
requestId: RequestId,
slotIndex: UInt256,
callback: OnSlotFilled):
Future[MarketSubscription] {.async.} =
proc onSlotFilled(eventRequestId: RequestId, eventSlotIndex: UInt256) =
if eventRequestId == requestId and eventSlotIndex == slotIndex:
callback(requestId, slotIndex)
convertEthersError:
return await market.subscribeSlotFilled(onSlotFilled)
method subscribeSlotFreed*(market: OnChainMarket,
callback: OnSlotFreed):
Future[MarketSubscription] {.async.} =
proc onEvent(event: SlotFreed) {.upraises:[].} =
callback(event.requestId, event.slotIndex)
convertEthersError:
let subscription = await market.contract.subscribe(SlotFreed, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeSlotReservationsFull*(
market: OnChainMarket,
callback: OnSlotReservationsFull): Future[MarketSubscription] {.async.} =
proc onEvent(event: SlotReservationsFull) {.upraises:[].} =
callback(event.requestId, event.slotIndex)
convertEthersError:
let subscription = await market.contract.subscribe(SlotReservationsFull, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeFulfillment(market: OnChainMarket,
callback: OnFulfillment):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFulfilled) {.upraises:[].} =
callback(event.requestId)
convertEthersError:
let subscription = await market.contract.subscribe(RequestFulfilled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeFulfillment(market: OnChainMarket,
requestId: RequestId,
callback: OnFulfillment):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFulfilled) {.upraises:[].} =
if event.requestId == requestId:
callback(event.requestId)
convertEthersError:
let subscription = await market.contract.subscribe(RequestFulfilled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestCancelled*(market: OnChainMarket,
callback: OnRequestCancelled):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestCancelled) {.upraises:[].} =
callback(event.requestId)
convertEthersError:
let subscription = await market.contract.subscribe(RequestCancelled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestCancelled*(market: OnChainMarket,
requestId: RequestId,
callback: OnRequestCancelled):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestCancelled) {.upraises:[].} =
if event.requestId == requestId:
callback(event.requestId)
convertEthersError:
let subscription = await market.contract.subscribe(RequestCancelled, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestFailed*(market: OnChainMarket,
callback: OnRequestFailed):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFailed) {.upraises:[]} =
callback(event.requestId)
convertEthersError:
let subscription = await market.contract.subscribe(RequestFailed, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeRequestFailed*(market: OnChainMarket,
requestId: RequestId,
callback: OnRequestFailed):
Future[MarketSubscription] {.async.} =
proc onEvent(event: RequestFailed) {.upraises:[]} =
if event.requestId == requestId:
callback(event.requestId)
convertEthersError:
let subscription = await market.contract.subscribe(RequestFailed, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method subscribeProofSubmission*(market: OnChainMarket,
callback: OnProofSubmitted):
Future[MarketSubscription] {.async.} =
proc onEvent(event: ProofSubmitted) {.upraises: [].} =
callback(event.id)
convertEthersError:
let subscription = await market.contract.subscribe(ProofSubmitted, onEvent)
return OnChainMarketSubscription(eventSubscription: subscription)
method unsubscribe*(subscription: OnChainMarketSubscription) {.async.} =
await subscription.eventSubscription.unsubscribe()
method queryPastEvents*[T: MarketplaceEvent](
market: OnChainMarket,
_: type T,
blocksAgo: int): Future[seq[T]] {.async.} =
convertEthersError:
let contract = market.contract
let provider = contract.provider
let head = await provider.getBlockNumber()
let fromBlock = BlockTag.init(head - blocksAgo.abs.u256)
return await contract.queryFilter(T,
fromBlock,
BlockTag.latest)

codex/contracts/marketplace.nim Normal file

@ -0,0 +1,56 @@
import pkg/ethers
import pkg/ethers/erc20
import pkg/json_rpc/rpcclient
import pkg/stint
import pkg/chronos
import ../clock
import ./requests
import ./proofs
import ./config
export stint
export ethers except `%`, `%*`, toJson
export erc20 except `%`, `%*`, toJson
export config
export requests
type
Marketplace* = ref object of Contract
proc config*(marketplace: Marketplace): MarketplaceConfig {.contract, view.}
proc token*(marketplace: Marketplace): Address {.contract, view.}
proc slashMisses*(marketplace: Marketplace): UInt256 {.contract, view.}
proc slashPercentage*(marketplace: Marketplace): UInt256 {.contract, view.}
proc minCollateralThreshold*(marketplace: Marketplace): UInt256 {.contract, view.}
proc requestStorage*(marketplace: Marketplace, request: StorageRequest): ?TransactionResponse {.contract.}
proc fillSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256, proof: Groth16Proof): ?TransactionResponse {.contract.}
proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId): ?TransactionResponse {.contract.}
proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId, withdrawAddress: Address): ?TransactionResponse {.contract.}
proc freeSlot*(marketplace: Marketplace, id: SlotId): ?TransactionResponse {.contract.}
proc freeSlot*(marketplace: Marketplace, id: SlotId, rewardRecipient: Address, collateralRecipient: Address): ?TransactionResponse {.contract.}
proc getRequest*(marketplace: Marketplace, id: RequestId): StorageRequest {.contract, view.}
proc getHost*(marketplace: Marketplace, id: SlotId): Address {.contract, view.}
proc getActiveSlot*(marketplace: Marketplace, id: SlotId): Slot {.contract, view.}
proc myRequests*(marketplace: Marketplace): seq[RequestId] {.contract, view.}
proc mySlots*(marketplace: Marketplace): seq[SlotId] {.contract, view.}
proc requestState*(marketplace: Marketplace, requestId: RequestId): RequestState {.contract, view.}
proc slotState*(marketplace: Marketplace, slotId: SlotId): SlotState {.contract, view.}
proc requestEnd*(marketplace: Marketplace, requestId: RequestId): SecondsSince1970 {.contract, view.}
proc requestExpiry*(marketplace: Marketplace, requestId: RequestId): SecondsSince1970 {.contract, view.}
proc proofTimeout*(marketplace: Marketplace): UInt256 {.contract, view.}
proc proofEnd*(marketplace: Marketplace, id: SlotId): UInt256 {.contract, view.}
proc missingProofs*(marketplace: Marketplace, id: SlotId): UInt256 {.contract, view.}
proc isProofRequired*(marketplace: Marketplace, id: SlotId): bool {.contract, view.}
proc willProofBeRequired*(marketplace: Marketplace, id: SlotId): bool {.contract, view.}
proc getChallenge*(marketplace: Marketplace, id: SlotId): array[32, byte] {.contract, view.}
proc getPointer*(marketplace: Marketplace, id: SlotId): uint8 {.contract, view.}
proc submitProof*(marketplace: Marketplace, id: SlotId, proof: Groth16Proof): ?TransactionResponse {.contract.}
proc markProofAsMissing*(marketplace: Marketplace, id: SlotId, period: UInt256): ?TransactionResponse {.contract.}
proc reserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): ?TransactionResponse {.contract.}
proc canReserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): bool {.contract, view.}

codex/contracts/proofs.nim Normal file

@ -0,0 +1,43 @@
import pkg/stint
import pkg/contractabi
import pkg/ethers/fields
type
Groth16Proof* = object
a*: G1Point
b*: G2Point
c*: G1Point
G1Point* = object
x*: UInt256
y*: UInt256
# A field element F_{p^2} encoded as `real + i * imag`
Fp2Element* = object
real*: UInt256
imag*: UInt256
G2Point* = object
x*: Fp2Element
y*: Fp2Element
func solidityType*(_: type G1Point): string =
solidityType(G1Point.fieldTypes)
func solidityType*(_: type Fp2Element): string =
solidityType(Fp2Element.fieldTypes)
func solidityType*(_: type G2Point): string =
solidityType(G2Point.fieldTypes)
func solidityType*(_: type Groth16Proof): string =
solidityType(Groth16Proof.fieldTypes)
func encode*(encoder: var AbiEncoder, point: G1Point) =
encoder.write(point.fieldValues)
func encode*(encoder: var AbiEncoder, element: Fp2Element) =
encoder.write(element.fieldValues)
func encode*(encoder: var AbiEncoder, point: G2Point) =
encoder.write(point.fieldValues)
func encode*(encoder: var AbiEncoder, proof: Groth16Proof) =
encoder.write(proof.fieldValues)

codex/contracts/requests.nim Normal file

@ -0,0 +1,186 @@
import std/hashes
import std/sequtils
import std/typetraits
import pkg/contractabi
import pkg/nimcrypto
import pkg/ethers/fields
import pkg/questionable/results
import pkg/stew/byteutils
import pkg/upraises
import ../logutils
import ../utils/json
export contractabi
type
StorageRequest* = object
client* {.serialize.}: Address
ask* {.serialize.}: StorageAsk
content* {.serialize.}: StorageContent
expiry* {.serialize.}: UInt256
nonce*: Nonce
StorageAsk* = object
slots* {.serialize.}: uint64
slotSize* {.serialize.}: UInt256
duration* {.serialize.}: UInt256
proofProbability* {.serialize.}: UInt256
reward* {.serialize.}: UInt256
collateral* {.serialize.}: UInt256
maxSlotLoss* {.serialize.}: uint64
StorageContent* = object
cid* {.serialize.}: string
merkleRoot*: array[32, byte]
Slot* = object
request* {.serialize.}: StorageRequest
slotIndex* {.serialize.}: UInt256
SlotId* = distinct array[32, byte]
RequestId* = distinct array[32, byte]
Nonce* = distinct array[32, byte]
RequestState* {.pure.} = enum
New
Started
Cancelled
Finished
Failed
SlotState* {.pure.} = enum
Free
Filled
Finished
Failed
Paid
Cancelled
proc `==`*(x, y: Nonce): bool {.borrow.}
proc `==`*(x, y: RequestId): bool {.borrow.}
proc `==`*(x, y: SlotId): bool {.borrow.}
proc hash*(x: SlotId): Hash {.borrow.}
proc hash*(x: Nonce): Hash {.borrow.}
proc hash*(x: Address): Hash {.borrow.}
func toArray*(id: RequestId | SlotId | Nonce): array[32, byte] =
array[32, byte](id)
proc `$`*(id: RequestId | SlotId | Nonce): string =
id.toArray.toHex
proc fromHex*(T: type RequestId, hex: string): T =
T array[32, byte].fromHex(hex)
proc fromHex*(T: type SlotId, hex: string): T =
T array[32, byte].fromHex(hex)
proc fromHex*(T: type Nonce, hex: string): T =
T array[32, byte].fromHex(hex)
proc fromHex*[T: distinct](_: type T, hex: string): T =
type baseType = T.distinctBase
T baseType.fromHex(hex)
proc toHex*[T: distinct](id: T): string =
type baseType = T.distinctBase
baseType(id).toHex
logutils.formatIt(LogFormat.textLines, Nonce): it.short0xHexLog
logutils.formatIt(LogFormat.textLines, RequestId): it.short0xHexLog
logutils.formatIt(LogFormat.textLines, SlotId): it.short0xHexLog
logutils.formatIt(LogFormat.json, Nonce): it.to0xHexLog
logutils.formatIt(LogFormat.json, RequestId): it.to0xHexLog
logutils.formatIt(LogFormat.json, SlotId): it.to0xHexLog
func fromTuple(_: type StorageRequest, tupl: tuple): StorageRequest =
StorageRequest(
client: tupl[0],
ask: tupl[1],
content: tupl[2],
expiry: tupl[3],
nonce: tupl[4]
)
func fromTuple(_: type Slot, tupl: tuple): Slot =
Slot(
request: tupl[0],
slotIndex: tupl[1]
)
func fromTuple(_: type StorageAsk, tupl: tuple): StorageAsk =
StorageAsk(
slots: tupl[0],
slotSize: tupl[1],
duration: tupl[2],
proofProbability: tupl[3],
reward: tupl[4],
collateral: tupl[5],
maxSlotLoss: tupl[6]
)
func fromTuple(_: type StorageContent, tupl: tuple): StorageContent =
StorageContent(
cid: tupl[0],
merkleRoot: tupl[1]
)
func solidityType*(_: type StorageContent): string =
solidityType(StorageContent.fieldTypes)
func solidityType*(_: type StorageAsk): string =
solidityType(StorageAsk.fieldTypes)
func solidityType*(_: type StorageRequest): string =
solidityType(StorageRequest.fieldTypes)
func encode*(encoder: var AbiEncoder, content: StorageContent) =
encoder.write(content.fieldValues)
func encode*(encoder: var AbiEncoder, ask: StorageAsk) =
encoder.write(ask.fieldValues)
func encode*(encoder: var AbiEncoder, id: RequestId | SlotId | Nonce) =
encoder.write(id.toArray)
func encode*(encoder: var AbiEncoder, request: StorageRequest) =
encoder.write(request.fieldValues)
func encode*(encoder: var AbiEncoder, request: Slot) =
encoder.write(request.fieldValues)
func decode*(decoder: var AbiDecoder, T: type StorageContent): ?!T =
let tupl = ?decoder.read(StorageContent.fieldTypes)
success StorageContent.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type StorageAsk): ?!T =
let tupl = ?decoder.read(StorageAsk.fieldTypes)
success StorageAsk.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type StorageRequest): ?!T =
let tupl = ?decoder.read(StorageRequest.fieldTypes)
success StorageRequest.fromTuple(tupl)
func decode*(decoder: var AbiDecoder, T: type Slot): ?!T =
let tupl = ?decoder.read(Slot.fieldTypes)
success Slot.fromTuple(tupl)
func id*(request: StorageRequest): RequestId =
let encoding = AbiEncoder.encode((request, ))
RequestId(keccak256.digest(encoding).data)
func slotId*(requestId: RequestId, slotIndex: UInt256): SlotId =
let encoding = AbiEncoder.encode((requestId, slotIndex))
SlotId(keccak256.digest(encoding).data)
func slotId*(request: StorageRequest, slotIndex: UInt256): SlotId =
slotId(request.id, slotIndex)
func id*(slot: Slot): SlotId =
slotId(slot.request, slot.slotIndex)
func pricePerSlot*(ask: StorageAsk): UInt256 =
ask.duration * ask.reward
func price*(ask: StorageAsk): UInt256 =
ask.slots.u256 * ask.pricePerSlot
func price*(request: StorageRequest): UInt256 =
request.ask.price
func size*(ask: StorageAsk): UInt256 =
ask.slots.u256 * ask.slotSize

codex/discovery.nim

@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -7,19 +7,16 @@
## This file may not be copied, modified, or distributed except according to
## those terms.
-{.push raises: [].}
import std/algorithm
-import std/net
import std/sequtils
import pkg/chronos
import pkg/libp2p/[cid, multicodec, routing_record, signed_envelope]
import pkg/questionable
import pkg/questionable/results
+import pkg/stew/shims/net
import pkg/contractabi/address as ca
import pkg/codexdht/discv5/[routing_table, protocol as discv5]
-from pkg/nimcrypto import keccak256
import ./rng
import ./errors
@ -34,17 +31,15 @@ export discv5
logScope:
  topics = "codex discovery"
-type Discovery* = ref object of RootObj
-  protocol*: discv5.Protocol # dht protocol
-  key: PrivateKey # private key
-  peerId: PeerId # the peer id of the local node
-  announceAddrs*: seq[MultiAddress] # addresses announced as part of the provider records
-  providerRecord*: ?SignedPeerRecord
-    # record to advertice node connection information, this carry any
-    # address that the node can be connected on
-  dhtRecord*: ?SignedPeerRecord # record to advertice DHT connection information
-  isStarted: bool
-  store: Datastore
+type
+  Discovery* = ref object of RootObj
+    protocol*: discv5.Protocol # dht protocol
+    key: PrivateKey # private key
+    peerId: PeerId # the peer id of the local node
+    announceAddrs*: seq[MultiAddress] # addresses announced as part of the provider records
+    providerRecord*: ?SignedPeerRecord # record to advertice node connection information, this carry any
+                                       # address that the node can be connected on
+    dhtRecord*: ?SignedPeerRecord # record to advertice DHT connection information
proc toNodeId*(cid: Cid): NodeId =
  ## Cid to discovery id
@ -59,121 +54,82 @@ proc toNodeId*(host: ca.Address): NodeId =
  readUintBE[256](keccak256.digest(host.toArray).data)
proc findPeer*(
-    d: Discovery, peerId: PeerId
-): Future[?PeerRecord] {.async: (raises: [CancelledError]).} =
+  d: Discovery,
+  peerId: PeerId): Future[?PeerRecord] {.async.} =
  trace "protocol.resolve..."
  ## Find peer using the given Discovery object
  ##
-  try:
-    let node = await d.protocol.resolve(toNodeId(peerId))
-    return
-      if node.isSome():
-        node.get().record.data.some
-      else:
-        PeerRecord.none
-  except CancelledError as exc:
-    warn "Error finding peer", peerId = peerId, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error finding peer", peerId = peerId, exc = exc.msg
-    return PeerRecord.none
+  let
+    node = await d.protocol.resolve(toNodeId(peerId))
+  return
+    if node.isSome():
+      node.get().record.data.some
+    else:
+      PeerRecord.none
method find*(
-    d: Discovery, cid: Cid
-): Future[seq[SignedPeerRecord]] {.async: (raises: [CancelledError]), base.} =
+  d: Discovery,
+  cid: Cid): Future[seq[SignedPeerRecord]] {.async, base.} =
  ## Find block providers
  ##
-  try:
-    without providers =? (await d.protocol.getProviders(cid.toNodeId())).mapFailure,
-      error:
-      warn "Error finding providers for block", cid, error = error.msg
-    return providers.filterIt(not (it.data.peerId == d.peerId))
-  except CancelledError as exc:
-    warn "Error finding providers for block", cid, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error finding providers for block", cid, exc = exc.msg
+  without providers =?
+    (await d.protocol.getProviders(cid.toNodeId())).mapFailure, error:
+    warn "Error finding providers for block", cid, error = error.msg
+  return providers.filterIt( not (it.data.peerId == d.peerId) )
-method provide*(d: Discovery, cid: Cid) {.async: (raises: [CancelledError]), base.} =
+method provide*(d: Discovery, cid: Cid) {.async, base.} =
  ## Provide a block Cid
  ##
-  try:
-    let nodes = await d.protocol.addProvider(cid.toNodeId(), d.providerRecord.get)
-    if nodes.len <= 0:
-      warn "Couldn't provide to any nodes!"
-  except CancelledError as exc:
-    warn "Error providing block", cid, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error providing block", cid, exc = exc.msg
+  let
+    nodes = await d.protocol.addProvider(
+      cid.toNodeId(), d.providerRecord.get)
+  if nodes.len <= 0:
+    warn "Couldn't provide to any nodes!"
method find*(
-    d: Discovery, host: ca.Address
-): Future[seq[SignedPeerRecord]] {.async: (raises: [CancelledError]), base.} =
+  d: Discovery,
+  host: ca.Address): Future[seq[SignedPeerRecord]] {.async, base.} =
  ## Find host providers
  ##
-  try:
-    trace "Finding providers for host", host = $host
-    without var providers =? (await d.protocol.getProviders(host.toNodeId())).mapFailure,
-      error:
-      trace "Error finding providers for host", host = $host, exc = error.msg
-      return
-    if providers.len <= 0:
-      trace "No providers found", host = $host
-      return
-    providers.sort do(a, b: SignedPeerRecord) -> int:
-      system.cmp[uint64](a.data.seqNo, b.data.seqNo)
-    return providers
-  except CancelledError as exc:
-    warn "Error finding providers for host", host = $host, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error finding providers for host", host = $host, exc = exc.msg
+  trace "Finding providers for host", host = $host
+  without var providers =?
+    (await d.protocol.getProviders(host.toNodeId())).mapFailure, error:
+    trace "Error finding providers for host", host = $host, exc = error.msg
+    return
+  if providers.len <= 0:
+    trace "No providers found", host = $host
+    return
+  providers.sort do(a, b: SignedPeerRecord) -> int:
+    system.cmp[uint64](a.data.seqNo, b.data.seqNo)
+  return providers
-method provide*(
-    d: Discovery, host: ca.Address
-) {.async: (raises: [CancelledError]), base.} =
+method provide*(d: Discovery, host: ca.Address) {.async, base.} =
  ## Provide hosts
  ##
-  try:
-    trace "Providing host", host = $host
-    let nodes = await d.protocol.addProvider(host.toNodeId(), d.providerRecord.get)
-    if nodes.len > 0:
-      trace "Provided to nodes", nodes = nodes.len
-  except CancelledError as exc:
-    warn "Error providing host", host = $host, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error providing host", host = $host, exc = exc.msg
+  trace "Providing host", host = $host
+  let
+    nodes = await d.protocol.addProvider(
+      host.toNodeId(), d.providerRecord.get)
+  if nodes.len > 0:
+    trace "Provided to nodes", nodes = nodes.len
method removeProvider*(
-    d: Discovery, peerId: PeerId
-): Future[void] {.base, async: (raises: [CancelledError]).} =
+  d: Discovery,
+  peerId: PeerId): Future[void] {.base.} =
  ## Remove provider from providers table
  ##
  trace "Removing provider", peerId
-  try:
-    await d.protocol.removeProvidersLocal(peerId)
-  except CancelledError as exc:
-    warn "Error removing provider", peerId = peerId, exc = exc.msg
-    raise exc
-  except CatchableError as exc:
-    warn "Error removing provider", peerId = peerId, exc = exc.msg
-  except Exception as exc: # Something in discv5 is raising Exception
-    warn "Error removing provider", peerId = peerId, exc = exc.msg
-    raiseAssert("Unexpected Exception in removeProvider")
+  d.protocol.removeProvidersLocal(peerId)
proc updateAnnounceRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
  ## Update providers record
@ -181,72 +137,54 @@ proc updateAnnounceRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
  d.announceAddrs = @addrs
-  info "Updating announce record", addrs = d.announceAddrs
-  d.providerRecord = SignedPeerRecord
-    .init(d.key, PeerRecord.init(d.peerId, d.announceAddrs))
-    .expect("Should construct signed record").some
+  trace "Updating announce record", addrs = d.announceAddrs
+  d.providerRecord = SignedPeerRecord.init(
+    d.key, PeerRecord.init(d.peerId, d.announceAddrs))
+    .expect("Should construct signed record").some
  if not d.protocol.isNil:
-    d.protocol.updateRecord(d.providerRecord).expect("Should update SPR")
+    d.protocol.updateRecord(d.providerRecord)
+      .expect("Should update SPR")
-proc updateDhtRecord*(d: Discovery, addrs: openArray[MultiAddress]) =
+proc updateDhtRecord*(d: Discovery, ip: ValidIpAddress, port: Port) =
  ## Update providers record
  ##
-  info "Updating Dht record", addrs = addrs
-  d.dhtRecord = SignedPeerRecord
-    .init(d.key, PeerRecord.init(d.peerId, @addrs))
-    .expect("Should construct signed record").some
+  trace "Updating Dht record", ip, port = $port
+  d.dhtRecord = SignedPeerRecord.init(
+    d.key, PeerRecord.init(d.peerId, @[
+      MultiAddress.init(
+        ip,
+        IpTransportProtocol.udpProtocol,
+        port)])).expect("Should construct signed record").some
  if not d.protocol.isNil:
-    d.protocol.updateRecord(d.dhtRecord).expect("Should update SPR")
+    d.protocol.updateRecord(d.dhtRecord)
+      .expect("Should update SPR")
-proc start*(d: Discovery) {.async: (raises: []).} =
-  try:
-    d.protocol.open()
-    await d.protocol.start()
-    d.isStarted = true
-  except CatchableError as exc:
-    error "Error starting discovery", exc = exc.msg
+proc start*(d: Discovery) {.async.} =
+  d.protocol.open()
+  await d.protocol.start()
-proc stop*(d: Discovery) {.async: (raises: []).} =
-  if not d.isStarted:
-    warn "Discovery not started, skipping stop"
-    return
-  try:
-    await noCancel d.protocol.closeWait()
-    d.isStarted = false
-    trace "Discovery stopped"
-  except CatchableError as exc:
-    error "Error stopping discovery", exc = exc.msg
+proc stop*(d: Discovery) {.async.} =
+  await d.protocol.closeWait()
-proc close*(d: Discovery) {.async: (raises: []).} =
-  if d.store.isNil:
-    warn "Discovery store is nil, skipping close"
-    return
-  let res = await noCancel d.store.close()
-  if res.isErr:
-    error "Error closing discovery store", error = res.error().msg
-  else:
-    trace "Discovery store closed"
proc new*(
  T: type Discovery,
  key: PrivateKey,
-  bindIp = IPv4_any(),
+  bindIp = ValidIpAddress.init(IPv4_any()),
  bindPort = 0.Port,
  announceAddrs: openArray[MultiAddress],
  bootstrapNodes: openArray[SignedPeerRecord] = [],
-  store: Datastore = SQLiteDatastore.new(Memory).expect("Should not fail!"),
+  store: Datastore = SQLiteDatastore.new(Memory).expect("Should not fail!")
): Discovery =
  ## Create a new Discovery node instance for the given key and datastore
  ##
-  var self = Discovery(
-    key: key, peerId: PeerId.init(key).expect("Should construct PeerId"), store: store
-  )
+  var
+    self = Discovery(
+      key: key,
+      peerId: PeerId.init(key).expect("Should construct PeerId"))
  self.updateAnnounceRecord(announceAddrs)
@ -254,20 +192,22 @@ proc new*(
  # FIXME disable IP limits temporarily so we can run our workshop. Re-enable
  # and figure out proper solution.
  let discoveryConfig = DiscoveryConfig(
-    tableIpLimits: TableIpLimits(tableIpLimit: high(uint), bucketIpLimit: high(uint)),
-    bitsPerHop: DefaultBitsPerHop,
+    tableIpLimits: TableIpLimits(
+      tableIpLimit: high(uint),
+      bucketIpLimit: high(uint)
+    ),
+    bitsPerHop: DefaultBitsPerHop
  )
  # --------------------------------------------------------------------------
  self.protocol = newProtocol(
    key,
-    bindIp = bindIp,
+    bindIp = bindIp.toNormalIp,
    bindPort = bindPort,
    record = self.providerRecord.get,
    bootstrapRecords = bootstrapNodes,
    rng = Rng.instance(),
    providers = ProvidersManager.new(store),
-    config = discoveryConfig,
-  )
+    config = discoveryConfig)
  self

codex/erasure.nim Normal file

@ -0,0 +1,25 @@
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import ./erasure/erasure
import ./erasure/backends/leopard
export erasure
func leoEncoderProvider*(
size, buffers, parity: int
): EncoderBackend {.raises: [Defect].} =
## create new Leo Encoder
LeoEncoderBackend.new(size, buffers, parity)
func leoDecoderProvider*(
size, buffers, parity: int
): DecoderBackend {.raises: [Defect].} =
## create new Leo Decoder
LeoDecoderBackend.new(size, buffers, parity)

codex/erasure/asyncbackend.nim Normal file

@ -0,0 +1,225 @@
## Nim-Codex
## Copyright (c) 2024 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import std/sequtils
import pkg/taskpools
import pkg/taskpools/flowvars
import pkg/chronos
import pkg/chronos/threadsync
import pkg/questionable/results
import ./backend
import ../errors
import ../logutils
logScope:
topics = "codex asyncerasure"
const
CompletitionTimeout = 1.seconds # Maximum await time for completition after receiving a signal
CompletitionRetryDelay = 10.millis
type
EncoderBackendPtr = ptr EncoderBackend
DecoderBackendPtr = ptr DecoderBackend
# Args objects are missing seq[seq[byte]] field, to avoid unnecessary data copy
EncodeTaskArgs = object
signal: ThreadSignalPtr
backend: EncoderBackendPtr
blockSize: int
ecM: int
DecodeTaskArgs = object
signal: ThreadSignalPtr
backend: DecoderBackendPtr
blockSize: int
ecK: int
SharedArrayHolder*[T] = object
data: ptr UncheckedArray[T]
size: int
EncodeTaskResult = Result[SharedArrayHolder[byte], cstring]
DecodeTaskResult = Result[SharedArrayHolder[byte], cstring]
proc encodeTask(args: EncodeTaskArgs, data: seq[seq[byte]]): EncodeTaskResult =
var
data = data.unsafeAddr
parity = newSeqWith[seq[byte]](args.ecM, newSeq[byte](args.blockSize))
try:
let res = args.backend[].encode(data[], parity)
if res.isOk:
let
resDataSize = parity.len * args.blockSize
resData = cast[ptr UncheckedArray[byte]](allocShared0(resDataSize))
arrHolder = SharedArrayHolder[byte](
data: resData,
size: resDataSize
)
for i in 0..<parity.len:
copyMem(addr resData[i * args.blockSize], addr parity[i][0], args.blockSize)
return ok(arrHolder)
else:
return err(res.error)
except CatchableError as exception:
return err(exception.msg.cstring)
finally:
if err =? args.signal.fireSync().mapFailure.errorOption():
error "Error firing signal", msg = err.msg
proc decodeTask(args: DecodeTaskArgs, data: seq[seq[byte]], parity: seq[seq[byte]]): DecodeTaskResult =
var
data = data.unsafeAddr
parity = parity.unsafeAddr
recovered = newSeqWith[seq[byte]](args.ecK, newSeq[byte](args.blockSize))
try:
let res = args.backend[].decode(data[], parity[], recovered)
if res.isOk:
let
resDataSize = recovered.len * args.blockSize
resData = cast[ptr UncheckedArray[byte]](allocShared0(resDataSize))
arrHolder = SharedArrayHolder[byte](
data: resData,
size: resDataSize
)
for i in 0..<recovered.len:
copyMem(addr resData[i * args.blockSize], addr recovered[i][0], args.blockSize)
return ok(arrHolder)
else:
return err(res.error)
except CatchableError as exception:
return err(exception.msg.cstring)
finally:
if err =? args.signal.fireSync().mapFailure.errorOption():
error "Error firing signal", msg = err.msg
proc proxySpawnEncodeTask(
tp: Taskpool,
args: EncodeTaskArgs,
data: ref seq[seq[byte]]
): Flowvar[EncodeTaskResult] =
# FIXME Uncomment the code below after addressing an issue:
# https://github.com/codex-storage/nim-codex/issues/854
# tp.spawn encodeTask(args, data[])
let fv = EncodeTaskResult.newFlowVar
fv.readyWith(encodeTask(args, data[]))
return fv
proc proxySpawnDecodeTask(
tp: Taskpool,
args: DecodeTaskArgs,
data: ref seq[seq[byte]],
parity: ref seq[seq[byte]]
): Flowvar[DecodeTaskResult] =
# FIXME Uncomment the code below after addressing an issue:
# https://github.com/codex-storage/nim-codex/issues/854
# tp.spawn decodeTask(args, data[], parity[])
let fv = DecodeTaskResult.newFlowVar
fv.readyWith(decodeTask(args, data[], parity[]))
return fv
proc awaitResult[T](signal: ThreadSignalPtr, handle: Flowvar[T]): Future[?!T] {.async.} =
await wait(signal)
var
res: T
awaitTotal: Duration
while awaitTotal < CompletitionTimeout:
if handle.tryComplete(res):
return success(res)
else:
awaitTotal += CompletitionRetryDelay
await sleepAsync(CompletitionRetryDelay)
return failure("Task signaled finish but didn't return any result within " & $CompletitionTimeout)
proc asyncEncode*(
tp: Taskpool,
backend: EncoderBackend,
data: ref seq[seq[byte]],
blockSize: int,
ecM: int
): Future[?!ref seq[seq[byte]]] {.async.} =
without signal =? ThreadSignalPtr.new().mapFailure, err:
return failure(err)
try:
let
blockSize = data[0].len
args = EncodeTaskArgs(signal: signal, backend: unsafeAddr backend, blockSize: blockSize, ecM: ecM)
handle = proxySpawnEncodeTask(tp, args, data)
without res =? await awaitResult(signal, handle), err:
return failure(err)
if res.isOk:
var parity = seq[seq[byte]].new()
parity[].setLen(ecM)
for i in 0..<parity[].len:
parity[i] = newSeq[byte](blockSize)
copyMem(addr parity[i][0], addr res.value.data[i * blockSize], blockSize)
deallocShared(res.value.data)
return success(parity)
else:
return failure($res.error)
finally:
if err =? signal.close().mapFailure.errorOption():
error "Error closing signal", msg = $err.msg
proc asyncDecode*(
tp: Taskpool,
backend: DecoderBackend,
data, parity: ref seq[seq[byte]],
blockSize: int
): Future[?!ref seq[seq[byte]]] {.async.} =
without signal =? ThreadSignalPtr.new().mapFailure, err:
return failure(err)
try:
let
ecK = data[].len
args = DecodeTaskArgs(signal: signal, backend: unsafeAddr backend, blockSize: blockSize, ecK: ecK)
handle = proxySpawnDecodeTask(tp, args, data, parity)
without res =? await awaitResult(signal, handle), err:
return failure(err)
if res.isOk:
var recovered = seq[seq[byte]].new()
recovered[].setLen(ecK)
for i in 0..<recovered[].len:
recovered[i] = newSeq[byte](blockSize)
copyMem(addr recovered[i][0], addr res.value.data[i * blockSize], blockSize)
deallocShared(res.value.data)
return success(recovered)
else:
return failure($res.error)
finally:
if err =? signal.close().mapFailure.errorOption():
error "Error closing signal", msg = $err.msg

codex/erasure/backend.nim Normal file

@ -0,0 +1,47 @@
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import pkg/upraises
push: {.upraises: [].}
import ../stores
type
ErasureBackend* = ref object of RootObj
blockSize*: int # block size in bytes
buffers*: int # number of original pieces
parity*: int # number of redundancy pieces
EncoderBackend* = ref object of ErasureBackend
DecoderBackend* = ref object of ErasureBackend
method release*(self: ErasureBackend) {.base.} =
## release the backend
##
raiseAssert("not implemented!")
method encode*(
self: EncoderBackend,
buffers,
parity: var openArray[seq[byte]]
): Result[void, cstring] {.base.} =
## encode buffers using a backend
##
raiseAssert("not implemented!")
method decode*(
self: DecoderBackend,
buffers,
parity,
recovered: var openArray[seq[byte]]
): Result[void, cstring] {.base.} =
## decode buffers using a backend
##
raiseAssert("not implemented!")

codex/erasure/backends/leopard.nim Normal file

@ -0,0 +1,93 @@
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import std/options
import pkg/leopard
import pkg/stew/results
import ../backend
type
LeoEncoderBackend* = ref object of EncoderBackend
encoder*: Option[LeoEncoder]
LeoDecoderBackend* = ref object of DecoderBackend
decoder*: Option[LeoDecoder]
method encode*(
self: LeoEncoderBackend,
data,
parity: var openArray[seq[byte]]): Result[void, cstring] =
## Encode data using Leopard backend
if parity.len == 0:
return ok()
var encoder = if self.encoder.isNone:
self.encoder = (? LeoEncoder.init(
self.blockSize,
self.buffers,
self.parity)).some
self.encoder.get()
else:
self.encoder.get()
encoder.encode(data, parity)
method decode*(
self: LeoDecoderBackend,
data,
parity,
recovered: var openArray[seq[byte]]): Result[void, cstring] =
## Decode data using given Leopard backend
var decoder =
if self.decoder.isNone:
self.decoder = (? LeoDecoder.init(
self.blockSize,
self.buffers,
self.parity)).some
self.decoder.get()
else:
self.decoder.get()
decoder.decode(data, parity, recovered)
method release*(self: LeoEncoderBackend) =
if self.encoder.isSome:
self.encoder.get().free()
method release*(self: LeoDecoderBackend) =
if self.decoder.isSome:
self.decoder.get().free()
proc new*(
T: type LeoEncoderBackend,
blockSize,
buffers,
parity: int): LeoEncoderBackend =
## Create an instance of an Leopard Encoder backend
##
LeoEncoderBackend(
blockSize: blockSize,
buffers: buffers,
parity: parity)
proc new*(
T: type LeoDecoderBackend,
blockSize,
buffers,
parity: int): LeoDecoderBackend =
## Create an instance of an Leopard Decoder backend
##
LeoDecoderBackend(
blockSize: blockSize,
buffers: buffers,
parity: parity)

codex/erasure/erasure.nim Normal file

@ -0,0 +1,482 @@
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
import pkg/upraises
push: {.upraises: [].}
import std/sequtils
import std/sugar
import pkg/chronos
import pkg/libp2p/[multicodec, cid, multihash]
import pkg/libp2p/protobuf/minprotobuf
import pkg/taskpools
import ../logutils
import ../manifest
import ../merkletree
import ../stores
import ../blocktype as bt
import ../utils
import ../utils/asynciter
import ../indexingstrategy
import ../errors
import pkg/stew/byteutils
import ./backend
import ./asyncbackend
export backend
logScope:
topics = "codex erasure"
type
## Encode a manifest into one that is erasure protected.
##
## The new manifest has K `blocks` that are encoded into
## additional M `parity` blocks. The resulting dataset
## is padded with empty blocks if it doesn't have a square
## shape.
##
## NOTE: The padding blocks could be excluded
## from transmission, but they aren't for now.
##
## The resulting dataset is logically divided into rows
## where a row is made up of B blocks. There are then,
## K + M = N rows in total, each of length B blocks. Rows
## are assumed to be of the same number of (B) blocks.
##
## The encoding is systematic and the rows can be
## read sequentially by any node without decoding.
##
## Decoding is possible with any K rows or partial K
## columns (with up to M blocks missing per column),
## or any combination there of.
##
EncoderProvider* = proc(size, blocks, parity: int): EncoderBackend
{.raises: [Defect], noSideEffect.}
DecoderProvider* = proc(size, blocks, parity: int): DecoderBackend
{.raises: [Defect], noSideEffect.}
Erasure* = ref object
encoderProvider*: EncoderProvider
decoderProvider*: DecoderProvider
store*: BlockStore
taskpool: Taskpool
EncodingParams = object
ecK: Natural
ecM: Natural
rounded: Natural
steps: Natural
blocksCount: Natural
strategy: StrategyType
ErasureError* = object of CodexError
InsufficientBlocksError* = object of ErasureError
# Minimum size, in bytes, that the dataset must have had
# for the encoding request to have succeeded with the parameters
# provided.
minSize*: NBytes
func indexToPos(steps, idx, step: int): int {.inline.} =
## Convert an index to a position in the encoded
## dataset
## `idx` - the index to convert
## `step` - the current step
## Returns the position in the encoded dataset
##
(idx - step) div steps
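# For example (hypothetical values): with steps = 3, step 1 covers
# indices 1, 4, 7; indexToPos(3, 7, 1) = (7 - 1) div 3 = 2, i.e. block 7
# is the third element of that step's group.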
proc getPendingBlocks(
self: Erasure,
manifest: Manifest,
indicies: seq[int]): AsyncIter[(?!bt.Block, int)] =
## Get pending blocks iterator
##
var
# request blocks from the store
pendingBlocks = indicies.map( (i: int) =>
self.store.getBlock(
BlockAddress.init(manifest.treeCid, i)
).map((r: ?!bt.Block) => (r, i)) # Get the data blocks (first K)
)
proc isFinished(): bool = pendingBlocks.len == 0
proc genNext(): Future[(?!bt.Block, int)] {.async.} =
let completedFut = await one(pendingBlocks)
if (let i = pendingBlocks.find(completedFut); i >= 0):
pendingBlocks.del(i)
return await completedFut
else:
let (_, index) = await completedFut
raise newException(
CatchableError,
"Future for block id not found, tree cid: " & $manifest.treeCid & ", index: " & $index)
AsyncIter[(?!bt.Block, int)].new(genNext, isFinished)
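# Note that `genNext` races all outstanding futures via `one`, so the
# iterator yields blocks in completion order rather than index order;
# the index carried alongside each block lets callers place it correctly.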
proc prepareEncodingData(
self: Erasure,
manifest: Manifest,
params: EncodingParams,
step: Natural,
data: ref seq[seq[byte]],
cids: ref seq[Cid],
emptyBlock: seq[byte]): Future[?!Natural] {.async.} =
## Prepare data for encoding
##
let
strategy = params.strategy.init(
firstIndex = 0,
lastIndex = params.rounded - 1,
iterations = params.steps
)
indicies = toSeq(strategy.getIndicies(step))
pendingBlocksIter = self.getPendingBlocks(manifest, indicies.filterIt(it < manifest.blocksCount))
var resolved = 0
for fut in pendingBlocksIter:
let (blkOrErr, idx) = await fut
without blk =? blkOrErr, err:
warn "Failed retreiving a block", treeCid = manifest.treeCid, idx, msg = err.msg
continue
let pos = indexToPos(params.steps, idx, step)
shallowCopy(data[pos], if blk.isEmpty: emptyBlock else: blk.data)
cids[idx] = blk.cid
resolved.inc()
for idx in indicies.filterIt(it >= manifest.blocksCount):
let pos = indexToPos(params.steps, idx, step)
trace "Padding with empty block", idx
shallowCopy(data[pos], emptyBlock)
without emptyBlockCid =? emptyCid(manifest.version, manifest.hcodec, manifest.codec), err:
return failure(err)
cids[idx] = emptyBlockCid
success(resolved.Natural)
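# Indices past the end of the dataset are padded with `emptyBlock`, so
# every step hands exactly `ecK` data pieces to the encoder; all padding
# entries share the same empty-block cid.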
proc prepareDecodingData(
self: Erasure,
encoded: Manifest,
step: Natural,
data: ref seq[seq[byte]],
parityData: ref seq[seq[byte]],
cids: ref seq[Cid],
emptyBlock: seq[byte]): Future[?!(Natural, Natural)] {.async.} =
## Prepare data for decoding
## `encoded` - the encoded manifest
## `step` - the current step
## `data` - the data to be prepared
## `parityData` - the parityData to be prepared
## `cids` - cids of prepared data
## `emptyBlock` - the empty block to be used for padding
##
let
strategy = encoded.protectedStrategy.init(
firstIndex = 0,
lastIndex = encoded.blocksCount - 1,
iterations = encoded.steps
)
indicies = toSeq(strategy.getIndicies(step))
pendingBlocksIter = self.getPendingBlocks(encoded, indicies)
var
dataPieces = 0
parityPieces = 0
resolved = 0
for fut in pendingBlocksIter:
# Continue to receive blocks until we have just enough for decoding
# or no more blocks can arrive
if resolved >= encoded.ecK:
break
let (blkOrErr, idx) = await fut
without blk =? blkOrErr, err:
trace "Failed retreiving a block", idx, treeCid = encoded.treeCid, msg = err.msg
continue
let
pos = indexToPos(encoded.steps, idx, step)
logScope:
cid = blk.cid
idx = idx
pos = pos
step = step
empty = blk.isEmpty
cids[idx] = blk.cid
if idx >= encoded.rounded:
trace "Retrieved parity block"
shallowCopy(parityData[pos - encoded.ecK], if blk.isEmpty: emptyBlock else: blk.data)
parityPieces.inc
else:
trace "Retrieved data block"
shallowCopy(data[pos], if blk.isEmpty: emptyBlock else: blk.data)
dataPieces.inc
resolved.inc
return success (dataPieces.Natural, parityPieces.Natural)
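# For example (hypothetical): with ecK = 4, the loop above stops as soon
# as any 4 pieces of the column have resolved, whether data or parity;
# missing data pieces are reconstructed by the decoder afterwards.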
proc init*(
_: type EncodingParams,
manifest: Manifest,
ecK: Natural, ecM: Natural,
strategy: StrategyType): ?!EncodingParams =
if ecK > manifest.blocksCount:
let exc = (ref InsufficientBlocksError)(
msg: "Unable to encode manifest, not enough blocks, ecK = " &
$ecK &
", blocksCount = " &
$manifest.blocksCount,
minSize: ecK.NBytes * manifest.blockSize)
return failure(exc)
let
rounded = roundUp(manifest.blocksCount, ecK)
steps = divUp(rounded, ecK)
blocksCount = rounded + (steps * ecM)
success EncodingParams(
ecK: ecK,
ecM: ecM,
rounded: rounded,
steps: steps,
blocksCount: blocksCount,
strategy: strategy
)
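# A worked example of the arithmetic above (hypothetical numbers):
# with blocksCount = 10, ecK = 4, ecM = 2:
#   rounded     = roundUp(10, 4) = 12   # data padded to a multiple of K
#   steps       = divUp(12, 4)   = 3    # number of EC groups (columns)
#   blocksCount = 12 + 3 * 2     = 18   # padded data plus parity
# Parity blocks for step s are later placed at indices rounded + s,
# rounded + s + steps, ... (see encodeData below): step 0 -> 12, 15;
# step 1 -> 13, 16; step 2 -> 14, 17.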
proc encodeData(
self: Erasure,
manifest: Manifest,
params: EncodingParams
): Future[?!Manifest] {.async.} =
## Encode blocks pointed to by the protected manifest
##
## `manifest` - the manifest to encode
##
logScope:
steps = params.steps
rounded_blocks = params.rounded
blocks_count = params.blocksCount
ecK = params.ecK
ecM = params.ecM
var
cids = seq[Cid].new()
encoder = self.encoderProvider(manifest.blockSize.int, params.ecK, params.ecM)
emptyBlock = newSeq[byte](manifest.blockSize.int)
cids[].setLen(params.blocksCount)
try:
for step in 0..<params.steps:
# TODO: Don't allocate a new seq every time, allocate once and zero out
var
data = seq[seq[byte]].new() # number of blocks to encode
data[].setLen(params.ecK)
without resolved =?
(await self.prepareEncodingData(manifest, params, step, data, cids, emptyBlock)), err:
trace "Unable to prepare data", error = err.msg
return failure(err)
trace "Erasure coding data", data = data[].len, parity = params.ecM
without parity =? await asyncEncode(self.taskpool, encoder, data, manifest.blockSize.int, params.ecM), err:
trace "Error encoding data", err = err.msg
return failure(err)
var idx = params.rounded + step
for j in 0..<params.ecM:
without blk =? bt.Block.new(parity[j]), error:
trace "Unable to create parity block", err = error.msg
return failure(error)
trace "Adding parity block", cid = blk.cid, idx
cids[idx] = blk.cid
if isErr (await self.store.putBlock(blk)):
trace "Unable to store block!", cid = blk.cid
return failure("Unable to store block!")
idx.inc(params.steps)
without tree =? CodexTree.init(cids[]), err:
return failure(err)
without treeCid =? tree.rootCid, err:
return failure(err)
if err =? (await self.store.putAllProofs(tree)).errorOption:
return failure(err)
let encodedManifest = Manifest.new(
manifest = manifest,
treeCid = treeCid,
datasetSize = (manifest.blockSize.int * params.blocksCount).NBytes,
ecK = params.ecK,
ecM = params.ecM,
strategy = params.strategy
)
trace "Encoded data successfully", treeCid, blocksCount = params.blocksCount
success encodedManifest
except CancelledError as exc:
trace "Erasure coding encoding cancelled"
raise exc # cancellation needs to be propagated
except CatchableError as exc:
trace "Erasure coding encoding error", exc = exc.msg
return failure(exc)
finally:
encoder.release()
proc encode*(
self: Erasure,
manifest: Manifest,
blocks: Natural,
parity: Natural,
strategy = SteppedStrategy): Future[?!Manifest] {.async.} =
## Encode a manifest into one that is erasure protected.
##
## `manifest` - the original manifest to be encoded
## `blocks` - the number of blocks to be encoded - K
## `parity` - the number of parity blocks to generate - M
##
without params =? EncodingParams.init(manifest, blocks.int, parity.int, strategy), err:
return failure(err)
without encodedManifest =? await self.encodeData(manifest, params), err:
return failure(err)
return success encodedManifest
proc decode*(
self: Erasure,
encoded: Manifest): Future[?!Manifest] {.async.} =
## Decode a protected manifest into its original
## manifest
##
## `encoded` - the encoded (protected) manifest to
## be recovered
##
logScope:
steps = encoded.steps
rounded_blocks = encoded.rounded
new_manifest = encoded.blocksCount
var
cids = seq[Cid].new()
recoveredIndices = newSeq[Natural]()
decoder = self.decoderProvider(encoded.blockSize.int, encoded.ecK, encoded.ecM)
emptyBlock = newSeq[byte](encoded.blockSize.int)
cids[].setLen(encoded.blocksCount)
try:
for step in 0..<encoded.steps:
var
data = seq[seq[byte]].new()
parity = seq[seq[byte]].new()
data[].setLen(encoded.ecK) # set len to K
parity[].setLen(encoded.ecM) # set len to M
without (dataPieces, _) =?
(await self.prepareDecodingData(encoded, step, data, parity, cids, emptyBlock)), err:
trace "Unable to prepare data", error = err.msg
return failure(err)
if dataPieces >= encoded.ecK:
trace "Retrieved all the required data blocks"
continue
trace "Erasure decoding data"
without recovered =? await asyncDecode(self.taskpool, decoder, data, parity, encoded.blockSize.int), err:
trace "Error decoding data", err = err.msg
return failure(err)
for i in 0..<encoded.ecK:
let idx = i * encoded.steps + step
if data[i].len <= 0 and not cids[idx].isEmpty:
without blk =? bt.Block.new(recovered[i]), error:
trace "Unable to create block!", exc = error.msg
return failure(error)
trace "Recovered block", cid = blk.cid, index = i
if isErr (await self.store.putBlock(blk)):
trace "Unable to store block!", cid = blk.cid
return failure("Unable to store block!")
cids[idx] = blk.cid
recoveredIndices.add(idx)
except CancelledError as exc:
trace "Erasure coding decoding cancelled"
raise exc # cancellation needs to be propagated
except CatchableError as exc:
trace "Erasure coding decoding error", exc = exc.msg
return failure(exc)
finally:
decoder.release()
without tree =? CodexTree.init(cids[0..<encoded.originalBlocksCount]), err:
return failure(err)
without treeCid =? tree.rootCid, err:
return failure(err)
if treeCid != encoded.originalTreeCid:
return failure("Original tree root differs from the tree root computed out of recovered data")
let idxIter = Iter[Natural].new(recoveredIndices)
.filter((i: Natural) => i < tree.leavesCount)
if err =? (await self.store.putSomeProofs(tree, idxIter)).errorOption:
return failure(err)
let decoded = Manifest.new(encoded)
return decoded.success
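# Note: the tree rebuilt from the first `originalBlocksCount` cids must
# hash back to `originalTreeCid`, which guards against silently accepting
# incorrectly recovered data.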
proc start*(self: Erasure) {.async.} =
return
proc stop*(self: Erasure) {.async.} =
return
proc new*(
T: type Erasure,
store: BlockStore,
encoderProvider: EncoderProvider,
decoderProvider: DecoderProvider,
taskpool: Taskpool): Erasure =
## Create a new Erasure instance for encoding and decoding manifests
##
Erasure(
store: store,
encoderProvider: encoderProvider,
decoderProvider: decoderProvider,
taskpool: taskpool)
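# A sketch of wiring erasure coding into a node (assumptions: a
# `repoStore` BlockStore and `taskpool` already exist, and
# `leoEncoderProvider`/`leoDecoderProvider` are thin provider procs
# around LeoEncoderBackend.new/LeoDecoderBackend.new from the backend
# file above):
#
#   let erasure = Erasure.new(
#     repoStore, leoEncoderProvider, leoDecoderProvider, taskpool)
#   let protected = (await erasure.encode(manifest, blocks = 4, parity = 2)).tryGet()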

View File

@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2021 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,13 +7,9 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [].}
-
 import std/options
-import std/sugar
-import std/sequtils

-import pkg/results
+import pkg/stew/results
 import pkg/chronos
 import pkg/questionable/results
@@ -23,18 +19,14 @@ type
   CodexError* = object of CatchableError # base codex error
   CodexResult*[T] = Result[T, ref CodexError]
-  FinishedFailed*[T] = tuple[success: seq[Future[T]], failure: seq[Future[T]]]

 template mapFailure*[T, V, E](
-    exp: Result[T, V], exc: typedesc[E]
+    exp: Result[T, V],
+    exc: typedesc[E],
 ): Result[T, ref CatchableError] =
   ## Convert `Result[T, E]` to `Result[E, ref CatchableError]`
   ##
-  exp.mapErr(
-    proc(e: V): ref CatchableError =
-      (ref exc)(msg: $e)
-  )
+  exp.mapErr(proc (e: V): ref CatchableError = (ref exc)(msg: $e))

 template mapFailure*[T, V](exp: Result[T, V]): Result[T, ref CatchableError] =
   mapFailure(exp, CodexError)
@@ -46,43 +38,12 @@ func toFailure*[T](exp: Option[T]): Result[T, ref CatchableError] {.inline.} =
   else:
     T.failure("Option is None")

-proc allFinishedFailed*[T](
-    futs: auto
-): Future[FinishedFailed[T]] {.async: (raises: [CancelledError]).} =
-  ## Check if all futures have finished or failed
-  ##
-  ## TODO: wip, not sure if we want this - at the minimum,
-  ## we should probably avoid the async transform
-  var res: FinishedFailed[T] = (@[], @[])
-  await allFutures(futs)
-  for f in futs:
-    if f.failed:
-      res.failure.add f
-    else:
-      res.success.add f
-  return res
-
-proc allFinishedValues*[T](
-    futs: auto
-): Future[?!seq[T]] {.async: (raises: [CancelledError]).} =
-  ## If all futures have finished, return corresponding values,
-  ## otherwise return failure
-  ##
-  # wait for all futures to be either completed, failed or canceled
-  await allFutures(futs)
-  let numOfFailed = futs.countIt(it.failed)
-  if numOfFailed > 0:
-    return failure "Some futures failed (" & $numOfFailed & "))"
-  # here, we know there are no failed futures in "futs"
-  # and we are only interested in those that completed successfully
-  let values = collect:
-    for b in futs:
-      if b.finished:
-        b.value
-  return success values
+proc allFutureResult*[T](fut: seq[Future[T]]): Future[?!void] {.async.} =
+  try:
+    await allFuturesThrowing(fut)
+  except CancelledError as exc:
+    raise exc
+  except CatchableError as exc:
+    return failure(exc.msg)
+  return success()

View File

@@ -0,0 +1,97 @@
import ./errors
import ./utils
import ./utils/asynciter
{.push raises: [].}
type
StrategyType* = enum
# Simplest approach:
# 0 => 0, 1, 2
# 1 => 3, 4, 5
# 2 => 6, 7, 8
LinearStrategy,
# Stepped indexing:
# 0 => 0, 3, 6
# 1 => 1, 4, 7
# 2 => 2, 5, 8
SteppedStrategy
# Represents a strategy for grouping indices (usually of blocks).
# Given an iteration count as input, it will produce a seq of
# selected indices.
IndexingError* = object of CodexError
IndexingWrongIndexError* = object of IndexingError
IndexingWrongIterationsError* = object of IndexingError
IndexingStrategy* = object
strategyType*: StrategyType
firstIndex*: int # Lowest index that can be returned
lastIndex*: int # Highest index that can be returned
iterations*: int # getIndicies(iteration) will run from 0 ..< iterations
step*: int
func checkIteration(self: IndexingStrategy, iteration: int): void {.raises: [IndexingError].} =
if iteration >= self.iterations:
raise newException(
IndexingError,
"Indexing iteration can't be greater than or equal to iterations.")
func getIter(first, last, step: int): Iter[int] =
{.cast(noSideEffect).}:
Iter[int].new(first, last, step)
func getLinearIndicies(
self: IndexingStrategy,
iteration: int): Iter[int] {.raises: [IndexingError].} =
self.checkIteration(iteration)
let
first = self.firstIndex + iteration * self.step
last = min(first + self.step - 1, self.lastIndex)
getIter(first, last, 1)
func getSteppedIndicies(
self: IndexingStrategy,
iteration: int): Iter[int] {.raises: [IndexingError].} =
self.checkIteration(iteration)
let
first = self.firstIndex + iteration
last = self.lastIndex
getIter(first, last, self.iterations)
func getIndicies*(
self: IndexingStrategy,
iteration: int): Iter[int] {.raises: [IndexingError].} =
case self.strategyType
of StrategyType.LinearStrategy:
self.getLinearIndicies(iteration)
of StrategyType.SteppedStrategy:
self.getSteppedIndicies(iteration)
func init*(
strategy: StrategyType,
firstIndex, lastIndex, iterations: int): IndexingStrategy {.raises: [IndexingError].} =
if firstIndex > lastIndex:
raise newException(
IndexingWrongIndexError,
"firstIndex (" & $firstIndex & ") can't be greater than lastIndex (" & $lastIndex & ")")
if iterations <= 0:
raise newException(
IndexingWrongIterationsError,
"iterations (" & $iterations & ") must be greater than zero.")
IndexingStrategy(
strategyType: strategy,
firstIndex: firstIndex,
lastIndex: lastIndex,
iterations: iterations,
step: divUp((lastIndex - firstIndex + 1), iterations))
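# A worked example (hypothetical values, mirroring the enum comment above):
#   LinearStrategy.init(0, 8, 3)   => step = divUp(9, 3) = 3
#     getIndicies(0) -> 0, 1, 2;  getIndicies(1) -> 3, 4, 5;  getIndicies(2) -> 6, 7, 8
#   SteppedStrategy.init(0, 8, 3)
#     getIndicies(0) -> 0, 3, 6;  getIndicies(1) -> 1, 4, 7;  getIndicies(2) -> 2, 5, 8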

View File

@@ -11,7 +11,7 @@
 ## 4. Remove usages of `nim-json-serialization` from the codebase
 ## 5. Remove need to declare `writeValue` for new types
 ## 6. Remove need to [avoid importing or exporting `toJson`, `%`, `%*` to prevent
-##    conflicts](https://github.com/logos-storage/logos-storage-nim/pull/645#issuecomment-1838834467)
+##    conflicts](https://github.com/codex-storage/nim-codex/pull/645#issuecomment-1838834467)
 ##
 ## When declaring a new type, one should consider importing the `codex/logutils`
 ## module, and specifying `formatIt`. If textlines log output and json log output
@@ -92,13 +92,13 @@ import std/sugar
 import std/typetraits

 import pkg/chronicles except toJson, `%`
-from pkg/chronos import TransportAddress
 from pkg/libp2p import Cid, MultiAddress, `$`
 import pkg/questionable
 import pkg/questionable/results
 import ./utils/json except formatIt # TODO: remove exception?
 import pkg/stew/byteutils
 import pkg/stint
+import pkg/upraises

 export byteutils
 export chronicles except toJson, formatIt, `%`
@@ -107,6 +107,7 @@ export sequtils
 export json except formatIt
 export strutils
 export sugar
+export upraises
 export results
@@ -124,9 +125,8 @@ func shortLog*(long: string, ellipses = "*", start = 3, stop = 6): string =
   short

 func shortHexLog*(long: string): string =
-  if long[0 .. 1] == "0x":
-    result &= "0x"
-  result &= long[2 .. long.high].shortLog("..", 4, 4)
+  if long[0..1] == "0x": result &= "0x"
+  result &= long[2..long.high].shortLog("..", 4, 4)

 func short0xHexLog*[N: static[int], T: array[N, byte]](v: T): string =
   v.to0xHex.shortHexLog
@@ -153,7 +153,7 @@ proc formatTextLineSeq*(val: seq[string]): string =
 template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
   # Provides formatters for logging with Chronicles for the given type and
   # `LogFormat`.
-  # NOTE: `seq[T]`, `Option[T]`, and `seq[Option[T]]` are overridden
+  # NOTE: `seq[T]`, `Option[T]`, and `seq[Option[T]]` are overriddden
   # since the base `setProperty` is generic using `auto` and conflicts with
   # providing a generic `seq` and `Option` override.
   when format == LogFormat.json:
@@ -184,16 +184,12 @@ template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
     let v = opts.map(opt => opt.formatJsonOption)
     setProperty(r, key, json.`%`(v))

-  proc setProperty*(
-      r: var JsonRecord, key: string, val: seq[T]
-  ) {.raises: [ValueError, IOError].} =
+  proc setProperty*(r: var JsonRecord, key: string, val: seq[T]) =
     var it {.inject, used.}: T
     let v = val.map(it => body)
     setProperty(r, key, json.`%`(v))

-  proc setProperty*(
-      r: var JsonRecord, key: string, val: T
-  ) {.raises: [ValueError, IOError].} =
+  proc setProperty*(r: var JsonRecord, key: string, val: T) {.upraises:[ValueError, IOError].} =
     var it {.inject, used.}: T = val
     let v = body
     setProperty(r, key, json.`%`(v))
@@ -224,37 +220,23 @@ template formatIt*(format: LogFormat, T: typedesc, body: untyped) =
     let v = opts.map(opt => opt.formatTextLineOption)
     setProperty(r, key, v.formatTextLineSeq)

-  proc setProperty*(
-      r: var TextLineRecord, key: string, val: seq[T]
-  ) {.raises: [ValueError, IOError].} =
+  proc setProperty*(r: var TextLineRecord, key: string, val: seq[T]) =
     var it {.inject, used.}: T
     let v = val.map(it => body)
     setProperty(r, key, v.formatTextLineSeq)

-  proc setProperty*(
-      r: var TextLineRecord, key: string, val: T
-  ) {.raises: [ValueError, IOError].} =
+  proc setProperty*(r: var TextLineRecord, key: string, val: T) {.upraises:[ValueError, IOError].} =
     var it {.inject, used.}: T = val
     let v = body
     setProperty(r, key, v)

 template formatIt*(T: type, body: untyped) {.dirty.} =
-  formatIt(LogFormat.textLines, T):
-    body
-  formatIt(LogFormat.json, T):
-    body
+  formatIt(LogFormat.textLines, T): body
+  formatIt(LogFormat.json, T): body

-formatIt(LogFormat.textLines, Cid):
-  shortLog($it)
-formatIt(LogFormat.json, Cid):
-  $it
-formatIt(UInt256):
-  $it
-formatIt(MultiAddress):
-  $it
-formatIt(LogFormat.textLines, array[32, byte]):
-  it.short0xHexLog
-formatIt(LogFormat.json, array[32, byte]):
-  it.to0xHex
-formatIt(TransportAddress):
-  $it
+formatIt(LogFormat.textLines, Cid): shortLog($it)
+formatIt(LogFormat.json, Cid): $it
+formatIt(UInt256): $it
+formatIt(MultiAddress): $it
+formatIt(LogFormat.textLines, array[32, byte]): it.short0xHexLog
+formatIt(LogFormat.json, array[32, byte]): it.to0xHex
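# A sketch of declaring formatters with `formatIt` for a custom type
# (hypothetical type and formatting, following the declarations above):
#
#   type Digest = object
#     bytes: array[32, byte]
#
#   formatIt(LogFormat.textLines, Digest): it.bytes.short0xHexLog
#   formatIt(LogFormat.json, Digest): it.bytes.to0xHex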

View File

@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,9 +9,9 @@
 # This module implements serialization and deserialization of Manifest

-import times
+import pkg/upraises

-{.push raises: [].}
+push: {.upraises: [].}

 import std/tables
 import std/sequtils
@@ -25,18 +25,32 @@ import ./manifest
 import ../errors
 import ../blocktype
 import ../logutils
+import ../indexingstrategy

 proc encode*(manifest: Manifest): ?!seq[byte] =
   ## Encode the manifest into a ``ManifestCodec``
   ## multicodec container (Dag-pb) for now
   ##

+  ? manifest.verify()
   var pbNode = initProtoBuffer()

   # NOTE: The `Data` field in the `dag-pb`
   # contains the following protobuf `Message`
   #
   # ```protobuf
+  #   Message VerificationInfo {
+  #     bytes verifyRoot = 1;         # Decimal encoded field-element
+  #     repeated bytes slotRoots = 2; # Decimal encoded field-elements
+  #   }
+  #   Message ErasureInfo {
+  #     optional uint32 ecK = 1;                           # number of encoded blocks
+  #     optional uint32 ecM = 2;                           # number of parity blocks
+  #     optional bytes originalTreeCid = 3;                # cid of the original dataset
+  #     optional uint32 originalDatasetSize = 4;           # size of the original dataset
+  #     optional VerificationInformation verification = 5; # verification information
+  #   }
+  #
 #   Message Header {
 #     optional bytes treeCid = 1;       # cid (root) of the tree
 #     optional uint32 blockSize = 2;    # size of a single block
@@ -44,8 +58,7 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
 #     optional codec: MultiCodec = 4;   # Dataset codec
 #     optional hcodec: MultiCodec = 5   # Multihash codec
 #     optional version: CidVersion = 6; # Cid version
-#     optional filename: ?string = 7;   # original filename
-#     optional mimetype: ?string = 8;   # original mimetype
+#     optional ErasureInfo erasure = 7; # erasure coding info
 #   }
 # ```
 #
@@ -57,12 +70,25 @@ proc encode*(manifest: Manifest): ?!seq[byte] =
   header.write(4, manifest.codec.uint32)
   header.write(5, manifest.hcodec.uint32)
   header.write(6, manifest.version.uint32)
+  if manifest.protected:
+    var erasureInfo = initProtoBuffer()
+    erasureInfo.write(1, manifest.ecK.uint32)
+    erasureInfo.write(2, manifest.ecM.uint32)
+    erasureInfo.write(3, manifest.originalTreeCid.data.buffer)
+    erasureInfo.write(4, manifest.originalDatasetSize.uint64)
+    erasureInfo.write(5, manifest.protectedStrategy.uint32)

-  if manifest.filename.isSome:
-    header.write(7, manifest.filename.get())
+    if manifest.verifiable:
+      var verificationInfo = initProtoBuffer()
+      verificationInfo.write(1, manifest.verifyRoot.data.buffer)
+      for slotRoot in manifest.slotRoots:
+        verificationInfo.write(2, slotRoot.data.buffer)
+      verificationInfo.write(3, manifest.cellSize.uint32)
+      verificationInfo.write(4, manifest.verifiableStrategy.uint32)
+      erasureInfo.write(6, verificationInfo)

-  if manifest.mimetype.isSome:
-    header.write(8, manifest.mimetype.get())
+    erasureInfo.finish()
+    header.write(7, erasureInfo)

   pbNode.write(1, header) # set the treeCid as the data field
   pbNode.finish()
@@ -76,14 +102,22 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
   var
     pbNode = initProtoBuffer(data)
     pbHeader: ProtoBuffer
+    pbErasureInfo: ProtoBuffer
+    pbVerificationInfo: ProtoBuffer
     treeCidBuf: seq[byte]
+    originalTreeCid: seq[byte]
     datasetSize: uint64
     codec: uint32
     hcodec: uint32
     version: uint32
     blockSize: uint32
-    filename: string
-    mimetype: string
+    originalDatasetSize: uint64
+    ecK, ecM: uint32
+    protectedStrategy: uint32
+    verifyRoot: seq[byte]
+    slotRoots: seq[seq[byte]]
+    cellSize: uint32
+    verifiableStrategy: uint32

   # Decode `Header` message
   if pbNode.getField(1, pbHeader).isErr:
@@ -108,27 +142,84 @@ proc decode*(_: type Manifest, data: openArray[byte]): ?!Manifest =
   if pbHeader.getField(6, version).isErr:
     return failure("Unable to decode `version` from manifest!")

-  if pbHeader.getField(7, filename).isErr:
-    return failure("Unable to decode `filename` from manifest!")
+  if pbHeader.getField(7, pbErasureInfo).isErr:
+    return failure("Unable to decode `erasureInfo` from manifest!")

-  if pbHeader.getField(8, mimetype).isErr:
-    return failure("Unable to decode `mimetype` from manifest!")
+  let protected = pbErasureInfo.buffer.len > 0
+  var verifiable = false
+  if protected:
+    if pbErasureInfo.getField(1, ecK).isErr:
+      return failure("Unable to decode `K` from manifest!")

-  let treeCid = ?Cid.init(treeCidBuf).mapFailure
+    if pbErasureInfo.getField(2, ecM).isErr:
+      return failure("Unable to decode `M` from manifest!")

-  var filenameOption = if filename.len == 0: string.none else: filename.some
-  var mimetypeOption = if mimetype.len == 0: string.none else: mimetype.some
+    if pbErasureInfo.getField(3, originalTreeCid).isErr:
+      return failure("Unable to decode `originalTreeCid` from manifest!")

-  let self = Manifest.new(
-    treeCid = treeCid,
-    datasetSize = datasetSize.NBytes,
-    blockSize = blockSize.NBytes,
-    version = CidVersion(version),
-    hcodec = hcodec.MultiCodec,
-    codec = codec.MultiCodec,
-    filename = filenameOption,
-    mimetype = mimetypeOption,
-  )
+    if pbErasureInfo.getField(4, originalDatasetSize).isErr:
+      return failure("Unable to decode `originalDatasetSize` from manifest!")
+
+    if pbErasureInfo.getField(5, protectedStrategy).isErr:
+      return failure("Unable to decode `protectedStrategy` from manifest!")
+
+    if pbErasureInfo.getField(6, pbVerificationInfo).isErr:
+      return failure("Unable to decode `verificationInfo` from manifest!")
+
+    verifiable = pbVerificationInfo.buffer.len > 0
+    if verifiable:
+      if pbVerificationInfo.getField(1, verifyRoot).isErr:
+        return failure("Unable to decode `verifyRoot` from manifest!")
+
+      if pbVerificationInfo.getRequiredRepeatedField(2, slotRoots).isErr:
+        return failure("Unable to decode `slotRoots` from manifest!")
+
+      if pbVerificationInfo.getField(3, cellSize).isErr:
+        return failure("Unable to decode `cellSize` from manifest!")
+
+      if pbVerificationInfo.getField(4, verifiableStrategy).isErr:
+        return failure("Unable to decode `verifiableStrategy` from manifest!")
+
+  let
+    treeCid = ? Cid.init(treeCidBuf).mapFailure
+
+  let
+    self = if protected:
+      Manifest.new(
+        treeCid = treeCid,
+        datasetSize = datasetSize.NBytes,
+        blockSize = blockSize.NBytes,
+        version = CidVersion(version),
+        hcodec = hcodec.MultiCodec,
+        codec = codec.MultiCodec,
+        ecK = ecK.int,
+        ecM = ecM.int,
+        originalTreeCid = ? Cid.init(originalTreeCid).mapFailure,
+        originalDatasetSize = originalDatasetSize.NBytes,
+        strategy = StrategyType(protectedStrategy))
+    else:
+      Manifest.new(
+        treeCid = treeCid,
+        datasetSize = datasetSize.NBytes,
+        blockSize = blockSize.NBytes,
+        version = CidVersion(version),
+        hcodec = hcodec.MultiCodec,
+        codec = codec.MultiCodec)
+
+  ? self.verify()
+
+  if verifiable:
+    let
+      verifyRootCid = ? Cid.init(verifyRoot).mapFailure
+      slotRootCids = slotRoots.mapIt(? Cid.init(it).mapFailure)
+
+    return Manifest.new(
+      manifest = self,
+      verifyRoot = verifyRootCid,
+      slotRoots = slotRootCids,
+      cellSize = cellSize.NBytes,
+      strategy = StrategyType(verifiableStrategy)
+    )

   self.success
@@ -136,7 +227,7 @@ func decode*(_: type Manifest, blk: Block): ?!Manifest =
   ## Decode a manifest using `decoder`
   ##

-  if not ?blk.cid.isManifest:
+  if not ? blk.cid.isManifest:
     return failure "Cid not a manifest codec"

   Manifest.decode(blk.data)

View File

@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2022 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -9,7 +9,9 @@
 # This module defines all operations on Manifest

-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

 import pkg/libp2p/protobuf/minprotobuf
 import pkg/libp2p/[cid, multihash, multicodec]
@@ -20,20 +22,37 @@ import ../utils
 import ../utils/json
 import ../units
 import ../blocktype
+import ../indexingstrategy
 import ../logutils

 # TODO: Manifest should be reworked to more concrete types,
 # perhaps using inheritance
-type Manifest* = ref object of RootObj
-  treeCid {.serialize.}: Cid # Root of the merkle tree
-  datasetSize {.serialize.}: NBytes # Total size of all blocks
-  blockSize {.serialize.}: NBytes
-    # Size of each contained block (might not be needed if blocks are len-prefixed)
-  codec: MultiCodec # Dataset codec
-  hcodec: MultiCodec # Multihash codec
-  version: CidVersion # Cid version
-  filename {.serialize.}: ?string # The filename of the content uploaded (optional)
-  mimetype {.serialize.}: ?string # The mimetype of the content uploaded (optional)
+type
+  Manifest* = ref object of RootObj
+    treeCid {.serialize.}: Cid            # Root of the merkle tree
+    datasetSize {.serialize.}: NBytes     # Total size of all blocks
+    blockSize {.serialize.}: NBytes       # Size of each contained block (might not be needed if blocks are len-prefixed)
+    codec: MultiCodec                     # Dataset codec
+    hcodec: MultiCodec                    # Multihash codec
+    version: CidVersion                   # Cid version
+    case protected {.serialize.}: bool    # Protected datasets have erasure coded info
+    of true:
+      ecK: int                            # Number of blocks to encode
+      ecM: int                            # Number of resulting parity blocks
+      originalTreeCid: Cid                # The original root of the dataset being erasure coded
+      originalDatasetSize: NBytes
+      protectedStrategy: StrategyType     # Indexing strategy used to build the slot roots
+      case verifiable {.serialize.}: bool # Verifiable datasets can be used to generate storage proofs
+      of true:
+        verifyRoot: Cid                   # Root of the top level merkle tree built from slot roots
+        slotRoots: seq[Cid]               # Individual slot root built from the original dataset blocks
+        cellSize: NBytes                  # Size of each slot cell
+        verifiableStrategy: StrategyType  # Indexing strategy used to build the slot roots
+      else:
+        discard
+    else:
+      discard

 ############################################################
 # Accessors
@@ -54,24 +73,60 @@ func hcodec*(self: Manifest): MultiCodec =
 func codec*(self: Manifest): MultiCodec =
   self.codec

+func protected*(self: Manifest): bool =
+  self.protected
+
+func ecK*(self: Manifest): int =
+  self.ecK
+
+func ecM*(self: Manifest): int =
+  self.ecM
+
+func originalTreeCid*(self: Manifest): Cid =
+  self.originalTreeCid
+
+func originalBlocksCount*(self: Manifest): int =
+  divUp(self.originalDatasetSize.int, self.blockSize.int)
+
+func originalDatasetSize*(self: Manifest): NBytes =
+  self.originalDatasetSize
+
 func treeCid*(self: Manifest): Cid =
   self.treeCid

 func blocksCount*(self: Manifest): int =
   divUp(self.datasetSize.int, self.blockSize.int)

-func filename*(self: Manifest): ?string =
-  self.filename
+func verifiable*(self: Manifest): bool =
+  bool (self.protected and self.verifiable)

-func mimetype*(self: Manifest): ?string =
-  self.mimetype
+func verifyRoot*(self: Manifest): Cid =
+  self.verifyRoot
+
+func slotRoots*(self: Manifest): seq[Cid] =
+  self.slotRoots
+
+func numSlots*(self: Manifest): int =
+  self.ecK + self.ecM
+
+func cellSize*(self: Manifest): NBytes =
+  self.cellSize
+
+func protectedStrategy*(self: Manifest): StrategyType =
+  self.protectedStrategy
+
+func verifiableStrategy*(self: Manifest): StrategyType =
+  self.verifiableStrategy
+
+func numSlotBlocks*(self: Manifest): int =
+  divUp(self.blocksCount, self.numSlots)

 ############################################################
 # Operations on block list
 ############################################################

 func isManifest*(cid: Cid): ?!bool =
-  success (ManifestCodec == ?cid.contentType().mapFailure(CodexError))
+  success (ManifestCodec == ? cid.contentType().mapFailure(CodexError))

 func isManifest*(mc: MultiCodec): ?!bool =
   success mc == ManifestCodec
@@ -80,40 +135,87 @@ func isManifest*(mc: MultiCodec): ?!bool =
 # Various sizes and verification
 ############################################################

+func rounded*(self: Manifest): int =
+  ## Number of data blocks in *protected* manifest including padding at the end
+  roundUp(self.originalBlocksCount, self.ecK)
+
+func steps*(self: Manifest): int =
+  ## Number of EC groups in *protected* manifest
+  divUp(self.rounded, self.ecK)
+
+func verify*(self: Manifest): ?!void =
+  ## Check manifest correctness
+  ##
+  if self.protected and (self.blocksCount != self.steps * (self.ecK + self.ecM)):
+    return failure newException(CodexError, "Broken manifest: wrong originalBlocksCount")
+
+  return success()
+
+func cid*(self: Manifest): ?!Cid {.deprecated: "use treeCid instead".} =
+  self.treeCid.success
+
 func `==`*(a, b: Manifest): bool =
-  (a.treeCid == b.treeCid) and (a.datasetSize == b.datasetSize) and
-    (a.blockSize == b.blockSize) and (a.version == b.version) and (a.hcodec == b.hcodec) and
-    (a.codec == b.codec) and (a.filename == b.filename) and (a.mimetype == b.mimetype)
+  (a.treeCid == b.treeCid) and
+  (a.datasetSize == b.datasetSize) and
+  (a.blockSize == b.blockSize) and
+  (a.version == b.version) and
+  (a.hcodec == b.hcodec) and
+  (a.codec == b.codec) and
+  (a.protected == b.protected) and
+    (if a.protected:
+      (a.ecK == b.ecK) and
+      (a.ecM == b.ecM) and
+      (a.originalTreeCid == b.originalTreeCid) and
+      (a.originalDatasetSize == b.originalDatasetSize) and
+      (a.protectedStrategy == b.protectedStrategy) and
+      (a.verifiable == b.verifiable) and
+        (if a.verifiable:
+          (a.verifyRoot == b.verifyRoot) and
+          (a.slotRoots == b.slotRoots) and
+          (a.cellSize == b.cellSize) and
+          (a.verifiableStrategy == b.verifiableStrategy)
+        else:
+          true)
+    else:
+      true)

 func `$`*(self: Manifest): string =
-  result =
-    "treeCid: " & $self.treeCid & ", datasetSize: " & $self.datasetSize & ", blockSize: " &
-    $self.blockSize & ", version: " & $self.version & ", hcodec: " & $self.hcodec &
-    ", codec: " & $self.codec
-
-  if self.filename.isSome:
-    result &= ", filename: " & $self.filename
-
-  if self.mimetype.isSome:
-    result &= ", mimetype: " & $self.mimetype
-
-  return result
+  "treeCid: " & $self.treeCid &
+  ", datasetSize: " & $self.datasetSize &
+  ", blockSize: " & $self.blockSize &
+  ", version: " & $self.version &
+  ", hcodec: " & $self.hcodec &
+  ", codec: " & $self.codec &
+  ", protected: " & $self.protected &
+    (if self.protected:
+      ", ecK: " & $self.ecK &
+      ", ecM: " & $self.ecM &
+      ", originalTreeCid: " & $self.originalTreeCid &
+      ", originalDatasetSize: " & $self.originalDatasetSize &
+      ", verifiable: " & $self.verifiable &
+        (if self.verifiable:
+          ", verifyRoot: " & $self.verifyRoot &
+          ", slotRoots: " & $self.slotRoots
+        else:
+          "")
+    else:
+      "")

 ############################################################
 # Constructors
 ############################################################

 func new*(
     T: type Manifest,
     treeCid: Cid,
     blockSize: NBytes,
     datasetSize: NBytes,
     version: CidVersion = CIDv1,
     hcodec = Sha256HashCodec,
     codec = BlockCodec,
-    filename: ?string = string.none,
-    mimetype: ?string = string.none,
-): Manifest =
+    protected = false): Manifest =
   T(
     treeCid: treeCid,
     blockSize: blockSize,
@@ -121,11 +223,117 @@ func new*(
     version: version,
     codec: codec,
     hcodec: hcodec,
-    filename: filename,
-    mimetype: mimetype,
-  )
+    protected: protected)

-func new*(T: type Manifest, data: openArray[byte]): ?!Manifest =
+func new*(
+    T: type Manifest,
+    manifest: Manifest,
+    treeCid: Cid,
+    datasetSize: NBytes,
+    ecK, ecM: int,
+    strategy = SteppedStrategy): Manifest =
+  ## Create an erasure protected dataset from an
+  ## unprotected one
+  ##
+  Manifest(
+    treeCid: treeCid,
+    datasetSize: datasetSize,
+    version: manifest.version,
+    codec: manifest.codec,
+    hcodec: manifest.hcodec,
+    blockSize: manifest.blockSize,
+    protected: true,
+    ecK: ecK, ecM: ecM,
+    originalTreeCid: manifest.treeCid,
+    originalDatasetSize: manifest.datasetSize,
+    protectedStrategy: strategy)
+
+func new*(
+    T: type Manifest,
+    manifest: Manifest): Manifest =
+  ## Create an unprotected dataset from an
+  ## erasure protected one
+  ##
+  Manifest(
+    treeCid: manifest.originalTreeCid,
+    datasetSize: manifest.originalDatasetSize,
+    version: manifest.version,
+    codec: manifest.codec,
+    hcodec: manifest.hcodec,
+    blockSize: manifest.blockSize,
+    protected: false)
+
+func new*(
+    T: type Manifest,
+    treeCid: Cid,
+    datasetSize: NBytes,
+    blockSize: NBytes,
+    version: CidVersion,
+    hcodec: MultiCodec,
+    codec: MultiCodec,
+    ecK: int,
+    ecM: int,
+    originalTreeCid: Cid,
+    originalDatasetSize: NBytes,
+    strategy = SteppedStrategy): Manifest =
+  Manifest(
+    treeCid: treeCid,
+    datasetSize: datasetSize,
+    blockSize: blockSize,
+    version: version,
+    hcodec: hcodec,
+    codec: codec,
+    protected: true,
+    ecK: ecK,
+    ecM: ecM,
+    originalTreeCid: originalTreeCid,
+    originalDatasetSize: originalDatasetSize,
+    protectedStrategy: strategy)
+
+func new*(
+    T: type Manifest,
+    manifest: Manifest,
+    verifyRoot: Cid,
+    slotRoots: openArray[Cid],
+    cellSize = DefaultCellSize,
+    strategy = LinearStrategy): ?!Manifest =
+  ## Create a verifiable dataset from a
+  ## protected one
+  ##
+  if not manifest.protected:
+    return failure newException(
+      CodexError, "Can create verifiable manifest only from protected manifest.")
+
+  if slotRoots.len != manifest.numSlots:
+    return failure newException(
+      CodexError, "Wrong number of slot roots.")
+
+  success Manifest(
+    treeCid: manifest.treeCid,
+    datasetSize: manifest.datasetSize,
+    version: manifest.version,
+    codec: manifest.codec,
+    hcodec: manifest.hcodec,
+    blockSize: manifest.blockSize,
+    protected: true,
+    ecK: manifest.ecK,
+    ecM: manifest.ecM,
+    originalTreeCid: manifest.originalTreeCid,
+    originalDatasetSize: manifest.originalDatasetSize,
+    protectedStrategy: manifest.protectedStrategy,
+    verifiable: true,
+    verifyRoot: verifyRoot,
+    slotRoots: @slotRoots,
+    cellSize: cellSize,
+    verifiableStrategy: strategy)
+
+func new*(
+    T: type Manifest,
+    data: openArray[byte]): ?!Manifest =
   ## Create a manifest instance from given data
   ##
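# A sketch of the manifest life-cycle the constructors above describe
# (hypothetical values; `treeCid`, `verifyRoot` and `slotRoots` elided):
#
#   let protected = Manifest.new(            # protect a plain dataset
#     manifest, treeCid, datasetSize, ecK = 4, ecM = 2)
#   let verifiable = Manifest.new(           # add verification info
#     protected, verifyRoot, slotRoots).tryGet()
#   let original = Manifest.new(verifiable)  # recover the unprotected view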

codex/market.nim (new file, 250 lines)

@@ -0,0 +1,250 @@
import pkg/chronos
import pkg/upraises
import pkg/questionable
import pkg/ethers/erc20
import ./contracts/requests
import ./contracts/proofs
import ./clock
import ./errors
import ./periods
export chronos
export questionable
export requests
export proofs
export SecondsSince1970
export periods
type
Market* = ref object of RootObj
MarketError* = object of CodexError
Subscription* = ref object of RootObj
OnRequest* = proc(id: RequestId,
ask: StorageAsk,
expiry: UInt256) {.gcsafe, upraises:[].}
OnFulfillment* = proc(requestId: RequestId) {.gcsafe, upraises: [].}
OnSlotFilled* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises:[].}
OnSlotFreed* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
OnSlotReservationsFull* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
OnRequestCancelled* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
OnRequestFailed* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
OnProofSubmitted* = proc(id: SlotId) {.gcsafe, upraises:[].}
ProofChallenge* = array[32, byte]
# Marketplace events -- located here due to the Market abstraction
MarketplaceEvent* = Event
StorageRequested* = object of MarketplaceEvent
requestId*: RequestId
ask*: StorageAsk
expiry*: UInt256
SlotFilled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
slotIndex*: UInt256
SlotFreed* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
slotIndex*: UInt256
SlotReservationsFull* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
slotIndex*: UInt256
RequestFulfilled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
RequestCancelled* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
RequestFailed* = object of MarketplaceEvent
requestId* {.indexed.}: RequestId
ProofSubmitted* = object of MarketplaceEvent
id*: SlotId
method getZkeyHash*(market: Market): Future[?string] {.base, async.} =
raiseAssert("not implemented")
method getSigner*(market: Market): Future[Address] {.base, async.} =
raiseAssert("not implemented")
method periodicity*(market: Market): Future[Periodicity] {.base, async.} =
raiseAssert("not implemented")
method proofTimeout*(market: Market): Future[UInt256] {.base, async.} =
raiseAssert("not implemented")
method proofDowntime*(market: Market): Future[uint8] {.base, async.} =
raiseAssert("not implemented")
method getPointer*(market: Market, slotId: SlotId): Future[uint8] {.base, async.} =
raiseAssert("not implemented")
proc inDowntime*(market: Market, slotId: SlotId): Future[bool] {.async.} =
let downtime = await market.proofDowntime
let pntr = await market.getPointer(slotId)
return pntr < downtime
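# For example (hypothetical numbers): with proofDowntime = 64 and a slot
# pointer of 12, 12 < 64 means the slot is inside its downtime window,
# so no proof is required for the current period.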
method requestStorage*(market: Market,
request: StorageRequest) {.base, async.} =
raiseAssert("not implemented")
method myRequests*(market: Market): Future[seq[RequestId]] {.base, async.} =
raiseAssert("not implemented")
method mySlots*(market: Market): Future[seq[SlotId]] {.base, async.} =
raiseAssert("not implemented")
method getRequest*(market: Market,
id: RequestId):
Future[?StorageRequest] {.base, async.} =
raiseAssert("not implemented")
method requestState*(market: Market,
requestId: RequestId): Future[?RequestState] {.base, async.} =
raiseAssert("not implemented")
method slotState*(market: Market,
slotId: SlotId): Future[SlotState] {.base, async.} =
raiseAssert("not implemented")
method getRequestEnd*(market: Market,
id: RequestId): Future[SecondsSince1970] {.base, async.} =
raiseAssert("not implemented")
method requestExpiresAt*(market: Market,
id: RequestId): Future[SecondsSince1970] {.base, async.} =
raiseAssert("not implemented")
method getHost*(market: Market,
requestId: RequestId,
slotIndex: UInt256): Future[?Address] {.base, async.} =
raiseAssert("not implemented")
method getActiveSlot*(
market: Market,
slotId: SlotId): Future[?Slot] {.base, async.} =
raiseAssert("not implemented")
method fillSlot*(market: Market,
requestId: RequestId,
slotIndex: UInt256,
proof: Groth16Proof,
collateral: UInt256) {.base, async.} =
raiseAssert("not implemented")
method freeSlot*(market: Market, slotId: SlotId) {.base, async.} =
raiseAssert("not implemented")
method withdrawFunds*(market: Market,
requestId: RequestId) {.base, async.} =
raiseAssert("not implemented")
method subscribeRequests*(market: Market,
callback: OnRequest):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method isProofRequired*(market: Market,
id: SlotId): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method willProofBeRequired*(market: Market,
id: SlotId): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method getChallenge*(market: Market, id: SlotId): Future[ProofChallenge] {.base, async.} =
raiseAssert("not implemented")
method submitProof*(market: Market,
id: SlotId,
proof: Groth16Proof) {.base, async.} =
raiseAssert("not implemented")
method markProofAsMissing*(market: Market,
id: SlotId,
period: Period) {.base, async.} =
raiseAssert("not implemented")
method canProofBeMarkedAsMissing*(market: Market,
id: SlotId,
period: Period): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method reserveSlot*(
market: Market,
requestId: RequestId,
slotIndex: UInt256) {.base, async.} =
raiseAssert("not implemented")
method canReserveSlot*(
market: Market,
requestId: RequestId,
slotIndex: UInt256): Future[bool] {.base, async.} =
raiseAssert("not implemented")
method subscribeFulfillment*(market: Market,
callback: OnFulfillment):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeFulfillment*(market: Market,
requestId: RequestId,
callback: OnFulfillment):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotFilled*(market: Market,
callback: OnSlotFilled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotFilled*(market: Market,
requestId: RequestId,
slotIndex: UInt256,
callback: OnSlotFilled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotFreed*(market: Market,
callback: OnSlotFreed):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeSlotReservationsFull*(
market: Market,
callback: OnSlotReservationsFull): Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestCancelled*(market: Market,
callback: OnRequestCancelled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestCancelled*(market: Market,
requestId: RequestId,
callback: OnRequestCancelled):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestFailed*(market: Market,
callback: OnRequestFailed):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeRequestFailed*(market: Market,
requestId: RequestId,
callback: OnRequestFailed):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method subscribeProofSubmission*(market: Market,
callback: OnProofSubmitted):
Future[Subscription] {.base, async.} =
raiseAssert("not implemented")
method unsubscribe*(subscription: Subscription) {.base, async, upraises:[].} =
raiseAssert("not implemented")
method queryPastEvents*[T: MarketplaceEvent](
market: Market,
_: type T,
blocksAgo: int): Future[seq[T]] {.base, async.} =
raiseAssert("not implemented")

View File

@@ -1,4 +1,10 @@
 import ./merkletree/merkletree
 import ./merkletree/codex
+import ./merkletree/poseidon2

-export codex, merkletree
+export codex, poseidon2, merkletree
+
+type
+  SomeMerkleTree* = ByteTree | CodexTree | Poseidon2Tree
+  SomeMerkleProof* = ByteProof | CodexProof | Poseidon2Proof
+  SomeMerkleHash* = ByteHash | Poseidon2Hash

View File

@@ -1,4 +1,4 @@
-## Logos Storage
+## Nim-Codex
 ## Copyright (c) 2023 Status Research & Development GmbH
 ## Licensed under either of
 ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@@ -7,7 +7,9 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-{.push raises: [], gcsafe.}
+import pkg/upraises
+
+push: {.upraises: [].}

 import pkg/libp2p
 import pkg/questionable
@@ -24,11 +26,11 @@ const MaxMerkleTreeSize = 100.MiBs.uint
 const MaxMerkleProofSize = 1.MiBs.uint

 proc encode*(self: CodexTree): seq[byte] =
-  var pb = initProtoBuffer()
+  var pb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
   pb.write(1, self.mcodec.uint64)
   pb.write(2, self.leavesCount.uint64)
   for node in self.nodes:
-    var nodesPb = initProtoBuffer()
+    var nodesPb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
     nodesPb.write(1, node)
     nodesPb.finish()
     pb.write(3, nodesPb)
@@ -37,11 +39,11 @@ proc encode*(self: CodexTree): seq[byte] =
   pb.buffer

 proc decode*(_: type CodexTree, data: seq[byte]): ?!CodexTree =
-  var pb = initProtoBuffer(data)
+  var pb = initProtoBuffer(data, maxSize = MaxMerkleTreeSize)
   var mcodecCode: uint64
   var leavesCount: uint64
-  discard ?pb.getField(1, mcodecCode).mapFailure
-  discard ?pb.getField(2, leavesCount).mapFailure
+  discard ? pb.getField(1, mcodecCode).mapFailure
+  discard ? pb.getField(2, leavesCount).mapFailure

   let mcodec = MultiCodec.codec(mcodecCode.int)
   if mcodec == InvalidMultiCodec:
@@ -51,22 +53,22 @@ proc decode*(_: type CodexTree, data: seq[byte]): ?!CodexTree =
     nodesBuff: seq[seq[byte]]
     nodes: seq[ByteHash]

-  if ?pb.getRepeatedField(3, nodesBuff).mapFailure:
+  if ? pb.getRepeatedField(3, nodesBuff).mapFailure:
     for nodeBuff in nodesBuff:
       var node: ByteHash
-      discard ?initProtoBuffer(nodeBuff).getField(1, node).mapFailure
+      discard ? initProtoBuffer(nodeBuff).getField(1, node).mapFailure
       nodes.add node

   CodexTree.fromNodes(mcodec, nodes, leavesCount.int)

 proc encode*(self: CodexProof): seq[byte] =
-  var pb = initProtoBuffer()
+  var pb = initProtoBuffer(maxSize = MaxMerkleProofSize)
   pb.write(1, self.mcodec.uint64)
   pb.write(2, self.index.uint64)
   pb.write(3, self.nleaves.uint64)
   for node in self.path:
-    var nodesPb = initProtoBuffer()
+    var nodesPb = initProtoBuffer(maxSize = MaxMerkleTreeSize)
     nodesPb.write(1, node)
     nodesPb.finish()
     pb.write(4, nodesPb)
@@ -75,33 +77,36 @@ proc encode*(self: CodexProof): seq[byte] =
   pb.buffer

 proc decode*(_: type CodexProof, data: seq[byte]): ?!CodexProof =
-  var pb = initProtoBuffer(data)
+  var pb = initProtoBuffer(data, maxSize = MaxMerkleProofSize)
   var mcodecCode: uint64
   var index: uint64
   var nleaves: uint64
-  discard ?pb.getField(1, mcodecCode).mapFailure
+  discard ? pb.getField(1, mcodecCode).mapFailure

   let mcodec = MultiCodec.codec(mcodecCode.int)
   if mcodec == InvalidMultiCodec:
     return failure("Invalid MultiCodec code " & $mcodecCode)

-  discard ?pb.getField(2, index).mapFailure
-  discard ?pb.getField(3, nleaves).mapFailure
+  discard ? pb.getField(2, index).mapFailure
+  discard ? pb.getField(3, nleaves).mapFailure

   var
     nodesBuff: seq[seq[byte]]
     nodes: seq[ByteHash]

-  if ?pb.getRepeatedField(4, nodesBuff).mapFailure:
+  if ? pb.getRepeatedField(4, nodesBuff).mapFailure:
     for nodeBuff in nodesBuff:
       var node: ByteHash
       let nodePb = initProtoBuffer(nodeBuff)
-      discard ?nodePb.getField(1, node).mapFailure
+      discard ? nodePb.getField(1, node).mapFailure
       nodes.add node

   CodexProof.init(mcodec, index.int, nleaves.int, nodes)

-proc fromJson*(_: type CodexProof, json: JsonNode): ?!CodexProof =
+proc fromJson*(
+    _: type CodexProof,
+    json: JsonNode
+): ?!CodexProof =
   expectJsonKind(Cid, JString, json)
   var bytes: seq[byte]
   try:
@@ -111,5 +116,4 @@ proc fromJson*(_: type CodexProof, json: JsonNode): ?!CodexProof =
   CodexProof.decode(bytes)

-func `%`*(proof: CodexProof): JsonNode =
-  %byteutils.toHex(proof.encode())
+func `%`*(proof: CodexProof): JsonNode = % byteutils.toHex(proof.encode())

View File

@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH ## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -10,18 +10,16 @@
{.push raises: [].} {.push raises: [].}
import std/bitops import std/bitops
import std/[atomics, sequtils] import std/sequtils
import pkg/questionable import pkg/questionable
import pkg/questionable/results import pkg/questionable/results
import pkg/libp2p/[cid, multicodec, multihash] import pkg/libp2p/[cid, multicodec, multihash]
import pkg/constantine/hashes
import pkg/taskpools
import pkg/chronos/threadsync
import ../../utils import ../../utils
import ../../rng import ../../rng
import ../../errors import ../../errors
import ../../codextypes import ../../blocktype
from ../../utils/digest import digestBytes from ../../utils/digest import digestBytes
@ -34,10 +32,10 @@ logScope:
type type
ByteTreeKey* {.pure.} = enum ByteTreeKey* {.pure.} = enum
KeyNone = 0x0.byte KeyNone = 0x0.byte
KeyBottomLayer = 0x1.byte KeyBottomLayer = 0x1.byte
KeyOdd = 0x2.byte KeyOdd = 0x2.byte
KeyOddAndBottomLayer = 0x3.byte KeyOddAndBottomLayer = 0x3.byte
ByteHash* = seq[byte] ByteHash* = seq[byte]
ByteTree* = MerkleTree[ByteHash, ByteTreeKey] ByteTree* = MerkleTree[ByteHash, ByteTreeKey]
@ -49,10 +47,26 @@ type
CodexProof* = ref object of ByteProof CodexProof* = ref object of ByteProof
mcodec*: MultiCodec mcodec*: MultiCodec
func getProof*(self: CodexTree, index: int): ?!CodexProof = func mhash*(mcodec: MultiCodec): ?!MHash =
var proof = CodexProof(mcodec: self.mcodec) let
mhash = CodeHashes.getOrDefault(mcodec)
?self.getProof(index, proof) if isNil(mhash.coder):
return failure "Invalid multihash codec"
success mhash
func digestSize*(self: (CodexTree or CodexProof)): int =
## Size of the hash digest in bytes
##
self.mhash.size
func getProof*(self: CodexTree, index: int): ?!CodexProof =
var
proof = CodexProof(mcodec: self.mcodec)
? self.getProof(index, proof)
success proof success proof
@ -64,180 +78,173 @@ func verify*(self: CodexProof, leaf: MultiHash, root: MultiHash): ?!bool =
rootBytes = root.digestBytes rootBytes = root.digestBytes
leafBytes = leaf.digestBytes leafBytes = leaf.digestBytes
if self.mcodec != root.mcodec or self.mcodec != leaf.mcodec: if self.mcodec != root.mcodec or
self.mcodec != leaf.mcodec:
return failure "Hash codec mismatch" return failure "Hash codec mismatch"
if rootBytes.len != root.size and leafBytes.len != leaf.size: if rootBytes.len != root.size and
leafBytes.len != leaf.size:
return failure "Invalid hash length" return failure "Invalid hash length"
self.verify(leafBytes, rootBytes) self.verify(leafBytes, rootBytes)
func verify*(self: CodexProof, leaf: Cid, root: Cid): ?!bool = func verify*(self: CodexProof, leaf: Cid, root: Cid): ?!bool =
self.verify(?leaf.mhash.mapFailure, ?leaf.mhash.mapFailure) self.verify(? leaf.mhash.mapFailure, ? leaf.mhash.mapFailure)
proc rootCid*(self: CodexTree, version = CIDv1, dataCodec = DatasetRootCodec): ?!Cid = proc rootCid*(
if (?self.root).len == 0: self: CodexTree,
version = CIDv1,
dataCodec = DatasetRootCodec): ?!Cid =
if (? self.root).len == 0:
return failure "Empty root" return failure "Empty root"
let mhash = ?MultiHash.init(self.mcodec, ?self.root).mapFailure let
mhash = ? MultiHash.init(self.mcodec, ? self.root).mapFailure
Cid.init(version, DatasetRootCodec, mhash).mapFailure Cid.init(version, DatasetRootCodec, mhash).mapFailure
func getLeafCid*( func getLeafCid*(
self: CodexTree, i: Natural, version = CIDv1, dataCodec = BlockCodec self: CodexTree,
): ?!Cid = i: Natural,
version = CIDv1,
dataCodec = BlockCodec): ?!Cid =
if i >= self.leavesCount: if i >= self.leavesCount:
return failure "Invalid leaf index " & $i return failure "Invalid leaf index " & $i
let let
leaf = self.leaves[i] leaf = self.leaves[i]
mhash = ?MultiHash.init($self.mcodec, leaf).mapFailure mhash = ? MultiHash.init($self.mcodec, leaf).mapFailure
Cid.init(version, dataCodec, mhash).mapFailure Cid.init(version, dataCodec, mhash).mapFailure
proc `$`*(self: CodexTree): string = proc `$`*(self: CodexTree): string =
let root = let root = if self.root.isOk: byteutils.toHex(self.root.get) else: "none"
if self.root.isOk: "CodexTree(" &
byteutils.toHex(self.root.get) " root: " & root &
else: ", leavesCount: " & $self.leavesCount &
"none" ", levels: " & $self.levels &
"CodexTree(" & " root: " & root & ", leavesCount: " & $self.leavesCount & ", levels: " & ", mcodec: " & $self.mcodec & " )"
$self.levels & ", mcodec: " & $self.mcodec & " )"
proc `$`*(self: CodexProof): string = proc `$`*(self: CodexProof): string =
"CodexProof(" & " nleaves: " & $self.nleaves & ", index: " & $self.index & ", path: " & "CodexProof(" &
$self.path.mapIt(byteutils.toHex(it)) & ", mcodec: " & $self.mcodec & " )" " nleaves: " & $self.nleaves &
", index: " & $self.index &
", path: " & $self.path.mapIt( byteutils.toHex(it) ) &
", mcodec: " & $self.mcodec & " )"
func compress*(x, y: openArray[byte], key: ByteTreeKey, codec: MultiCodec): ?!ByteHash = func compress*(
x, y: openArray[byte],
key: ByteTreeKey,
mhash: MHash): ?!ByteHash =
## Compress two hashes ## Compress two hashes
## ##
let input = @x & @y & @[key.byte]
let digest = ?MultiHash.digest(codec, input).mapFailure
success digest.digestBytes
func initTree(mcodec: MultiCodec, leaves: openArray[ByteHash]): ?!CodexTree = var digest = newSeq[byte](mhash.size)
mhash.coder(@x & @y & @[ key.byte ], digest)
success digest
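
On both sides of this change the parent preimage is the same: left digest, right digest, then a single key byte marking the node's position. A standalone toy illustration of that layout (not the real hasher):

proc compressInput(x, y: seq[byte]; key: byte): seq[byte] =
  # parent preimage = left digest & right digest & 1 key byte
  x & y & @[key]

let
  left = newSeq[byte](32)
  right = newSeq[byte](32)

assert compressInput(left, right, 0x1'u8).len == 65  # 0x1 = KeyBottomLayer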
func init*(
_: type CodexTree,
mcodec: MultiCodec = Sha256HashCodec,
leaves: openArray[ByteHash]): ?!CodexTree =
if leaves.len == 0: if leaves.len == 0:
return failure "Empty leaves" return failure "Empty leaves"
let let
mhash = ? mcodec.mhash()
compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} = compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} =
compress(x, y, key, mcodec) compress(x, y, key, mhash)
digestSize = ?mcodec.digestSize.mapFailure Zero: ByteHash = newSeq[byte](mhash.size)
Zero: ByteHash = newSeq[byte](digestSize)
if digestSize != leaves[0].len: if mhash.size != leaves[0].len:
return failure "Invalid hash length" return failure "Invalid hash length"
var self = CodexTree(mcodec: mcodec) var
?self.prepare(compressor, Zero, leaves) self = CodexTree(mcodec: mcodec, compress: compressor, zero: Zero)
self.layers = ? merkleTreeWorker(self, leaves, isBottomLayer = true)
success self success self
func init*( func init*(
_: type CodexTree, mcodec: MultiCodec = Sha256HashCodec, leaves: openArray[ByteHash] _: type CodexTree,
): ?!CodexTree = leaves: openArray[MultiHash]): ?!CodexTree =
let tree = ?initTree(mcodec, leaves)
?tree.compute()
success tree
proc init*(
_: type CodexTree,
tp: Taskpool,
mcodec: MultiCodec = Sha256HashCodec,
leaves: seq[ByteHash],
): Future[?!CodexTree] {.async: (raises: [CancelledError]).} =
let tree = ?initTree(mcodec, leaves)
?await tree.compute(tp)
success tree
func init*(_: type CodexTree, leaves: openArray[MultiHash]): ?!CodexTree =
if leaves.len == 0: if leaves.len == 0:
return failure "Empty leaves" return failure "Empty leaves"
let let
mcodec = leaves[0].mcodec mcodec = leaves[0].mcodec
leaves = leaves.mapIt(it.digestBytes) leaves = leaves.mapIt( it.digestBytes )
CodexTree.init(mcodec, leaves) CodexTree.init(mcodec, leaves)
proc init*( func init*(
_: type CodexTree, tp: Taskpool, leaves: seq[MultiHash] _: type CodexTree,
): Future[?!CodexTree] {.async: (raises: [CancelledError]).} = leaves: openArray[Cid]): ?!CodexTree =
if leaves.len == 0: if leaves.len == 0:
return failure "Empty leaves" return failure "Empty leaves"
let let
mcodec = leaves[0].mcodec mcodec = (? leaves[0].mhash.mapFailure).mcodec
leaves = leaves.mapIt(it.digestBytes) leaves = leaves.mapIt( (? it.mhash.mapFailure).digestBytes )
await CodexTree.init(tp, mcodec, leaves)
func init*(_: type CodexTree, leaves: openArray[Cid]): ?!CodexTree =
if leaves.len == 0:
return failure "Empty leaves"
let
mcodec = (?leaves[0].mhash.mapFailure).mcodec
leaves = leaves.mapIt((?it.mhash.mapFailure).digestBytes)
CodexTree.init(mcodec, leaves) CodexTree.init(mcodec, leaves)
proc init*(
_: type CodexTree, tp: Taskpool, leaves: seq[Cid]
): Future[?!CodexTree] {.async: (raises: [CancelledError]).} =
if leaves.len == 0:
return failure("Empty leaves")
let
mcodec = (?leaves[0].mhash.mapFailure).mcodec
leaves = leaves.mapIt((?it.mhash.mapFailure).digestBytes)
await CodexTree.init(tp, mcodec, leaves)
proc fromNodes*( proc fromNodes*(
_: type CodexTree, _: type CodexTree,
mcodec: MultiCodec = Sha256HashCodec, mcodec: MultiCodec = Sha256HashCodec,
nodes: openArray[ByteHash], nodes: openArray[ByteHash],
nleaves: int, nleaves: int): ?!CodexTree =
): ?!CodexTree =
if nodes.len == 0: if nodes.len == 0:
return failure "Empty nodes" return failure "Empty nodes"
let let
digestSize = ?mcodec.digestSize.mapFailure mhash = ? mcodec.mhash()
Zero = newSeq[byte](digestSize) Zero = newSeq[byte](mhash.size)
compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} = compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!ByteHash {.noSideEffect.} =
compress(x, y, key, mcodec) compress(x, y, key, mhash)
if digestSize != nodes[0].len: if mhash.size != nodes[0].len:
return failure "Invalid hash length" return failure "Invalid hash length"
var self = CodexTree(mcodec: mcodec) var
?self.fromNodes(compressor, Zero, nodes, nleaves) self = CodexTree(compress: compressor, zero: Zero, mcodec: mcodec)
layer = nleaves
pos = 0
while pos < nodes.len:
self.layers.add( nodes[pos..<(pos + layer)] )
pos += layer
layer = divUp(layer, 2)
let let
index = Rng.instance.rand(nleaves - 1) index = Rng.instance.rand(nleaves - 1)
proof = ?self.getProof(index) proof = ? self.getProof(index)
if not ?proof.verify(self.leaves[index], ?self.root): # sanity check if not ? proof.verify(self.leaves[index], ? self.root): # sanity check
return failure "Unable to verify tree built from nodes" return failure "Unable to verify tree built from nodes"
success self success self
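
fromNodes expects the flat node sequence to contain every layer back-to-back, leaves first, each following layer half the size rounded up. A self-contained sketch of that slicing arithmetic, mirroring the divUp loop above:

func divUp(a, b: int): int =
  (a + b - 1) div b

func layerSizes(nleaves: int): seq[int] =
  var n = nleaves
  while true:
    result.add n
    if n == 1:
      break
    n = divUp(n, 2)

assert layerSizes(5) == @[5, 3, 2, 1]  # 11 nodes in the flat sequence
assert layerSizes(4) == @[4, 2, 1]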
func init*( func init*(
_: type CodexProof, _: type CodexProof,
mcodec: MultiCodec = Sha256HashCodec, mcodec: MultiCodec = Sha256HashCodec,
index: int, index: int,
nleaves: int, nleaves: int,
nodes: openArray[ByteHash], nodes: openArray[ByteHash]): ?!CodexProof =
): ?!CodexProof =
if nodes.len == 0: if nodes.len == 0:
return failure "Empty nodes" return failure "Empty nodes"
let let
digestSize = ?mcodec.digestSize.mapFailure mhash = ? mcodec.mhash()
Zero = newSeq[byte](digestSize) Zero = newSeq[byte](mhash.size)
compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!seq[byte] {.noSideEffect.} = compressor = proc(x, y: seq[byte], key: ByteTreeKey): ?!seq[byte] {.noSideEffect.} =
compress(x, y, key, mcodec) compress(x, y, key, mhash)
success CodexProof( success CodexProof(
compress: compressor, compress: compressor,
@ -245,5 +252,4 @@ func init*(
mcodec: mcodec, mcodec: mcodec,
index: index, index: index,
nleaves: nleaves, nleaves: nleaves,
path: @nodes, path: @nodes)
)


@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH ## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,194 +9,86 @@
{.push raises: [].} {.push raises: [].}
import std/[bitops, atomics, sequtils] import std/bitops
import stew/assign2
import pkg/questionable/results import pkg/questionable/results
import pkg/taskpools
import pkg/chronos
import pkg/chronos/threadsync
import ../errors import ../errors
import ../utils/sharedbuf
export sharedbuf
template nodeData(
data: openArray[byte], offsets: openArray[int], nodeSize, i, j: int
): openArray[byte] =
## Bytes of the j'th entry of the i'th level in the tree, starting with the
## leaves (at level 0).
let start = (offsets[i] + j) * nodeSize
data.toOpenArray(start, start + nodeSize - 1)
type type
# TODO hash functions don't fail - removing the ?! from this function would CompressFn*[H, K] = proc (x, y: H, key: K): ?!H {.noSideEffect, raises: [].}
# significantly simplify the flow below
CompressFn*[H, K] = proc(x, y: H, key: K): ?!H {.noSideEffect, raises: [].}
CompressData[H, K] = object MerkleTree*[H, K] = ref object of RootObj
fn: CompressFn[H, K] layers* : seq[seq[H]]
nodeSize: int compress*: CompressFn[H, K]
zero: H zero* : H
MerkleTreeObj*[H, K] = object of RootObj
store*: seq[byte]
## Flattened merkle tree where hashes are assumed to be trivial bytes and
## uniform in size.
##
## Each layer of the tree is stored serially starting with the leaves and
## ending with the root.
##
## Because the tree might not be balanced, `layerOffsets` contains the ## Because the tree might not be balanced, `layerOffsets` contains the
## index of the starting point of each level, for easy lookup.
layerOffsets*: seq[int]
## Starting point of each level in the tree, starting from the leaves -
## multiplied by the entry size, this is the offset in the payload where
## the entries of that level start
##
## For example, a tree with 4 leaves will have [0, 4, 6] stored here.
##
## See the nodesPerLevel function, from which this sequence is derived.
compress*: CompressData[H, K]
MerkleTree*[H, K] = ref MerkleTreeObj[H, K]
MerkleProof*[H, K] = ref object of RootObj MerkleProof*[H, K] = ref object of RootObj
index*: int # linear index of the leaf, starting from 0 index* : int # linear index of the leaf, starting from 0
path*: seq[H] # order: from the bottom to the top path* : seq[H] # order: from the bottom to the top
nleaves*: int # number of leaves in the tree (=size of input) nleaves* : int # number of leaves in the tree (=size of input)
compress*: CompressFn[H, K] # compress function compress*: CompressFn[H, K] # compress function
zero*: H # zero value zero* : H # zero value
func levels*[H, K](self: MerkleTree[H, K]): int =
return self.layerOffsets.len
func depth*[H, K](self: MerkleTree[H, K]): int = func depth*[H, K](self: MerkleTree[H, K]): int =
return self.levels() - 1 return self.layers.len - 1
func nodesInLayer(offsets: openArray[int], layer: int): int =
if layer == offsets.high:
1
else:
offsets[layer + 1] - offsets[layer]
func nodesInLayer(self: MerkleTree | MerkleTreeObj, layer: int): int =
self.layerOffsets.nodesInLayer(layer)
func leavesCount*[H, K](self: MerkleTree[H, K]): int = func leavesCount*[H, K](self: MerkleTree[H, K]): int =
return self.nodesInLayer(0) return self.layers[0].len
func nodesPerLevel(nleaves: int): seq[int] = func levels*[H, K](self: MerkleTree[H, K]): int =
## Given a number of leaves, return a seq with the number of nodes at each return self.layers.len
## layer of the tree (from the bottom/leaves to the root)
##
## Ie For a tree of 4 leaves, return `[4, 2, 1]`
if nleaves <= 0:
return @[]
elif nleaves == 1:
return @[1, 1] # leaf and root
var nodes: seq[int] = @[] func leaves*[H, K](self: MerkleTree[H, K]): seq[H] =
var m = nleaves return self.layers[0]
while true:
nodes.add(m)
if m == 1:
break
# Next layer size is ceil(m/2)
m = (m + 1) shr 1
nodes iterator layers*[H, K](self: MerkleTree[H, K]): seq[H] =
for layer in self.layers:
func layerOffsets(nleaves: int): seq[int] = yield layer
## Given a number of leaves, return a seq of the starting offsets of each
## layer in the node store that results from flattening the binary tree
##
## Ie For a tree of 4 leaves, return `[0, 4, 6]`
let nodes = nodesPerLevel(nleaves)
var tot = 0
let offsets = nodes.mapIt:
let cur = tot
tot += it
cur
offsets
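
A worked example for the two helpers above, assuming they are in scope:

assert nodesPerLevel(4) == @[4, 2, 1]  # leaves, middle layer, root
assert layerOffsets(4) == @[0, 4, 6]   # where each layer starts in the store
assert nodesPerLevel(1) == @[1, 1]     # a single leaf still gets a root level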
template nodeData(self: MerkleTreeObj, i, j: int): openArray[byte] =
## Bytes of the j'th node of the i'th level in the tree, starting with the
## leaves (at level 0).
self.store.nodeData(self.layerOffsets, self.compress.nodeSize, i, j)
func layer*[H, K](
self: MerkleTree[H, K], layer: int
): seq[H] {.deprecated: "Expensive".} =
var nodes = newSeq[H](self.nodesInLayer(layer))
for i, h in nodes.mpairs:
assign(h, self[].nodeData(layer, i))
return nodes
func leaves*[H, K](self: MerkleTree[H, K]): seq[H] {.deprecated: "Expensive".} =
self.layer(0)
iterator layers*[H, K](self: MerkleTree[H, K]): seq[H] {.deprecated: "Expensive".} =
for i in 0 ..< self.layerOffsets.len:
yield self.layer(i)
proc layers*[H, K](self: MerkleTree[H, K]): seq[seq[H]] {.deprecated: "Expensive".} =
for l in self.layers():
result.add l
iterator nodes*[H, K](self: MerkleTree[H, K]): H = iterator nodes*[H, K](self: MerkleTree[H, K]): H =
## Iterate over the nodes of each layer starting with the leaves for layer in self.layers:
var node: H for node in layer:
for i in 0 ..< self.layerOffsets.len:
let nodesInLayer = self.nodesInLayer(i)
for j in 0 ..< nodesInLayer:
assign(node, self[].nodeData(i, j))
yield node yield node
func root*[H, K](self: MerkleTree[H, K]): ?!H = func root*[H, K](self: MerkleTree[H, K]): ?!H =
mixin assign let last = self.layers[^1]
if self.layerOffsets.len == 0: if last.len != 1:
return failure "invalid tree" return failure "invalid tree"
var h: H return success last[0]
assign(h, self[].nodeData(self.layerOffsets.high(), 0))
return success h
func getProof*[H, K]( func getProof*[H, K](
self: MerkleTree[H, K], index: int, proof: MerkleProof[H, K] self: MerkleTree[H, K],
): ?!void = index: int,
let depth = self.depth proof: MerkleProof[H, K]): ?!void =
let depth = self.depth
let nleaves = self.leavesCount let nleaves = self.leavesCount
if not (index >= 0 and index < nleaves): if not (index >= 0 and index < nleaves):
return failure "index out of bounds" return failure "index out of bounds"
var path: seq[H] = newSeq[H](depth) var path : seq[H] = newSeq[H](depth)
var k = index var k = index
var m = nleaves var m = nleaves
for i in 0 ..< depth: for i in 0..<depth:
let j = k xor 1 let j = k xor 1
path[i] = if (j < m): self.layers[i][j] else: self.zero
if (j < m): k = k shr 1
assign(path[i], self[].nodeData(i, j))
else:
path[i] = self.compress.zero
k = k shr 1
m = (m + 1) shr 1 m = (m + 1) shr 1
proof.index = index proof.index = index
proof.path = path proof.path = path
proof.nleaves = nleaves proof.nleaves = nleaves
proof.compress = self.compress.fn proof.compress = self.compress
success() success()
func getProof*[H, K](self: MerkleTree[H, K], index: int): ?!MerkleProof[H, K] = func getProof*[H, K](self: MerkleTree[H, K], index: int): ?!MerkleProof[H, K] =
var proof = MerkleProof[H, K]() var
proof = MerkleProof[H, K]()
?self.getProof(index, proof) ? self.getProof(index, proof)
success proof success proof
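
The proof path is driven purely by index arithmetic: at each level the sibling is node `k xor 1`, then the walk climbs with `k shr 1` while the layer width shrinks to `(m + 1) shr 1`. A standalone sketch of just that walk:

func proofIndices(index, nleaves, depth: int): seq[int] =
  var
    k = index
    m = nleaves
  for _ in 0 ..< depth:
    result.add(k xor 1)  # sibling position; == m means "use the zero value"
    k = k shr 1
    m = (m + 1) shr 1

# In a 4-leaf tree (depth 2), the proof for leaf 2 takes leaf 3, then node 0
# of the middle layer:
assert proofIndices(2, 4, 2) == @[3, 0]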
@ -208,189 +100,54 @@ func reconstructRoot*[H, K](proof: MerkleProof[H, K], leaf: H): ?!H =
bottomFlag = K.KeyBottomLayer bottomFlag = K.KeyBottomLayer
for p in proof.path: for p in proof.path:
let oddIndex: bool = (bitand(j, 1) != 0) let oddIndex : bool = (bitand(j,1) != 0)
if oddIndex: if oddIndex:
# the index of the child is odd, so the node itself can't be odd (a bit counterintuitive, yeah :) # the index of the child is odd, so the node itself can't be odd (a bit counterintuitive, yeah :)
h = ?proof.compress(p, h, bottomFlag) h = ? proof.compress( p, h, bottomFlag )
else: else:
if j == m - 1: if j == m - 1:
# single child => odd node # single child => odd node
h = ?proof.compress(h, p, K(bottomFlag.ord + 2)) h = ? proof.compress( h, p, K(bottomFlag.ord + 2) )
else: else:
# even node # even node
h = ?proof.compress(h, p, bottomFlag) h = ? proof.compress( h , p, bottomFlag )
bottomFlag = K.KeyNone bottomFlag = K.KeyNone
j = j shr 1 j = j shr 1
m = (m + 1) shr 1 m = (m+1) shr 1
return success h return success h
func verify*[H, K](proof: MerkleProof[H, K], leaf: H, root: H): ?!bool = func verify*[H, K](proof: MerkleProof[H, K], leaf: H, root: H): ?!bool =
success bool(root == ?proof.reconstructRoot(leaf)) success bool(root == ? proof.reconstructRoot(leaf))
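
reconstructRoot replays the same walk upward: at odd positions the path node goes on the left, at even positions on the right, using the odd-node key when the node is last in an odd-width layer. A toy trace with string "hashes" and a concatenating compress, assuming nothing beyond the stdlib:

proc toyCompress(x, y: string; bottom: bool): string =
  (if bottom: "B(" else: "N(") & x & "," & y & ")"

# Proof for leaf 1 of 4: path = [leaf 0, right-hand inner node].
var h = toyCompress("leaf0", "leaf1", bottom = true) # index 1 is odd: sibling goes left
let root = toyCompress(h, "inner1", bottom = false)  # index 0 is even: sibling goes right
assert root == "N(B(leaf0,leaf1),inner1)"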
func fromNodes*[H, K]( func merkleTreeWorker*[H, K](
self: MerkleTree[H, K], self: MerkleTree[H, K],
compressor: CompressFn, xs: openArray[H],
zero: H, isBottomLayer: static bool): ?!seq[seq[H]] =
nodes: openArray[H],
nleaves: int,
): ?!void =
mixin assign
if nodes.len < 2: # At least leaf and root let a = low(xs)
return failure "Not enough nodes" let b = high(xs)
let m = b - a + 1
if nleaves == 0:
return failure "No leaves"
self.compress = CompressData[H, K](fn: compressor, nodeSize: nodes[0].len, zero: zero)
self.layerOffsets = layerOffsets(nleaves)
if self.layerOffsets[^1] + 1 != nodes.len:
return failure "bad node count"
self.store = newSeqUninit[byte](nodes.len * self.compress.nodeSize)
for i in 0 ..< nodes.len:
assign(
self[].store.toOpenArray(
i * self.compress.nodeSize, (i + 1) * self.compress.nodeSize - 1
),
nodes[i],
)
success()
func merkleTreeWorker[H, K](
store: var openArray[byte],
offsets: openArray[int],
compress: CompressData[H, K],
layer: int,
isBottomLayer: static bool,
): ?!void =
## Worker used to compute the merkle tree from the leaves that are assumed to
## already be stored at the beginning of the `store`, as done by `prepare`.
# Throughout, we use `assign` to convert from H to bytes and back, assuming
# this assignment can be done somewhat efficiently (ie memcpy) - because
# the code must work with multihash where len(H) can differ, we cannot
# simply use a fixed-size array here.
mixin assign
template nodeData(i, j: int): openArray[byte] =
# Pick out the bytes of node j in layer i
store.nodeData(offsets, compress.nodeSize, i, j)
let m = offsets.nodesInLayer(layer)
when not isBottomLayer: when not isBottomLayer:
if m == 1: if m == 1:
return success() return success @[ @xs ]
let halfn: int = m div 2 let halfn: int = m div 2
let n: int = 2 * halfn let n : int = 2 * halfn
let isOdd: bool = (n != m) let isOdd: bool = (n != m)
# Because the compression function we work with works with H and not bytes, var ys: seq[H]
# we need to extract H from the raw data - a little abstraction tax that if not isOdd:
# ensures that properties like alignment of H are respected. ys = newSeq[H](halfn)
var a, b, tmp: H else:
ys = newSeq[H](halfn + 1)
for i in 0 ..< halfn: for i in 0..<halfn:
const key = when isBottomLayer: K.KeyBottomLayer else: K.KeyNone const key = when isBottomLayer: K.KeyBottomLayer else: K.KeyNone
ys[i] = ? self.compress( xs[a + 2 * i], xs[a + 2 * i + 1], key = key )
assign(a, nodeData(layer, i * 2))
assign(b, nodeData(layer, i * 2 + 1))
tmp = ?compress.fn(a, b, key = key)
assign(nodeData(layer + 1, i), tmp)
if isOdd: if isOdd:
const key = when isBottomLayer: K.KeyOddAndBottomLayer else: K.KeyOdd const key = when isBottomLayer: K.KeyOddAndBottomLayer else: K.KeyOdd
ys[halfn] = ? self.compress( xs[n], self.zero, key = key )
assign(a, nodeData(layer, n)) success @[ @xs ] & ? self.merkleTreeWorker(ys, isBottomLayer = false)
tmp = ?compress.fn(a, compress.zero, key = key)
assign(nodeData(layer + 1, halfn), tmp)
merkleTreeWorker(store, offsets, compress, layer + 1, false)
proc merkleTreeWorker[H, K](
store: SharedBuf[byte],
offsets: SharedBuf[int],
compress: ptr CompressData[H, K],
signal: ThreadSignalPtr,
): bool =
defer:
discard signal.fireSync()
let res = merkleTreeWorker(
store.toOpenArray(), offsets.toOpenArray(), compress[], 0, isBottomLayer = true
)
return res.isOk()
func prepare*[H, K](
self: MerkleTree[H, K], compressor: CompressFn, zero: H, leaves: openArray[H]
): ?!void =
## Prepare the instance for computing the merkle tree of the given leaves using
## the given compression function. After preparation, `compute` should be
## called to perform the actual computation. `leaves` will be copied into the
## tree so they can be freed after the call.
if leaves.len == 0:
return failure "No leaves"
self.compress =
CompressData[H, K](fn: compressor, nodeSize: leaves[0].len, zero: zero)
self.layerOffsets = layerOffsets(leaves.len)
self.store = newSeqUninit[byte]((self.layerOffsets[^1] + 1) * self.compress.nodeSize)
for j in 0 ..< leaves.len:
assign(self[].nodeData(0, j), leaves[j])
return success()
proc compute*[H, K](self: MerkleTree[H, K]): ?!void =
merkleTreeWorker(
self.store, self.layerOffsets, self.compress, 0, isBottomLayer = true
)
proc compute*[H, K](
self: MerkleTree[H, K], tp: Taskpool
): Future[?!void] {.async: (raises: []).} =
if tp.numThreads == 1:
# With a single thread, there's no point creating a separate task
return self.compute()
# TODO this signal would benefit from reuse across computations
without signal =? ThreadSignalPtr.new():
return failure("Unable to create thread signal")
defer:
signal.close().expect("closing once works")
let res = tp.spawn merkleTreeWorker(
SharedBuf.view(self.store),
SharedBuf.view(self.layerOffsets),
addr self.compress,
signal,
)
# To support cancellation, we'd have to ensure the task we posted to taskpools
# exits early - since we're not doing that, block cancellation attempts
try:
await noCancel signal.wait()
except AsyncError as exc:
# Since we initialized the signal, the OS or chronos is misbehaving. In any
# case, it would mean the task is still running, which would cause a memory
# violation if we let it run - panic instead
raiseAssert "Could not wait for signal, was it initialized? " & exc.msg
if not res.sync():
return failure("merkle tree task failed")
return success()
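
A usage sketch for the threaded build path, assuming this module, chronos, questionable and taskpools are in scope; the thread count and the zero-digest leaves are illustrative:

import std/sequtils
import pkg/chronos
import pkg/taskpools

proc buildThreaded() {.async.} =
  let
    tp = Taskpool.new(numThreads = 4)
    leaves = newSeqWith(1024, newSeq[byte](32))  # illustrative zero digests
  defer:
    tp.shutdown()
  without tree =? (await CodexTree.init(tp, Sha256HashCodec, leaves)), err:
    raise newException(CatchableError, "tree build failed: " & err.msg)
  echo tree.rootCid()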


@ -0,0 +1,148 @@
## Nim-Codex
## Copyright (c) 2023 Status Research & Development GmbH
## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
## * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.
{.push raises: [].}
import std/sequtils
import pkg/poseidon2
import pkg/constantine/math/io/io_fields
import pkg/constantine/platforms/abstractions
import pkg/questionable/results
import ../utils
import ../rng
import ./merkletree
export merkletree, poseidon2
const
KeyNoneF = F.fromhex("0x0")
KeyBottomLayerF = F.fromhex("0x1")
KeyOddF = F.fromhex("0x2")
KeyOddAndBottomLayerF = F.fromhex("0x3")
Poseidon2Zero* = zero
type
Bn254Fr* = F
Poseidon2Hash* = Bn254Fr
PoseidonKeysEnum* = enum # can't use non-ordinals as enum values
KeyNone
KeyBottomLayer
KeyOdd
KeyOddAndBottomLayer
Poseidon2Tree* = MerkleTree[Poseidon2Hash, PoseidonKeysEnum]
Poseidon2Proof* = MerkleProof[Poseidon2Hash, PoseidonKeysEnum]
proc `$`*(self: Poseidon2Tree): string =
let root = if self.root.isOk: self.root.get.toHex else: "none"
"Poseidon2Tree(" &
" root: " & root &
", leavesCount: " & $self.leavesCount &
", levels: " & $self.levels & " )"
proc `$`*(self: Poseidon2Proof): string =
"Poseidon2Proof(" &
" nleaves: " & $self.nleaves &
", index: " & $self.index &
", path: " & $self.path.mapIt( it.toHex ) & " )"
func toArray32*(bytes: openArray[byte]): array[32, byte] =
result[0..<bytes.len] = bytes[0..<bytes.len]
converter toKey*(key: PoseidonKeysEnum): Poseidon2Hash =
case key:
of KeyNone: KeyNoneF
of KeyBottomLayer: KeyBottomLayerF
of KeyOdd: KeyOddF
of KeyOddAndBottomLayer: KeyOddAndBottomLayerF
func init*(
_: type Poseidon2Tree,
leaves: openArray[Poseidon2Hash]): ?!Poseidon2Tree =
if leaves.len == 0:
return failure "Empty leaves"
let
compressor = proc(
x, y: Poseidon2Hash,
key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
success compress( x, y, key.toKey )
var
self = Poseidon2Tree(compress: compressor, zero: Poseidon2Zero)
self.layers = ? merkleTreeWorker(self, leaves, isBottomLayer = true)
success self
func init*(
_: type Poseidon2Tree,
leaves: openArray[array[31, byte]]): ?!Poseidon2Tree =
Poseidon2Tree.init(
leaves.mapIt( Poseidon2Hash.fromBytes(it) ))
proc fromNodes*(
_: type Poseidon2Tree,
nodes: openArray[Poseidon2Hash],
nleaves: int): ?!Poseidon2Tree =
if nodes.len == 0:
return failure "Empty nodes"
let
compressor = proc(
x, y: Poseidon2Hash,
key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
success compress( x, y, key.toKey )
var
self = Poseidon2Tree(compress: compressor, zero: zero)
layer = nleaves
pos = 0
while pos < nodes.len:
self.layers.add( nodes[pos..<(pos + layer)] )
pos += layer
layer = divUp(layer, 2)
let
index = Rng.instance.rand(nleaves - 1)
proof = ? self.getProof(index)
if not ? proof.verify(self.leaves[index], ? self.root): # sanity check
return failure "Unable to verify tree built from nodes"
success self
func init*(
_: type Poseidon2Proof,
index: int,
nleaves: int,
nodes: openArray[Poseidon2Hash]): ?!Poseidon2Proof =
if nodes.len == 0:
return failure "Empty nodes"
let
compressor = proc(
x, y: Poseidon2Hash,
key: PoseidonKeysEnum): ?!Poseidon2Hash {.noSideEffect.} =
success compress( x, y, key.toKey )
success Poseidon2Proof(
compress: compressor,
zero: Poseidon2Zero,
index: index,
nleaves: nleaves,
path: @nodes)
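
A minimal end-to-end sketch for the Poseidon2 instantiation, assuming this module and questionable/results are in scope; the hex leaves are illustrative field elements:

import pkg/questionable/results

let
  leaves = @[F.fromhex("0x1"), F.fromhex("0x2"), F.fromhex("0x3")]
  tree = Poseidon2Tree.init(leaves).tryGet()
  proof = tree.getProof(1).tryGet()

assert proof.verify(leaves[1], tree.root().tryGet()).tryGet()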


@ -1,2 +0,0 @@
const CodecExts =
[("codex-manifest", 0xCD01), ("codex-block", 0xCD02), ("codex-root", 0xCD03)]


@ -1,19 +0,0 @@
import blscurve/bls_public_exports
import pkg/constantine/hashes
proc sha2_256hash_constantine(data: openArray[byte], output: var openArray[byte]) =
# Using Constantine's SHA256 instead of mhash for optimal performance on 32-byte merkle node hashing
# See: https://github.com/logos-storage/logos-storage-nim/issues/1162
if len(output) > 0:
let digest = hashes.sha256.hash(data)
copyMem(addr output[0], addr digest[0], 32)
const Sha2256MultiHash* = MHash(
mcodec: multiCodec("sha2-256"),
size: sha256.sizeDigest,
coder: sha2_256hash_constantine,
)
const HashExts = [
# override sha2-256 hash function
Sha2256MultiHash
]
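
The override plugs into multihash through the MHash coder contract: the caller allocates the output buffer and the coder fills it in place. A small sketch, assuming the constant above is in scope; the input bytes are arbitrary:

var digest = newSeq[byte](Sha2256MultiHash.size)
Sha2256MultiHash.coder([byte 1, 2, 3], digest)
assert digest.len == 32  # sha2-256 digest written in place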


@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH ## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -9,17 +9,16 @@
const const
# Namespaces # Namespaces
CodexMetaNamespace* = "meta" # meta info stored here CodexMetaNamespace* = "meta" # meta info stored here
CodexRepoNamespace* = "repo" # repository namespace, blocks and manifests are subkeys CodexRepoNamespace* = "repo" # repository namespace, blocks and manifests are subkeys
CodexBlockTotalNamespace* = CodexMetaNamespace & "/total" CodexBlockTotalNamespace* = CodexMetaNamespace & "/total" # number of blocks in the repo
# number of blocks in the repo CodexBlocksNamespace* = CodexRepoNamespace & "/blocks" # blocks namespace
CodexBlocksNamespace* = CodexRepoNamespace & "/blocks" # blocks namespace
CodexManifestNamespace* = CodexRepoNamespace & "/manifests" # manifest namespace CodexManifestNamespace* = CodexRepoNamespace & "/manifests" # manifest namespace
CodexBlocksTtlNamespace* = # Cid TTL CodexBlocksTtlNamespace* = # Cid TTL
CodexMetaNamespace & "/ttl" CodexMetaNamespace & "/ttl"
CodexBlockProofNamespace* = # Cid and Proof CodexBlockProofNamespace* = # Cid and Proof
CodexMetaNamespace & "/proof" CodexMetaNamespace & "/proof"
CodexDhtNamespace* = "dht" # Dht namespace CodexDhtNamespace* = "dht" # Dht namespace
CodexDhtProvidersNamespace* = # Dht providers namespace CodexDhtProvidersNamespace* = # Dht providers namespace
CodexDhtNamespace & "/providers" CodexDhtNamespace & "/providers"
CodexQuotaNamespace* = CodexMetaNamespace & "/quota" # quota's namespace CodexQuotaNamespace* = CodexMetaNamespace & "/quota" # quota's namespace
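
These constants compose into datastore key prefixes; a quick sanity check, assuming the block above is in scope:

assert CodexBlocksNamespace == "repo/blocks"
assert CodexBlocksTtlNamespace == "meta/ttl"  # a block's TTL key lives under "meta/ttl/<cid>"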


@ -1,432 +0,0 @@
# Copyright (c) 2019-2023 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
{.push raises: [].}
import
std/[options, os, strutils, times, net, atomics],
stew/[objects],
nat_traversal/[miniupnpc, natpmp],
json_serialization/std/net,
results
import pkg/chronos
import pkg/chronicles
import pkg/libp2p
import ./utils
import ./utils/natutils
import ./utils/addrutils
const
UPNP_TIMEOUT = 200 # ms
PORT_MAPPING_INTERVAL = 20 * 60 # seconds
NATPMP_LIFETIME = 60 * 60 # in seconds, must be longer than PORT_MAPPING_INTERVAL
type PortMappings* = object
internalTcpPort: Port
externalTcpPort: Port
internalUdpPort: Port
externalUdpPort: Port
description: string
type PortMappingArgs =
tuple[strategy: NatStrategy, tcpPort, udpPort: Port, description: string]
type NatConfig* = object
case hasExtIp*: bool
of true: extIp*: IpAddress
of false: nat*: NatStrategy
var
upnp {.threadvar.}: Miniupnp
npmp {.threadvar.}: NatPmp
strategy = NatStrategy.NatNone
natClosed: Atomic[bool]
extIp: Option[IpAddress]
activeMappings: seq[PortMappings]
natThreads: seq[Thread[PortMappingArgs]] = @[]
logScope:
topics = "nat"
type PrefSrcStatus = enum
NoRoutingInfo
PrefSrcIsPublic
PrefSrcIsPrivate
BindAddressIsPublic
BindAddressIsPrivate
## Also does threadvar initialisation.
## Must be called before redirectPorts() in each thread.
proc getExternalIP*(natStrategy: NatStrategy, quiet = false): Option[IpAddress] =
var externalIP: IpAddress
if natStrategy == NatStrategy.NatAny or natStrategy == NatStrategy.NatUpnp:
if upnp == nil:
upnp = newMiniupnp()
upnp.discoverDelay = UPNP_TIMEOUT
let dres = upnp.discover()
if dres.isErr:
debug "UPnP", msg = dres.error
else:
var
msg: cstring
canContinue = true
case upnp.selectIGD()
of IGDNotFound:
msg = "Internet Gateway Device not found. Giving up."
canContinue = false
of IGDFound:
msg = "Internet Gateway Device found."
of IGDNotConnected:
msg = "Internet Gateway Device found but it's not connected. Trying anyway."
of NotAnIGD:
msg =
"Some device found, but it's not recognised as an Internet Gateway Device. Trying anyway."
of IGDIpNotRoutable:
msg =
"Internet Gateway Device found and is connected, but with a reserved or non-routable IP. Trying anyway."
if not quiet:
debug "UPnP", msg
if canContinue:
let ires = upnp.externalIPAddress()
if ires.isErr:
debug "UPnP", msg = ires.error
else:
# if we got this far, UPnP is working and we don't need to try NAT-PMP
try:
externalIP = parseIpAddress(ires.value)
strategy = NatStrategy.NatUpnp
return some(externalIP)
except ValueError as e:
error "parseIpAddress() exception", err = e.msg
return
if natStrategy == NatStrategy.NatAny or natStrategy == NatStrategy.NatPmp:
if npmp == nil:
npmp = newNatPmp()
let nres = npmp.init()
if nres.isErr:
debug "NAT-PMP", msg = nres.error
else:
let nires = npmp.externalIPAddress()
if nires.isErr:
debug "NAT-PMP", msg = nires.error
else:
try:
externalIP = parseIpAddress($(nires.value))
strategy = NatStrategy.NatPmp
return some(externalIP)
except ValueError as e:
error "parseIpAddress() exception", err = e.msg
return
# This queries the routing table to get the "preferred source" attribute and
# checks if it's a public IP. If so, then it's our public IP.
#
# Furthermore, we check if the bind address (user provided, or a "0.0.0.0"
# default) is a public IP. That's a long shot, because code paths involving a
# user-provided bind address are not supposed to get here.
proc getRoutePrefSrc(bindIp: IpAddress): (Option[IpAddress], PrefSrcStatus) =
let bindAddress = initTAddress(bindIp, Port(0))
if bindAddress.isAnyLocal():
let ip = getRouteIpv4()
if ip.isErr():
# No route was found, log error and continue without IP.
error "No routable IP address found, check your network connection",
error = ip.error
return (none(IpAddress), NoRoutingInfo)
elif ip.get().isGlobalUnicast():
return (some(ip.get()), PrefSrcIsPublic)
else:
return (none(IpAddress), PrefSrcIsPrivate)
elif bindAddress.isGlobalUnicast():
return (some(bindIp), BindAddressIsPublic)
else:
return (none(IpAddress), BindAddressIsPrivate)
# Try to detect a public IP assigned to this host, before trying NAT traversal.
proc getPublicRoutePrefSrcOrExternalIP*(
natStrategy: NatStrategy, bindIp: IpAddress, quiet = true
): Option[IpAddress] =
let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)
case prefSrcStatus
of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
return prefSrcIp
of PrefSrcIsPrivate, BindAddressIsPrivate:
let extIp = getExternalIP(natStrategy, quiet)
if extIp.isSome:
return some(extIp.get)
proc doPortMapping(
strategy: NatStrategy, tcpPort, udpPort: Port, description: string
): Option[(Port, Port)] {.gcsafe.} =
var
extTcpPort: Port
extUdpPort: Port
if strategy == NatStrategy.NatUpnp:
for t in [(tcpPort, UPNPProtocol.TCP), (udpPort, UPNPProtocol.UDP)]:
let
(port, protocol) = t
pmres = upnp.addPortMapping(
externalPort = $port,
protocol = protocol,
internalHost = upnp.lanAddr,
internalPort = $port,
desc = description,
leaseDuration = 0,
)
if pmres.isErr:
error "UPnP port mapping", msg = pmres.error, port
return
else:
# let's check it
let cres =
upnp.getSpecificPortMapping(externalPort = $port, protocol = protocol)
if cres.isErr:
warn "UPnP port mapping check failed. Assuming the check itself is broken and the port mapping was done.",
msg = cres.error
info "UPnP: added port mapping",
externalPort = port, internalPort = port, protocol = protocol
case protocol
of UPNPProtocol.TCP:
extTcpPort = port
of UPNPProtocol.UDP:
extUdpPort = port
elif strategy == NatStrategy.NatPmp:
for t in [(tcpPort, NatPmpProtocol.TCP), (udpPort, NatPmpProtocol.UDP)]:
let
(port, protocol) = t
pmres = npmp.addPortMapping(
eport = port.cushort,
iport = port.cushort,
protocol = protocol,
lifetime = NATPMP_LIFETIME,
)
if pmres.isErr:
error "NAT-PMP port mapping", msg = pmres.error, port
return
else:
let extPort = Port(pmres.value)
info "NAT-PMP: added port mapping",
externalPort = extPort, internalPort = port, protocol = protocol
case protocol
of NatPmpProtocol.TCP:
extTcpPort = extPort
of NatPmpProtocol.UDP:
extUdpPort = extPort
return some((extTcpPort, extUdpPort))
proc repeatPortMapping(args: PortMappingArgs) {.thread, raises: [ValueError].} =
ignoreSignalsInThread()
let
(strategy, tcpPort, udpPort, description) = args
interval = initDuration(seconds = PORT_MAPPING_INTERVAL)
sleepDuration = 1_000 # in ms, also the maximum delay after pressing Ctrl-C
var lastUpdate = now()
# We can't use copies of Miniupnp and NatPmp objects in this thread, because they share
# C pointers with other instances that have already been garbage collected, so
# we use threadvars instead and initialise them again with getExternalIP(),
# even though we don't need the external IP's value.
let ipres = getExternalIP(strategy, quiet = true)
if ipres.isSome:
while natClosed.load() == false:
let
# we're being silly here with this channel polling because we can't
# select on Nim channels like on Go ones
currTime = now()
if currTime >= (lastUpdate + interval):
discard doPortMapping(strategy, tcpPort, udpPort, description)
lastUpdate = currTime
sleep(sleepDuration)
proc stopNatThreads() {.noconv.} =
# stop the thread
debug "Stopping NAT port mapping renewal threads"
try:
natClosed.store(true)
joinThreads(natThreads)
except Exception as exc:
warn "Failed to stop NAT port mapping renewal thread", exc = exc.msg
# delete our port mappings
# FIXME: if the initial port mapping failed because it already existed for the
# required external port, we should not delete it. It might have been set up
# by another program.
# In Windows, a new thread is created for the signal handler, so we need to
# initialise our threadvars again.
let ipres = getExternalIP(strategy, quiet = true)
if ipres.isSome:
if strategy == NatStrategy.NatUpnp:
for entry in activeMappings:
for t in [
(entry.externalTcpPort, entry.internalTcpPort, UPNPProtocol.TCP),
(entry.externalUdpPort, entry.internalUdpPort, UPNPProtocol.UDP),
]:
let
(eport, iport, protocol) = t
pmres = upnp.deletePortMapping(externalPort = $eport, protocol = protocol)
if pmres.isErr:
error "UPnP port mapping deletion", msg = pmres.error
else:
debug "UPnP: deleted port mapping",
externalPort = eport, internalPort = iport, protocol = protocol
elif strategy == NatStrategy.NatPmp:
for entry in activeMappings:
for t in [
(entry.externalTcpPort, entry.internalTcpPort, NatPmpProtocol.TCP),
(entry.externalUdpPort, entry.internalUdpPort, NatPmpProtocol.UDP),
]:
let
(eport, iport, protocol) = t
pmres = npmp.deletePortMapping(
eport = eport.cushort, iport = iport.cushort, protocol = protocol
)
if pmres.isErr:
error "NAT-PMP port mapping deletion", msg = pmres.error
else:
debug "NAT-PMP: deleted port mapping",
externalPort = eport, internalPort = iport, protocol = protocol
proc redirectPorts*(
strategy: NatStrategy, tcpPort, udpPort: Port, description: string
): Option[(Port, Port)] =
result = doPortMapping(strategy, tcpPort, udpPort, description)
if result.isSome:
let (externalTcpPort, externalUdpPort) = result.get()
# needed by NAT-PMP on port mapping deletion
# Port mapping works. Let's launch a thread that repeats it, in case the
# NAT-PMP lease expires or the router is rebooted and forgets all about
# these mappings.
activeMappings.add(
PortMappings(
internalTcpPort: tcpPort,
externalTcpPort: externalTcpPort,
internalUdpPort: udpPort,
externalUdpPort: externalUdpPort,
description: description,
)
)
try:
natThreads.add(Thread[PortMappingArgs]())
natThreads[^1].createThread(
repeatPortMapping, (strategy, externalTcpPort, externalUdpPort, description)
)
# atexit() in disguise
if natThreads.len == 1:
# we should register the thread termination function only once
addQuitProc(stopNatThreads)
except Exception as exc:
warn "Failed to create NAT port mapping renewal thread", exc = exc.msg
proc setupNat*(
natStrategy: NatStrategy, tcpPort, udpPort: Port, clientId: string
): tuple[ip: Option[IpAddress], tcpPort, udpPort: Option[Port]] =
## Setup NAT port mapping and get external IP address.
## If any of this fails, we don't return any IP address but do return the
## original ports as best effort.
## TODO: Allow for tcp or udp port mapping to be optional.
if extIp.isNone:
extIp = getExternalIP(natStrategy)
if extIp.isSome:
let ip = extIp.get
let extPorts = (
{.gcsafe.}:
redirectPorts(
strategy, tcpPort = tcpPort, udpPort = udpPort, description = clientId
)
)
if extPorts.isSome:
let (extTcpPort, extUdpPort) = extPorts.get()
(ip: some(ip), tcpPort: some(extTcpPort), udpPort: some(extUdpPort))
else:
warn "UPnP/NAT-PMP available but port forwarding failed"
(ip: none(IpAddress), tcpPort: some(tcpPort), udpPort: some(udpPort))
else:
warn "UPnP/NAT-PMP not available"
(ip: none(IpAddress), tcpPort: some(tcpPort), udpPort: some(udpPort))
proc setupAddress*(
natConfig: NatConfig, bindIp: IpAddress, tcpPort, udpPort: Port, clientId: string
): tuple[ip: Option[IpAddress], tcpPort, udpPort: Option[Port]] {.gcsafe.} =
## Set-up of the external address via any of the ways as configured in
## `NatConfig`. In case all fails an error is logged and the bind ports are
## selected also as external ports, as best effort and in hope that the
## external IP can be figured out by other means at a later stage.
## TODO: Allow for tcp or udp bind ports to be optional.
if natConfig.hasExtIp:
# any required port redirection must be done by hand
return (some(natConfig.extIp), some(tcpPort), some(udpPort))
case natConfig.nat
of NatStrategy.NatAny:
let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)
case prefSrcStatus
of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
return (prefSrcIp, some(tcpPort), some(udpPort))
of PrefSrcIsPrivate, BindAddressIsPrivate:
return setupNat(natConfig.nat, tcpPort, udpPort, clientId)
of NatStrategy.NatNone:
let (prefSrcIp, prefSrcStatus) = getRoutePrefSrc(bindIp)
case prefSrcStatus
of NoRoutingInfo, PrefSrcIsPublic, BindAddressIsPublic:
return (prefSrcIp, some(tcpPort), some(udpPort))
of PrefSrcIsPrivate:
error "No public IP address found. Should not use --nat:none option"
return (none(IpAddress), some(tcpPort), some(udpPort))
of BindAddressIsPrivate:
error "Bind IP is not a public IP address. Should not use --nat:none option"
return (none(IpAddress), some(tcpPort), some(udpPort))
of NatStrategy.NatUpnp, NatStrategy.NatPmp:
return setupNat(natConfig.nat, tcpPort, udpPort, clientId)
proc nattedAddress*(
natConfig: NatConfig, addrs: seq[MultiAddress], udpPort: Port
): tuple[libp2p, discovery: seq[MultiAddress]] =
## Takes a NAT configuration, sequence of multiaddresses and UDP port and returns:
## - Modified multiaddresses with NAT-mapped addresses for libp2p
## - Discovery addresses with NAT-mapped UDP ports
var discoveryAddrs = newSeq[MultiAddress](0)
let newAddrs = addrs.mapIt:
block:
# Extract IP address and port from the multiaddress
let (ipPart, port) = getAddressAndPort(it)
if ipPart.isSome and port.isSome:
# Try to setup NAT mapping for the address
let (newIP, tcp, udp) =
setupAddress(natConfig, ipPart.get, port.get, udpPort, "codex")
if newIP.isSome:
# NAT mapping successful - add discovery address with mapped UDP port
discoveryAddrs.add(getMultiAddrWithIPAndUDPPort(newIP.get, udp.get))
# Remap original address with NAT IP and TCP port
it.remapAddr(ip = newIP, port = tcp)
else:
# NAT mapping failed - use original address
echo "Failed to get external IP, using original address", it
discoveryAddrs.add(getMultiAddrWithIPAndUDPPort(ipPart.get, udpPort))
it
else:
# Invalid multiaddress format - return as is
it
(newAddrs, discoveryAddrs)
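
A usage sketch for the address mapping helper above, assuming this module and libp2p are in scope; the bind address and ports are illustrative:

let
  cfg = NatConfig(hasExtIp: false, nat: NatStrategy.NatAny)
  addrs = @[MultiAddress.init("/ip4/0.0.0.0/tcp/8070").expect("valid multiaddress")]
  (libp2pAddrs, discoveryAddrs) = nattedAddress(cfg, addrs, Port(8090))
# libp2pAddrs carries the NAT-remapped TCP endpoints, discoveryAddrs the UDP ones.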


@ -1,4 +1,4 @@
## Logos Storage ## Nim-Codex
## Copyright (c) 2021 Status Research & Development GmbH ## Copyright (c) 2021 Status Research & Development GmbH
## Licensed under either of ## Licensed under either of
## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE)) ## * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
@ -13,12 +13,12 @@ import std/options
import std/sequtils import std/sequtils
import std/strformat import std/strformat
import std/sugar import std/sugar
import times import std/cpuinfo
import pkg/taskpools
import pkg/questionable import pkg/questionable
import pkg/questionable/results import pkg/questionable/results
import pkg/chronos import pkg/chronos
import pkg/poseidon2
import pkg/libp2p/[switch, multicodec, multihash] import pkg/libp2p/[switch, multicodec, multihash]
import pkg/libp2p/stream/bufferstream import pkg/libp2p/stream/bufferstream
@ -26,8 +26,10 @@ import pkg/libp2p/stream/bufferstream
# TODO: remove once exported by libp2p # TODO: remove once exported by libp2p
import pkg/libp2p/routing_record import pkg/libp2p/routing_record
import pkg/libp2p/signed_envelope import pkg/libp2p/signed_envelope
import pkg/taskpools
import ./chunker import ./chunker
import ./slots
import ./clock import ./clock
import ./blocktype as bt import ./blocktype as bt
import ./manifest import ./manifest
@ -35,12 +37,15 @@ import ./merkletree
import ./stores import ./stores
import ./blockexchange import ./blockexchange
import ./streams import ./streams
import ./erasure
import ./discovery import ./discovery
import ./contracts
import ./indexingstrategy
import ./utils import ./utils
import ./errors import ./errors
import ./logutils import ./logutils
import ./utils/safeasynciter import ./utils/poseidon2digest
import ./utils/trackedfutures import ./utils/asynciter
export logutils export logutils
@ -48,27 +53,30 @@ logScope:
topics = "codex node" topics = "codex node"
const const
DefaultFetchBatch = 1024 FetchBatch = 200
MaxOnBatchBlocks = 128
BatchRefillThreshold = 0.75 # Refill when 75% of window completes
type type
Contracts* = tuple
client: ?ClientInteractions
host: ?HostInteractions
validator: ?ValidatorInteractions
CodexNode* = object CodexNode* = object
switch: Switch switch: Switch
networkId: PeerId networkId: PeerId
networkStore: NetworkStore networkStore: NetworkStore
engine: BlockExcEngine engine: BlockExcEngine
prover: ?Prover
discovery: Discovery discovery: Discovery
contracts*: Contracts
clock*: Clock clock*: Clock
taskPool: Taskpool storage*: Contracts
trackedFutures: TrackedFutures taskpool*: Taskpool
CodexNodeRef* = ref CodexNode CodexNodeRef* = ref CodexNode
OnManifest* = proc(cid: Cid, manifest: Manifest): void {.gcsafe, raises: [].} OnManifest* = proc(cid: Cid, manifest: Manifest): void {.gcsafe, raises: [].}
BatchProc* = BatchProc* = proc(blocks: seq[bt.Block]): Future[?!void] {.gcsafe, raises: [].}
proc(blocks: seq[bt.Block]): Future[?!void] {.async: (raises: [CancelledError]).}
OnBlockStoredProc = proc(chunk: seq[byte]): void {.gcsafe, raises: [].}
func switch*(self: CodexNodeRef): Switch = func switch*(self: CodexNodeRef): Switch =
return self.switch return self.switch
@ -83,8 +91,8 @@ func discovery*(self: CodexNodeRef): Discovery =
return self.discovery return self.discovery
proc storeManifest*( proc storeManifest*(
self: CodexNodeRef, manifest: Manifest self: CodexNodeRef,
): Future[?!bt.Block] {.async.} = manifest: Manifest): Future[?!bt.Block] {.async.} =
without encodedVerifiable =? manifest.encode(), err: without encodedVerifiable =? manifest.encode(), err:
trace "Unable to encode manifest" trace "Unable to encode manifest"
return failure(err) return failure(err)
@ -100,8 +108,8 @@ proc storeManifest*(
success blk success blk
proc fetchManifest*( proc fetchManifest*(
self: CodexNodeRef, cid: Cid self: CodexNodeRef,
): Future[?!Manifest] {.async: (raises: [CancelledError]).} = cid: Cid): Future[?!Manifest] {.async.} =
## Fetch and decode a manifest block ## Fetch and decode a manifest block
## ##
@ -124,32 +132,34 @@ proc fetchManifest*(
return manifest.success return manifest.success
proc findPeer*(self: CodexNodeRef, peerId: PeerId): Future[?PeerRecord] {.async.} = proc findPeer*(
self: CodexNodeRef,
peerId: PeerId): Future[?PeerRecord] {.async.} =
## Find peer using the discovery service from the given CodexNode ## Find peer using the discovery service from the given CodexNode
## ##
return await self.discovery.findPeer(peerId) return await self.discovery.findPeer(peerId)
proc connect*( proc connect*(
self: CodexNodeRef, peerId: PeerId, addrs: seq[MultiAddress] self: CodexNodeRef,
peerId: PeerId,
addrs: seq[MultiAddress]
): Future[void] = ): Future[void] =
self.switch.connect(peerId, addrs) self.switch.connect(peerId, addrs)
proc updateExpiry*( proc updateExpiry*(
self: CodexNodeRef, manifestCid: Cid, expiry: SecondsSince1970 self: CodexNodeRef,
): Future[?!void] {.async: (raises: [CancelledError]).} = manifestCid: Cid,
expiry: SecondsSince1970): Future[?!void] {.async.} =
without manifest =? await self.fetchManifest(manifestCid), error: without manifest =? await self.fetchManifest(manifestCid), error:
trace "Unable to fetch manifest for cid", manifestCid trace "Unable to fetch manifest for cid", manifestCid
return failure(error) return failure(error)
try: try:
let ensuringFutures = Iter[int].new(0 ..< manifest.blocksCount).mapIt( let
self.networkStore.localStore.ensureExpiry(manifest.treeCid, it, expiry) ensuringFutures = Iter[int].new(0..<manifest.blocksCount)
) .mapIt(self.networkStore.localStore.ensureExpiry( manifest.treeCid, it, expiry ))
await allFuturesThrowing(ensuringFutures)
let res = await allFinishedFailed[?!void](ensuringFutures)
if res.failure.len > 0:
trace "Some blocks failed to update expiry", len = res.failure.len
return failure("Some blocks failed to update expiry (" & $res.failure.len & " )")
except CancelledError as exc: except CancelledError as exc:
raise exc raise exc
except CatchableError as exc: except CatchableError as exc:
@ -158,13 +168,11 @@ proc updateExpiry*(
return success() return success()
proc fetchBatched*( proc fetchBatched*(
self: CodexNodeRef, self: CodexNodeRef,
cid: Cid, cid: Cid,
iter: Iter[int], iter: Iter[int],
batchSize = DefaultFetchBatch, batchSize = FetchBatch,
onBatch: BatchProc = nil, onBatch: BatchProc = nil): Future[?!void] {.async, gcsafe.} =
fetchLocal = true,
): Future[?!void] {.async: (raises: [CancelledError]), gcsafe.} =
## Fetch blocks in batches of `batchSize` ## Fetch blocks in batches of `batchSize`
## ##
@ -174,158 +182,104 @@ proc fetchBatched*(
# (i: int) => self.networkStore.getBlock(BlockAddress.init(cid, i)) # (i: int) => self.networkStore.getBlock(BlockAddress.init(cid, i))
# ) # )
# Sliding window: maintain batchSize blocks in-flight while not iter.finished:
let let blocks = collect:
refillThreshold = int(float(batchSize) * BatchRefillThreshold) for i in 0..<batchSize:
refillSize = max(refillThreshold, 1)
maxCallbackBlocks = min(batchSize, MaxOnBatchBlocks)
var
blockData: seq[bt.Block]
failedBlocks = 0
successfulBlocks = 0
completedInWindow = 0
var addresses = newSeqOfCap[BlockAddress](batchSize)
for i in 0 ..< batchSize:
if not iter.finished:
let address = BlockAddress.init(cid, iter.next())
if fetchLocal or not (await address in self.networkStore):
addresses.add(address)
var blockResults = await self.networkStore.getBlocks(addresses)
while not blockResults.finished:
without blk =? await blockResults.next(), err:
inc(failedBlocks)
continue
inc(successfulBlocks)
inc(completedInWindow)
if not onBatch.isNil:
blockData.add(blk)
if blockData.len >= maxCallbackBlocks:
if batchErr =? (await onBatch(blockData)).errorOption:
return failure(batchErr)
blockData = @[]
if completedInWindow >= refillThreshold and not iter.finished:
var refillAddresses = newSeqOfCap[BlockAddress](refillSize)
for i in 0 ..< refillSize:
if not iter.finished: if not iter.finished:
let address = BlockAddress.init(cid, iter.next()) self.networkStore.getBlock(BlockAddress.init(cid, iter.next()))
if fetchLocal or not (await address in self.networkStore):
refillAddresses.add(address)
if refillAddresses.len > 0: if blocksErr =? (await allFutureResult(blocks)).errorOption:
blockResults = return failure(blocksErr)
chain(blockResults, await self.networkStore.getBlocks(refillAddresses))
completedInWindow = 0
if failedBlocks > 0: if not onBatch.isNil and
return failure("Some blocks failed (Result) to fetch (" & $failedBlocks & ")") batchErr =? (await onBatch(blocks.mapIt( it.read.get ))).errorOption:
if not onBatch.isNil and blockData.len > 0:
if batchErr =? (await onBatch(blockData)).errorOption:
return failure(batchErr) return failure(batchErr)
success() success()
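
Worked refill arithmetic for the sliding window above: with the default batch of 1024 and the 0.75 threshold, a refill of 768 addresses is requested each time 768 in-flight blocks complete, so the window never drains fully. A standalone check using the same constant names:

const
  DefaultFetchBatch = 1024
  BatchRefillThreshold = 0.75

let refillThreshold = int(float(DefaultFetchBatch) * BatchRefillThreshold)
assert refillThreshold == 768
assert max(refillThreshold, 1) == 768  # refillSize, clamped to at least 1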
proc fetchBatched*( proc fetchBatched*(
self: CodexNodeRef, self: CodexNodeRef,
manifest: Manifest, manifest: Manifest,
batchSize = DefaultFetchBatch, batchSize = FetchBatch,
onBatch: BatchProc = nil, onBatch: BatchProc = nil): Future[?!void] =
fetchLocal = true,
): Future[?!void] {.async: (raw: true, raises: [CancelledError]).} =
## Fetch manifest in batches of `batchSize` ## Fetch manifest in batches of `batchSize`
## ##
trace "Fetching blocks in batches of", trace "Fetching blocks in batches of", size = batchSize
size = batchSize, blocksCount = manifest.blocksCount
let iter = Iter[int].new(0 ..< manifest.blocksCount) let iter = Iter[int].new(0..<manifest.blocksCount)
self.fetchBatched(manifest.treeCid, iter, batchSize, onBatch, fetchLocal) self.fetchBatched(manifest.treeCid, iter, batchSize, onBatch)
proc fetchDatasetAsync*(
self: CodexNodeRef, manifest: Manifest, fetchLocal = true
): Future[void] {.async: (raises: []).} =
## Asynchronously fetch a dataset in the background.
## This task will be tracked and cleaned up on node shutdown.
##
try:
if err =? (
await self.fetchBatched(
manifest = manifest, batchSize = DefaultFetchBatch, fetchLocal = fetchLocal
)
).errorOption:
error "Unable to fetch blocks", err = err.msg
except CancelledError as exc:
trace "Cancelled fetching blocks", exc = exc.msg
proc fetchDatasetAsyncTask*(self: CodexNodeRef, manifest: Manifest) =
## Start fetching a dataset in the background.
## The task will be tracked and cleaned up on node shutdown.
##
self.trackedFutures.track(self.fetchDatasetAsync(manifest, fetchLocal = false))
proc streamSingleBlock( proc streamSingleBlock(
self: CodexNodeRef, cid: Cid self: CodexNodeRef,
): Future[?!LPStream] {.async: (raises: [CancelledError]).} = cid: Cid
): Future[?!LPstream] {.async.} =
## Streams the contents of a single block. ## Streams the contents of a single block.
## ##
trace "Streaming single block", cid = cid trace "Streaming single block", cid = cid
let stream = BufferStream.new() let
stream = BufferStream.new()
without blk =? (await self.networkStore.getBlock(BlockAddress.init(cid))), err: without blk =? (await self.networkStore.getBlock(BlockAddress.init(cid))), err:
return failure(err) return failure(err)
proc streamOneBlock(): Future[void] {.async: (raises: []).} = proc streamOneBlock(): Future[void] {.async.} =
try: try:
defer:
await stream.pushEof()
await stream.pushData(blk.data) await stream.pushData(blk.data)
except CancelledError as exc: except CatchableError as exc:
trace "Streaming block cancelled", cid, exc = exc.msg
except LPStreamError as exc:
trace "Unable to send block", cid, exc = exc.msg trace "Unable to send block", cid, exc = exc.msg
discard
finally:
await stream.pushEof()
self.trackedFutures.track(streamOneBlock()) asyncSpawn streamOneBlock()
LPStream(stream).success LPStream(stream).success
proc streamEntireDataset( proc streamEntireDataset(
self: CodexNodeRef, manifest: Manifest, manifestCid: Cid self: CodexNodeRef,
): Future[?!LPStream] {.async: (raises: [CancelledError]).} = manifest: Manifest,
manifestCid: Cid,
): Future[?!LPStream] {.async.} =
## Streams the contents of the entire dataset described by the manifest. ## Streams the contents of the entire dataset described by the manifest.
## ##
trace "Retrieving blocks from manifest", manifestCid trace "Retrieving blocks from manifest", manifestCid
var jobs: seq[Future[void]] if manifest.protected:
let stream = LPStream(StoreStream.new(self.networkStore, manifest, pad = false)) # Retrieve, decode and save all EC groups to the local store
proc erasureJob(): Future[?!void] {.async.} =
try:
# Spawn an erasure decoding job
let
erasure = Erasure.new(
self.networkStore,
leoEncoderProvider,
leoDecoderProvider,
self.taskpool)
without _ =? (await erasure.decode(manifest)), error:
error "Unable to erasure decode manifest", manifestCid, exc = error.msg
return failure(error)
jobs.add(self.fetchDatasetAsync(manifest, fetchLocal = false)) return success()
# --------------------------------------------------------------------------
# FIXME this is a HACK so that the node does not crash during the workshop.
# We should NOT catch Defect.
except Exception as exc:
trace "Exception decoding manifest", manifestCid, exc = exc.msg
return failure(exc.msg)
# --------------------------------------------------------------------------
# Monitor stream completion and cancel background jobs when done if err =? (await erasureJob()).errorOption:
proc monitorStream() {.async: (raises: []).} = return failure(err)
try:
await stream.join()
except CancelledError as exc:
warn "Stream cancelled", exc = exc.msg
finally:
await noCancel allFutures(jobs.mapIt(it.cancelAndWait))
self.trackedFutures.track(monitorStream())
# Retrieve all blocks of the dataset sequentially from the local store or network # Retrieve all blocks of the dataset sequentially from the local store or network
trace "Creating store stream for manifest", manifestCid trace "Creating store stream for manifest", manifestCid
LPStream(StoreStream.new(self.networkStore, manifest, pad = false)).success
stream.success
proc retrieve*(
    self: CodexNodeRef,
    cid: Cid,
    local: bool = true): Future[?!LPStream] {.async.} =
  ## Retrieve by Cid a single block or an entire dataset described by manifest
  ##
@@ -340,73 +294,10 @@ proc retrieve*(
  await self.streamEntireDataset(manifest, cid)
proc deleteSingleBlock(self: CodexNodeRef, cid: Cid): Future[?!void] {.async.} =
if err =? (await self.networkStore.delBlock(cid)).errorOption:
error "Error deleting block", cid, err = err.msg
return failure(err)
trace "Deleted block", cid
return success()
proc deleteEntireDataset(self: CodexNodeRef, cid: Cid): Future[?!void] {.async.} =
# Deletion is a strictly local operation
var store = self.networkStore.localStore
if not (await cid in store):
# As per the contract for delete*, an absent dataset is not an error.
return success()
without manifestBlock =? await store.getBlock(cid), err:
return failure(err)
without manifest =? Manifest.decode(manifestBlock), err:
return failure(err)
let runtimeQuota = initDuration(milliseconds = 100)
var lastIdle = getTime()
for i in 0 ..< manifest.blocksCount:
if (getTime() - lastIdle) >= runtimeQuota:
await idleAsync()
lastIdle = getTime()
if err =? (await store.delBlock(manifest.treeCid, i)).errorOption:
# The contract for delBlock is fuzzy, but we assume that if the block is
# simply missing we won't get an error. This is a best effort operation and
# can simply be retried.
error "Failed to delete block within dataset", index = i, err = err.msg
return failure(err)
  if err =? (await store.delBlock(cid)).errorOption:
    error "Error deleting manifest block", err = err.msg
    return failure(err)

  success()
proc delete*(
self: CodexNodeRef, cid: Cid
): Future[?!void] {.async: (raises: [CatchableError]).} =
## Deletes a whole dataset, if Cid is a Manifest Cid, or a single block, if Cid a block Cid,
## from the underlying block store. This is a strictly local operation.
##
## Missing blocks in dataset deletes are ignored.
##
without isManifest =? cid.isManifest, err:
trace "Bad content type for CID:", cid = cid, err = err.msg
return failure(err)
if not isManifest:
return await self.deleteSingleBlock(cid)
await self.deleteEntireDataset(cid)
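# [Editor's illustrative sketch, not part of this diff. The runtimeQuota /
# idleAsync() dance in deleteEntireDataset above is a cooperative-scheduling
# pattern: a long, purely local loop yields to the event loop roughly every
# 100ms so it cannot starve other async tasks. Generic form, assuming
# std/times and chronos' idleAsync are in scope as above:]
proc processAll(items: seq[int]) {.async.} =
  let runtimeQuota = initDuration(milliseconds = 100)
  var lastIdle = getTime()
  for item in items:
    if (getTime() - lastIdle) >= runtimeQuota:
      await idleAsync()   # give other futures a chance to run
      lastIdle = getTime()
    discard item          # per-item work goes here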
proc store*(
    self: CodexNodeRef,
    stream: LPStream,
    blockSize = DefaultBlockSize): Future[?!Cid] {.async.} =
  ## Save stream contents as dataset with given blockSize
  ## to the node's BlockStore, and return the Cid of its manifest
  ##
@@ -420,7 +311,10 @@ proc store*(
  var cids: seq[Cid]

  try:
    while (
      let chunk = await chunker.getBytes();
      chunk.len > 0):

      without mhash =? MultiHash.digest($hcodec, chunk).mapFailure, err:
        return failure(err)
@@ -435,9 +329,6 @@ proc store*(
      if err =? (await self.networkStore.putBlock(blk)).errorOption:
        error "Unable to store block", cid = blk.cid, err = err.msg
        return failure(&"Unable to store block {blk.cid}")
  except CancelledError as exc:
    raise exc
  except CatchableError as exc:
@@ -445,7 +336,7 @@ proc store*(
  finally:
    await stream.close()

  without tree =? CodexTree.init(cids), err:
    return failure(err)

  without treeCid =? tree.rootCid(CIDv1, dataCodec), err:
@@ -454,8 +345,7 @@ proc store*(
  for index, cid in cids:
    without proof =? tree.getProof(index), err:
      return failure(err)
    if err =? (await self.networkStore.putCidAndProof(treeCid, index, cid, proof)).errorOption:
      # TODO add log here
      return failure(err)
@@ -465,31 +355,25 @@ proc store*(
    datasetSize = NBytes(chunker.offset),
    version = CIDv1,
    hcodec = hcodec,
    codec = dataCodec)
  without manifestBlk =? await self.storeManifest(manifest), err:
    error "Unable to store manifest"
    return failure(err)

  info "Stored data", manifestCid = manifestBlk.cid,
                      treeCid = treeCid,
                      blocks = manifest.blocksCount,
                      datasetSize = manifest.datasetSize

  return manifestBlk.cid.success
proc iterateManifests*(self: CodexNodeRef, onManifest: OnManifest) {.async.} =
  without cids =? await self.networkStore.listBlocks(BlockType.Manifest):
    warn "Failed to listBlocks"
    return

  for c in cids:
    if cid =? await c:
      without blk =? await self.networkStore.getBlock(cid):
        warn "Failed to get manifest block by cid", cid
@@ -501,10 +385,300 @@ proc iterateManifests*(self: CodexNodeRef, onManifest: OnManifest) {.async.} =
      onManifest(cid, manifest)
proc setupRequest(
self: CodexNodeRef,
cid: Cid,
duration: UInt256,
proofProbability: UInt256,
nodes: uint,
tolerance: uint,
reward: UInt256,
collateral: UInt256,
expiry: UInt256): Future[?!StorageRequest] {.async.} =
  ## Set up slots for a given dataset
##
let
ecK = nodes - tolerance
ecM = tolerance
logScope:
cid = cid
duration = duration
nodes = nodes
tolerance = tolerance
reward = reward
proofProbability = proofProbability
collateral = collateral
expiry = expiry
ecK = ecK
ecM = ecM
trace "Setting up slots"
without manifest =? await self.fetchManifest(cid), error:
trace "Unable to fetch manifest for cid"
return failure error
# ----------------------------------------------------------------------------
# FIXME this is a BAND-AID to address
# https://github.com/codex-storage/nim-codex/issues/852 temporarily for the
# workshop. Remove this once we get that fixed.
if manifest.blocksCount.uint == ecK:
return failure("Cannot setup slots for a dataset with ecK == numBlocks. Please use a larger file or a different combination of `nodes` and `tolerance`.")
# ----------------------------------------------------------------------------
# Erasure code the dataset according to provided parameters
let
erasure = Erasure.new(
self.networkStore.localStore,
leoEncoderProvider,
leoDecoderProvider,
self.taskpool)
without encoded =? (await erasure.encode(manifest, ecK, ecM)), error:
trace "Unable to erasure code dataset"
return failure(error)
without builder =? Poseidon2Builder.new(self.networkStore.localStore, encoded), err:
trace "Unable to create slot builder"
return failure(err)
without verifiable =? (await builder.buildManifest()), err:
trace "Unable to build verifiable manifest"
return failure(err)
without manifestBlk =? await self.storeManifest(verifiable), err:
trace "Unable to store verifiable manifest"
return failure(err)
let
verifyRoot =
if builder.verifyRoot.isNone:
return failure("No slots root")
else:
builder.verifyRoot.get.toBytes
request = StorageRequest(
ask: StorageAsk(
slots: verifiable.numSlots.uint64,
slotSize: builder.slotBytes.uint.u256,
duration: duration,
proofProbability: proofProbability,
reward: reward,
collateral: collateral,
maxSlotLoss: tolerance
),
content: StorageContent(
cid: $manifestBlk.cid, # TODO: why string?
merkleRoot: verifyRoot
),
expiry: expiry
)
trace "Request created", request = $request
success request
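# [Editor's worked example, not part of this diff. The ecK/ecM split in
# setupRequest above is plain arithmetic; values below are illustrative.]
block ecParamsExample:
  let
    nodes = 5'u               # total slots requested
    tolerance = 2'u           # slots that may be lost
    ecK = nodes - tolerance   # 3 data shards per erasure-coding group
    ecM = tolerance           # 2 parity shards per erasure-coding group
  assert ecK == 3 and ecM == 2
  # Any ecK of the nodes (here 3 of 5) suffice to reconstruct the data.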
proc requestStorage*(
self: CodexNodeRef,
cid: Cid,
duration: UInt256,
proofProbability: UInt256,
nodes: uint,
tolerance: uint,
reward: UInt256,
collateral: UInt256,
expiry: UInt256): Future[?!PurchaseId] {.async.} =
  ## Initiate a request-for-storage sequence; this might
  ## be a multistep procedure.
##
logScope:
cid = cid
duration = duration
nodes = nodes
tolerance = tolerance
reward = reward
proofProbability = proofProbability
collateral = collateral
expiry = expiry.truncate(int64)
now = self.clock.now
trace "Received a request for storage!"
without contracts =? self.contracts.client:
trace "Purchasing not available"
return failure "Purchasing not available"
without request =?
(await self.setupRequest(
cid,
duration,
proofProbability,
nodes,
tolerance,
reward,
collateral,
expiry)), err:
trace "Unable to setup request"
return failure err
let purchase = await contracts.purchasing.purchase(request)
success purchase.id
proc onStore(
self: CodexNodeRef,
request: StorageRequest,
slotIdx: UInt256,
blocksCb: BlocksCb): Future[?!void] {.async.} =
## store data in local storage
##
logScope:
cid = request.content.cid
slotIdx = slotIdx
trace "Received a request to store a slot"
without cid =? Cid.init(request.content.cid).mapFailure, err:
trace "Unable to parse Cid", cid
return failure(err)
without manifest =? (await self.fetchManifest(cid)), err:
trace "Unable to fetch manifest for cid", cid, err = err.msg
return failure(err)
without builder =? Poseidon2Builder.new(
self.networkStore, manifest, manifest.verifiableStrategy
), err:
trace "Unable to create slots builder", err = err.msg
return failure(err)
let
slotIdx = slotIdx.truncate(int)
expiry = request.expiry.toSecondsSince1970
if slotIdx > manifest.slotRoots.high:
trace "Slot index not in manifest", slotIdx
return failure(newException(CodexError, "Slot index not in manifest"))
proc updateExpiry(blocks: seq[bt.Block]): Future[?!void] {.async.} =
trace "Updating expiry for blocks", blocks = blocks.len
let ensureExpiryFutures = blocks.mapIt(self.networkStore.ensureExpiry(it.cid, expiry))
if updateExpiryErr =? (await allFutureResult(ensureExpiryFutures)).errorOption:
return failure(updateExpiryErr)
if not blocksCb.isNil and err =? (await blocksCb(blocks)).errorOption:
trace "Unable to process blocks", err = err.msg
return failure(err)
return success()
without indexer =? manifest.verifiableStrategy.init(
0, manifest.blocksCount - 1, manifest.numSlots).catch, err:
trace "Unable to create indexing strategy from protected manifest", err = err.msg
return failure(err)
without blksIter =? indexer.getIndicies(slotIdx).catch, err:
trace "Unable to get indicies from strategy", err = err.msg
return failure(err)
if err =? (await self.fetchBatched(
manifest.treeCid,
blksIter,
onBatch = updateExpiry)).errorOption:
trace "Unable to fetch blocks", err = err.msg
return failure(err)
without slotRoot =? (await builder.buildSlot(slotIdx.Natural)), err:
trace "Unable to build slot", err = err.msg
return failure(err)
trace "Slot successfully retrieved and reconstructed"
if cid =? slotRoot.toSlotCid() and cid != manifest.slotRoots[slotIdx.int]:
trace "Slot root mismatch", manifest = manifest.slotRoots[slotIdx.int], recovered = slotRoot.toSlotCid()
return failure(newException(CodexError, "Slot root mismatch"))
trace "Slot successfully retrieved and reconstructed"
return success()
proc onProve(
self: CodexNodeRef,
slot: Slot,
challenge: ProofChallenge): Future[?!Groth16Proof] {.async.} =
  ## Generates a proof for a given slot and challenge
##
let
cidStr = slot.request.content.cid
slotIdx = slot.slotIndex.truncate(Natural)
logScope:
cid = cidStr
slot = slotIdx
challenge = challenge
trace "Received proof challenge"
if prover =? self.prover:
trace "Prover enabled"
without cid =? Cid.init(cidStr).mapFailure, err:
error "Unable to parse Cid", cid, err = err.msg
return failure(err)
without manifest =? await self.fetchManifest(cid), err:
error "Unable to fetch manifest for cid", err = err.msg
return failure(err)
when defined(verify_circuit):
without (inputs, proof) =? await prover.prove(slotIdx, manifest, challenge), err:
error "Unable to generate proof", err = err.msg
return failure(err)
without checked =? await prover.verify(proof, inputs), err:
error "Unable to verify proof", err = err.msg
return failure(err)
if not checked:
error "Proof verification failed"
return failure("Proof verification failed")
trace "Proof verified successfully"
else:
without (_, proof) =? await prover.prove(slotIdx, manifest, challenge), err:
error "Unable to generate proof", err = err.msg
return failure(err)
let groth16Proof = proof.toGroth16Proof()
trace "Proof generated successfully", groth16Proof
success groth16Proof
else:
warn "Prover not enabled"
failure "Prover not enabled"
proc onExpiryUpdate(
    self: CodexNodeRef,
    rootCid: string,
    expiry: SecondsSince1970): Future[?!void] {.async.} =
  without cid =? Cid.init(rootCid):
    trace "Unable to parse Cid", cid
    let error = newException(CodexError, "Unable to parse Cid")
    return failure(error)

  return await self.updateExpiry(cid, expiry)
proc onClear(
self: CodexNodeRef,
request: StorageRequest,
slotIndex: UInt256) =
# TODO: remove data from local storage
discard
proc start*(self: CodexNodeRef) {.async.} =
  if not self.engine.isNil:
@@ -516,14 +690,59 @@ proc start*(self: CodexNodeRef) {.async.} =
  if not self.clock.isNil:
    await self.clock.start()
if hostContracts =? self.contracts.host:
hostContracts.sales.onStore =
proc(
request: StorageRequest,
slot: UInt256,
onBatch: BatchProc): Future[?!void] = self.onStore(request, slot, onBatch)
hostContracts.sales.onExpiryUpdate =
proc(rootCid: string, expiry: SecondsSince1970): Future[?!void] =
self.onExpiryUpdate(rootCid, expiry)
hostContracts.sales.onClear =
proc(request: StorageRequest, slotIndex: UInt256) =
# TODO: remove data from local storage
self.onClear(request, slotIndex)
hostContracts.sales.onProve =
proc(slot: Slot, challenge: ProofChallenge): Future[?!Groth16Proof] =
# TODO: generate proof
self.onProve(slot, challenge)
try:
await hostContracts.start()
except CancelledError as error:
raise error
except CatchableError as error:
error "Unable to start host contract interactions", error=error.msg
self.contracts.host = HostInteractions.none
if clientContracts =? self.contracts.client:
try:
await clientContracts.start()
except CancelledError as error:
raise error
except CatchableError as error:
error "Unable to start client contract interactions: ", error=error.msg
self.contracts.client = ClientInteractions.none
if validatorContracts =? self.contracts.validator:
try:
await validatorContracts.start()
except CancelledError as error:
raise error
except CatchableError as error:
error "Unable to start validator contract interactions: ", error=error.msg
self.contracts.validator = ValidatorInteractions.none
  self.networkId = self.switch.peerInfo.peerId
  notice "Started codex node", id = self.networkId, addrs = self.switch.peerInfo.addrs

proc stop*(self: CodexNodeRef) {.async.} =
  trace "Stopping node"
await self.trackedFutures.cancelTracked()
  if not self.engine.isNil:
    await self.engine.stop()
@@ -533,18 +752,27 @@ proc stop*(self: CodexNodeRef) {.async.} =
  if not self.clock.isNil:
    await self.clock.stop()
  if clientContracts =? self.contracts.client:
    await clientContracts.stop()

  if hostContracts =? self.contracts.host:
    await hostContracts.stop()

  if validatorContracts =? self.contracts.validator:
    await validatorContracts.stop()

  if not self.networkStore.isNil:
    await self.networkStore.close
proc new*(
    T: type CodexNodeRef,
    switch: Switch,
    networkStore: NetworkStore,
    engine: BlockExcEngine,
    discovery: Discovery,
    prover = Prover.none,
    contracts = Contracts.default,
    taskpool = Taskpool.new(num_threads = countProcessors())): CodexNodeRef =
  ## Create a new instance of a Codex node; call `start` to run it
  ##
@@ -552,14 +780,7 @@ proc new*(
    switch: switch,
    networkStore: networkStore,
    engine: engine,
    prover: prover,
    discovery: discovery,
    contracts: contracts,
    taskpool: taskpool)
proc hasLocalBlock*(
self: CodexNodeRef, cid: Cid
): Future[bool] {.async: (raises: [CancelledError]).} =
## Returns true if the given Cid is present in the local store
return await (cid in self.networkStore.localStore)

codex/periods.nim Normal file

@@ -0,0 +1,16 @@
import pkg/stint
type
Periodicity* = object
seconds*: UInt256
Period* = UInt256
Timestamp* = UInt256
func periodOf*(periodicity: Periodicity, timestamp: Timestamp): Period =
timestamp div periodicity.seconds
func periodStart*(periodicity: Periodicity, period: Period): Timestamp =
period * periodicity.seconds
func periodEnd*(periodicity: Periodicity, period: Period): Timestamp =
periodicity.periodStart(period + 1)
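# [Editor's worked example, not part of this diff. With a periodicity of
# 10 seconds, timestamp 25 falls in period 2, which spans [20, 30).]
when isMainModule:
  let p = Periodicity(seconds: 10.u256)
  assert p.periodOf(25.u256) == 2.u256     # 25 div 10 = 2
  assert p.periodStart(2.u256) == 20.u256  # 2 * 10
  assert p.periodEnd(2.u256) == 30.u256    # start of period 3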

codex/purchasing.nim Normal file

@@ -0,0 +1,78 @@
import std/tables
import pkg/stint
import pkg/chronos
import pkg/questionable
import pkg/nimcrypto
import ./market
import ./clock
import ./purchasing/purchase
export questionable
export chronos
export market
export purchase
type
Purchasing* = ref object
market: Market
clock: Clock
purchases: Table[PurchaseId, Purchase]
proofProbability*: UInt256
PurchaseTimeout* = Timeout
const DefaultProofProbability = 100.u256
proc new*(_: type Purchasing, market: Market, clock: Clock): Purchasing =
Purchasing(
market: market,
clock: clock,
proofProbability: DefaultProofProbability,
)
proc load*(purchasing: Purchasing) {.async.} =
let market = purchasing.market
let requestIds = await market.myRequests()
for requestId in requestIds:
let purchase = Purchase.new(requestId, purchasing.market, purchasing.clock)
purchase.load()
purchasing.purchases[purchase.id] = purchase
proc start*(purchasing: Purchasing) {.async.} =
await purchasing.load()
proc stop*(purchasing: Purchasing) {.async.} =
discard
proc populate*(purchasing: Purchasing,
request: StorageRequest
): Future[StorageRequest] {.async.} =
result = request
if result.ask.proofProbability == 0.u256:
result.ask.proofProbability = purchasing.proofProbability
if result.nonce == Nonce.default:
var id = result.nonce.toArray
doAssert randomBytes(id) == 32
result.nonce = Nonce(id)
result.client = await purchasing.market.getSigner()
proc purchase*(purchasing: Purchasing,
request: StorageRequest
): Future[Purchase] {.async.} =
let request = await purchasing.populate(request)
let purchase = Purchase.new(request, purchasing.market, purchasing.clock)
purchase.start()
purchasing.purchases[purchase.id] = purchase
return purchase
func getPurchase*(purchasing: Purchasing, id: PurchaseId): ?Purchase =
if purchasing.purchases.hasKey(id):
some purchasing.purchases[id]
else:
none Purchase
func getPurchaseIds*(purchasing: Purchasing): seq[PurchaseId] =
var pIds: seq[PurchaseId] = @[]
for key in purchasing.purchases.keys:
pIds.add(key)
return pIds
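# [Editor's illustrative sketch, not part of this diff. Typical client-side
# flow through this module; `market`, `clock` and `request` are assumed to be
# wired up elsewhere (e.g. an OnChainMarket / OnChainClock).]
proc requestExample(market: Market, clock: Clock,
                    request: StorageRequest) {.async.} =
  let purchasing = Purchasing.new(market, clock)
  await purchasing.start()   # load() restores purchases found on-chain
  let purchase = await purchasing.purchase(request)  # populate, submit, track
  await purchase.wait()      # resolves when finished, raises on failure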

codex/purchasing/purchase.nim Normal file

@@ -0,0 +1,79 @@
import ./statemachine
import ./states/pending
import ./states/unknown
import ./purchaseid
# Purchase is implemented as a state machine.
#
# It can either be a new (pending) purchase that still needs to be submitted
# on-chain, or it is a purchase that was previously submitted on-chain, and
# we're just restoring its (unknown) state after a node restart.
#
#                         |
#                         v
#                      unknown
#                    (restored purchases re-enter the
#                     machine in any of the states below)
#
#   pending ----> submitted ----> started ----> finished
#                     \               \
#                      v               v
#                   cancelled        failed
export Purchase
export purchaseid
export statemachine
func new*(
_: type Purchase,
requestId: RequestId,
market: Market,
clock: Clock
): Purchase =
## create a new instance of a Purchase
##
var purchase = Purchase.new()
{.cast(noSideEffect).}:
purchase.future = newFuture[void]()
purchase.requestId = requestId
purchase.market = market
purchase.clock = clock
return purchase
func new*(
_: type Purchase,
request: StorageRequest,
market: Market,
clock: Clock
): Purchase =
## Create a new purchase using the given market and clock
let purchase = Purchase.new(request.id, market, clock)
purchase.request = some request
return purchase
proc start*(purchase: Purchase) =
purchase.start(PurchasePending())
proc load*(purchase: Purchase) =
purchase.start(PurchaseUnknown())
proc wait*(purchase: Purchase) {.async.} =
await purchase.future
func id*(purchase: Purchase): PurchaseId =
PurchaseId(purchase.requestId)
func finished*(purchase: Purchase): bool =
purchase.future.finished
func error*(purchase: Purchase): ?(ref CatchableError) =
if purchase.future.failed:
some purchase.future.error
else:
none (ref CatchableError)
func state*(purchase: Purchase): ?string =
proc description(state: State): string =
$state
purchase.query(description)
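# [Editor's illustrative sketch, not part of this diff. The two entry points
# above correspond to the two branches of the diagram: `start` drives a
# brand-new purchase from pending, `load` re-enters via unknown. All
# parameters are assumed to be supplied by the caller.]
proc entryPointsExample(request: StorageRequest, requestId: RequestId,
                        market: Market, clock: Clock) =
  let fresh = Purchase.new(request, market, clock)  # request known up front
  fresh.start()                                     # pending -> submitted -> ...

  let restored = Purchase.new(requestId, market, clock)  # only the id is known
  restored.load()                                        # unknown -> on-chain state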

codex/purchasing/purchaseid.nim Normal file

@@ -0,0 +1,12 @@
import std/hashes
import pkg/nimcrypto
import ../logutils
type PurchaseId* = distinct array[32, byte]
logutils.formatIt(LogFormat.textLines, PurchaseId): it.short0xHexLog
logutils.formatIt(LogFormat.json, PurchaseId): it.to0xHexLog
proc hash*(x: PurchaseId): Hash {.borrow.}
proc `==`*(x, y: PurchaseId): bool {.borrow.}
proc toHex*(x: PurchaseId): string = array[32, byte](x).toHex
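# [Editor's illustrative sketch, not part of this diff. A `distinct` type
# hides the base type's procs, so hash and `==` are borrowed back above to
# let PurchaseId serve as a Table key (as Purchasing.purchases does).]
when isMainModule:
  import std/tables
  var purchases: Table[PurchaseId, int]
  let id = PurchaseId(default(array[32, byte]))
  purchases[id] = 1          # compiles only because hash/`==` are borrowed
  assert id.toHex.len == 64  # 32 bytes -> 64 hex characters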

codex/purchasing/statemachine.nim Normal file

@@ -0,0 +1,18 @@
import ../utils/asyncstatemachine
import ../market
import ../clock
import ../errors
export market
export clock
export asyncstatemachine
type
Purchase* = ref object of Machine
future*: Future[void]
market*: Market
clock*: Clock
requestId*: RequestId
request*: ?StorageRequest
PurchaseState* = ref object of State
PurchaseError* = object of CodexError

codex/purchasing/states/cancelled.nim Normal file

@@ -0,0 +1,25 @@
import pkg/metrics
import ../../logutils
import ../statemachine
import ./errorhandling
declareCounter(codex_purchases_cancelled, "codex purchases cancelled")
logScope:
topics = "marketplace purchases cancelled"
type PurchaseCancelled* = ref object of ErrorHandlingState
method `$`*(state: PurchaseCancelled): string =
"cancelled"
method run*(state: PurchaseCancelled, machine: Machine): Future[?State] {.async.} =
codex_purchases_cancelled.inc()
let purchase = Purchase(machine)
warn "Request cancelled, withdrawing remaining funds", requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
let error = newException(Timeout, "Purchase cancelled due to timeout")
purchase.future.fail(error)

codex/purchasing/states/error.nim Normal file

@@ -0,0 +1,23 @@
import pkg/metrics
import ../statemachine
import ../../utils/exceptions
import ../../logutils
declareCounter(codex_purchases_error, "codex purchases error")
logScope:
topics = "marketplace purchases errored"
type PurchaseErrored* = ref object of PurchaseState
error*: ref CatchableError
method `$`*(state: PurchaseErrored): string =
"errored"
method run*(state: PurchaseErrored, machine: Machine): Future[?State] {.async.} =
codex_purchases_error.inc()
let purchase = Purchase(machine)
error "Purchasing error", error=state.error.msgDetail, requestId = purchase.requestId
purchase.future.fail(state.error)

codex/purchasing/states/errorhandling.nim Normal file

@@ -0,0 +1,9 @@
import pkg/questionable
import ../statemachine
import ./error
type
ErrorHandlingState* = ref object of PurchaseState
method onError*(state: ErrorHandlingState, error: ref CatchableError): ?State =
some State(PurchaseErrored(error: error))

codex/purchasing/states/failed.nim Normal file

@@ -0,0 +1,21 @@
import pkg/metrics
import ../statemachine
import ../../logutils
import ./error
declareCounter(codex_purchases_failed, "codex purchases failed")
type
PurchaseFailed* = ref object of PurchaseState
method `$`*(state: PurchaseFailed): string =
"failed"
method run*(state: PurchaseFailed, machine: Machine): Future[?State] {.async.} =
codex_purchases_failed.inc()
let purchase = Purchase(machine)
warn "Request failed, withdrawing remaining funds", requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
let error = newException(PurchaseError, "Purchase failed")
return some State(PurchaseErrored(error: error))

codex/purchasing/states/finished.nim Normal file

@@ -0,0 +1,22 @@
import pkg/metrics
import ../statemachine
import ../../logutils
declareCounter(codex_purchases_finished, "codex purchases finished")
logScope:
topics = "marketplace purchases finished"
type PurchaseFinished* = ref object of PurchaseState
method `$`*(state: PurchaseFinished): string =
"finished"
method run*(state: PurchaseFinished, machine: Machine): Future[?State] {.async.} =
codex_purchases_finished.inc()
let purchase = Purchase(machine)
info "Purchase finished, withdrawing remaining funds", requestId = purchase.requestId
await purchase.market.withdrawFunds(purchase.requestId)
purchase.future.complete()

codex/purchasing/states/pending.nim Normal file

@@ -0,0 +1,18 @@
import pkg/metrics
import ../statemachine
import ./errorhandling
import ./submitted
declareCounter(codex_purchases_pending, "codex purchases pending")
type PurchasePending* = ref object of ErrorHandlingState
method `$`*(state: PurchasePending): string =
"pending"
method run*(state: PurchasePending, machine: Machine): Future[?State] {.async.} =
codex_purchases_pending.inc()
let purchase = Purchase(machine)
let request = !purchase.request
await purchase.market.requestStorage(request)
return some State(PurchaseSubmitted())

codex/purchasing/states/started.nim Normal file

@@ -0,0 +1,41 @@
import pkg/metrics
import ../../logutils
import ../statemachine
import ./errorhandling
import ./finished
import ./failed
declareCounter(codex_purchases_started, "codex purchases started")
logScope:
topics = "marketplace purchases started"
type PurchaseStarted* = ref object of ErrorHandlingState
method `$`*(state: PurchaseStarted): string =
"started"
method run*(state: PurchaseStarted, machine: Machine): Future[?State] {.async.} =
codex_purchases_started.inc()
let purchase = Purchase(machine)
let clock = purchase.clock
let market = purchase.market
info "All required slots filled, purchase started", requestId = purchase.requestId
let failed = newFuture[void]()
proc callback(_: RequestId) =
failed.complete()
let subscription = await market.subscribeRequestFailed(purchase.requestId, callback)
# Ensure that we're past the request end by waiting an additional second
let ended = clock.waitUntil((await market.getRequestEnd(purchase.requestId)) + 1)
let fut = await one(ended, failed)
await subscription.unsubscribe()
if fut.id == failed.id:
ended.cancel()
return some State(PurchaseFailed())
else:
failed.cancel()
return some State(PurchaseFinished())
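# [Editor's illustrative sketch, not part of this diff. The ended/failed
# race above is a reusable chronos pattern: `one` returns the first future
# to complete, after which the loser must be cancelled by hand.]
proc raceExample(a, b: Future[void]) {.async.} =
  let winner = await one(a, b)
  if winner.id == a.id:
    b.cancel()
  else:
    a.cancel()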

codex/purchasing/states/submitted.nim Normal file

@@ -0,0 +1,46 @@
import pkg/metrics
import ../../logutils
import ../statemachine
import ./errorhandling
import ./started
import ./cancelled
logScope:
topics = "marketplace purchases submitted"
declareCounter(codex_purchases_submitted, "codex purchases submitted")
type PurchaseSubmitted* = ref object of ErrorHandlingState
method `$`*(state: PurchaseSubmitted): string =
"submitted"
method run*(state: PurchaseSubmitted, machine: Machine): Future[?State] {.async.} =
codex_purchases_submitted.inc()
let purchase = Purchase(machine)
let request = !purchase.request
let market = purchase.market
let clock = purchase.clock
info "Request submitted, waiting for slots to be filled", requestId = purchase.requestId
proc wait {.async.} =
let done = newFuture[void]()
proc callback(_: RequestId) =
done.complete()
let subscription = await market.subscribeFulfillment(request.id, callback)
await done
await subscription.unsubscribe()
proc withTimeout(future: Future[void]) {.async.} =
let expiry = (await market.requestExpiresAt(request.id)) + 1
trace "waiting for request fulfillment or expiry", expiry
await future.withTimeout(clock, expiry)
try:
await wait().withTimeout()
except Timeout:
return some State(PurchaseCancelled())
return some State(PurchaseStarted())

codex/purchasing/states/unknown.nim Normal file

@@ -0,0 +1,35 @@
import pkg/metrics
import ../statemachine
import ./errorhandling
import ./submitted
import ./started
import ./cancelled
import ./finished
import ./failed
declareCounter(codex_purchases_unknown, "codex purchases unknown")
type PurchaseUnknown* = ref object of ErrorHandlingState
method `$`*(state: PurchaseUnknown): string =
"unknown"
method run*(state: PurchaseUnknown, machine: Machine): Future[?State] {.async.} =
codex_purchases_unknown.inc()
let purchase = Purchase(machine)
if (request =? await purchase.market.getRequest(purchase.requestId)) and
(requestState =? await purchase.market.requestState(purchase.requestId)):
purchase.request = some request
case requestState
of RequestState.New:
return some State(PurchaseSubmitted())
of RequestState.Started:
return some State(PurchaseStarted())
of RequestState.Cancelled:
return some State(PurchaseCancelled())
of RequestState.Finished:
return some State(PurchaseFinished())
of RequestState.Failed:
return some State(PurchaseFailed())

Some files were not shown because too many files have changed in this diff.