Release v0.1.5 (#941)

* fix: createReservation lock (#825): fix additional locking places and lock acquisition; add `withLock` template and fix tests; use a proc for the MockReservations constructor
* Block deletion with ref count & repostore refactor (#631)
* Fix StoreStream so it doesn't return parity bytes (#838): don't return parity bits for protected/verifiable manifests; use Cid.example instead of creating a mock manually
* Fix verifiable manifest initialization (#839): fix linearstrategy; use verifiableStrategy to select blocks for slots; check for both strategies in the attribute inheritance test
* ci: add verify_circuit=true to the releases (#840)
* Provisional fix so EC errors do not crash the node on download (#841)
* Prevent node crashing with `not val.isNil` (#843)
* Bump nim-leopard to handle no parity data (#845)
* Fix verifiable manifest constructor (#844): add an integration test for downloading a verifiable dataset after creating a storage request; remove the hardhat instance from the integration test
* Bump Nim to 1.6.21 (#851): range type reset fixes; remove incompatible versions from the compiler matrix
* feat(rest): add erasure coding constraints when requesting storage (#848): make the "dataset too small" error message more informative
* Prover workshop band-aid (#853)
* Bandaid for failing erasure coding (#855)
* Update release workflow (#858)
* Fix prover behavior with singleton proof trees (#859): add Merkle proof checks; factor out Circom input normalization; fix proof input serialization; update circuit assets; do not expose prove with prenormalized inputs
* Chronos v4 update (v3 compat mode) (#814): use nimbus-build-system with a configurable Nim repo; bump DHT to v0.5.0; pin the Nim compiler commit; use await instead of asyncSpawn in the advertisement queue loop; fix handling of return values in testslotqueue
* Downgrade to gcc 13 on Windows (#874); increase build job timeout to 90 minutes
* Add MIT/Apache licenses (#861)
* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869); remove the unneeded "Access-Control-Headers" header and add cache
* chore: add `downtimeProduct` config parameter (#867)
* Support CORS preflight requests when the storage request API returns an error (#878): add CORS headers to REST API error responses; use the allowedOrigin instead of the wildcard when setting the origin
* refactor(marketplace): generic querying of historical marketplace events (#872): move marketplace contract events to the Market abstraction so the types can be shared across all modules that call it
* Remove extra license file (#876)
* Update advertising (#862): set up and wire up the advertiser; fix a race condition
* feat: add `--payout-address` (#870): allow SPs to be paid out to a separate address, keeping their profits secure (supports codex-storage/codex-contracts-eth#144); later renamed to `--reward-recipient` to match contract signature naming; `withdrawFunds` takes an optional `withdrawRecipient`, `freeSlot` takes optional `rewardRecipient` and `collateralRecipient`; reverts "Update integration tests to include --payout-address" (8f9535cf35)
* Rework circuit downloader (#882): move backend creation into a prover start method and extract a backend factory; load the backend from CLI files or previously downloaded local files; implement the cirdl downloader tool using a chronos http session; switch from zip to tar.gz (status-im/zippy); add cirdl support to the Dockerfile, docker entrypoint, and release workflow; disable the verify_circuit flag for releases
* Support CORS for POST and PATCH availability endpoints (#897)
* Add testnet marketplace address to known deployments (#911)
* API tweaks for OpenAPI, errors and endpoints (#886): change default EC params in the REST API to 3 nodes and 1 tolerance; adjust integration tests accordingly
* Remove erasure and por parameters from the OpenAPI spec (#915)
* Move the Building Codex guide to the main docs site (#893)
* Update the Marketplace tutorial documentation (#888): fix the public address of the signer account
* Use CLI args when passed for cirdl in the Docker entrypoint (#927); increase CI timeout
* Validator: support partitioning of the slot id space (#890): add `validatorPartitionSize` and `validatorPartitionIndex` config options; ignore the partition index when the partition size is 0 or 1; clip the partition index to partitionIndex mod partitionSize and handle negative values; refactor validation params into a separate validation config; default partition size is 0 for backward compatibility; validate slots without limit when maxSlots is 0
* Remove moved docs (#930): update the main README and point links to the documentation site
* feat(slot-reservations): support reserving slots (#907), closes #898: wire up `reserveSlot` and `canReserveSlot` contract calls, but don't call them; convert EthersError to MarketError; change parameters from `SlotId` to `RequestId` and `UInt256 slotIndex`
* feat(slot-reservations): add SaleSlotReserving state (#917): attempt to reserve a slot before downloading; move to SaleIgnored when the slot cannot be reserved and to SaleErrored on error; SaleIgnored now takes `reprocessSlot` and `returnBytes`, since it is no longer reached only when there is no Availability
* Use Ubuntu 20.04 for Linux amd64 releases (#939, #932): accept branches with a slash in the name for the release workflow; increase artifact retention-days
* feat(slot-reservations): support SlotReservationsFull event (#926)
* Remove moved docs (#935)
* Fix: null-ref in networkPeer (#937): reverts "Removes inflight semaphore" (26ec15c6f7)

Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Co-authored-by: Marcin Czenko <marcin.czenko@pm.me>
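The slot-partitioning rule introduced by the validator change (#890), clipping the index to `partitionIndex mod partitionSize` and ignoring partitioning when the size is 0 or 1, can be sketched as follows. This is an illustrative shell sketch with made-up values (real slot ids are hashes), not the actual Nim implementation:

```shell
# Illustrative sketch (not the Nim code) of the validator partitioning
# rule from #890: clip the configured index into range, then validate
# only slots whose id falls in this partition.
partition_size=4
partition_index=-3   # negative values are handled by clipping
idx=$(( (partition_index % partition_size + partition_size) % partition_size ))
slot_id=121
if [ "$partition_size" -le 1 ] || [ $(( slot_id % partition_size )) -eq "$idx" ]; then
  echo "validate slot $slot_id"
else
  echo "skip slot $slot_id"
fi
# validate slot 121
```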
Parent: 484124db09
Commit: 7ba5e8c13a
CI and release workflow changes:

```diff
@@ -26,7 +26,7 @@ jobs:
     name: '${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.tests }}'
     runs-on: ${{ matrix.builder }}
-    timeout-minutes: 90
+    timeout-minutes: 100
     steps:
       - name: Checkout sources
         uses: actions/checkout@v4
```

```diff
@@ -28,7 +28,7 @@ jobs:
       uses: fabiocaccamo/create-matrix-action@v4
       with:
         matrix: |
-          os {linux}, cpu {amd64}, builder {ubuntu-22.04}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
+          os {linux}, cpu {amd64}, builder {ubuntu-20.04}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
           os {linux}, cpu {arm64}, builder {buildjet-4vcpu-ubuntu-2204-arm}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
           os {macos}, cpu {amd64}, builder {macos-13}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
           os {macos}, cpu {arm64}, builder {macos-14}, nim_version {${{ env.nim_version }}}, rust_version {${{ env.rust_version }}}, shell {bash --noprofile --norc -e -o pipefail}
@@ -71,8 +71,9 @@ jobs:
             macos*) os_name="darwin" ;;
             windows*) os_name="windows" ;;
           esac
-          codex_binary="${{ env.codex_binary_base }}-${{ github.ref_name }}-${os_name}-${{ matrix.cpu }}"
-          cirdl_binary="${{ env.cirdl_binary_base }}-${{ github.ref_name }}-${os_name}-${{ matrix.cpu }}"
+          github_ref_name="${GITHUB_REF_NAME/\//-}"
+          codex_binary="${{ env.codex_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
+          cirdl_binary="${{ env.cirdl_binary_base }}-${github_ref_name}-${os_name}-${{ matrix.cpu }}"
           if [[ ${os_name} == "windows" ]]; then
             codex_binary="${codex_binary}.exe"
             cirdl_binary="${cirdl_binary}.exe"
@@ -98,14 +99,14 @@ jobs:
         with:
           name: release-${{ env.codex_binary }}
           path: ${{ env.build_dir }}/${{ env.codex_binary_base }}*
-          retention-days: 1
+          retention-days: 30

       - name: Release - Upload cirdl build artifacts
         uses: actions/upload-artifact@v4
         with:
           name: release-${{ env.cirdl_binary }}
           path: ${{ env.build_dir }}/${{ env.cirdl_binary_base }}*
-          retention-days: 1
+          retention-days: 30

       - name: Release - Upload windows libs
         if: matrix.os == 'windows'
@@ -113,7 +114,7 @@ jobs:
         with:
           name: release-${{ matrix.os }}-libs
           path: ${{ env.build_dir }}/*.dll
-          retention-days: 1
+          retention-days: 30

   # Release
   release:
@@ -167,7 +168,7 @@ jobs:
         with:
           name: archives-and-checksums
           path: /tmp/release/
-          retention-days: 1
+          retention-days: 30

       - name: Release
         uses: softprops/action-gh-release@v2
```
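The `${GITHUB_REF_NAME/\//-}` line added to the release workflow uses bash pattern substitution to replace the first `/` in a branch name, so names like `feature/x` produce valid artifact filenames. A quick illustration (the binary base and platform values below are made up):

```shell
GITHUB_REF_NAME="feature/slot-reservations"
github_ref_name="${GITHUB_REF_NAME/\//-}"   # first "/" becomes "-"
echo "codex-${github_ref_name}-linux-amd64"
# codex-feature-slot-reservations-linux-amd64
```

Note that `${GITHUB_REF_NAME//\//-}` (double slash) would replace every `/`; the workflow replaces only the first occurrence.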
BUILDING.md (202 lines removed; the guide moved to the docs site):
# Building Codex

## Table of Contents

- [Install developer tools](#prerequisites)
  - [Linux](#linux)
  - [macOS](#macos)
  - [Windows + MSYS2](#windows--msys2)
  - [Other](#other)
- [Clone and prepare the Git repository](#repository)
- [Build the executable](#executable)
- [Run the example](#example-usage)

**Optional**
- [Run the tests](#tests)

## Prerequisites

To build nim-codex, developer tools need to be installed and accessible in the OS.

The instructions below correspond roughly to the environment setup in nim-codex's [CI workflow](https://github.com/codex-storage/nim-codex/blob/main/.github/workflows/ci.yml) and are known to work.

Other approaches may be viable. On macOS, some users may prefer [MacPorts](https://www.macports.org/) to [Homebrew](https://brew.sh/). On Windows, rather than use MSYS2, some users may prefer to install developer tools with [winget](https://docs.microsoft.com/en-us/windows/package-manager/winget/), [Scoop](https://scoop.sh/), or [Chocolatey](https://chocolatey.org/), or download installers for e.g. Make and CMake while otherwise relying on official Windows developer tools. Community contributions to these docs and our build system are welcome!

### Rust

The current implementation of Codex's zero-knowledge proving circuit requires Rust v1.76.0 or greater. Be sure to install it for your OS and add it to your terminal's path such that the command `cargo --version` reports a compatible version.

### Linux

*Package manager commands may require `sudo` depending on OS setup.*

On a bare-bones installation of Debian (or a distribution derived from Debian, such as Ubuntu), run

```shell
apt-get update && apt-get install build-essential cmake curl git rustc cargo
```

Non-Debian distributions have different package managers: `apk`, `dnf`, `pacman`, `rpm`, `yum`, etc.

For example, on a bare-bones installation of Fedora, run

```shell
dnf install @development-tools cmake gcc-c++ rust cargo
```

If your distribution does not provide the required Rust version, you can install it using [rustup](https://www.rust-lang.org/tools/install)

```shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs/ | sh -s -- --default-toolchain=1.76.0 -y

. "$HOME/.cargo/env"
```

### macOS

Install the [Xcode Command Line Tools](https://mac.install.guide/commandlinetools/index.html) by opening a terminal and running
```shell
xcode-select --install
```

Install [Homebrew (`brew`)](https://brew.sh/) and in a new terminal run
```shell
brew install bash cmake rust
```

Check that `PATH` is set up correctly
```shell
which bash cmake

# /usr/local/bin/bash
# /usr/local/bin/cmake
```

### Windows + MSYS2

*The instructions below assume the OS is 64-bit Windows and that the hardware or VM is [x86-64](https://en.wikipedia.org/wiki/X86-64) compatible.*

Download and run the installer from [msys2.org](https://www.msys2.org/).

Launch an MSYS2 [environment](https://www.msys2.org/docs/environments/). UCRT64 is generally recommended: from the Windows *Start menu*, select `MSYS2 MinGW UCRT x64`.

Assuming a UCRT64 environment, in Bash run

```shell
pacman -Suy
pacman -S base-devel git unzip mingw-w64-ucrt-x86_64-toolchain mingw-w64-ucrt-x86_64-cmake mingw-w64-ucrt-x86_64-rust
```

<!-- #### Headless Windows container -->
<!-- add instructions re: getting setup with MSYS2 in a Windows container -->
<!-- https://github.com/StefanScherer/windows-docker-machine -->

#### Optional: VSCode Terminal integration

You can link the MSYS2-UCRT64 terminal into VSCode by modifying the configuration file as shown below.

File: `C:/Users/<username>/AppData/Roaming/Code/User/settings.json`
```json
{
  ...
  "terminal.integrated.profiles.windows": {
    ...
    "MSYS2-UCRT64": {
      "path": "C:\\msys64\\usr\\bin\\bash.exe",
      "args": [
        "--login",
        "-i"
      ],
      "env": {
        "MSYSTEM": "UCRT64",
        "CHERE_INVOKING": "1",
        "MSYS2_PATH_TYPE": "inherit"
      }
    }
  }
}
```

### Other

It is possible that nim-codex can be built and run on other platforms supported by the [Nim](https://nim-lang.org/) language: the BSD family, older versions of Windows, etc. There has not been sufficient experimentation with nim-codex on such platforms, so instructions are not provided. Community contributions to these docs and our build system are welcome!

## Repository

In Bash run

```shell
git clone https://github.com/codex-storage/nim-codex.git repos/nim-codex && cd repos/nim-codex
```

nim-codex uses the [nimbus-build-system](https://github.com/status-im/nimbus-build-system), so next run

```shell
make update
```

This step can take a while to complete because by default it builds the [Nim compiler](https://nim-lang.org/docs/nimc.html).

To see more output from `make`, pass `V=1`. This works for all `make` targets in projects using the nimbus-build-system

```shell
make V=1 update
```

## Executable

In Bash run

```shell
make
```

The default `make` target creates the `build/codex` executable.

## Example usage

See the [instructions](README.md#cli-options) in the main readme.

## Tests

In Bash run

```shell
make test
```

### Tools

#### Circuit download tool

To build the circuit download tool located in `tools/cirdl`, run:

```shell
make cirdl
```

### testAll

#### Prerequisites

To run the integration tests, an Ethereum test node is required. Follow these instructions to set it up.

##### Windows (do this before 'All platforms')

1. Download and install Visual Studio 2017 or newer (not VSCode!). In the Workloads overview, enable `Desktop development with C++`. (https://visualstudio.microsoft.com)

##### All platforms

1. Install Node.js (tested with v18.14.0); consider using [Node Version Manager (`nvm`)](https://github.com/nvm-sh/nvm#readme) as a version manager.
1. Open a terminal.
1. Go to the vendor/codex-contracts-eth folder: `cd /<git-root>/vendor/codex-contracts-eth/`
1. `npm install` -> should complete with the number of packages added and an overview of known vulnerabilities.
1. `npm test` -> should output test results. May take a minute.

Before the integration tests are started, you must start the Ethereum test node manually.
1. Open a terminal.
1. Go to the vendor/codex-contracts-eth folder: `cd /<git-root>/vendor/codex-contracts-eth/`
1. `npm start` -> this should launch Hardhat and output a number of keys and a warning message.

#### Run

The `testAll` target runs the same tests as `make test` and also runs tests for nim-codex's Ethereum contracts, as well as a basic suite of integration tests.

In a new terminal run:

```shell
make testAll
```
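As a quick sanity check for the Rust prerequisite in the guide above (v1.76.0 or greater), version strings can be compared with `sort -V`; the installed version below is a made-up example rather than output captured from a real machine:

```shell
required="1.76.0"
installed="1.78.0"   # e.g. taken from: cargo --version | awk '{print $2}'
# sort -V orders version strings numerically; if the required version
# sorts first (or equals the installed one), the toolchain is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n 1)
if [ "$lowest" = "$required" ]; then
  echo "rust OK"
else
  echo "rust too old"
fi
# rust OK
```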
README.md (110 lines changed):
```diff
@@ -16,7 +16,7 @@

 ## Build and Run

-For detailed instructions on preparing to build nim-codex see [*Building Codex*](BUILDING.md).
+For detailed instructions on preparing to build nim-codex see [*Build Codex*](https://docs.codex.storage/learn/build).

 To build the project, clone it and run:

```
@ -35,112 +35,18 @@ build/codex
|
|||
|
||||
It is possible to configure a Codex node in several ways:
|
||||
1. CLI options
|
||||
2. Env. variable
|
||||
3. Config
|
||||
2. Environment variables
|
||||
3. Configuration file
|
||||
|
||||
The order of priority is the same as above: Cli arguments > Env variables > Config file values.
|
||||
The order of priority is the same as above: CLI options --> Environment variables --> Configuration file.
|
||||
|
||||
### Environment variables
|
||||
Please check [documentation](https://docs.codex.storage/learn/run#configuration) for more information.
|
||||
|
||||
In order to set a configuration option using environment variables, first find the desired CLI option
|
||||
and then transform it in the following way:
|
||||
|
||||
1. prepend it with `CODEX_`
|
||||
2. make it uppercase
|
||||
3. replace `-` with `_`
|
||||
|
||||
For example, to configure `--log-level`, use `CODEX_LOG_LEVEL` as the environment variable name.
|
||||
|
||||
### Configuration file
|
||||
|
||||
A [TOML](https://toml.io/en/) configuration file can also be used to set configuration values. Configuration option names and corresponding values are placed in the file, separated by `=`. Configuration option names can be obtained from the `codex --help` command, and should not include the `--` prefix. For example, a node's log level (`--log-level`) can be configured using TOML as follows:
|
||||
|
||||
```toml
|
||||
log-level = "trace"
|
||||
```
|
||||
|
||||
The Codex node can then read the configuration from this file using the `--config-file` CLI parameter, like `codex --config-file=/path/to/your/config.toml`.

### CLI Options

```
build/codex --help
Usage:

codex [OPTIONS]... command

The following options are available:

     --config-file           Loads the configuration from a TOML file [=none].
     --log-level             Sets the log level [=info].
     --metrics               Enable the metrics server [=false].
     --metrics-address       Listening address of the metrics server [=127.0.0.1].
     --metrics-port          Listening HTTP port of the metrics server [=8008].
 -d, --data-dir              The directory where codex will store configuration and data.
 -i, --listen-addrs          Multi Addresses to listen on [=/ip4/0.0.0.0/tcp/0].
 -a, --nat                   IP Addresses to announce behind a NAT [=127.0.0.1].
 -e, --disc-ip               Discovery listen address [=0.0.0.0].
 -u, --disc-port             Discovery (UDP) port [=8090].
     --net-privkey           Source of network (secp256k1) private key file path or name [=key].
 -b, --bootstrap-node        Specifies one or more bootstrap nodes to use when connecting to the network.
     --max-peers             The maximum number of peers to connect to [=160].
     --agent-string          Node agent string which is used as identifier in network [=Codex].
     --api-bindaddr          The REST API bind address [=127.0.0.1].
 -p, --api-port              The REST Api port [=8080].
     --repo-kind             Backend for main repo store (fs, sqlite) [=fs].
 -q, --storage-quota         The size of the total storage quota dedicated to the node [=8589934592].
 -t, --block-ttl             Default block timeout in seconds - 0 disables the ttl [=$DefaultBlockTtl].
     --block-mi              Time interval in seconds - determines frequency of block maintenance cycle: how
                             often blocks are checked for expiration and cleanup
                             [=$DefaultBlockMaintenanceInterval].
     --block-mn              Number of blocks to check every maintenance cycle [=1000].
 -c, --cache-size            The size of the block cache, 0 disables the cache - might help on slow hardrives
                             [=0].

Available sub-commands:

codex persistence [OPTIONS]... command

The following options are available:

     --eth-provider          The URL of the JSON-RPC API of the Ethereum node [=ws://localhost:8545].
     --eth-account           The Ethereum account that is used for storage contracts.
     --eth-private-key       File containing Ethereum private key for storage contracts.
     --marketplace-address   Address of deployed Marketplace contract.
     --validator             Enables validator, requires an Ethereum node [=false].
     --validator-max-slots   Maximum number of slots that the validator monitors [=1000].

Available sub-commands:

codex persistence prover [OPTIONS]...

The following options are available:

     --circom-r1cs           The r1cs file for the storage circuit.
     --circom-wasm           The wasm file for the storage circuit.
     --circom-zkey           The zkey file for the storage circuit.
     --circom-no-zkey        Ignore the zkey file - use only for testing! [=false].
     --proof-samples         Number of samples to prove [=5].
     --max-slot-depth        The maximum depth of the slot tree [=32].
     --max-dataset-depth     The maximum depth of the dataset tree [=8].
     --max-block-depth       The maximum depth of the network block merkle tree [=5].
     --max-cell-elements     The maximum number of elements in a cell [=67].
```

#### Logging

Codex uses the [Chronicles](https://github.com/status-im/nim-chronicles) logging library, which allows great flexibility in working with logs.
Chronicles has the concept of topics, which categorize log entries into semantic groups.

Using the `log-level` parameter, you can set the top-level log level like `--log-level="trace"`, but more importantly,
you can set log levels for specific topics like `--log-level="info; trace: marketplace,node; error: blockexchange"`,
which sets the top-level log level to `info` and then for topics `marketplace` and `node` sets the level to `trace` and so on.
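The directive string splits on `;`: the first element is the top-level level, and each remaining element is a `level: topic1,topic2` pair. A small Python sketch of this parsing (illustrative only; the real parsing is done by Chronicles, and `parse_log_level` is a hypothetical helper name):

```python
def parse_log_level(directives: str):
    """Split a Chronicles-style log-level string into a top-level level
    and per-topic overrides, e.g. "info; trace: marketplace,node"."""
    parts = [p.strip() for p in directives.split(";") if p.strip()]
    top_level = parts[0]
    topics = {}
    for part in parts[1:]:
        level, _, names = part.partition(":")
        for topic in names.split(","):
            topics[topic.strip()] = level.strip()
    return top_level, topics

print(parse_log_level("info; trace: marketplace,node; error: blockexchange"))
```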

## Guides

To get acquainted with Codex, consider:
* running the simple [Codex Two-Client Test](https://docs.codex.storage/learn/local-two-client-test) for a start, and;
* if you are feeling more adventurous, try [Running a Local Codex Network with Marketplace Support](https://docs.codex.storage/learn/local-marketplace) using a local blockchain as well.

## API

@@ -93,18 +93,20 @@ proc send*(b: BlockExcNetwork, id: PeerId, msg: pb.Message) {.async.} =
   ## Send message to peer
   ##
 
-  b.peers.withValue(id, peer):
-    try:
-      await b.inflightSema.acquire()
-      await peer[].send(msg)
-    except CancelledError as error:
-      raise error
-    except CatchableError as err:
-      error "Error sending message", peer = id, msg = err.msg
-    finally:
-      b.inflightSema.release()
-  do:
-    trace "Unable to send, peer not found", peerId = id
+  if not (id in b.peers):
+    trace "Unable to send, peer not found", peerId = id
+    return
+
+  let peer = b.peers[id]
+  try:
+    await b.inflightSema.acquire()
+    await peer.send(msg)
+  except CancelledError as error:
+    raise error
+  except CatchableError as err:
+    error "Error sending message", peer = id, msg = err.msg
+  finally:
+    b.inflightSema.release()
 
 proc handleWantList(
   b: BlockExcNetwork,
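The `send` hunk above bounds the number of in-flight sends with a semaphore that is acquired before each send and released in a `finally` block, so a failed send cannot leak a permit. A minimal Python `asyncio` sketch of the same bounded-concurrency pattern (illustrative only, not the Codex implementation):

```python
import asyncio

async def send_bounded(sema: asyncio.Semaphore, sent: list, msg: str) -> None:
    """Send msg while holding a semaphore slot, so at most N sends are in flight."""
    await sema.acquire()
    try:
        sent.append(msg)  # stand-in for the real network send
        await asyncio.sleep(0)
    finally:
        sema.release()  # always release, even if the send raised

async def main() -> list:
    sema = asyncio.Semaphore(2)  # at most 2 concurrent sends
    sent: list = []
    await asyncio.gather(*(send_bounded(sema, sent, f"msg{i}") for i in range(5)))
    return sent

print(asyncio.run(main()))
```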
@@ -122,25 +122,30 @@ proc bootstrapInteractions(
   else:
     s.codexNode.clock = SystemClock()
 
   if config.persistence:
     # This is used for simulation purposes. Normal nodes won't be compiled with this flag
     # and hence the proof failure will always be 0.
     when codex_enable_proof_failures:
       let proofFailures = config.simulateProofFailures
       if proofFailures > 0:
         warn "Enabling proof failure simulation!"
     else:
       let proofFailures = 0
       if config.simulateProofFailures > 0:
         warn "Proof failure simulation is not enabled for this build! Configuration ignored"
 
     let purchasing = Purchasing.new(market, clock)
     let sales = Sales.new(market, clock, repo, proofFailures)
     client = some ClientInteractions.new(clock, purchasing)
     host = some HostInteractions.new(clock, sales)
 
   if config.validator:
-    let validation = Validation.new(clock, market, config.validatorMaxSlots)
+    without validationConfig =? ValidationConfig.init(
+      config.validatorMaxSlots,
+      config.validatorGroups,
+      config.validatorGroupIndex), err:
+      error "Invalid validation parameters", err = err.msg
+      quit QuitFailure
+    let validation = Validation.new(clock, market, validationConfig)
     validator = some ValidatorInteractions.new(clock, validation)
 
   s.codexNode.contracts = (client, host, validator)
@@ -37,8 +37,10 @@ import ./logutils
 import ./stores
 import ./units
 import ./utils
+from ./validationconfig import MaxSlots, ValidationGroups
 
 export units, net, codextypes, logutils
+export ValidationGroups, MaxSlots
 
 export
   DefaultQuotaBytes,
@@ -99,7 +101,8 @@ type
 
   logFormat* {.
     hidden
-    desc: "Specifies what kind of logs should be written to stdout (auto, colors, nocolors, json)"
+    desc: "Specifies what kind of logs should be written to stdout (auto, " &
+          "colors, nocolors, json)"
     defaultValueDesc: "auto"
     defaultValue: LogKind.Auto
     name: "log-format" }: LogKind
@@ -164,7 +167,8 @@ type
     name: "net-privkey" }: string
 
   bootstrapNodes* {.
-    desc: "Specifies one or more bootstrap nodes to use when connecting to the network"
+    desc: "Specifies one or more bootstrap nodes to use when " &
+          "connecting to the network"
     abbr: "b"
     name: "bootstrap-node" }: seq[SignedPeerRecord]
 
@@ -192,7 +196,8 @@ type
     abbr: "p" }: Port
 
   apiCorsAllowedOrigin* {.
-    desc: "The REST Api CORS allowed origin for downloading data. '*' will allow all origins, '' will allow none.",
+    desc: "The REST Api CORS allowed origin for downloading data. " &
+          "'*' will allow all origins, '' will allow none.",
     defaultValue: string.none
     defaultValueDesc: "Disallow all cross origin requests to download data"
     name: "api-cors-origin" }: Option[string]
@@ -218,7 +223,9 @@ type
     abbr: "t" }: Duration
 
   blockMaintenanceInterval* {.
-    desc: "Time interval in seconds - determines frequency of block maintenance cycle: how often blocks are checked for expiration and cleanup"
+    desc: "Time interval in seconds - determines frequency of block " &
+          "maintenance cycle: how often blocks are checked " &
+          "for expiration and cleanup"
     defaultValue: DefaultBlockMaintenanceInterval
     defaultValueDesc: $DefaultBlockMaintenanceInterval
     name: "block-mi" }: Duration
@@ -230,7 +237,8 @@ type
     name: "block-mn" }: int
 
   cacheSize* {.
-    desc: "The size of the block cache, 0 disables the cache - might help on slow hardrives"
+    desc: "The size of the block cache, 0 disables the cache - " &
+          "might help on slow hardrives"
     defaultValue: 0
     defaultValueDesc: "0"
     name: "cache-size"
@@ -290,9 +298,35 @@ type
 
   validatorMaxSlots* {.
     desc: "Maximum number of slots that the validator monitors"
+    longDesc: "If set to 0, the validator will not limit " &
+              "the maximum number of slots it monitors"
     defaultValue: 1000
     name: "validator-max-slots"
-  .}: int
+  .}: MaxSlots
+
+  validatorGroups* {.
+    desc: "Slot validation groups"
+    longDesc: "A number indicating total number of groups into " &
+              "which the whole slot id space will be divided. " &
+              "The value must be in the range [2, 65535]. " &
+              "If not provided, the validator will observe " &
+              "the whole slot id space and the value of " &
+              "the --validator-group-index parameter will be ignored. " &
+              "Powers of twos are advised for even distribution"
+    defaultValue: ValidationGroups.none
+    name: "validator-groups"
+  .}: Option[ValidationGroups]
+
+  validatorGroupIndex* {.
+    desc: "Slot validation group index"
+    longDesc: "The value provided must be in the range " &
+              "[0, validatorGroups). Ignored when --validator-groups " &
+              "is not provided. Only slot ids satisfying condition " &
+              "[(slotId mod validationGroups) == groupIndex] will be " &
+              "observed by the validator"
+    defaultValue: 0
+    name: "validator-group-index"
+  .}: uint16
 
   rewardRecipient* {.
     desc: "Address to send payouts to (eg rewards and refunds)"
@@ -546,7 +580,10 @@ proc updateLogLevel*(logLevel: string) {.upraises: [ValueError].} =
   try:
     setLogLevel(parseEnum[LogLevel](directives[0].toUpperAscii))
   except ValueError:
-    raise (ref ValueError)(msg: "Please specify one of: trace, debug, info, notice, warn, error or fatal")
+    raise (ref ValueError)(
+      msg: "Please specify one of: trace, debug, " &
+           "info, notice, warn, error or fatal"
+    )
 
   if directives.len > 1:
     for topicName, settings in parseTopicDirectives(directives[1..^1]):
@@ -247,6 +247,22 @@ method canProofBeMarkedAsMissing*(
     trace "Proof cannot be marked as missing", msg = e.msg
     return false
 
+method reserveSlot*(
+  market: OnChainMarket,
+  requestId: RequestId,
+  slotIndex: UInt256) {.async.} =
+
+  convertEthersError:
+    discard await market.contract.reserveSlot(requestId, slotIndex).confirm(0)
+
+method canReserveSlot*(
+  market: OnChainMarket,
+  requestId: RequestId,
+  slotIndex: UInt256): Future[bool] {.async.} =
+
+  convertEthersError:
+    return await market.contract.canReserveSlot(requestId, slotIndex)
+
 method subscribeRequests*(market: OnChainMarket,
                           callback: OnRequest):
                          Future[MarketSubscription] {.async.} =

@@ -291,6 +307,17 @@ method subscribeSlotFreed*(market: OnChainMarket,
   let subscription = await market.contract.subscribe(SlotFreed, onEvent)
   return OnChainMarketSubscription(eventSubscription: subscription)
 
+method subscribeSlotReservationsFull*(
+  market: OnChainMarket,
+  callback: OnSlotReservationsFull): Future[MarketSubscription] {.async.} =
+
+  proc onEvent(event: SlotReservationsFull) {.upraises:[].} =
+    callback(event.requestId, event.slotIndex)
+
+  convertEthersError:
+    let subscription = await market.contract.subscribe(SlotReservationsFull, onEvent)
+    return OnChainMarketSubscription(eventSubscription: subscription)
+
 method subscribeFulfillment(market: OnChainMarket,
                             callback: OnFulfillment):
                            Future[MarketSubscription] {.async.} =
@@ -51,3 +51,6 @@ proc getPointer*(marketplace: Marketplace, id: SlotId): uint8 {.contract, view.}
 
 proc submitProof*(marketplace: Marketplace, id: SlotId, proof: Groth16Proof): ?TransactionResponse {.contract.}
 proc markProofAsMissing*(marketplace: Marketplace, id: SlotId, period: UInt256): ?TransactionResponse {.contract.}
+
+proc reserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): ?TransactionResponse {.contract.}
+proc canReserveSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256): bool {.contract, view.}
@@ -25,6 +25,7 @@ type
   OnFulfillment* = proc(requestId: RequestId) {.gcsafe, upraises: [].}
   OnSlotFilled* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises:[].}
   OnSlotFreed* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
+  OnSlotReservationsFull* = proc(requestId: RequestId, slotIndex: UInt256) {.gcsafe, upraises: [].}
   OnRequestCancelled* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
   OnRequestFailed* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
   OnProofSubmitted* = proc(id: SlotId) {.gcsafe, upraises:[].}

@@ -42,6 +43,9 @@ type
   SlotFreed* = object of MarketplaceEvent
     requestId* {.indexed.}: RequestId
     slotIndex*: UInt256
+  SlotReservationsFull* = object of MarketplaceEvent
+    requestId* {.indexed.}: RequestId
+    slotIndex*: UInt256
   RequestFulfilled* = object of MarketplaceEvent
     requestId* {.indexed.}: RequestId
   RequestCancelled* = object of MarketplaceEvent

@@ -161,6 +165,20 @@ method canProofBeMarkedAsMissing*(market: Market,
                                   period: Period): Future[bool] {.base, async.} =
   raiseAssert("not implemented")
 
+method reserveSlot*(
+  market: Market,
+  requestId: RequestId,
+  slotIndex: UInt256) {.base, async.} =
+
+  raiseAssert("not implemented")
+
+method canReserveSlot*(
+  market: Market,
+  requestId: RequestId,
+  slotIndex: UInt256): Future[bool] {.base, async.} =
+
+  raiseAssert("not implemented")
+
 method subscribeFulfillment*(market: Market,
                              callback: OnFulfillment):
                             Future[Subscription] {.base, async.} =

@@ -189,6 +207,12 @@ method subscribeSlotFreed*(market: Market,
                           Future[Subscription] {.base, async.} =
   raiseAssert("not implemented")
 
+method subscribeSlotReservationsFull*(
+  market: Market,
+  callback: OnSlotReservationsFull): Future[Subscription] {.base, async.} =
+
+  raiseAssert("not implemented")
+
 method subscribeRequestCancelled*(market: Market,
                                   callback: OnRequestCancelled):
                                  Future[Subscription] {.base, async.} =
@@ -465,6 +465,23 @@ proc subscribeSlotFreed(sales: Sales) {.async.} =
   except CatchableError as e:
     error "Unable to subscribe to slot freed events", msg = e.msg
 
+proc subscribeSlotReservationsFull(sales: Sales) {.async.} =
+  let context = sales.context
+  let market = context.market
+  let queue = context.slotQueue
+
+  proc onSlotReservationsFull(requestId: RequestId, slotIndex: UInt256) =
+    trace "reservations for slot full, removing from slot queue", requestId, slotIndex
+    queue.delete(requestId, slotIndex.truncate(uint16))
+
+  try:
+    let sub = await market.subscribeSlotReservationsFull(onSlotReservationsFull)
+    sales.subscriptions.add(sub)
+  except CancelledError as error:
+    raise error
+  except CatchableError as e:
+    error "Unable to subscribe to slot filled events", msg = e.msg
+
 proc startSlotQueue(sales: Sales) {.async.} =
   let slotQueue = sales.context.slotQueue
   let reservations = sales.context.reservations

@@ -488,6 +505,7 @@ proc subscribe(sales: Sales) {.async.} =
   await sales.subscribeSlotFilled()
   await sales.subscribeSlotFreed()
   await sales.subscribeCancellation()
+  await sales.subscribeSlotReservationsFull()
 
 proc unsubscribe(sales: Sales) {.async.} =
   for sub in sales.subscriptions:
@@ -8,8 +8,13 @@ import ./errorhandling
 logScope:
   topics = "marketplace sales ignored"
 
+# Ignored slots could mean there was no availability or that the slot could
+# not be reserved.
+
 type
   SaleIgnored* = ref object of ErrorHandlingState
+    reprocessSlot*: bool # readd slot to queue with `seen` flag
+    returnBytes*: bool # return unreleased bytes from Reservation to Availability
 
 method `$`*(state: SaleIgnored): string = "SaleIgnored"

@@ -17,7 +22,5 @@ method run*(state: SaleIgnored, machine: Machine): Future[?State] {.async.} =
   let agent = SalesAgent(machine)
 
   if onCleanUp =? agent.onCleanUp:
-    # Ignored slots mean there was no availability. In order to prevent small
-    # availabilities from draining the queue, mark this slot as seen and re-add
-    # back into the queue.
-    await onCleanUp(reprocessSlot = true)
+    await onCleanUp(reprocessSlot = state.reprocessSlot,
+                    returnBytes = state.returnBytes)
@@ -11,7 +11,7 @@ import ./cancelled
 import ./failed
 import ./filled
 import ./ignored
-import ./downloading
+import ./slotreserving
 import ./errored
 
 declareCounter(codex_reservations_availability_mismatch, "codex reservations availability_mismatch")

@@ -50,7 +50,7 @@ method run*(state: SalePreparing, machine: Machine): Future[?State] {.async.} =
   let slotId = slotId(data.requestId, data.slotIndex)
   let state = await market.slotState(slotId)
   if state != SlotState.Free:
-    return some State(SaleIgnored())
+    return some State(SaleIgnored(reprocessSlot: false, returnBytes: false))
 
   # TODO: Once implemented, check to ensure the host is allowed to fill the slot,
   # due to the [sliding window mechanism](https://github.com/codex-storage/codex-research/blob/master/design/marketplace.md#dispersal)

@@ -71,7 +71,7 @@ method run*(state: SalePreparing, machine: Machine): Future[?State] {.async.} =
       request.ask.collateral):
     debug "No availability found for request, ignoring"
 
-    return some State(SaleIgnored())
+    return some State(SaleIgnored(reprocessSlot: true))
 
   info "Availability found for request, creating reservation"

@@ -88,11 +88,11 @@ method run*(state: SalePreparing, machine: Machine): Future[?State] {.async.} =
     if error of BytesOutOfBoundsError:
       # Lets monitor how often this happen and if it is often we can make it more inteligent to handle it
       codex_reservations_availability_mismatch.inc()
-      return some State(SaleIgnored())
+      return some State(SaleIgnored(reprocessSlot: true))
 
     return some State(SaleErrored(error: error))
 
   trace "Reservation created succesfully"
 
   data.reservation = some reservation
-  return some State(SaleDownloading())
+  return some State(SaleSlotReserving())
@@ -0,0 +1,61 @@
+import pkg/questionable
+import pkg/questionable/results
+import pkg/metrics
+
+import ../../logutils
+import ../../market
+import ../salesagent
+import ../statemachine
+import ./errorhandling
+import ./cancelled
+import ./failed
+import ./filled
+import ./ignored
+import ./downloading
+import ./errored
+
+type
+  SaleSlotReserving* = ref object of ErrorHandlingState
+
+logScope:
+  topics = "marketplace sales reserving"
+
+method `$`*(state: SaleSlotReserving): string = "SaleSlotReserving"
+
+method onCancelled*(state: SaleSlotReserving, request: StorageRequest): ?State =
+  return some State(SaleCancelled())
+
+method onFailed*(state: SaleSlotReserving, request: StorageRequest): ?State =
+  return some State(SaleFailed())
+
+method onSlotFilled*(state: SaleSlotReserving, requestId: RequestId,
+                     slotIndex: UInt256): ?State =
+  return some State(SaleFilled())
+
+method run*(state: SaleSlotReserving, machine: Machine): Future[?State] {.async.} =
+  let agent = SalesAgent(machine)
+  let data = agent.data
+  let context = agent.context
+  let market = context.market
+
+  logScope:
+    requestId = data.requestId
+    slotIndex = data.slotIndex
+
+  let canReserve = await market.canReserveSlot(data.requestId, data.slotIndex)
+  if canReserve:
+    try:
+      trace "Reserving slot"
+      await market.reserveSlot(data.requestId, data.slotIndex)
+    except MarketError as e:
+      return some State( SaleErrored(error: e) )
+
+    trace "Slot successfully reserved"
+    return some State( SaleDownloading() )
+
+  else:
+    # do not re-add this slot to the queue, and return bytes from Reservation to
+    # the Availability
+    debug "Slot cannot be reserved, ignoring"
+    return some State( SaleIgnored(reprocessSlot: false, returnBytes: true) )
@@ -1,35 +1,38 @@
 import std/sets
 import std/sequtils
 import pkg/chronos
 import pkg/questionable/results
 
+import ./validationconfig
 import ./market
 import ./clock
 import ./logutils
 
 export market
 export sets
+export validationconfig
 
 type
   Validation* = ref object
     slots: HashSet[SlotId]
-    maxSlots: int
     clock: Clock
     market: Market
     subscriptions: seq[Subscription]
     running: Future[void]
     periodicity: Periodicity
     proofTimeout: UInt256
+    config: ValidationConfig
 
 logScope:
   topics = "codex validator"
 
 proc new*(
   _: type Validation,
   clock: Clock,
   market: Market,
-  maxSlots: int
+  config: ValidationConfig
 ): Validation =
   ## Create a new Validation instance
-  Validation(clock: clock, market: market, maxSlots: maxSlots)
+  Validation(clock: clock, market: market, config: config)
 
 proc slots*(validation: Validation): seq[SlotId] =
   validation.slots.toSeq

@@ -43,13 +46,29 @@ proc waitUntilNextPeriod(validation: Validation) {.async.} =
   trace "Waiting until next period", currentPeriod = period
   await validation.clock.waitUntil(periodEnd.truncate(int64) + 1)
 
+func groupIndexForSlotId*(slotId: SlotId,
+                          validationGroups: ValidationGroups): uint16 =
+  let slotIdUInt256 = UInt256.fromBytesBE(slotId.toArray)
+  (slotIdUInt256 mod validationGroups.u256).truncate(uint16)
+
+func maxSlotsConstraintRespected(validation: Validation): bool =
+  validation.config.maxSlots == 0 or
+    validation.slots.len < validation.config.maxSlots
+
+func shouldValidateSlot(validation: Validation, slotId: SlotId): bool =
+  if (validationGroups =? validation.config.groups):
+    (groupIndexForSlotId(slotId, validationGroups) ==
+      validation.config.groupIndex) and
+      validation.maxSlotsConstraintRespected
+  else:
+    validation.maxSlotsConstraintRespected
+
 proc subscribeSlotFilled(validation: Validation) {.async.} =
   proc onSlotFilled(requestId: RequestId, slotIndex: UInt256) =
     let slotId = slotId(requestId, slotIndex)
     if slotId notin validation.slots:
-      if validation.slots.len < validation.maxSlots:
-        trace "Adding slot", slotId
-        validation.slots.incl(slotId)
+      if validation.shouldValidateSlot(slotId):
+        trace "Adding slot", slotId
+        validation.slots.incl(slotId)
   let subscription = await validation.market.subscribeSlotFilled(onSlotFilled)
   validation.subscriptions.add(subscription)
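The validation hunk above assigns each slot to a group by interpreting the slot id as a big-endian integer modulo the number of groups; a validator then only watches slots whose group index matches its own. A small Python sketch of that assignment (the helper names are hypothetical, introduced only for illustration):

```python
from typing import Optional

def group_index_for_slot_id(slot_id: bytes, validation_groups: int) -> int:
    """Map a 32-byte slot id to a group in [0, validation_groups)."""
    return int.from_bytes(slot_id, "big") % validation_groups

def should_validate(slot_id: bytes, groups: Optional[int], group_index: int) -> bool:
    # With no groups configured, the validator observes the whole slot id space.
    if groups is None:
        return True
    return group_index_for_slot_id(slot_id, groups) == group_index

slot = (123456789).to_bytes(32, "big")
print(group_index_for_slot_id(slot, 16))  # 123456789 mod 16 == 5
```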
@@ -0,0 +1,36 @@
+import std/strformat
+import pkg/questionable
+import pkg/questionable/results
+
+type
+  ValidationGroups* = range[2..65535]
+  MaxSlots* = int
+  ValidationConfig* = object
+    maxSlots: MaxSlots
+    groups: ?ValidationGroups
+    groupIndex: uint16
+
+func init*(
+    _: type ValidationConfig,
+    maxSlots: MaxSlots,
+    groups: ?ValidationGroups,
+    groupIndex: uint16 = 0): ?!ValidationConfig =
+  if maxSlots < 0:
+    return failure "The value of maxSlots must be greater than " &
+        fmt"or equal to 0! (got: {maxSlots})"
+  if validationGroups =? groups and groupIndex >= uint16(validationGroups):
+    return failure "The value of the group index must be less than " &
+        fmt"validation groups! (got: {groupIndex = }, " &
+        fmt"groups = {validationGroups})"
+
+  success ValidationConfig(
+    maxSlots: maxSlots, groups: groups, groupIndex: groupIndex)
+
+func maxSlots*(config: ValidationConfig): MaxSlots =
+  config.maxSlots
+
+func groups*(config: ValidationConfig): ?ValidationGroups =
+  config.groups
+
+func groupIndex*(config: ValidationConfig): uint16 =
+  config.groupIndex
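`ValidationConfig.init` above rejects a negative `maxSlots` and a `groupIndex` outside `[0, groups)`. A Python sketch of the same validation rules (illustrative only; `make_validation_config` is a hypothetical name, and the `[2, 65535]` bound mirrors the `ValidationGroups` range type):

```python
from typing import Optional

def make_validation_config(max_slots: int, groups: Optional[int],
                           group_index: int = 0) -> dict:
    """Validate and build a config, mirroring the checks in ValidationConfig.init."""
    if max_slots < 0:
        raise ValueError(f"maxSlots must be >= 0 (got: {max_slots})")
    if groups is not None:
        if not (2 <= groups <= 65535):
            raise ValueError(f"groups must be in [2, 65535] (got: {groups})")
        if group_index >= groups:
            raise ValueError(f"group index must be less than groups (got: {group_index})")
    return {"max_slots": max_slots, "groups": groups, "group_index": group_index}
```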
@@ -1,6 +1,8 @@
 #!/bin/bash
 
+# Environment variables from files
+# If set to file path, read the file and export the variables
+# If set to directory path, read all files in the directory and export the variables
 if [[ -n "${ENV_PATH}" ]]; then
   set -a
   [[ -f "${ENV_PATH}" ]] && source "${ENV_PATH}" || for f in "${ENV_PATH}"/*; do source "$f"; done

@@ -53,15 +55,28 @@ fi
 # Circuit downloader
 # cirdl [circuitPath] [rpcEndpoint] [marketplaceAddress]
 if [[ "$@" == *"prover"* ]]; then
-  echo "Run Circuit downloader"
-  # Set circuits dir from CODEX_CIRCUIT_DIR variables if set
+  echo "Prover is enabled - Run Circuit downloader"
+
+  # Set variables required by cirdl from command line arguments when passed
+  for arg in data-dir circuit-dir eth-provider marketplace-address; do
+    arg_value=$(grep -o "${arg}=[^ ,]\+" <<< $@ | awk -F '=' '{print $2}')
+    if [[ -n "${arg_value}" ]]; then
+      var_name=$(tr '[:lower:]' '[:upper:]' <<< "CODEX_${arg//-/_}")
+      export "${var_name}"="${arg_value}"
+    fi
+  done
+
+  # Set circuit dir from CODEX_CIRCUIT_DIR variables if set
   if [[ -z "${CODEX_CIRCUIT_DIR}" ]]; then
     export CODEX_CIRCUIT_DIR="${CODEX_DATA_DIR}/circuits"
   fi
-  # Download circuits
+
+  # Download circuit
   mkdir -p "${CODEX_CIRCUIT_DIR}"
   chmod 700 "${CODEX_CIRCUIT_DIR}"
-  cirdl "${CODEX_CIRCUIT_DIR}" "${CODEX_ETH_PROVIDER}" "${CODEX_MARKETPLACE_ADDRESS}"
+  download="cirdl ${CODEX_CIRCUIT_DIR} ${CODEX_ETH_PROVIDER} ${CODEX_MARKETPLACE_ADDRESS}"
+  echo "${download}"
+  eval "${download}"
+  [[ $? -ne 0 ]] && { echo "Failed to download circuit files"; exit 1; }
 fi
@@ -1,68 +0,0 @@
-# Download Flow
-Sequence of interactions that result in data blocks being transferred across the network.
-
-## Local Store
-When data is available in the local blockstore,
-
-```mermaid
-sequenceDiagram
-  actor Alice
-  participant API
-  Alice->>API: Download(CID)
-  API->>+Node/StoreStream: Retrieve(CID)
-  loop Get manifest block, then data blocks
-    Node/StoreStream->>NetworkStore: GetBlock(CID)
-    NetworkStore->>LocalStore: GetBlock(CID)
-    LocalStore->>NetworkStore: Block
-    NetworkStore->>Node/StoreStream: Block
-  end
-  Node/StoreStream->>Node/StoreStream: Handle erasure coding
-  Node/StoreStream->>-API: Data stream
-  API->>Alice: Stream download of block
-```
-
-## Network Store
-When data is not found in the local blockstore, the block-exchange engine is used to discover the location of the block within the network. Connection will be established to the node(s) that have the block, and exchange can take place.
-
-```mermaid
-sequenceDiagram
-  box
-    actor Alice
-    participant API
-    participant Node/StoreStream
-    participant NetworkStore
-    participant Discovery
-    participant Engine
-  end
-  box
-    participant OtherNode
-  end
-  Alice->>API: Download(CID)
-  API->>+Node/StoreStream: Retrieve(CID)
-  Node/StoreStream->>-API: Data stream
-  API->>Alice: Download stream begins
-  loop Get manifest block, then data blocks
-    Node/StoreStream->>NetworkStore: GetBlock(CID)
-    NetworkStore->>Engine: RequestBlock(CID)
-    opt CID not known
-      Engine->>Discovery: Discovery Block
-      Discovery->>Discovery: Locates peers who provide block
-      Discovery->>Engine: Peers
-      Engine->>Engine: Update peers admin
-    end
-    Engine->>Engine: Select optimal peer
-    Engine->>OtherNode: Send WantHave list
-    OtherNode->>Engine: Send BlockPresence
-    Engine->>Engine: Update peers admin
-    Engine->>Engine: Decide to buy block
-    Engine->>OtherNode: Send WantBlock list
-    OtherNode->>Engine: Send Block
-    Engine->>NetworkStore: Block
-    NetworkStore->>NetworkStore: Add to Local store
-    NetworkStore->>Node/StoreStream: Resolve Block
-    Node/StoreStream->>Node/StoreStream: Handle erasure coding
-    Node/StoreStream->>API: Push data to stream
-  end
-  API->>Alice: Download stream finishes
-```
@ -1,444 +0,0 @@
|
|||
# Running a Local Codex Network with Marketplace Support
|
||||
|
||||
This tutorial will teach you how to run a small Codex network with the _storage marketplace_ enabled; i.e., the functionality in Codex which allows participants to offer and buy storage in a market, ensuring that storage providers honor their part of the deal by means of cryptographic proofs.
|
||||
|
||||
To complete this tutorial, you will need:
|
||||
|
||||
* the [geth](https://github.com/ethereum/go-ethereum) Ethereum client;
|
||||
* a Codex binary, which [you can compile from source](https://github.com/codex-storage/nim-codex?tab=readme-ov-file#build-and-run).
|
||||
|
||||
We will also be using [bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) syntax throughout. If you use a different shell, you may need to adapt things to your platform.
|
||||
|
||||
In this tutorial, you will:
|
||||
|
||||
1. [Set Up a Geth PoA network](#1-set-up-a-geth-poa-network);
|
||||
2. [Set up The Marketplace](#2-set-up-the-marketplace);
|
||||
3. [Run Codex](#3-run-codex);
|
||||
4. [Buy and Sell Storage in the Marketplace](#4-buy-and-sell-storage-on-the-marketplace).
|
||||
|
||||
We strongly suggest you to create a folder (e.g. `marketplace-tutorial`), and switch into it before beginning.
|
||||
|
||||
## 1. Set Up a Geth PoA Network
|
||||
|
||||
For this tutorial, we will use a simple [Proof-of-Authority](https://github.com/ethereum/EIPs/issues/225) network with geth. The first step is creating a _signer account_: an account which will be used by geth to sign the blocks in the network. Any block signed by a signer is accepted as valid.
|
||||
|
||||
### 1.1. Create a Signer Account
|
||||
|
||||
To create a signer account, run:
|
||||
|
||||
```bash
|
||||
geth account new --datadir geth-data
|
||||
```
|
||||
|
||||
The account generator will ask you to input a password, which you can leave blank. It will then print some information, including the account's public address:
|
||||
|
||||
```bash
|
||||
INFO [03-22|12:58:05.637] Maximum peer count ETH=50 total=50
|
||||
INFO [03-22|12:58:05.638] Smartcard socket not found, disabling err="stat /run/pcscd/pcscd.comm: no such file or directory"
|
||||
Your new account is locked with a password. Please give a password. Do not forget this password.
|
||||
Password:
|
||||
Repeat password:
|
||||
|
||||
Your new key was generated
|
||||
|
||||
Public address of the key: 0x93976895c4939d99837C8e0E1779787718EF8368
|
||||
...
|
||||
```
|
||||
|
||||
In this example, the public address of the signer account is `0x93976895c4939d99837C8e0E1779787718EF8368`. Yours will print a different address. Save it for later usage.
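If you would rather capture the address programmatically, a simple grep over geth's output does the trick. This is a sketch that runs against the sample line above; in practice you would pipe the real output of `geth account new` instead:

```shell
# Sample line from geth's output; in practice, pipe the real output instead.
LINE="Public address of the key: 0x93976895c4939d99837C8e0E1779787718EF8368"

# Extract the 0x-prefixed, 40-hex-digit address.
GETH_SIGNER_ADDR=$(echo "$LINE" | grep -o '0x[0-9a-fA-F]\{40\}')
echo "$GETH_SIGNER_ADDR"
```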
|
||||
|
||||
Next, set an environment variable for later use, replacing the zero address below with your own signer address:
|
||||
|
||||
```sh
|
||||
export GETH_SIGNER_ADDR="0x0000000000000000000000000000000000000000"
|
||||
echo ${GETH_SIGNER_ADDR} > geth_signer_address.txt
|
||||
```
|
||||
|
||||
### 1.2. Configure The Network and Create the Genesis Block
|
||||
|
||||
The next step is telling geth what kind of network you want to run. We will be running a [pre-merge](https://ethereum.org/en/roadmap/merge/) network with Proof-of-Authority consensus. To get that working, create a `network.json` file.
|
||||
|
||||
If you set the GETH_SIGNER_ADDR variable above you can run to create the `network.json` file:
|
||||
|
||||
```sh
|
||||
echo "{\"config\": { \"chainId\": 12345, \"homesteadBlock\": 0, \"eip150Block\": 0, \"eip155Block\": 0, \"eip158Block\": 0, \"byzantiumBlock\": 0, \"constantinopleBlock\": 0, \"petersburgBlock\": 0, \"istanbulBlock\": 0, \"berlinBlock\": 0, \"londonBlock\": 0, \"arrowGlacierBlock\": 0, \"grayGlacierBlock\": 0, \"clique\": { \"period\": 1, \"epoch\": 30000 } }, \"difficulty\": \"1\", \"gasLimit\": \"8000000\", \"extradata\": \"0x0000000000000000000000000000000000000000000000000000000000000000${GETH_SIGNER_ADDR:2}0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000\", \"alloc\": { \"${GETH_SIGNER_ADDR}\": { \"balance\": \"10000000000000000000000\"}}}" > network.json
|
||||
```
|
||||
|
||||
You can also manually create the file with the following content modified with your signer private key:
|
||||
|
||||
```json
|
||||
{
|
||||
"config": {
|
||||
"chainId": 12345,
|
||||
"homesteadBlock": 0,
|
||||
"eip150Block": 0,
|
||||
"eip155Block": 0,
|
||||
"eip158Block": 0,
|
||||
"byzantiumBlock": 0,
|
||||
"constantinopleBlock": 0,
|
||||
"petersburgBlock": 0,
|
||||
"istanbulBlock": 0,
|
||||
"berlinBlock": 0,
|
||||
"londonBlock": 0,
|
||||
"arrowGlacierBlock": 0,
|
||||
"grayGlacierBlock": 0,
|
||||
"clique": {
|
||||
"period": 1,
|
||||
"epoch": 30000
|
||||
}
|
||||
},
|
||||
"difficulty": "1",
|
||||
"gasLimit": "8000000",
|
||||
"extradata": "0x000000000000000000000000000000000000000000000000000000000000000093976895c4939d99837C8e0E1779787718EF83680000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
|
||||
"alloc": {
|
||||
"0x93976895c4939d99837C8e0E1779787718EF8368": {
|
||||
"balance": "10000000000000000000000"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Note that the signer account address is embedded in two different places:
|
||||
* inside of the `"extradata"` string, surrounded by zeroes and stripped of its `0x` prefix;
|
||||
* as an entry key in the `alloc` section.
|
||||
Make sure to replace both occurrences with the signer address you wrote down in Step 1.1.
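To make the `extradata` layout concrete, the following sketch (assuming bash) rebuilds it from the sample signer address: 32 zero bytes of vanity data, the address stripped of its `0x` prefix, then 65 zero bytes reserved for the seal:

```shell
SIGNER="0x93976895c4939d99837C8e0E1779787718EF8368"  # replace with your own address

VANITY=$(printf '0%.0s' {1..64})   # 32 zero bytes = 64 hex characters
SEAL=$(printf '0%.0s' {1..130})    # 65 zero bytes = 130 hex characters
EXTRADATA="0x${VANITY}${SIGNER:2}${SEAL}"

echo "${#EXTRADATA}"  # 236 characters: 2 + 64 + 40 + 130
```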
|
||||
|
||||
|
||||
Once `network.json` is created, you can initialize the network with:
|
||||
|
||||
```bash
|
||||
geth init --datadir geth-data network.json
|
||||
```
|
||||
|
||||
### 1.3. Start your PoA Node
|
||||
|
||||
We are now ready to start our one-node private blockchain. To launch the signer node, open a separate terminal in the same working directory and run:
|
||||
|
||||
```bash
|
||||
geth\
|
||||
--datadir geth-data\
|
||||
--networkid 12345\
|
||||
--unlock ${GETH_SIGNER_ADDR}\
|
||||
--nat extip:127.0.0.1\
|
||||
--netrestrict 127.0.0.0/24\
|
||||
--mine\
|
||||
--miner.etherbase ${GETH_SIGNER_ADDR}\
|
||||
--http\
|
||||
--allow-insecure-unlock
|
||||
```
|
||||
|
||||
Note that, once again, the signer account created in Step 1.1 appears in both `--unlock` and `--miner.etherbase`. Make sure you have `GETH_SIGNER_ADDR` set.
|
||||
|
||||
Geth will prompt you to insert the account's password as it starts up. Once you do that, it should be able to start up and begin "mining" blocks.
|
||||
|
||||
## 2. Set Up The Marketplace
|
||||
|
||||
You will need to open a new terminal for this section, and geth must already be running. Setting up the Codex marketplace entails:
|
||||
|
||||
1. Deploying the Codex Marketplace contracts to our private blockchain
|
||||
2. Setting up the Ethereum accounts we will use to buy and sell storage in the Codex marketplace
|
||||
3. Provisioning those accounts with the required token balances
|
||||
|
||||
### 2.1. Deploy the Codex Marketplace Contracts
|
||||
|
||||
To deploy the contracts, start by cloning the Codex contracts repository locally and installing its dependencies:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/codex-storage/codex-contracts-eth
|
||||
cd codex-contracts-eth
|
||||
npm install
|
||||
```
|
||||
You must now **wait until $256$ blocks are mined in your PoA network**, or the deploy will fail. This should take about $4$ minutes and $30$ seconds. You can check which block height you are currently at by running:
|
||||
|
||||
```bash
|
||||
geth attach --exec web3.eth.blockNumber ../geth-data/geth.ipc
|
||||
```
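The wait can be automated with a small polling helper. `wait_for_height` below is a hypothetical sketch; it runs any command you pass it until the printed number exceeds the target:

```shell
# wait_for_height TARGET CMD...: poll CMD until it prints a number > TARGET.
wait_for_height() {
  local target="$1"; shift
  until [ "$("$@")" -gt "$target" ]; do
    sleep 10
  done
}

# Usage against the local geth node:
# wait_for_height 256 geth attach --exec web3.eth.blockNumber ../geth-data/geth.ipc
```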
|
||||
|
||||
Once that gets past $256$, you are ready to go. To deploy the contracts, run:
|
||||
|
||||
```bash
|
||||
export DISTTEST_NETWORK_URL=http://localhost:8545 # bootstrap node
|
||||
npx hardhat --network codexdisttestnetwork deploy && cd ../
|
||||
```
|
||||
|
||||
If the command completes successfully, you are ready to prepare the accounts.
|
||||
|
||||
### 2.2. Generate the Required Accounts
|
||||
|
||||
We will run $2$ Codex nodes: a **storage provider**, which will sell storage on the network, and a **client**, which will buy and use such storage; we therefore need two valid Ethereum accounts. We could create random accounts with one of the many tools available to that end but, since this is a tutorial running on a local private network, we will simply provide you with two pre-made accounts, along with their private keys, which you can copy and paste instead:
|
||||
|
||||
First make sure you're back in the `marketplace-tutorial` folder and not the `codex-contracts-eth` subfolder. Then set these variables:
|
||||
|
||||
**Storage:**
|
||||
```sh
|
||||
export ETH_STORAGE_ADDR=0x45BC5ca0fbdD9F920Edd12B90908448C30F32a37
|
||||
export ETH_STORAGE_PK=0x06c7ac11d4ee1d0ccb53811b71802fa92d40a5a174afad9f2cb44f93498322c3
|
||||
echo $ETH_STORAGE_PK > storage.pkey && chmod 0600 storage.pkey
|
||||
```
|
||||
|
||||
**Client:**
|
||||
```sh
|
||||
export ETH_CLIENT_ADDR=0x9F0C62Fe60b22301751d6cDe1175526b9280b965
|
||||
export ETH_CLIENT_PK=0x5538ec03c956cb9d0bee02a25b600b0225f1347da4071d0fd70c521fdc63c2fc
|
||||
echo $ETH_CLIENT_PK > client.pkey && chmod 0600 client.pkey
|
||||
```
|
||||
|
||||
### 2.3. Provision Accounts with Tokens
|
||||
|
||||
We now need to transfer some ETH to each of the accounts, as well as provide them with some Codex tokens for the storage node to use as collateral and for the client node to buy actual storage.
|
||||
|
||||
Although the process is not particularly complicated, we suggest you use [the script we prepared](https://github.com/gmega/local-codex-bare/blob/main/scripts/mint-tokens.js) for that. This script, essentially:
|
||||
|
||||
1. reads the Marketplace contract address and its ABI from the deployment data;
|
||||
2. transfers $1$ ETH from the signer account to a target account if the target account has no ETH balance;
|
||||
3. mints $n$ Codex tokens and adds them to the target account's balance.
|
||||
|
||||
To use the script, just download it into a local file named `mint-tokens.js`, for instance using curl:
|
||||
|
||||
```bash
|
||||
# set the contract file location
|
||||
export CONTRACT_DEPLOY_FULL="codex-contracts-eth/deployments/codexdisttestnetwork"
|
||||
export GETH_SIGNER_ADDR=$(cat geth_signer_address.txt)
|
||||
# download script
|
||||
curl https://raw.githubusercontent.com/gmega/codex-local-bare/main/scripts/mint-tokens.js -o mint-tokens.js
|
||||
```
|
||||
|
||||
```bash
|
||||
# Installs Web3-js
|
||||
npm install web3
|
||||
# Provides tokens to the storage account.
|
||||
node ./mint-tokens.js $CONTRACT_DEPLOY_FULL/TestToken.json $GETH_SIGNER_ADDR 0x45BC5ca0fbdD9F920Edd12B90908448C30F32a37 10000000000
|
||||
# Provides tokens to the client account.
|
||||
node ./mint-tokens.js $CONTRACT_DEPLOY_FULL/TestToken.json $GETH_SIGNER_ADDR 0x9F0C62Fe60b22301751d6cDe1175526b9280b965 10000000000
|
||||
```
|
||||
|
||||
If you get a message like `Usage: mint-tokens.js <token-hardhat-deploy-json> <signer-account> <receiver-account> <token-ammount>`, then make sure you have set the `CONTRACT_DEPLOY_FULL` and `GETH_SIGNER_ADDR` variables above and are passing all four arguments.
|
||||
|
||||
## 3. Run Codex
|
||||
|
||||
With accounts and geth in place, we can now start the Codex nodes.
|
||||
|
||||
### 3.1. Storage Node
|
||||
|
||||
The storage node will be the one storing data and submitting the proofs of storage to the chain. To do that, it needs access to:
|
||||
|
||||
1. the address of the Marketplace contract that has been deployed to the local geth node in [Step 2.1](#21-deploy-the-codex-marketplace-contracts);
|
||||
2. the sample ceremony files which are shipped in the Codex contracts repo.
|
||||
|
||||
Recall that you cloned the `codex-contracts-eth` repository in Step 2.1. All of the required files are in there.
|
||||
|
||||
**Address of the Marketplace Contract.** The contract address can be found inside of the file `codex-contracts-eth/deployments/codexdisttestnetwork/Marketplace.json`:
|
||||
|
||||
```bash
|
||||
grep '"address":' ${CONTRACT_DEPLOY_FULL}/Marketplace.json
|
||||
```
|
||||
|
||||
which should print something like:
|
||||
```sh
|
||||
"address": "0x8891732D890f5A7B7181fBc70F7482DE28a7B60f",
|
||||
```
|
||||
|
||||
Then run the following, substituting the correct marketplace address:
|
||||
```sh
|
||||
export MARKETPLACE_ADDRESS="0x0000000000000000000000000000000000000000"
|
||||
echo ${MARKETPLACE_ADDRESS} > marketplace_address.txt
|
||||
```
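Alternatively, you can pull the address out of the deployment JSON instead of copying it by hand. `contract_address` below is a hypothetical helper sketch, assuming `python3` is available:

```shell
# Print the "address" field of a hardhat deployment JSON file.
contract_address() {
  python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))["address"])' "$1"
}

# Usage:
# export MARKETPLACE_ADDRESS=$(contract_address "codex-contracts-eth/deployments/codexdisttestnetwork/Marketplace.json")
```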
|
||||
|
||||
**Prover ceremony files.** The ceremony files are under the `codex-contracts-eth/verifier/networks/codexdisttestnetwork` subdirectory. There are three of them: `proof_main.r1cs`, `proof_main.zkey`, and `proof_main.wasm`. We will need all of them to start the Codex storage node.
|
||||
|
||||
**Starting the storage node.** Let:
|
||||
|
||||
* `PROVER_ASSETS` contain the directory where the prover ceremony files are located. **This must be an absolute path**;
|
||||
* `CODEX_BINARY` contain the location of your Codex binary;
|
||||
* `MARKETPLACE_ADDRESS` contain the address of the Marketplace contract (obtained above).
|
||||
|
||||
Set these paths into environment variables (modify them with the correct paths if you changed any of them above):
|
||||
|
||||
```sh
|
||||
export CONTRACT_DEPLOY_FULL=$(realpath "codex-contracts-eth/deployments/codexdisttestnetwork")
|
||||
export PROVER_ASSETS=$(realpath "codex-contracts-eth/verifier/networks/codexdisttestnetwork/")
|
||||
export CODEX_BINARY=$(realpath "../build/codex")
|
||||
export MARKETPLACE_ADDRESS=$(cat marketplace_address.txt)
|
||||
```
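Before launching, it is worth sanity-checking that the ceremony files are actually where the flags below expect them. `check_prover_assets` is a hypothetical helper sketch:

```shell
# Report any of the three ceremony files missing from the given directory.
check_prover_assets() {
  local dir="$1" missing=0
  for f in proof_main.r1cs proof_main.wasm proof_main.zkey; do
    [ -f "${dir}/${f}" ] || { echo "missing: ${f}"; missing=1; }
  done
  return $missing
}

# Usage:
# check_prover_assets "${PROVER_ASSETS}"
```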
|
||||
|
||||
To launch the storage node, run:
|
||||
|
||||
```bash
|
||||
${CODEX_BINARY}\
|
||||
--data-dir=./codex-storage\
|
||||
--listen-addrs=/ip4/0.0.0.0/tcp/8080\
|
||||
--api-port=8000\
|
||||
--disc-port=8090\
|
||||
persistence\
|
||||
--eth-provider=http://localhost:8545\
|
||||
--eth-private-key=./storage.pkey\
|
||||
--marketplace-address=${MARKETPLACE_ADDRESS}\
|
||||
--validator\
|
||||
--validator-max-slots=1000\
|
||||
prover\
|
||||
--circom-r1cs=${PROVER_ASSETS}/proof_main.r1cs\
|
||||
--circom-wasm=${PROVER_ASSETS}/proof_main.wasm\
|
||||
--circom-zkey=${PROVER_ASSETS}/proof_main.zkey
|
||||
```
|
||||
|
||||
**Starting the client node.**
|
||||
|
||||
The client node is started similarly except that:
|
||||
|
||||
* we need to pass the SPR of the storage node so it can form a network with it;
|
||||
* since it does not run any proofs, it does not require any ceremony files.
|
||||
|
||||
We get the Signed Peer Record (SPR) of the storage node so we can bootstrap the client node with it. To get the SPR, issue the following call:
|
||||
|
||||
```bash
|
||||
curl -H 'Accept: text/plain' 'http://localhost:8000/api/codex/v1/spr'
|
||||
```
|
||||
|
||||
You should get the SPR back starting with `spr:`. Next set these paths into environment variables:
|
||||
|
||||
```bash
|
||||
# set the SPR for the storage node
|
||||
export STORAGE_NODE_SPR=$(curl -H 'Accept: text/plain' 'http://localhost:8000/api/codex/v1/spr')
|
||||
# basic vars
|
||||
export CONTRACT_DEPLOY_FULL=$(realpath "codex-contracts-eth/deployments/codexdisttestnetwork")
|
||||
export PROVER_ASSETS=$(realpath "codex-contracts-eth/verifier/networks/codexdisttestnetwork/")
|
||||
export CODEX_BINARY=$(realpath "../build/codex")
|
||||
export MARKETPLACE_ADDRESS=$(cat marketplace_address.txt)
|
||||
```
|
||||
|
||||
```bash
|
||||
${CODEX_BINARY}\
|
||||
--data-dir=./codex-client\
|
||||
--listen-addrs=/ip4/0.0.0.0/tcp/8081\
|
||||
--api-port=8001\
|
||||
--disc-port=8091\
|
||||
--bootstrap-node=${STORAGE_NODE_SPR}\
|
||||
persistence\
|
||||
--eth-provider=http://localhost:8545\
|
||||
--eth-private-key=./client.pkey\
|
||||
--marketplace-address=${MARKETPLACE_ADDRESS}
|
||||
```
|
||||
|
||||
## 4. Buy and Sell Storage on the Marketplace
|
||||
|
||||
Any storage negotiation has two sides: a buyer and a seller. Before we can request storage, therefore, we must first put some up for sale.
|
||||
|
||||
### 4.1 Sell Storage
|
||||
|
||||
The following request will cause the storage node to put out $50\text{MB}$ of storage for sale for $1$ hour, at a price of $1$ Codex token per byte per second, while expressing that it's willing to take at most a $1000$ Codex token penalty for not fulfilling its part of the contract.[^1]
|
||||
|
||||
```bash
|
||||
curl 'http://localhost:8000/api/codex/v1/sales/availability' \
|
||||
--header 'Content-Type: application/json' \
|
||||
--data '{
|
||||
"totalSize": "50000000",
|
||||
"duration": "3600",
|
||||
"minPrice": "1",
|
||||
"maxCollateral": "1000"
|
||||
}'
|
||||
```
|
||||
|
||||
This should return a response containing an id string (e.g. `"id": "0x552ef12a2ee64ca22b237335c7e1df884df36d22bfd6506b356936bc718565d4"`) which identifies this storage offer. To check the current storage offers for this node, you can issue:
|
||||
|
||||
```bash
|
||||
curl 'http://localhost:8000/api/codex/v1/sales/availability'
|
||||
```
|
||||
|
||||
This should print a list of offers, with the one you just created figuring among them.
|
||||
|
||||
### 4.2. Buy Storage
|
||||
|
||||
Before we can buy storage, we must have some actual data to request storage for. Start by uploading a small file to your client node. On Linux you could, for instance, use `dd` to generate a $100KB$ file:
|
||||
|
||||
```bash
|
||||
dd if=/dev/urandom of=./data.bin bs=100K count=1
|
||||
```
|
||||
|
||||
but any small file will do. Assuming your file is named `data.bin`, you can upload it with:
|
||||
|
||||
```bash
|
||||
curl "http://localhost:8001/api/codex/v1/data" --data-bin @data.bin
|
||||
```
|
||||
|
||||
Once the upload completes, you should see a CID (e.g. `zDvZRwzm2mK7tvDzKScRLapqGdgNTLyyEBvx1TQY37J2CdWdS6Sj`) for the file printed to the terminal. Use that CID in the purchase request:
|
||||
|
||||
```bash
|
||||
export CID=zDvZRwzm2mK7tvDzKScRLapqGdgNTLyyEBvx1TQY37J2CdWdS6Sj
|
||||
export EXPIRY_TIME=$((1000 + $(date +%s))) # current time + 1000 seconds
|
||||
# adjust expiry_time as desired, see below
|
||||
```
|
||||
|
||||
```bash
|
||||
curl "http://localhost:8001/api/codex/v1/storage/request/${CID}" \
|
||||
--header 'Content-Type: application/json' \
|
||||
--data "{
|
||||
\"duration\": \"1200\",
|
||||
\"reward\": \"1\",
|
||||
\"proofProbability\": \"3\",
|
||||
\"expiry\": \"${EXPIRY_TIME}\",
|
||||
\"nodes\": 3,
|
||||
\"tolerance\": 1,
|
||||
\"collateral\": \"1000\"
|
||||
}"
|
||||
```
|
||||
|
||||
The parameters under `--data` say that:
|
||||
|
||||
1. we want to purchase storage for our file for $20$ minutes (`"duration": "1200"`);
|
||||
2. we are willing to pay up to $1$ token per byte, per second (`"reward": "1"`);
|
||||
3. our file will be split into four pieces (`"nodes": 3` and `"tolerance": 1`), so that we only need three pieces to rebuild it; i.e., we can tolerate at most one node ceasing to store our data, whether due to failure or other reasons;
|
||||
4. we demand `1000` tokens in collateral from storage providers for each piece. Since there are $4$ such pieces, there will be `4000` tokens in total collateral committed by all of the storage providers taken together once our request is fulfilled.
|
||||
|
||||
Finally, the `expiry` puts a cap on the block time at which our request expires. This has to be at most `current block time + duration`, which means this request can fail if you input the wrong number, which you likely will if you do not know the current block time. Fear not, however, as you can try an arbitrary number (e.g. `1000`) and look at the failure message:
|
||||
|
||||
`Expiry needs to be in future. Now: 1711995463`
|
||||
|
||||
to compute a valid one. Just take the number in the error message and add the duration; i.e., `1711995463 + 1200 = 1711996663`, then use the resulting number (`1711996663`) as the expiry and things should work. The request should return a purchase ID (e.g. `1d0ec5261e3364f8b9d1cf70324d70af21a9b5dccba380b24eb68b4762249185`), which you can use to track the completion of your request in the marketplace.
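The fix-up above can be scripted by parsing the timestamp straight out of the error message and adding the duration (a sketch):

```shell
# Error message returned by the node (sample from above).
MSG="Expiry needs to be in future. Now: 1711995463"

# Grab the trailing timestamp and add the request duration.
NOW=$(echo "$MSG" | grep -o '[0-9]\{1,\}$')
EXPIRY_TIME=$((NOW + 1200))
echo "$EXPIRY_TIME"  # prints 1711996663
```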
|
||||
|
||||
### 4.3. Track Your Storage Requests
|
||||
|
||||
POSTing a storage request will make it available in the storage market, and a storage node will eventually pick it up.
|
||||
|
||||
You can poll the status of your request by means of:
|
||||
```bash
|
||||
export STORAGE_PURCHASE_ID="1d0ec5261e3364f8b9d1cf70324d70af21a9b5dccba380b24eb68b4762249185"
|
||||
curl "http://localhost:8001/api/codex/v1/storage/purchases/${STORAGE_PURCHASE_ID}"
|
||||
```
|
||||
|
||||
For instance:
|
||||
|
||||
```bash
|
||||
> curl 'http://localhost:8001/api/codex/v1/storage/purchases/6c698cd0ad71c41982f83097d6fa75beb582924e08a658357a1cd4d7a2a6766d'
|
||||
```
|
||||
|
||||
This returns a result like:
|
||||
|
||||
```json
|
||||
{
|
||||
"requestId": "0x6c698cd0ad71c41982f83097d6fa75beb582924e08a658357a1cd4d7a2a6766d",
|
||||
"request": {
|
||||
"client": "0xed6c3c20358f0217919a30c98d72e29ceffedc33",
|
||||
"ask": {
|
||||
"slots": 3,
|
||||
"slotSize": "262144",
|
||||
"duration": "1000",
|
||||
"proofProbability": "3",
|
||||
"reward": "1",
|
||||
"collateral": "1",
|
||||
"maxSlotLoss": 1
|
||||
},
|
||||
"content": {
|
||||
"cid": "zDvZRwzm3nnkekFLCACmWyKdkYixsX3j9gJhkvFtfYA5K9bpXQnC"
|
||||
},
|
||||
"expiry": "1711992852",
|
||||
"nonce": "0x9f5e651ecd3bf73c914f8ed0b1088869c64095c0d7bd50a38fc92ebf66ff5915",
|
||||
"id": "0x6c698cd0ad71c41982f83097d6fa75beb582924e08a658357a1cd4d7a2a6766d"
|
||||
},
|
||||
"state": "submitted",
|
||||
"error": null
|
||||
}
|
||||
```
|
||||
|
||||
This shows that the request has been submitted but not yet filled. Your request will be successful once `"state"` shows `"started"`. Anything other than that means the request has not been completely processed yet, and an `"error"` other than `null` means it failed.
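The polling can be scripted as well. `get_state` and `wait_for_started` below are hypothetical helper sketches using plain `sed`, so no `jq` is assumed:

```shell
# Extract the "state" field from a purchase-status JSON response on stdin.
get_state() { sed -n 's/.*"state": *"\([^"]*\)".*/\1/p'; }

# Poll the client node until the purchase reaches the "started" state.
wait_for_started() {
  local id="$1"
  until curl -s "http://localhost:8001/api/codex/v1/storage/purchases/${id}" \
      | get_state | grep -qx 'started'; do
    sleep 5
  done
}

# Usage:
# wait_for_started "${STORAGE_PURCHASE_ID}"
```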
|
||||
|
||||
[^1]: Codex files get partitioned into pieces called "slots" and distributed to various storage providers. The collateral refers to one such slot, and will be slowly eaten away as the storage provider fails to deliver timely proofs, but the actual logic is [more involved than that](https://github.com/codex-storage/codex-contracts-eth/blob/6c9f797f408608958714024b9055fcc330e3842f/contracts/Marketplace.sol#L209).
|
|
@ -1,176 +0,0 @@
|
|||
# Codex Two-Client Test
|
||||
|
||||
The two-client test is a manual test you can perform to check your setup and familiarize yourself with the Codex API. These steps will guide you through running and connecting two nodes, in order to upload a file to one and then download that file from the other. This test also includes running a local blockchain node in order to have the Marketplace functionality available. However, running a local blockchain node is not strictly necessary, and you can skip the steps marked as optional if you choose not to start one.
|
||||
|
||||
## Prerequisite
|
||||
|
||||
Make sure you have built the client, and can run it as explained in the [README](../README.md).
|
||||
|
||||
## Steps
|
||||
|
||||
### 0. Setup blockchain node (optional)
|
||||
|
||||
You need Node.js and npm installed in order to spin up a local blockchain node.
|
||||
|
||||
Go to directory `vendor/codex-contracts-eth` and run these two commands:
|
||||
```
|
||||
npm ci
|
||||
npm start
|
||||
```
|
||||
|
||||
This will launch a local Ganache blockchain.
|
||||
|
||||
### 1. Launch Node #1
|
||||
|
||||
Open a terminal and run:
|
||||
- Mac/Linux: `"build/codex" --data-dir="$(pwd)/Data1" --listen-addrs="/ip4/127.0.0.1/tcp/8070" --api-port=8080 --disc-port=8090`
|
||||
- Windows: `"build/codex.exe" --data-dir="Data1" --listen-addrs="/ip4/127.0.0.1/tcp/8070" --api-port=8080 --disc-port=8090`
|
||||
|
||||
Optionally, if you want to use the Marketplace blockchain functionality, you need to also include these flags: `--persistence --eth-account=<account>`, where `<account>` can be one of the following:
|
||||
|
||||
- `0x70997970C51812dc3A010C7d01b50e0d17dc79C8`
|
||||
- `0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC`
|
||||
- `0x90F79bf6EB2c4f870365E785982E1f101E93b906`
|
||||
- `0x15d34AAf54267DB7D7c367839AAf71A00a2C6A65`
|
||||
|
||||
**For each node use a different account!**
|
||||
|
||||
| Argument | Description |
|
||||
|----------------|-----------------------------------------------------------------------|
|
||||
| `data-dir` | We specify a relative path where the node will store its data. |
|
||||
| `listen-addrs` | Multiaddress where the node will accept connections from other nodes. |
|
||||
| `api-port` | Port on localhost where the node will expose its API. |
|
||||
| `disc-port` | Port the node will use for its discovery service. |
|
||||
| `persistence` | Enables Marketplace functionality. Requires a blockchain connection. |
|
||||
| `eth-account` | Defines which blockchain account the node should use. |
|
||||
|
||||
Codex uses sane defaults for most of its arguments. Here we specify some explicitly for the purpose of this walk-through.
|
||||
|
||||
### 2. Sign of life
|
||||
|
||||
Run the following command:
|
||||
|
||||
```bash
|
||||
curl -X GET http://127.0.0.1:8080/api/codex/v1/debug/info
|
||||
```
|
||||
|
||||
This GET request will return the node's debug information. The response will be in JSON and should look like:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "16Uiu2HAmJ3TSfPnrJNedHy2DMsjTqwBiVAQQqPo579DuMgGxmG99",
|
||||
"addrs": [
|
||||
"/ip4/127.0.0.1/tcp/8070"
|
||||
],
|
||||
"repo": "/Users/user/projects/nim-codex/Data1",
|
||||
"spr": "spr:CiUIAhIhA1AL2J7EWfg7x77iOrR9YYBisY6CDtU2nEhuwDaQyjpkEgIDARo8CicAJQgCEiEDUAvYnsRZ-DvHvuI6tH1hgGKxjoIO1TacSG7ANpDKOmQQ2MWasAYaCwoJBH8AAAGRAh-aKkYwRAIgB2ooPfAyzWEJDe8hD2OXKOBnyTOPakc4GzqKqjM2OGoCICraQLPWf0oSEuvmSroFebVQx-3SDtMqDoIyWhjq1XFF",
|
||||
"announceAddresses": [
|
||||
"/ip4/127.0.0.1/tcp/8070"
|
||||
],
|
||||
"table": {
|
||||
"localNode": {
|
||||
"nodeId": "f6e6d48fa7cd171688249a57de0c1aba15e88308c07538c91e1310c9f48c860a",
|
||||
"peerId": "16Uiu2HAmJ3TSfPnrJNedHy2DMsjTqwBiVAQQqPo579DuMgGxmG99",
|
||||
"record": "...",
|
||||
"address": "0.0.0.0:8090",
|
||||
"seen": false
|
||||
},
|
||||
"nodes": []
|
||||
},
|
||||
"codex": {
|
||||
"version": "untagged build",
|
||||
"revision": "b3e626a5"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
| Field | Description |
|
||||
| ------- | ---------------------------------------------------------------------------------------- |
|
||||
| `id` | Id of the node. Also referred to as 'peerId'. |
|
||||
| `addrs` | Multiaddresses currently open to accept connections from other nodes. |
|
||||
| `repo` | Path of this node's data folder. |
|
||||
| `spr` | Signed Peer Record, encoded information about this node and its location in the network. |
|
||||
| `announceAddresses` | Multiaddresses used for announcing this node. |
|
||||
| `table` | Table of nodes present in the node's DHT. |
|
||||
| `codex` | Codex version information. |
|
||||
|
||||
### 3. Launch Node #2
|
||||
|
||||
We will need the signed peer record (SPR) from the first node that you got in the previous step.
|
||||
|
||||
Replace `<SPR HERE>` in the following command with the SPR returned from the previous command. (Note that it should include the `spr:` at the beginning.)
|
||||
|
||||
Open a new terminal and run:
|
||||
- Mac/Linux: `"build/codex" --data-dir="$(pwd)/Data2" --listen-addrs=/ip4/127.0.0.1/tcp/8071 --api-port=8081 --disc-port=8091 --bootstrap-node=<SPR HERE>`
|
||||
- Windows: `"build/codex.exe" --data-dir="Data2" --listen-addrs=/ip4/127.0.0.1/tcp/8071 --api-port=8081 --disc-port=8091 --bootstrap-node=<SPR HERE>`
|
||||
|
||||
Alternatively, on Mac, Linux, or MSYS2 with a recent Codex binary, you can do it in one command:
|
||||
|
||||
```sh
|
||||
"build/codex" --data-dir="$(pwd)/Data2" --listen-addrs=/ip4/127.0.0.1/tcp/8071 --api-port=8081 --disc-port=8091 --bootstrap-node=$(curl -H "Accept: text/plain" http://127.0.0.1:8080/api/codex/v1/spr)
|
||||
```
|
||||
|
||||
Notice we're using a new data-dir, and we've increased each port number by one. This is needed so that the new node won't try to open ports already in use by the first node.
|
||||
|
||||
We're now also including the `bootstrap-node` argument. This allows us to link the new node to another one, bootstrapping our own little peer-to-peer network. (SPR strings always start with "spr:".)
|
||||
|
||||
### 4. Connect The Two
|
||||
|
||||
Normally the two nodes will connect automatically. If they do not, or if you want to connect nodes manually, you can use the peerId.
|
||||
|
||||
You can get the second node's peer id by running the following command and finding the `"peerId"` in the results:
|
||||
|
||||
```bash
|
||||
curl -X GET -H "Accept: text/plain" http://127.0.0.1:8081/api/codex/v1/debug/info
|
||||
```
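To avoid copying the peerId by hand, you can extract it from that JSON with plain `sed`. `get_peer_id` is a hypothetical helper sketch:

```shell
# Print the first "peerId" value found in the debug-info JSON on stdin.
get_peer_id() { sed -n 's/.*"peerId": *"\([^"]*\)".*/\1/p' | head -n 1; }

# Usage:
# curl -s -X GET http://127.0.0.1:8081/api/codex/v1/debug/info | get_peer_id
```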
|
||||
|
||||
Next replace `<PEER ID HERE>` in the following command with the peerId returned from the previous command:
|
||||
|
||||
```bash
|
||||
curl -X GET http://127.0.0.1:8080/api/codex/v1/connect/<PEER ID HERE>?addrs=/ip4/127.0.0.1/tcp/8071
|
||||
```
|
||||
|
||||
Alternatively, on Mac, Linux, or MSYS2 with a recent Codex binary, you can do it in one command:
|
||||
|
||||
```bash
|
||||
curl -X GET http://127.0.0.1:8080/api/codex/v1/connect/$(curl -X GET -H "Accept: text/plain" http://127.0.0.1:8081/api/codex/v1/peerid)\?addrs=/ip4/127.0.0.1/tcp/8071
|
||||
```
|
||||
|
||||
Notice that we are sending the peerId and the multiaddress of node 2 to the `/connect` endpoint of node 1. This provides node 1 all the information it needs to communicate with node 2. The response to this request should be `Successfully connected to peer`.
|
||||
|
||||
### 5. Upload The File
|
||||
|
||||
We're now ready to upload a file to the network. In this example we'll use node 1 for uploading and node 2 for downloading. But the reverse also works.
|
||||
|
||||
Next replace `<FILE PATH>` with the path to the file you want to upload in the following command:
|
||||
|
||||
```bash
|
||||
curl -H "Content-Type: application/octet-stream" -H "Expect: 100-continue" -T "<FILE PATH>" 127.0.0.1:8080/api/codex/v1/data -X POST
|
||||
```
|
||||
|
||||
(Hint: if curl is reluctant to show you the response, add `-o <FILENAME>` to write the result to a file.)
|
||||
|
||||
Depending on the file size this may take a moment. Codex is processing the file by cutting it into blocks and generating erasure-recovery data. When the process is finished, the request will return the content-identifier (CID) of the uploaded file. It should look something like `zdj7WVxH8HHHenKtid8Vkgv5Z5eSUbCxxr8xguTUBMCBD8F2S`.
|
||||
|
||||
### 6. Download The File
|
||||
|
||||
Replace `<CID>` with the identifier returned in the previous step. Replace `<OUTPUT FILE>` with the filename where you want to store the downloaded file.
|
||||
|
||||
```bash
|
||||
curl 127.0.0.1:8081/api/codex/v1/data/<CID>/network --output <OUTPUT FILE>
|
||||
```
|
||||
|
||||
Notice we are connecting to the second node in order to download the file. The CID we provide contains the information needed to locate the file within the network.
|
||||
|
||||
### 7. Verify The Results
|
||||
|
||||
If your file is downloaded and identical to the file you uploaded, then this manual test has passed. Rejoice! If on the other hand that didn't happen or you were unable to complete any of these steps, please leave us a message detailing your troubles.
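A byte-for-byte comparison settles it. `verify_roundtrip` below is a hypothetical helper sketch; pass it the original file and the downloaded one:

```shell
# Compare two files byte for byte and report the result.
verify_roundtrip() {
  if cmp -s "$1" "$2"; then
    echo "files are identical"
  else
    echo "files differ"
  fi
}

# Usage:
# verify_roundtrip data.bin out.bin
```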
|
||||
|
||||
## Notes
|
||||
|
||||
When using the Ganache blockchain, there are some deviations from the expected behavior, mainly linked to how blocks are mined, which affects certain functionalities in the Sales module.
|
||||
Therefore, if you are manually testing processes such as payout collection after a request is finished or proof submissions, you need to mine some blocks manually for it to work correctly. You can do this by using the following curl command:
|
||||
|
||||
```bash
|
||||
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"evm_mine","params":[],"id":67}' 127.0.0.1:8545
|
||||
```
|
openapi.yaml
|
@ -83,33 +83,12 @@ components:
|
|||
id:
|
||||
$ref: "#/components/schemas/PeerId"
|
||||
|
||||
ErasureParameters:
|
||||
type: object
|
||||
properties:
|
||||
totalChunks:
|
||||
type: integer
|
||||
|
||||
PoRParameters:
|
||||
description: Parameters for Proof of Retrievability
|
||||
type: object
|
||||
properties:
|
||||
u:
|
||||
type: string
|
||||
publicKey:
|
||||
type: string
|
||||
name:
|
||||
type: string
|
||||
|
||||
Content:
|
||||
type: object
|
||||
description: Parameters specifying the content
|
||||
properties:
|
||||
cid:
|
||||
$ref: "#/components/schemas/Cid"
|
||||
erasure:
|
||||
$ref: "#/components/schemas/ErasureParameters"
|
||||
por:
|
||||
$ref: "#/components/schemas/PoRParameters"
|
||||
|
||||
DebugInfo:
|
||||
type: object
|
||||
|
|
|
```diff
@@ -38,6 +38,8 @@ type
     signer: Address
     subscriptions: Subscriptions
     config*: MarketplaceConfig
+    canReserveSlot*: bool
+    reserveSlotThrowError*: ?(ref MarketError)
   Fulfillment* = object
     requestId*: RequestId
     proof*: Groth16Proof
@@ -52,6 +54,7 @@ type
     onFulfillment: seq[FulfillmentSubscription]
     onSlotFilled: seq[SlotFilledSubscription]
     onSlotFreed: seq[SlotFreedSubscription]
+    onSlotReservationsFull: seq[SlotReservationsFullSubscription]
     onRequestCancelled: seq[RequestCancelledSubscription]
     onRequestFailed: seq[RequestFailedSubscription]
     onProofSubmitted: seq[ProofSubmittedSubscription]
@@ -70,6 +73,9 @@ type
   SlotFreedSubscription* = ref object of Subscription
     market: MockMarket
     callback: OnSlotFreed
+  SlotReservationsFullSubscription* = ref object of Subscription
+    market: MockMarket
+    callback: OnSlotReservationsFull
   RequestCancelledSubscription* = ref object of Subscription
     market: MockMarket
     requestId: ?RequestId
@@ -105,7 +111,7 @@ proc new*(_: type MockMarket): MockMarket =
       downtimeProduct: 67.uint8
     )
   )
-  MockMarket(signer: Address.example, config: config)
+  MockMarket(signer: Address.example, config: config, canReserveSlot: true)
 
 method getSigner*(market: MockMarket): Future[Address] {.async.} =
   return market.signer
@@ -200,6 +206,15 @@ proc emitSlotFreed*(market: MockMarket,
   for subscription in subscriptions:
     subscription.callback(requestId, slotIndex)
 
+proc emitSlotReservationsFull*(
+  market: MockMarket,
+  requestId: RequestId,
+  slotIndex: UInt256) =
+
+  var subscriptions = market.subscriptions.onSlotReservationsFull
+  for subscription in subscriptions:
+    subscription.callback(requestId, slotIndex)
+
 proc emitRequestCancelled*(market: MockMarket, requestId: RequestId) =
   var subscriptions = market.subscriptions.onRequestCancelled
   for subscription in subscriptions:
@@ -303,6 +318,29 @@ method canProofBeMarkedAsMissing*(market: MockMarket,
                                   period: Period): Future[bool] {.async.} =
   return market.canBeMarkedAsMissing.contains(id)
 
+method reserveSlot*(
+  market: MockMarket,
+  requestId: RequestId,
+  slotIndex: UInt256) {.async.} =
+
+  if error =? market.reserveSlotThrowError:
+    raise error
+
+method canReserveSlot*(
+  market: MockMarket,
+  requestId: RequestId,
+  slotIndex: UInt256): Future[bool] {.async.} =
+
+  return market.canReserveSlot
+
+func setCanReserveSlot*(market: MockMarket, canReserveSlot: bool) =
+  market.canReserveSlot = canReserveSlot
+
+func setReserveSlotThrowError*(
+  market: MockMarket, error: ?(ref MarketError)) =
+
+  market.reserveSlotThrowError = error
+
 method subscribeRequests*(market: MockMarket,
                           callback: OnRequest):
                          Future[Subscription] {.async.} =
@@ -364,6 +402,15 @@ method subscribeSlotFreed*(market: MockMarket,
   market.subscriptions.onSlotFreed.add(subscription)
   return subscription
 
+method subscribeSlotReservationsFull*(
+  market: MockMarket,
+  callback: OnSlotReservationsFull): Future[Subscription] {.async.} =
+
+  let subscription =
+    SlotReservationsFullSubscription(market: market, callback: callback)
+  market.subscriptions.onSlotReservationsFull.add(subscription)
+  return subscription
+
 method subscribeRequestCancelled*(market: MockMarket,
                                   callback: OnRequestCancelled):
                                  Future[Subscription] {.async.} =
@@ -456,3 +503,6 @@ method unsubscribe*(subscription: RequestFailedSubscription) {.async.} =
 
 method unsubscribe*(subscription: ProofSubmittedSubscription) {.async.} =
   subscription.market.subscriptions.onProofSubmitted.keepItIf(it != subscription)
+
+method unsubscribe*(subscription: SlotReservationsFullSubscription) {.async.} =
+  subscription.market.subscriptions.onSlotReservationsFull.keepItIf(it != subscription)
```
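The MockMarket changes above follow a common error-injection pattern for test doubles: the mock holds an optional preset error and raises it instead of performing the real operation. A minimal Python analogue (class and method names are hypothetical, not the project's API):

```python
# Error-injection mock: behaves normally until a test arms it with an error.
class MockMarket:
    def __init__(self):
        self.can_reserve_slot = True     # default: reservations allowed
        self.reserve_slot_error = None   # optional error to raise

    def set_can_reserve_slot(self, allowed):
        self.can_reserve_slot = allowed

    def set_reserve_slot_throw_error(self, error):
        self.reserve_slot_error = error

    def reserve_slot(self, request_id, slot_index):
        # Raise the armed error, if any, instead of reserving.
        if self.reserve_slot_error is not None:
            raise self.reserve_slot_error

market = MockMarket()
market.reserve_slot("request-1", 0)  # no error armed: succeeds silently
market.set_reserve_slot_throw_error(RuntimeError("boom"))
try:
    market.reserve_slot("request-1", 0)
except RuntimeError as exc:
    assert str(exc) == "boom"
```

This lets the state-machine tests below drive both the happy path and the error path without a real marketplace contract.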
```diff
@@ -6,6 +6,7 @@ import pkg/questionable/results
 type
   MockReservations* = ref object of Reservations
     createReservationThrowBytesOutOfBoundsError: bool
+    createReservationThrowError: ?(ref CatchableError)
 
 proc new*(
   T: type MockReservations,
@@ -14,9 +15,16 @@ proc new*(
   ## Create a mock clock instance
   MockReservations(availabilityLock: newAsyncLock(), repo: repo)
 
-proc setCreateReservationThrowBytesOutOfBoundsError*(self: MockReservations, flag: bool) =
+proc setCreateReservationThrowBytesOutOfBoundsError*(
+  self: MockReservations, flag: bool) =
+
   self.createReservationThrowBytesOutOfBoundsError = flag
 
+proc setCreateReservationThrowError*(
+  self: MockReservations, error: ?(ref CatchableError)) =
+
+  self.createReservationThrowError = error
+
 method createReservation*(
   self: MockReservations,
   availabilityId: AvailabilityId,
@@ -29,5 +37,8 @@ method createReservation*(
       "trying to reserve an amount of bytes that is greater than the total size of the Availability")
     return failure(error)
 
+  elif error =? self.createReservationThrowError:
+    return failure(error)
+
   return await procCall createReservation(Reservations(self), availabilityId, slotSize, requestId, slotIndex)
```
```diff
@@ -39,7 +39,8 @@ asyncchecksuite "sales state 'ignored'":
     agent.onCleanUp = onCleanUp
     state = SaleIgnored.new()
 
-  test "calls onCleanUp with returnBytes = false and reprocessSlot = true":
+  test "calls onCleanUp with values assigned to SaleIgnored":
+    state = SaleIgnored(reprocessSlot: true, returnBytes: true)
     discard await state.run(agent)
-    check eventually returnBytesWas == false
+    check eventually returnBytesWas == true
     check eventually reprocessSlotWas == true
```
```diff
@@ -4,7 +4,7 @@ import pkg/datastore
 import pkg/stew/byteutils
 import pkg/codex/contracts/requests
 import pkg/codex/sales/states/preparing
-import pkg/codex/sales/states/downloading
+import pkg/codex/sales/states/slotreserving
 import pkg/codex/sales/states/cancelled
 import pkg/codex/sales/states/failed
 import pkg/codex/sales/states/filled
@@ -84,17 +84,33 @@ asyncchecksuite "sales state 'preparing'":
     availability = a.get
 
   test "run switches to ignored when no availability":
-    let next = await state.run(agent)
-    check !next of SaleIgnored
+    let next = !(await state.run(agent))
+    check next of SaleIgnored
+    let ignored = SaleIgnored(next)
+    check ignored.reprocessSlot
+    check ignored.returnBytes == false
 
-  test "run switches to downloading when reserved":
+  test "run switches to slot reserving state after reservation created":
     await createAvailability()
     let next = await state.run(agent)
-    check !next of SaleDownloading
+    check !next of SaleSlotReserving
 
   test "run switches to ignored when reserve fails with BytesOutOfBounds":
     await createAvailability()
     reservations.setCreateReservationThrowBytesOutOfBoundsError(true)
 
-    let next = await state.run(agent)
-    check !next of SaleIgnored
+    let next = !(await state.run(agent))
+    check next of SaleIgnored
+    let ignored = SaleIgnored(next)
+    check ignored.reprocessSlot
+    check ignored.returnBytes == false
+
+  test "run switches to errored when reserve fails with other error":
+    await createAvailability()
+    let error = newException(CatchableError, "some error")
+    reservations.setCreateReservationThrowError(some error)
+
+    let next = !(await state.run(agent))
+    check next of SaleErrored
+    let errored = SaleErrored(next)
+    check errored.error == error
```
```diff
@@ -0,0 +1,73 @@
+import pkg/chronos
+import pkg/questionable
+import pkg/codex/contracts/requests
+import pkg/codex/sales/states/slotreserving
+import pkg/codex/sales/states/downloading
+import pkg/codex/sales/states/cancelled
+import pkg/codex/sales/states/failed
+import pkg/codex/sales/states/filled
+import pkg/codex/sales/states/ignored
+import pkg/codex/sales/states/errored
+import pkg/codex/sales/salesagent
+import pkg/codex/sales/salescontext
+import pkg/codex/sales/reservations
+import pkg/codex/stores/repostore
+import ../../../asynctest
+import ../../helpers
+import ../../examples
+import ../../helpers/mockmarket
+import ../../helpers/mockreservations
+import ../../helpers/mockclock
+
+asyncchecksuite "sales state 'SlotReserving'":
+  let request = StorageRequest.example
+  let slotIndex = (request.ask.slots div 2).u256
+  var market: MockMarket
+  var clock: MockClock
+  var agent: SalesAgent
+  var state: SaleSlotReserving
+  var context: SalesContext
+
+  setup:
+    market = MockMarket.new()
+    clock = MockClock.new()
+
+    state = SaleSlotReserving.new()
+    context = SalesContext(
+      market: market,
+      clock: clock
+    )
+
+    agent = newSalesAgent(context,
+                          request.id,
+                          slotIndex,
+                          request.some)
+
+  test "switches to cancelled state when request expires":
+    let next = state.onCancelled(request)
+    check !next of SaleCancelled
+
+  test "switches to failed state when request fails":
+    let next = state.onFailed(request)
+    check !next of SaleFailed
+
+  test "switches to filled state when slot is filled":
+    let next = state.onSlotFilled(request.id, slotIndex)
+    check !next of SaleFilled
+
+  test "run switches to downloading when slot successfully reserved":
+    let next = await state.run(agent)
+    check !next of SaleDownloading
+
+  test "run switches to ignored when slot reservation not allowed":
+    market.setCanReserveSlot(false)
+    let next = await state.run(agent)
+    check !next of SaleIgnored
+
+  test "run switches to errored when slot reservation errors":
+    let error = newException(MarketError, "some error")
+    market.setReserveSlotThrowError(some error)
+    let next = !(await state.run(agent))
+    check next of SaleErrored
+    let errored = SaleErrored(next)
+    check errored.error == error
```
```diff
@@ -270,6 +270,12 @@ asyncchecksuite "Sales":
     let expected = SlotQueueItem.init(request1, 1'u16)
     check always (not itemsProcessed.contains(expected))
 
+  test "removes slot index from slot queue once SlotReservationsFull emitted":
+    let request1 = await addRequestToSaturatedQueue()
+    market.emitSlotReservationsFull(request1.id, 1.u256)
+    let expected = SlotQueueItem.init(request1, 1'u16)
+    check always (not itemsProcessed.contains(expected))
+
   test "adds slot index to slot queue once SlotFreed emitted":
     queue.onProcessSlot = proc(item: SlotQueueItem, done: Future[void]) {.async.} =
       itemsProcessed.add item
```
```diff
@@ -10,5 +10,6 @@ import ./states/testcancelled
 import ./states/testerrored
 import ./states/testignored
 import ./states/testpreparing
+import ./states/testslotreserving
 
 {.warning[UnusedImport]: off.}
```
```diff
@@ -1,4 +1,6 @@
 import pkg/chronos
+import std/strformat
+import std/random
 
 import codex/validation
 import codex/periods
@@ -12,7 +14,8 @@ import ./helpers
 asyncchecksuite "validation":
   let period = 10
   let timeout = 5
-  let maxSlots = 100
+  let maxSlots = MaxSlots(100)
+  let validationGroups = ValidationGroups(8).some
   let slot = Slot.example
   let proof = Groth16Proof.example
   let collateral = slot.request.ask.collateral
@@ -20,11 +23,23 @@ asyncchecksuite "validation":
   var validation: Validation
   var market: MockMarket
   var clock: MockClock
+  var groupIndex: uint16
+
+  proc initValidationConfig(maxSlots: MaxSlots,
+                            validationGroups: ?ValidationGroups,
+                            groupIndex: uint16 = 0): ValidationConfig =
+    without validationConfig =? ValidationConfig.init(
+        maxSlots, groups=validationGroups, groupIndex), error:
+      raiseAssert fmt"Creating ValidationConfig failed! Error msg: {error.msg}"
+    validationConfig
 
   setup:
+    groupIndex = groupIndexForSlotId(slot.id, !validationGroups)
     market = MockMarket.new()
     clock = MockClock.new()
-    validation = Validation.new(clock, market, maxSlots)
+    let validationConfig = initValidationConfig(
+      maxSlots, validationGroups, groupIndex)
+    validation = Validation.new(clock, market, validationConfig)
     market.config.proofs.period = period.u256
     market.config.proofs.timeout = timeout.u256
     await validation.start()
@@ -41,12 +56,69 @@ asyncchecksuite "validation":
   test "the list of slots that it's monitoring is empty initially":
     check validation.slots.len == 0
 
+  for (validationGroups, groupIndex) in [(100, 100'u16), (100, 101'u16)]:
+    test "initializing ValidationConfig fails when groupIndex is " &
+        "greater than or equal to validationGroups " &
+        fmt"(testing for {groupIndex = }, {validationGroups = })":
+      let groups = ValidationGroups(validationGroups).some
+      let validationConfig = ValidationConfig.init(
+        maxSlots, groups = groups, groupIndex = groupIndex)
+      check validationConfig.isFailure == true
+      check validationConfig.error.msg == "The value of the group index " &
+        "must be less than validation groups! " &
+        fmt"(got: {groupIndex = }, groups = {!groups})"
+
+  test "initializing ValidationConfig fails when maxSlots is negative":
+    let maxSlots = -1
+    let validationConfig = ValidationConfig.init(
+      maxSlots = maxSlots, groups = ValidationGroups.none)
+    check validationConfig.isFailure == true
+    check validationConfig.error.msg == "The value of maxSlots must " &
+      fmt"be greater than or equal to 0! (got: {maxSlots})"
+
+  test "initializing ValidationConfig fails when maxSlots is negative " &
+      "(validationGroups set)":
+    let maxSlots = -1
+    let validationConfig = ValidationConfig.init(
+      maxSlots = maxSlots, groups = validationGroups, groupIndex)
+    check validationConfig.isFailure == true
+    check validationConfig.error.msg == "The value of maxSlots must " &
+      fmt"be greater than or equal to 0! (got: {maxSlots})"
+
+  test "slot is not observed if it is not in the validation group":
+    let validationConfig = initValidationConfig(maxSlots, validationGroups,
+      (groupIndex + 1) mod uint16(!validationGroups))
+    let validation = Validation.new(clock, market, validationConfig)
+    await validation.start()
+    await market.fillSlot(slot.request.id, slot.slotIndex, proof, collateral)
+    await validation.stop()
+    check validation.slots.len == 0
+
   test "when a slot is filled on chain, it is added to the list":
     await market.fillSlot(slot.request.id, slot.slotIndex, proof, collateral)
     check validation.slots == @[slot.id]
 
+  test "slot should be observed if maxSlots is set to 0":
+    let validationConfig = initValidationConfig(
+      maxSlots = 0, ValidationGroups.none)
+    let validation = Validation.new(clock, market, validationConfig)
+    await validation.start()
+    await market.fillSlot(slot.request.id, slot.slotIndex, proof, collateral)
+    await validation.stop()
+    check validation.slots == @[slot.id]
+
+  test "slot should be observed if validation group is not set (and " &
+      "maxSlots is not 0)":
+    let validationConfig = initValidationConfig(
+      maxSlots, ValidationGroups.none)
+    let validation = Validation.new(clock, market, validationConfig)
+    await validation.start()
+    await market.fillSlot(slot.request.id, slot.slotIndex, proof, collateral)
+    await validation.stop()
+    check validation.slots == @[slot.id]
+
+  for state in [SlotState.Finished, SlotState.Failed]:
-  test "when slot state changes, it is removed from the list":
+    test fmt"when slot state changes to {state}, it is removed from the list":
     await market.fillSlot(slot.request.id, slot.slotIndex, proof, collateral)
     market.slotState[slot.id] = state
     advanceToNextPeriod()
@@ -67,7 +139,13 @@ asyncchecksuite "validation":
     check market.markedAsMissingProofs.len == 0
 
   test "it does not monitor more than the maximum number of slots":
+    let validationGroups = ValidationGroups.none
+    let validationConfig = initValidationConfig(
+      maxSlots, validationGroups)
+    let validation = Validation.new(clock, market, validationConfig)
+    await validation.start()
     for _ in 0..<maxSlots + 1:
       let slot = Slot.example
       await market.fillSlot(slot.request.id, slot.slotIndex, proof, collateral)
+    await validation.stop()
     check validation.slots.len == maxSlots
```
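The validation tests above partition slots across validator groups so that each validator only observes the slots whose group index matches its own, with no grouping meaning "observe everything". A rough Python sketch of that idea, assuming a hash-modulo assignment (the actual `groupIndexForSlotId` implementation may differ; all names here are illustrative):

```python
# Sketch of validator group partitioning, assuming hash-mod assignment.
import hashlib

def group_index_for_slot_id(slot_id, groups):
    # Deterministically map a slot id to one of `groups` buckets.
    digest = hashlib.sha256(slot_id).digest()
    return int.from_bytes(digest, "big") % groups

def should_observe(slot_id, groups, group_index):
    # No grouping configured: the validator observes every slot.
    if groups is None:
        return True
    return group_index_for_slot_id(slot_id, groups) == group_index

slot = b"\x01" * 32
idx = group_index_for_slot_id(slot, 8)
assert should_observe(slot, 8, idx)                 # own group: observed
assert not should_observe(slot, 8, (idx + 1) % 8)   # other group: ignored
assert should_observe(slot, None, 0)                # no grouping: observed
```

This mirrors the test cases above: a validator configured with a different group index never adds the slot to its list, while one with no validation group set observes all slots (subject to `maxSlots`).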
```diff
@@ -200,6 +200,30 @@ ethersuite "On-Chain Market":
     check receivedIdxs == @[slotIndex]
     await subscription.unsubscribe()
 
+  test "supports slot reservations full subscriptions":
+    let account2 = ethProvider.getSigner(accounts[2])
+    let account3 = ethProvider.getSigner(accounts[3])
+
+    await market.requestStorage(request)
+
+    var receivedRequestIds: seq[RequestId] = @[]
+    var receivedIdxs: seq[UInt256] = @[]
+    proc onSlotReservationsFull(requestId: RequestId, idx: UInt256) =
+      receivedRequestIds.add(requestId)
+      receivedIdxs.add(idx)
+    let subscription =
+      await market.subscribeSlotReservationsFull(onSlotReservationsFull)
+
+    await market.reserveSlot(request.id, slotIndex)
+    switchAccount(account2)
+    await market.reserveSlot(request.id, slotIndex)
+    switchAccount(account3)
+    await market.reserveSlot(request.id, slotIndex)
+
+    check receivedRequestIds == @[request.id]
+    check receivedIdxs == @[slotIndex]
+    await subscription.unsubscribe()
+
   test "support fulfillment subscriptions":
     await market.requestStorage(request)
     var receivedIds: seq[RequestId]
```
```diff
@@ -1 +1 @@
-Subproject commit 558bf645c3dc385437a3e695bba57e7dba1375fb
+Subproject commit 807fc973c875b5bde8f517c71c818ba8f2f720dd
```