Release v0.1.4 (#912)
* fix: createReservation lock (#825)
* fix: createReservation lock
* fix: additional locking places
* fix: acquire lock
* chore: feedback
Co-authored-by: markspanbroek <mark@spanbroek.net>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
* feat: withLock template and fixed tests
* fix: use proc for MockReservations constructor
* chore: feedback
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
* chore: feedback implementation
---------
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
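For context, the `withLock` template mentioned in #825 follows a standard chronos pattern for guarding async critical sections. A minimal sketch, assuming chronos's `AsyncLock` (names are illustrative; the PR's actual template may differ in detail):

```nim
import pkg/chronos

# Minimal sketch of an async lock guard: the lock is released even if
# `body` raises, which is the property the reservation fixes rely on.
template withLock*(lock: AsyncLock, body: untyped) =
  try:
    await lock.acquire()
    body
  finally:
    if lock.locked:
      lock.release()
```

Call sites can then wrap reservation updates in `withLock someLock: ...` instead of pairing `acquire`/`release` by hand at every exit path.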
* Block deletion with ref count & repostore refactor (#631)
* Fix StoreStream so it doesn't return parity bytes (#838)
* fix storestream so it doesn't return parity bytes for protected/verifiable manifests
* use Cid.example instead of creating a mock manually
* Fix verifiable manifest initialization (#839)
* fix verifiable manifest initialization
* fix linearstrategy, use verifiableStrategy to select blocks for slots
* check for both strategies in attribute inheritance test
* ci: add verify_circuit=true to the releases (#840)
* provisional fix so EC errors do not crash the node on download (#841)
* prevent node crashing with `not val.isNil` (#843)
* bump nim-leopard to handle no parity data (#845)
* Fix verifiable manifest constructor (#844)
* Fix verifiable manifest constructor
* Add integration test for verifiable manifest download
Add integration test for testing download of verifiable dataset after creating request for storage
* add missing import
* add testecbug to integration suite
* Remove hardhat instance from integration test
* change description, drop echo
---------
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: gmega <giuliano.mega@gmail.com>
* Bump Nim to 1.6.21 (#851)
* bump Nim to 1.6.21 (range type reset fixes)
* remove incompatible versions from compiler matrix
* feat(rest): adds erasure coding constraints when requesting storage (#848)
* Rest API: add erasure coding constraints when requesting storage
* clean up
* Make error message for "dataset too small" more informative.
* fix API integration test
---------
Co-authored-by: gmega <giuliano.mega@gmail.com>
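The added constraints amount to sanity checks on the requested erasure-coding parameters before a storage request is accepted. An illustrative sketch only; the helper name, exact bounds, and error texts are assumptions, not the PR's code:

```nim
import pkg/questionable/results

# Hypothetical validation helper: a dataset must split across more nodes
# than the number of node failures it is meant to tolerate.
proc validateEcParams(nodes, tolerance, blocks: int): ?!void =
  if tolerance <= 0:
    return failure "tolerance must be positive"
  if nodes <= tolerance:
    return failure "nodes must be greater than tolerance"
  if blocks < nodes:
    return failure "dataset too small: need at least one block per node"
  success()
```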
* Prover workshop band-aid (#853)
* add prover bandaid
* Improve error message text
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
---------
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
* Bandaid for failing erasure coding (#855)
* Update Release workflow (#858)
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
* Fixes prover behavior with singleton proof trees (#859)
* add logs and test
* add Merkle proof checks
* factor out Circom input normalization, fix proof input serialization
* add test and update existing ones
* update circuit assets
* add back trace message
* switch contracts to fix branch
* update codex-contracts-eth to latest
* do not expose prove with prenormalized inputs
* Chronos v4 Update (v3 Compat Mode) (#814)
* add changes to use chronos v4 in compat mode
* switch chronos to compat fix branch
* use nimbus-build-system with configurable Nim repo
* add missing imports
* add missing await
* bump compat
* pin nim version in Makefile
* add await instead of asyncSpawn to advertisement queue loop
* bump DHT to v0.5.0
* allow error state of `onBatch` to propagate upwards in test code
* pin Nim compiler commit to avoid fetching stale branch
* make CI build against branch head instead of merge
* fix handling of return values in testslotqueue
* Downgrade to gcc 13 on Windows (#874)
* Downgrade to gcc 13 on Windows
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
* Increase build job timeout to 90 minutes
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
---------
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
* Add MIT/Apache licenses (#861)
* Add MIT/Apache licenses
* Center "Apache License"
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
* remove wrong legal entity; rename apache license file
---------
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
* Add OPTIONS endpoint to allow the content-type header for the upload endpoint (#869)
* Add OPTIONS endpoint to allow the content-type header
* Remove useless header "Access-Control-Headers" and add cache
Signed-off-by: Arnaud <arnaud@status.im>
---------
Signed-off-by: Arnaud <arnaud@status.im>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
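In CORS terms, this answers the browser's preflight `OPTIONS` request so uploads carrying a `content-type` header are permitted, and lets browsers cache the preflight result. A hedged sketch of the headers involved (values are illustrative; the actual route registration lives in the REST API setup):

```nim
# Hypothetical helper assembling the preflight response headers.
proc preflightHeaders(allowedOrigin: string): seq[(string, string)] =
  @[
    ("Access-Control-Allow-Origin", allowedOrigin),
    ("Access-Control-Allow-Methods", "POST, OPTIONS"),
    ("Access-Control-Allow-Headers", "content-type"),
    # Cache the preflight result instead of re-asking on every upload.
    ("Access-Control-Max-Age", "86400")
  ]
```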
* chore: add `downtimeProduct` config parameter (#867)
* chore: add `downtimeProduct` config parameter
* bump codex-contracts-eth to master
* Support CORS preflight requests when the storage request api returns an error (#878)
* Add CORS headers when the REST API is returning an error
* Use the allowedOrigin instead of the wildcard when setting the origin

Signed-off-by: Arnaud <arnaud@status.im>
---------
Signed-off-by: Arnaud <arnaud@status.im>
* refactor(marketplace): generic querying of historical marketplace events (#872)
* refactor(marketplace): move marketplace events to the Market abstraction
Move marketplace contract events to the Market abstraction so the types can be shared across all modules that call the Market abstraction.
* Remove unneeded conversion
* Switch to generic implementation of event querying
* change parent type to MarketplaceEvent
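The key move here is a shared parent type, so one generic entry point replaces per-event query boilerplate. A self-contained toy illustrating the shape (the real implementation queries past logs through the ethers provider; the types here are simplified):

```nim
type
  MarketplaceEvent = ref object of RootObj
  SlotFilled = ref object of MarketplaceEvent
  RequestCancelled = ref object of MarketplaceEvent

# One generic proc instead of a dedicated query proc per event type.
proc eventsOfType[T: MarketplaceEvent](events: seq[MarketplaceEvent]): seq[T] =
  for e in events:
    if e of T:          # runtime type test against the requested subtype
      result.add T(e)

# usage: let filled = eventsOfType[SlotFilled](allEvents)
```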
* Remove extra license file (#876)
* remove extra license
* center "apache license"
* Update advertising (#862)
* Setting up advertiser
* Wires up advertiser
* cleanup
* test compiles
* tests pass
* setting up test for advertiser
* Finishes advertiser tests
* fixes commonstore tests
* Review comments by Giuliano
* Race condition found by Giuliano
* Review comment by Dmitriy
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
* fixes tests
---------
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
* feat: add `--payout-address` (#870)
* feat: add `--payout-address`
Allows SPs to be paid out to a separate address, keeping their profits secure.
Supports https://github.com/codex-storage/codex-contracts-eth/pull/144 in the nim-codex client.
* Remove optional payoutAddress
Change --payout-address so that it is no longer optional. There is no longer an overload in `Marketplace.sol` for `fillSlot` accepting no `payoutAddress`.
* Update integration tests to include --payout-address
* move payoutAddress from fillSlot to freeSlot
* Update integration tests to use required payoutAddress
- to make payoutAddress required, the integration tests needed to avoid building the CLI params until just before starting the node; otherwise, when CLI params were added ad hoc, adding a non-required parameter before a required one caused an error.
* support client payout address
- withdrawFunds requires a withdrawAddress parameter, directs payouts for withdrawing of client funds (for a cancelled request) to go to that address.
* fix integration test
adds --payout-address to validators
* refactor: support withdrawFunds and freeSlot optional parameters
- withdrawFunds has an optional parameter for withdrawRecipient
- freeSlot has optional parameters for rewardRecipient and collateralRecipient
- change --payout-address to --reward-recipient to match contract signature naming
* Revert "Update integration tests to include --payout-address"
This reverts commit 8f9535cf35.
There are some valid improvements to the integration tests, but they can be handled in a separate PR.
* small fix
* bump contracts to fix marketplace spec
* bump codex-contracts-eth, now rebased on master
* bump codex-contracts-eth
now that feat/reward-address has been merged to master
* clean up, comments
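The net effect on the client-side interfaces, sketched with simplified types (signatures are illustrative; the real procs take marketplace types and optional addresses, and the contract presumably falls back to the caller's address when a recipient is unset):

```nim
import std/options

# Illustrative only: both payout recipients are optional.
proc freeSlot(slotId: string,
              rewardRecipient = none(string),
              collateralRecipient = none(string)) =
  discard

# Withdrawals of client funds (e.g. for a cancelled request) can likewise
# be directed to a separate address.
proc withdrawFunds(requestId: string,
                   withdrawRecipient = none(string)) =
  discard
```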
* Rework circuit downloader (#882)
* Introduces a start method to prover
* Moves backend creation into start method
* sets up three paths for backend initialization
* Extracts backend initialization to backend-factory
* Implements loading backend from cli files or previously downloaded local files
* Wires up downloading and unzipping
* functional implementation
* Fixes testprover.nim
* Sets up tests for backendfactory
* includes libzip-dev
* pulls in updated contracts
* removes integration cli tests for r1cs, wasm, and zkey file arguments.
* Fixes issue where inner-scope values are lost before returning
* sets local proof verification for dist-test images
* Adds two traces and bumps nim-ethers
* Adds separate path for circuit files
* Create circuit dir if not exists
* fix: make sure requestStorage is mined
* fix: correct place to plug confirm
* test: fixing contracts tests
* Restores gitmodules
* restores nim-datastore reference
* Sets up downloader exe
* sets up tool skeleton
* implements getting of circuit hash
* Implements downloader tool
* sets up test skeleton
* Implements test for cirdl
* includes testTools in testAll
* Cleanup building.md
* cleans up previous downloader implementation
* cleans up testbackendfactory
* moves start of prover into node.nim
* Fills in arguments in example command
* Initializes backend in prover constructor
* Restores tests
* Restores tests for cli instructions
* Review comments by Dmitriy, part 1
* Quotes path in download instruction.
* replaces curl with chronos http session
* Moves cirdl build output to 'build' folder.
* Fixes chronicles log output
* Add cirdl support to the codex Dockerfile
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
* Add cirdl support to the docker entrypoint
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
* Add cirdl support to the release workflow
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
* Disable verify_circuit flag for releases
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
* Removes backendFactory placeholder type
* wip
* Replaces zip library with status-im/zippy library (which supports zip and tar)
* Updates cirdl to not change circuitdir folder
* Switches from zip to tar.gz
* Review comments by Dmitriy
* updates codex-contracts-eth
* Adds testTools to CI
* Adds check for access to config.circuitdir
* Update fixture circuit zkey
* Update matrix to run tools tests on Windows
* Adds 'deps' dependency for cirdl
* Adjust docker-entrypoint.sh to use CODEX_CIRCUIT_DIR env var
* Review comments by Giuliano
---------
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: Veaceslav Doina <20563034+veaceslavdoina@users.noreply.github.com>
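The downloader's core step replaces the earlier curl invocation with a chronos HTTP session. A minimal sketch under stated assumptions: the URL layout and helper name are hypothetical, and the real tool also verifies the circuit hash and unpacks the tar.gz with status-im/nim-zippy:

```nim
import std/uri
import pkg/chronos
import pkg/chronos/apps/http/httpclient

proc fetchCircuitArchive(base, zkeyHash: string): Future[seq[byte]] {.async.} =
  ## Download the circuit bundle keyed by the on-chain zkey hash.
  let session = HttpSessionRef.new()
  try:
    let resp = await session.fetch(parseUri(base & "/" & zkeyHash & ".tar.gz"))
    if resp.status != 200:
      raise newException(IOError, "circuit download failed: " & $resp.status)
    return resp.data
  finally:
    await session.closeWait()
```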
* Support CORS for POST and PATCH availability endpoints (#897)
* Adds testnet marketplace address to known deployments (#911)
* API tweaks for OpenAPI, errors and endpoints (#886)
* All sort of tweaks
* docs: availability's minPrice doc
* Revert changes to the two node test example
* Change default EC params in REST API
Change default EC params in REST API to 3 nodes and 1 tolerance.
Adjust integration tests to honour these settings.
---------
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
---------
Signed-off-by: Adam Uhlíř <adam@uhlir.dev>
Signed-off-by: Giuliano Mega <giuliano.mega@gmail.com>
Signed-off-by: Slava <20563034+veaceslavdoina@users.noreply.github.com>
Signed-off-by: Arnaud <arnaud@status.im>
Signed-off-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
Co-authored-by: markspanbroek <mark@spanbroek.net>
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
Co-authored-by: Tomasz Bekas <tomasz.bekas@gmail.com>
Co-authored-by: Giuliano Mega <giuliano.mega@gmail.com>
Co-authored-by: Arnaud <arno.deville@gmail.com>
Co-authored-by: Ben Bierens <39762930+benbierens@users.noreply.github.com>
Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
Co-authored-by: Arnaud <arnaud@status.im>
Parent 89917d4bb6 · commit 484124db09
```diff
@@ -78,6 +78,12 @@ runs:
           mingw-w64-i686-ntldd-git
           mingw-w64-i686-rust

+    - name: MSYS2 (Windows All) - Downgrade to gcc 13
+      if: inputs.os == 'windows'
+      shell: ${{ inputs.shell }} {0}
+      run: |
+        pacman -U --noconfirm https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-13.2.0-6-any.pkg.tar.zst https://repo.msys2.org/mingw/ucrt64/mingw-w64-ucrt-x86_64-gcc-libs-13.2.0-6-any.pkg.tar.zst
+
     - name: Derive environment variables
       shell: ${{ inputs.shell }} {0}
       run: |
```
```diff
@@ -26,7 +26,7 @@ jobs:
     name: '${{ matrix.os }}-${{ matrix.cpu }}-${{ matrix.nim_version }}-${{ matrix.tests }}'
     runs-on: ${{ matrix.builder }}
-    timeout-minutes: 80
+    timeout-minutes: 90
     steps:
       - name: Checkout sources
         uses: actions/checkout@v4
```
```diff
@@ -53,7 +53,7 @@ jobs:
           node-version: 18.15

      - name: Start Ethereum node with Codex contracts
-       if: matrix.tests == 'contract' || matrix.tests == 'integration' || matrix.tests == 'all'
+       if: matrix.tests == 'contract' || matrix.tests == 'integration' || matrix.tests == 'tools' || matrix.tests == 'all'
        working-directory: vendor/codex-contracts-eth
        env:
          MSYS2_PATH_TYPE: inherit
```
```diff
@@ -79,6 +79,11 @@ jobs:
          path: tests/integration/logs/
          retention-days: 1

+    ## Part 4 Tools ##
+    - name: Tools tests
+      if: matrix.tests == 'tools' || matrix.tests == 'all'
+      run: make -j${ncpu} testTools
+
  status:
    if: always()
    needs: [build]
```
```diff
@@ -33,6 +33,7 @@ jobs:
     os {windows}, cpu {amd64}, builder {windows-latest}, tests {unittest}, nim_version {${{ env.nim_version }}}, shell {msys2}
     os {windows}, cpu {amd64}, builder {windows-latest}, tests {contract}, nim_version {${{ env.nim_version }}}, shell {msys2}
     os {windows}, cpu {amd64}, builder {windows-latest}, tests {integration}, nim_version {${{ env.nim_version }}}, shell {msys2}
+    os {windows}, cpu {amd64}, builder {windows-latest}, tests {tools}, nim_version {${{ env.nim_version }}}, shell {msys2}

 build:
   needs: matrix
```
```diff
@@ -24,7 +24,7 @@ jobs:
     name: Build and Push
     uses: ./.github/workflows/docker-reusable.yml
     with:
-      nimflags: '-d:disableMarchNative -d:codex_enable_api_debug_peers=true -d:codex_enable_proof_failures=true -d:codex_enable_log_counter=true'
+      nimflags: '-d:disableMarchNative -d:codex_enable_api_debug_peers=true -d:codex_enable_proof_failures=true -d:codex_enable_log_counter=true -d:verify_circuit=true'
       nat_ip_auto: true
       tag_latest: ${{ github.ref_name == github.event.repository.default_branch || startsWith(github.ref, 'refs/tags/') }}
       tag_suffix: dist-tests
```
```diff
@@ -8,11 +8,13 @@ on:

 env:
   cache_nonce: 0 # Allows for easily busting actions/cache caches
-  nim_version: v1.6.14
+  nim_version: pinned
   rust_version: 1.78.0
-  binary_base: codex
-  nim_flags: '-d:verify_circuit=true'
   upload_to_codex: false
+  codex_binary_base: codex
+  cirdl_binary_base: cirdl
+  build_dir: build
+  nim_flags: ''
+  windows_libs: 'libstdc++-6.dll libgomp-1.dll libgcc_s_seh-1.dll libwinpthread-1.dll'

 jobs:
   # Matrix
```
```diff
@@ -69,19 +71,48 @@
           macos*)   os_name="darwin"  ;;
           windows*) os_name="windows" ;;
         esac
-        binary="${{ env.binary_base }}-${{ github.ref_name }}-${os_name}-${{ matrix.cpu }}"
-        [[ ${os_name} == "windows" ]] && binary="${binary}.exe"
-        echo "binary=${binary}" >>$GITHUB_ENV
+        codex_binary="${{ env.codex_binary_base }}-${{ github.ref_name }}-${os_name}-${{ matrix.cpu }}"
+        cirdl_binary="${{ env.cirdl_binary_base }}-${{ github.ref_name }}-${os_name}-${{ matrix.cpu }}"
+        if [[ ${os_name} == "windows" ]]; then
+          codex_binary="${codex_binary}.exe"
+          cirdl_binary="${cirdl_binary}.exe"
+        fi
+        echo "codex_binary=${codex_binary}" >>$GITHUB_ENV
+        echo "cirdl_binary=${cirdl_binary}" >>$GITHUB_ENV

     - name: Release - Build
       run: |
-        make NIMFLAGS="--out:${{ env.binary }} ${{ env.nim_flags }}"
+        make NIMFLAGS="--out:${{ env.build_dir }}/${{ env.codex_binary }} ${{ env.nim_flags }}"
+        make cirdl NIMFLAGS="--out:${{ env.build_dir }}/${{ env.cirdl_binary }} ${{ env.nim_flags }}"

-    - name: Release - Upload binaries
+    - name: Release - Libraries
+      run: |
+        if [[ "${{ matrix.os }}" == "windows" ]]; then
+          for lib in ${{ env.windows_libs }}; do
+            cp -v "${MINGW_PREFIX}/bin/${lib}" "${{ env.build_dir }}"
+          done
+        fi
+
+    - name: Release - Upload codex build artifacts
       uses: actions/upload-artifact@v4
       with:
-        name: release-${{ env.binary }}
-        path: ${{ env.binary }}
+        name: release-${{ env.codex_binary }}
+        path: ${{ env.build_dir }}/${{ env.codex_binary_base }}*
         retention-days: 1

+    - name: Release - Upload cirdl build artifacts
+      uses: actions/upload-artifact@v4
+      with:
+        name: release-${{ env.cirdl_binary }}
+        path: ${{ env.build_dir }}/${{ env.cirdl_binary_base }}*
+        retention-days: 1
+
+    - name: Release - Upload windows libs
+      if: matrix.os == 'windows'
+      uses: actions/upload-artifact@v4
+      with:
+        name: release-${{ matrix.os }}-libs
+        path: ${{ env.build_dir }}/*.dll
+        retention-days: 1
+
     # Release
```
```diff
@@ -100,46 +131,47 @@
     - name: Release - Compress and checksum
       run: |
         cd /tmp/release
-        prepare() {
+        # Checksum
+        checksum() {
           arc="${1}"
           sha256sum "${arc}" >"${arc}.sha256"

           # Upload to Codex
           if [[ "${{ env.upload_to_codex }}" == "true" ]]; then
             codex_endpoints="${{ secrets.CODEX_ENDPOINTS }}"
             codex_username="${{ secrets.CODEX_USERNAME }}"
             codex_password="${{ secrets.CODEX_PASSWORD }}"

             for endpoint in ${codex_endpoints}; do
               echo "::add-mask::${endpoint}"
               cid=$(curl -X POST \
                 "${endpoint}/api/codex/v1/data" \
                 -u "${codex_username}":"${codex_password}" \
                 -H "content-type: application/octet-stream" \
                 -T "${arc}")

               echo "${cid}" >"${arc}.cid"
             done
           fi
         }

         # Compress and prepare
-        for file in *; do
+        for file in ${{ env.codex_binary_base }}* ${{ env.cirdl_binary_base }}*; do
           if [[ "${file}" == *".exe"* ]]; then

+            # Windows - binary only
             arc="${file%.*}.zip"
             zip "${arc}" "${file}"
+            checksum "${arc}"
+
+            # Windows - binary and libs
+            arc="${file%.*}-libs.zip"
+            zip "${arc}" "${file}" ${{ env.windows_libs }}
             rm -f "${file}"
-            prepare "${arc}"
+            checksum "${arc}"
           else

+            # Linux/macOS
             arc="${file}.tar.gz"
             chmod 755 "${file}"
             tar cfz "${arc}" "${file}"
             rm -f "${file}"
-            prepare "${arc}"
+            checksum "${arc}"
           fi
         done
+        rm -f ${{ env.windows_libs }}

     - name: Release - Upload compressed artifacts and checksums
       uses: actions/upload-artifact@v4
       with:
         name: archives-and-checksums
         path: /tmp/release/
         retention-days: 1

     - name: Release
       uses: softprops/action-gh-release@v2
       if: startsWith(github.ref, 'refs/tags/')
       with:
         files: |
           /tmp/release/*
```
```diff
@@ -3,6 +3,7 @@
 !*.*
 *.exe

+!LICENSE*
 !Makefile

 nimcache/
```
```diff
@@ -215,3 +215,6 @@
 [submodule "vendor/nim-leveldbstatic"]
 	path = vendor/nim-leveldbstatic
 	url = https://github.com/codex-storage/nim-leveldb.git
+[submodule "vendor/nim-zippy"]
+	path = vendor/nim-zippy
+	url = https://github.com/status-im/nim-zippy.git
```
BUILDING.md

````diff
@@ -33,7 +33,7 @@ The current implementation of Codex's zero-knowledge proving circuit requires th
 On a bare bones installation of Debian (or a distribution derived from Debian, such as Ubuntu), run

 ```shell
-apt-get update && apt-get install build-essential cmake curl git rustc cargo
+$ apt-get update && apt-get install build-essential cmake curl git rustc cargo
 ```

 Non-Debian distributions have different package managers: `apk`, `dnf`, `pacman`, `rpm`, `yum`, etc.

@@ -157,6 +157,16 @@ In Bash run
 make test
 ```

+### Tools
+
+#### Circuit download tool
+
+To build the circuit download tool located in `tools/cirdl` run:
+
+```shell
+make cirdl
+```
+
 ### testAll

 #### Prerequisites
````
```diff
@@ -0,0 +1,199 @@ (new file)
```

The new file contains the standard Apache License, Version 2.0 text, added verbatim: the definitions, copyright and patent grants, redistribution conditions, warranty disclaimer and limitation of liability (sections 1-9), plus the appendix describing how to apply the license. See http://www.apache.org/licenses/LICENSE-2.0.
```diff
@@ -0,0 +1,19 @@ (new file)
```

The new file contains the standard MIT License text, added verbatim.
Makefile

```diff
@@ -74,6 +74,11 @@ all: | build deps
 	echo -e $(BUILD_MSG) "build/$@" && \
 		$(ENV_SCRIPT) nim codex $(NIM_PARAMS) build.nims

+# Build tools/cirdl
+cirdl: | deps
+	echo -e $(BUILD_MSG) "build/$@" && \
+		$(ENV_SCRIPT) nim toolsCirdl $(NIM_PARAMS) build.nims
+
 # must be included after the default target
 -include $(BUILD_SYSTEM_DIR)/makefiles/targets.mk

@@ -124,7 +129,12 @@ testAll: | build deps
 # Builds and runs Taiko L2 tests
 testTaiko: | build deps
 	echo -e $(BUILD_MSG) "build/$@" && \
-		$(ENV_SCRIPT) nim testTaiko $(NIM_PARAMS) codex.nims
+		$(ENV_SCRIPT) nim testTaiko $(NIM_PARAMS) build.nims

+# Builds and runs tool tests
+testTools: | cirdl
+	echo -e $(BUILD_MSG) "build/$@" && \
+		$(ENV_SCRIPT) nim testTools $(NIM_PARAMS) build.nims
+
 # nim-libbacktrace
 LIBBACKTRACE_MAKE_FLAGS := -C vendor/nim-libbacktrace --no-print-directory BUILD_CXX_LIB=0
```
build.nims

```diff
@@ -1,5 +1,6 @@
 mode = ScriptMode.Verbose

+import std/os except commandLineParams

 ### Helper functions
 proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") =

@@ -14,7 +15,11 @@ proc buildBinary(name: string, srcDir = "./", params = "", lang = "c") =
   for i in 2..<paramCount():
     extra_params &= " " & paramStr(i)

-  let cmd = "nim " & lang & " --out:build/" & name & " " & extra_params & " " & srcDir & name & ".nim"
+  let
+    # Place build output in 'build' folder, even if name includes a longer path.
+    outName = os.lastPathPart(name)
+    cmd = "nim " & lang & " --out:build/" & outName & " " & extra_params & " " & srcDir & name & ".nim"
+
   exec(cmd)

 proc test(name: string, srcDir = "tests/", params = "", lang = "c") =

@@ -24,6 +29,9 @@ proc test(name: string, srcDir = "tests/", params = "", lang = "c") =
 task codex, "build codex binary":
   buildBinary "codex", params = "-d:chronicles_runtime_filtering -d:chronicles_log_level=TRACE"

+task toolsCirdl, "build tools/cirdl binary":
+  buildBinary "tools/cirdl/cirdl"
+
 task testCodex, "Build & run Codex tests":
   test "testCodex", params = "-d:codex_enable_proof_failures=true"

@@ -40,10 +48,15 @@ task build, "build codex binary":
 task test, "Run tests":
   testCodexTask()

+task testTools, "Run Tools tests":
+  toolsCirdlTask()
+  test "testTools"
+
 task testAll, "Run all tests (except for Taiko L2 tests)":
   testCodexTask()
   testContractsTask()
   testIntegrationTask()
+  testToolsTask()

 task testTaiko, "Run Taiko L2 tests":
   codexTask()
```
```diff
@@ -67,6 +67,9 @@ when isMainModule:
     # permissions are insecure.
     quit QuitFailure

+  if config.prover() and not(checkAndCreateDataDir((config.circuitDir).string)):
+    quit QuitFailure
+
   trace "Data dir initialized", dir = $config.dataDir

   if not(checkAndCreateDataDir((config.dataDir / "repo"))):
```
```diff
@@ -1,5 +1,6 @@
 import ./engine/discovery
+import ./engine/advertiser
 import ./engine/engine
 import ./engine/payments

-export discovery, engine, payments
+export discovery, advertiser, engine, payments
```
New file (`@@ -0,0 +1,177 @@`), the advertiser engine:

```nim
## Nim-Codex
## Copyright (c) 2022 Status Research & Development GmbH
## Licensed under either of
##  * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
##  * MIT license ([LICENSE-MIT](LICENSE-MIT))
## at your option.
## This file may not be copied, modified, or distributed except according to
## those terms.

import pkg/chronos
import pkg/libp2p/cid
import pkg/libp2p/multicodec
import pkg/metrics
import pkg/questionable
import pkg/questionable/results

import ../protobuf/presence
import ../peers

import ../../utils
import ../../discovery
import ../../stores/blockstore
import ../../logutils
import ../../manifest

logScope:
  topics = "codex discoveryengine advertiser"

declareGauge(codexInflightAdvertise, "inflight advertise requests")

const
  DefaultConcurrentAdvertRequests = 10
  DefaultAdvertiseLoopSleep = 30.minutes

type
  Advertiser* = ref object of RootObj
    localStore*: BlockStore                    # Local block store for this instance
    discovery*: Discovery                      # Discovery interface

    advertiserRunning*: bool                   # Indicates if discovery is running
    concurrentAdvReqs: int                     # Concurrent advertise requests

    advertiseLocalStoreLoop*: Future[void]     # Advertise loop task handle
    advertiseQueue*: AsyncQueue[Cid]           # Advertise queue
    advertiseTasks*: seq[Future[void]]         # Advertise tasks

    advertiseLocalStoreLoopSleep: Duration     # Advertise loop sleep
    inFlightAdvReqs*: Table[Cid, Future[void]] # Inflight advertise requests

proc addCidToQueue(b: Advertiser, cid: Cid) {.async.} =
  if cid notin b.advertiseQueue:
    await b.advertiseQueue.put(cid)
    trace "Advertising", cid

proc advertiseBlock(b: Advertiser, cid: Cid) {.async.} =
  without isM =? cid.isManifest, err:
    warn "Unable to determine if cid is manifest"
    return

  if isM:
    without blk =? await b.localStore.getBlock(cid), err:
      error "Error retrieving manifest block", cid, err = err.msg
      return

    without manifest =? Manifest.decode(blk), err:
      error "Unable to decode as manifest", err = err.msg
      return

    # announce manifest cid and tree cid
    await b.addCidToQueue(cid)
    await b.addCidToQueue(manifest.treeCid)

proc advertiseLocalStoreLoop(b: Advertiser) {.async.} =
  while b.advertiserRunning:
    if cids =? await b.localStore.listBlocks(blockType = BlockType.Manifest):
      trace "Advertiser begins iterating blocks..."
      for c in cids:
        if cid =? await c:
          await b.advertiseBlock(cid)
      trace "Advertiser iterating blocks finished."

    await sleepAsync(b.advertiseLocalStoreLoopSleep)

  info "Exiting advertise task loop"

proc processQueueLoop(b: Advertiser) {.async.} =
  while b.advertiserRunning:
    try:
      let
        cid = await b.advertiseQueue.get()

      if cid in b.inFlightAdvReqs:
        continue

      try:
        let
          request = b.discovery.provide(cid)

        b.inFlightAdvReqs[cid] = request
        codexInflightAdvertise.set(b.inFlightAdvReqs.len.int64)
        await request

      finally:
        b.inFlightAdvReqs.del(cid)
        codexInflightAdvertise.set(b.inFlightAdvReqs.len.int64)
    except CancelledError:
      trace "Advertise task cancelled"
      return
    except CatchableError as exc:
      warn "Exception in advertise task runner", exc = exc.msg

  info "Exiting advertise task runner"

proc start*(b: Advertiser) {.async.} =
  ## Start the advertiser
  ##

  trace "Advertiser start"

  proc onBlock(cid: Cid) {.async.} =
    await b.advertiseBlock(cid)

  doAssert(b.localStore.onBlockStored.isNone())
  b.localStore.onBlockStored = onBlock.some

  if b.advertiserRunning:
    warn "Starting advertiser twice"
    return

  b.advertiserRunning = true
  for i in 0..<b.concurrentAdvReqs:
    b.advertiseTasks.add(processQueueLoop(b))

  b.advertiseLocalStoreLoop = advertiseLocalStoreLoop(b)

proc stop*(b: Advertiser) {.async.} =
  ## Stop the advertiser
  ##

  trace "Advertiser stop"
  if not b.advertiserRunning:
    warn "Stopping advertiser without starting it"
    return

  b.advertiserRunning = false
  # Stop incoming tasks from callback and localStore loop
  b.localStore.onBlockStored = CidCallback.none
  if not b.advertiseLocalStoreLoop.isNil and not b.advertiseLocalStoreLoop.finished:
    trace "Awaiting advertise loop to stop"
    await b.advertiseLocalStoreLoop.cancelAndWait()
    trace "Advertise loop stopped"

  # Clear up remaining tasks
  for task in b.advertiseTasks:
    if not task.finished:
      trace "Awaiting advertise task to stop"
      await task.cancelAndWait()
      trace "Advertise task stopped"

  trace "Advertiser stopped"

proc new*(
    T: type Advertiser,
    localStore: BlockStore,
    discovery: Discovery,
    concurrentAdvReqs = DefaultConcurrentAdvertRequests,
    advertiseLocalStoreLoopSleep = DefaultAdvertiseLoopSleep
): Advertiser =
  ## Create a advertiser instance
  ##
  Advertiser(
    localStore: localStore,
    discovery: discovery,
    concurrentAdvReqs: concurrentAdvReqs,
    advertiseQueue: newAsyncQueue[Cid](concurrentAdvReqs),
    inFlightAdvReqs: initTable[Cid, Future[void]](),
    advertiseLocalStoreLoopSleep: advertiseLocalStoreLoopSleep)
```
```diff
@@ -35,11 +35,9 @@ declareGauge(codexInflightDiscovery, "inflight discovery requests")

 const
   DefaultConcurrentDiscRequests = 10
-  DefaultConcurrentAdvertRequests = 10
   DefaultDiscoveryTimeout = 1.minutes
   DefaultMinPeersPerBlock = 3
   DefaultDiscoveryLoopSleep = 3.seconds
-  DefaultAdvertiseLoopSleep = 30.minutes

 type
   DiscoveryEngine* = ref object of RootObj

@@ -49,20 +47,13 @@ type
     discovery*: Discovery                # Discovery interface
     pendingBlocks*: PendingBlocksManager # Blocks we're awaiting to be resolved
     discEngineRunning*: bool             # Indicates if discovery is running
-    concurrentAdvReqs: int               # Concurrent advertise requests
     concurrentDiscReqs: int              # Concurrent discovery requests
-    advertiseLoop*: Future[void]         # Advertise loop task handle
-    advertiseQueue*: AsyncQueue[Cid]     # Advertise queue
-    advertiseTasks*: seq[Future[void]]   # Advertise tasks
     discoveryLoop*: Future[void]         # Discovery loop task handle
     discoveryQueue*: AsyncQueue[Cid]     # Discovery queue
     discoveryTasks*: seq[Future[void]]   # Discovery tasks
     minPeersPerBlock*: int               # Max number of peers with block
     discoveryLoopSleep: Duration         # Discovery loop sleep
-    advertiseLoopSleep: Duration         # Advertise loop sleep
     inFlightDiscReqs*: Table[Cid, Future[seq[SignedPeerRecord]]] # Inflight discovery requests
-    inFlightAdvReqs*: Table[Cid, Future[void]] # Inflight advertise requests
-    advertiseType*: BlockType            # Advertice blocks, manifests or both

 proc discoveryQueueLoop(b: DiscoveryEngine) {.async.} =
   while b.discEngineRunning:

@@ -81,69 +72,6 @@ proc discoveryQueueLoop(b: DiscoveryEngine) {.async.} =
     await sleepAsync(b.discoveryLoopSleep)

- [63 removed lines: advertiseBlock, advertiseQueueLoop and advertiseTaskLoop, relocated nearly verbatim into the new advertiser module above]

 proc discoveryTaskLoop(b: DiscoveryEngine) {.async.} =
   ## Run discovery tasks
   ##

@@ -168,7 +96,7 @@ proc discoveryTaskLoop(b: DiscoveryEngine) {.async.} =
             .wait(DefaultDiscoveryTimeout)

           b.inFlightDiscReqs[cid] = request
-          codexInflightDiscovery.set(b.inFlightAdvReqs.len.int64)
+          codexInflightDiscovery.set(b.inFlightDiscReqs.len.int64)
           let
             peers = await request

@@ -182,7 +110,7 @@ proc discoveryTaskLoop(b: DiscoveryEngine) {.async.} =

         finally:
           b.inFlightDiscReqs.del(cid)
-          codexInflightDiscovery.set(b.inFlightAdvReqs.len.int64)
+          codexInflightDiscovery.set(b.inFlightDiscReqs.len.int64)
     except CancelledError:
       trace "Discovery task cancelled"
       return

@@ -199,14 +127,6 @@ proc queueFindBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) {.inline.} =
     except CatchableError as exc:
       warn "Exception queueing discovery request", exc = exc.msg

-proc queueProvideBlocksReq*(b: DiscoveryEngine, cids: seq[Cid]) {.inline.} =
-  for cid in cids:
-    if cid notin b.advertiseQueue:
-      try:
-        b.advertiseQueue.putNoWait(cid)
-      except CatchableError as exc:
-        warn "Exception queueing discovery request", exc = exc.msg
-
 proc start*(b: DiscoveryEngine) {.async.} =
   ## Start the discengine task
   ##

@@ -218,13 +138,9 @@ proc start*(b: DiscoveryEngine) {.async.} =
     return

   b.discEngineRunning = true
-  for i in 0..<b.concurrentAdvReqs:
-    b.advertiseTasks.add(advertiseTaskLoop(b))
-
   for i in 0..<b.concurrentDiscReqs:
     b.discoveryTasks.add(discoveryTaskLoop(b))

-  b.advertiseLoop = advertiseQueueLoop(b)
   b.discoveryLoop = discoveryQueueLoop(b)

 proc stop*(b: DiscoveryEngine) {.async.} =

@@ -237,23 +153,12 @@ proc stop*(b: DiscoveryEngine) {.async.} =
     return

   b.discEngineRunning = false
-  for task in b.advertiseTasks:
-    if not task.finished:
-      trace "Awaiting advertise task to stop"
-      await task.cancelAndWait()
-      trace "Advertise task stopped"
-
   for task in b.discoveryTasks:
     if not task.finished:
       trace "Awaiting discovery task to stop"
       await task.cancelAndWait()
       trace "Discovery task stopped"

-  if not b.advertiseLoop.isNil and not b.advertiseLoop.finished:
-    trace "Awaiting advertise loop to stop"
-    await b.advertiseLoop.cancelAndWait()
-    trace "Advertise loop stopped"
-
   if not b.discoveryLoop.isNil and not b.discoveryLoop.finished:
     trace "Awaiting discovery loop to stop"
     await b.discoveryLoop.cancelAndWait()

@@ -268,12 +173,9 @@ proc new*(
     network: BlockExcNetwork,
     discovery: Discovery,
     pendingBlocks: PendingBlocksManager,
-    concurrentAdvReqs = DefaultConcurrentAdvertRequests,
     concurrentDiscReqs = DefaultConcurrentDiscRequests,
     discoveryLoopSleep = DefaultDiscoveryLoopSleep,
-    advertiseLoopSleep = DefaultAdvertiseLoopSleep,
-    minPeersPerBlock = DefaultMinPeersPerBlock,
-    advertiseType = BlockType.Manifest
+    minPeersPerBlock = DefaultMinPeersPerBlock
 ): DiscoveryEngine =
   ## Create a discovery engine instance for advertising services
   ##

@@ -283,13 +185,8 @@ proc new*(
     network: network,
     discovery: discovery,
     pendingBlocks: pendingBlocks,
-    concurrentAdvReqs: concurrentAdvReqs,
     concurrentDiscReqs: concurrentDiscReqs,
-    advertiseQueue: newAsyncQueue[Cid](concurrentAdvReqs),
     discoveryQueue: newAsyncQueue[Cid](concurrentDiscReqs),
     inFlightDiscReqs: initTable[Cid, Future[seq[SignedPeerRecord]]](),
     discoveryLoopSleep: discoveryLoopSleep,
-    advertiseLoopSleep: advertiseLoopSleep,
-    minPeersPerBlock: minPeersPerBlock,
-    advertiseType: advertiseType)
+    minPeersPerBlock: minPeersPerBlock)
```
```diff
@@ -34,6 +34,7 @@ import ../peers

 import ./payments
 import ./discovery
+import ./advertiser
 import ./pendingblocks

 export peers, pendingblocks, payments, discovery
```
```diff
@@ -77,6 +78,7 @@ type
     pricing*: ?Pricing              # Optional bandwidth pricing
     blockFetchTimeout*: Duration    # Timeout for fetching blocks over the network
     discovery*: DiscoveryEngine
+    advertiser*: Advertiser

   Pricing* = object
     address*: EthAddress
```
```diff
@@ -93,6 +95,7 @@ proc start*(b: BlockExcEngine) {.async.} =
   ##

   await b.discovery.start()
+  await b.advertiser.start()

   trace "Blockexc starting with concurrent tasks", tasks = b.concurrentTasks
   if b.blockexcRunning:
```
```diff
@@ -108,6 +111,7 @@ proc stop*(b: BlockExcEngine) {.async.} =
   ##

   await b.discovery.stop()
+  await b.advertiser.stop()

   trace "NetworkStore stop"
   if not b.blockexcRunning:
```
```diff
@@ -284,27 +288,11 @@ proc cancelBlocks(b: BlockExcEngine, addrs: seq[BlockAddress]) {.async.} =
   if failed.len > 0:
     warn "Failed to send block request cancellations to peers", peers = failed.len

-proc getAnnouceCids(blocksDelivery: seq[BlockDelivery]): seq[Cid] =
-  var cids = initHashSet[Cid]()
-  for bd in blocksDelivery:
-    if bd.address.leaf:
-      cids.incl(bd.address.treeCid)
-    else:
-      without isM =? bd.address.cid.isManifest, err:
-        warn "Unable to determine if cid is manifest"
-        continue
-      if isM:
-        cids.incl(bd.address.cid)
-  return cids.toSeq
-
 proc resolveBlocks*(b: BlockExcEngine, blocksDelivery: seq[BlockDelivery]) {.async.} =
   b.pendingBlocks.resolve(blocksDelivery)
   await b.scheduleTasks(blocksDelivery)
-  let announceCids = getAnnouceCids(blocksDelivery)
   await b.cancelBlocks(blocksDelivery.mapIt(it.address))

-  b.discovery.queueProvideBlocksReq(announceCids)
-
 proc resolveBlocks*(b: BlockExcEngine, blocks: seq[Block]) {.async.} =
   await b.resolveBlocks(
     blocks.mapIt(
```
```diff
@@ -596,6 +584,7 @@ proc new*(
     wallet: WalletRef,
     network: BlockExcNetwork,
     discovery: DiscoveryEngine,
+    advertiser: Advertiser,
     peerStore: PeerCtxStore,
     pendingBlocks: PendingBlocksManager,
     concurrentTasks = DefaultConcurrentTasks,
```
```diff
@@ -616,6 +605,7 @@ proc new*(
     concurrentTasks: concurrentTasks,
     taskQueue: newAsyncHeapQueue[BlockExcPeerCtx](DefaultTaskQueueSize),
     discovery: discovery,
+    advertiser: advertiser,
     blockFetchTimeout: blockFetchTimeout)

 proc peerEventHandler(peerId: PeerId, event: PeerEvent) {.async.} =
```
```diff
@@ -31,7 +31,7 @@ import ./codextypes
 export errors, logutils, units, codextypes

 type
-  Block* = object of RootObj
+  Block* = ref object of RootObj
    cid*: Cid
    data*: seq[byte]
```
```diff
@@ -110,7 +110,7 @@ proc bootstrapInteractions(
     quit QuitFailure

   let marketplace = Marketplace.new(marketplaceAddress, signer)
-  let market = OnChainMarket.new(marketplace)
+  let market = OnChainMarket.new(marketplace, config.rewardRecipient)
   let clock = OnChainClock.new(provider)

   var client: ?ClientInteractions
```
```diff
@@ -268,36 +268,13 @@ proc new*(

     peerStore = PeerCtxStore.new()
     pendingBlocks = PendingBlocksManager.new()
+    advertiser = Advertiser.new(repoStore, discovery)
     blockDiscovery = DiscoveryEngine.new(repoStore, peerStore, network, discovery, pendingBlocks)
-    engine = BlockExcEngine.new(repoStore, wallet, network, blockDiscovery, peerStore, pendingBlocks)
+    engine = BlockExcEngine.new(repoStore, wallet, network, blockDiscovery, advertiser, peerStore, pendingBlocks)
     store = NetworkStore.new(engine, repoStore)
     prover = if config.prover:
-      if not fileAccessible($config.circomR1cs, {AccessFlags.Read}) and
-        endsWith($config.circomR1cs, ".r1cs"):
-        error "Circom R1CS file not accessible"
-        raise (ref Defect)(
-          msg: "r1cs file not readable, doesn't exist or wrong extension (.r1cs)")
-
-      if not fileAccessible($config.circomWasm, {AccessFlags.Read}) and
-        endsWith($config.circomWasm, ".wasm"):
-        error "Circom wasm file not accessible"
-        raise (ref Defect)(
-          msg: "wasm file not readable, doesn't exist or wrong extension (.wasm)")
-
-      let zkey = if not config.circomNoZkey:
-        if not fileAccessible($config.circomZkey, {AccessFlags.Read}) and
-          endsWith($config.circomZkey, ".zkey"):
-          error "Circom zkey file not accessible"
-          raise (ref Defect)(
-            msg: "zkey file not readable, doesn't exist or wrong extension (.zkey)")
-
-        $config.circomZkey
-      else: ""
-
-      some Prover.new(
-        store,
-        CircomCompat.init($config.circomR1cs, $config.circomWasm, zkey),
-        config.numProofSamples)
+      let backend = config.initializeBackend().expect("Unable to create prover backend.")
+      some Prover.new(store, backend, config.numProofSamples)
     else:
       none Prover
```
```diff
@@ -62,6 +62,7 @@ const
   codex_enable_log_counter* {.booldefine.} = false

   DefaultDataDir* = defaultDataDir()
+  DefaultCircuitDir* = defaultDataDir() / "circuits"

 type
   StartUpCmd* {.pure.} = enum
```
```diff
@@ -293,28 +294,40 @@ type
         name: "validator-max-slots"
       .}: int

+    rewardRecipient* {.
+      desc: "Address to send payouts to (eg rewards and refunds)"
+      name: "reward-recipient"
+    .}: Option[EthAddress]
+
     case persistenceCmd* {.
       defaultValue: noCmd
       command }: PersistenceCmd

     of PersistenceCmd.prover:
+      circuitDir* {.
+        desc: "Directory where Codex will store proof circuit data"
+        defaultValue: DefaultCircuitDir
+        defaultValueDesc: $DefaultCircuitDir
+        abbr: "cd"
+        name: "circuit-dir" }: OutDir
+
       circomR1cs* {.
         desc: "The r1cs file for the storage circuit"
-        defaultValue: $DefaultDataDir / "circuits" / "proof_main.r1cs"
-        defaultValueDesc: $DefaultDataDir & "/circuits/proof_main.r1cs"
+        defaultValue: $DefaultCircuitDir / "proof_main.r1cs"
+        defaultValueDesc: $DefaultCircuitDir & "/proof_main.r1cs"
         name: "circom-r1cs"
       .}: InputFile

       circomWasm* {.
         desc: "The wasm file for the storage circuit"
-        defaultValue: $DefaultDataDir / "circuits" / "proof_main.wasm"
+        defaultValue: $DefaultCircuitDir / "proof_main.wasm"
         defaultValueDesc: $DefaultDataDir & "/circuits/proof_main.wasm"
         name: "circom-wasm"
       .}: InputFile

       circomZkey* {.
         desc: "The zkey file for the storage circuit"
-        defaultValue: $DefaultDataDir / "circuits" / "proof_main.zkey"
+        defaultValue: $DefaultCircuitDir / "proof_main.zkey"
         defaultValueDesc: $DefaultDataDir & "/circuits/proof_main.zkey"
         name: "circom-zkey"
       .}: InputFile
```
@@ -18,6 +18,10 @@ type
     timeout*: UInt256 # mark proofs as missing before the timeout (in seconds)
     downtime*: uint8 # ignore this many recent blocks for proof requirements
     zkeyHash*: string # hash of the zkey file which is linked to the verifier
+    # Ensures the pointer does not remain in downtime for many consecutive
+    # periods. For each period increase, move the pointer `downtimeProduct`
+    # blocks. Should be a prime number to ensure there are no cycles.
+    downtimeProduct*: uint8

 func fromTuple(_: type ProofConfig, tupl: tuple): ProofConfig =

@@ -25,7 +29,8 @@ func fromTuple(_: type ProofConfig, tupl: tuple): ProofConfig =
     period: tupl[0],
     timeout: tupl[1],
     downtime: tupl[2],
-    zkeyHash: tupl[3]
+    zkeyHash: tupl[3],
+    downtimeProduct: tupl[4]
   )

 func fromTuple(_: type CollateralConfig, tupl: tuple): CollateralConfig =
@@ -19,6 +19,10 @@ const knownAddresses = {
   # Taiko Alpha-3 Testnet
   "167005": {
     "Marketplace": Address.init("0x948CF9291b77Bd7ad84781b9047129Addf1b894F")
   }.toTable,
+  # Codex Testnet - Sept-2024
+  "789987": {
+    "Marketplace": Address.init("0xCDef8d6884557be4F68dC265b6bB2E3e52a6C9d6")
+  }.toTable
 }.toTable
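For reference, resolving a deployed Marketplace from this table is a small lookup; a sketch in Nim (the chain-id literal is just an example):

    # Illustrative lookup against the knownAddresses table above.
    let deployments = knownAddresses.getOrDefault("789987")
    if "Marketplace" in deployments:
      let marketplaceAddress = deployments["Marketplace"]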
@@ -19,17 +19,24 @@ type
   OnChainMarket* = ref object of Market
     contract: Marketplace
     signer: Signer
+    rewardRecipient: ?Address
   MarketSubscription = market.Subscription
   EventSubscription = ethers.Subscription
   OnChainMarketSubscription = ref object of MarketSubscription
     eventSubscription: EventSubscription

-func new*(_: type OnChainMarket, contract: Marketplace): OnChainMarket =
+func new*(
+    _: type OnChainMarket,
+    contract: Marketplace,
+    rewardRecipient = Address.none): OnChainMarket =
+
   without signer =? contract.signer:
     raiseAssert("Marketplace contract should have a signer")

   OnChainMarket(
     contract: contract,
     signer: signer,
+    rewardRecipient: rewardRecipient
   )

 proc raiseMarketError(message: string) {.raises: [MarketError].} =
@@ -163,7 +170,23 @@ method fillSlot(market: OnChainMarket,

 method freeSlot*(market: OnChainMarket, slotId: SlotId) {.async.} =
   convertEthersError:
-    discard await market.contract.freeSlot(slotId).confirm(0)
+    var freeSlot: Future[?TransactionResponse]
+    if rewardRecipient =? market.rewardRecipient:
+      # If --reward-recipient specified, use it as the reward recipient, and use
+      # the SP's address as the collateral recipient
+      let collateralRecipient = await market.getSigner()
+      freeSlot = market.contract.freeSlot(
+        slotId,
+        rewardRecipient,      # --reward-recipient
+        collateralRecipient)  # SP's address
+
+    else:
+      # Otherwise, use the SP's address as both the reward and collateral
+      # recipient (the contract will use msg.sender for both)
+      freeSlot = market.contract.freeSlot(slotId)
+
+    discard await freeSlot.confirm(0)

 method withdrawFunds(market: OnChainMarket,
                      requestId: RequestId) {.async.} =
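The effect of the overload above, sketched with hypothetical values (`coldWallet` and `slotId` are placeholders, not from this diff):

    let market = OnChainMarket.new(marketplace, some coldWallet)
    await market.freeSlot(slotId)
    # rewards    -> coldWallet (the configured --reward-recipient)
    # collateral -> the SP's own signer address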
@@ -347,9 +370,11 @@ method subscribeProofSubmission*(market: OnChainMarket,
 method unsubscribe*(subscription: OnChainMarketSubscription) {.async.} =
   await subscription.eventSubscription.unsubscribe()

-method queryPastStorageRequests*(market: OnChainMarket,
-                                 blocksAgo: int):
-                                Future[seq[PastStorageRequest]] {.async.} =
+method queryPastEvents*[T: MarketplaceEvent](
+  market: OnChainMarket,
+  _: type T,
+  blocksAgo: int): Future[seq[T]] {.async.} =

   convertEthersError:
     let contract = market.contract
     let provider = contract.provider

@@ -357,13 +382,6 @@ method queryPastStorageRequests*(market: OnChainMarket,
     let head = await provider.getBlockNumber()
     let fromBlock = BlockTag.init(head - blocksAgo.abs.u256)

-    let events = await contract.queryFilter(StorageRequested,
+    return await contract.queryFilter(T,
                                             fromBlock,
                                             BlockTag.latest)
-    return events.map(event =>
-      PastStorageRequest(
-        requestId: event.requestId,
-        ask: event.ask,
-        expiry: event.expiry
-      )
-    )
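A usage sketch for the generic query above; the event types come from this diff, while the 64-block window is an arbitrary example:

    # One generic entry point now serves every marketplace event type.
    let requested = await market.queryPastEvents(StorageRequested, blocksAgo = 64)
    let freed = await market.queryPastEvents(SlotFreed, blocksAgo = 64)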
@@ -16,25 +16,6 @@ export requests

 type
   Marketplace* = ref object of Contract
-  StorageRequested* = object of Event
-    requestId*: RequestId
-    ask*: StorageAsk
-    expiry*: UInt256
-  SlotFilled* = object of Event
-    requestId* {.indexed.}: RequestId
-    slotIndex*: UInt256
-  SlotFreed* = object of Event
-    requestId* {.indexed.}: RequestId
-    slotIndex*: UInt256
-  RequestFulfilled* = object of Event
-    requestId* {.indexed.}: RequestId
-  RequestCancelled* = object of Event
-    requestId* {.indexed.}: RequestId
-  RequestFailed* = object of Event
-    requestId* {.indexed.}: RequestId
-  ProofSubmitted* = object of Event
-    id*: SlotId

 proc config*(marketplace: Marketplace): MarketplaceConfig {.contract, view.}
 proc token*(marketplace: Marketplace): Address {.contract, view.}

@@ -45,7 +26,9 @@ proc minCollateralThreshold*(marketplace: Marketplace): UInt256 {.contract, view.}
 proc requestStorage*(marketplace: Marketplace, request: StorageRequest): ?TransactionResponse {.contract.}
 proc fillSlot*(marketplace: Marketplace, requestId: RequestId, slotIndex: UInt256, proof: Groth16Proof): ?TransactionResponse {.contract.}
 proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId): ?TransactionResponse {.contract.}
+proc withdrawFunds*(marketplace: Marketplace, requestId: RequestId, withdrawAddress: Address): ?TransactionResponse {.contract.}
 proc freeSlot*(marketplace: Marketplace, id: SlotId): ?TransactionResponse {.contract.}
+proc freeSlot*(marketplace: Marketplace, id: SlotId, rewardRecipient: Address, collateralRecipient: Address): ?TransactionResponse {.contract.}
 proc getRequest*(marketplace: Marketplace, id: RequestId): StorageRequest {.contract, view.}
 proc getHost*(marketplace: Marketplace, id: SlotId): Address {.contract, view.}
 proc getActiveSlot*(marketplace: Marketplace, id: SlotId): Slot {.contract, view.}
@@ -163,12 +163,12 @@ func id*(request: StorageRequest): RequestId =
   let encoding = AbiEncoder.encode((request, ))
   RequestId(keccak256.digest(encoding).data)

-func slotId*(requestId: RequestId, slot: UInt256): SlotId =
-  let encoding = AbiEncoder.encode((requestId, slot))
+func slotId*(requestId: RequestId, slotIndex: UInt256): SlotId =
+  let encoding = AbiEncoder.encode((requestId, slotIndex))
   SlotId(keccak256.digest(encoding).data)

-func slotId*(request: StorageRequest, slot: UInt256): SlotId =
-  slotId(request.id, slot)
+func slotId*(request: StorageRequest, slotIndex: UInt256): SlotId =
+  slotId(request.id, slotIndex)

 func id*(slot: Slot): SlotId =
   slotId(slot.request, slot.slotIndex)
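The rename is cosmetic; both overloads still derive the same id:

    # Both forms hash keccak256(abi.encode(requestId, slotIndex)).
    let viaRequestId = slotId(request.id, 0.u256)
    let viaRequest = slotId(request, 0.u256)
    assert viaRequestId == viaRequest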
@@ -29,7 +29,7 @@ import ../logutils
 # TODO: Manifest should be reworked to more concrete types,
 # perhaps using inheritance
 type
-  Manifest* = object of RootObj
+  Manifest* = ref object of RootObj
     treeCid {.serialize.}: Cid            # Root of the merkle tree
     datasetSize {.serialize.}: NBytes     # Total size of all blocks
     blockSize {.serialize.}: NBytes       # Size of each contained block (might not be needed if blocks are len-prefixed)
@@ -28,11 +28,28 @@ type
   OnRequestCancelled* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
   OnRequestFailed* = proc(requestId: RequestId) {.gcsafe, upraises:[].}
   OnProofSubmitted* = proc(id: SlotId) {.gcsafe, upraises:[].}
-  PastStorageRequest* = object
+  ProofChallenge* = array[32, byte]
+
+  # Marketplace events -- located here due to the Market abstraction
+  MarketplaceEvent* = Event
+  StorageRequested* = object of MarketplaceEvent
     requestId*: RequestId
     ask*: StorageAsk
     expiry*: UInt256
-  ProofChallenge* = array[32, byte]
+  SlotFilled* = object of MarketplaceEvent
+    requestId* {.indexed.}: RequestId
+    slotIndex*: UInt256
+  SlotFreed* = object of MarketplaceEvent
+    requestId* {.indexed.}: RequestId
+    slotIndex*: UInt256
+  RequestFulfilled* = object of MarketplaceEvent
+    requestId* {.indexed.}: RequestId
+  RequestCancelled* = object of MarketplaceEvent
+    requestId* {.indexed.}: RequestId
+  RequestFailed* = object of MarketplaceEvent
+    requestId* {.indexed.}: RequestId
+  ProofSubmitted* = object of MarketplaceEvent
+    id*: SlotId

 method getZkeyHash*(market: Market): Future[?string] {.base, async.} =
   raiseAssert("not implemented")

@@ -202,7 +219,8 @@ method subscribeProofSubmission*(market: Market,
 method unsubscribe*(subscription: Subscription) {.base, async, upraises:[].} =
   raiseAssert("not implemented")

-method queryPastStorageRequests*(market: Market,
-                                 blocksAgo: int):
-                                Future[seq[PastStorageRequest]] {.base, async.} =
+method queryPastEvents*[T: MarketplaceEvent](
+  market: Market,
+  _: type T,
+  blocksAgo: int): Future[seq[T]] {.base, async.} =
   raiseAssert("not implemented")
@@ -366,9 +366,6 @@ proc store*(
     blocks = manifest.blocksCount,
     datasetSize = manifest.datasetSize

-  await self.discovery.provide(manifestBlk.cid)
-  await self.discovery.provide(treeCid)
-
   return manifestBlk.cid.success

 proc iterateManifests*(self: CodexNodeRef, onManifest: OnManifest) {.async.} =
@@ -110,6 +110,20 @@ proc retrieveCid(
 proc initDataApi(node: CodexNodeRef, repoStore: RepoStore, router: var RestRouter) =
   let allowedOrigin = router.allowedOrigin # prevents capture inside of api definition

+  router.api(
+    MethodOptions,
+    "/api/codex/v1/data") do (
+      resp: HttpResponseRef) -> RestApiResponse:
+
+      if corsOrigin =? allowedOrigin:
+        resp.setHeader("Access-Control-Allow-Origin", corsOrigin)
+        resp.setHeader("Access-Control-Allow-Methods", "POST, OPTIONS")
+        resp.setHeader("Access-Control-Allow-Headers", "content-type")
+        resp.setHeader("Access-Control-Max-Age", "86400")
+
+      resp.status = Http204
+      await resp.sendBody("")
+
   router.rawApi(
     MethodPost,
     "/api/codex/v1/data") do (
|
|||
return RestApiResponse.response($json, contentType="application/json")
|
||||
|
||||
proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
|
||||
let allowedOrigin = router.allowedOrigin
|
||||
|
||||
router.api(
|
||||
MethodGet,
|
||||
"/api/codex/v1/sales/slots") do () -> RestApiResponse:
|
||||
## Returns active slots for the host
|
||||
try:
|
||||
without contracts =? node.contracts.host:
|
||||
return RestApiResponse.error(Http503, "Sales unavailable")
|
||||
return RestApiResponse.error(Http503, "Persistence is not enabled")
|
||||
|
||||
let json = %(await contracts.sales.mySlots())
|
||||
return RestApiResponse.response($json, contentType="application/json")
|
||||
|
@ -231,7 +247,7 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
|
|||
## slot is not active for the host.
|
||||
|
||||
without contracts =? node.contracts.host:
|
||||
return RestApiResponse.error(Http503, "Sales unavailable")
|
||||
return RestApiResponse.error(Http503, "Persistence is not enabled")
|
||||
|
||||
without slotId =? slotId.tryGet.catch, error:
|
||||
return RestApiResponse.error(Http400, error.msg)
|
||||
|
@ -242,7 +258,9 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
|
|||
let restAgent = RestSalesAgent(
|
||||
state: agent.state() |? "none",
|
||||
slotIndex: agent.data.slotIndex,
|
||||
requestId: agent.data.requestId
|
||||
requestId: agent.data.requestId,
|
||||
request: agent.data.request,
|
||||
reservation: agent.data.reservation,
|
||||
)
|
||||
|
||||
return RestApiResponse.response(restAgent.toJson, contentType="application/json")
|
||||
|
@ -254,7 +272,7 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
|
|||
|
||||
try:
|
||||
without contracts =? node.contracts.host:
|
||||
return RestApiResponse.error(Http503, "Sales unavailable")
|
||||
return RestApiResponse.error(Http503, "Persistence is not enabled")
|
||||
|
||||
without avails =? (await contracts.sales.context.reservations.all(Availability)), err:
|
||||
return RestApiResponse.error(Http500, err.msg)
|
||||
|
@ -273,25 +291,32 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
|
|||
##
|
||||
## totalSize - size of available storage in bytes
|
||||
## duration - maximum time the storage should be sold for (in seconds)
|
||||
## minPrice - minimum price to be paid (in amount of tokens)
|
||||
## minPrice - minimal price paid (in amount of tokens) for the whole hosted request's slot for the request's duration
|
||||
## maxCollateral - maximum collateral user is willing to pay per filled Slot (in amount of tokens)
|
||||
|
||||
var headers = newSeq[(string,string)]()
|
||||
|
||||
if corsOrigin =? allowedOrigin:
|
||||
headers.add(("Access-Control-Allow-Origin", corsOrigin))
|
||||
headers.add(("Access-Control-Allow-Methods", "POST, OPTIONS"))
|
||||
headers.add(("Access-Control-Max-Age", "86400"))
|
||||
|
||||
try:
|
||||
without contracts =? node.contracts.host:
|
||||
return RestApiResponse.error(Http503, "Sales unavailable")
|
||||
return RestApiResponse.error(Http503, "Persistence is not enabled", headers = headers)
|
||||
|
||||
let body = await request.getBody()
|
||||
|
||||
without restAv =? RestAvailability.fromJson(body), error:
|
||||
return RestApiResponse.error(Http400, error.msg)
|
||||
return RestApiResponse.error(Http400, error.msg, headers = headers)
|
||||
|
||||
let reservations = contracts.sales.context.reservations
|
||||
|
||||
if restAv.totalSize == 0:
|
||||
return RestApiResponse.error(Http400, "Total size must be larger then zero")
|
||||
return RestApiResponse.error(Http400, "Total size must be larger then zero", headers = headers)
|
||||
|
||||
if not reservations.hasAvailable(restAv.totalSize.truncate(uint)):
|
||||
return RestApiResponse.error(Http422, "Not enough storage quota")
|
||||
return RestApiResponse.error(Http422, "Not enough storage quota", headers = headers)
|
||||
|
||||
without availability =? (
|
||||
await reservations.createAvailability(
|
||||
|
@@ -300,14 +325,27 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
             restAv.minPrice,
             restAv.maxCollateral)
         ), error:
-          return RestApiResponse.error(Http500, error.msg)
+          return RestApiResponse.error(Http500, error.msg, headers = headers)

         return RestApiResponse.response(availability.toJson,
                                         Http201,
-                                        contentType="application/json")
+                                        contentType="application/json",
+                                        headers = headers)
       except CatchableError as exc:
         trace "Excepting processing request", exc = exc.msg
-        return RestApiResponse.error(Http500)
+        return RestApiResponse.error(Http500, headers = headers)

+  router.api(
+    MethodOptions,
+    "/api/codex/v1/sales/availability/{id}") do (id: AvailabilityId, resp: HttpResponseRef) -> RestApiResponse:
+
+      if corsOrigin =? allowedOrigin:
+        resp.setHeader("Access-Control-Allow-Origin", corsOrigin)
+        resp.setHeader("Access-Control-Allow-Methods", "PATCH, OPTIONS")
+        resp.setHeader("Access-Control-Max-Age", "86400")
+
+      resp.status = Http204
+      await resp.sendBody("")
+
   router.rawApi(
     MethodPatch,
@@ -323,7 +361,7 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =

       try:
         without contracts =? node.contracts.host:
-          return RestApiResponse.error(Http503, "Sales unavailable")
+          return RestApiResponse.error(Http503, "Persistence is not enabled")

         without id =? id.tryGet.catch, error:
           return RestApiResponse.error(Http400, error.msg)

@@ -379,7 +417,7 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =

       try:
         without contracts =? node.contracts.host:
-          return RestApiResponse.error(Http503, "Sales unavailable")
+          return RestApiResponse.error(Http503, "Persistence is not enabled")

         without id =? id.tryGet.catch, error:
           return RestApiResponse.error(Http400, error.msg)

@@ -387,6 +425,7 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
           return RestApiResponse.error(Http400, error.msg)

         let reservations = contracts.sales.context.reservations
+        let market = contracts.sales.context.market

         if error =? (await reservations.get(keyId, Availability)).errorOption:
           if error of NotExistsError:
@@ -404,6 +443,8 @@ proc initSalesApi(node: CodexNodeRef, router: var RestRouter) =
         return RestApiResponse.error(Http500)

 proc initPurchasingApi(node: CodexNodeRef, router: var RestRouter) =
+  let allowedOrigin = router.allowedOrigin
+
   router.rawApi(
     MethodPost,
     "/api/codex/v1/storage/request/{cid}") do (cid: Cid) -> RestApiResponse:

@@ -418,37 +459,47 @@ proc initPurchasingApi(node: CodexNodeRef, router: var RestRouter) =
       ## tolerance  - allowed number of nodes that can be lost before content is lost
      ## collateral - requested collateral from hosts when they fill slot

+      var headers = newSeq[(string,string)]()
+
+      if corsOrigin =? allowedOrigin:
+        headers.add(("Access-Control-Allow-Origin", corsOrigin))
+        headers.add(("Access-Control-Allow-Methods", "POST, OPTIONS"))
+        headers.add(("Access-Control-Max-Age", "86400"))
+
       try:
         without contracts =? node.contracts.client:
-          return RestApiResponse.error(Http503, "Purchasing unavailable")
+          return RestApiResponse.error(Http503, "Persistence is not enabled", headers = headers)

         without cid =? cid.tryGet.catch, error:
-          return RestApiResponse.error(Http400, error.msg)
+          return RestApiResponse.error(Http400, error.msg, headers = headers)

         let body = await request.getBody()

         without params =? StorageRequestParams.fromJson(body), error:
-          return RestApiResponse.error(Http400, error.msg)
+          return RestApiResponse.error(Http400, error.msg, headers = headers)

-        let nodes = params.nodes |? 1
-        let tolerance = params.tolerance |? 0
+        let nodes = params.nodes |? 3
+        let tolerance = params.tolerance |? 1
+
+        if tolerance == 0:
+          return RestApiResponse.error(Http400, "Tolerance needs to be bigger than zero", headers = headers)

         # prevent underflow
         if tolerance > nodes:
-          return RestApiResponse.error(Http400, "Invalid parameters: `tolerance` cannot be greater than `nodes`")
+          return RestApiResponse.error(Http400, "Invalid parameters: `tolerance` cannot be greater than `nodes`", headers = headers)

         let ecK = nodes - tolerance
         let ecM = tolerance # for readability

         # ensure leopard constraints of K > 1 and K >= M
         if ecK <= 1 or ecK < ecM:
-          return RestApiResponse.error(Http400, "Invalid parameters: parameters must satisfy `1 < (nodes - tolerance) >= tolerance`")
+          return RestApiResponse.error(Http400, "Invalid parameters: parameters must satisfy `1 < (nodes - tolerance) >= tolerance`", headers = headers)

         without expiry =? params.expiry:
-          return RestApiResponse.error(Http400, "Expiry required")
+          return RestApiResponse.error(Http400, "Expiry required", headers = headers)

         if expiry <= 0 or expiry >= params.duration:
-          return RestApiResponse.error(Http400, "Expiry needs value bigger than zero and smaller than the request's duration")
+          return RestApiResponse.error(Http400, "Expiry needs value bigger than zero and smaller than the request's duration", headers = headers)

         without purchaseId =? await node.requestStorage(
           cid,
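A worked example of the new defaults against the checks above:

    # nodes = 3, tolerance = 1 (the new defaults)
    let ecK = 3 - 1                # 2 data shards per slot group
    let ecM = 1                    # 1 parity shard (the tolerance)
    assert ecK > 1 and ecK >= ecM  # leopard constraints hold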
@@ -463,14 +514,14 @@ proc initPurchasingApi(node: CodexNodeRef, router: var RestRouter) =
           if error of InsufficientBlocksError:
             return RestApiResponse.error(Http400,
               "Dataset too small for erasure parameters, need at least " &
-              $(ref InsufficientBlocksError)(error).minSize.int & " bytes")
+              $(ref InsufficientBlocksError)(error).minSize.int & " bytes", headers = headers)

-          return RestApiResponse.error(Http500, error.msg)
+          return RestApiResponse.error(Http500, error.msg, headers = headers)

         return RestApiResponse.response(purchaseId.toHex)
       except CatchableError as exc:
         trace "Excepting processing request", exc = exc.msg
-        return RestApiResponse.error(Http500)
+        return RestApiResponse.error(Http500, headers = headers)

   router.api(
     MethodGet,
@@ -479,7 +530,7 @@ proc initPurchasingApi(node: CodexNodeRef, router: var RestRouter) =

       try:
         without contracts =? node.contracts.client:
-          return RestApiResponse.error(Http503, "Purchasing unavailable")
+          return RestApiResponse.error(Http503, "Persistence is not enabled")

         without id =? id.tryGet.catch, error:
           return RestApiResponse.error(Http400, error.msg)

@@ -504,7 +555,7 @@ proc initPurchasingApi(node: CodexNodeRef, router: var RestRouter) =
     "/api/codex/v1/storage/purchases") do () -> RestApiResponse:
       try:
         without contracts =? node.contracts.client:
-          return RestApiResponse.error(Http503, "Purchasing unavailable")
+          return RestApiResponse.error(Http503, "Persistence is not enabled")

         let purchaseIds = contracts.purchasing.getPurchaseIds()
         return RestApiResponse.response($ %purchaseIds, contentType="application/json")
@@ -38,6 +38,8 @@ type
     state* {.serialize.}: string
     requestId* {.serialize.}: RequestId
     slotIndex* {.serialize.}: UInt256
+    request* {.serialize.}: ?StorageRequest
+    reservation* {.serialize.}: ?Reservation

   RestContent* = object
     cid* {.serialize.}: Cid
@@ -180,7 +180,7 @@ proc filled(
   processing.complete()

 proc processSlot(sales: Sales, item: SlotQueueItem, done: Future[void]) =
-  debug "processing slot from queue", requestId = item.requestId,
+  debug "Processing slot from queue", requestId = item.requestId,
     slot = item.slotIndex

   let agent = newSalesAgent(

@@ -202,13 +202,17 @@ proc processSlot(sales: Sales, item: SlotQueueItem, done: Future[void]) =
 proc deleteInactiveReservations(sales: Sales, activeSlots: seq[Slot]) {.async.} =
   let reservations = sales.context.reservations
   without reservs =? await reservations.all(Reservation):
-    info "no unused reservations found for deletion"
     return

   let unused = reservs.filter(r => (
     let slotId = slotId(r.requestId, r.slotIndex)
     not activeSlots.any(slot => slot.id == slotId)
   ))
-  info "found unused reservations for deletion", unused = unused.len
+
+  if unused.len == 0:
+    return
+
+  info "Found unused reservations for deletion", unused = unused.len

   for reservation in unused:

@@ -219,9 +223,9 @@ proc deleteInactiveReservations(sales: Sales, activeSlots: seq[Slot]) {.async.} =
     if err =? (await reservations.deleteReservation(
       reservation.id, reservation.availabilityId
     )).errorOption:
-      error "failed to delete unused reservation", error = err.msg
+      error "Failed to delete unused reservation", error = err.msg
     else:
-      trace "deleted unused reservation"
+      trace "Deleted unused reservation"

 proc mySlots*(sales: Sales): Future[seq[Slot]] {.async.} =
   let market = sales.context.market
@@ -16,7 +16,7 @@
 ## |----------------------------------------|  |--------------------------------------|
 ## | UInt256 | totalSize |                  |  | UInt256 | size      |               |
 ## |----------------------------------------|  |--------------------------------------|
-## | UInt256 | freeSize  |                  |  | SlotId  | slotId    |               |
+## | UInt256 | freeSize  |                  |  | UInt256 | slotIndex |               |
 ## |----------------------------------------|  +--------------------------------------+
 ## | UInt256 | duration  |                  |
 ## |----------------------------------------|

@@ -65,7 +65,7 @@ type
     totalSize* {.serialize.}: UInt256
     freeSize* {.serialize.}: UInt256
     duration* {.serialize.}: UInt256
-    minPrice* {.serialize.}: UInt256
+    minPrice* {.serialize.}: UInt256 # minimal price paid for the whole hosted slot for the request's duration
     maxCollateral* {.serialize.}: UInt256
   Reservation* = ref object
     id* {.serialize.}: ReservationId
@@ -69,11 +69,11 @@ method run*(state: SalePreparing, machine: Machine): Future[?State] {.async.} =
       request.ask.duration,
       request.ask.pricePerSlot,
       request.ask.collateral):
-    debug "no availability found for request, ignoring"
+    debug "No availability found for request, ignoring"

     return some State(SaleIgnored())

-  info "availability found for request, creating reservation"
+  info "Availability found for request, creating reservation"

   without reservation =? await reservations.createReservation(
     availability.id,
@@ -1,4 +1,5 @@
 import ./proofs/backends
 import ./proofs/prover
+import ./proofs/backendfactory

-export circomcompat, prover
+export circomcompat, prover, backendfactory
@@ -0,0 +1,85 @@
+import os
+import strutils
+import pkg/chronos
+import pkg/chronicles
+import pkg/questionable
+import pkg/confutils/defs
+import pkg/stew/io2
+import pkg/ethers
+
+import ../../conf
+import ./backends
+import ./backendutils
+
+proc initializeFromConfig(
+    config: CodexConf,
+    utils: BackendUtils): ?!AnyBackend =
+  if not fileAccessible($config.circomR1cs, {AccessFlags.Read}) or
+     not endsWith($config.circomR1cs, ".r1cs"):
+    return failure("Circom R1CS file not accessible")
+
+  if not fileAccessible($config.circomWasm, {AccessFlags.Read}) or
+     not endsWith($config.circomWasm, ".wasm"):
+    return failure("Circom wasm file not accessible")
+
+  if not fileAccessible($config.circomZkey, {AccessFlags.Read}) or
+     not endsWith($config.circomZkey, ".zkey"):
+    return failure("Circom zkey file not accessible")
+
+  trace "Initialized prover backend from cli config"
+  success(utils.initializeCircomBackend(
+    $config.circomR1cs,
+    $config.circomWasm,
+    $config.circomZkey))
+
+proc r1csFilePath(config: CodexConf): string =
+  config.circuitDir / "proof_main.r1cs"
+
+proc wasmFilePath(config: CodexConf): string =
+  config.circuitDir / "proof_main.wasm"
+
+proc zkeyFilePath(config: CodexConf): string =
+  config.circuitDir / "proof_main.zkey"
+
+proc initializeFromCircuitDirFiles(
+    config: CodexConf,
+    utils: BackendUtils): ?!AnyBackend =
+  if fileExists(config.r1csFilePath) and
+     fileExists(config.wasmFilePath) and
+     fileExists(config.zkeyFilePath):
+    trace "Initialized prover backend from local files"
+    return success(utils.initializeCircomBackend(
+      config.r1csFilePath,
+      config.wasmFilePath,
+      config.zkeyFilePath))
+
+  failure("Circuit files not found")
+
+proc suggestDownloadTool(config: CodexConf) =
+  without address =? config.marketplaceAddress:
+    raise (ref Defect)(msg: "Proving backend initializing while marketplace address not set.")
+
+  let
+    tokens = [
+      "cirdl",
+      "\"" & $config.circuitDir & "\"",
+      config.ethProvider,
+      $address
+    ]
+    instructions = "'./" & tokens.join(" ") & "'"
+
+  warn "Proving circuit files are not found. Please run the following to download them:", instructions
+
+proc initializeBackend*(
+    config: CodexConf,
+    utils: BackendUtils = BackendUtils()): ?!AnyBackend =
+
+  without backend =? initializeFromConfig(config, utils), cliErr:
+    info "Could not initialize prover backend from CLI options...", msg = cliErr.msg
+    without backend =? initializeFromCircuitDirFiles(config, utils), localErr:
+      info "Could not initialize prover backend from circuit dir files...", msg = localErr.msg
+      suggestDownloadTool(config)
+      return failure("CircuitFilesNotFound")
+    # Unexpected: value of backend does not survive leaving each scope. (definition does though...)
+    return success(backend)
+  return success(backend)
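Caller-side handling mirrors the factory's fallback order; a sketch (the fatal/quit reaction is illustrative; the actual caller in the codex.nim hunk above uses .expect):

    without backend =? config.initializeBackend(), err:
      fatal "Could not initialize prover backend", msg = err.msg
      quit QuitFailure
    let prover = Prover.new(store, backend, config.numProofSamples)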
@@ -1,3 +1,6 @@
 import ./backends/circomcompat

 export circomcompat
+
+type
+  AnyBackend* = CircomCompat
@@ -9,17 +9,14 @@

 {.push raises: [].}

 import std/sequtils
+import std/sugar

 import pkg/chronos
 import pkg/questionable/results
 import pkg/circomcompat
 import pkg/poseidon2/io

 import ../../types
 import ../../../stores
 import ../../../merkletree
 import ../../../codextypes
 import ../../../contracts

 import ./converters
@@ -39,6 +36,41 @@ type
     backendCfg : ptr CircomBn254Cfg
     vkp* : ptr CircomKey

+  NormalizedProofInputs*[H] {.borrow: `.`.} = distinct ProofInputs[H]
+
+func normalizeInput*[H](self: CircomCompat, input: ProofInputs[H]):
+  NormalizedProofInputs[H] =
+  ## Parameters in CIRCOM circuits are statically sized and must be properly
+  ## padded before they can be passed onto the circuit. This function takes
+  ## variable length parameters and performs that padding.
+  ##
+  ## The output from this function can be JSON-serialized and used as direct
+  ## inputs to the CIRCOM circuit for testing and debugging when one wishes
+  ## to bypass the Rust FFI.
+
+  let normSamples = collect:
+    for sample in input.samples:
+      var merklePaths = sample.merklePaths
+      merklePaths.setLen(self.slotDepth)
+      Sample[H](
+        cellData: sample.cellData,
+        merklePaths: merklePaths
+      )
+
+  var normSlotProof = input.slotProof
+  normSlotProof.setLen(self.datasetDepth)
+
+  NormalizedProofInputs[H] ProofInputs[H](
+    entropy: input.entropy,
+    datasetRoot: input.datasetRoot,
+    slotIndex: input.slotIndex,
+    slotRoot: input.slotRoot,
+    nCellsPerSlot: input.nCellsPerSlot,
+    nSlotsPerDataSet: input.nSlotsPerDataSet,
+    slotProof: normSlotProof,
+    samples: normSamples
+  )
+
 proc release*(self: CircomCompat) =
   ## Release the ctx
   ##
@@ -49,27 +81,20 @@ proc release*(self: CircomCompat) =
   if not isNil(self.vkp):
     self.vkp.unsafeAddr.release_key()

-proc prove*[H](
+proc prove[H](
   self: CircomCompat,
-  input: ProofInputs[H]): ?!CircomProof =
-  ## Encode buffers using a ctx
-  ##
+  input: NormalizedProofInputs[H]): ?!CircomProof =

-  # NOTE: All inputs are statically sized per circuit
-  # and adjusted accordingly right before being passed
-  # to the circom ffi - `setLen` is used to adjust the
-  # sequence length to the correct size which also 0 pads
-  # to the correct length
   doAssert input.samples.len == self.numSamples,
     "Number of samples does not match"

   doAssert input.slotProof.len <= self.datasetDepth,
-    "Number of slot proofs does not match"
+    "Slot proof is too deep - dataset has more slots than what we can handle?"

   doAssert input.samples.allIt(
     block:
       (it.merklePaths.len <= self.slotDepth + self.blkDepth and
-       it.cellData.len <= self.cellElms * 32)), "Merkle paths length does not match"
+       it.cellData.len == self.cellElms)), "Merkle paths too deep or cells too big for circuit"

   # TODO: All parameters should match circom's static parameters
   var
@@ -116,8 +141,7 @@ proc prove*[H](
   var
     slotProof = input.slotProof.mapIt( it.toBytes ).concat

-  slotProof.setLen(self.datasetDepth) # zero pad inputs to correct size
-
+  doAssert(slotProof.len == self.datasetDepth)
   # arrays are always flattened
   if ctx.pushInputU256Array(
     "slotProof".cstring,
@@ -128,16 +152,14 @@ proc prove*[H](
   for s in input.samples:
     var
       merklePaths = s.merklePaths.mapIt( it.toBytes )
-      data = s.cellData
+      data = s.cellData.mapIt( @(it.toBytes) ).concat

-    merklePaths.setLen(self.slotDepth) # zero pad inputs to correct size
     if ctx.pushInputU256Array(
       "merklePaths".cstring,
       merklePaths[0].addr,
       uint (merklePaths[0].len * merklePaths.len)) != ERR_OK:
       return failure("Failed to push merkle paths")

-    data.setLen(self.cellElms * 32) # zero pad inputs to correct size
     if ctx.pushInputU256Array(
       "cellData".cstring,
       data[0].addr,
@@ -162,6 +184,12 @@ proc prove*[H](

   success proof

+proc prove*[H](
+  self: CircomCompat,
+  input: ProofInputs[H]): ?!CircomProof =
+
+  self.prove(self.normalizeInput(input))
+
 proc verify*[H](
   self: CircomCompat,
   proof: CircomProof,
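As the normalizeInput doc comment notes, padded inputs can be dumped for circuit-side debugging without the Rust FFI; a sketch, assuming a JSON serializer (e.g. `%`) is available for these types:

    # Illustrative: write circuit-shaped inputs for circom's own tooling.
    let normalized = circom.normalizeInput(inputs)
    writeFile("circuit_inputs.json", $(%normalized))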
@@ -0,0 +1,12 @@
+import ./backends
+
+type
+  BackendUtils* = ref object of RootObj
+
+method initializeCircomBackend*(
+    self: BackendUtils,
+    r1csFile: string,
+    wasmFile: string,
+    zKeyFile: string
+): AnyBackend {.base.} =
+  CircomCompat.init(r1csFile, wasmFile, zKeyFile)
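Because initializeCircomBackend is declared {.base.}, tests can swap out circuit loading entirely; a hypothetical stub:

    type StubUtils = ref object of BackendUtils
      initialized*: bool

    method initializeCircomBackend*(
        self: StubUtils,
        r1csFile: string,
        wasmFile: string,
        zKeyFile: string): AnyBackend =
      # Record the call instead of touching circuit files on disk.
      self.initialized = true
      CircomCompat()  # empty placeholder; fine for tests that never prove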
@@ -21,11 +21,13 @@ import ../../merkletree
 import ../../stores
 import ../../market
 import ../../utils/poseidon2digest
+import ../../conf

 import ../builder
 import ../sampler

 import ./backends
+import ./backendfactory
 import ../types

 export backends

@@ -34,7 +36,6 @@
 logScope:
   topics = "codex prover"

 type
-  AnyBackend* = CircomCompat
   AnyProof* = CircomProof

   AnySampler* = Poseidon2Sampler
|
|||
inputs: AnyProofInputs): Future[?!bool] {.async.} =
|
||||
## Prove a statement using backend.
|
||||
## Returns a future that resolves to a proof.
|
||||
|
||||
self.backend.verify(proof, inputs)
|
||||
|
||||
proc new*(
|
||||
|
@ -96,6 +96,6 @@ proc new*(
|
|||
nSamples: int): Prover =
|
||||
|
||||
Prover(
|
||||
backend: backend,
|
||||
store: store,
|
||||
backend: backend,
|
||||
nSamples: nSamples)
|
||||
|
|
|
@@ -38,7 +38,7 @@ type
 func getCell*[T, H](
   self: DataSampler[T, H],
   blkBytes: seq[byte],
-  blkCellIdx: Natural): seq[byte] =
+  blkCellIdx: Natural): seq[H] =

   let
     cellSize = self.builder.cellSize.uint64

@@ -47,7 +47,7 @@ func getCell*[T, H](

   doAssert (dataEnd - dataStart) == cellSize, "Invalid cell size"

-  toInputData[H](blkBytes[dataStart ..< dataEnd])
+  blkBytes[dataStart ..< dataEnd].elements(H).toSeq()

 proc getSample*[T, H](
   self: DataSampler[T, H],
@@ -7,23 +7,13 @@
 ## This file may not be copied, modified, or distributed except according to
 ## those terms.

-import std/sugar
 import std/bitops
 import std/sequtils

 import pkg/questionable/results
 import pkg/poseidon2
 import pkg/poseidon2/io

 import pkg/constantine/math/arithmetic
-
 import pkg/constantine/math/io/io_fields

 import ../../merkletree

-func toInputData*[H](data: seq[byte]): seq[byte] =
-  return toSeq(data.elements(H)).mapIt( @(it.toBytes) ).concat
-
 func extractLowBits*[n: static int](elm: BigInt[n], k: int): uint64 =
   doAssert( k > 0 and k <= 64 )
   var r = 0'u64

@@ -39,6 +29,7 @@ func extractLowBits(fld: Poseidon2Hash, k: int): uint64 =
   return extractLowBits(elm, k);

 func floorLog2*(x : int) : int =
+  doAssert ( x > 0 )
   var k = -1
   var y = x
   while (y > 0):

@@ -47,10 +38,8 @@ func floorLog2*(x : int) : int =
   return k

 func ceilingLog2*(x : int) : int =
-  if (x == 0):
-    return -1
-  else:
-    return (floorLog2(x-1) + 1)
+  doAssert ( x > 0 )
+  return (floorLog2(x - 1) + 1)

 func toBlkInSlot*(cell: Natural, numCells: Natural): Natural =
   let log2 = ceilingLog2(numCells)

@@ -80,7 +69,7 @@ func cellIndices*(
   numCells: Natural, nSamples: Natural): seq[Natural] =

   var indices: seq[Natural]
-  while (indices.len < nSamples):
-    let idx = cellIndex(entropy, slotRoot, numCells, indices.len + 1)
-    indices.add(idx.Natural)
+  for i in 1..nSamples:
+    indices.add(cellIndex(entropy, slotRoot, numCells, i))

   indices
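The loop rewrite keeps sampling deterministic: the same entropy and slot root always select the same cells.

    let first = cellIndices(entropy, slotRoot, numCells = 512, nSamples = 5)
    let second = cellIndices(entropy, slotRoot, numCells = 512, nSamples = 5)
    assert first == second  # pure function of its arguments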
@@ -9,7 +9,7 @@

 type
   Sample*[H] = object
-    cellData*: seq[byte]
+    cellData*: seq[H]
     merklePaths*: seq[H]

   PublicInputs*[H] = object

@@ -24,5 +24,5 @@ type
     slotRoot*: H
     nCellsPerSlot*: Natural
     nSlotsPerDataSet*: Natural
-    slotProof*: seq[H]
-    samples*: seq[Sample[H]]
+    slotProof*: seq[H] # inclusion proof that shows that the slot root (leaf) is part of the dataset (root)
+    samples*: seq[Sample[H]] # inclusion proofs which show that the selected cells (leafs) are part of the slot (roots)
@@ -29,7 +29,9 @@ type
   BlockType* {.pure.} = enum
     Manifest, Block, Both

+  CidCallback* = proc(cid: Cid): Future[void] {.gcsafe, raises:[].}
   BlockStore* = ref object of RootObj
+    onBlockStored*: ?CidCallback

 method getBlock*(self: BlockStore, cid: Cid): Future[?!Block] {.base.} =
   ## Get a block from the blockstore
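A sketch of subscribing to the new hook (pragma spelling per the declaration above; the no-op body is a placeholder):

    let onStored: CidCallback = proc(cid: Cid): Future[void] {.gcsafe, raises: [].} =
      # A real subscriber, like the Advertiser, would enqueue cid here.
      let done = newFuture[void]("onStored")
      done.complete()
      return done
    store.onBlockStored = some onStored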
@@ -197,6 +197,9 @@ method putBlock*(
     return success()

   discard self.putBlockSync(blk)
+  if onBlock =? self.onBlockStored:
+    await onBlock(blk.cid)
+
   return success()

 method putCidAndProof*(
@@ -282,7 +285,8 @@ proc new*(
     cache: cache,
     cidAndProofCache: cidAndProofCache,
     currentSize: currentSize,
-    size: cacheSize)
+    size: cacheSize,
+    onBlockStored: CidCallback.none)

   for blk in blocks:
     discard store.putBlockSync(blk)
@@ -189,6 +189,9 @@ method putBlock*(

     if err =? (await self.updateTotalBlocksCount(plusCount = 1)).errorOption:
       return failure(err)
+
+    if onBlock =? self.onBlockStored:
+      await onBlock(blk.cid)
   else:
     trace "Block already exists"
@@ -11,6 +11,7 @@ import pkg/chronos
 import pkg/datastore
+import pkg/datastore/typedds
 import pkg/libp2p/cid
 import pkg/questionable

 import ../blockstore
 import ../../clock

@@ -103,5 +104,6 @@ func new*(
     clock: clock,
     postFixLen: postFixLen,
     quotaMaxBytes: quotaMaxBytes,
-    blockTtl: blockTtl
+    blockTtl: blockTtl,
+    onBlockStored: CidCallback.none
   )
@@ -121,6 +121,9 @@ switch("define", "ctt_asm=false")
 # Allow the use of old-style case objects for nim config compatibility
 switch("define", "nimOldCaseObjects")

+# Enable compat mode for Chronos V4
+switch("define", "chronosHandleException")
+
 # begin Nimble config (version 1)
 when system.fileExists("nimble.paths"):
   include "nimble.paths"
@@ -24,9 +24,9 @@ RUN echo "export PATH=$PATH:$HOME/.cargo/bin" >> $BASH_ENV

 WORKDIR ${BUILD_HOME}
 COPY . .
-RUN make clean
 RUN make -j ${MAKE_PARALLEL} update
 RUN make -j ${MAKE_PARALLEL}
+RUN make -j ${MAKE_PARALLEL} cirdl

 # Create
 FROM ${IMAGE}

@@ -35,10 +35,10 @@ ARG APP_HOME
 ARG NAT_IP_AUTO

 WORKDIR ${APP_HOME}
-COPY --from=builder ${BUILD_HOME}/build/codex /usr/local/bin
+COPY --from=builder ${BUILD_HOME}/build/* /usr/local/bin
 COPY --from=builder ${BUILD_HOME}/openapi.yaml .
 COPY --from=builder --chmod=0755 ${BUILD_HOME}/docker/docker-entrypoint.sh /
-RUN apt-get update && apt-get install -y libgomp1 bash curl jq && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && apt-get install -y libgomp1 curl jq && rm -rf /var/lib/apt/lists/*
 ENV NAT_IP_AUTO=${NAT_IP_AUTO}
 ENTRYPOINT ["/docker-entrypoint.sh"]
 CMD ["codex"]
@@ -50,6 +50,21 @@ if [ -n "${PRIV_KEY}" ]; then
   echo "Private key set"
 fi

+# Circuit downloader
+# cirdl [circuitPath] [rpcEndpoint] [marketplaceAddress]
+if [[ "$@" == *"prover"* ]]; then
+  echo "Run Circuit downloader"
+  # Set circuits dir from CODEX_CIRCUIT_DIR variables if set
+  if [[ -z "${CODEX_CIRCUIT_DIR}" ]]; then
+    export CODEX_CIRCUIT_DIR="${CODEX_DATA_DIR}/circuits"
+  fi
+  # Download circuits
+  mkdir -p "${CODEX_CIRCUIT_DIR}"
+  chmod 700 "${CODEX_CIRCUIT_DIR}"
+  cirdl "${CODEX_CIRCUIT_DIR}" "${CODEX_ETH_PROVIDER}" "${CODEX_MARKETPLACE_ADDRESS}"
+  [[ $? -ne 0 ]] && { echo "Failed to download circuit files"; exit 1; }
+fi
+
 # Run
 echo "Run Codex node"
 exec "$@"
openapi.yaml (75 changes)
@@ -23,6 +23,8 @@ components:
     Id:
       type: string
      description: 32-byte identifier encoded as a hexadecimal string.
+      minLength: 66
+      maxLength: 66
       example: 0x...

     BigInt:

@@ -136,7 +138,7 @@ components:
         $ref: "#/components/schemas/Duration"
       minPrice:
         type: string
-        description: Minimum price to be paid (in amount of tokens) as decimal string
+        description: Minimal price paid (in amount of tokens) for the whole hosted request's slot for the request's duration as decimal string
       maxCollateral:
         type: string
         description: Maximum collateral user is willing to pay per filled Slot (in amount of tokens) as decimal string
|
|||
$ref: "#/components/schemas/StorageRequest"
|
||||
slotIndex:
|
||||
type: string
|
||||
description: Slot Index as hexadecimal string
|
||||
description: Slot Index as decimal string
|
||||
|
||||
SlotAgent:
|
||||
type: object
|
||||
properties:
|
||||
id:
|
||||
$ref: "#/components/schemas/SlotId"
|
||||
slotIndex:
|
||||
type: string
|
||||
description: Slot Index as decimal string
|
||||
requestId:
|
||||
$ref: "#/components/schemas/Id"
|
||||
request:
|
||||
$ref: "#/components/schemas/StorageRequest"
|
||||
reservation:
|
||||
$ref: "#/components/schemas/Reservation"
|
||||
state:
|
||||
type: string
|
||||
description: Description of the slot's
|
||||
enum:
|
||||
- SaleCancelled
|
||||
- SaleDownloading
|
||||
- SaleErrored
|
||||
- SaleFailed
|
||||
- SaleFilled
|
||||
- SaleFilling
|
||||
- SaleFinished
|
||||
- SaleIgnored
|
||||
- SaleInitialProving
|
||||
- SalePayout
|
||||
- SalePreparing
|
||||
- SaleProving
|
||||
- SaleUnknown
|
||||
|
||||
Reservation:
|
||||
type: object
|
||||
|
@ -183,7 +217,7 @@ components:
|
|||
$ref: "#/components/schemas/Id"
|
||||
slotIndex:
|
||||
type: string
|
||||
description: Slot Index as hexadecimal string
|
||||
description: Slot Index as decimal string
|
||||
|
||||
StorageRequestCreation:
|
||||
type: object
|
||||
|
@@ -259,6 +293,15 @@ components:
       state:
         type: string
         description: Description of the Request's state
+        enum:
+          - cancelled
+          - error
+          - failed
+          - finished
+          - pending
+          - started
+          - submitted
+          - unknown
       error:
         type: string
         description: If the Request failed, the error message is presented here
|
|||
$ref: "#/components/schemas/Slot"
|
||||
|
||||
"503":
|
||||
description: Sales are unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
"/sales/slots/{slotId}":
|
||||
get:
|
||||
|
@ -511,7 +554,7 @@ paths:
|
|||
content:
|
||||
application/json:
|
||||
schema:
|
||||
$ref: "#/components/schemas/Slot"
|
||||
$ref: "#/components/schemas/SlotAgent"
|
||||
|
||||
"400":
|
||||
description: Invalid or missing SlotId
|
||||
|
@ -520,13 +563,13 @@ paths:
|
|||
description: Host is not in an active sale for the slot
|
||||
|
||||
"503":
|
||||
description: Sales are unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
"/sales/availability":
|
||||
get:
|
||||
summary: "Returns storage that is for sale"
|
||||
tags: [ Marketplace ]
|
||||
operationId: getOfferedStorage
|
||||
operationId: getAvailabilities
|
||||
responses:
|
||||
"200":
|
||||
description: Retrieved storage availabilities of the node
|
||||
|
@ -535,11 +578,11 @@ paths:
|
|||
schema:
|
||||
type: array
|
||||
items:
|
||||
$ref: "#/components/schemas/SalesAvailability"
|
||||
$ref: "#/components/schemas/SalesAvailabilityREAD"
|
||||
"500":
|
||||
description: Error getting unused availabilities
|
||||
"503":
|
||||
description: Sales are unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
post:
|
||||
summary: "Offers storage for sale"
|
||||
|
@ -564,7 +607,7 @@ paths:
|
|||
"500":
|
||||
description: Error reserving availability
|
||||
"503":
|
||||
description: Sales are unavailable
|
||||
description: Persistence is not enabled
|
||||
"/sales/availability/{id}":
|
||||
patch:
|
||||
summary: "Updates availability"
|
||||
|
@ -597,10 +640,10 @@ paths:
|
|||
"500":
|
||||
description: Error reserving availability
|
||||
"503":
|
||||
description: Sales are unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
"/sales/availability/{id}/reservations":
|
||||
patch:
|
||||
get:
|
||||
summary: "Get availability's reservations"
|
||||
description: Return's list of Reservations for ongoing Storage Requests that the node hosts.
|
||||
operationId: getReservations
|
||||
|
@ -628,7 +671,7 @@ paths:
|
|||
"500":
|
||||
description: Error getting reservations
|
||||
"503":
|
||||
description: Sales are unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
"/storage/request/{cid}":
|
||||
post:
|
||||
|
@ -659,7 +702,7 @@ paths:
|
|||
"404":
|
||||
description: Request ID not found
|
||||
"503":
|
||||
description: Purchasing is unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
"/storage/purchases":
|
||||
get:
|
||||
|
@ -676,7 +719,7 @@ paths:
|
|||
items:
|
||||
type: string
|
||||
"503":
|
||||
description: Purchasing is unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
"/storage/purchases/{id}":
|
||||
get:
|
||||
|
@ -702,7 +745,7 @@ paths:
|
|||
"404":
|
||||
description: Purchase not found
|
||||
"503":
|
||||
description: Purchasing is unavailable
|
||||
description: Persistence is not enabled
|
||||
|
||||
"/node/spr":
|
||||
get:
|
||||
|
|
(3 binary circuit asset files changed; contents not shown)
@@ -32,6 +32,7 @@ asyncchecksuite "Block Advertising and Discovery":
     peerStore: PeerCtxStore
     blockDiscovery: MockDiscovery
     discovery: DiscoveryEngine
+    advertiser: Advertiser
     wallet: WalletRef
     network: BlockExcNetwork
     localStore: CacheStore

@@ -68,11 +69,17 @@ asyncchecksuite "Block Advertising and Discovery":
       pendingBlocks,
       minPeersPerBlock = 1)

+    advertiser = Advertiser.new(
+      localStore,
+      blockDiscovery
+    )
+
     engine = BlockExcEngine.new(
       localStore,
       wallet,
       network,
       discovery,
+      advertiser,
       peerStore,
       pendingBlocks)
@@ -200,11 +207,17 @@ asyncchecksuite "E2E - Multiple Nodes Discovery":
       pendingBlocks,
       minPeersPerBlock = 1)

+    advertiser = Advertiser.new(
+      localStore,
+      blockDiscovery
+    )
+
     engine = BlockExcEngine.new(
       localStore,
       wallet,
       network,
       discovery,
+      advertiser,
       peerStore,
       pendingBlocks)
     networkStore = NetworkStore.new(engine, localStore)
@@ -74,30 +74,6 @@ asyncchecksuite "Test Discovery Engine":
     await allFuturesThrowing(allFinished(wants)).wait(1.seconds)
     await discoveryEngine.stop()

-  test "Should Advertise Haves":
-    var
-      localStore = CacheStore.new(blocks.mapIt(it))
-      discoveryEngine = DiscoveryEngine.new(
-        localStore,
-        peerStore,
-        network,
-        blockDiscovery,
-        pendingBlocks,
-        discoveryLoopSleep = 100.millis)
-      haves = collect(initTable):
-        for cid in @[manifestBlock.cid, manifest.treeCid]:
-          { cid: newFuture[void]() }
-
-    blockDiscovery.publishBlockProvideHandler =
-      proc(d: MockDiscovery, cid: Cid) {.async, gcsafe.} =
-        if not haves[cid].finished:
-          haves[cid].complete
-
-    await discoveryEngine.start()
-    await allFuturesThrowing(
-      allFinished(toSeq(haves.values))).wait(5.seconds)
-    await discoveryEngine.stop()
-
   test "Should queue discovery request":
     var
       localStore = CacheStore.new()

@@ -191,36 +167,3 @@ asyncchecksuite "Test Discovery Engine":

     reqs.complete()
     await discoveryEngine.stop()
-
-  test "Should not request if there is already an inflight advertise request":
-    var
-      localStore = CacheStore.new()
-      discoveryEngine = DiscoveryEngine.new(
-        localStore,
-        peerStore,
-        network,
-        blockDiscovery,
-        pendingBlocks,
-        discoveryLoopSleep = 100.millis,
-        concurrentAdvReqs = 2)
-      reqs = newFuture[void]()
-      count = 0
-
-    blockDiscovery.publishBlockProvideHandler =
-      proc(d: MockDiscovery, cid: Cid) {.async, gcsafe.} =
-        check cid == blocks[0].cid
-        if count > 0:
-          check false
-        count.inc
-
-        await reqs # queue the request
-
-    await discoveryEngine.start()
-    discoveryEngine.queueProvideBlocksReq(@[blocks[0].cid])
-    await sleepAsync(200.millis)
-
-    discoveryEngine.queueProvideBlocksReq(@[blocks[0].cid])
-    await sleepAsync(200.millis)
-
-    reqs.complete()
-    await discoveryEngine.stop()
@@ -0,0 +1,106 @@
+import std/sequtils
+import std/random
+
+import pkg/chronos
+import pkg/libp2p/routing_record
+import pkg/codexdht/discv5/protocol as discv5
+
+import pkg/codex/blockexchange
+import pkg/codex/stores
+import pkg/codex/chunker
+import pkg/codex/discovery
+import pkg/codex/blocktype as bt
+import pkg/codex/manifest
+
+import ../../../asynctest
+import ../../helpers
+import ../../helpers/mockdiscovery
+import ../../examples
+
+asyncchecksuite "Advertiser":
+  var
+    blockDiscovery: MockDiscovery
+    localStore: BlockStore
+    advertiser: Advertiser
+  let
+    manifest = Manifest.new(
+      treeCid = Cid.example,
+      blockSize = 123.NBytes,
+      datasetSize = 234.NBytes)
+    manifestBlk = Block.new(data = manifest.encode().tryGet(), codec = ManifestCodec).tryGet()
+
+  setup:
+    blockDiscovery = MockDiscovery.new()
+    localStore = CacheStore.new()
+
+    advertiser = Advertiser.new(
+      localStore,
+      blockDiscovery
+    )
+
+    await advertiser.start()
+
+  teardown:
+    await advertiser.stop()
+
+  test "blockStored should queue manifest Cid for advertising":
+    (await localStore.putBlock(manifestBlk)).tryGet()
+
+    check:
+      manifestBlk.cid in advertiser.advertiseQueue
+
+  test "blockStored should queue tree Cid for advertising":
+    (await localStore.putBlock(manifestBlk)).tryGet()
+
+    check:
+      manifest.treeCid in advertiser.advertiseQueue
+
+  test "blockStored should not queue non-manifest non-tree CIDs for discovery":
+    let blk = bt.Block.example
+
+    (await localStore.putBlock(blk)).tryGet()
+
+    check:
+      blk.cid notin advertiser.advertiseQueue
+
+  test "Should not queue if there is already an inflight advertise request":
+    var
+      reqs = newFuture[void]()
+      manifestCount = 0
+      treeCount = 0
+
+    blockDiscovery.publishBlockProvideHandler =
+      proc(d: MockDiscovery, cid: Cid) {.async, gcsafe.} =
+        if cid == manifestBlk.cid:
+          inc manifestCount
+        if cid == manifest.treeCid:
+          inc treeCount
+
+        await reqs # queue the request
+
+    (await localStore.putBlock(manifestBlk)).tryGet()
+    (await localStore.putBlock(manifestBlk)).tryGet()
+
+    reqs.complete()
+    check eventually manifestCount == 1
+    check eventually treeCount == 1
+
+  test "Should advertise existing manifests and their trees":
+    let
+      newStore = CacheStore.new([manifestBlk])
+
+    await advertiser.stop()
+    advertiser = Advertiser.new(
+      newStore,
+      blockDiscovery
+    )
+    await advertiser.start()
+
+    check eventually manifestBlk.cid in advertiser.advertiseQueue
+    check eventually manifest.treeCid in advertiser.advertiseQueue
+
+  test "Stop should clear onBlockStored callback":
+    await advertiser.stop()
+
+    check:
+      localStore.onBlockStored.isNone()
@@ -78,11 +78,17 @@ asyncchecksuite "NetworkStore engine basic":
       blockDiscovery,
       pendingBlocks)

+    advertiser = Advertiser.new(
+      localStore,
+      blockDiscovery
+    )
+
     engine = BlockExcEngine.new(
       localStore,
       wallet,
       network,
       discovery,
+      advertiser,
       peerStore,
       pendingBlocks)
@@ -113,11 +119,17 @@ asyncchecksuite "NetworkStore engine basic":
       blockDiscovery,
       pendingBlocks)

+    advertiser = Advertiser.new(
+      localStore,
+      blockDiscovery
+    )
+
     engine = BlockExcEngine.new(
       localStore,
       wallet,
       network,
       discovery,
+      advertiser,
       peerStore,
       pendingBlocks)
|
|||
network: BlockExcNetwork
|
||||
engine: BlockExcEngine
|
||||
discovery: DiscoveryEngine
|
||||
advertiser: Advertiser
|
||||
peerCtx: BlockExcPeerCtx
|
||||
localStore: BlockStore
|
||||
blocks: seq[Block]
|
||||
|
@ -176,11 +189,17 @@ asyncchecksuite "NetworkStore engine handlers":
|
|||
blockDiscovery,
|
||||
pendingBlocks)
|
||||
|
||||
advertiser = Advertiser.new(
|
||||
localStore,
|
||||
blockDiscovery
|
||||
)
|
||||
|
||||
engine = BlockExcEngine.new(
|
||||
localStore,
|
||||
wallet,
|
||||
network,
|
||||
discovery,
|
||||
advertiser,
|
||||
peerStore,
|
||||
pendingBlocks)
|
||||
|
||||
|
@ -390,51 +409,6 @@ asyncchecksuite "NetworkStore engine handlers":
|
|||
discard await allFinished(pending)
|
||||
await allFuturesThrowing(cancellations.values().toSeq)
|
||||
|
||||
test "resolveBlocks should queue manifest CIDs for discovery":
|
||||
engine.network = BlockExcNetwork(
|
||||
request: BlockExcRequest(sendWantCancellations: NopSendWantCancellationsProc))
|
||||
|
||||
let
|
||||
manifest = Manifest.new(
|
||||
treeCid = Cid.example,
|
||||
blockSize = 123.NBytes,
|
||||
datasetSize = 234.NBytes
|
||||
)
|
||||
|
||||
let manifestBlk = Block.new(data = manifest.encode().tryGet(), codec = ManifestCodec).tryGet()
|
||||
let blks = @[manifestBlk]
|
||||
|
||||
await engine.resolveBlocks(blks)
|
||||
|
||||
check:
|
||||
manifestBlk.cid in engine.discovery.advertiseQueue
|
||||
|
||||
test "resolveBlocks should queue tree CIDs for discovery":
|
||||
engine.network = BlockExcNetwork(
|
||||
request: BlockExcRequest(sendWantCancellations: NopSendWantCancellationsProc))
|
||||
|
||||
let
|
||||
tCid = Cid.example
|
||||
delivery = BlockDelivery(blk: Block.example, address: BlockAddress(leaf: true, treeCid: tCid))
|
||||
|
||||
await engine.resolveBlocks(@[delivery])
|
||||
|
||||
check:
|
||||
tCid in engine.discovery.advertiseQueue
|
||||
|
||||
test "resolveBlocks should not queue non-manifest non-tree CIDs for discovery":
|
||||
engine.network = BlockExcNetwork(
|
||||
request: BlockExcRequest(sendWantCancellations: NopSendWantCancellationsProc))
|
||||
|
||||
let
|
||||
blkCid = Cid.example
|
||||
delivery = BlockDelivery(blk: Block.example, address: BlockAddress(leaf: false, cid: blkCid))
|
||||
|
||||
await engine.resolveBlocks(@[delivery])
|
||||
|
||||
check:
|
||||
blkCid notin engine.discovery.advertiseQueue
|
||||
|
||||
asyncchecksuite "Task Handler":
|
||||
var
|
||||
rng: Rng
|
||||
|
@ -448,6 +422,7 @@ asyncchecksuite "Task Handler":
|
|||
network: BlockExcNetwork
|
||||
engine: BlockExcEngine
|
||||
discovery: DiscoveryEngine
|
||||
advertiser: Advertiser
|
||||
localStore: BlockStore
|
||||
|
||||
peersCtx: seq[BlockExcPeerCtx]
|
||||
|
@ -481,11 +456,17 @@ asyncchecksuite "Task Handler":
|
|||
blockDiscovery,
|
||||
pendingBlocks)
|
||||
|
||||
advertiser = Advertiser.new(
|
||||
localStore,
|
||||
blockDiscovery
|
||||
)
|
||||
|
||||
engine = BlockExcEngine.new(
|
||||
localStore,
|
||||
wallet,
|
||||
network,
|
||||
discovery,
|
||||
advertiser,
|
||||
peerStore,
|
||||
pendingBlocks)
|
||||
peersCtx = @[]
|
||||
|
|
|
@@ -1,5 +1,6 @@
import ./engine/testengine
import ./engine/testblockexc
import ./engine/testpayments
import ./engine/testadvertiser

{.warning[UnusedImport]: off.}
@@ -101,7 +101,8 @@ proc new*(_: type MockMarket): MockMarket =
    proofs: ProofConfig(
      period: 10.u256,
      timeout: 5.u256,
      downtime: 64.uint8
      downtime: 64.uint8,
      downtimeProduct: 67.uint8
    )
  )
  MockMarket(signer: Address.example, config: config)

@@ -419,16 +420,21 @@ method subscribeProofSubmission*(mock: MockMarket,
  mock.subscriptions.onProofSubmitted.add(subscription)
  return subscription

method queryPastStorageRequests*(market: MockMarket,
                                 blocksAgo: int):
                                Future[seq[PastStorageRequest]] {.async.} =
  # MockMarket does not have the concept of blocks, so simply return all
  # previous events
method queryPastEvents*[T: MarketplaceEvent](
  market: MockMarket,
  _: type T,
  blocksAgo: int): Future[seq[T]] {.async.} =

  if T of StorageRequested:
    return market.requested.map(request =>
      PastStorageRequest(requestId: request.id,
      StorageRequested(requestId: request.id,
        ask: request.ask,
        expiry: request.expiry)
    )
  elif T of SlotFilled:
    return market.filled.map(slot =>
      SlotFilled(requestId: slot.requestId, slotIndex: slot.slotIndex)
    )

method unsubscribe*(subscription: RequestSubscription) {.async.} =
  subscription.market.subscriptions.onRequest.keepItIf(it != subscription)
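Call sites swap one method per event kind for a single generic entry point; the on-chain market tests further down in this diff use exactly this shape:

  let requests = await market.queryPastEvents(StorageRequested, blocksAgo = 5)
  let fills = await market.queryPastEvents(SlotFilled, blocksAgo = 5)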
@@ -40,8 +40,9 @@ proc generateNodes*(
      localStore = CacheStore.new(blocks.mapIt( it ))
      peerStore = PeerCtxStore.new()
      pendingBlocks = PendingBlocksManager.new()
      advertiser = Advertiser.new(localStore, discovery)
      blockDiscovery = DiscoveryEngine.new(localStore, peerStore, network, discovery, pendingBlocks)
      engine = BlockExcEngine.new(localStore, wallet, network, blockDiscovery, peerStore, pendingBlocks)
      engine = BlockExcEngine.new(localStore, wallet, network, blockDiscovery, advertiser, peerStore, pendingBlocks)
      networkStore = NetworkStore.new(engine, localStore)

    switch.mount(network)
@@ -82,6 +82,7 @@ template setupAndTearDown*() {.dirty.} =
    peerStore: PeerCtxStore
    pendingBlocks: PendingBlocksManager
    discovery: DiscoveryEngine
    advertiser: Advertiser
    taskpool: Taskpool

  let

@@ -109,7 +110,8 @@ template setupAndTearDown*() {.dirty.} =
    peerStore = PeerCtxStore.new()
    pendingBlocks = PendingBlocksManager.new()
    discovery = DiscoveryEngine.new(localStore, peerStore, network, blockDiscovery, pendingBlocks)
    engine = BlockExcEngine.new(localStore, wallet, network, discovery, peerStore, pendingBlocks)
    advertiser = Advertiser.new(localStore, blockDiscovery)
    engine = BlockExcEngine.new(localStore, wallet, network, discovery, advertiser, peerStore, pendingBlocks)
    store = NetworkStore.new(engine, localStore)
    taskpool = Taskpool.new(num_threads = countProcessors())
    node = CodexNodeRef.new(

@@ -120,8 +122,6 @@ template setupAndTearDown*() {.dirty.} =
      discovery = blockDiscovery,
      taskpool = taskpool)

    await node.start()

  teardown:
    close(file)
    await node.stop()
@@ -49,6 +49,9 @@ privateAccess(CodexNodeRef) # enable access to private fields
asyncchecksuite "Test Node - Basic":
  setupAndTearDown()

  setup:
    await node.start()

  test "Fetch Manifest":
    let
      manifest = await storeDataGetManifest(localStore, chunker)
@@ -323,8 +323,7 @@ asyncchecksuite "Sales":
        slot: UInt256,
        onBatch: BatchProc): Future[?!void] {.async.} =
      let blk = bt.Block.new( @[1.byte] ).get
      onBatch( blk.repeat(request.ask.slotSize.truncate(int)) )
      return success()
      await onBatch( blk.repeat(request.ask.slotSize.truncate(int)) )

    createAvailability()
    await market.requestStorage(request)

@@ -337,8 +336,8 @@ asyncchecksuite "Sales":
        onBatch: BatchProc): Future[?!void] {.async.} =
      slotIndex = slot
      let blk = bt.Block.new( @[1.byte] ).get
      onBatch(@[ blk ])
      return success()
      await onBatch(@[ blk ])

    let sold = newFuture[void]()
    sales.onSale = proc(request: StorageRequest, slotIndex: UInt256) =
      sold.complete()
@@ -524,7 +524,7 @@ suite "Slot queue":
      request.ask,
      request.expiry,
      seen = true)
    queue.push(item)
    check queue.push(item).isOk
    check eventually queue.paused
    check onProcessSlotCalledWith.len == 0

@@ -534,7 +534,7 @@ suite "Slot queue":

    let request = StorageRequest.example
    var items = SlotQueueItem.init(request)
    queue.push(items)
    check queue.push(items).isOk
    # check all items processed
    check eventually queue.len == 0

@@ -546,7 +546,7 @@ suite "Slot queue":
      request.expiry,
      seen = true)
    check queue.paused
    queue.push(item0)
    check queue.push(item0).isOk
    check queue.paused

  test "paused queue waits for unpause before continuing processing":

@@ -558,7 +558,7 @@ suite "Slot queue":
      seen = false)
    check queue.paused
    # push causes unpause
    queue.push(item)
    check queue.push(item).isOk
    # check all items processed
    check eventually onProcessSlotCalledWith == @[
      (item.requestId, item.slotIndex),

@@ -576,8 +576,8 @@ suite "Slot queue":
      request.ask,
      request.expiry,
      seen = true)
    queue.push(item0)
    queue.push(item1)
    check queue.push(item0).isOk
    check queue.push(item1).isOk
    check queue[0].seen
    check queue[1].seen
@@ -17,21 +17,6 @@ import pkg/codex/utils/json

export types

func fromCircomData*(_: type Poseidon2Hash, cellData: seq[byte]): seq[Poseidon2Hash] =
  var
    pos = 0
    cellElms: seq[Bn254Fr]
  while pos < cellData.len:
    var
      step = 32
      offset = min(pos + step, cellData.len)
      data = cellData[pos..<offset]
    let ff = Bn254Fr.fromBytes(data.toArray32).get
    cellElms.add(ff)
    pos += data.len

  cellElms

func toJsonDecimal*(big: BigInt[254]): string =
  let s = big.toDecimal.strip( leading = true, trailing = false, chars = {'0'} )
  if s.len == 0: "0" else: s
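What `toJsonDecimal` guards against, shown with plain `std/strutils` on ordinary strings: `toDecimal` can emit leading zeros, the JSON proof input wants a canonical decimal, and an all-zero element must still serialize as "0" rather than as an empty string.

  import std/strutils
  assert "000123".strip(leading = true, trailing = false, chars = {'0'}) == "123"
  assert "000".strip(leading = true, trailing = false, chars = {'0'}) == ""   # hence the "0" fallback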
@@ -78,13 +63,16 @@ func toJson*(input: ProofInputs[Poseidon2Hash]): JsonNode =
    "slotRoot": input.slotRoot.toDecimal,
    "slotProof": input.slotProof.mapIt( it.toBig.toJsonDecimal ),
    "cellData": input.samples.mapIt(
      toSeq( it.cellData.elements(Poseidon2Hash) ).mapIt( it.toBig.toJsonDecimal )
      it.cellData.mapIt( it.toBig.toJsonDecimal )
    ),
    "merklePaths": input.samples.mapIt(
      it.merklePaths.mapIt( it.toBig.toJsonDecimal )
    )
  }

func toJson*(input: NormalizedProofInputs[Poseidon2Hash]): JsonNode =
  toJson(ProofInputs[Poseidon2Hash](input))

func jsonToProofInput*(_: type Poseidon2Hash, inputJson: JsonNode): ProofInputs[Poseidon2Hash] =
  let
    cellData =

@@ -93,10 +81,12 @@ func jsonToProofInput*(_: type Poseidon2Hash, inputJson: JsonNode): ProofInputs[
        block:
          var
            big: BigInt[256]
            data = newSeq[byte](big.bits div 8)
            hash: Poseidon2Hash
            data: array[32, byte]
          assert bool(big.fromDecimal( it.str ))
          data.marshal(big, littleEndian)
          data
          assert data.marshal(big, littleEndian)

          Poseidon2Hash.fromBytes(data).get
      ).concat # flatten out elements
    )
@@ -58,7 +58,7 @@ suite "Test Sampler - control samples":
        proofInput.nCellsPerSlot,
        sample.merklePaths[5..<9]).tryGet

      cellData = Poseidon2Hash.fromCircomData(sample.cellData)
      cellData = sample.cellData
      cellLeaf = Poseidon2Hash.spongeDigest(cellData, rate = 2).tryGet
      slotLeaf = cellProof.reconstructRoot(cellLeaf).tryGet

@@ -158,7 +158,7 @@ suite "Test Sampler":
        nSlotCells,
        sample.merklePaths[5..<sample.merklePaths.len]).tryGet

      cellData = Poseidon2Hash.fromCircomData(sample.cellData)
      cellData = sample.cellData
      cellLeaf = Poseidon2Hash.spongeDigest(cellData, rate = 2).tryGet
      slotLeaf = cellProof.reconstructRoot(cellLeaf).tryGet
@@ -0,0 +1,103 @@
import os
import ../../asynctest

import pkg/chronos
import pkg/confutils/defs
import pkg/codex/conf
import pkg/codex/slots/proofs/backends
import pkg/codex/slots/proofs/backendfactory
import pkg/codex/slots/proofs/backendutils

import ../helpers
import ../examples

type
  BackendUtilsMock = ref object of BackendUtils
    argR1csFile: string
    argWasmFile: string
    argZKeyFile: string

method initializeCircomBackend*(
  self: BackendUtilsMock,
  r1csFile: string,
  wasmFile: string,
  zKeyFile: string
): AnyBackend =
  self.argR1csFile = r1csFile
  self.argWasmFile = wasmFile
  self.argZKeyFile = zKeyFile
  # We return a backend with *something* that's not nil that we can check for.
  var
    key = VerifyingKey(icLen: 123)
    vkpPtr: ptr VerifyingKey = key.addr
  return CircomCompat(vkp: vkpPtr)

suite "Test BackendFactory":
  let
    utilsMock = BackendUtilsMock()
    circuitDir = "testecircuitdir"

  setup:
    createDir(circuitDir)

  teardown:
    removeDir(circuitDir)

  test "Should create backend from cli config":
    let
      config = CodexConf(
        cmd: StartUpCmd.persistence,
        nat: ValidIpAddress.init("127.0.0.1"),
        discoveryIp: ValidIpAddress.init(IPv4_any()),
        metricsAddress: ValidIpAddress.init("127.0.0.1"),
        persistenceCmd: PersistenceCmd.prover,
        marketplaceAddress: EthAddress.example.some,
        circomR1cs: InputFile("tests/circuits/fixtures/proof_main.r1cs"),
        circomWasm: InputFile("tests/circuits/fixtures/proof_main.wasm"),
        circomZkey: InputFile("tests/circuits/fixtures/proof_main.zkey")
      )
      backend = config.initializeBackend(utilsMock).tryGet

    check:
      backend.vkp != nil
      utilsMock.argR1csFile == $config.circomR1cs
      utilsMock.argWasmFile == $config.circomWasm
      utilsMock.argZKeyFile == $config.circomZkey

  test "Should create backend from local files":
    let
      config = CodexConf(
        cmd: StartUpCmd.persistence,
        nat: ValidIpAddress.init("127.0.0.1"),
        discoveryIp: ValidIpAddress.init(IPv4_any()),
        metricsAddress: ValidIpAddress.init("127.0.0.1"),
        persistenceCmd: PersistenceCmd.prover,
        marketplaceAddress: EthAddress.example.some,

        # Set the circuitDir such that the tests/circuits/fixtures/ files
        # will be picked up as local files:
        circuitDir: OutDir("tests/circuits/fixtures")
      )
      backend = config.initializeBackend(utilsMock).tryGet

    check:
      backend.vkp != nil
      utilsMock.argR1csFile == config.circuitDir / "proof_main.r1cs"
      utilsMock.argWasmFile == config.circuitDir / "proof_main.wasm"
      utilsMock.argZKeyFile == config.circuitDir / "proof_main.zkey"

  test "Should suggest usage of downloader tool when files not available":
    let
      config = CodexConf(
        cmd: StartUpCmd.persistence,
        nat: ValidIpAddress.init("127.0.0.1"),
        discoveryIp: ValidIpAddress.init(IPv4_any()),
        metricsAddress: ValidIpAddress.init("127.0.0.1"),
        persistenceCmd: PersistenceCmd.prover,
        marketplaceAddress: EthAddress.example.some,
        circuitDir: OutDir(circuitDir)
      )
      backendResult = config.initializeBackend(utilsMock)

    check:
      backendResult.isErr
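Taken together, the three tests fix the lookup order of `initializeBackend`: explicitly configured `--circom-r1cs`/`--circom-wasm`/`--circom-zkey` files are used first; failing that, `proof_main.r1cs`, `proof_main.wasm` and `proof_main.zkey` are looked up under `circuitDir`; and when neither source yields the files, the returned error points the operator at the download tool.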
@@ -15,6 +15,8 @@ import pkg/codex/chunker
import pkg/codex/blocktype as bt
import pkg/codex/slots
import pkg/codex/stores
import pkg/codex/conf
import pkg/confutils/defs
import pkg/poseidon2/io
import pkg/codex/utils/poseidon2digest

@@ -24,38 +26,36 @@ import ./backends/helpers

suite "Test Prover":
  let
    slotId = 1
    samples = 5
    ecK = 3
    ecM = 2
    numDatasetBlocks = 8
    blockSize = DefaultBlockSize
    cellSize = DefaultCellSize
    repoTmp = TempLevelDb.new()
    metaTmp = TempLevelDb.new()
    challenge = 1234567.toF.toBytes.toArray32

  var
    datasetBlocks: seq[bt.Block]
    store: BlockStore
    manifest: Manifest
    protected: Manifest
    verifiable: Manifest
    sampler: Poseidon2Sampler
    prover: Prover

  setup:
    let
      repoDs = repoTmp.newDb()
      metaDs = metaTmp.newDb()
      config = CodexConf(
        cmd: StartUpCmd.persistence,
        nat: ValidIpAddress.init("127.0.0.1"),
        discoveryIp: ValidIpAddress.init(IPv4_any()),
        metricsAddress: ValidIpAddress.init("127.0.0.1"),
        persistenceCmd: PersistenceCmd.prover,
        circomR1cs: InputFile("tests/circuits/fixtures/proof_main.r1cs"),
        circomWasm: InputFile("tests/circuits/fixtures/proof_main.wasm"),
        circomZkey: InputFile("tests/circuits/fixtures/proof_main.zkey"),
        numProofSamples: samples
      )
      backend = config.initializeBackend().tryGet()

    store = RepoStore.new(repoDs, metaDs)

    (manifest, protected, verifiable) =
      await createVerifiableManifest(
        store,
        numDatasetBlocks,
        ecK, ecM,
        blockSize,
        cellSize)
    prover = Prover.new(store, backend, config.numProofSamples)

  teardown:
    await repoTmp.destroyDb()
@@ -63,13 +63,41 @@ suite "Test Prover":

  test "Should sample and prove a slot":
    let
      r1cs = "tests/circuits/fixtures/proof_main.r1cs"
      wasm = "tests/circuits/fixtures/proof_main.wasm"
      (_, _, verifiable) =
        await createVerifiableManifest(
          store,
          8, # number of blocks in the original dataset (before EC)
          5, # ecK
          3, # ecM
          blockSize,
          cellSize)

      circomBackend = CircomCompat.init(r1cs, wasm)
      prover = Prover.new(store, circomBackend, samples)
      challenge = 1234567.toF.toBytes.toArray32
      (inputs, proof) = (await prover.prove(1, verifiable, challenge)).tryGet
    let
      (inputs, proof) = (
        await prover.prove(1, verifiable, challenge)).tryGet

    check:
      (await prover.verify(proof, inputs)).tryGet == true

  test "Should generate valid proofs when slots consist of single blocks":

    # To get single-block slots, we just need to set the number of blocks in
    # the original dataset to be the same as ecK. The total number of blocks
    # after generating random data for parity will be ecK + ecM, which will
    # match the number of slots.
    let
      (_, _, verifiable) =
        await createVerifiableManifest(
          store,
          2, # number of blocks in the original dataset (before EC)
          2, # ecK
          1, # ecM
          blockSize,
          cellSize)

    let
      (inputs, proof) = (
        await prover.prove(1, verifiable, challenge)).tryGet

    check:
      (await prover.verify(proof, inputs)).tryGet == true
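Worked numbers for the comment above: with 2 dataset blocks and ecK = 2, each slot holds 2 div 2 = 1 block; ecM = 1 adds one parity slot, giving ecK + ecM = 3 slots of a single block each.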
@@ -15,6 +15,7 @@ import pkg/codex/utils

import ../../asynctest
import ../helpers
import ../examples

type
  StoreProvider* = proc(): BlockStore {.gcsafe.}

@@ -56,6 +57,16 @@ proc commonBlockStoreTests*(name: string,
    (await store.putBlock(newBlock1)).tryGet()
    check (await store.hasBlock(newBlock1.cid)).tryGet()

  test "putBlock raises onBlockStored":
    var storedCid = Cid.example
    proc onStored(cid: Cid) {.async.} =
      storedCid = cid
    store.onBlockStored = onStored.some()

    (await store.putBlock(newBlock1)).tryGet()

    check storedCid == newBlock1.cid

  test "getBlock":
    (await store.putBlock(newBlock)).tryGet()
    let blk = await store.getBlock(newBlock.cid)
@@ -3,5 +3,6 @@ import ./slots/testsampler
import ./slots/testconverters
import ./slots/testbackends
import ./slots/testprover
import ./slots/testbackendfactory

{.warning[UnusedImport]: off.}
@@ -11,6 +11,7 @@ ethersuite "Marketplace contracts":
  let proof = Groth16Proof.example

  var client, host: Signer
  var rewardRecipient, collateralRecipient: Address
  var marketplace: Marketplace
  var token: Erc20Token
  var periodicity: Periodicity

@@ -24,6 +25,8 @@ ethersuite "Marketplace contracts":
  setup:
    client = ethProvider.getSigner(accounts[0])
    host = ethProvider.getSigner(accounts[1])
    rewardRecipient = accounts[2]
    collateralRecipient = accounts[3]

    let address = Marketplace.address(dummyVerifier = true)
    marketplace = Marketplace.new(address, ethProvider.getSigner())

@@ -82,8 +85,27 @@ ethersuite "Marketplace contracts":
    let startBalance = await token.balanceOf(address)
    discard await marketplace.freeSlot(slotId)
    let endBalance = await token.balanceOf(address)

    check endBalance == (startBalance + request.ask.duration * request.ask.reward + request.ask.collateral)

  test "can be paid out at the end, specifying reward and collateral recipient":
    switchAccount(host)
    let hostAddress = await host.getAddress()
    await startContract()
    let requestEnd = await marketplace.requestEnd(request.id)
    await ethProvider.advanceTimeTo(requestEnd.u256 + 1)
    let startBalanceHost = await token.balanceOf(hostAddress)
    let startBalanceReward = await token.balanceOf(rewardRecipient)
    let startBalanceCollateral = await token.balanceOf(collateralRecipient)
    discard await marketplace.freeSlot(slotId, rewardRecipient, collateralRecipient)
    let endBalanceHost = await token.balanceOf(hostAddress)
    let endBalanceReward = await token.balanceOf(rewardRecipient)
    let endBalanceCollateral = await token.balanceOf(collateralRecipient)

    check endBalanceHost == startBalanceHost
    check endBalanceReward == (startBalanceReward + request.ask.duration * request.ask.reward)
    check endBalanceCollateral == (startBalanceCollateral + request.ask.collateral)

  test "cannot mark proofs missing for cancelled request":
    let expiry = await marketplace.requestExpiry(request.id)
    await ethProvider.advanceTimeTo((expiry + 1).u256)
@@ -1,30 +1,47 @@
import std/options
import std/importutils
import pkg/chronos
import pkg/ethers/erc20
import codex/contracts
import ../ethertest
import ./examples
import ./time
import ./deployment

privateAccess(OnChainMarket) # enable access to private fields

ethersuite "On-Chain Market":
  let proof = Groth16Proof.example

  var market: OnChainMarket
  var marketplace: Marketplace
  var token: Erc20Token
  var request: StorageRequest
  var slotIndex: UInt256
  var periodicity: Periodicity
  var host: Signer
  var hostRewardRecipient: Address

  proc switchAccount(account: Signer) =
    marketplace = marketplace.connect(account)
    token = token.connect(account)
    market = OnChainMarket.new(marketplace, market.rewardRecipient)

  setup:
    let address = Marketplace.address(dummyVerifier = true)
    marketplace = Marketplace.new(address, ethProvider.getSigner())
    let config = await marketplace.config()
    hostRewardRecipient = accounts[2]

    market = OnChainMarket.new(marketplace)
    let tokenAddress = await marketplace.token()
    token = Erc20Token.new(tokenAddress, ethProvider.getSigner())

    periodicity = Periodicity(seconds: config.proofs.period)

    request = StorageRequest.example
    request.client = accounts[0]
    host = ethProvider.getSigner(accounts[1])

    slotIndex = (request.ask.slots div 2).u256
@@ -72,11 +89,18 @@ ethersuite "On-Chain Market":
    let r = await market.getRequest(request.id)
    check (r) == some request

  test "supports withdrawing of funds":
  test "withdraws funds to client":
    let clientAddress = request.client

    await market.requestStorage(request)
    await advanceToCancelledRequest(request)
    let startBalanceClient = await token.balanceOf(clientAddress)
    await market.withdrawFunds(request.id)

    let endBalanceClient = await token.balanceOf(clientAddress)

    check endBalanceClient == (startBalanceClient + request.price)

  test "supports request subscriptions":
    var receivedIds: seq[RequestId]
    var receivedAsks: seq[StorageAsk]
@@ -256,7 +280,7 @@ ethersuite "On-Chain Market":
      receivedIds.add(requestId)

    let subscription = await market.subscribeRequestCancelled(request.id, onRequestCancelled)
    advanceToCancelledRequest(otherRequest) # shares expiry with otherRequest
    await advanceToCancelledRequest(otherRequest) # shares expiry with otherRequest
    await market.withdrawFunds(otherRequest.id)
    check receivedIds.len == 0
    await market.withdrawFunds(request.id)
@@ -324,7 +348,7 @@ ethersuite "On-Chain Market":
    let slotId = request.slotId(slotIndex)
    check (await market.slotState(slotId)) == SlotState.Filled

  test "can query past events":
  test "can query past StorageRequested events":
    var request1 = StorageRequest.example
    var request2 = StorageRequest.example
    request1.client = accounts[0]

@@ -335,21 +359,84 @@ ethersuite "On-Chain Market":

    # `market.requestStorage` executes an `approve` tx before the
    # `requestStorage` tx, so that's two PoA blocks per `requestStorage` call (6
    # blocks for 3 calls). `fromBlock` and `toBlock` are inclusive, so to check
    # 6 blocks, we only need to check 5 "blocks ago". We don't need to check the
    # `approve` for the first `requestStorage` call, so that's 1 less again = 4
    # "blocks ago".
    # blocks for 3 calls). We don't need to check the `approve` for the first
    # `requestStorage` call, so we only need to check 5 "blocks ago".

    proc getsPastRequest(): Future[bool] {.async.} =
      let reqs = await market.queryPastStorageRequests(5)
      let reqs = await market.queryPastEvents(StorageRequested, 5)
      reqs.mapIt(it.requestId) == @[request.id, request1.id, request2.id]

    check eventually await getsPastRequest()

  test "can query past SlotFilled events":
    await market.requestStorage(request)
    await market.fillSlot(request.id, 0.u256, proof, request.ask.collateral)
    await market.fillSlot(request.id, 1.u256, proof, request.ask.collateral)
    await market.fillSlot(request.id, 2.u256, proof, request.ask.collateral)
    let slotId = request.slotId(slotIndex)

    # `market.fill` executes an `approve` tx before the `fillSlot` tx, so that's
    # two PoA blocks per `fillSlot` call (6 blocks for 3 calls). We don't need
    # to check the `approve` for the first `fillSlot` call, so we only need to
    # check 5 "blocks ago".
    let events = await market.queryPastEvents(SlotFilled, 5)
    check events == @[
      SlotFilled(requestId: request.id, slotIndex: 0.u256),
      SlotFilled(requestId: request.id, slotIndex: 1.u256),
      SlotFilled(requestId: request.id, slotIndex: 2.u256),
    ]

  test "past event query can specify negative `blocksAgo` parameter":
    await market.requestStorage(request)

    check eventually (
      (await market.queryPastStorageRequests(blocksAgo = -2)) ==
      (await market.queryPastStorageRequests(blocksAgo = 2))
      (await market.queryPastEvents(StorageRequested, blocksAgo = -2)) ==
      (await market.queryPastEvents(StorageRequested, blocksAgo = 2))
    )

  test "pays rewards and collateral to host":
    await market.requestStorage(request)

    let address = await host.getAddress()
    switchAccount(host)

    for slotIndex in 0..<request.ask.slots:
      await market.fillSlot(request.id, slotIndex.u256, proof, request.ask.collateral)

    let requestEnd = await market.getRequestEnd(request.id)
    await ethProvider.advanceTimeTo(requestEnd.u256 + 1)

    let startBalance = await token.balanceOf(address)

    await market.freeSlot(request.slotId(0.u256))

    let endBalance = await token.balanceOf(address)
    check endBalance == (startBalance +
      request.ask.duration * request.ask.reward +
      request.ask.collateral)

  test "pays rewards to reward recipient, collateral to host":
    market = OnChainMarket.new(marketplace, hostRewardRecipient.some)
    let hostAddress = await host.getAddress()

    await market.requestStorage(request)

    switchAccount(host)
    for slotIndex in 0..<request.ask.slots:
      await market.fillSlot(request.id, slotIndex.u256, proof, request.ask.collateral)

    let requestEnd = await market.getRequestEnd(request.id)
    await ethProvider.advanceTimeTo(requestEnd.u256 + 1)

    let startBalanceHost = await token.balanceOf(hostAddress)
    let startBalanceReward = await token.balanceOf(hostRewardRecipient)

    await market.freeSlot(request.slotId(0.u256))

    let endBalanceHost = await token.balanceOf(hostAddress)
    let endBalanceReward = await token.balanceOf(hostRewardRecipient)

    check endBalanceHost == (startBalanceHost + request.ask.collateral)
    check endBalanceReward == (startBalanceReward +
      request.ask.duration * request.ask.reward)
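The arithmetic in those comments, spelled out: each of the three calls issues an `approve` tx plus the call itself, two PoA blocks apiece and six in total; the first call's `approve` block carries no event of interest, and with an inclusive `fromBlock`..`toBlock` range the remaining five blocks are exactly what `blocksAgo = 5` covers.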
@@ -25,5 +25,5 @@ template ethersuite*(name, body) =

  body

export unittest
export asynctest
export ethers except `%`
@@ -2,8 +2,9 @@ import pkg/chronos

# Allow multiple setups and teardowns in a test suite
template asyncmultisetup* =
  var setups: seq[proc: Future[void] {.gcsafe.}]
  var teardowns: seq[proc: Future[void] {.gcsafe.}]
  var setups: seq[proc: Future[void].Raising([AsyncExceptionError]) {.gcsafe.}]
  var teardowns: seq[
    proc: Future[void].Raising([AsyncExceptionError]) {.gcsafe.}]

  setup:
    for setup in setups:

@@ -14,10 +15,12 @@ template asyncmultisetup* =
      await teardown()

  template setup(setupBody) {.inject, used.} =
    setups.add(proc {.async.} = setupBody)
    setups.add(proc {.async: (
      handleException: true, raises: [AsyncExceptionError]).} = setupBody)

  template teardown(teardownBody) {.inject, used.} =
    teardowns.insert(proc {.async.} = teardownBody)
    teardowns.insert(proc {.async: (
      handleException: true, raises: [AsyncExceptionError]).} = teardownBody)

template multisetup* =
  var setups: seq[proc() {.gcsafe.}]

@@ -32,7 +35,8 @@ template multisetup* =
    teardown()

  template setup(setupBody) {.inject, used.} =
    setups.add(proc = setupBody)
    let setupProc = proc = setupBody
    setups.add(setupProc)

  template teardown(teardownBody) {.inject, used.} =
    teardowns.insert(proc = teardownBody)
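A usage sketch of what the new annotations buy (the suite and service names here are invented): each registered body becomes its own async proc whose only escaping exception type is `AsyncExceptionError`, because `handleException: true` converts anything raised inside.

  asyncchecksuite "two services":
    setup:
      await serviceA.start()   # registered as its own async setup proc
    setup:
      await serviceB.start()   # runs after serviceA's setup
    teardown:
      await serviceB.stop()    # teardowns run in reverse registration order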
@@ -96,8 +96,8 @@ proc requestStorageRaw*(
  proofProbability: UInt256,
  collateral: UInt256,
  expiry: uint = 0,
  nodes: uint = 2,
  tolerance: uint = 0
  nodes: uint = 3,
  tolerance: uint = 1
): Response =

  ## Call request storage REST endpoint

@@ -125,8 +125,8 @@ proc requestStorage*(
  proofProbability: UInt256,
  expiry: uint,
  collateral: UInt256,
  nodes: uint = 2,
  tolerance: uint = 0
  nodes: uint = 3,
  tolerance: uint = 1
): ?!PurchaseId =
  ## Call request storage REST endpoint
  ##
@@ -2,6 +2,7 @@ import pkg/questionable
import pkg/questionable/results
import pkg/confutils
import pkg/chronicles
import pkg/chronos/asyncproc
import pkg/ethers
import pkg/libp2p
import std/os
@@ -3,6 +3,7 @@ import pkg/questionable/results
import pkg/confutils
import pkg/chronicles
import pkg/chronos
import pkg/chronos/asyncproc
import pkg/stew/io2
import std/os
import std/sets
@@ -2,6 +2,7 @@ import pkg/questionable
import pkg/questionable/results
import pkg/confutils
import pkg/chronicles
import pkg/chronos/asyncproc
import pkg/libp2p
import std/os
import std/strutils
@@ -3,6 +3,7 @@ import std/tempfiles
import codex/conf
import codex/utils/fileutils
import ./nodes
import ../examples

suite "Command line interface":

@@ -25,26 +26,32 @@ suite "Command line interface":
    node.stop()
    discard removeFile(unsafeKeyFile)

  test "complains when persistence is enabled without accessible r1cs file":
    let node = startNode(@["persistence", "prover"])
    node.waitUntilOutput("r1cs file not readable, doesn't exist or wrong extension (.r1cs)")
  let
    marketplaceArg = "--marketplace-address=" & $EthAddress.example
    expectedDownloadInstruction = "Proving circuit files are not found. Please run the following to download them:"

  test "suggests downloading of circuit files when persistence is enabled without accessible r1cs file":
    let node = startNode(@["persistence", "prover", marketplaceArg])
    node.waitUntilOutput(expectedDownloadInstruction)
    node.stop()

  test "complains when persistence is enabled without accessible wasm file":
  test "suggests downloading of circuit files when persistence is enabled without accessible wasm file":
    let node = startNode(@[
      "persistence",
      "prover",
      marketplaceArg,
      "--circom-r1cs=tests/circuits/fixtures/proof_main.r1cs"
    ])
    node.waitUntilOutput("wasm file not readable, doesn't exist or wrong extension (.wasm)")
    node.waitUntilOutput(expectedDownloadInstruction)
    node.stop()

  test "complains when persistence is enabled without accessible zkey file":
  test "suggests downloading of circuit files when persistence is enabled without accessible zkey file":
    let node = startNode(@[
      "persistence",
      "prover",
      marketplaceArg,
      "--circom-r1cs=tests/circuits/fixtures/proof_main.r1cs",
      "--circom-wasm=tests/circuits/fixtures/proof_main.wasm"
    ])
    node.waitUntilOutput("zkey file not readable, doesn't exist or wrong extension (.zkey)")
    node.waitUntilOutput(expectedDownloadInstruction)
    node.stop()
@@ -60,7 +60,9 @@ twonodessuite "Purchasing", debug1 = false, debug2 = false:
      reward=2.u256,
      proofProbability=3.u256,
      expiry=30,
      collateral=200.u256).get
      collateral=200.u256,
      nodes=3.uint,
      tolerance=1.uint).get
    check eventually client1.purchaseStateIs(id, "submitted")

    node1.restart()

@@ -73,8 +75,8 @@ twonodessuite "Purchasing", debug1 = false, debug2 = false:
    check request.ask.proofProbability == 3.u256
    check request.expiry == 30
    check request.ask.collateral == 200.u256
    check request.ask.slots == 2'u64
    check request.ask.maxSlotLoss == 0'u64
    check request.ask.slots == 3'u64
    check request.ask.maxSlotLoss == 1'u64

  test "node requires expiry and its value to be in future":
    let data = await RandomChunker.example(blocks=2)
@@ -41,7 +41,7 @@ twonodessuite "REST API", debug1 = false, debug2 = false:

  test "request storage fails for datasets that are too small":
    let cid = client1.upload("some file contents").get
    let response = client1.requestStorageRaw(cid, duration=10.u256, reward=2.u256, proofProbability=3.u256, nodes=2, collateral=200.u256, expiry=9)
    let response = client1.requestStorageRaw(cid, duration=10.u256, reward=2.u256, proofProbability=3.u256, collateral=200.u256, expiry=9)

    check:
      response.status == "400 Bad Request"

@@ -55,6 +55,29 @@ twonodessuite "REST API", debug1 = false, debug2 = false:
    check:
      response.status == "200 OK"

  test "request storage fails if tolerance is zero":
    let data = await RandomChunker.example(blocks=2)
    let cid = client1.upload(data).get
    let duration = 100.u256
    let reward = 2.u256
    let proofProbability = 3.u256
    let expiry = 30.uint
    let collateral = 200.u256
    let nodes = 3
    let tolerance = 0

    var responseBefore = client1.requestStorageRaw(cid,
      duration,
      reward,
      proofProbability,
      collateral,
      expiry,
      nodes.uint,
      tolerance.uint)

    check responseBefore.status == "400 Bad Request"
    check responseBefore.body == "Tolerance needs to be bigger then zero"

  test "request storage fails if nodes and tolerance aren't correct":
    let data = await RandomChunker.example(blocks=2)
    let cid = client1.upload(data).get

@@ -63,7 +86,7 @@ twonodessuite "REST API", debug1 = false, debug2 = false:
    let proofProbability = 3.u256
    let expiry = 30.uint
    let collateral = 200.u256
    let ecParams = @[(1, 0), (1, 1), (2, 1), (3, 2), (3, 3)]
    let ecParams = @[(1, 1), (2, 1), (3, 2), (3, 3)]

    for ecParam in ecParams:
      let (nodes, tolerance) = ecParam

@@ -113,7 +136,7 @@ twonodessuite "REST API", debug1 = false, debug2 = false:
    let proofProbability = 3.u256
    let expiry = 30.uint
    let collateral = 200.u256
    let ecParams = @[(2, 0), (3, 1), (5, 2)]
    let ecParams = @[(3, 1), (5, 2)]

    for ecParam in ecParams:
      let (nodes, tolerance) = ecParam
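Reading the updated `ecParams` lists, the rejected combinations appear to be exactly those with nodes - tolerance <= tolerance, which suggests a request passes validation only when data shares outnumber parity shares and, per the new test, tolerance is non-zero. (The assertion above quotes the API's error message verbatim, typo included.)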
@@ -0,0 +1,3 @@
import ./tools/cirdl/testcirdl

{.warning[UnusedImport]:off.}
@@ -0,0 +1,39 @@
import std/os
import std/osproc
import std/options
import pkg/chronos
import codex/contracts
import ../../integration/marketplacesuite

marketplacesuite "tools/cirdl":
  const
    cirdl = "build" / "cirdl"
    workdir = "."

  test "circuit download tool":
    let
      circuitPath = "testcircuitpath"
      rpcEndpoint = "ws://localhost:8545"
      marketplaceAddress = $marketplace.address

    discard existsOrCreateDir(circuitPath)

    let args = [circuitPath, rpcEndpoint, marketplaceAddress]

    let process = osproc.startProcess(
      cirdl,
      workdir,
      args,
      options={poParentStreams}
    )

    let returnCode = process.waitForExit()
    check returnCode == 0

    check:
      fileExists(circuitPath/"proof_main_verification_key.json")
      fileExists(circuitPath/"proof_main.r1cs")
      fileExists(circuitPath/"proof_main.wasm")
      fileExists(circuitPath/"proof_main.zkey")

    removeDir(circuitPath)
@@ -0,0 +1,128 @@
import std/os
import std/streams
import pkg/chronicles
import pkg/chronos
import pkg/ethers
import pkg/questionable
import pkg/questionable/results
import pkg/zippy/tarballs
import pkg/chronos/apps/http/httpclient
import ../../codex/contracts/marketplace

proc consoleLog(logLevel: LogLevel, msg: LogOutputStr) {.gcsafe.} =
  try:
    stdout.write(msg)
    stdout.flushFile()
  except IOError as err:
    logLoggingFailure(cstring(msg), err)

proc noOutput(logLevel: LogLevel, msg: LogOutputStr) = discard

defaultChroniclesStream.outputs[0].writer = consoleLog
defaultChroniclesStream.outputs[1].writer = noOutput
defaultChroniclesStream.outputs[2].writer = noOutput

proc printHelp() =
  info "Usage: ./cirdl [circuitPath] [rpcEndpoint] [marketplaceAddress]"
  info "  circuitPath: path where circuit files will be placed."
  info "  rpcEndpoint: URL of web3 RPC endpoint."
  info "  marketplaceAddress: Address of deployed Codex marketplace contracts."

proc getCircuitHash(rpcEndpoint: string, marketplaceAddress: string): Future[?!string] {.async.} =
  let provider = JsonRpcProvider.new(rpcEndpoint)
  without address =? Address.init(marketplaceAddress):
    return failure("Invalid address: " & marketplaceAddress)

  let marketplace = Marketplace.new(address, provider)
  let config = await marketplace.config()
  return success config.proofs.zkeyHash

proc formatUrl(hash: string): string =
  "https://circuit.codex.storage/proving-key/" & hash

proc retrieveUrl(uri: string): Future[seq[byte]] {.async.} =
  let httpSession = HttpSessionRef.new()
  try:
    let resp = await httpSession.fetch(parseUri(uri))
    return resp.data
  finally:
    await noCancel(httpSession.closeWait())

proc downloadZipfile(url: string, filepath: string): Future[?!void] {.async.} =
  try:
    let file = await retrieveUrl(url)
    var s = newFileStream(filepath, fmWrite)
    for b in file:
      s.write(b)
    s.close()
  except Exception as exc:
    return failure(exc.msg)
  success()

proc unzip(zipfile: string, targetPath: string): ?!void =
  try:
    extractAll(zipfile, targetPath)
  except Exception as exc:
    return failure(exc.msg)
  success()

proc copyFiles(unpackDir: string, circuitPath: string): ?!void =
  try:
    for file in walkDir(unpackDir):
      copyFileToDir(file.path, circuitPath)
  except Exception as exc:
    return failure(exc.msg)
  success()

proc main() {.async.} =
  info "Codex Circuit Downloader, Aww yeah!"
  let args = os.commandLineParams()
  if args.len != 3:
    printHelp()
    return

  let
    circuitPath = args[0]
    rpcEndpoint = args[1]
    marketplaceAddress = args[2]
    zipfile = "circuit.tar.gz"
    unpackFolder = "." / "tempunpackfolder"

  debug "Starting", circuitPath, rpcEndpoint, marketplaceAddress

  if (dirExists(unpackFolder)):
    removeDir(unpackFolder)

  without circuitHash =? (await getCircuitHash(rpcEndpoint, marketplaceAddress)), err:
    error "Failed to get circuit hash", msg = err.msg
    return
  debug "Got circuithash", circuitHash

  let url = formatUrl(circuitHash)
  if dlErr =? (await downloadZipfile(url, zipfile)).errorOption:
    error "Failed to download circuit file", msg = dlErr.msg
    return
  debug "Download completed"

  if err =? unzip(zipfile, unpackFolder).errorOption:
    error "Failed to unzip file", msg = err.msg
    return
  debug "Unzip completed"

  # The unpack library cannot unpack into an existing directory. We also
  # cannot delete the target directory and have the library recreate it,
  # because Codex has likely created it and set the correct permissions.
  # So we unpack to a temp folder and move the files.
  if err =? copyFiles(unpackFolder, circuitPath).errorOption:
    error "Failed to copy files", msg = err.msg
    return
  debug "Files copied"

  removeFile(zipfile)
  removeDir(unpackFolder)

  debug "Zipfile and unpack folder removed"

when isMainModule:
  waitFor main()
  info "Done!"
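For reference, the tool is driven exactly as the testcirdl suite above drives it; the placeholders below stand in for real values:

  ./build/cirdl <circuitPath> <rpcEndpoint> <marketplaceAddress>
  # e.g. ./build/cirdl testcircuitdir ws://localhost:8545 <deployed marketplace address>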
@@ -1 +1 @@
Subproject commit 57e8cd5013325f05e16833a5320b575d32a403f3
Subproject commit 558bf645c3dc385437a3e695bba57e7dba1375fb

@@ -1 +1 @@
Subproject commit 0277b65be2c7a365ac13df002fba6e172be55537
Subproject commit 035ae11ba92369e7722e649db597e79134fd06b9

@@ -1 +1 @@
Subproject commit 4467e310b75aa0749ff28c1572a84ffce57d7c1c
Subproject commit e710e4c333f367353aaa1ee82a55a47326b63a65

@@ -1 +1 @@
Subproject commit a7f14bc9b783f1b9e2d02cc85a338b1411058095
Subproject commit 63822e83561ea1c6396d0f3eca583b038f5d44c6

@@ -1 +1 @@
Subproject commit 3b491a40c60aad9e8d3407443f46f62511e63b18
Subproject commit be57dbc902d36f37540897e98c69aa80f868cb45

@@ -0,0 +1 @@
Subproject commit 8d6828f090325c5c015f66a438d31aa592a7045d