Codex Contracts
An experimental implementation of the smart contracts that underlie the Codex storage network. Its goal is to experiment with the rules around the bidding process, the storage contracts, the storage proofs, and the host collateral. Neither completeness nor correctness is guaranteed at this moment in time.
Running
To run the tests, execute the following commands:
npm install
npm test
You can also run fuzzing tests (using Echidna) on the contracts:
npm run fuzz
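Echidna exercises property-style tests written in Solidity. The contract below is a generic, hypothetical example of such a property, not one of the properties defined in this repository: Echidna repeatedly calls the public functions with fuzzed inputs and reports a counterexample if the echidna_-prefixed property ever returns false.
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Generic, hypothetical Echidna example; not taken from this repository.
    contract TokenBucketExample {
        uint256 private constant CAPACITY = 1000;
        uint256 private level;

        function fill(uint256 amount) public {
            // never fill beyond capacity
            uint256 space = CAPACITY - level;
            level += amount > space ? space : amount;
        }

        function drain(uint256 amount) public {
            level = amount > level ? 0 : level - amount;
        }

        // Invariant checked by Echidna: the bucket never exceeds its capacity.
        function echidna_level_never_exceeds_capacity() public view returns (bool) {
            return level <= CAPACITY;
        }
    }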
To start a local Ethereum node with the contracts deployed, execute:
npm start
This will create a deployment-localhost.json file containing the addresses of
the deployed contracts.
Running the prover
To run the formal verification rules using Certora, first make sure you have Java (JDK >= 11.0) installed on your machine, and then install the Certora CLI:
$ pip install certora-cli
Once that is done, the certoraRun command can be used to send CVL specs to the prover.
You can run Certora's specs with the provided npm script:
npm run verify
Overview
The Codex storage network depends on hosts offering storage to clients of the network. The smart contracts in this repository handle interactions between clients and hosts as they negotiate and fulfill a contract to store data for a certain amount of time.
When all goes well, the client and hosts perform the following steps:
Client                 Host             Marketplace Contract
  |                      |                        |
  | --------------- request (1) ----------------> |
  |                      |                        |
  | ---- data (2) -----> |                        |
  |                      |                        |
  |                      | ----- fill (3) ------> |
  |                      |                        |
  |                      | ----- proof (4) -----> |
  |                      |                        |
  |                      | ----- proof (4) -----> |
  |                      |                        |
  |                      | ----- proof (4) -----> |
  |                      |                        |
  |                      | <---- payment (5) ---- |
- Client submits a request for storage, containing the size of the data that it wants to store and the length of time it wants to store it
- Client makes the data available to hosts
- Hosts submit storage proofs to fill slots in the contract
- While the storage contract is active, hosts prove that they are still storing the data by responding to frequent random challenges
- At the end of the contract, the hosts are paid
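In contract terms, these steps map roughly onto calls to the marketplace contract. The following Solidity sketch only illustrates that mapping; the interface, function names, and parameter types are hypothetical and do not necessarily match the actual Marketplace contract in this repository.
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Illustrative sketch only: names and signatures are hypothetical and do not
    // necessarily match the Marketplace contract in this repository.
    interface MarketplaceSketch {
        // (1) the client submits a storage request and pays the full price up front
        function requestStorage(bytes calldata request) external;

        // (3) a host fills one of the request's slots by submitting a storage proof
        function fillSlot(bytes32 requestId, uint256 slotIndex, bytes calldata proof) external;

        // (4) a host responds to a random challenge while the contract is active
        function submitProof(bytes32 slotId, bytes calldata proof) external;

        // (5) once the contract has ended successfully, a host collects its payout
        function withdrawPayout(bytes32 requestId, uint256 slotIndex) external;
    }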
Contracts
A storage contract consists of a number of slots. Each of these slots represents an agreement with a storage host to store a part of the data. Hosts that want to offer storage can fill a slot in the contract.
A contract can be negotiated through requests. A request contains the size of the data, the length of time during which it needs to be stored, and a number of slots. It also contains the reward that a client is willing to pay and proof requirements, such as how often hosts need to submit a proof. A random nonce is included to ensure uniqueness among similar requests.
When a new storage contract is created, the client immediately pays the entire price of the contract. The payment is only released to the hosts upon successful completion of the contract.
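As a rough illustration, a request can be thought of as a struct along the following lines. The field names and types are hypothetical; the actual Request type defined in the contracts is authoritative.
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical illustration of the information a storage request carries;
    // field names do not necessarily match the actual Request type.
    struct RequestSketch {
        address client;            // the client that pays for the storage
        uint256 size;              // size of the data to be stored, in bytes
        uint256 duration;          // how long the data should be stored, in seconds
        uint256 slots;             // number of slots the data is divided over
        uint256 reward;            // reward the client is willing to pay
        uint256 proofProbability;  // how often hosts are expected to submit proofs
        bytes32 nonce;             // random nonce to keep similar requests unique
    }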
Collateral
To motivate a host to remain honest, it must put up some collateral before it is allowed to participate in storage contracts. The collateral may not be withdrawn as long as a host is participating in an active storage contract.
Should a host misbehave, its collateral may be reduced (slashed) by a certain percentage.
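As a small illustration of what a percentage-based slash amounts to, consider the sketch below; the helper is hypothetical, and the actual collateral accounting in the contracts may differ.
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Hypothetical helper showing the arithmetic of a percentage-based slash;
    // the actual slashing logic in this repository may differ.
    function slashedAmount(uint256 collateral, uint256 slashPercentage) pure returns (uint256) {
        // e.g. a 10% slash of 100 tokens of collateral removes 10 tokens
        return collateral * slashPercentage / 100;
    }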
Proofs
Hosts are required to submit frequent proofs while a contract is active. These proofs ensure with a high probability that hosts are still holding on to the data that they were entrusted with.
To ensure that hosts are not able to predict and precalculate proofs, these proofs are based on a random challenge. Currently we use Ethereum block hashes to determine two things: 1) whether or not a proof is required at this point in time, and 2) the random challenge for the proof. Although hosts will not be able to predict the exact times at which proofs are required, the frequency of proofs averages out to a value that was set by the client in the request for storage.
Hosts have a small period of time in which they are expected to submit a proof. When that time expires without a proof having been submitted, validators can point out the missing proof. If a host misses too many proofs, its collateral is slashed.
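The sketch below illustrates the general idea of deriving both decisions from a block hash. It is a simplified assumption of the mechanism, with hypothetical names, and is not the actual proof scheduling code in this repository.
    // SPDX-License-Identifier: MIT
    pragma solidity ^0.8.0;

    // Illustrative sketch of deriving proof requirements from a block hash;
    // not the actual proof scheduling code in this repository.
    library ProofChallengeSketch {
        // derive a pseudo-random value from a recent block hash and the slot id
        // note: blockhash() only returns a non-zero hash for the 256 most recent blocks
        function pointer(bytes32 slotId, uint256 blockNumber) internal view returns (uint256) {
            return uint256(keccak256(abi.encode(blockhash(blockNumber), slotId)));
        }

        // 1) is a proof required in this period? on average one in `probability` periods
        function isProofRequired(bytes32 slotId, uint256 blockNumber, uint256 probability)
            internal view returns (bool)
        {
            return pointer(slotId, blockNumber) % probability == 0;
        }

        // 2) the random challenge that the host must use when computing its proof
        function challenge(bytes32 slotId, uint256 blockNumber) internal view returns (bytes32) {
            return bytes32(pointer(slotId, blockNumber));
        }
    }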
References
- A marketplace for storage durability (design document)
- Timing of Storage Proofs (design document)
To Do
- Contract repair
  Allow another host to take over a slot in the contract when the original host missed too many proofs.
- Reward validators
  A validator that points out missed proofs should be compensated for its vigilance and for the gas costs of invoking the smart contract.
- Analysis and optimization of gas usage