# Codex Decentralized Durability Engine
The Codex project aims to create a decentralized durability engine that allows persisting data in p2p networks. In other words, it allows storing files and data with predictable durability guarantees for later retrieval.
WARNING: This project is under active development and is considered pre-alpha.
## Build and Run
For detailed instructions on preparing to build nim-codex, see [Building Codex](BUILDING.md).
To build the project, clone it and run:

```shell
make update && make
```
The executable will be placed in the `build` directory under the project root.
Run the client with:

```shell
build/codex
```
## Configuration
It is possible to configure a Codex node in several ways:

- CLI options
- Environment variables
- Configuration file

The order of priority is the same as above: CLI options > environment variables > configuration file values.
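As a minimal sketch of this precedence (the environment-variable and config-file mechanisms are described in the sections below; `config.toml` is an arbitrary file name):

```shell
# the CLI flag takes precedence: even if CODEX_LOG_LEVEL and the config
# file both set a log level, the effective value here is "trace"
CODEX_LOG_LEVEL=info build/codex --config-file=config.toml --log-level=trace
```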
### Environment variables
In order to set a configuration option using environment variables, first find the desired CLI option and then transform it in the following way:

- prepend it with `CODEX_`
- make it uppercase
- replace `-` with `_`

For example, to configure `--log-level`, use `CODEX_LOG_LEVEL` as the environment variable name.
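For instance, assuming the client has been built as described above, the log level can be set through the environment instead of the CLI:

```shell
# equivalent to passing --log-level=trace on the command line
export CODEX_LOG_LEVEL=trace
build/codex
```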
### Configuration file
A TOML configuration file can also be used to set configuration values. Configuration option names and corresponding values are placed in the file, separated by `=`. Configuration option names can be obtained from the `codex --help` command, and should not include the `--` prefix. For example, a node's log level (`--log-level`) can be configured using TOML as follows:

```toml
log-level = "trace"
```

The Codex node can then read the configuration from this file using the `--config-file` CLI parameter, like `codex --config-file=/path/to/your/config.toml`.
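Putting it together, a minimal sketch (file name and option values here are arbitrary) of creating a config file and starting the node with it:

```shell
# write a small TOML config file and point the node at it
cat > config.toml <<'EOF'
data-dir = "./codex-data"
log-level = "trace"
EOF
build/codex --config-file=config.toml
```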
### CLI Options

```
build/codex --help

Usage:
codex [OPTIONS]... command
The following options are available:
--config-file Loads the configuration from a TOML file [=none].
--log-level Sets the log level [=info].
--metrics Enable the metrics server [=false].
--metrics-address Listening address of the metrics server [=127.0.0.1].
--metrics-port Listening HTTP port of the metrics server [=8008].
-d, --data-dir The directory where codex will store configuration and data.
-i, --listen-addrs Multi Addresses to listen on [=/ip4/0.0.0.0/tcp/0].
-a, --nat IP Addresses to announce behind a NAT [=127.0.0.1].
-e, --disc-ip Discovery listen address [=0.0.0.0].
-u, --disc-port Discovery (UDP) port [=8090].
--net-privkey Source of network (secp256k1) private key file path or name [=key].
-b, --bootstrap-node Specifies one or more bootstrap nodes to use when connecting to the network.
--max-peers The maximum number of peers to connect to [=160].
--agent-string Node agent string which is used as identifier in network [=Codex].
--api-bindaddr The REST API bind address [=127.0.0.1].
-p, --api-port The REST API port [=8080].
--repo-kind Backend for main repo store (fs, sqlite) [=fs].
-q, --storage-quota The size of the total storage quota dedicated to the node [=8589934592].
-t, --block-ttl Default block timeout in seconds - 0 disables the ttl [=$DefaultBlockTtl].
--block-mi Time interval in seconds - determines frequency of block maintenance cycle: how
often blocks are checked for expiration and cleanup
[=$DefaultBlockMaintenanceInterval].
--block-mn Number of blocks to check every maintenance cycle [=1000].
-c, --cache-size The size of the block cache, 0 disables the cache - might help on slow hard drives
[=0].
Available sub-commands:
codex persistence [OPTIONS]... command
The following options are available:
--eth-provider The URL of the JSON-RPC API of the Ethereum node [=ws://localhost:8545].
--eth-account The Ethereum account that is used for storage contracts.
--eth-private-key File containing Ethereum private key for storage contracts.
--marketplace-address Address of deployed Marketplace contract.
--validator Enables validator, requires an Ethereum node [=false].
--validator-max-slots Maximum number of slots that the validator monitors [=1000].
Available sub-commands:
codex persistence prover [OPTIONS]...
The following options are available:
--circom-r1cs The r1cs file for the storage circuit.
--circom-wasm The wasm file for the storage circuit.
--circom-zkey The zkey file for the storage circuit.
--circom-no-zkey Ignore the zkey file - use only for testing! [=false].
--proof-samples Number of samples to prove [=5].
--max-slot-depth The maximum depth of the slot tree [=32].
--max-dataset-depth The maximum depth of the dataset tree [=8].
--max-block-depth The maximum depth of the network block merkle tree [=5].
--max-cell-elements The maximum number of elements in a cell [=67].
```
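As an illustrative sketch only (the provider URL and marketplace address below are placeholders, not real values), persistence mode is enabled by running the `persistence` sub-command, with global options placed before it and sub-command options after it:

```shell
# hypothetical invocation of the persistence sub-command
build/codex --data-dir=./codex-data persistence \
  --eth-provider=ws://localhost:8545 \
  --marketplace-address=<MARKETPLACE_CONTRACT_ADDRESS>
```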
## Logging
Codex uses the Chronicles logging library, which allows great flexibility in working with logs. Chronicles has the concept of topics, which categorize log entries into semantic groups.

Using the `--log-level` parameter, you can set the top-level log level, e.g. `--log-level="trace"`. More importantly, you can set log levels for specific topics, e.g. `--log-level="info; trace: marketplace,node; error: blockexchange"`, which sets the top-level log level to `info` and the level for the `marketplace` and `node` topics to `trace`, and so on.
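For example, the topic syntax above can be passed directly when starting the node:

```shell
# info at the top level, trace for marketplace and node, errors only for blockexchange
build/codex --log-level="info; trace: marketplace,node; error: blockexchange"
```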
## Guides
To get acquainted with Codex, consider:

- running the simple Codex Two-Client Test for a start, and
- if you are feeling more adventurous, trying Running a Local Codex Network with Marketplace Support, using a local blockchain as well.
## API
The client exposes a REST API that can be used to interact with the node. An overview of the API can be found at api.codex.storage.
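As a quick sketch (assuming a locally running node on the default API port of 8080; the exact endpoint set may vary between versions), the API can be exercised with curl:

```shell
# query basic information about the local node
curl http://localhost:8080/api/codex/v1/debug/info
```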