Refine demo tooling and shared configs

This commit is contained in:
andrussal 2025-12-09 09:43:49 +01:00
parent a328c92d24
commit f8a41b06b1
11 changed files with 431 additions and 337 deletions

View File

@@ -333,6 +333,17 @@ jobs:
mkdir -p "$TMPDIR"
scripts/run-examples.sh -t 60 -v 1 -e 1 compose
- name: Show compose runner log
env:
LOG_DIR: "${{ github.workspace }}/.tmp/compose-logs"
run: |
if [ -f "${LOG_DIR}/runner.log" ]; then
echo "=== runner.log (tail) ==="
tail -n 200 "${LOG_DIR}/runner.log"
else
echo "runner.log not found under ${LOG_DIR}"
fi
- name: Collect compose logs
if: failure()
run: |

View File

@@ -14,19 +14,23 @@ nomos-testing/
│ │ └─ k8s/ # K8sDeployer (Kubernetes Helm)
│ └─ assets/ # Docker/K8s stack assets
│ └─ stack/
│ ├─ kzgrs_test_params/ # KZG circuit parameters (fetch via setup-nomos-circuits.sh)
│ ├─ kzgrs_test_params/ # KZG circuit parameters directory
│ │ └─ kzgrs_test_params # Actual proving key file (note repeated name)
│ ├─ monitoring/ # Prometheus config
│ ├─ scripts/ # Container entrypoints, image builder
│ └─ cfgsync.yaml # Config sync server template
├─ examples/ # PRIMARY ENTRY POINT: runnable binaries
│ └─ src/bin/
│ ├─ local_runner.rs # Local processes demo (POL_PROOF_DEV_MODE=true)
│ ├─ compose_runner.rs # Docker Compose demo (requires image)
│ └─ k8s_runner.rs # Kubernetes demo (requires cluster + image)
│ ├─ local_runner.rs # Host processes demo (LocalDeployer)
│ ├─ compose_runner.rs # Docker Compose demo (ComposeDeployer)
│ └─ k8s_runner.rs # Kubernetes demo (K8sDeployer)
├─ scripts/ # Helper utilities
│ └─ setup-nomos-circuits.sh # Fetch KZG circuit parameters
│ ├─ run-examples.sh # Convenience script (handles setup + runs examples)
│ ├─ build-bundle.sh # Build prebuilt binaries+circuits bundle
│ ├─ setup-circuits-stack.sh # Fetch KZG parameters (Linux + host)
│ └─ setup-nomos-circuits.sh # Legacy circuit fetcher
└─ book/ # This documentation (mdBook)
```
@@ -47,9 +51,15 @@ Core library crates providing the testing API.
### `testing-framework/assets/stack/`
Docker/K8s deployment assets:
- **`kzgrs_test_params/`**: Circuit parameters (override via `NOMOS_KZGRS_PARAMS_PATH`)
- **`kzgrs_test_params/kzgrs_test_params`**: Circuit parameters file (note repeated name; override via `NOMOS_KZGRS_PARAMS_PATH`)
- **`monitoring/`**: Prometheus config
- **`scripts/`**: Container entrypoints and image builder
### `scripts/`
Convenience utilities:
- **`run-examples.sh`**: All-in-one script for host/compose/k8s modes (recommended)
- **`build-bundle.sh`**: Create prebuilt binaries+circuits bundle for compose/k8s
- **`setup-circuits-stack.sh`**: Fetch KZG parameters for both Linux and host
- **`cfgsync.yaml`**: Configuration sync server template
### `examples/` (Start Here!)

View File

@@ -30,11 +30,24 @@ together predictably.
The framework is consumed via **runnable example binaries** in `examples/src/bin/`:
- `local_runner.rs` — Spawns nodes as local processes
- `local_runner.rs` — Spawns nodes as host processes
- `compose_runner.rs` — Deploys via Docker Compose (requires `NOMOS_TESTNET_IMAGE` built)
- `k8s_runner.rs` — Deploys via Kubernetes Helm (requires cluster + image)
**Run with:** `POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin <name>`
**Recommended:** Use the convenience script:
```bash
scripts/run-examples.sh -t <duration> -v <validators> -e <executors> <mode>
# mode: host, compose, or k8s
```
This handles circuit setup, binary building/bundling, image building, and execution.
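The flag shape above can be pictured with a small sketch. This is hypothetical: the real `run-examples.sh` may parse its arguments differently; only the documented meaning of `-t`/`-v`/`-e` and the positional mode is taken from the text.

```shell
# Hypothetical sketch of run-examples.sh-style flag parsing:
# -t duration, -v validators, -e executors, then a positional mode.
parse_args() {
  OPTIND=1       # reset so the function can be called more than once
  t=60 v=1 e=1   # defaults matching the documented 1 validator / 1 executor / 60s
  while getopts "t:v:e:" opt; do
    case "$opt" in
      t) t="$OPTARG" ;;
      v) v="$OPTARG" ;;
      e) e="$OPTARG" ;;
    esac
  done
  shift $((OPTIND - 1))
  echo "mode=$1 duration=${t}s validators=$v executors=$e"
}

parse_args -t 120 -v 3 -e 2 compose
# → mode=compose duration=120s validators=3 executors=2
```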
**Alternative:** Direct cargo run (requires manual setup):
```bash
POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin <name>
```
**Important:** All runners require `POL_PROOF_DEV_MODE=true` to avoid expensive Groth16 proof generation that causes timeouts.
@@ -75,8 +88,8 @@ Three deployer implementations:
| Deployer | Backend | Prerequisites | Node Control |
|----------|---------|---------------|--------------|
| `LocalDeployer` | Local processes | Binaries in sibling checkout | No |
| `ComposeDeployer` | Docker Compose | `NOMOS_TESTNET_IMAGE` built | Yes |
| `LocalDeployer` | Host processes | Binaries (built on demand or via bundle) | No |
| `ComposeDeployer` | Docker Compose | Image with embedded assets/binaries | Yes |
| `K8sDeployer` | Kubernetes Helm | Cluster + image loaded | Not yet |
**Compose-specific features:**
@@ -88,15 +101,17 @@ Three deployer implementations:
### Docker Image
Built via `testing-framework/assets/stack/scripts/build_test_image.sh`:
- Embeds KZG circuit parameters from `testing-framework/assets/stack/kzgrs_test_params/`
- Embeds the KZG circuit parameters (`testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params`) and the node binaries
- Includes runner scripts: `run_nomos_node.sh`, `run_nomos_executor.sh`
- Tagged as `NOMOS_TESTNET_IMAGE` (default: `nomos-testnet:local`)
- **Recommended:** Use prebuilt bundle via `scripts/build-bundle.sh --platform linux` and set `NOMOS_BINARIES_TAR` before building image
### Circuit Assets
KZG parameters required for DA workloads:
- **Default path:** `testing-framework/assets/stack/kzgrs_test_params/`
- **Override:** `NOMOS_KZGRS_PARAMS_PATH=/custom/path`
- **Fetch via:** `scripts/setup-nomos-circuits.sh v0.3.1 /tmp/circuits`
- **Host path:** `testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params` (note repeated filename—directory contains file `kzgrs_test_params`)
- **Container path:** `/kzgrs_test_params/kzgrs_test_params` (for compose/k8s)
- **Override:** `NOMOS_KZGRS_PARAMS_PATH=/custom/path/to/file` (must point to file)
- **Fetch via:** `scripts/setup-nomos-circuits.sh v0.3.1 /tmp/circuits` or use `scripts/run-examples.sh`
### Compose Stack
Templates and configs in `testing-framework/runners/compose/assets/`:

View File

@@ -4,11 +4,13 @@ Concrete scenario shapes that illustrate how to combine topologies, workloads,
and expectations.
**Runnable examples:** The repo includes complete binaries in `examples/src/bin/`:
- `local_runner.rs`Local processes
- `compose_runner.rs` — Docker Compose (requires `NOMOS_TESTNET_IMAGE` built)
- `local_runner.rs`Host processes (local)
- `compose_runner.rs` — Docker Compose (requires image built)
- `k8s_runner.rs` — Kubernetes (requires cluster access and image loaded)
Run with: `POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin <name>`
**Recommended:** Use `scripts/run-examples.sh -t <duration> -v <validators> -e <executors> <mode>` where mode is `host`, `compose`, or `k8s`.
**Alternative:** Direct cargo run: `POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin <name>`
**All runners require `POL_PROOF_DEV_MODE=true`** to avoid expensive proof generation.

View File

@@ -43,89 +43,128 @@ See `.github/workflows/compose-mixed.yml` for a complete CI example using Compos
## Running Examples
### Local Runner
The framework provides three runner modes: **host** (local processes), **compose** (Docker Compose), and **k8s** (Kubernetes).
**Recommended:** Use `scripts/run-examples.sh` for all modes:
```bash
POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin local_runner
# Host mode (local processes)
scripts/run-examples.sh -t 60 -v 1 -e 1 host
# Compose mode (Docker Compose)
scripts/run-examples.sh -t 60 -v 1 -e 1 compose
# K8s mode (Kubernetes)
scripts/run-examples.sh -t 60 -v 1 -e 1 k8s
```
**Optional environment variables:**
- `LOCAL_DEMO_VALIDATORS=3` — Number of validators (default: 1)
- `LOCAL_DEMO_EXECUTORS=2` — Number of executors (default: 1)
- `LOCAL_DEMO_RUN_SECS=120` — Run duration in seconds (default: 60)
- `NOMOS_TESTS_TRACING=true` — Enable persistent file logging (required with `NOMOS_LOG_DIR`)
- `NOMOS_LOG_DIR=/tmp/logs` — Directory for per-node log files (only with `NOMOS_TESTS_TRACING=true`)
This script handles circuit setup, binary building/bundling, image building, and execution.
**Environment overrides:**
- `VERSION=v0.3.1` — Circuit version
- `NOMOS_NODE_REV=<commit>` — nomos-node git revision
- `NOMOS_BINARIES_TAR=path/to/bundle.tar.gz` — Use prebuilt bundle
- `NOMOS_SKIP_IMAGE_BUILD=1` — Skip image rebuild (compose/k8s)
### Host Runner (Direct Cargo Run)
For manual control, you can run the `local_runner` binary directly:
```bash
POL_PROOF_DEV_MODE=true \
NOMOS_NODE_BIN=/path/to/nomos-node \
NOMOS_EXECUTOR_BIN=/path/to/nomos-executor \
cargo run -p runner-examples --bin local_runner
```
**Environment variables:**
- `NOMOS_DEMO_VALIDATORS=3` — Number of validators (default: 1, or use legacy `LOCAL_DEMO_VALIDATORS`)
- `NOMOS_DEMO_EXECUTORS=2` — Number of executors (default: 1, or use legacy `LOCAL_DEMO_EXECUTORS`)
- `NOMOS_DEMO_RUN_SECS=120` — Run duration in seconds (default: 60, or use legacy `LOCAL_DEMO_RUN_SECS`)
- `NOMOS_NODE_BIN` / `NOMOS_EXECUTOR_BIN` — Paths to binaries (required for direct run)
- `NOMOS_TESTS_TRACING=true` — Enable persistent file logging
- `NOMOS_LOG_DIR=/tmp/logs` — Directory for per-node log files
- `NOMOS_LOG_LEVEL=debug` — Set log level (default: info)
- `NOMOS_LOG_FILTER=consensus=trace,da=debug` — Fine-grained module filtering (rate is per-block, not per-second)
- `NOMOS_LOG_FILTER=consensus=trace,da=debug` — Fine-grained module filtering
**Note:** The default `local_runner` example includes DA workload, so circuit assets in `testing-framework/assets/stack/kzgrs_test_params/` are required (fetch via `scripts/setup-nomos-circuits.sh`).
**Note:** Requires circuit assets and host binaries. Use `scripts/run-examples.sh host` to handle setup automatically.
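The documented precedence (new `NOMOS_DEMO_*` names, then the legacy `LOCAL_DEMO_*` names, then the default) can be sketched as plain shell parameter expansion. The runner's internal lookup may differ; only the precedence order and defaults are taken from the list above.

```shell
# Sketch of the documented precedence: NOMOS_DEMO_* > legacy LOCAL_DEMO_* > default.
validators="${NOMOS_DEMO_VALIDATORS:-${LOCAL_DEMO_VALIDATORS:-1}}"
executors="${NOMOS_DEMO_EXECUTORS:-${LOCAL_DEMO_EXECUTORS:-1}}"
run_secs="${NOMOS_DEMO_RUN_SECS:-${LOCAL_DEMO_RUN_SECS:-60}}"
echo "validators=$validators executors=$executors run_secs=$run_secs"
```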
### Compose Runner
### Compose Runner (Direct Cargo Run)
**Prerequisites:**
1. **Docker daemon running**
2. **Circuit assets** in `testing-framework/assets/stack/kzgrs_test_params` (fetched via `scripts/setup-nomos-circuits.sh`)
3. **Test image built** (see below)
For manual control, you can run the `compose_runner` binary directly. Compose requires a Docker image with embedded assets.
**Recommended setup:** Use a prebuilt bundle:
**Build the test image:**
```bash
# Fetch circuit assets first
chmod +x scripts/setup-nomos-circuits.sh
scripts/setup-nomos-circuits.sh v0.3.1 /tmp/nomos-circuits
cp -r /tmp/nomos-circuits/* testing-framework/assets/stack/kzgrs_test_params/
# Build a Linux bundle (includes binaries + circuits)
scripts/build-bundle.sh --platform linux
# Creates .tmp/nomos-binaries-linux-v0.3.1.tar.gz
# Build image (embeds assets)
chmod +x testing-framework/assets/stack/scripts/build_test_image.sh
# Build image (embeds bundle assets)
export NOMOS_BINARIES_TAR=.tmp/nomos-binaries-linux-v0.3.1.tar.gz
testing-framework/assets/stack/scripts/build_test_image.sh
```
**Run the example:**
```bash
# Run
NOMOS_TESTNET_IMAGE=nomos-testnet:local \
POL_PROOF_DEV_MODE=true \
cargo run -p runner-examples --bin compose_runner
```
**Required environment variables:**
- `NOMOS_TESTNET_IMAGE=nomos-testnet:local` — Image tag (must match built image)
- `POL_PROOF_DEV_MODE=true` — **Critical:** Without this, proof generation is CPU-intensive and tests will time out
**Alternative:** Manual circuit/image setup (rebuilds during image build):
**Optional environment variables:**
- `COMPOSE_NODE_PAIRS=1x1` — Topology: "validators×executors" (default varies by example)
```bash
# Fetch and copy circuits
scripts/setup-nomos-circuits.sh v0.3.1 /tmp/nomos-circuits
cp -r /tmp/nomos-circuits/* testing-framework/assets/stack/kzgrs_test_params/
# Build image
testing-framework/assets/stack/scripts/build_test_image.sh
# Run
NOMOS_TESTNET_IMAGE=nomos-testnet:local \
POL_PROOF_DEV_MODE=true \
cargo run -p runner-examples --bin compose_runner
```
**Environment variables:**
- `NOMOS_TESTNET_IMAGE=nomos-testnet:local` — Image tag (required, must match built image)
- `POL_PROOF_DEV_MODE=true` — **Required** for all runners
- `NOMOS_DEMO_VALIDATORS=3` / `NOMOS_DEMO_EXECUTORS=2` / `NOMOS_DEMO_RUN_SECS=120` — Topology overrides
- `COMPOSE_NODE_PAIRS=1x1` — Alternative topology format: "validators×executors"
- `TEST_FRAMEWORK_PROMETHEUS_PORT=9091` — Override Prometheus port (default: 9090)
- `COMPOSE_RUNNER_HOST=127.0.0.1` — Host address for port mappings (default: 127.0.0.1)
- `COMPOSE_RUNNER_PRESERVE=1` — Keep containers running after test (for debugging)
- `NOMOS_LOG_DIR=/tmp/compose-logs` — Write logs to files inside containers (requires copy-out or volume mount)
- `NOMOS_LOG_LEVEL=debug` — Set log level
- `COMPOSE_RUNNER_HOST=127.0.0.1` — Host address for port mappings
- `COMPOSE_RUNNER_PRESERVE=1` — Keep containers running after test
- `NOMOS_LOG_DIR=/tmp/compose-logs` — Write logs to files inside containers
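The `COMPOSE_NODE_PAIRS` value is a single "validators×executors" token like `1x1`. A minimal sketch of splitting it (the runner's actual parsing is internal and may differ; only the format is taken from the list above):

```shell
# Hypothetical split of a "validators x executors" pair like "3x2".
pairs="${COMPOSE_NODE_PAIRS:-1x1}"
validators="${pairs%x*}"  # text before the last 'x'
executors="${pairs#*x}"   # text after the first 'x'
echo "validators=$validators executors=$executors"
# With COMPOSE_NODE_PAIRS unset → validators=1 executors=1
```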
**Compose-specific features:**
- **Node control support**: Only runner that supports chaos testing (`.enable_node_control()` + `.chaos()` workloads)
- **Node control support**: Only runner that supports chaos testing (`.enable_node_control()` + chaos workloads)
- **Prometheus observability**: Metrics at `http://localhost:9090`
**Important:** Chaos workloads (random restarts) **only work with ComposeDeployer**. LocalDeployer and K8sDeployer do not support node control.
**Important:**
- Containers expect KZG parameters at `/kzgrs_test_params/kzgrs_test_params` (note the repeated filename)
- Use `scripts/run-examples.sh compose` to handle all setup automatically
### K8s Runner
### K8s Runner (Direct Cargo Run)
For manual control, you can run the `k8s_runner` binary directly. K8s requires the same image setup as Compose.
**Prerequisites:**
1. **Kubernetes cluster** with `kubectl` configured and working
2. **Circuit assets** in `testing-framework/assets/stack/kzgrs_test_params`
3. **Test image built** (same as Compose: `testing-framework/assets/stack/scripts/build_test_image.sh`)
4. **Image available in cluster** (loaded via `kind`, `minikube`, or pushed to registry)
5. **POL_PROOF_DEV_MODE=true** environment variable set
1. **Kubernetes cluster** with `kubectl` configured
2. **Test image built** (same as Compose, preferably with prebuilt bundle)
3. **Image available in cluster** (loaded or pushed to registry)
**Load image into cluster:**
**Build and load image:**
```bash
# For kind clusters
# Build image with bundle (recommended)
scripts/build-bundle.sh --platform linux
export NOMOS_BINARIES_TAR=.tmp/nomos-binaries-linux-v0.3.1.tar.gz
testing-framework/assets/stack/scripts/build_test_image.sh
# Load into cluster
export NOMOS_TESTNET_IMAGE=nomos-testnet:local
kind load docker-image nomos-testnet:local
# For minikube
minikube image load nomos-testnet:local
# For remote clusters (push to registry)
docker tag nomos-testnet:local your-registry/nomos-testnet:local
docker push your-registry/nomos-testnet:local
export NOMOS_TESTNET_IMAGE=your-registry/nomos-testnet:local
kind load docker-image nomos-testnet:local # For kind
# OR: minikube image load nomos-testnet:local # For minikube
# OR: docker push your-registry/nomos-testnet:local # For remote
```
**Run the example:**
@@ -135,9 +174,15 @@ export POL_PROOF_DEV_MODE=true
cargo run -p runner-examples --bin k8s_runner
```
**Environment variables:**
- `NOMOS_TESTNET_IMAGE` — Image tag (required)
- `POL_PROOF_DEV_MODE=true` — **Required** for all runners
- `NOMOS_DEMO_VALIDATORS` / `NOMOS_DEMO_EXECUTORS` / `NOMOS_DEMO_RUN_SECS` — Topology overrides
**Important:**
- K8s runner mounts `testing-framework/assets/stack/kzgrs_test_params` as a hostPath volume. Ensure this directory exists and contains circuit assets on the node where pods will be scheduled.
- **No node control support yet**: Chaos workloads (`.enable_node_control()`) will fail. Use ComposeDeployer for chaos testing.
- K8s runner mounts `testing-framework/assets/stack/kzgrs_test_params` as a hostPath volume with file `/kzgrs_test_params/kzgrs_test_params` inside pods
- **No node control support yet**: Chaos workloads (`.enable_node_control()`) will fail
- Use `scripts/run-examples.sh k8s` to handle all setup automatically
## Circuit Assets (KZG Parameters)
@@ -145,9 +190,13 @@ DA workloads require KZG cryptographic parameters for polynomial commitment sche
### Asset Location
**Default path:** `testing-framework/assets/stack/kzgrs_test_params`
**Default path:** `testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params`
**Override:** Set `NOMOS_KZGRS_PARAMS_PATH` to use a custom location:
Note the repeated filename: the directory `kzgrs_test_params/` contains a file named `kzgrs_test_params`. This is the actual proving key file.
**Container path** (compose/k8s): `/kzgrs_test_params/kzgrs_test_params`
**Override:** Set `NOMOS_KZGRS_PARAMS_PATH` to use a custom location (must point to the file):
```bash
NOMOS_KZGRS_PARAMS_PATH=/path/to/custom/params cargo run -p runner-examples --bin local_runner
```
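The override behavior can be sketched as a simple fallback: use `NOMOS_KZGRS_PARAMS_PATH` when set, otherwise the in-repo default file. This is a sketch of the documented semantics, not the framework's actual resolution code.

```shell
# Resolve the KZG params file: NOMOS_KZGRS_PARAMS_PATH wins if set,
# otherwise fall back to the in-repo default (note the repeated filename).
default="testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params"
params="${NOMOS_KZGRS_PARAMS_PATH:-$default}"
echo "using KZG params at: $params"
[ -f "$params" ] || echo "warning: $params not found; fetch via scripts/setup-nomos-circuits.sh" >&2
```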
@@ -190,9 +239,11 @@ The CI automatically fetches and places assets:
**Error without assets:**
```
Error: missing KZG parameters at testing-framework/assets/stack/kzgrs_test_params
Error: missing KZG parameters at testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params
```
If you see this error, the file `kzgrs_test_params` is missing from the directory. Use `scripts/run-examples.sh` or `scripts/setup-nomos-circuits.sh` to fetch it.
## Logging and Observability
### Node Logging vs Framework Logging

View File

@@ -5,20 +5,31 @@ Get a working example running quickly.
## Prerequisites
- Rust toolchain (nightly)
- Sibling `nomos-node` checkout built and available
- This repository cloned
- Unix-like system (tested on Linux and macOS)
- For Docker Compose examples: Docker daemon running
**Note:** `nomos-node` binaries are built automatically on demand or can be provided via prebuilt bundles.
## Your First Test
The framework ships with runnable example binaries in `examples/src/bin/`. Let's start with the local runner:
The framework ships with runnable example binaries in `examples/src/bin/`.
**Recommended:** Use the convenience script:
```bash
# From the nomos-testing directory
POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin local_runner
scripts/run-examples.sh -t 60 -v 1 -e 1 host
```
This runs a complete scenario with **defaults**: 1 validator + 1 executor, mixed transaction + DA workload (5 tx/block + 1 channel + 1 blob), 60s duration.
This handles circuit setup, binary building, and runs a complete scenario: 1 validator + 1 executor, mixed transaction + DA workload (5 tx/block + 1 channel + 1 blob), 60s duration.
**Alternative:** Direct cargo run (requires manual setup):
```bash
# Requires circuits in place and NOMOS_NODE_BIN/NOMOS_EXECUTOR_BIN set
POL_PROOF_DEV_MODE=true cargo run -p runner-examples --bin local_runner
```
**Core API Pattern** (simplified example):
@@ -131,28 +142,50 @@ let _handle = runner.run(&mut plan).await?; // Execute workloads & expectations
## Adjust the Topology
The binary accepts environment variables to adjust defaults:
**With run-examples.sh** (recommended):
```bash
# Scale up to 3 validators + 2 executors, run for 2 minutes
LOCAL_DEMO_VALIDATORS=3 \
LOCAL_DEMO_EXECUTORS=2 \
LOCAL_DEMO_RUN_SECS=120 \
scripts/run-examples.sh -t 120 -v 3 -e 2 host
```
**With direct cargo run:**
```bash
# Uses NOMOS_DEMO_* env vars (or legacy *_DEMO_* vars)
NOMOS_DEMO_VALIDATORS=3 \
NOMOS_DEMO_EXECUTORS=2 \
NOMOS_DEMO_RUN_SECS=120 \
POL_PROOF_DEV_MODE=true \
cargo run -p runner-examples --bin local_runner
```
## Try Docker Compose
Use the same API with a different deployer for a reproducible containerized environment:
Use the same API with a different deployer for a reproducible containerized environment.
**Recommended:** Use the convenience script (handles everything):
```bash
# Build the test image first (includes circuit assets)
chmod +x scripts/setup-nomos-circuits.sh
scripts/run-examples.sh -t 60 -v 1 -e 1 compose
```
This automatically:
- Fetches circuit assets (to `testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params`)
- Builds/uses prebuilt binaries (via `NOMOS_BINARIES_TAR` if available)
- Builds the Docker image
- Runs the compose scenario
**Alternative:** Direct cargo run with manual setup:
```bash
# Option 1: Use prebuilt bundle (recommended for compose/k8s)
scripts/build-bundle.sh --platform linux # Creates .tmp/nomos-binaries-linux-v0.3.1.tar.gz
export NOMOS_BINARIES_TAR=.tmp/nomos-binaries-linux-v0.3.1.tar.gz
# Option 2: Manual circuit/image setup (rebuilds during image build)
scripts/setup-nomos-circuits.sh v0.3.1 /tmp/nomos-circuits
cp -r /tmp/nomos-circuits/* testing-framework/assets/stack/kzgrs_test_params/
chmod +x testing-framework/assets/stack/scripts/build_test_image.sh
testing-framework/assets/stack/scripts/build_test_image.sh
# Run with Compose
@@ -163,6 +196,8 @@ cargo run -p runner-examples --bin compose_runner
**Benefit:** Reproducible containerized environment with Prometheus at `http://localhost:9090`.
**Note:** Compose expects KZG parameters at `/kzgrs_test_params/kzgrs_test_params` inside containers (the directory name is repeated as the filename).
**In code:** Just swap the deployer:
```rust

View File

@@ -6,27 +6,36 @@ environment and operational considerations, see [Operations](operations.md).
**Important:** All runners require `POL_PROOF_DEV_MODE=true` to avoid expensive Groth16 proof generation that causes timeouts.
## Local runner
- Launches node processes directly on the host.
## Host runner (local processes)
- Launches node processes directly on the host (via `LocalDeployer`).
- Binary: `local_runner.rs`, script mode: `host`
- Fastest feedback loop and minimal orchestration overhead.
- Best for development-time iteration and debugging.
- **Can run in CI** for fast smoke tests.
- **Node control:** Not supported (chaos workloads not available)
**Run with:** `scripts/run-examples.sh -t 60 -v 1 -e 1 host`
## Docker Compose runner
- Starts nodes in containers to provide a reproducible multi-node stack on a
single machine.
single machine (via `ComposeDeployer`).
- Binary: `compose_runner.rs`, script mode: `compose`
- Discovers service ports and wires observability for convenient inspection.
- Good balance between fidelity and ease of setup.
- **Recommended for CI pipelines** (isolated environment, reproducible).
- **Node control:** Supported (can restart nodes for chaos testing)
**Run with:** `scripts/run-examples.sh -t 60 -v 1 -e 1 compose`
## Kubernetes runner
- Deploys nodes onto a cluster for higher-fidelity, longer-running scenarios.
- Deploys nodes onto a cluster for higher-fidelity, longer-running scenarios (via `K8sDeployer`).
- Binary: `k8s_runner.rs`, script mode: `k8s`
- Suits CI with cluster access or shared test environments where cluster behavior
and scheduling matter.
- **Node control:** Not supported yet (chaos workloads not available)
**Run with:** `scripts/run-examples.sh -t 60 -v 1 -e 1 k8s`
### Common expectations
- All runners require at least one validator and, for transaction scenarios,
access to seeded wallets.
@@ -37,7 +46,7 @@ environment and operational considerations, see [Operations](operations.md).
```mermaid
flowchart TD
Plan[Scenario Plan] --> RunSel{Runner<br/>(local | compose | k8s)}
Plan[Scenario Plan] --> RunSel[Runner<br/>host, compose, or k8s]
RunSel --> Provision[Provision & readiness]
Provision --> Runtime[Runtime + observability]
Runtime --> Exec[Workloads & Expectations execute]

View File

@@ -1,16 +1,18 @@
# Troubleshooting Scenarios
**Prerequisites for All Runners:**
- **`POL_PROOF_DEV_MODE=true`** MUST be set for all runners (local, compose, k8s) to avoid expensive Groth16 proof generation that causes timeouts
- **KZG circuit assets** must be present at `testing-framework/assets/stack/kzgrs_test_params/` for DA workloads (fetch via `scripts/setup-nomos-circuits.sh`)
- **`POL_PROOF_DEV_MODE=true`** MUST be set for all runners (host, compose, k8s) to avoid expensive Groth16 proof generation that causes timeouts
- **KZG circuit assets** must be present at `testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params` (note the repeated filename) for DA workloads
**Recommended:** Use `scripts/run-examples.sh` which handles all setup automatically.
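Both prerequisites can be checked up front with a small preflight sketch. This helper is hypothetical (not part of the repo); it only mirrors the two requirements listed above.

```shell
# Hypothetical preflight: warn about the two most common misconfigurations.
params="testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params"
status=ok
if [ "${POL_PROOF_DEV_MODE:-}" != "true" ]; then
  echo "warn: POL_PROOF_DEV_MODE is not 'true'; Groth16 proving will be slow"
  status=warn
fi
if [ ! -f "$params" ]; then
  echo "warn: missing KZG parameters at $params"
  status=warn
fi
echo "preflight: $status"
```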
## Quick Symptom Guide
Common symptoms and likely causes:
- **No or slow block progression**: missing `POL_PROOF_DEV_MODE=true`, missing KZG circuit assets for DA workloads, too-short run window, port conflicts, or resource exhaustion—set required env vars, verify assets, extend duration, check node logs for startup errors.
- **No or slow block progression**: missing `POL_PROOF_DEV_MODE=true`, missing KZG circuit assets (`/kzgrs_test_params/kzgrs_test_params` file) for DA workloads, too-short run window, port conflicts, or resource exhaustion—set required env vars, verify assets exist, extend duration, check node logs for startup errors.
- **Transactions not included**: unfunded or misconfigured wallets (check `.wallets(N)` vs `.users(M)`), transaction rate exceeding block capacity, or rates exceeding block production speed—reduce rate, increase wallet count, verify wallet setup in logs.
- **Chaos stalls the run**: chaos (node control) only works with ComposeDeployer; LocalDeployer and K8sDeployer don't support it (won't "stall", just can't execute chaos workloads). With compose, aggressive restart cadence can prevent consensus recovery—widen restart intervals.
- **Chaos stalls the run**: chaos (node control) only works with ComposeDeployer; host runner (LocalDeployer) and K8sDeployer don't support it (won't "stall", just can't execute chaos workloads). With compose, aggressive restart cadence can prevent consensus recovery—widen restart intervals.
- **Observability gaps**: metrics or logs unreachable because ports clash or services are not exposed—adjust observability ports and confirm runner wiring.
- **Flaky behavior across runs**: mixing chaos with functional smoke tests or inconsistent topology between environments—separate deterministic and chaos scenarios and standardize topology presets.
@@ -20,12 +22,12 @@ Common symptoms and likely causes:
| Runner | Default Output | With `NOMOS_LOG_DIR` + Flags | Access Command |
|--------|---------------|------------------------------|----------------|
| **Local** | Temporary directories (cleaned up) | Per-node files with prefix `nomos-node-{index}` (requires `NOMOS_TESTS_TRACING=true`) | `cat $NOMOS_LOG_DIR/nomos-node-0*` |
| **Host** (local) | Temporary directories (cleaned up) | Per-node files with prefix `nomos-node-{index}` (requires `NOMOS_TESTS_TRACING=true`) | `cat $NOMOS_LOG_DIR/nomos-node-0*` |
| **Compose** | Docker container stdout/stderr | Per-node files inside containers (if path is mounted) | `docker ps` then `docker logs <container-id>` |
| **K8s** | Pod stdout/stderr | Per-node files inside pods (if path is mounted) | `kubectl logs -l app=nomos-validator` |
**Important Notes:**
- **Local runner**: Logs go to system temporary directories (NOT in working directory) by default and are automatically cleaned up after tests. To persist logs, you MUST set both `NOMOS_TESTS_TRACING=true` AND `NOMOS_LOG_DIR=/path/to/logs`.
- **Host runner** (local processes): Logs go to system temporary directories (NOT in working directory) by default and are automatically cleaned up after tests. To persist logs, you MUST set both `NOMOS_TESTS_TRACING=true` AND `NOMOS_LOG_DIR=/path/to/logs`.
- **Compose/K8s**: Per-node log files only exist inside containers/pods if `NOMOS_LOG_DIR` is set AND the path is writable inside the container/pod. By default, rely on `docker logs` or `kubectl logs`.
- **File naming**: Log files use prefix `nomos-node-{index}*` or `nomos-executor-{index}*` with timestamps, e.g., `nomos-node-0.2024-12-01T10-30-45.log` (NOT just `.log` suffix).
- **Container names**: Compose containers include project UUID, e.g., `nomos-compose-<uuid>-validator-0-1` where `<uuid>` is randomly generated per run
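The prefix-plus-timestamp naming above can be exercised with a quick sketch (temporary directory and fabricated filenames that follow the documented pattern):

```shell
# Demonstrate matching per-node log files by the documented prefix+timestamp pattern.
logdir="$(mktemp -d)"
touch "$logdir/nomos-node-0.2024-12-01T10-30-45.log" \
      "$logdir/nomos-executor-0.2024-12-01T10-30-45.log"
# Globbing on the prefix picks up the timestamped file for node 0 only:
ls "$logdir"/nomos-node-0*
rm -rf "$logdir"
```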
@@ -74,8 +76,12 @@ docker logs --tail 100 <container-id>
```bash
COMPOSE_RUNNER_PRESERVE=1 \
NOMOS_TESTNET_IMAGE=nomos-testnet:local \
POL_PROOF_DEV_MODE=true \
cargo run -p runner-examples --bin compose_runner
# OR: Use run-examples.sh (handles setup automatically)
COMPOSE_RUNNER_PRESERVE=1 scripts/run-examples.sh -t 60 -v 1 -e 1 compose
# After test failure, containers remain running:
docker ps --filter "name=nomos-compose-"
docker exec -it <container-id> /bin/sh
@@ -262,19 +268,26 @@ Run a minimal baseline test (e.g., 2 validators, consensus liveness only). If it
- **Cause**: Docker image not built for Compose/K8s runners, or KZG assets not
baked into the image.
- **Fix**:
1. Fetch KZG assets: `scripts/setup-nomos-circuits.sh v0.3.1 /tmp/nomos-circuits`.
2. Copy to assets:
`cp -r /tmp/nomos-circuits/* testing-framework/assets/stack/kzgrs_test_params/`.
3. Build image: `testing-framework/assets/stack/scripts/build_test_image.sh`.
- **Fix (recommended)**: Use run-examples.sh which handles everything:
```bash
scripts/run-examples.sh -t 60 -v 1 -e 1 compose
```
- **Fix (manual)**:
1. Build bundle: `scripts/build-bundle.sh --platform linux`
2. Set bundle path: `export NOMOS_BINARIES_TAR=.tmp/nomos-binaries-linux-v0.3.1.tar.gz`
3. Build image: `testing-framework/assets/stack/scripts/build_test_image.sh`
### "Failed to load KZG parameters" or "Circuit file not found"
- **Cause**: DA workload requires KZG circuit assets that aren't present.
- **Fix**:
1. Fetch assets: `scripts/setup-nomos-circuits.sh v0.3.1 /tmp/nomos-circuits`.
2. Copy to expected path:
`cp -r /tmp/nomos-circuits/* testing-framework/assets/stack/kzgrs_test_params/`.
3. For Compose/K8s: rebuild image with assets baked in.
- **Cause**: DA workload requires KZG circuit assets. The file `testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params` (note repeated filename) must exist. Inside containers, it's at `/kzgrs_test_params/kzgrs_test_params`.
- **Fix (recommended)**: Use run-examples.sh which handles setup:
```bash
scripts/run-examples.sh -t 60 -v 1 -e 1 <mode>
```
- **Fix (manual)**:
1. Fetch assets: `scripts/setup-nomos-circuits.sh v0.3.1 /tmp/nomos-circuits`
2. Copy to expected path: `cp -r /tmp/nomos-circuits/* testing-framework/assets/stack/kzgrs_test_params/`
3. Verify file exists: `ls -lh testing-framework/assets/stack/kzgrs_test_params/kzgrs_test_params`
4. For Compose/K8s: rebuild image with assets baked in
For detailed logging configuration and observability setup, see [Operations](operations.md).

View File

@@ -3,17 +3,35 @@ use std::{collections::HashSet, num::NonZeroUsize, path::PathBuf, time::Duration
use chain_leader::LeaderConfig as ChainLeaderConfig;
use chain_network::{BootstrapConfig as ChainBootstrapConfig, OrphanConfig, SyncConfig};
use chain_service::StartingState;
use nomos_node::config::{
cryptarchia::{
deployment::{SdpConfig as DeploymentSdpConfig, Settings as CryptarchiaDeploymentSettings},
serde::{
Config as CryptarchiaConfig, LeaderConfig as CryptarchiaLeaderConfig,
NetworkConfig as CryptarchiaNetworkConfig, ServiceConfig as CryptarchiaServiceConfig,
},
},
mempool::deployment::Settings as MempoolDeploymentSettings,
time::deployment::Settings as TimeDeploymentSettings,
use nomos_api::ApiServiceSettings;
use nomos_da_sampling::{
DaSamplingServiceSettings, backend::kzgrs::KzgrsSamplingBackendSettings,
verifier::kzgrs::KzgrsDaVerifierSettings as SamplingVerifierSettings,
};
use nomos_da_verifier::{
DaVerifierServiceSettings,
backend::{kzgrs::KzgrsDaVerifierSettings, trigger::MempoolPublishTriggerConfig},
storage::adapters::rocksdb::RocksAdapterSettings as VerifierStorageAdapterSettings,
};
use nomos_node::{
api::backend::AxumBackendSettings as NodeAxumBackendSettings,
config::{
cryptarchia::{
deployment::{
SdpConfig as DeploymentSdpConfig, Settings as CryptarchiaDeploymentSettings,
},
serde::{
Config as CryptarchiaConfig, LeaderConfig as CryptarchiaLeaderConfig,
NetworkConfig as CryptarchiaNetworkConfig,
ServiceConfig as CryptarchiaServiceConfig,
},
},
mempool::deployment::Settings as MempoolDeploymentSettings,
time::{deployment::Settings as TimeDeploymentSettings, serde::Config as TimeConfig},
},
};
use nomos_utils::math::NonNegativeF64;
use nomos_wallet::WalletServiceSettings;
use crate::topology::configs::GeneralConfig;
@@ -86,3 +104,111 @@ pub(crate) fn cryptarchia_config(config: &GeneralConfig) -> CryptarchiaConfig {
},
}
}
pub(crate) fn da_verifier_config(
config: &GeneralConfig,
) -> DaVerifierServiceSettings<KzgrsDaVerifierSettings, (), (), VerifierStorageAdapterSettings> {
DaVerifierServiceSettings {
share_verifier_settings: KzgrsDaVerifierSettings {
global_params_path: config.da_config.global_params_path.clone(),
domain_size: config.da_config.num_subnets as usize,
},
tx_verifier_settings: (),
network_adapter_settings: (),
storage_adapter_settings: VerifierStorageAdapterSettings {
blob_storage_directory: "./".into(),
},
mempool_trigger_settings: MempoolPublishTriggerConfig {
publish_threshold: NonNegativeF64::try_from(0.8).unwrap(),
share_duration: Duration::from_secs(5),
prune_duration: Duration::from_secs(30),
prune_interval: Duration::from_secs(5),
},
}
}
pub(crate) fn da_sampling_config(
config: &GeneralConfig,
) -> DaSamplingServiceSettings<KzgrsSamplingBackendSettings, SamplingVerifierSettings> {
DaSamplingServiceSettings {
sampling_settings: KzgrsSamplingBackendSettings {
num_samples: config.da_config.num_samples,
num_subnets: config.da_config.num_subnets,
old_blobs_check_interval: config.da_config.old_blobs_check_interval,
blobs_validity_duration: config.da_config.blobs_validity_duration,
},
share_verifier_settings: SamplingVerifierSettings {
global_params_path: config.da_config.global_params_path.clone(),
domain_size: config.da_config.num_subnets as usize,
},
commitments_wait_duration: Duration::from_secs(1),
sdp_blob_trigger_sampling_delay: crate::adjust_timeout(Duration::from_secs(5)),
}
}
pub(crate) fn time_config(config: &GeneralConfig) -> TimeConfig {
TimeConfig {
backend: nomos_time::backends::NtpTimeBackendSettings {
ntp_server: config.time_config.ntp_server.clone(),
ntp_client_settings: nomos_time::backends::ntp::async_client::NTPClientSettings {
timeout: config.time_config.timeout,
listening_interface: config.time_config.interface.clone(),
},
update_interval: config.time_config.update_interval,
},
chain_start_time: config.time_config.chain_start_time,
}
}
pub(crate) fn mempool_config() -> nomos_node::config::mempool::serde::Config {
nomos_node::config::mempool::serde::Config {
// Disable mempool recovery for hermetic tests.
recovery_path: PathBuf::new(),
}
}
pub(crate) fn tracing_settings(config: &GeneralConfig) -> nomos_tracing_service::TracingSettings {
config.tracing_config.tracing_settings.clone()
}
pub(crate) fn http_config(config: &GeneralConfig) -> ApiServiceSettings<NodeAxumBackendSettings> {
ApiServiceSettings {
backend_settings: NodeAxumBackendSettings {
address: config.api_config.address,
rate_limit_per_second: 10000,
rate_limit_burst: 10000,
max_concurrent_requests: 1000,
..Default::default()
},
}
}
pub(crate) fn testing_http_config(
config: &GeneralConfig,
) -> ApiServiceSettings<NodeAxumBackendSettings> {
ApiServiceSettings {
backend_settings: NodeAxumBackendSettings {
address: config.api_config.testing_http_address,
rate_limit_per_second: 10000,
rate_limit_burst: 10000,
max_concurrent_requests: 1000,
..Default::default()
},
}
}
pub(crate) fn wallet_settings(config: &GeneralConfig) -> WalletServiceSettings {
WalletServiceSettings {
known_keys: {
let mut keys = HashSet::from_iter([config.consensus_config.leader_config.pk]);
keys.extend(
config
.consensus_config
.wallet_accounts
.iter()
.map(crate::topology::configs::wallet::WalletAccount::public_key),
);
keys
},
}
}
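
The helpers added above (`http_config`, `wallet_settings`, and friends) replace duplicated inline struct literals in the validator and executor config builders. A minimal, self-contained sketch of the pattern — all types and values here are hypothetical stand-ins for illustration, not the real nomos types:

```rust
// Hypothetical stand-in for a node service setting that used to be
// written out as an inline struct literal at every call site.
#[derive(Debug, Clone, PartialEq)]
struct ApiSettings {
    address: String,
    rate_limit_per_second: u32,
}

// Hypothetical stand-in for the shared GeneralConfig.
struct GeneralConfig {
    api_address: String,
}

// Shared builder: one place owns the defaults instead of two call sites.
fn http_config(config: &GeneralConfig) -> ApiSettings {
    ApiSettings {
        address: config.api_address.clone(),
        rate_limit_per_second: 10_000,
    }
}

// Both node kinds delegate to the shared builder, so their settings
// agree by construction instead of by copy-paste discipline.
fn validator_config(config: &GeneralConfig) -> ApiSettings {
    http_config(config)
}

fn executor_config(config: &GeneralConfig) -> ApiSettings {
    http_config(config)
}

fn main() {
    let general = GeneralConfig {
        api_address: "127.0.0.1:8080".into(),
    };
    assert_eq!(validator_config(&general), executor_config(&general));
    println!("shared config: {:?}", validator_config(&general));
}
```

The same shape is applied below to `create_executor_config` and `create_validator_config`, which now call `da_verifier_config(&config)`, `time_config(&config)`, etc., instead of repeating the struct literals.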

View File

@@ -1,4 +1,4 @@
use std::{collections::HashSet, path::PathBuf, time::Duration};
use std::time::Duration;
use nomos_da_dispersal::{
DispersalServiceSettings,
@@ -12,40 +12,23 @@ use nomos_da_network_service::{
common::DaNetworkBackendSettings, executor::DaNetworkExecutorBackendSettings,
},
};
use nomos_da_sampling::{
DaSamplingServiceSettings, backend::kzgrs::KzgrsSamplingBackendSettings,
verifier::kzgrs::KzgrsDaVerifierSettings as SamplingVerifierSettings,
};
use nomos_da_verifier::{
DaVerifierServiceSettings,
backend::{kzgrs::KzgrsDaVerifierSettings, trigger::MempoolPublishTriggerConfig},
storage::adapters::rocksdb::RocksAdapterSettings as VerifierStorageAdapterSettings,
};
use nomos_executor::config::Config as ExecutorConfig;
use nomos_node::{
RocksBackendSettings,
api::backend::AxumBackendSettings as NodeAxumBackendSettings,
config::{
deployment::DeploymentSettings, mempool::serde::Config as MempoolConfig,
time::serde::Config as TimeConfig,
},
};
use nomos_node::{RocksBackendSettings, config::deployment::DeploymentSettings};
use nomos_sdp::SdpSettings;
use nomos_time::backends::{NtpTimeBackendSettings, ntp::async_client::NTPClientSettings};
use nomos_utils::math::NonNegativeF64;
use nomos_wallet::WalletServiceSettings;
use crate::{
adjust_timeout,
nodes::{
blend::build_blend_service_config,
common::{cryptarchia_config, cryptarchia_deployment, mempool_deployment, time_deployment},
common::{
cryptarchia_config, cryptarchia_deployment, da_sampling_config, da_verifier_config,
http_config, mempool_config, mempool_deployment, testing_http_config, time_config,
time_deployment, tracing_settings, wallet_settings,
},
},
topology::configs::{GeneralConfig, wallet::WalletAccount},
topology::configs::GeneralConfig,
};
#[must_use]
#[expect(clippy::too_many_lines, reason = "TODO: Address this at some point.")]
pub fn create_executor_config(config: GeneralConfig) -> ExecutorConfig {
let network_config = config.network_config.clone();
let (blend_user_config, blend_deployment, network_deployment) =
@@ -67,10 +50,10 @@ pub fn create_executor_config(config: GeneralConfig) -> ExecutorConfig {
da_network: DaNetworkConfig {
backend: DaNetworkExecutorBackendSettings {
validator_settings: DaNetworkBackendSettings {
node_key: config.da_config.node_key,
listening_address: config.da_config.listening_address,
policy_settings: config.da_config.policy_settings,
monitor_settings: config.da_config.monitor_settings,
node_key: config.da_config.node_key.clone(),
listening_address: config.da_config.listening_address.clone(),
policy_settings: config.da_config.policy_settings.clone(),
monitor_settings: config.da_config.monitor_settings.clone(),
balancer_interval: config.da_config.balancer_interval,
redial_cooldown: config.da_config.redial_cooldown,
replication_settings: config.da_config.replication_settings,
@@ -91,47 +74,10 @@ pub fn create_executor_config(config: GeneralConfig) -> ExecutorConfig {
subnet_threshold: config.da_config.num_samples as usize,
min_session_members: config.da_config.num_samples as usize,
},
da_verifier: DaVerifierServiceSettings {
share_verifier_settings: KzgrsDaVerifierSettings {
global_params_path: config.da_config.global_params_path.clone(),
domain_size: config.da_config.num_subnets as usize,
},
tx_verifier_settings: (),
network_adapter_settings: (),
storage_adapter_settings: VerifierStorageAdapterSettings {
blob_storage_directory: "./".into(),
},
mempool_trigger_settings: MempoolPublishTriggerConfig {
publish_threshold: NonNegativeF64::try_from(0.8).unwrap(),
share_duration: Duration::from_secs(5),
prune_duration: Duration::from_secs(30),
prune_interval: Duration::from_secs(5),
},
},
tracing: config.tracing_config.tracing_settings,
http: nomos_api::ApiServiceSettings {
backend_settings: NodeAxumBackendSettings {
address: config.api_config.address,
rate_limit_per_second: 10000,
rate_limit_burst: 10000,
max_concurrent_requests: 1000,
..Default::default()
},
},
da_sampling: DaSamplingServiceSettings {
sampling_settings: KzgrsSamplingBackendSettings {
num_samples: config.da_config.num_samples,
num_subnets: config.da_config.num_subnets,
old_blobs_check_interval: config.da_config.old_blobs_check_interval,
blobs_validity_duration: config.da_config.blobs_validity_duration,
},
share_verifier_settings: SamplingVerifierSettings {
global_params_path: config.da_config.global_params_path.clone(),
domain_size: config.da_config.num_subnets as usize,
},
commitments_wait_duration: Duration::from_secs(1),
sdp_blob_trigger_sampling_delay: adjust_timeout(Duration::from_secs(5)),
},
da_verifier: da_verifier_config(&config),
tracing: tracing_settings(&config),
http: http_config(&config),
da_sampling: da_sampling_config(&config),
storage: RocksBackendSettings {
db_path: "./db".into(),
read_only: false,
@@ -142,52 +88,18 @@ pub fn create_executor_config(config: GeneralConfig) -> ExecutorConfig {
encoder_settings: EncoderSettings {
num_columns: config.da_config.num_subnets as usize,
with_cache: false,
global_params_path: config.da_config.global_params_path,
global_params_path: config.da_config.global_params_path.clone(),
},
dispersal_timeout: Duration::from_secs(20),
retry_cooldown: Duration::from_secs(3),
retry_limit: 2,
},
},
time: TimeConfig {
backend: NtpTimeBackendSettings {
ntp_server: config.time_config.ntp_server,
ntp_client_settings: NTPClientSettings {
timeout: config.time_config.timeout,
listening_interface: config.time_config.interface,
},
update_interval: config.time_config.update_interval,
},
chain_start_time: config.time_config.chain_start_time,
},
mempool: MempoolConfig {
// Disable mempool recovery for hermetic tests.
recovery_path: PathBuf::new(),
},
time: time_config(&config),
mempool: mempool_config(),
sdp: SdpSettings { declaration: None },
wallet: WalletServiceSettings {
known_keys: {
let mut keys = HashSet::from_iter([config.consensus_config.leader_config.pk]);
keys.extend(
config
.consensus_config
.wallet_accounts
.iter()
.map(WalletAccount::public_key),
);
keys
},
},
wallet: wallet_settings(&config),
testing_http: testing_http_config(&config),
key_management: config.kms_config,
testing_http: nomos_api::ApiServiceSettings {
backend_settings: NodeAxumBackendSettings {
address: config.api_config.testing_http_address,
rate_limit_per_second: 10000,
rate_limit_burst: 10000,
max_concurrent_requests: 1000,
..Default::default()
},
},
}
}

View File

@@ -1,5 +1,3 @@
use std::{collections::HashSet, path::PathBuf, time::Duration};
use nomos_da_network_core::{
protocols::sampling::SubnetsConfig, swarm::DAConnectionPolicySettings,
};
@@ -7,42 +5,24 @@ use nomos_da_network_service::{
NetworkConfig as DaNetworkConfig, api::http::ApiAdapterSettings,
backends::libp2p::common::DaNetworkBackendSettings,
};
use nomos_da_sampling::{
DaSamplingServiceSettings, backend::kzgrs::KzgrsSamplingBackendSettings,
verifier::kzgrs::KzgrsDaVerifierSettings as SamplingVerifierSettings,
};
use nomos_da_verifier::{
DaVerifierServiceSettings,
backend::{kzgrs::KzgrsDaVerifierSettings, trigger::MempoolPublishTriggerConfig},
storage::adapters::rocksdb::RocksAdapterSettings as VerifierStorageAdapterSettings,
};
use nomos_node::{
Config as ValidatorConfig, RocksBackendSettings,
api::backend::AxumBackendSettings as NodeAxumBackendSettings,
config::{
deployment::DeploymentSettings, mempool::serde::Config as MempoolConfig,
time::serde::Config as TimeConfig,
},
Config as ValidatorConfig, RocksBackendSettings, config::deployment::DeploymentSettings,
};
use nomos_sdp::SdpSettings;
use nomos_time::backends::{NtpTimeBackendSettings, ntp::async_client::NTPClientSettings};
use nomos_utils::math::NonNegativeF64;
use nomos_wallet::WalletServiceSettings;
use crate::{
adjust_timeout,
nodes::{
blend::build_blend_service_config,
common::{cryptarchia_config, cryptarchia_deployment, mempool_deployment, time_deployment},
common::{
cryptarchia_config, cryptarchia_deployment, da_sampling_config, da_verifier_config,
http_config, mempool_config, mempool_deployment, testing_http_config, time_config,
time_deployment, tracing_settings, wallet_settings,
},
},
topology::configs::{GeneralConfig, wallet::WalletAccount},
topology::configs::GeneralConfig,
};
#[must_use]
#[expect(
clippy::too_many_lines,
reason = "Validator config wiring aggregates many service settings"
)]
pub fn create_validator_config(config: GeneralConfig) -> ValidatorConfig {
let da_policy_settings = config.da_config.policy_settings.clone();
let network_config = config.network_config.clone();
@@ -62,8 +42,8 @@ pub fn create_validator_config(config: GeneralConfig) -> ValidatorConfig {
cryptarchia: cryptarchia_config(&config),
da_network: DaNetworkConfig {
backend: DaNetworkBackendSettings {
node_key: config.da_config.node_key,
listening_address: config.da_config.listening_address,
node_key: config.da_config.node_key.clone(),
listening_address: config.da_config.listening_address.clone(),
policy_settings: DAConnectionPolicySettings {
min_dispersal_peers: 0,
min_replication_peers: da_policy_settings.min_replication_peers,
@@ -72,7 +52,7 @@ pub fn create_validator_config(config: GeneralConfig) -> ValidatorConfig {
max_replication_failures: da_policy_settings.max_replication_failures,
malicious_threshold: da_policy_settings.malicious_threshold,
},
monitor_settings: config.da_config.monitor_settings,
monitor_settings: config.da_config.monitor_settings.clone(),
balancer_interval: config.da_config.balancer_interval,
redial_cooldown: config.da_config.redial_cooldown,
replication_settings: config.da_config.replication_settings,
@@ -91,90 +71,20 @@ pub fn create_validator_config(config: GeneralConfig) -> ValidatorConfig {
subnet_threshold: config.da_config.num_samples as usize,
min_session_members: config.da_config.num_samples as usize,
},
da_verifier: DaVerifierServiceSettings {
share_verifier_settings: KzgrsDaVerifierSettings {
global_params_path: config.da_config.global_params_path.clone(),
domain_size: config.da_config.num_subnets as usize,
},
tx_verifier_settings: (),
network_adapter_settings: (),
storage_adapter_settings: VerifierStorageAdapterSettings {
blob_storage_directory: "./".into(),
},
mempool_trigger_settings: MempoolPublishTriggerConfig {
publish_threshold: NonNegativeF64::try_from(0.8).unwrap(),
share_duration: Duration::from_secs(5),
prune_duration: Duration::from_secs(30),
prune_interval: Duration::from_secs(5),
},
},
tracing: config.tracing_config.tracing_settings,
http: nomos_api::ApiServiceSettings {
backend_settings: NodeAxumBackendSettings {
address: config.api_config.address,
rate_limit_per_second: 10000,
rate_limit_burst: 10000,
max_concurrent_requests: 1000,
..Default::default()
},
},
da_sampling: DaSamplingServiceSettings {
sampling_settings: KzgrsSamplingBackendSettings {
num_samples: config.da_config.num_samples,
num_subnets: config.da_config.num_subnets,
old_blobs_check_interval: config.da_config.old_blobs_check_interval,
blobs_validity_duration: config.da_config.blobs_validity_duration,
},
share_verifier_settings: SamplingVerifierSettings {
global_params_path: config.da_config.global_params_path,
domain_size: config.da_config.num_subnets as usize,
},
commitments_wait_duration: Duration::from_secs(1),
sdp_blob_trigger_sampling_delay: adjust_timeout(Duration::from_secs(5)),
},
da_verifier: da_verifier_config(&config),
tracing: tracing_settings(&config),
http: http_config(&config),
da_sampling: da_sampling_config(&config),
storage: RocksBackendSettings {
db_path: "./db".into(),
read_only: false,
column_family: Some("blocks".into()),
},
time: TimeConfig {
backend: NtpTimeBackendSettings {
ntp_server: config.time_config.ntp_server,
ntp_client_settings: NTPClientSettings {
timeout: config.time_config.timeout,
listening_interface: config.time_config.interface,
},
update_interval: config.time_config.update_interval,
},
chain_start_time: config.time_config.chain_start_time,
},
mempool: MempoolConfig {
// Disable mempool recovery for hermetic tests.
recovery_path: PathBuf::new(),
},
time: time_config(&config),
mempool: mempool_config(),
sdp: SdpSettings { declaration: None },
wallet: WalletServiceSettings {
known_keys: {
let mut keys = HashSet::from_iter([config.consensus_config.leader_config.pk]);
keys.extend(
config
.consensus_config
.wallet_accounts
.iter()
.map(WalletAccount::public_key),
);
keys
},
},
testing_http: testing_http_config(&config),
wallet: wallet_settings(&config),
key_management: config.kms_config,
testing_http: nomos_api::ApiServiceSettings {
backend_settings: NodeAxumBackendSettings {
address: config.api_config.testing_http_address,
rate_limit_per_second: 10000,
rate_limit_burst: 10000,
max_concurrent_requests: 1000,
..Default::default()
},
},
}
}