# cfgsync

`cfgsync` is a small library stack for bootstrap-time config delivery.
The library solves one problem: nodes need to identify themselves, wait until configuration is ready, fetch the files they need, write them locally, and then continue startup. `cfgsync` owns that transport and serving loop. The application using it still decides what “ready” means and what files should be generated.

That split is the point of the design:
- `cfgsync` owns registration, polling, payload transport, and file delivery.
- the application adapter owns readiness policy and artifact generation.
The result is a reusable library without application-specific bootstrap logic leaking into core crates.
## How it works
The normal flow is registration-backed serving.

Each node first sends a registration containing:

- a stable node identifier
- its IP address
- optional typed application metadata

The server stores registrations and builds a `RegistrationSnapshot`. The application provides a `RegistrationSnapshotMaterializer`, which receives that snapshot and decides whether configuration is ready yet.

If the materializer returns `NotReady`, the node keeps polling. If it returns `Ready`, cfgsync serves one payload containing:

- node-local files for the requesting node
- optional shared files that every node should receive

The node then writes those files locally and continues startup.

That is the main model. Everything else is a variation of it.
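To make the loop concrete, here is a minimal std-only sketch of the node side. The names below (`PollResponse`, `bootstrap`, `poll_once`) are illustrative stand-ins, not the cfgsync API:

```rust
use std::collections::HashMap;

// Illustrative stand-in for the server's answer to a fetch; the real
// library models this with `MaterializationResult` and payload types.
enum PollResponse {
    NotReady,
    Ready(HashMap<String, String>), // path -> contents
}

// One node-side bootstrap loop: after registering, poll until the
// server reports that configuration is ready, then return the files.
fn bootstrap(mut poll_once: impl FnMut() -> PollResponse) -> HashMap<String, String> {
    loop {
        match poll_once() {
            PollResponse::NotReady => continue, // a real client would sleep here
            PollResponse::Ready(files) => return files,
        }
    }
}
```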
## Precomputed artifacts

Some systems already know the final artifacts before any node starts. That still fits the same model.

In that case the server simply starts with precomputed `MaterializedArtifacts`. Nodes still register and fetch through the same protocol, but the materializer already knows the final outputs. Registration becomes an identity and readiness gate, not a source of topology discovery.

This is why cfgsync no longer needs a separate “static mode” as a first-class concept. Precomputed serving is just registration-backed serving with an already-known result.
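Under that reading, a precomputed setup is just a materializer that ignores the snapshot contents. A sketch with simplified stand-in types (`Snapshot`, `Artifacts`, `Precomputed` are not the real adapter types):

```rust
use std::collections::HashMap;

// Simplified stand-ins for `RegistrationSnapshot` / `MaterializedArtifacts`.
type Snapshot = Vec<String>; // registered node identifiers
type Artifacts = HashMap<String, String>; // node id -> config contents

// A "precomputed" materializer: readiness does not depend on topology,
// because the final outputs are already known at server start.
struct Precomputed {
    artifacts: Artifacts,
}

impl Precomputed {
    fn materialize(&self, _snapshot: &Snapshot) -> Option<Artifacts> {
        // Always ready; registration is only an identity gate here.
        Some(self.artifacts.clone())
    }
}
```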
## Crate layout

### `cfgsync-artifacts`
This crate contains the file-level data model:

- `ArtifactFile` for a single file
- `ArtifactSet` for a group of files

If all you need is “which files exist and how they are grouped”, this is the crate to look at.
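The shape of that model can be sketched with std-only stand-ins (the real crate's constructors and field names may differ):

```rust
// Stand-in for `ArtifactFile`: one file, a path plus its contents.
struct ArtifactFileSketch {
    path: String,
    contents: String,
}

// Stand-in for `ArtifactSet`: a named grouping of files.
struct ArtifactSetSketch {
    files: Vec<ArtifactFileSketch>,
}

impl ArtifactSetSketch {
    // "Which files exist and how they are grouped" is the whole model.
    fn paths(&self) -> Vec<&str> {
        self.files.iter().map(|f| f.path.as_str()).collect()
    }
}
```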
### `cfgsync-core`
This crate contains the protocol and the low-level HTTP implementation.

Important types here are:

- `NodeRegistration`
- `RegistrationPayload`
- `NodeArtifactsPayload`
- `CfgsyncClient`
- `NodeConfigSource`

It also defines the HTTP contract:

- `POST /register`
- `POST /node`

The server answers with either a payload, `NotReady`, or `Missing`.
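The three possible answers map onto three client behaviors, which can be sketched as an enum the client matches on (`NodeResponse` and `describe` are stand-ins, not the wire format):

```rust
// Stand-in for the server's answer to `POST /node`.
enum NodeResponse {
    Payload(Vec<(String, String)>), // (path, contents) pairs
    NotReady,                       // registered, but artifacts not materialized yet
    Missing,                        // the node identifier is unknown to the server
}

// What a client does with each answer.
fn describe(response: &NodeResponse) -> &'static str {
    match response {
        NodeResponse::Payload(_) => "write files and continue startup",
        NodeResponse::NotReady => "keep polling",
        NodeResponse::Missing => "register first, then fetch again",
    }
}
```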
### `cfgsync-adapter`
This crate defines the application-facing seam.

The key types are:

- `RegistrationSnapshot`
- `RegistrationSnapshotMaterializer`
- `MaterializedArtifacts`
- `MaterializationResult`
The adapter’s job is simple: given the current registration snapshot, decide whether artifacts are ready, and if they are, return them.
The crate also contains reusable wrappers around that seam:

- `CachedSnapshotMaterializer`
- `PersistingSnapshotMaterializer`
- `RegistrationConfigSource`

These exist because caching and result persistence are generic orchestration concerns, not application-specific logic.
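Why caching is a generic concern fits in a few lines: a wrapper can memoize any materializer's first result without knowing anything about the application. The trait and wrapper below are simplified stand-ins, not the real `CachedSnapshotMaterializer`:

```rust
// Stand-in materializer trait: Some(artifacts) when ready, None otherwise.
trait Materialize {
    fn materialize(&mut self, snapshot: &[String]) -> Option<String>;
}

// Generic caching wrapper: once a result exists, reuse it and stop
// calling the inner materializer. No application knowledge required.
struct Cached<M: Materialize> {
    inner: M,
    result: Option<String>,
}

impl<M: Materialize> Cached<M> {
    fn new(inner: M) -> Self {
        Self { inner, result: None }
    }

    fn materialize(&mut self, snapshot: &[String]) -> Option<String> {
        if self.result.is_none() {
            self.result = self.inner.materialize(snapshot);
        }
        self.result.clone()
    }
}
```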
### `cfgsync-runtime`
This crate provides the operational entrypoints.

Use it when you want to run cfgsync rather than define its protocol:

- client-side fetch/write helpers
- server config loading
- direct serving helpers such as `serve(...)`

This is the crate that should feel like the normal “start here” path for users integrating cfgsync into a real system.
## Artifact model
The adapter usually thinks in full snapshots, but cfgsync serves one node at a time.

The materializer returns `MaterializedArtifacts`, which contain:

- node-local artifacts keyed by node identifier
- optional shared artifacts

When one node fetches config, cfgsync resolves that node’s local files, merges in the shared files, and returns a single payload.

That is why applications usually do not need a second “shared config” endpoint. Shared files can travel in the same payload as node-local files.
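The per-node resolution step can be sketched std-only; this illustrates the merge, it is not the library's code, and `resolve_payload` is a hypothetical name:

```rust
use std::collections::HashMap;

// Resolve one node's payload: its node-local files keyed by node id,
// plus the shared files every node receives, in a single list.
fn resolve_payload(
    node_id: &str,
    node_local: &HashMap<String, Vec<(String, String)>>,
    shared: &[(String, String)],
) -> Option<Vec<(String, String)>> {
    let mut payload = node_local.get(node_id)?.clone();
    // Shared files travel in the same payload as node-local files.
    payload.extend_from_slice(shared);
    Some(payload)
}
```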
## The adapter boundary
The adapter is where application semantics belong.

In practice, the adapter should define:

- the typed registration payload
- the readiness rule
- the conversion from registration snapshots into artifacts
- any shared artifact generation the application needs
Typical examples are:

- waiting for `n` initial nodes
- deriving peer lists from registrations
- generating one node-local config file per node
- generating one shared deployment file for all nodes

What does not belong in cfgsync core is equally important. Generic cfgsync should not understand:

- application-specific topology semantics
- genesis or deployment generation rules for one protocol
- application-specific command/state-machine logic
- domain-specific ideas of what a node “really is”

Those belong in the adapter or in the consuming application.
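For instance, the “deriving peer lists from registrations” case above reduces to a pure function over the snapshot. The types here are simplified stand-ins for the registration data, not the adapter API:

```rust
// Simplified registration: identifier plus address, as described in
// "How it works".
struct Reg {
    identifier: String,
    address: String,
}

// Each node's peer list is every other registered node's address.
fn peer_list(snapshot: &[Reg], own_id: &str) -> Vec<String> {
    snapshot
        .iter()
        .filter(|r| r.identifier != own_id)
        .map(|r| r.address.clone())
        .collect()
}
```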
## Start here

If you want the shortest path into the library, start with the end-to-end runtime example:

- `cfgsync/runtime/examples/minimal_cfgsync.rs`
It shows the full loop:

- define a snapshot materializer
- serve cfgsync
- register a node
- fetch artifacts
- write them locally

After that, the only concepts you usually need to learn are the ones in the next section.
## Minimal integration path

For a new application, the shortest sensible path is:

1. define a typed registration payload
2. implement `RegistrationSnapshotMaterializer`
3. return node-local and optional shared artifacts
4. serve them with `serve(...)`
5. use `CfgsyncClient` or the runtime helpers on the node side

That gives you the main value of the library without forcing extra application logic into cfgsync itself.
## Code sketch

Typed registration payload:
```rust
use cfgsync_core::NodeRegistration;

#[derive(serde::Serialize)]
struct MyNodeMetadata {
    network_port: u16,
    api_port: u16,
}

let registration = NodeRegistration::new("node-1", "127.0.0.1".parse().unwrap())
    .with_metadata(&MyNodeMetadata {
        network_port: 3000,
        api_port: 18080,
    })?;
```
Snapshot materializer:
```rust
use cfgsync_adapter::{
    DynCfgsyncError, MaterializationResult, MaterializedArtifacts, RegistrationSnapshot,
    RegistrationSnapshotMaterializer,
};
use cfgsync_artifacts::{ArtifactFile, ArtifactSet};

struct MyMaterializer;

impl RegistrationSnapshotMaterializer for MyMaterializer {
    fn materialize_snapshot(
        &self,
        registrations: &RegistrationSnapshot,
    ) -> Result<MaterializationResult, DynCfgsyncError> {
        // Readiness rule: wait until at least two nodes have registered.
        if registrations.len() < 2 {
            return Ok(MaterializationResult::NotReady);
        }

        // One node-local config file per registration.
        let nodes = registrations.iter().map(|registration| {
            (
                registration.identifier.clone(),
                ArtifactSet::new(vec![ArtifactFile::new(
                    "/config.yaml",
                    format!("id: {}\n", registration.identifier),
                )]),
            )
        });

        Ok(MaterializationResult::ready(
            MaterializedArtifacts::from_nodes(nodes),
        ))
    }
}
```
Serving:
```rust
use cfgsync_runtime::serve;

# async fn run() -> anyhow::Result<()> {
serve(4400, MyMaterializer).await?;
# Ok(())
# }
```
Fetching and writing artifacts:
```rust
use cfgsync_runtime::{Client, OutputMap};

# async fn run(registration: cfgsync_core::NodeRegistration) -> anyhow::Result<()> {
let outputs = OutputMap::config_and_shared(
    "/node-data/node-1/config.yaml",
    "/node-data/shared",
);

Client::new("http://127.0.0.1:4400")
    .fetch_and_write(&registration, &outputs)
    .await?;
# Ok(())
# }
```
## Compatibility

The intended public API is what the crate roots re-export today.

Some older compatibility paths still exist internally to avoid breaking current in-repo consumers, but they are not the main model and should not be treated as the recommended public surface.