# Subduction CLI
> [!CAUTION]
> This is an early release preview. It has a very unstable API. No guarantees are given. DO NOT use for production use cases at this time. USE AT YOUR OWN RISK.
## Overview

The Subduction CLI provides multiple server modes:

- `server` - Subduction document sync server (persistent CRDT storage)
- `client` - Subduction client connecting to a server
- `ephemeral-relay` - Simple relay for ephemeral messages (presence, awareness)
## Installation

### Using Nix

```bash
# Run directly without installing
nix run github:inkandswitch/subduction -- server

# Install to your profile
nix profile install github:inkandswitch/subduction

# Then run (assumes the installed binary is named `subduction`)
subduction server
```
### Adding to a Flake
```nix
{
  inputs.subduction.url = "github:inkandswitch/subduction";
  inputs.home-manager.url = "github:nix-community/home-manager"; # needed for the Home Manager example below

  outputs = { nixpkgs, home-manager, subduction, ... }: {
    # NixOS
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      modules = [{
        environment.systemPackages = [
          subduction.packages.x86_64-linux.default
        ];
      }];
    };

    # Home Manager
    homeConfigurations.myuser = home-manager.lib.homeManagerConfiguration {
      modules = [{
        home.packages = [
          subduction.packages.x86_64-linux.default
        ];
      }];
    };
  };
}
```
### Using Cargo 🦀

```bash
# Build from source
cargo build --release

# Run (binary name assumed; adjust for your checkout)
cargo run -- server
```
## Commands
### Server Mode

Start a Subduction server for document synchronization:

```bash
# With Nix
nix run github:inkandswitch/subduction -- server

# With Cargo
cargo run -- server
```

Options:

- `--socket <ADDR>` - Socket address to bind to (default: `0.0.0.0:8080`)
- `--data-dir <PATH>` - Data directory for storage (default: `./data`)
- `--peer-id <ID>` - Peer ID as 64 hex characters (default: auto-generated)
- `--timeout <SECS>` - Request timeout in seconds (default: `5`)
- `--peer <URL>` - Peer WebSocket URL to connect to on startup (can be specified multiple times)
- `--metrics` - Enable the Prometheus metrics server (disabled by default)
- `--metrics-port <PORT>` - Port for the Prometheus metrics endpoint (default: `9090`; only used if `--metrics` is enabled)
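Putting the options together, a server that persists to a custom directory, syncs with a peer, and exposes metrics could be started like this (a sketch built from the documented flags; the binary name `subduction` and the peer URL are assumptions):

```shell
# Hypothetical invocation using the documented server flags.
# The binary name `subduction` is assumed, not confirmed here.
subduction server \
  --socket 0.0.0.0:8080 \
  --data-dir /var/lib/subduction \
  --timeout 10 \
  --peer ws://peer.example.com:8080 \
  --metrics --metrics-port 9090
```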
### Client Mode

Connect as a client to a Subduction server:

```bash
# With Nix
nix run github:inkandswitch/subduction -- client

# With Cargo
cargo run -- client
```

Options:

- `--server <URL>` - WebSocket server URL to connect to
- `--data-dir <PATH>` - Data directory for local storage (default: `./client-data`)
- `--peer-id <ID>` - Peer ID as 64 hex characters (default: auto-generated)
- `--timeout <SECS>` - Request timeout in seconds (default: `5`)
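As a concrete sketch, a client pointing at a remote server could look like the following (the binary name `subduction` is an assumption; the flags are the documented ones):

```shell
# Hypothetical client invocation built from the documented flags.
subduction client \
  --server ws://sync.example.com:8080 \
  --data-dir ./client-data \
  --timeout 5
```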
### Ephemeral Relay Mode

Start a relay server for ephemeral messages (presence, awareness):

```bash
# With Nix
nix run github:inkandswitch/subduction -- ephemeral-relay

# With Cargo
cargo run -- ephemeral-relay
```

Alias: `relay`

Options:

- `--socket <ADDR>` - Socket address to bind to (default: `0.0.0.0:8081`)
- `--max-message-size <BYTES>` - Maximum message size in bytes (default: `1048576` = 1 MB)
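Using the `relay` alias, an explicit invocation might look like this (a sketch; the binary name `subduction` is assumed):

```shell
# Hypothetical invocation; `relay` is the documented alias for
# `ephemeral-relay`, and 1048576 bytes is the documented default.
subduction relay \
  --socket 0.0.0.0:8081 \
  --max-message-size 1048576
```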
## Architecture
The ephemeral relay server provides a simple broadcast mechanism for ephemeral messages like presence, awareness, cursor positions, etc.
```
┌────────────────────────────────────────┐
│      Client (e.g. automerge-repo)      │
│  ┌──────────────┐   ┌──────────────┐   │
│  │   WS :8080   │   │   WS :8081   │   │
│  │ (subduction) │   │ (ephemeral)  │   │
│  └──────┬───────┘   └──────┬───────┘   │
└─────────┼──────────────────┼───────────┘
          │                  │
          ▼                  ▼
  ┌──────────────┐   ┌──────────────┐
  │  Subduction  │   │  Ephemeral   │
  │    Server    │   │ Relay Server │
  │  Port 8080   │   │  Port 8081   │
  └──────────────┘   └──────────────┘
   Document Sync    Presence/Awareness
    (persistent)       (ephemeral)
```
### How It Works

**Subduction Server** (default port 8080)

- Handles document synchronization
- Persists changes to storage
- Uses the Subduction protocol (CBOR-encoded Messages)
- For CRDTs, fragments, commits, batch sync

**Ephemeral Relay** (default port 8081)

- Implements the automerge-repo NetworkSubsystem protocol handshake
- Responds to "join" messages with "peer" messages
- Broadcasts ephemeral messages between connected peers
- Does NOT persist messages
- For presence, awareness, cursors, temporary state
- Uses sharded deduplication with AHash for DoS-resistant message filtering
### Client Configuration

In your automerge-repo client:

```typescript
const repo = new Repo({
  network: [
    // Document sync via Subduction
    new WebSocketClientAdapter("ws://127.0.0.1:8080", 5000, { subductionMode: true }),
    // Ephemeral messages via relay server
    new WebSocketClientAdapter("ws://127.0.0.1:8081"),
  ],
  subduction: await Subduction.hydrate(db),
})
```
### Message Flow

**Document Changes**

```
Client → WebSocket:8080 → Subduction Server → Storage
                                  ↓
                            Other Clients
```

**Presence Updates**

```
Client → WebSocket:8081 → Relay Server → Other Clients
                           (broadcast)
```
### Benefits
- Clean separation: Document sync and ephemeral messages use different protocols
- No Subduction changes: Relay server is independent
- Simple relay: Just broadcasts messages, no processing
- Stateless: Relay server doesn't persist anything
- Scalable: Can run relay on different machine/port as needed
- DoS-resistant: Sharded deduplication prevents duplicate message floods
## Production Considerations
For production use, you might want to:
- Add authentication - Verify peer identities
- Add rate limiting - Prevent spam
- Add targeted relay - Parse `targetId` and relay specifically (vs. broadcast)
- Add metrics - Track connections, message rates
- Use single port - Multiplex both protocols on one WebSocket (more complex)
- Add message authentication - Prevent forged ephemeral messages (see code TODOs)
- Add timestamp validation - Prevent replay attacks (see code TODOs)
## Typical Setup

For a complete setup supporting both document sync and presence:

**Terminal 1: Document Sync Server**

```bash
subduction server   # binary name assumed; defaults to 0.0.0.0:8080
```

**Terminal 2: Ephemeral Relay Server**

```bash
subduction ephemeral-relay   # defaults to 0.0.0.0:8081
```

Your clients can then connect to:

- Port 8080 for document synchronization
- Port 8081 for ephemeral messages (presence, awareness, etc.)
## Environment Variables

- `RUST_LOG` - Set log level (e.g., `RUST_LOG=debug`)
- `TOKIO_CONSOLE` - Enable tokio console for debugging async tasks
## Examples

```bash
# (Examples assume the binary is available on PATH as `subduction`.)

# Server with debug logging
RUST_LOG=debug subduction server

# Server connecting to peers on startup for bidirectional sync
subduction server --peer ws://192.168.1.100:8080 --peer ws://192.168.1.101:8080

# Server with metrics enabled
subduction server --metrics --metrics-port 9090

# Client connecting to remote server
subduction client --server ws://sync.example.com:8080

# Ephemeral relay on custom port
subduction ephemeral-relay --socket 0.0.0.0:9000

# Ephemeral relay with 5 MB message size limit
subduction ephemeral-relay --max-message-size 5242880
```
The flake provides NixOS and Home Manager modules for running Subduction as a managed service.
## NixOS (systemd)
```nix
{
  inputs.subduction.url = "github:inkandswitch/subduction";

  outputs = { nixpkgs, subduction, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        subduction.nixosModules.default
        {
          services.subduction = {
            # Document sync server
            server = {
              enable = true;
              socket = "0.0.0.0:8080";
              dataDir = "/var/lib/subduction";
              timeout = 5;
              # peerId = "..."; # optional: 64 hex chars

              # Connect to other peers on startup for bidirectional sync
              peers = [
                "ws://192.168.1.100:8080"
                "ws://192.168.1.101:8080"
              ];

              # Prometheus metrics (disabled by default)
              enableMetrics = true;
              metricsPort = 9090;
            };

            # Ephemeral message relay
            relay = {
              enable = true;
              socket = "0.0.0.0:8081";
              maxMessageSize = 1048576; # 1 MB
            };

            # Shared settings
            user = "subduction";
            group = "subduction";
            openFirewall = true; # opens server + relay ports
          };
        }
      ];
    };
  };
}
```
This creates two systemd services:

- `subduction.service` - Document sync server
- `subduction-relay.service` - Ephemeral message relay
Manage with:
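For example, with standard systemd tooling (service names as listed above):

```shell
# Standard systemd commands for the two services created above.
sudo systemctl status subduction.service
sudo systemctl restart subduction-relay.service

# Follow the document sync server's logs
journalctl -u subduction.service -f
```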
## Home Manager (user service)
Works on both Linux (systemd user service) and macOS (launchd agent):
```nix
{
  inputs.subduction.url = "github:inkandswitch/subduction";

  outputs = { home-manager, subduction, ... }: {
    homeConfigurations.myuser = home-manager.lib.homeManagerConfiguration {
      modules = [
        subduction.homeManagerModules.default
        {
          services.subduction = {
            server = {
              enable = true;
              socket = "127.0.0.1:8080";
              # dataDir defaults to ~/.local/share/subduction

              # Connect to other peers on startup
              peers = ["ws://sync.example.com:8080"];
            };
            relay = {
              enable = true;
              socket = "127.0.0.1:8081";
            };
          };
        }
      ];
    };
  };
}
```
On Linux, manage with:
On macOS, manage with:
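A sketch of typical management commands for a user-level service (the unit name and launchd label are assumptions; check your generated configuration for the exact names):

```shell
# Linux: systemd user service (unit name assumed to mirror the system service)
systemctl --user status subduction.service

# macOS: launchd agent (label assumed; list agents to find the exact name)
launchctl list | grep -i subduction
```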
## Behind a Reverse Proxy (Caddy)
When running behind Caddy or another reverse proxy, bind to localhost:
```nix
services.subduction = {
  server = {
    enable = true;
    socket = "127.0.0.1:8080";
  };
  relay = {
    enable = true;
    socket = "127.0.0.1:8081";
  };
  openFirewall = false; # Caddy handles external access
};

services.caddy = {
  enable = true;
  virtualHosts."sync.example.com".extraConfig = ''
    reverse_proxy localhost:8080
  '';
  virtualHosts."relay.example.com".extraConfig = ''
    reverse_proxy localhost:8081
  '';
};
```
Caddy automatically handles WebSocket upgrades and TLS certificates.
## Monitoring
The Subduction server exposes Prometheus metrics on a configurable port (default: 9090).
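To verify the endpoint is up, you can scrape it by hand. This sketch assumes the server was started with `--metrics` and the Prometheus-conventional `/metrics` path; verify the path against your deployment:

```shell
# Quick manual scrape; assumes --metrics --metrics-port 9090 and
# the conventional /metrics exposition path.
curl -s http://localhost:9090/metrics | head -n 20
```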
### Server Metrics Options

```bash
# Enable metrics (disabled by default)
subduction server --metrics --metrics-port 9090
```
### Development Monitoring Stack

When developing locally with Nix, use the `monitoring:start` command to launch Prometheus and Grafana with pre-configured dashboards:

```bash
# Enter the dev shell
nix develop

# Start the monitoring stack
monitoring:start
```
This starts:

- Prometheus at `http://localhost:9092` - scrapes metrics from the server
- Grafana at `http://localhost:3939` - pre-configured dashboards
The Grafana dashboard includes panels for connections, messages, sync operations, and storage.
### Production Monitoring
For production, configure your Prometheus instance to scrape the metrics endpoint:
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'subduction'
    static_configs:
      - targets: ['sync.example.com:9090'] # your server's host; port = --metrics-port
```
Import the Grafana dashboard from `subduction_cli/monitoring/grafana/provisioning/dashboards/subduction.json`.