knishio
Unified CLI for orchestrating the KnishIO validator stack — production deployment, Docker control, cell management, database management, benchmarks, embeddings, and health checks.
Quick Start
# Install the CLI
# See what the CLI detected about your host (zero side effects)
# Start the validator stack — accel profile is auto-detected
# (NVIDIA host → cuda, Apple Silicon + DMR → dmr, otherwise → cpu)
# Create a cell and check health
# Run a benchmark
# Tear it all down
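Filled in as a sketch, using the subcommands documented below (the cell slug and some flags are illustrative):

```shell
# Install the CLI (crate name assumed to match the binary)
cargo install knishio

# See what the CLI detected about your host (zero side effects)
knishio detect

# Start the validator stack — accel profile is auto-detected
# (NVIDIA host → cuda, Apple Silicon + DMR → dmr, otherwise → cpu)
knishio start -d

# Create a cell and check health
knishio cell create mycell --name "My Cell"
knishio health

# Run a benchmark
knishio bench run --identities 10

# Tear it all down
knishio destroy --volumes
```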
Every docker-touching command prints an Environment block so you always see exactly what was detected and which compose stack is running. Override auto-detection with --accel <name> — see Hardware Acceleration.
Production Quick Start
# One-time setup: generates secrets, config, TLS certs
# Launch the production stack
# Seed your cell
# Verify everything
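A sketch of the same flow for production (the cell slug is illustrative):

```shell
# One-time setup: generates secrets, config, TLS certs
knishio init --tls

# Launch the production stack
knishio start -d

# Seed your cell
knishio cell create mycell --name "My Cell"

# Verify everything
knishio full
knishio db
```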
Installation
From crates.io (recommended)
This installs the knishio binary into ~/.cargo/bin/.
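Assuming the crate is published under the binary's name:

```shell
cargo install knishio
```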
From source
Requires Rust 1.70+.
The binary is at target/release/knishio. Optionally copy it onto your PATH:
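For example (the destination directory is illustrative):

```shell
cargo build --release
cp target/release/knishio ~/.local/bin/
```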
Prerequisites
- Docker with the compose plugin (v2)
- A running validator stack for cell, health, backup, and embed commands
- openssl for TLS certificate generation (knishio init --tls)
Configuration
The CLI uses a layered configuration system. Values are resolved in this order (highest priority wins):
1. CLI flags (--url, etc.)
2. Environment variables (KNISHIO_URL, etc.)
3. Config file (knishio.toml, auto-discovered)
4. Built-in defaults
Config File
Place a knishio.toml anywhere in the project tree — the CLI walks up from your current directory to find it.
# Optional: pin an accel and skip auto-detection every invocation
# default_accel = "cpu"
[validator]
url = "https://localhost:8080"
insecure_tls = false # set to true for self-signed certs
[docker]
compose_file = "docker-compose.standalone.yml" # key name assumed; container keys match the env-var table
postgres_container = "knishio-postgres"
validator_container = "knishio-validator"
# Per-accel compose file chains (all of these have baked defaults — override here only)
# [docker.accel.cuda]
# files = ["docker-compose.standalone.yml", "docker-compose.cuda.yml"]
#
# [docker.accel.dmr]
# files = ["docker-compose.standalone.yml", "docker-compose.dmr.yml"]
#
# [docker.accel.metal-native]
# files = ["docker-compose.metal.yml"]
# native_validator = true # emits the cargo-run hint after `start`
[database]
user = "knishio"
name = "knishio"
For production, knishio init generates a knishio.toml that points to docker-compose.production.yml instead.
Environment Variables
| Variable | Config Field | Default |
|---|---|---|
| KNISHIO_URL | validator.url | https://localhost:8080 |
| KNISHIO_PG_CONTAINER | docker.postgres_container | knishio-postgres |
| KNISHIO_VALIDATOR_CONTAINER | docker.validator_container | knishio-validator |
| KNISHIO_DB_USER | database.user | knishio |
| KNISHIO_DB_NAME | database.name | knishio |
| KNISHIO_INSECURE_TLS | validator.insecure_tls | false |
| KNISHIO_ACCEL | default_accel | (unset → auto-detect) |
Global CLI Flags
--url <URL> Validator base URL for health commands [default: https://localhost:8080]
--accel <ACCEL> Hardware acceleration profile
[default: auto]
[possible values: auto, cpu, cuda, dmr, metal-native, rocm, vulkan]
-h, --help Print help
-V, --version Print version
--url applies to health, ready, full, and db. TLS certificates are validated by default; set insecure_tls = true in knishio.toml or KNISHIO_INSECURE_TLS=true for self-signed local certs. Health requests have a 30-second timeout.
--accel auto (default) auto-detects the host; any other value forces a specific stack and skips detection. See Hardware Acceleration for the full decision table.
Hardware Acceleration
The CLI auto-detects the host and picks the matching compose stack + env vars so GPU-accelerated inference works without typing the right -f a.yml -f b.yml incantations yourself. Every knishio start / rebuild / stop / status / … prints the resolved accel, the compose stack being used, and (for DMR) the host-side routing URL — so the active optimization is never a guess.
Decision Table
| Host signal | → Accel | Compose stack | Validator runs where |
|---|---|---|---|
| macOS + Apple Silicon + DMR TCP reachable on :12434 | dmr | standalone.yml + dmr.yml | containerised; inference to host DMR |
| macOS + Apple Silicon (DMR missing) | metal-native | metal.yml (Postgres only) + cargo-run hint | host (native --features metal) |
| Linux with nvidia-smi present | cuda | standalone.yml + cuda.yml | containerised (GPU passthrough via nvidia-container-toolkit) |
| Linux with rocminfo present | rocm | standalone.yml + rocm.yml (overlay TBD) | containerised |
| everything else | cpu | standalone.yml | containerised, CPU-only |
If the chosen accel's overlay file isn't present on disk, the CLI falls back to cpu with a warning line — you're never blocked because an overlay didn't ship.
detect
Probe the host and print the resolved accel — no side effects.
Example output on an M4 Mac with DMR enabled:
Environment
ℹ Host: macos (aarch64)
ℹ CPU: Apple M4 · 32 GB RAM
ℹ GPU: Apple M4 (Apple)
ℹ Docker: 29.3.1
ℹ DMR: running, TCP :12434 reachable, 2 cached model(s)
→ Accel: dmr (Apple Silicon + DMR TCP reachable)
Forcing a profile
Override auto-detection with --accel <name>. The flag is global — works on every docker-touching subcommand.
For CI determinism, pin a profile in knishio.toml instead:
default_accel = "cpu"
Or via env var: KNISHIO_ACCEL=cpu.
Apple Silicon via Docker Model Runner
On macOS, Linux containers cannot access the Metal GPU directly. Docker Model Runner (DMR) sidesteps this by running llama.cpp with Metal on the host and exposing an OpenAI-compatible API at model-runner.docker.internal:12434. The validator stays containerised (plain Linux CPU build) and its openai-compatible provider points at the host endpoint over TCP — one ~1ms hop per inference, full Metal acceleration in practice.
One-time setup:
# 1. Enable DMR's TCP endpoint (Docker Desktop 4.62+ required)
# ...or via the CLI:
# 2. Pull models (defaults to the two Qwen GGUFs our compose overlay expects)
# 3. Verify
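Sketched with the dmr subcommands documented below:

```shell
# 1. Enable DMR's TCP endpoint (Docker Desktop 4.62+ required)
docker desktop enable model-runner --tcp=12434
# ...or via the CLI:
knishio dmr enable

# 2. Pull models (defaults to the two Qwen GGUFs our compose overlay expects)
knishio dmr pull

# 3. Verify
knishio dmr status
```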
After that, knishio start -d auto-routes through DMR with no extra flags.
If you skip DMR, knishio start on an Apple Silicon Mac falls back to the metal-native profile: Postgres runs in Docker, and the CLI prints a copy-pasteable cargo run --release --features metal block for running the validator binary natively.
dmr
Docker Model Runner control surface.
| Subcommand | Description |
|---|---|
| status | Print DMR client/server state, TCP reachability, and the cached model list |
| enable | Enable DMR's TCP endpoint on :12434 (wrapper over docker desktop enable model-runner --tcp=12434). Docker Desktop itself is toggled via its GUI (Settings → Beta/AI) — only the TCP exposure is CLI-controllable |
| pull [--model <REF>] | Pull a model into the DMR cache. Without --model, pulls the two defaults used by docker-compose.dmr.yml: hf.co/Qwen/Qwen3-Embedding-4B-GGUF and hf.co/Qwen/Qwen3.5-0.8B-GGUF |
# Pull a specific model
# Check what's cached and whether TCP is live
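For example:

```shell
# Pull a specific model
knishio dmr pull --model hf.co/Qwen/Qwen3-Embedding-4B-GGUF

# Check what's cached and whether TCP is live
knishio dmr status
```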
Production Deployment
init
Initialize a production deployment. Generates secrets, configuration, and optionally TLS certificates.
| Flag | Description |
|---|---|
| --tls | Generate self-signed TLS certificates (valid 365 days) |
| --cors <ORIGINS> | Set CORS_ORIGINS in the generated .env.production |
What it creates:
| File/Directory | Contents |
|---|---|
| secrets/jwt_secret | Random 64-character hex string |
| secrets/db_password | Random 32-character alphanumeric password |
| secrets/db_url | Full Postgres connection string with generated password |
| knishio.toml | CLI config pointing to docker-compose.production.yml |
| .env.production | Environment config (CORS origins, feature flags) |
| certs/ | Self-signed TLS certificate and key (if --tls) |
| backups/ | Empty directory for database backups |
| models/ | Empty directory for GGUF model files |
All secret files are created with 600 permissions and the secrets/ directory with 700.
# Full production init
# Without TLS (bring your own certs)
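For example (the CORS origin is illustrative):

```shell
# Full production init
knishio init --tls --cors "https://app.example.com"

# Without TLS (bring your own certs)
knishio init
```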
Running init again is safe — it skips files that already exist.
Production vs Standalone
The production compose (docker-compose.production.yml) differs from standalone in:
- Secrets injected via the Docker _FILE convention (not environment variables)
- KNISHIO_ENV=production (enforces a strong JWT secret)
- Rate limiting and rule enforcement enabled
- JSON structured logging
- Resource limits on containers (memory + CPU)
- Log rotation (50 MB max, 5 files)
- restart: always
Docker Control
All Docker commands locate the compose file automatically by walking up from your current directory (see Path Discovery). When using docker-compose.production.yml, the CLI automatically loads .env.production as the env file.
start
Start the validator stack (Postgres + validator).
| Flag | Description |
|---|---|
| --build | Build images before starting |
| -d, --detach | Run in detached mode (background) |
# Interactive foreground
# Detached with rebuild
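For example:

```shell
# Interactive foreground
knishio start

# Detached with rebuild
knishio start -d --build
```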
stop
Stop all containers without removing them.
destroy
Remove containers and networks. Optionally remove volumes (all data).
| Flag | Description |
|---|---|
| --volumes | Also remove volumes — all data will be lost |
rebuild
Full no-cache rebuild of the validator image, then restart in detached mode.
Equivalent to:
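Roughly, in Docker Compose terms (a sketch; the actual -f chain follows the resolved accel profile):

```shell
docker compose -f docker-compose.standalone.yml build --no-cache
docker compose -f docker-compose.standalone.yml up -d
```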
update
Pull or build the latest version, restart the stack, and verify health before declaring success.
| Flag | Description |
|---|---|
| --build | Rebuild from source instead of pulling images |
| --rollback | Revert to the previous image version |
The update process:
1. Pulls the latest images (or rebuilds from source with --build)
2. Restarts only the changed services (Postgres keeps running)
3. Polls /readyz until it returns 200 (up to a 120-second timeout)
4. Reports before/after version numbers from /health
5. If the health check fails: prints recent logs and suggests next steps
# Pull latest image and restart
# Rebuild from source
# Roll back after a failed update
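For example:

```shell
# Pull latest image and restart
knishio update

# Rebuild from source
knishio update --build

# Roll back after a failed update
knishio update --rollback
```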
logs
Show container logs.
| Flag | Description |
|---|---|
| -f, --follow | Follow log output in real time |
| --tail <N> | Show only the last N lines |
# Follow logs, last 100 lines
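For example:

```shell
# Follow logs, last 100 lines
knishio logs -f --tail 100
```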
status
Show running container status (equivalent to docker compose ps).
Cell Management
Manage cells (application-specific sub-ledgers) in the validator's database. Commands execute SQL via docker exec into the knishio-postgres container.
cell create
Create a new cell or update an existing one.
| Argument/Flag | Description | Default |
|---|---|---|
| <SLUG> | Cell slug identifier (required) | — |
| --name | Human-readable display name | Same as slug |
| --status | Initial status | active |
If the slug already exists, the cell's name and status are updated (upsert).
Validation Rules
| Field | Constraints |
|---|---|
| Slug | 1-64 characters, alphanumeric + dashes + underscores only ([a-zA-Z0-9_-]) |
| Name | 1-256 characters, no null bytes or control characters |
| Status | Must be one of: active, paused, archived |
Invalid input is rejected before any database operation runs.
cell list
List all cells with their status and creation time.
Output:
Cells
SLUG NAME STATUS CREATED
--------------------------------------------------------------------------------
public Public Cell active 1773423688
TESTCELL Test Cell active 1773423694
cell activate / pause / archive
Change a cell's status.
# Pause a cell (molecules targeting it will be rejected)
# Reactivate it
# Archive (soft-delete)
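With a slug named testcell, for instance:

```shell
# Pause a cell (molecules targeting it will be rejected)
knishio cell pause testcell

# Reactivate it
knishio cell activate testcell

# Archive (soft-delete)
knishio cell archive testcell
```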
Database Management
backup create
Create a database backup using pg_dump via the postgres container.
| Flag | Description | Default |
|---|---|---|
| -o, --output | Output file path | backups/knishio_YYYYMMDD_HHMMSS.sql |
# Default timestamped backup
# Custom output path
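For example:

```shell
# Default timestamped backup
knishio backup create

# Custom output path
knishio backup create -o backups/pre-upgrade.sql
```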
Output includes file size:
ℹ Backing up database to backups/knishio_20260406_174028.sql...
✓ Backup complete: backups/knishio_20260406_174028.sql (0.1 MB)
backup list
List available backups in the backups/ directory, sorted newest-first.
Output:
ℹ Found 3 backup(s):
backups/knishio_20260406_174028.sql (89 KB)
backups/knishio_20260405_120000.sql (85 KB)
backups/knishio_20260404_090000.sql (82 KB)
restore
Restore the database from a backup file. Drops and recreates the database, then verifies consistency via /db-check.
| Argument/Flag | Description |
|---|---|
| <PATH> | Path to the backup SQL file (required) |
| --skip-verify | Skip the post-restore /db-check verification |
# Restore with automatic verification
# Restore without verification (faster, for development)
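For example:

```shell
# Restore with automatic verification
knishio restore backups/knishio_20260406_174028.sql

# Restore without verification (faster, for development)
knishio restore backups/knishio_20260406_174028.sql --skip-verify
```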
The restore process:
1. Terminates existing database connections
2. Drops and recreates the database
3. Pipes the SQL backup into psql
4. Runs /db-check to verify migrations and schema integrity
psql
Open an interactive psql session or run a single SQL command against the validator's database.
| Flag | Description |
|---|---|
| -c, --command | Run a single SQL command instead of interactive mode |
# Interactive session
# Single query
# Check table sizes
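For example (the queries are illustrative, not part of the documented schema):

```shell
# Interactive session
knishio psql

# Single query
knishio psql -c "SELECT COUNT(*) FROM cells;"

# Check table sizes
knishio psql -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS size FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC;"
```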
Benchmarks
Benchmark commands generate ContinuID-compliant pre-signed molecules and submit them to the validator. Plans are stored as SQLite files for reproducibility.
bench run
Generate a benchmark plan and execute it in one shot. The temporary plan file is cleaned up automatically (unless --keep is set).
| Flag | Type | Default | Description |
|---|---|---|---|
| --identities | int | 50 | Number of test identities |
| --types | CSV | meta | Molecule types: meta, value-transfer, rule, burn |
| --metas-per-identity | int | 100 | Meta mutations per identity |
| --transfers-per-identity | int | 10 | Value transfers per identity |
| --rules-per-identity | int | 5 | Rule molecules per identity |
| --burns-per-identity | int | 5 | Burn molecules per identity |
| --token-amount | float | 1000000.0 | Initial token supply for value transfers |
| --endpoint | URL | https://localhost:8080 | Validator GraphQL endpoint |
| --concurrency | int | 5 | Concurrent molecule submissions |
| --cell-slug | string | (none) | Target cell slug |
| --keep | flag | false | Retain benchmark data in DB after execution |
# Quick meta-only benchmark
# Mixed isotope benchmark
# High-throughput stress test (keep data for inspection)
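For example (the counts are illustrative):

```shell
# Quick meta-only benchmark
knishio bench run --identities 10 --metas-per-identity 20

# Mixed isotope benchmark
knishio bench run --types meta,value-transfer,rule,burn

# High-throughput stress test (keep data for inspection)
knishio bench run --identities 200 --concurrency 50 --keep
```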
bench generate
Generate a pre-signed benchmark plan file (SQLite) without executing it. Useful for reproducible benchmarks.
| Flag | Type | Default | Description |
|---|---|---|---|
| -o, --output | path | (required) | Output SQLite plan file |
| --identities | int | 50 | Number of test identities |
| --types | CSV | meta | Molecule types |
| --metas-per-identity | int | 100 | Meta mutations per identity |
| --transfers-per-identity | int | 10 | Value transfers per identity |
| --rules-per-identity | int | 5 | Rule molecules per identity |
| --burns-per-identity | int | 5 | Burn molecules per identity |
| --token-amount | float | 1000000.0 | Initial token supply |
bench execute
Execute a previously generated plan file against the validator.
| Flag | Type | Default | Description |
|---|---|---|---|
| <PLAN> | path | (required) | Path to SQLite plan file |
| --endpoint | URL | https://localhost:8080 | Validator endpoint |
| --concurrency | int | 5 | Concurrent submissions |
| --cell-slug | string | (none) | Target cell slug |
| --keep | flag | false | Retain benchmark data after execution |
# Execute with high concurrency
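For example (the plan file path is illustrative):

```shell
# Execute with high concurrency
knishio bench execute plan.db --concurrency 50
```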
bench clean
Clean up benchmark data from the database. Only cells prefixed with BENCH_CLI_ can be purged (safety guard).
| Flag | Description |
|---|---|
| --cell-slug | Purge a specific benchmark cell |
| --all | Purge ALL benchmark cells (BENCH_CLI_*) |
# Clean up a specific benchmark cell
# Clean up all benchmark data
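For example (the specific slug is illustrative):

```shell
# Clean up a specific benchmark cell
knishio bench clean --cell-slug BENCH_CLI_20260406

# Clean up all benchmark data
knishio bench clean --all
```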
Embedding Management
Manage the DataBraid VKG (Vector Knowledge Graph) embedding system. Requires EMBEDDING_ENABLED=true on the validator.
embed status
Show embedding coverage statistics — how many metadata records have embeddings, which models are in use, and coverage percentages.
embed reset
Clear embeddings so the validator's automatic backfill worker re-embeds them. Useful after changing embedding models or dimensions.
| Flag | Description |
|---|---|
| --model | Clear only embeddings from a specific model |
| --all | Clear ALL embeddings (nuclear option) |
| -y, --yes | Skip confirmation prompt |
# Clear embeddings from a specific model
# Clear everything
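For example:

```shell
# Clear embeddings from a specific model
knishio embed reset --model hf.co/Qwen/Qwen3-Embedding-4B-GGUF

# Clear everything
knishio embed reset --all -y
```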
embed search
Run semantic (vector similarity) search against DAG metadata from the terminal.
| Flag | Type | Default | Description |
|---|---|---|---|
| <QUERY> | string | (required) | Natural language search query |
| --limit | int | 10 | Maximum number of results |
| --threshold | float | 0.7 | Minimum cosine similarity (0.0 to 1.0) |
| --meta-type | string | (none) | Filter results by meta_type |
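For example (the query text is illustrative):

```shell
knishio embed search "recent value transfers between wallets" --limit 5 --threshold 0.6
```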
embed ask
Ask a natural language question about DAG data using RAG (Retrieval-Augmented Generation). Requires GENERATION_ENABLED=true on the validator.
| Flag | Type | Default | Description |
|---|---|---|---|
| <QUESTION> | string | (required) | Natural language question |
| --max-results | int | 20 | Maximum source records to consider |
| --threshold | float | 0.5 | Minimum cosine similarity |
| --meta-type | string | (none) | Filter by meta_type |
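For example (the question is illustrative):

```shell
knishio embed ask "Which cell has seen the most recent activity?" --max-results 10
```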
Health Checks
HTTP GET requests to the validator's health endpoints. TLS certificates are validated by default (30-second timeout). Set insecure_tls = true in config to accept self-signed certificates for local development.
health
Quick liveness check.
# ✓ Healthy (https://localhost:8080)
Hits GET /healthz. Returns success on HTTP 200.
ready
Readiness check (is the validator ready to accept traffic?).
# ✓ Ready
Hits GET /readyz. Returns success on HTTP 200.
full
Readiness check with full detail — prints the JSON response body.
# ✓ Ready
# {
# "status": "ready",
# "database": { "status": "connected", "latency_ms": 3 },
# "migrations": { "applied": 40, "expected": 40, "is_current": true },
# "cache": { "entries": 0, "hit_ratio": "0.00" },
# "version": "0.2.0"
# }
db
Database consistency check — migrations, schema integrity, and issue reporting.
# ✓ Database consistency check passed
#
# Migrations
# Applied: 40 / 40 expected
# Up to date
If issues are found:
# ✗ Database consistency check FAILED
#
# Migrations
# Applied: 36 / 38 expected
# Migrations pending!
#
# Issues
# • Missing table: cells
# • Missing trigger: cascade_on_bond_insert
Hits GET /db-check. Reports migration status, missing tables, and missing triggers.
Path Discovery
The CLI automatically finds required files by walking up the directory tree from your current working directory.
Docker Compose files — the resolved accel profile (see Hardware Acceleration) expands to a list of compose files; each is independently located by walking up from CWD through these candidates:
- ./<file>
- ./knishio-validator-rust/<file>
- ./servers/knishio-validator-rust/<file>
The files are then passed to docker compose -f a -f b … in order (base first, overlay second). Default accel-to-files mappings are baked into the CLI and can be overridden in knishio.toml under [docker.accel.<name>] tables.
Env file auto-loading — when any compose file in the resolved chain has "production" in its name and a .env.production exists alongside it, the CLI automatically passes --env-file .env.production to Docker Compose.
Config file — checks in order:
- ./knishio.toml
- ./knishio-validator-rust/knishio.toml
- ./servers/knishio-validator-rust/knishio.toml
- (walks up parent directories, repeating the above)
This means the CLI works whether you run it from inside the validator dir, the servers dir, or the monorepo root.
Example Workflows
Apple Silicon Development (GPU-accelerated via DMR)
# 1. One-time DMR setup (only if you haven't already)
# 2. Confirm the CLI sees the right path
# → Accel: dmr (Apple Silicon + DMR TCP reachable)
# 3. Start — auto-uses standalone.yml + dmr.yml
# 4. Seed a cell
# 5. Exercise the RAG pipeline end-to-end
The validator container runs a plain Linux CPU build; embedding and generation traffic is routed to the host-side llama.cpp-with-Metal process DMR manages.
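The numbered steps above, sketched end-to-end (cell slug and question illustrative):

```shell
# 1. One-time DMR setup (only if you haven't already)
knishio dmr enable && knishio dmr pull

# 2. Confirm the CLI sees the right path
knishio detect
# → Accel: dmr (Apple Silicon + DMR TCP reachable)

# 3. Start — auto-uses standalone.yml + dmr.yml
knishio start -d

# 4. Seed a cell
knishio cell create myapp --name "My App"

# 5. Exercise the RAG pipeline end-to-end
knishio embed status
knishio embed ask "What data is stored in this DAG?"
```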
Development
# 1. Start the stack
# 2. Wait for readiness
# 3. Create a test cell
# 4. Check database state
# 5. Run a mixed benchmark
# 6. Check DAG explorer
# Open https://localhost:8080/dag in your browser
# 7. View logs if something looks wrong
# 8. Rebuild after code changes
# 9. Clean up when done
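The steps above, sketched as commands (cell slug illustrative):

```shell
# 1. Start the stack
knishio start -d

# 2. Wait for readiness
knishio ready

# 3. Create a test cell
knishio cell create testcell --name "Test Cell"

# 4. Check database state
knishio db

# 5. Run a mixed benchmark
knishio bench run --types meta,value-transfer --cell-slug testcell

# 6. Check DAG explorer
# Open https://localhost:8080/dag in your browser

# 7. View logs if something looks wrong
knishio logs -f --tail 100

# 8. Rebuild after code changes
knishio rebuild

# 9. Clean up when done
knishio destroy --volumes
```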
Production Deployment
# 1. First-time setup
# 2. Launch production stack
# 3. Seed your application cell
# 4. Verify health
# 5. Create initial backup
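Sketched as commands (cell slug illustrative):

```shell
# 1. First-time setup
knishio init --tls

# 2. Launch production stack
knishio start -d

# 3. Seed your application cell
knishio cell create myapp --name "My App"

# 4. Verify health
knishio full

# 5. Create initial backup
knishio backup create
```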
Ongoing Operations
# Before any upgrade, take a backup
# Pull latest and restart (health-gated)
# If something goes wrong
# List available backups
# Restore from backup if needed
# Quick database query
# Check embedding coverage
# Semantic search
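Sketched as commands (backup filename and query strings illustrative):

```shell
# Before any upgrade, take a backup
knishio backup create

# Pull latest and restart (health-gated)
knishio update

# If something goes wrong
knishio update --rollback

# List available backups
knishio backup list

# Restore from backup if needed
knishio restore backups/knishio_20260406_174028.sql

# Quick database query
knishio psql -c "SELECT COUNT(*) FROM cells;"

# Check embedding coverage
knishio embed status

# Semantic search
knishio embed search "recent transfers"
```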
Output Symbols
| Symbol | Meaning |
|---|---|
| ✓ | Success (green) |
| ℹ | Informational (blue) |
| ⚠ | Warning (yellow) |
| ✗ | Error (red) |