canic-cli
canic-cli publishes the canic operator binary. It is the command-line
surface for installing local Canic fleets, listing fleet topology, capturing
canister snapshots, validating backup artifacts, and preparing guarded
restores.
The CLI currently wraps dfx for live snapshot and restore mutations. Canic
owns the topology selection, manifests, journals, readiness checks, restore
ordering, and runner state around those dfx calls.
Install
Install from a checkout:
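For example, from the repository root (the exact package path is an assumption about the checkout layout):

```shell
# Install the operator binary from the local source tree
cargo install --path . --locked
```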
Install from crates.io after a release:
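Assuming the crate is published under the binary's name:

```shell
cargo install canic-cli --locked
```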
For a full local development setup, including dfx, helper tools,
canic-cli, and canic-installer, use the install script in the root README.
First Commands
Show local demo canisters that already have ids:
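In its simplest form:

```shell
canic list
```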
By default, canic list checks Canic's fixed demo canister roster and prints
a box-drawing canister-id tree for entries that have local dfx ids. Once a
project has Canic fleet state, plain canic list reads the installed root
registry instead. Use --root <name-or-principal> to point at a specific
installed root, --fleet <name> to use a saved fleet without switching, or
--from <name-or-principal> to print one subtree with that node as the
rendered root.
Live list sources call canic_ready for each listed canister and include a
READY column with yes, no, or error.
If the list only shows the root row, the project has reserved a local root id
but has not installed the tree. Run canic install, then use canic list --network local to read the installed root registry.
Install and bootstrap the local fleet:
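With the project default config in place, this is a single command:

```shell
canic install
```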
canic install defaults to the root dfx canister name. You may pass either a
dfx canister name or an IC principal as the root target:
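A sketch of both forms (whether the target is positional or flagged is an assumption; check the command help for the real shape):

```shell
# By dfx canister name (the default)
canic install <dfx-name>

# By IC principal
canic install <principal>
```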
When the root target is a principal, the CLI still builds the conventional
root canister artifact by default. Use --root-build-target <dfx-name> only
when the local root canister is named differently in dfx.json.
canic install uses canisters/canic.toml when that project default exists.
If it does not, and other canic.toml files are present, the command prints a
small choices table and requires --config <path>.
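For example, pointing at the conventional project default explicitly:

```shell
canic install --config canisters/canic.toml
```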
Successful installs write .canic/<network>/fleets/<fleet>.json with the root
target, resolved root principal, build target, config path, and release-set
manifest path. canic list uses the current fleet when --root and --fleet
are not provided; pass --fleet <name> to query another saved fleet or
--root <name-or-principal> to override it.
List and switch saved fleets:
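The subcommand names below are assumptions; consult canic --help for the real spelling:

```shell
# Hypothetical: enumerate saved fleets for the current network
canic fleet list

# Hypothetical: make a saved fleet the current one
canic fleet use <name>
```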
Run command-specific help when you need exact flags:
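For example, assuming the conventional --help flag:

```shell
canic backup preflight --help
```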
Print the installed CLI version with canic --version. The flag is accepted
at any command depth, so canic backup preflight --version reports the binary
version instead of running the command.
Happy Path
Capture a canister and its direct registered children:
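A sketch (the capture subcommand name is an assumption):

```shell
canic backup capture <name-or-principal> --include-children
```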
Use --recursive instead of --include-children to include all descendants.
Use --dry-run to compute the target set without creating or downloading
snapshots.
Non-dry-run captures recompute the selected topology immediately before snapshot creation and fail if the topology hash changed since discovery. This keeps subtree backups from silently crossing a registry change.
dfx creates snapshots only for stopped canisters. Pass
--stop-before-snapshot --resume-after-snapshot when the CLI should perform
that local lifecycle step around each captured artifact.
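Combined with a capture (the capture subcommand name is an assumption; the lifecycle flags are as documented above):

```shell
canic backup capture <name-or-principal> --include-children \
  --stop-before-snapshot --resume-after-snapshot
```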
Run the standard post-capture smoke wrapper:
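For example (the argument shape is an assumption):

```shell
canic backup smoke <manifest-or-target>
```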
Smoke is no-mutation. It writes the preflight report bundle, renders restore
operations, creates a restore apply journal, previews the native runner path,
and records the readiness flags in smoke-summary.json.
For the release smoke path, use the canonical checklist: docs/operations/0.30-backup-restore-smoke.md.
Backup Checks
Use these commands after capture and before restore planning:
- canic manifest validate checks manifest shape, topology hash inputs, backup units, and design conformance.
- canic backup status summarizes resumable download journal progress.
- canic backup inspect compares manifest and journal metadata without reading artifact bytes.
- canic backup provenance reports source, topology, unit, member, snapshot, code, and artifact provenance.
- canic backup verify reads durable artifacts and verifies checksums.
- canic backup preflight runs the standard no-mutation validation bundle and emits restore planning/status reports.
- canic backup smoke runs preflight plus restore dry-run and runner preview.
The stricter flags intentionally write their reports before returning a nonzero exit code. That lets CI and operators inspect the failure artifact that explains why a backup is not ready.
Restore Planning
Restore starts from a manifest, not from loose snapshot files:
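A sketch (the plan subcommand name and --manifest flag are assumptions):

```shell
canic restore plan --manifest <path-to-manifest>
```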
Planning performs no mutations. It validates mapping, identity mode, snapshot provenance, verification coverage, artifact checksums when requested, and restore ordering. Plans include operation counts and parent-before-child ordering metadata so operators can see the intended restore sequence before any target is touched.
Create the initial restore status:
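A sketch (the status subcommand name and --manifest flag are assumptions):

```shell
canic restore status --manifest <path-to-manifest>
```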
Render operations and create an apply journal:
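Any plan or manifest arguments are omitted here as assumptions; the --dry-run flag is mandatory:

```shell
canic restore apply --dry-run
```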
restore apply currently requires --dry-run; direct mutation through that
command is intentionally disabled. The generated journal is the input to the
guarded runner.
Guarded Runner
Preview the maintained runner path without calling dfx:
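A sketch (the preview flag name is an assumption):

```shell
canic restore run --preview
```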
Execute a cautious one-step batch:
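For example:

```shell
canic restore run --max-steps 1
```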
The native runner checks journal readiness, claims the next operation, runs the
generated dfx command, marks the operation completed or failed, and persists
the journal after each transition. --max-steps 1 is the safest operational
mode while validating a new restore path.
If a previous runner stopped after claiming work, release the pending operation back to ready:
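A sketch (the recovery flag name is an assumption):

```shell
canic restore run --release-pending
```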
Restore Journal Tools
These commands inspect the journal produced by restore apply --dry-run:
- canic restore apply-status summarizes progress, blocked work, pending claims, failed operations, and completion counts.
- canic restore apply-report writes an operator-focused report for the work needing attention.
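Typical invocations (any journal-path arguments are omitted as assumptions):

```shell
canic restore apply-status
canic restore apply-report
```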
canic restore run is the only maintained command for advancing a restore
journal. It owns command preview, claiming, execution, completion/failure
records, and pending-operation recovery.
Safety Model
- Directory data may select a root, but topology defines membership.
- Captures fail closed when the selected topology hash changes before snapshot creation.
- Backup manifests carry topology, unit, identity, snapshot, artifact, provenance, and verification metadata.
- Restore planning is no-mutation and must prove mapping, ordering, checksum, verification, and design-conformance readiness before execution.
- Runner summaries and journals are durable audit artifacts; failures still write status before returning a nonzero exit code.