canic-cli
Operator CLI for Canic backup and restore workflows.
The initial command focuses on snapshot capture/download planning and execution for a canister plus its registry-discovered children.
Use --recursive instead of --include-children to include all descendants.
Use --registry-json <file> to plan from a saved canic_subnet_registry
response instead of querying a live root. Non-dry-run captures recompute the
selection topology immediately before snapshot creation and fail if the hash
changed since discovery.
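The fail-if-changed recheck can be sketched with standard tools. This is illustrative only: the file names, the canister IDs, and the hashing scheme are assumptions, not the CLI's actual implementation.

```shell
# Illustrative only: file names and the hashing scheme are assumptions,
# not the CLI's actual implementation.
TOPO_DIR=$(mktemp -d)

# Hash the sorted set of discovered canister IDs at discovery time.
printf 'aaaaa-aa\nbbbbb-bb\n' | sort > "$TOPO_DIR/discovery.txt"
DISCOVERY_HASH=$(sha256sum "$TOPO_DIR/discovery.txt" | cut -d' ' -f1)

# Recompute over the live topology immediately before snapshot creation.
printf 'aaaaa-aa\nbbbbb-bb\n' | sort > "$TOPO_DIR/precapture.txt"
PRECAPTURE_HASH=$(sha256sum "$TOPO_DIR/precapture.txt" | cut -d' ' -f1)

# Abort the capture if the topology changed since discovery.
if [ "$DISCOVERY_HASH" != "$PRECAPTURE_HASH" ]; then
  echo "topology changed since discovery; aborting capture" >&2
  exit 1
fi
echo "topology stable: $DISCOVERY_HASH"
```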
DFX only creates snapshots for stopped canisters. Pass
--stop-before-snapshot --resume-after-snapshot when the CLI should perform
those stop/resume lifecycle steps around each member's capture.
Successful non-dry-run captures write the canonical backup layout: manifest,
download journal, and durable artifact directories. Generated manifests include
each durable artifact checksum so verification can detect manifest/journal
drift before restore planning. Download journals also include
operation_metrics counters for target count, snapshot create, snapshot
download, checksum verification, and artifact finalization progress.
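The shape of that layout can be sketched as follows; the directory names, file names, and JSON fields here are invented stand-ins, not the CLI's canonical schema.

```shell
# Illustrative layout; real directory and file names may differ.
CAPTURE_DIR=$(mktemp -d)
mkdir -p "$CAPTURE_DIR/artifacts"

# One durable artifact plus the SHA-256 checksum the manifest would record.
printf 'snapshot-bytes' > "$CAPTURE_DIR/artifacts/member-0.bin"
ARTIFACT_SHA=$(sha256sum "$CAPTURE_DIR/artifacts/member-0.bin" | cut -d' ' -f1)

# Minimal stand-ins for the manifest and the download journal with its
# operation_metrics counters.
printf '{"backup_id":"demo","artifacts":[{"path":"artifacts/member-0.bin","sha256":"%s"}]}\n' \
  "$ARTIFACT_SHA" > "$CAPTURE_DIR/manifest.json"
printf '{"backup_id":"demo","operation_metrics":{"targets":1,"snapshots_created":1,"snapshots_downloaded":1,"checksums_verified":1,"artifacts_finalized":1}}\n' \
  > "$CAPTURE_DIR/journal.json"
ls "$CAPTURE_DIR"
```

Recording the artifact checksum inside the manifest is what lets later verification detect manifest/journal drift without re-downloading anything.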
Validate a captured manifest before restore planning:
The validation summary includes topology hash inputs, consistency mode, backup unit counts, kind counts, and per-unit topology validation metadata.
Inspect resumable journal status:
--require-complete still writes the JSON status report, then exits with an
error when any artifact has resume work remaining.
Inspect manifest and journal agreement without reading artifact bytes:
--require-ready still writes the JSON inspection report, then exits with an
error when manifest and journal metadata, including topology receipts, are not
ready for full verification.
Emit a provenance report for audit/review workflows:
The report records source/tool metadata, topology receipts, declared backup
units, and each member's snapshot/code/artifact provenance without reading
artifact bytes. --require-consistent still writes the JSON report, then exits
with an error when manifest and journal backup IDs or topology receipts drift.
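The backup-ID comparison behind that gate can be sketched like this; the file contents and report fields are invented for illustration.

```shell
# Invented file contents; only the backup_id comparison is sketched.
PROV_DIR=$(mktemp -d)
printf '{"backup_id":"demo-1"}\n' > "$PROV_DIR/manifest.json"
printf '{"backup_id":"demo-1"}\n' > "$PROV_DIR/journal.json"

prov_id() { sed -n 's/.*"backup_id":"\([^"]*\)".*/\1/p' "$1"; }
MANIFEST_ID=$(prov_id "$PROV_DIR/manifest.json")
JOURNAL_ID=$(prov_id "$PROV_DIR/journal.json")

# Write the report first; the gate only changes the exit code.
printf '{"manifest_backup_id":"%s","journal_backup_id":"%s"}\n' \
  "$MANIFEST_ID" "$JOURNAL_ID" > "$PROV_DIR/backup-provenance.json"
if [ "$MANIFEST_ID" != "$JOURNAL_ID" ]; then
  echo "backup ID drift between manifest and journal" >&2
  exit 1
fi
echo "provenance consistent: $MANIFEST_ID"
```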
Verify the backup layout and durable artifact checksums:
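The checksum portion of verification can be approximated with sha256sum; the file names are illustrative, and the real verify command additionally checks the layout and journal agreement.

```shell
# File names are illustrative; the real verify also checks layout and journal.
VERIFY_DIR=$(mktemp -d)
printf 'artifact-bytes' > "$VERIFY_DIR/member-0.bin"

# Record the expected checksum as a manifest would, then verify in bulk.
( cd "$VERIFY_DIR" && sha256sum member-0.bin > SHA256SUMS )
( cd "$VERIFY_DIR" && sha256sum -c SHA256SUMS )
```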
Run the standard no-mutation preflight bundle:
Preflight writes manifest-validation.json, backup-status.json,
backup-inspection.json, backup-provenance.json, backup-integrity.json,
restore-plan.json, restore-status.json, and preflight-summary.json.
The summary records the backup ID, source root, environment, topology hash,
readiness statuses, provenance consistency status, topology mismatch count,
journal operation metrics, member counts, restore
identity/snapshot/verification/operation/ordering counts, snapshot
provenance readiness booleans,
verification readiness booleans, restore_mapping_supplied,
restore_all_sources_mapped, restore_ready, stable
restore_readiness_reasons, and paths to the generated reports.
--require-restore-ready still writes the full report bundle, then exits with
an error when restore_ready is false.
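The write-then-gate behavior can be sketched as follows; the summary contents are invented, and only the restore_ready field from the description above is used.

```shell
# Field names follow the summary described above; values are invented.
PF_SUMMARY=$(mktemp)
printf '{"backup_id":"demo","restore_ready":true,"restore_readiness_reasons":[]}\n' \
  > "$PF_SUMMARY"

# The report bundle is written first; the gate only affects the exit code.
if grep -q '"restore_ready":true' "$PF_SUMMARY"; then
  echo "restore ready"
else
  echo "restore not ready; see restore_readiness_reasons" >&2
  exit 1
fi
```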
Restore planning is manifest-driven and performs no mutations:
--require-verified runs the same manifest, journal, durable artifact, and
checksum checks as canic backup verify before emitting the plan.
--require-restore-ready still writes the restore plan, then exits with an
error when readiness_summary.ready is false.
Restore plans include an identity_summary with explicit mapping mode,
all-sources-mapped status, and fixed, relocatable, mapped, in-place, and
remapped member counts. They also include a snapshot_summary with module
hash, wasm hash, code version, and checksum coverage counts and readiness
booleans, plus a verification_summary with post-restore check counts,
verification_required, and all_members_have_checks. A readiness_summary
collapses those signals into a single ready flag and stable reason strings.
Plans also include an operation_summary with planned snapshot loads, code
reinstalls, verification checks, and phases, plus an ordering_summary and
per-member ordering dependency metadata so dry-runs show when parent
relationships are satisfied inside the same restore group or by an earlier
group.
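The ordering guarantee amounts to a dependency check over the restore sequence. The members and parent links below are invented for illustration.

```shell
# Invented members: root is the parent of child-a and child-b.
# Entries are "member:parent"; "-" means no parent dependency.
ORD_RESTORED=" "
ORDER_OK=true

for entry in "root:-" "child-a:root" "child-b:root"; do
  member=${entry%%:*}
  parent=${entry##*:}
  # A parent dependency is satisfied when the parent restored earlier,
  # whether in an earlier group or earlier in the same group.
  if [ "$parent" != "-" ]; then
    case "$ORD_RESTORED" in
      *" $parent "*) : ;;  # satisfied
      *) ORDER_OK=false; echo "$member: parent $parent not yet restored" >&2 ;;
    esac
  fi
  ORD_RESTORED="$ORD_RESTORED$member "
done
echo "ordering ok: $ORDER_OK"
```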
Emit the initial restore execution status from a plan:
Restore status generation performs no mutations. It copies the plan
identity, readiness, verification, phase, and operation counts, then marks
each planned member as planned with its source/target canister, snapshot ID,
and artifact path.
Render the restore execution operations without mutating targets:
The apply dry-run output expands the restore phases into ordered upload, load,
reinstall, and member verification operations. When --backup-dir is supplied,
the dry-run also verifies that referenced artifact paths stay under that backup
directory, exist on disk, and match their expected SHA-256 checksums when the
plan includes checksums. When --journal-out is supplied, the command also
writes an initial apply journal with each operation marked ready or blocked
and stable blocking reasons. The command requires --dry-run; real restore
execution is intentionally not enabled yet.
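The per-artifact checks and the ready/blocked journal entries can be sketched like this; the paths, field names, and reason strings are invented stand-ins, and the real dry-run works from a full restore plan.

```shell
# Illustrative checks only; the real dry-run works from a full restore plan.
AP_DIR=$(mktemp -d)
printf 'artifact-bytes' > "$AP_DIR/artifact.bin"
AP_ARTIFACT="$AP_DIR/artifact.bin"

# 1) Containment: the resolved artifact path must stay under the backup dir.
ART_REAL=$(readlink -f "$AP_ARTIFACT")
DIR_REAL=$(readlink -f "$AP_DIR")
case "$ART_REAL" in
  "$DIR_REAL"/*) AP_CONTAINED=true ;;
  *) AP_CONTAINED=false ;;
esac

# 2) Existence plus checksum: the plan's recorded SHA-256 must match disk.
PLAN_SHA=$(sha256sum "$AP_ARTIFACT" | cut -d' ' -f1)   # recorded at plan time
DISK_SHA=$(sha256sum "$AP_ARTIFACT" | cut -d' ' -f1)   # recomputed at dry-run
if [ "$AP_CONTAINED" = true ] && [ -f "$AP_ARTIFACT" ] && [ "$PLAN_SHA" = "$DISK_SHA" ]; then
  AP_STATUS=ready; AP_REASON=""
else
  AP_STATUS=blocked; AP_REASON="artifact_check_failed"
fi

# Emit one journal operation marked ready or blocked with a stable reason.
printf '{"operations":[{"kind":"upload","status":"%s","reason":"%s"}]}\n' \
  "$AP_STATUS" "$AP_REASON" > "$AP_DIR/apply-journal.json"
cat "$AP_DIR/apply-journal.json"
```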
Summarize a restore apply journal: