<div class="cheatsheet">
# Torc CLI Cheat Sheet
## Quick Start
| Command | Description |
| --- | --- |
| `torc -s exec -c '<cmd>'` | Run inline command(s) standalone (no server) |
| `torc create <spec>` | Create workflow from spec file |
| `torc run <spec.yaml>` | Create workflow from spec and run locally |
| `torc submit <spec.yaml>` | Create and submit to scheduler (spec must define actions) |
| `torc status <id>` | Workflow status and job summary |
| `torc watch <id>` | Monitor workflow until completion |
| `torc watch <id> --recover` | Monitor and auto-recover from failures |
| `torc-dash` | Launch web dashboard |
| `torc tui` | Launch interactive terminal UI |
## Inline Commands (`torc exec`)
Run ad-hoc commands with CPU/memory monitoring; no spec file is required. Pair with `-s` to avoid
needing a running server. See [Run Inline Commands](../how-to/run-inline-commands.md) for details.
| Command | Description |
| --- | --- |
| `torc -s exec -c '<cmd>'` | Monitor CPU/memory of one command |
| `torc -s exec -- python script.py --flag value` | Shell-style single command invocation |
| `torc -s exec -c '<a>' -c '<b>' -c '<c>' -j 2` | Run a queue with parallelism cap (like GNU Parallel) |
| `torc -s exec -C commands.txt -j 4` | Commands from a file (one per line) |
| `torc -s exec -c 'run.sh {i}' --param i=1:100 -j 8` | Parameter sweep (100 jobs) |
| `torc exec --dry-run -c 'run.sh {i}' --param i=1:3` | Preview the expanded workflow spec |
| `torc -s exec -c 'work.sh' --monitor time-series --generate-plots` | Time-series CPU/mem + HTML plots |
| `torc -s results list` | Inspect past exec runs |
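The `-C` form reads one shell command per line from a file. A runnable sketch that builds such a file and hands it to torc (the `process.py` script and the chunk range are placeholders for your own workload):

```shell
# Build a commands file, one shell command per line, for `torc -s exec -C`
# (process.py and the 1..4 range are placeholders, not part of torc)
for i in 1 2 3 4; do
  echo "python process.py --chunk $i"
done > commands.txt

# Run the queue 4-wide with CPU/memory monitoring, if torc is on PATH
if command -v torc >/dev/null 2>&1; then
  torc -s exec -C commands.txt -j 4
fi
```

For simple numeric sweeps, `--param i=1:4` expands the same queue without an intermediate file.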
## Workflow Lifecycle
| Command | Description |
| --- | --- |
| `torc create <spec>` | Create workflow from spec file |
| `torc run <id>` | Run workflow locally |
| `torc submit <id>` | Submit workflow to scheduler |
| `torc status <id>` | Show workflow status and job summary |
| `torc cancel <id>` | Cancel workflow and Slurm jobs |
| `torc delete <id>` | Delete workflow |
## Workflow State
| Command | Description |
| --- | --- |
| `torc workflows init <id>` | Initialize workflow dependencies |
| `torc workflows reinit <id>` | Reinitialize workflow after changes |
| `torc workflows reset-status <id>` | Reset workflow and job statuses |
## Workflow Query
| Command | Description |
| --- | --- |
| `torc workflows list` | List your workflows |
| `torc workflows get <id>` | Get workflow details |
## Job Management
| Command | Description |
| --- | --- |
| `torc jobs list <id>` | List all jobs |
| `torc jobs list -s ready <id>` | List jobs by status |
| `torc jobs get <job_id>` | Get job details |
| `torc results list <id>` | List job results |
| `torc results list --failed <id>` | List failed jobs |
## Recovery & Diagnostics
| Command | Description |
| --- | --- |
| `torc status <id>` | Workflow status and job summary |
| `torc workflows check-resources <id>` | Check memory/CPU/time usage |
| `torc results list <id> --include-logs` | Job results with log paths |
| `torc recover <id>` | Interactive recovery wizard (default) |
| `torc recover <id> --no-prompts` | Automatic recovery (no prompts, for scripting) |
| `torc watch <id> --recover --auto-schedule` | Full production recovery mode |
| `torc workflows sync-status <id>` | Fix orphaned jobs (stuck in "running") |
| `torc workflows correct-resources <id>` | Increase violated and reduce over-allocated resource requirements |
| `torc slurm sacct <id>` | Get Slurm accounting data |
| `torc slurm stats <id>` | Per-job sacct stats stored in the database |
| `torc slurm usage <id>` | Total compute node and CPU time consumed |
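The diagnostics above compose into an unattended monitor loop. A minimal sketch assuming `torc` is on `PATH` (the function name is illustrative, and the function is only defined here, not run):

```shell
# Illustrative wrapper: diagnose resource usage, then monitor with
# automatic recovery until completion. Defined only; call it yourself
# with a real workflow id.
watch_and_recover() {
  wf_id="$1"
  torc workflows check-resources "$wf_id"    # check memory/CPU/time usage
  torc watch "$wf_id" --recover --auto-schedule  # production recovery mode
}
```

For scripted one-shot recovery without monitoring, `torc recover <id> --no-prompts` is the non-interactive equivalent.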
## Remote Workers
| Command | Description |
| --- | --- |
| `torc remote add-workers <id> <host>...` | Add remote workers to a workflow |
| `torc remote list-workers <id>` | List remote workers for a workflow |
| `torc remote run <id>` | Start workers on remote machines via SSH |
| `torc remote status <id>` | Check status of remote workers |
| `torc remote stop <id>` | Stop workers on remote machines |
| `torc remote collect-logs <id>` | Collect logs from remote workers |
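A typical remote-worker session runs these subcommands in order. An illustrative sketch (hostnames are placeholders; the function is defined only, not run):

```shell
# Illustrative remote-worker lifecycle for one workflow.
# Usage would be: remote_cycle <workflow-id> host1 host2 ...
remote_cycle() {
  wf_id="$1"; shift
  torc remote add-workers "$wf_id" "$@"   # register the worker hosts
  torc remote run "$wf_id"                # start workers over SSH
  torc remote status "$wf_id"             # confirm workers are up
  torc remote collect-logs "$wf_id"       # gather logs afterwards
}
```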
## Events & Logs
| Command | Description |
| --- | --- |
| `torc events monitor <id>` | Monitor events in real-time |
| `torc logs analyze <id>` | Analyze logs for errors |
## Global Options
| Option | Description |
| --- | --- |
| `--url <URL>` | Server URL (or set `TORC_API_URL`) |
| `-f json` | Output as JSON instead of table |
| `--log-level debug` | Enable debug logging |
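`-f json` makes output scriptable. A hedged sketch of parsing it in a pipeline; the sample JSON shape and its field names are assumptions, not torc's documented schema:

```shell
# Stand-in for what `torc status <id> -f json` might emit
# (field names here are assumptions for illustration)
cat > status.json <<'EOF'
{"id": "42", "status": "complete"}
EOF

# Extract a field with the stdlib json module (jq works equally well):
python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])' < status.json

# Live equivalent, piping real output:
#   torc status <id> -f json | python3 -c '...'
```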
</div>