# clat
Command line assistance tool. Describe what you want in plain English; clat generates a shell script and runs it.
```sh
clat open a port, docker pull void-base, close port
clat compress all jpegs in this directory to 80% quality
clat show disk usage sorted by size for the current directory
```
Works with any OpenAI-compatible inference API — LM Studio, Ollama, or a remote API.
Supports reasoning models (DeepSeek-R1, QwQ, etc.) — `<think>` blocks are stripped automatically.
## Install
### Homebrew (recommended)
### From source
Then add `~/.clat` to your PATH (if it isn't already):

```sh
# zsh
echo 'export PATH="$HOME/.clat:$PATH"' >> ~/.zshrc

# bash
echo 'export PATH="$HOME/.clat:$PATH"' >> ~/.bashrc
```
## Configuration
Config lives at `~/.clat/config.toml` — same directory as the binary.
Created automatically on first run, or explicitly with `clat --init`.
```toml
api_url = "http://localhost:1234/v1"  # OpenAI-compatible endpoint
model = "local-model"                 # model name (see: clat -l)
api_key = ""                          # optional bearer token
auto_confirm = false                  # true = always skip confirmation
use_tools = true                      # false for models without tool-call support
trusted_commands = []                 # command names that skip confirmation
                                      # e.g. ["git", "ls", "echo"]

# Optional: override the system prompt sent with every request
# system_prompt = "..."
```
## Models
List models available from your inference server:

```sh
clat -l

# or directly:
curl -s http://localhost:1234/v1/models | jq
```
Load a different model in LM Studio:

```sh
clat -L <ID>
```
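Any OpenAI-compatible server returns the same `/v1/models` response shape, so model IDs can be pulled out with `jq`. A sketch against a canned response (the model names here are made up):

```sh
# Sample response shape from an OpenAI-compatible /v1/models endpoint:
sample='{"object":"list","data":[{"id":"qwen2.5-7b-instruct"},{"id":"deepseek-r1-distill-8b"}]}'

# Extract just the model IDs, one per line:
printf '%s' "$sample" | jq -r '.data[].id'
```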
## Tool calls
When `use_tools = true` (the default), clat sends tool definitions with each
request so the model can query the system before writing the script — for example
checking which commands are installed, or reading the current OS and working
directory. Set `use_tools = false` for models that don't support tool calling.
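Concretely, these definitions travel in the standard OpenAI `tools` field of the request body. A sketch of the shape — the tool name and parameter schema below are hypothetical, not clat's actual definitions:

```sh
# Illustrative request body for an OpenAI-compatible /v1/chat/completions call:
body='{
  "model": "local-model",
  "messages": [{"role": "user", "content": "compress all jpegs in this directory"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "run_command",
      "description": "Run a read-only shell command to inspect the system",
      "parameters": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"]
      }
    }
  }]
}'

# A model that supports tool calling may answer with a tool_calls message
# instead of a final script; the caller runs the tool and sends the result back.
printf '%s' "$body" | jq -r '.tools[0].function.name'   # → run_command
```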
## sudo

Scripts that contain `sudo` are passed directly to bash. The OS handles the
password prompt natively — clat never sees your password.
## Usage
```
clat [OPTIONS] <prompt>...
```
| Flag | Description |
|---|---|
| `-y, --yes` | Skip confirmation, run immediately |
| `-n, --dry-run` | Show generated script, don't execute |
| `-l, --list` | List models available from the API |
| `-L, --load <ID>` | Load a model in LM Studio (can combine with a prompt) |
| `--model <MODEL>` | Override model for this invocation |
| `--api <URL>` | Override API URL for this invocation |
| `-v, --verbose` | Print prompt, API URL, model, and tool status |
| `--config` | Show current config and its path |
| `--init` | Write default config file (won't overwrite existing) |
## Examples
```sh
# Basic — shows generated script, asks to confirm
clat compress all jpegs in this directory to 80% quality

# Skip confirmation
clat -y show disk usage sorted by size for the current directory

# Dry run — see the script without executing
clat -n open a port, docker pull void-base, close port

# List available models
clat -l

# Load a model then immediately run a prompt
clat -L <ID> show disk usage sorted by size for the current directory

# Override model for one call
clat --model <MODEL> compress all jpegs in this directory to 80% quality

# Point at a remote API
clat --api https://api.example.com/v1 --model <MODEL> show disk usage

# Disable tool calls for models that don't support them
# (set use_tools = false in config)
```