# snapshell (ss)
Minimal and snappy shell-command generator powered by LLM/AI.

An alternative to GitHub Copilot's `ghcs`, snapshell quickly generates shell commands using your preferred LLM via OpenRouter.
## Install

Build the project and symlink the resulting binary as `ss`.
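The build command depends on the project's toolchain, which isn't shown here; as an illustrative sketch (`make build` and the `bin/snapshell` path are assumptions):

```shell
make build                                      # hypothetical build step; see the repo for the real one
ln -sf "$PWD/bin/snapshell" /usr/local/bin/ss   # symlink the built binary as `ss`
```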
## OpenRouter configuration

Before using snapshell's LLM features, configure OpenRouter:
- Export your API key for the current shell session.
- Or create a `.env` file based on `.env.example`, edit it to add your key, and load it with your shell or a tool like `direnv`.
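Either route can be sketched as follows (the key value is a placeholder, and the loading commands assume a POSIX shell, with `direnv` optional):

```shell
# Option 1: export for the current session only
export SNAPSHELL_OPENROUTER_API_KEY='sk-or-your-key-here'

# Option 2: start from the sample file
cp .env.example .env
# edit .env and add your key, then load it:
set -a; . ./.env; set +a    # or let direnv load it automatically after `direnv allow`
```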
## Permanent setup (bash / zsh)

To make the key permanent, add the export line to your shell startup file:

- bash: `~/.bashrc` or `~/.profile`
- zsh: `~/.zshrc` or `~/.zprofile`

After editing, reload your shell or source the file.
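A sketch of the permanent setup (the key value is a placeholder; pick the file that matches your shell):

```shell
# bash (~/.bashrc or ~/.profile)
echo 'export SNAPSHELL_OPENROUTER_API_KEY="sk-or-your-key-here"' >> ~/.bashrc

# zsh (~/.zshrc or ~/.zprofile)
echo 'export SNAPSHELL_OPENROUTER_API_KEY="sk-or-your-key-here"' >> ~/.zshrc

# reload
source ~/.bashrc    # or: source ~/.zshrc
```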
## Quick usage

- `ss 'describe what shell command you want'`: generate a single-line shell command, print it, copy it to the macOS clipboard, and save it to history.
- `ss -a 'chat with the model'`: enter interactive chat mode; you can keep asking follow-ups. Type `/exit` or an empty line to quit.
- `ss -r 2 'use reasoning level 2'`: attach a reasoning hint to the model.
- `ss -m 'provider/model' 'ask'`: override the model (use provider-specific model strings like `groq/...` or `cerebras/...`).
- `ss -L 'ask'`: allow multiline script output instead of forcing a one-liner.
- `ss -H`: print saved history entries.
## Flags & examples

- Single-line output is the default; no flag is needed.
- `-L`: force multiline output (for scripts).
- `-a`: interactive chat mode; after a response, type follow-up questions at the `>` prompt.
- `-m 'provider/model'`: use a low-latency provider model.
- `-s`: override the default system instruction (applies to both modes unless a more specific override is set).
- `--system-single` / `--system-multiline`: override the single-line or multiline system instruction explicitly.
- `-H`: view history.
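Illustrative invocations (the prompts are made up; the model string reuses an example from elsewhere in this README):

```shell
ss 'find files larger than 100MB'               # default: one-line command
ss -L 'script to back up ~/Documents nightly'   # allow a multiline script
ss -a 'why does my cron job fail?'              # chat mode; follow-ups at the > prompt
ss -m 'groq/fast-model' 'list listening ports'  # low-latency provider model
ss -H                                           # print saved history
```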
## Reasoning

snapshell supports an optional lightweight "reasoning" hint (OpenAI-style effort) that you can request from the model.

- `-r, --reasoning <low|medium|high>`: set the reasoning effort. Default: `low`.
- `-S, --show-reasoning`: when set, the model may append a trailing JSON object containing its short reasoning, printed on the line after the command.
Notes:

- Reasoning is not printed by default; enable it with `-S` only when you want an explanation.
- The reasoning line is not copied to the clipboard and is not saved to history; only the generated command is copied/saved.
- Example output with `-S`:

  ```
  # output:
  # (NOT ABLE TO ANSWER): TensorRT requires NVIDIA GPUs and is not available on macOS.
  # {"reasoning": "TensorRT depends on NVIDIA GPU drivers not present on macOS"}
  ```
## Environment variables

- `SNAPSHELL_OPENROUTER_API_KEY`: API key for OpenRouter (required to call the remote LLM).
- `SNAPSHELL_SYSTEM`: generic system instruction override.
- `SNAPSHELL_SYSTEM_SINGLE`: override for single-line mode.
- `SNAPSHELL_SYSTEM_MULTILINE`: override for multiline mode.

See `.env.example` for a sample env file.
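A sketch of what such a file might contain (all values are placeholders; the repo's `.env.example` is authoritative):

```shell
SNAPSHELL_OPENROUTER_API_KEY=sk-or-your-key-here
SNAPSHELL_OPENROUTER_MODEL=openai/gpt-oss-20b
SNAPSHELL_SYSTEM="Return only a shell command, no prose."
```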
## OpenRouter integration

This tool is integrated with OpenRouter. Provide your OpenRouter API key via the `SNAPSHELL_OPENROUTER_API_KEY` environment variable.

You can control the model used in two ways (priority order):

1. CLI: pass `-m 'provider/model'` to `ss`.
2. Environment: set `SNAPSHELL_OPENROUTER_MODEL` (for example `openai/gpt-oss-20b` or `groq/fast-model`).
If neither is set, snapshell falls back to the built-in default `openai/gpt-oss-20b`.
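The resolution order can be sketched as (model strings reuse the examples above):

```shell
ss -m 'groq/fast-model' 'ask'   # 1) the -m flag wins when present
export SNAPSHELL_OPENROUTER_MODEL='openai/gpt-oss-20b'
ss 'ask'                        # 2) the env var is used when -m is absent
                                # 3) with neither set, the built-in default openai/gpt-oss-20b applies
```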
For instant results, the recommended lowest-latency providers are Groq or Cerebras when available. You can enforce a provider in OpenRouter under Settings > Account > Allowed Providers > select a provider (and tick the 'Always enforce' checkbox).
## History

History is saved as `history.jsonl` in your OS data directory and contains the timestamp, prompt, and generated command. Use `ss -H` to view it.
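An entry in `history.jsonl` might look like the following (field names are illustrative; the text above specifies only that the timestamp, prompt, and generated command are stored):

```json
{"timestamp": "2024-05-01T12:00:00Z", "prompt": "find files larger than 100MB", "command": "find . -type f -size +100M"}
```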
## Notes

- Minimal and fast; designed to return only shell commands by default.
- If the model returns extra text, use `-s` / `--system-single` / `--system-multiline` to tighten the instructions.