snapshell 0.2.1

snapshell - a snappy CLI that generates shell commands via OpenRouter LLMs

snapshell (ss)

Minimal and snappy shell command generator powered by LLMs.

An alternative to GitHub Copilot's ghcs, snapshell quickly generates shell commands using your preferred LLM via OpenRouter.

Install

Build and symlink to ss:

cargo build --release
ln -s "$(pwd)/target/release/snapshell" /usr/local/bin/ss   # may require sudo

OpenRouter configuration

Before using snapshell with LLM features, configure OpenRouter:

  • Export your API key for the session:
export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"
  • Or create a .env file based on .env.example and load it with your shell or a tool like direnv:
cp .env.example .env
# edit .env and add your key
export $(cat .env | xargs)
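
Note that the `export $(cat .env | xargs)` one-liner breaks on values containing spaces. A safer sketch using the shell's own allexport mode (the variable name is from this README; the sample value is made up):

```shell
# Create a sample .env (normally you would copy .env.example) and load it.
printf '%s\n' 'SNAPSHELL_OPENROUTER_API_KEY="sk-or-example key"' > .env

# `set -a` auto-exports every variable assigned while it is active,
# so quoted values containing spaces survive intact.
set -a
. ./.env
set +a

echo "$SNAPSHELL_OPENROUTER_API_KEY"
```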

Permanent setup (bash / zsh)

To make the key permanent, add the export to your shell startup file.

For bash (~/.bashrc or ~/.profile):

echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.bashrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.profile

For zsh (~/.zshrc or ~/.zprofile):

echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zshrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zprofile

After editing, reload your shell or source the file:

source ~/.bashrc   # or source ~/.zshrc

Quick usage

  • ss 'describe what shell command you want'
    • Generate a single-line shell command, print it, copy it to the macOS clipboard, and save it to history.
  • ss -a 'chat with the model'
    • Enter interactive chat mode; you can continue asking follow-ups. Type /exit or press Enter on an empty line to quit.
  • ss -r high 'use high reasoning effort'
    • Attach a reasoning-effort hint (low, medium, or high) to the model.
  • ss -m 'provider/model' 'ask'
    • Override the model (use provider-specific model strings like groq/... or cerebras/...).
  • ss -L 'ask'
    • Allow multiline script output instead of forcing one-liner.
  • ss -H
    • Print saved history entries.

Flags & examples

  • Single-line mode (the default):
ss "install openvino and show the command to quantize a tensorflow model"
  • Force multiline output (for scripts):
ss -L "generate a bash script to backup ~/projects to /tmp/backup"
  • Interactive chat mode (follow-ups):
ss -a "how to list modified rust files since yesterday?"
# After response, type follow-up questions at the `>` prompt
  • Use a low-latency provider model:
ss -m "groq/fast-model" "list files modified today"
  • Override the default system instruction (applies to both modes unless a mode-specific override is set):
ss -s "You are an expert devops assistant. Output only shell commands." "describe what you want"
  • Override single-line or multiline system instruction explicitly:
ss --system-single "Single-line-only instruction" "do X"
ss --system-multiline "Multiline-allowed instruction" -L "do Y"
  • View history:
ss -H

Reasoning

snapshell supports an optional, lightweight "reasoning" hint (OpenAI-style reasoning effort) that you can send to the model.

  • -r, --reasoning <low|medium|high> — set the reasoning effort. Default: low.
  • -S, --show-reasoning — when set, the model may append a trailing JSON object containing its short reasoning, printed on the line after the command as:
{"reasoning": "short one-sentence reason here"}

Notes:

  • Reasoning is not printed by default; only enable it with -S when you want an explanation.
  • The reasoning line is not copied to the clipboard and is not saved to history; only the generated command is copied/saved.
  • Example:
ss -r high -S "why can't I install TensorRT on macOS?"
# output:
# (NOT ABLE TO ANSWER): TensorRT requires NVIDIA GPUs and is not available on macOS.
#{"reasoning": "TensorRT depends on NVIDIA GPU drivers not present on macOS"}
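
Because only the first line is copied and saved, a follow-up script can split the two lines itself. A minimal sketch of handling the two-line format shown above (the sample response text here is invented for illustration):

```shell
# Split a `-S` response into the command line and the trailing
# reasoning JSON line.
response='ls -la
{"reasoning": "lists all files including hidden ones"}'

command_line=$(printf '%s\n' "$response" | sed -n '1p')
reasoning_line=$(printf '%s\n' "$response" | sed -n '2p')

echo "$command_line"       # the part snapshell copies/saves
echo "$reasoning_line"     # the explanation, printed only with -S
```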

Environment variables

  • SNAPSHELL_OPENROUTER_API_KEY — API key for OpenRouter (required to call remote LLM).
  • SNAPSHELL_SYSTEM — generic system instruction override.
  • SNAPSHELL_SYSTEM_SINGLE — override for single-line mode.
  • SNAPSHELL_SYSTEM_MULTILINE — override for multiline mode.

See .env.example for a sample env file.
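
For example, a session-wide override (the variable name is from the list above; the instruction text is hypothetical):

```shell
# Tighten the single-line instruction for the current shell session only.
export SNAPSHELL_SYSTEM_SINGLE='Output exactly one POSIX-compatible shell command. No prose.'
echo "$SNAPSHELL_SYSTEM_SINGLE"
```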

OpenRouter integration

This tool is integrated with OpenRouter. Provide your OpenRouter API key via SNAPSHELL_OPENROUTER_API_KEY. The default model is openai/gpt-oss-20b. You can select a different model with -m 'provider/model'.

For the lowest-latency replies, Groq or Cerebras are the recommended providers when available. You can enforce a provider in OpenRouter under Settings > Account > Allowed Providers: select a provider and tick the 'Always enforce' checkbox.

History

History is saved as history.jsonl in your OS data directory; each entry records the timestamp, prompt, and generated command. Use ss -H to view it.
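
Since each entry is one JSON object per line, the file is easy to inspect with standard tools. A sketch, assuming fields named timestamp, prompt, and command (the exact field names and the data-dir path may differ on your system):

```shell
# Write a sample entry in the documented shape, then show the most
# recent entries much like `ss -H` would.
HISTORY_FILE=history.jsonl   # the real file lives in your OS data dir
printf '%s\n' \
  '{"timestamp":"2024-01-01T00:00:00Z","prompt":"list files","command":"ls -la"}' \
  >> "$HISTORY_FILE"

tail -n 5 "$HISTORY_FILE"    # last five history entries
```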

Notes

  • Minimal and fast: designed to return only shell commands by default.
  • If the model returns extra text, use -s/--system-single/--system-multiline to tighten instructions.