snapshell (ss) 0.1.0

A snappy, minimal CLI that generates shell commands via OpenRouter LLMs.

Install

Build and symlink to ss:

cargo build --release
ln -s "$(pwd)/target/release/snapshell" /usr/local/bin/ss  # may need sudo for /usr/local/bin
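To confirm the symlink worked, a small check like the following can help. The `check_install` helper is hypothetical, not part of snapshell; it just verifies that a command resolves on your PATH:

```shell
# Hypothetical helper: returns 0 if the given command is on PATH
check_install() {
  command -v "$1" >/dev/null 2>&1
}

check_install ss || echo "ss not found; make sure /usr/local/bin is on your PATH"
```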

OpenRouter configuration

Before using snapshell with LLM features, configure OpenRouter:

  • Export your API key for the session:
export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"
  • Or create a .env file based on .env.example and load it with your shell or a tool like direnv:
cp .env.example .env
# edit .env and add your key
set -a; source .env; set +a
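If you prefer not to `source` the file directly, a minimal loader can be sketched as below. This `load_env` function is an illustration, not part of snapshell; it assumes simple one-line KEY=VALUE entries and strips surrounding double quotes, but does not handle escapes or multiline values:

```shell
# Hypothetical helper: export KEY=VALUE pairs from a .env file,
# skipping blank lines and comments; strips surrounding double quotes.
# Note: the file must end with a newline for the last entry to be read.
load_env() {
  while IFS='=' read -r key value; do
    case "$key" in ''|'#'*) continue ;; esac
    value="${value%\"}"; value="${value#\"}"
    export "$key=$value"
  done < "$1"
}
```

Usage: `load_env .env`.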

Permanent setup (bash / zsh)

To make the key permanent, add the export to your shell startup file.

For bash (~/.bashrc or ~/.profile):

echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.bashrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.profile

For zsh (~/.zshrc or ~/.zprofile):

echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zshrc
# or
echo 'export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"' >> ~/.zprofile
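Plain `echo ... >>` appends a duplicate line every time it is run. A guarded variant can be sketched as follows; the `add_key_line` name is hypothetical, not part of snapshell:

```shell
# Hypothetical helper: append the export line to a startup file only if
# it is not already present, so re-running is harmless
add_key_line() {
  line='export SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"'
  grep -qxF "$line" "$1" 2>/dev/null || printf '%s\n' "$line" >> "$1"
}
```

Usage: `add_key_line ~/.bashrc` (or `~/.zshrc`).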

After editing, reload your shell or source the file:

source ~/.bashrc   # or source ~/.zshrc
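A quick pre-flight check can save a confusing first run. The `require_key` function below is a sketch (the name is not part of snapshell); it fails with a message when the key is missing from the environment:

```shell
# Hypothetical pre-flight check: non-zero exit if the key is not set
require_key() {
  [ -n "${SNAPSHELL_OPENROUTER_API_KEY:-}" ] || {
    echo "SNAPSHELL_OPENROUTER_API_KEY is not set" >&2
    return 1
  }
}
```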

Quick usage

  • ss 'describe what shell command you want'
    • Generate a single-line shell command, print it, copy it to the macOS clipboard, and save it to history.
  • ss -a 'chat with the model'
    • Enter interactive chat mode; you can keep asking follow-ups. Type /exit or an empty line to quit.
  • ss -r 2 'use reasoning level 2'
    • Attach a reasoning hint to the model.
  • ss -m 'provider/model' 'ask'
    • Override the model (use provider-specific model strings like groq/... or cerebras/...).
  • ss -L 'ask'
    • Allow multiline script output instead of forcing one-liner.
  • ss -H
    • Print saved history entries.

Flags & examples

  • Default single-line mode (default behavior):
ss "install openvino and show the command to quantize a tensorflow model"
  • Force multiline output (for scripts):
ss -L "generate a bash script to backup ~/projects to /tmp/backup"
  • Interactive chat mode (follow-ups):
ss -a "how to list modified rust files since yesterday?"
# After response, type follow-up questions at the `>` prompt
  • Use a low-latency provider model:
ss -m "groq/fast-model" "list files modified today"
  • Override the default system instruction (applies to both modes unless a more specific override is set):
ss -s "You are an expert devops assistant. Output only shell commands." "describe what you want"
  • Override single-line or multiline system instruction explicitly:
ss --system-single "Single-line-only instruction" "do X"
ss --system-multiline "Multiline-allowed instruction" -L "do Y"
  • View history:
ss -H
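Since ss prints the generated command rather than executing it, a confirm-before-run wrapper can be convenient. The `ssr` function below is a hypothetical sketch, not shipped with snapshell; it assumes ss writes the command to stdout:

```shell
# Hypothetical wrapper: capture the command ss prints, show it, and
# execute it only after an explicit "y" (returns non-zero if declined)
ssr() {
  cmd="$(ss "$@")" || return 1
  printf 'Run this? [y/N] %s\n' "$cmd"
  read -r answer
  [ "$answer" = y ] && eval "$cmd"
}
```

Usage: `ssr "list files modified today"`, then answer `y` to execute.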

Environment variables

  • SNAPSHELL_OPENROUTER_API_KEY — API key for OpenRouter (required to call remote LLM).
  • SNAPSHELL_SYSTEM — generic system instruction override.
  • SNAPSHELL_SYSTEM_SINGLE — override for single-line mode.
  • SNAPSHELL_SYSTEM_MULTILINE — override for multiline mode.

See .env.example for a sample env file.
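For illustration, a .env file built from the variables above might look like this (values are placeholders; defer to .env.example in the repo for the authoritative template):

```shell
# Example .env contents (placeholders, not real values)
SNAPSHELL_OPENROUTER_API_KEY="your_openrouter_api_key"
# Optional overrides:
# SNAPSHELL_SYSTEM="You are an expert devops assistant. Output only shell commands."
# SNAPSHELL_SYSTEM_SINGLE="Single-line-only instruction"
# SNAPSHELL_SYSTEM_MULTILINE="Multiline-allowed instruction"
```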

OpenRouter integration

This tool is integrated with OpenRouter. Provide your OpenRouter API key via SNAPSHELL_OPENROUTER_API_KEY. The default model is openai/gpt-oss-20b; you can select a different model with -m 'provider/model'.

For the lowest-latency replies, the recommended providers are Groq or Cerebras when available. You can enforce a provider in OpenRouter under Settings > Account > Allowed Providers: select a provider and tick the 'Always enforce' checkbox.

History

History is saved as history.jsonl in your OS data directory; each entry records the timestamp, the prompt, and the generated command. Use ss -H to view it.
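If you want just the commands out of the history file, a rough extraction can be sketched with sed. This assumes one JSON object per line with a "command" string field (the field name is an assumption); it breaks on commands containing escaped quotes, so use a real JSON tool like jq for anything serious:

```shell
# Sketch: print the "command" field from each line of a JSONL file
history_commands() {
  sed -n 's/.*"command":[[:space:]]*"\([^"]*\)".*/\1/p' "$1"
}
```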

Notes

  • Minimal and fast; designed to return only shell commands by default.
  • If the model returns extra text, use -s/--system-single/--system-multiline to tighten instructions.