# WTF-CLI (What The Fix CLI)
`wtf-cli` is a command-line wrapper that seamlessly runs your terminal commands and, if they fail, intercepts the error output to provide an AI-generated solution on the spot. It supports local models via Ollama, as well as cloud-based ones via OpenAI, Gemini, or OpenRouter.
## Features
- **Seamless wrapping:** Just prepend `wtf` to your command. If it works, you get your normal output. If it fails, the AI jumps in.
- **Privacy first:** The primary focus is running local AI models via Ollama, meaning no API costs and total privacy.
- **Cloud fallbacks:** Supports OpenAI (`OPENAI_API_KEY`), Gemini (`GEMINI_API_KEY`), and OpenRouter (`OPENROUTER_API_KEY`) as fallbacks.
- **Clear structure:** Provides actionable, structured output so you know exactly what failed and the command to fix it.
## Prerequisites
- Rust & Cargo (latest stable version recommended)
- Optional (but recommended): Ollama running locally for free, private AI analysis.
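If you want to use the local provider, you can check that an Ollama server is reachable before configuring anything. The commands below are a sketch: they assume Ollama's default port (11434) and use the model name from this README's configuration template.

```bash
# Check that a local Ollama server is up and list installed models
curl http://localhost:11434/api/tags

# Pull the model referenced in the configuration template below
ollama pull qwen3.5:9b
```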
## Installation

### From Source
1. Clone the repository:
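A typical clone looks like the following; the repository URL is a placeholder, so substitute the actual location of the `wtf-cli` repository:

```bash
# Replace the URL with the real wtf-cli repository location
git clone https://github.com/<owner>/wtf-cli.git
cd wtf-cli
```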
2. Install the binary using Cargo:
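From the repository root, a standard Cargo install does this (assuming the crate builds on stable Rust; the binary lands in `~/.cargo/bin`):

```bash
# Build in release mode and install the binary into Cargo's bin directory
cargo install --path .
```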
3. Ensure the Cargo bin directory is in your system's `PATH`. You can copy these commands exactly; your shell will expand variables like `$HOME` or `$env:USERPROFILE` automatically.

   **Linux / macOS (Bash/Zsh):** add `$HOME/.cargo/bin` to your `PATH`, and put the same line in your `~/.bashrc` or `~/.zshrc` to make it permanent.

   **Windows (PowerShell):**

   ```powershell
   $env:Path += ";$env:USERPROFILE\.cargo\bin"
   ```

   To make this permanent, add it to your PowerShell profile or use the 'Environment Variables' GUI.
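For Bash/Zsh, the usual line is the standard Cargo `PATH` export; this assumes the default `~/.cargo` install location:

```bash
# Prepend Cargo's bin directory to PATH (default install location)
export PATH="$HOME/.cargo/bin:$PATH"
```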
4. Configure your preferred AI provider by running `wtf --setup` (see [Configuration](#configuration) below).
## Updating

If you've installed `wtf-cli` from source and want to pull the latest changes:
1. Navigate to your local `wtf-cli` repository.
2. Pull the latest code.
3. Re-install the project (the `--force` flag ensures the old binary gets overwritten).
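As a sketch, the three steps above map to commands like these (the checkout directory name is an assumption; use wherever you cloned the repository):

```bash
cd wtf-cli                       # 1. your local checkout (path is an assumption)
git pull                         # 2. fetch and merge the latest changes
cargo install --path . --force   # 3. rebuild; --force overwrites the installed binary
```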
## Usage

Simply prepend `wtf` to any command you usually run.
```bash
# Example 1: A failing npm script
wtf npm run build

# Example 2: Exploring a non-existent directory
wtf ls ./missing-directory
```
If the command succeeds, it exits gracefully with its normal output, just as if you had run it directly.

If it fails, `wtf` captures the error output, sends it to the configured AI, and prints the diagnosis and suggested fix.
## Configuration

You can configure your preferred AI provider by running `wtf --setup`. This presents an interactive menu that lets you choose between Ollama, OpenAI, Gemini, and OpenRouter with the arrow keys, then automatically creates or updates a `.env` file in the current directory with your selection.

Alternatively, you can manually create a `.env` file in the directory where you run the tool; a template is provided in `.env.example`.

Or set these environment variables globally:
```env
# AI Provider (auto-detected if not set)
# Options: ollama, openai, gemini, openrouter
WTF_PROVIDER=ollama

# Ollama (Default provider)
OLLAMA_MODEL=qwen3.5:9b
OLLAMA_HOST=http://localhost:11434

# OpenAI Fallback
OPENAI_API_KEY=your_openai_key_here
OPENAI_MODEL=gpt-4o-mini
# OPENAI_API_BASE=https://api.openai.com/v1

# Gemini Fallback
GEMINI_API_KEY=your_gemini_key_here
GEMINI_MODEL=gemini-2.0-flash

# OpenRouter Fallback
OPENROUTER_API_KEY=your_openrouter_key_here
OPENROUTER_MODEL=arcee-ai/trinity-mini:free
```
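For example, to pin the provider globally in Bash/Zsh (values mirror the template above; add the lines to `~/.bashrc` or `~/.zshrc` to persist them):

```bash
# Select Ollama explicitly and point at a local server
export WTF_PROVIDER=ollama
export OLLAMA_MODEL=qwen3.5:9b
export OLLAMA_HOST=http://localhost:11434
```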
## Demo
## License
MIT