```text
           _     _
__      __| |__ (_)___ _ __ ___
\ \ /\ / /| '_ \| / __|| '__/ __|
 \ V  V / | | | | \__ \| |  \__ \
  \_/\_/  |_| |_|_|___/|_|  |___/

         speak. type. done.
```

# whisrs

Linux-first voice-to-text dictation tool, written in Rust.
Speech-to-text for Wayland, X11, Hyprland, Sway, GNOME, and KDE. Press a hotkey, speak, and your words appear at the cursor. Works with any app, any window manager, any desktop environment. Supports cloud transcription (Groq, Deepgram, OpenAI) and fully offline local transcription via whisper.cpp. Fast, private, open source.
## Why whisrs?
Dictation tools like Wispr Flow and Superwhisper are not available on Linux. xhisper proved the concept works, but I kept running into limitations. whisrs takes that idea and rebuilds it in Rust as a single async process with native keyboard layout support, window tracking, and multiple transcription backends.
## Installation

### Quick install (any distro)

```sh
# script URL assumed to follow the usual raw.githubusercontent.com layout
curl -fsSL https://raw.githubusercontent.com/y0sif/whisrs/main/install.sh | sh
```

Or clone and run locally:

```sh
# script name assumed
git clone https://github.com/y0sif/whisrs.git && cd whisrs && ./install.sh
```
The install script handles everything: detects your distro, installs system dependencies, builds the project, and runs interactive setup.
After install, press your hotkey to start recording, press again to stop. Text appears at your cursor.
### Arch Linux (AUR)
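With an AUR helper such as `yay` (the package name is assumed to match the project name):

```sh
yay -S whisrs
```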
After install, run whisrs setup to configure your backend, API keys, permissions, and keybindings.
### Cargo
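Assuming the crate is published on crates.io under the project name (the Project Status section lists crates.io packaging):

```sh
cargo install whisrs
```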
Requires system dependencies: alsa-lib, libxkbcommon, clang, cmake.
After install, run whisrs setup.
### Nix
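For a one-off run with flakes enabled (assuming the flake's default package is the whisrs binary):

```sh
nix run github:y0sif/whisrs
```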
Or add to your flake inputs:

```nix
inputs.whisrs.url = "github:y0sif/whisrs";
```
### Manual install
1. Dependencies

```sh
# Arch Linux
sudo pacman -S --needed alsa-lib libxkbcommon clang cmake

# Debian/Ubuntu (package names assumed)
sudo apt install libasound2-dev libxkbcommon-dev clang cmake

# Fedora (package names assumed)
sudo dnf install alsa-lib-devel libxkbcommon-devel clang cmake
```
2. Build
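A standard Cargo release build:

```sh
cargo build --release
# binary lands at target/release/whisrs
```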
3. Setup
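Run the setup wizard (the same `whisrs setup` command referenced in the sections above):

```sh
whisrs setup
```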
The interactive setup will walk you through backend selection, API keys / model download, microphone test, uinput permissions, systemd service, and keybindings.
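Setup can create the systemd user service for you. If you would rather write it by hand, a minimal sketch might look like this; the unit below is an assumption (including the `daemon` subcommand and binary path), not the project's shipped unit:

```ini
# ~/.config/systemd/user/whisrs.service
[Unit]
Description=whisrs dictation daemon

[Service]
# subcommand and path are assumptions; adjust to your install
ExecStart=/usr/local/bin/whisrs daemon
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now whisrs.service`.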
4. Bind a hotkey
Example for Hyprland (`~/.config/hypr/hyprland.conf`):

```conf
bind = $mainMod, W, exec, whisrs toggle
```

Example for Sway (`~/.config/sway/config`):

```conf
bindsym $mod+w exec whisrs toggle
```
## Transcription Backends
| Backend | Type | Streaming | Cost | Best for |
|---|---|---|---|---|
| Groq | Cloud | Batch | Free tier available | Getting started, budget use |
| Deepgram Streaming | Cloud (WebSocket) | True streaming | $200 free credit | Streaming with free credits |
| Deepgram REST | Cloud | Batch | $200 free credit | Simple, 60+ languages |
| OpenAI Realtime | Cloud (WebSocket) | True streaming | Paid | Best UX, text as you speak |
| OpenAI REST | Cloud | Batch | Paid | Simple fallback |
| Local whisper.cpp | Local (CPU/GPU) | Sliding window | Free | Privacy, offline use |
Groq is the default. Fast, free tier, good accuracy with whisper-large-v3-turbo.
Deepgram offers $200 in free credits on signup (no credit card required) and supports 60+ languages with the Nova-3 model. The streaming backend provides true real-time transcription over WebSocket.
OpenAI Realtime is the premium option: true streaming over WebSocket means text appears at your cursor while you're still speaking.
### Local whisper.cpp
Run transcription entirely on your machine. No API key, no internet, no data leaves your device. Included in every build.
| Model | Size | RAM | Speed (CPU) | Accuracy |
|---|---|---|---|---|
| tiny.en | 75 MB | ~273 MB | Real-time | Decent |
| base.en | 142 MB | ~388 MB | Real-time | Good (recommended) |
| small.en | 466 MB | ~852 MB | Borderline | Very good |
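If you want to fetch a model manually instead of letting setup download it, the ggml models used by whisper.cpp are published on Hugging Face. The destination path below matches the default shown in the configuration section; the URL assumes the upstream `ggerganov/whisper.cpp` model repo:

```sh
mkdir -p ~/.local/share/whisrs/models
curl -L -o ~/.local/share/whisrs/models/ggml-base.en.bin \
  https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-base.en.bin
```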
## Configuration

Config file: `~/.config/whisrs/config.toml`
```toml
# Example configuration; key names are illustrative, run whisrs setup to generate a valid file
[general]
backend = "groq"          # groq | deepgram-streaming | deepgram | openai-realtime | openai | local-whisper
language = "en"           # ISO 639-1 or "auto"
silence_timeout_ms = 2000 # auto-stop after silence (streaming only)
notifications = true      # desktop notifications
remove_fillers = true     # strip "um", "uh", "you know", etc.
filler_words = []         # custom list (empty = use built-in defaults)
sound_feedback = true     # play tones on record start/stop/done
sound_volume = 0.5        # 0.0 to 1.0
vocabulary = ["whisrs", "Hyprland"] # custom terms for better transcription accuracy
tray = true               # system tray icon (requires SNI host like waybar)

[audio]
device = "default"

[groq]
api_key = "gsk_..."
model = "whisper-large-v3-turbo"

[deepgram]
api_key = "..."
model = "nova-3"

[openai]
api_key = "sk-..."
model = "gpt-4o-mini-transcribe"

[local_whisper]
model_path = "~/.local/share/whisrs/models/ggml-base.en.bin"

# Command mode: LLM for voice-driven text rewriting
[command]
api_key = "sk-..."
model = "gpt-4o-mini"
endpoint = "https://api.openai.com/v1/chat/completions"

# Built-in global hotkeys (optional, works without WM keybinds)
[hotkeys]
toggle = "Super+Shift+W"
cancel = "Super+Shift+D"
command = "Super+Shift+G"
```
Environment variable overrides: `WHISRS_GROQ_API_KEY`, `WHISRS_DEEPGRAM_API_KEY`, `WHISRS_OPENAI_API_KEY`.
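For example, to keep the Groq key out of the config file (the value here is a placeholder):

```shell
export WHISRS_GROQ_API_KEY="gsk_your_key_here"
```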
## CLI Commands

```sh
whisrs setup        # Interactive onboarding
whisrs toggle       # Start/stop recording
whisrs cancel       # Cancel recording, discard audio
whisrs status       # Query daemon state
whisrs command      # Command mode: select text + speak instruction → LLM rewrite
whisrs log          # Show recent transcription history
whisrs log -n 5     # Show last 5 entries
whisrs log --clear  # Clear all history
```
## Supported Environments
| Component | Support |
|---|---|
| Hyprland | Tested, full support |
| Sway / i3 | Implemented, needs community testing |
| X11 (any WM) | Implemented, needs community testing |
| GNOME Wayland | Limited, requires window-calls extension for window tracking |
| KDE Wayland | Implemented via D-Bus, needs community testing |
| Audio | PipeWire, PulseAudio, ALSA (auto-detected via cpal) |
| Distros | Any Linux with the system dependencies above |
Note: whisrs has been primarily tested on Hyprland (Arch Linux). Testing on other compositors and distros is a valuable contribution. If you run into issues, please open an issue.
## Project Status
whisrs is functional and usable for daily dictation. The core features work:
- Daemon + CLI architecture
- Audio capture and WAV encoding
- Groq, Deepgram (REST + streaming), OpenAI REST, and OpenAI Realtime backends
- Local whisper.cpp backend (sliding window, prompt conditioning, model download)
- Layout-aware keyboard injection (uinput + XKB)
- Wayland/X11 clipboard with save/restore
- Window tracking (Hyprland, Sway, X11, GNOME, KDE)
- Desktop notifications and audio feedback
- Interactive setup with LLM provider selection
- Filler word removal
- Transcription history (`whisrs log`)
- Multi-language support (18 languages + auto-detect)
- Custom vocabulary for improved transcription accuracy
- LLM command mode (select text + voice instruction → rewrite)
- System tray indicator (idle/recording/transcribing)
- Configurable global hotkeys via evdev
- Packaging (AUR, Nix flake, crates.io)

Planned:

- Local Vosk backend
- Local Parakeet backend (NVIDIA)
## Troubleshooting
## Contributing
The biggest way to help right now:
- Test on your compositor — Sway, i3, KDE, GNOME. Report what works and what doesn't.
- Test on your distro — Ubuntu, Fedora, NixOS, etc. Build issues, missing deps, etc.
- Bug reports — if text goes to the wrong window, characters get dropped, or audio doesn't capture, open an issue.
See CONTRIBUTING.md for development setup and project structure.
## How whisrs Compares
## FAQ
## License

MIT