tl – streaming, cached translation CLI
tl is a small CLI that reads text from standard input or files and streams translations through any OpenAI-compatible endpoint (local or remote). Configure multiple providers, each with its own endpoint, API key, and models, then switch between them as needed.
Install
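If the tool is published as a Rust crate (an assumption; the README does not name a registry or package), installation could look like:

```sh
# Hypothetical: assumes tl is distributed via crates.io under the name `tl`.
cargo install tl
```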
Or build from source:
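(A sketch, assuming a Rust/Cargo project; the repository URL is a placeholder.)

```sh
git clone https://github.com/example/tl   # placeholder URL
cd tl
cargo build --release
```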
Quick Usage
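A minimal session. Piping stdin and the `--to` flag are documented in this README; the file-argument form is an assumption, since the README says files are accepted but does not show the exact syntax:

```sh
# Translate stdin to Japanese.
echo "Hello, world" | tl --to ja

# Translate a file (invocation shape assumed).
tl --to ja notes.txt
```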
tl caches each translation (keyed on the input, language, model, endpoint, and prompt) so rerunning the same source is fast and cheap. Streaming responses keep your terminal responsive, and a spinner on stderr signals when work is in progress.
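For example:

```sh
echo "Good morning" | tl --to ja   # first run: streams from the endpoint
echo "Good morning" | tl --to ja   # rerun: served instantly from the cache
```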
Configuration
Settings live in `~/.config/tl/config.toml`. A representative config (key names in the `[defaults]` block are reconstructed; the provider keys are documented under Provider Configuration below):
```toml
[defaults]
provider = "ollama"
model = "gemma3:12b"
to = "ja"   # default target language (ISO 639-1)

[providers.ollama]
endpoint = "http://localhost:11434"
models = ["gemma3:12b", "llama3.2"]

[providers.openrouter]
endpoint = "https://openrouter.ai/api"
api_key_env = "OPENROUTER_API_KEY"
models = ["anthropic/claude-3.5-sonnet", "openai/gpt-4o"]
```
Provider Configuration
Each provider has:
- `endpoint` (required) – the OpenAI-compatible API endpoint
- `api_key_env` (optional) – environment variable name containing the API key
- `api_key` (optional) – API key stored directly in config (not recommended)
- `models` (optional) – list of available models for this provider
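For example, with the `openrouter` provider above, the key is supplied through the environment variable named in `api_key_env`:

```sh
export OPENROUTER_API_KEY="sk-or-..."   # placeholder value
```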
CLI options always supersede the config file.
Managing Providers
Use `tl providers` to list configured providers. With the config above, the output might look like this (the layout is illustrative; the actual format may differ):
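```
ollama      http://localhost:11434     gemma3:12b, llama3.2
openrouter  https://openrouter.ai/api  anthropic/claude-3.5-sonnet, openai/gpt-4o
```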
Chat Mode
For interactive translation sessions, use `tl chat`. A session might look like this (the prompt characters and sample exchange are illustrative):
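```
$ tl chat
> Good morning!
おはようございます！
```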
Type text and press Enter to translate; translations stream back in real time.
Slash Commands
- `/config` – show current configuration
- `/help` – list available commands
- `/quit` – exit chat mode
Troubleshooting
- Use `tl languages` to see the supported ISO 639-1 codes before passing `--to` (see the example after this list).
- Streaming is cancel-safe: pressing `Ctrl+C` while streaming aborts without polluting the cache.
- Stale cached output? Run with `--no-cache` to force a fresh request.
- API key issues? Set the environment variable specified in `api_key_env` for your provider.
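The `tl languages` output referenced above might look like this (the format is illustrative; the codes themselves are standard ISO 639-1):

```
$ tl languages
en  English
ja  Japanese
de  German
...
```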