# LLMG — LLM Gateway
A high-performance Rust LLM gateway and provider library. One OpenAI-compatible API for 70+ LLM providers.
## Features
- Unified API — Single OpenAI-compatible endpoint for every provider
- 70+ Providers — OpenAI, Anthropic, Azure, Groq, Mistral, Cohere, DeepSeek, Ollama, OpenRouter, and many more
- Library + Gateway — Use as a Rust crate or deploy the HTTP gateway
- Feature-Gated — Compile only the providers you need
- Streaming — Server-Sent Events across all providers
- Rig Integration — Drop-in provider for Rig agents
## Quick Start

### Gateway
Install the gateway via cargo:
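A likely form of the command, assuming the binary ships from the `llmg-gateway` crate listed under Project Structure:

```shell
# Installs the gateway binary from crates.io
# (crate name taken from the Project Structure table below).
cargo install llmg-gateway
```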
Then run it with your API keys:
```shell
# Assumes `cargo install` puts an `llmg-gateway` binary on your PATH.
OPENAI_API_KEY=sk-... llmg-gateway
```
### Docker
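No image tag is named in this README, so the one below is hypothetical; substitute whatever image the project publishes:

```shell
# Hypothetical image tag and port; adjust to the published image
# and the gateway's configured bind address.
docker run -p 8080:8080 -e OPENAI_API_KEY=sk-... llmg/gateway
```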
### Library
```toml
[dependencies]
llmg-core = "0.1"
llmg-providers = { version = "0.1", features = ["openai"] }
```
```rust
// Module paths and struct fields are illustrative; check the crate docs.
use llmg_providers::openai::OpenAiClient;
use llmg_core::{ChatCompletionRequest, Message, Provider};

let client = OpenAiClient::from_env()?;
let request = ChatCompletionRequest {
    model: "gpt-4".into(),
    messages: vec![Message::user("Hello!")],
    ..Default::default()
};
let response = client.chat_completion(request).await?;
```
## How Routing Works
Requests use the `provider/model` format:

- `openai/gpt-4` → OpenAI
- `anthropic/claude-3-opus` → Anthropic
- `groq/llama3-70b-8192` → Groq
- `ollama/llama3` → Ollama (local)
- `openrouter/openai/gpt-4` → OpenRouter (nested)

Built-in aliases let you use short names like `gpt-4`, `claude`, or `gemini`.
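The split described above can be sketched as a standalone function; this mirrors the documented behavior but is not the gateway's actual implementation:

```rust
/// Sketch of the provider/model split; splitting on the FIRST '/' only
/// is what makes nested ids like "openrouter/openai/gpt-4" work, since
/// "openai/gpt-4" stays intact as the model id.
fn route(model: &str) -> Option<(&str, &str)> {
    model.split_once('/')
}

fn main() {
    assert_eq!(route("openai/gpt-4"), Some(("openai", "gpt-4")));
    assert_eq!(route("openrouter/openai/gpt-4"), Some(("openrouter", "openai/gpt-4")));
    // Bare aliases like "claude" contain no '/' and are resolved through
    // the alias table instead (not shown here).
    assert_eq!(route("claude"), None);
    println!("ok");
}
```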
## Project Structure
| Crate | Purpose |
|---|---|
| `llmg-core` | Shared types, traits, error handling |
| `llmg-providers` | Provider implementations (feature-gated) |
| `llmg-gateway` | HTTP gateway server (Axum) |
## Configuration
Set API keys as environment variables. The gateway auto-registers providers based on which keys are present.
```shell
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GROQ_API_KEY=gsk_...
```
See the documentation for the full list of providers and their environment variables.
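Because the endpoint is OpenAI-compatible, a plain `curl` call works once a key is set; a sketch assuming the gateway listens on `localhost:8080` (the actual host and port depend on your deployment):

```shell
# Standard OpenAI-style chat completion, routed by the model prefix.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3-opus",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```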
## License
Licensed under either the Apache License 2.0 or the MIT license, at your option.
## Contributing
Contributions welcome! See CONTRIBUTING.md for guidelines.