# aether-llm
Multi-provider LLM abstraction layer for Rust. Write your code once, then swap between Anthropic, OpenAI, OpenRouter, Ollama, and more by changing a single string.
## Table of Contents

- [Quick start](#quick-start)
- [Examples](#examples)
- [Switching providers](#switching-providers)
- [Direct provider construction](#direct-provider-construction)
- [Providers](#providers)
- [Documentation](#documentation)
- [Key Types](#key-types)
- [Feature Flags](#feature-flags)
- [License](#license)
## Quick start
Parse a "provider:model" string into a provider, build a context, and stream the response:

```rust
use aether_llm::{ChatMessage, Context, IsoString, ModelProviderParser};
use futures::StreamExt;

// Reconstructed example -- import paths and exact signatures may differ;
// see the docs.rs API reference for the authoritative versions.
#[tokio::main]
async fn main() {
    let parser = ModelProviderParser::default();
    let provider = parser.parse("anthropic:claude-sonnet-4-5-20250929").unwrap();

    let context = Context::new(
        "You are a helpful assistant.", // system prompt
        vec![ChatMessage::user("Write a haiku about Rust.", IsoString::now())],
    );

    let mut stream = provider.stream_response(&context).await.unwrap();
    while let Some(event) = stream.next().await {
        // `LlmResponse` events: text deltas, tool calls, etc.
        println!("{event:?}");
    }
}
```
## Examples
### Conversation with a system prompt

```rust
use aether_llm::{ChatMessage, Context, IsoString};

// Sketch -- the exact Context constructor signature may differ.
let context = Context::new(
    "You are a concise assistant that answers in one sentence.", // system prompt
    vec![ChatMessage::user("Explain lifetimes.", IsoString::now())],
);
```
### Tool use

Define tools with JSON Schema, then feed results back after execution:

```rust
use aether_llm::{ChatMessage, Context, IsoString, ToolDefinition};
use serde_json::json;

// Sketch -- ToolDefinition field names are illustrative.
let tools = vec![ToolDefinition {
    name: "get_weather".into(),
    description: "Get the current weather for a city".into(),
    input_schema: json!({
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
    }),
}];

// Sketch -- how tools attach to the Context may differ.
let mut context = Context::new(
    "You are a weather bot.",
    vec![ChatMessage::user("What's the weather in Oslo?", IsoString::now())],
)
.with_tools(tools);

// After streaming the response and executing the tool call,
// feed the result back into the context. `assistant_turn` and
// `tool_results` are illustrative names for values produced while
// streaming and executing the tool call:
context.push_assistant_turn(assistant_turn, tool_results);
// Then call provider.stream_response(&context) again for the final answer.
```
## Switching providers
ModelProviderParser accepts any supported "provider:model" string, so switching is a one-line change:
```rust
use aether_llm::ModelProviderParser;

let parser = ModelProviderParser::default();

// Model strings below are taken from the provider table; `parse`'s
// exact return type may differ.

// Cloud providers (need API keys in env)
let _anthropic = parser.parse("anthropic:claude-sonnet-4-5-20250929").unwrap();
let _openai = parser.parse("openai:gpt-4o").unwrap();
let _openrouter = parser.parse("openrouter:moonshotai/kimi-k2").unwrap();

// Local models (no API key needed)
let _ollama = parser.parse("ollama:llama3.2").unwrap();
let _llamacpp = parser.parse("llamacpp").unwrap();
```
## Direct provider construction

When you need fine-grained control (temperature, max tokens), construct the provider directly:

```rust
use aether_llm::{AnthropicProvider, ProviderFactory};

// Reconstructed sketch -- the argument values are illustrative.
let provider = AnthropicProvider::from_env()
    .unwrap()
    .with_model("claude-sonnet-4-5-20250929")
    .with_temperature(0.7)
    .with_max_tokens(4096);
```
## Providers

| Provider | Example model string | Env var |
|---|---|---|
| Anthropic | `anthropic:claude-sonnet-4-5-20250929` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai:gpt-4o` | `OPENAI_API_KEY` |
| OpenRouter | `openrouter:moonshotai/kimi-k2` | `OPENROUTER_API_KEY` |
| ZAI | `zai:GLM-4.6` | `ZAI_API_KEY` |
| AWS Bedrock | `bedrock:us.anthropic.claude-sonnet-4-5-20250929-v1:0` | AWS credentials |
| Ollama | `ollama:llama3.2` | None (local) |
| Llama.cpp | `llamacpp` | None (local) |
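For the cloud providers, the expected setup is to export the matching environment variable before running your program. A minimal sketch (the key values below are placeholders, not real keys):

```shell
# Variable names come from the table above; values are placeholders.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"
export OPENROUTER_API_KEY="sk-or-placeholder"
```

Local backends (`ollama:*`, `llamacpp`) skip this step entirely; they only need the local server to be running.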
## Documentation
Full API documentation is available on docs.rs.
Key entry points:

- [`StreamingModelProvider`] -- the core trait all providers implement
- [`Context`] -- conversation state management
- [`ChatMessage`] -- message types for building conversations
- [`LlmResponse`] -- streaming response events
- [`ModelProviderParser`] -- parse `"provider:model"` strings into providers
## Key Types

- `StreamingModelProvider` -- Core trait for all LLM providers. Implement this to add a new provider.
- `Context` -- Manages the message history, tool definitions, and reasoning effort sent to the model.
- `ChatMessage` -- Message enum with variants for user, assistant, and tool call messages.
- `ToolDefinition` -- Describes a tool the model can invoke (name, description, JSON schema).
- `LlmModel` -- Catalog of known models with metadata (context window, capabilities).
## Feature Flags

| Feature | Description |
|---|---|
| `bedrock` | AWS Bedrock provider support |
| `oauth` | OAuth authentication (used by Codex provider) |
| `codex` | OpenAI Codex provider (implies `oauth`) |
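Optional features are enabled in `Cargo.toml` as usual. A sketch (the version number here is a placeholder, not the crate's actual version):

```toml
[dependencies]
# "0.1" is a placeholder version; feature names come from the table above.
aether-llm = { version = "0.1", features = ["bedrock", "codex"] }
```

Enabling `codex` pulls in `oauth` automatically, so it does not need to be listed separately.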
## License
MIT