# OpenRouter Rust

A comprehensive, type-safe Rust client library for the OpenRouter API.
## Features
- 🤖 Chat Completions API: Full support for the `/v1/chat/completions` endpoint
- 🆕 Responses API (Beta): Support for the new `/v1/responses` endpoint with reasoning capabilities
- 🎭 Anthropic Messages API: Native support for Anthropic's Messages API format
- 🔢 Embeddings API: Text and multimodal embeddings support
- 📊 Models API: List, filter, and query available models
- 🏢 Providers API: List available model providers
- 📈 Generation API: Retrieve detailed generation metadata
- 🌊 Streaming: Real-time streaming responses using Server-Sent Events (SSE)
- 🔒 Type-Safe: Comprehensive types for all API requests and responses
- 🏗️ Builder Pattern: Ergonomic builder APIs for constructing requests
- ⚡ Async/Await: Fully async with tokio runtime
- 📦 Modular: Use only the features you need
- 🛠️ Error Handling: Detailed error types for different failure scenarios
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
openrouter-rust = "0.1.0"
```

Or with specific features:

```toml
[dependencies]
openrouter-rust = { version = "0.1.0", features = ["chat", "streaming", "embeddings", "anthropic"] }
```
## Feature Flags

- `chat` (default): Enable chat completions API (`/v1/chat/completions`)
- `responses`: Enable Responses API (`/v1/responses`)
- `streaming` (default): Enable streaming support
- `embeddings`: Enable embeddings API (`/v1/embeddings`)
- `anthropic`: Enable Anthropic Messages API (`/v1/messages`)
- `providers` (default): Enable providers API (`/v1/providers`)
- `models` (default): Enable models API (`/v1/models`, `/v1/models/count`, `/v1/models/user`)
- `generations` (default): Enable generation metadata API (`/v1/generation`)
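Default features can also be switched off so only the endpoints you use are compiled. A sketch, assuming the crate is published as `openrouter-rust`:

```toml
[dependencies]
openrouter-rust = { version = "0.1.0", default-features = false, features = ["chat"] }
```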
## Quick Start

```rust
use openrouter_rust::{OpenRouterClient, ChatCompletionRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = OpenRouterClient::builder()
        .api_key("your-api-key")
        .build()?;

    let request = ChatCompletionRequest::new("openai/gpt-4o")
        .user_message("Hello, world!")
        .build();

    let response = client.chat_completion(&request).await?;
    if let Some(content) = &response.choices[0].message.content {
        println!("{content}");
    }
    Ok(())
}
```
## Usage Examples
### Basic Chat Completion

```rust
use openrouter_rust::{OpenRouterClient, ChatCompletionRequest};

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .http_referer("https://yourapp.com")
    .x_title("Your App")
    .build()?;

let request = ChatCompletionRequest::new("openai/gpt-4o")
    .system_message("You are a helpful assistant.")
    .user_message("What is the capital of France?")
    .temperature(0.7)
    .max_tokens(500)
    .build();

let response = client.chat_completion(&request).await?;
if let Some(content) = &response.choices[0].message.content {
    println!("{content}");
}
```
### Streaming Responses

```rust
use openrouter_rust::{OpenRouterClient, ChatCompletionRequest};
use futures::StreamExt;

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .build()?;

let request = ChatCompletionRequest::new("openai/gpt-4o")
    .user_message("Tell me a story.")
    .stream(true)
    .build();

let stream = client.chat_completion_stream(&request).await?;
let mut stream = Box::pin(stream);

while let Some(chunk) = stream.next().await {
    let chunk = chunk?;
    if let Some(content) = chunk.choices.first().and_then(|c| c.delta.content.as_deref()) {
        print!("{content}");
    }
}
```
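Under the hood, each streamed chunk arrives as a Server-Sent Events `data:` line whose payload is a JSON delta, and a literal `[DONE]` sentinel closes the stream. A minimal std-only sketch of that framing (the function is illustrative, not part of this crate's API):

```rust
/// Extract the JSON payload from one SSE line.
/// Returns None for comment/keep-alive lines and for the
/// "[DONE]" sentinel that terminates an OpenRouter stream.
fn sse_data_payload(line: &str) -> Option<&str> {
    // Lines that don't start with "data:" (e.g. ": keep-alive") carry no payload.
    let payload = line.strip_prefix("data:")?.trim_start();
    if payload == "[DONE]" {
        None
    } else {
        Some(payload)
    }
}
```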
### Anthropic Messages API

```rust
use openrouter_rust::{OpenRouterClient, AnthropicMessageRequest};

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .build()?;

let request = AnthropicMessageRequest::new("anthropic/claude-3.5-sonnet")
    .user_message("Explain quantum entanglement.")
    .thinking(true)
    .temperature(0.7)
    .build();

let response = client.create_anthropic_message(&request).await?;
for content in &response.content {
    println!("{content:?}");
}
```
### Embeddings API

```rust
use openrouter_rust::{OpenRouterClient, EmbeddingRequest};

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .build()?;

// Single text embedding
let request = EmbeddingRequest::new("openai/text-embedding-3-small", "Hello, world!")
    .build();
let response = client.create_embedding(&request).await?;
for item in &response.data {
    println!("{} dimensions", item.embedding.len());
}

// Batch embeddings
let batch_request = EmbeddingRequest::new_with_array(
    "openai/text-embedding-3-small",
    vec!["First text", "Second text"],
).build();
let batch_response = client.create_embedding(&batch_request).await?;
println!("{} embeddings returned", batch_response.data.len());

// List available embedding models
let models = client.list_embedding_models().await?;
for model in &models.data {
    println!("{}", model.id);
}
```
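Once you have embedding vectors, comparing them is typically done with cosine similarity. A minimal std-only helper (not part of this crate's API):

```rust
/// Cosine similarity between two equal-length embedding vectors.
/// Returns a value in [-1.0, 1.0]; higher means more similar.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}
```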
### Responses API (Beta)

```rust
use openrouter_rust::{OpenRouterClient, ResponsesRequest};

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .build()?;

let request = ResponsesRequest::new("openai/gpt-4o")
    .user_message("Solve step by step: what is 17 * 24?")
    .reasoning(true)
    .temperature(0.2)
    .build();

let response = client.create_response(&request).await?;
for item in &response.output {
    println!("{item:?}");
}
```
### Models API

```rust
use openrouter_rust::{OpenRouterClient, ListModelsParams};

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .build()?;

// List all models
let models = client.list_models(None).await?;
for model in &models.data {
    println!("{}", model.id);
}

// Get model count
let count = client.get_models_count().await?;
println!("{count} models available");

// List models filtered by user preferences
let user_models = client.list_models_user().await?;

// Filter models by category
let params = ListModelsParams {
    category: Some("programming".to_string()),
    ..Default::default()
};
let programming_models = client.list_models(Some(&params)).await?;
```
### Providers API

```rust
use openrouter_rust::OpenRouterClient;

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .build()?;

let providers = client.list_providers().await?;
for provider in &providers.data {
    println!("{}", provider.name);
}
```
### Generation Metadata API

```rust
use openrouter_rust::OpenRouterClient;

let client = OpenRouterClient::builder()
    .api_key("your-api-key")
    .build()?;

// After making a chat completion, get the generation ID from the response
let generation_id = "gen-1234567890abcdef";
let generation = client.get_generation(generation_id).await?;

let data = &generation.data;
println!("Model: {}", data.model);
println!("Prompt tokens: {:?}", data.tokens_prompt);
println!("Completion tokens: {:?}", data.tokens_completion);
println!("Total cost: {:?}", data.total_cost);
println!("Latency: {:?}", data.latency);
println!("Provider: {:?}", data.provider_name);
```
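The per-token usage numbers can be turned into a cost estimate. A sketch with hypothetical per-token prices (real prices vary by model; the generation metadata also reports cost directly):

```rust
/// Estimate request cost in USD from token counts and per-token prices.
/// The prices passed in are placeholders, not real OpenRouter pricing.
fn estimate_cost(
    prompt_tokens: u64,
    completion_tokens: u64,
    prompt_price: f64,
    completion_price: f64,
) -> f64 {
    prompt_tokens as f64 * prompt_price + completion_tokens as f64 * completion_price
}
```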
### JSON Mode

```rust
use openrouter_rust::{OpenRouterClient, ChatCompletionRequest};

let request = ChatCompletionRequest::new("openai/gpt-4o")
    .system_message("Respond only with valid JSON.")
    .user_message("List three primary colors as a JSON array.")
    .response_format_json()
    .build();

let response = client.chat_completion(&request).await?;
// Response will be valid JSON
```
### Tool Calling

```rust
use openrouter_rust::{ChatCompletionRequest, Tool};
use serde_json::json;

// Field names on Tool are illustrative
let weather_tool = Tool {
    name: "get_weather".to_string(),
    description: Some("Get the current weather for a location".to_string()),
    parameters: json!({
        "type": "object",
        "properties": {
            "location": { "type": "string", "description": "City name" }
        },
        "required": ["location"]
    }),
};

let request = ChatCompletionRequest::new("openai/gpt-4o")
    .user_message("What's the weather in Paris?")
    .tools(vec![weather_tool])
    .build();
```
## Configuration

### Client Builder Options

```rust
use std::time::Duration;

let client = OpenRouterClient::builder()
    .api_key("your-api-key")                  // Required
    .base_url("https://openrouter.ai/api/v1") // Optional, defaults to official API
    .http_referer("https://yourapp.com")      // Optional, for OpenRouter rankings
    .x_title("Your App")                      // Optional, for OpenRouter rankings
    .timeout(Duration::from_secs(60))         // Optional, default 60s
    .build()?;
```
## Error Handling

The library provides detailed error types:

```rust
use openrouter_rust::OpenRouterError;

// Variant names below are illustrative
match result {
    Ok(response) => println!("Success: {response:?}"),
    Err(OpenRouterError::Api { status, message }) => {
        eprintln!("API error {status}: {message}");
    }
    Err(e) => eprintln!("Other error: {e}"),
}
```
## Examples

See the `examples/` directory for complete working examples:

- `basic_chat.rs` - Simple chat completion
- `streaming.rs` - Real-time streaming
- `responses_api.rs` - Using the new Responses API
- `anthropic_messages.rs` - Anthropic Messages API usage
- `embeddings.rs` - Text and image embeddings
- `models_api.rs` - Listing and filtering models
- `providers.rs` - Listing providers
- `generation_metadata.rs` - Retrieving generation info

Run examples:

```bash
# Basic examples (chat features)
cargo run --example basic_chat

# Advanced features
cargo run --example responses_api --features responses
```
## API Coverage

### Implemented ✅

- ✅ Chat Completions (`/v1/chat/completions`)
- ✅ Responses API (`/v1/responses`)
- ✅ Anthropic Messages (`/v1/messages`)
- ✅ Embeddings (`/v1/embeddings`, `/v1/embeddings/models`)
- ✅ Models (`/v1/models`, `/v1/models/count`, `/v1/models/user`)
- ✅ Providers (`/v1/providers`)
- ✅ Generation Metadata (`/v1/generation`)
- ✅ Streaming (SSE)
- ✅ All standard parameters (temperature, top_p, max_tokens, etc.)
- ✅ Tools/Function calling
- ✅ JSON mode
- ✅ Provider preferences
- ✅ Plugins (web search, auto-router, etc.)
### Coming Soon 🔄
- 🔄 Image generation
- 🔄 Audio APIs (TTS, STT)
- 🔄 Batch processing
- 🔄 Fine-tuning
## License

This project is licensed under the MIT OR Apache-2.0 license.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

Made with ❤️ for the Rust community