OpenRouter API Client Library
A production-ready Rust client for the OpenRouter API with comprehensive security, ergonomic design, and extensive testing. The library uses a type‑state builder pattern for compile-time configuration validation, ensuring robust and secure API interactions.
Features
🏗️ Architecture & Safety
- Type‑State Builder Pattern: Compile-time configuration validation ensures all required settings are provided before making requests (see the sketch after this list)
- Secure Memory Management: API keys are automatically zeroed on drop using the zeroize crate for enhanced security
- Comprehensive Error Handling: Centralized error management with safe error message redaction to prevent sensitive data leakage
- Modular Organization: Clean separation of concerns across modules for models, API endpoints, types, and utilities
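A minimal sketch of the type‑state idea (illustrative only; the crate's real marker types may differ): request methods exist only on a fully configured client, so a missing API key is a compile error rather than a runtime failure.

use std::marker::PhantomData;

// Hypothetical marker types for illustration; not the crate's actual names.
struct Unconfigured;
struct Ready;

struct Client<State> {
    api_key: Option<String>,
    _state: PhantomData<State>,
}

impl Client<Unconfigured> {
    fn new() -> Self {
        Client { api_key: None, _state: PhantomData }
    }

    // Supplying a key is a type transition: the result is Client<Ready>.
    fn with_api_key(self, key: impl Into<String>) -> Client<Ready> {
        Client { api_key: Some(key.into()), _state: PhantomData }
    }
}

impl Client<Ready> {
    // Request methods are defined only in the Ready state.
    fn chat(&self) { /* ... */ }
}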
🚀 Ergonomic API Design
- Convenient Constructors: Quick setup with from_api_key(), from_env(), quick(), and production() methods
- Flexible Configuration: Fluent builder pattern with timeout, retry, and header configuration
- Environment Integration: Automatic API key loading from the OPENROUTER_API_KEY or OR_API_KEY environment variables (see the example below)
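For example, set the key in your shell before running the snippets in this README:
export OPENROUTER_API_KEY="sk-or-v1-..."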
🔒 Security & Reliability
- Memory Safety: Secure API key handling with automatic memory zeroing (see the sketch after this list)
- Response Redaction: Automatic sanitization of error messages to prevent sensitive data exposure
- Streaming Safety: Buffer limits and backpressure handling for streaming responses
- Input Validation: Comprehensive validation of requests and parameters
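The zeroing behavior follows the standard zeroize pattern. A minimal sketch of the technique (not the crate's internal key type):

use zeroize::Zeroizing;

// Zeroizing<String> wipes its backing buffer when dropped, so the key
// does not linger in freed memory after use.
let api_key = Zeroizing::new(std::env::var("OPENROUTER_API_KEY").unwrap_or_default());
// Borrow &*api_key as a &str while it is in scope; dropping it zeroes the bytes.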
🌐 OpenRouter API Support
- Chat Completions: Full support for OpenRouter's chat completion API with streaming
- Text Completions: Traditional text completion endpoint with customizable parameters
- Tool Calling: Define and invoke function tools with proper validation
- Structured Outputs: JSON Schema validation for structured response formats
- Web Search: Type-safe web search API integration
- Provider Preferences: Configure model routing, fallbacks, and provider selection
- Analytics API: Comprehensive activity data retrieval with filtering and pagination
- Providers API: Provider information management with search and filtering
- Enhanced Models API: Advanced model discovery with filtering, sorting, and search
📡 Model Context Protocol (MCP)
- MCP Client: Full JSON-RPC client implementation for the Model Context Protocol
- Resource Access: Retrieve resources from MCP servers
- Tool Invocation: Execute tools provided by MCP servers
- Context Integration: Seamless context sharing between applications and LLMs
🧪 Quality & Testing
- 100% Test Coverage: Comprehensive unit and integration test suite
- CI/CD Pipeline: Automated quality gates with formatting, linting, security audits, and documentation checks
- Production Ready: Extensive error handling, retry logic, and timeout management
Getting Started
Installation
Add the crate to your project with Cargo:
cargo add openrouter_api
cargo add openrouter_api --features tracing
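Or declare the dependency manually in Cargo.toml (0.1.6 is the version documented below; check crates.io for the latest release):

[dependencies]
openrouter_api = "0.1.6"
# or, with tracing support:
# openrouter_api = { version = "0.1.6", features = ["tracing"] }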
Available Features:
- rustls (default): Use rustls for TLS
- native-tls: Use the system TLS implementation
- tracing: Enhanced error logging with tracing support
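To use the system TLS stack instead of the default rustls, disable default features when adding the crate:
cargo add openrouter_api --no-default-features --features native-tls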
Ensure that you have Rust installed (tested with Rust v1.83.0) and that you're using Cargo for building and testing.
Quick Start Examples
Simple Chat Completion
use openrouter_api::{OpenRouterClient, Result};
use openrouter_api::types::chat::{ChatCompletionRequest, Message};
#[tokio::main]
async fn main() -> Result<()> {
let client = OpenRouterClient::from_env()?;
let request = ChatCompletionRequest {
model: "openai/gpt-4o".to_string(),
messages: vec![Message {
role: "user".to_string(),
content: "Hello, world!".to_string(),
name: None,
tool_calls: None,
}],
stream: None,
response_format: None,
tools: None,
provider: None,
models: None,
transforms: None,
};
let response = client.chat()?.chat_completion(request).await?;
if let Some(choice) = response.choices.first() {
println!("Response: {}", choice.message.content);
}
Ok(())
}
Production Configuration
use openrouter_api::{OpenRouterClient, Result};
#[tokio::main]
async fn main() -> Result<()> {
let client = OpenRouterClient::production(
"sk-or-v1-...", "My Production App", "https://myapp.com" )?;
Ok(())
}
Custom Configuration
use openrouter_api::{OpenRouterClient, Result};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<()> {
let client = OpenRouterClient::new()
    .skip_url_configuration()
    .with_timeout_secs(120)
    .with_retries(3, 500)
    .with_http_referer("https://myapp.com")
.with_site_title("My Application")
.with_api_key("sk-or-v1-...")?;
Ok(())
}
Provider Preferences Example
use openrouter_api::{OpenRouterClient, utils, Result};
use openrouter_api::models::provider_preferences::{DataCollection, ProviderPreferences, ProviderSort};
use openrouter_api::types::chat::{ChatCompletionRequest, Message};
use serde_json::json;
#[tokio::main]
async fn main() -> Result<()> {
let api_key = utils::load_api_key_from_env()?;
let client = OpenRouterClient::new()
.with_base_url("https://openrouter.ai/api/v1/")?
.with_api_key(api_key)?;
let preferences = ProviderPreferences::new()
.with_order(vec!["OpenAI".to_string(), "Anthropic".to_string()])
.with_allow_fallbacks(true)
.with_data_collection(DataCollection::Deny)
.with_sort(ProviderSort::Throughput);
let request_builder = client.chat_request_builder(vec![
Message {
role: "user".to_string(),
content: "Hello with provider preferences!".to_string(),
name: None,
tool_calls: None,
},
]);
let payload = request_builder
.with_provider_preferences(preferences)?
.build();
println!("Request payload: {}", serde_json::to_string_pretty(&payload)?);
Ok(())
}
Model Context Protocol (MCP) Client Example
use openrouter_api::{MCPClient, Result};
use openrouter_api::mcp_types::{
ClientCapabilities, GetResourceParams, ToolCallParams,
MCP_PROTOCOL_VERSION
};
#[tokio::main]
async fn main() -> Result<()> {
let client = MCPClient::new("https://mcp-server.example.com/mcp")?;
let server_capabilities = client.initialize(ClientCapabilities {
protocolVersion: MCP_PROTOCOL_VERSION.to_string(),
supportsSampling: Some(true),
}).await?;
println!("Connected to MCP server with capabilities: {:?}", server_capabilities);
let resource = client.get_resource(GetResourceParams {
id: "document-123".to_string(),
parameters: None,
}).await?;
println!("Retrieved resource: {}", resource.content);
let result = client.tool_call(ToolCallParams {
id: "search-tool".to_string(),
parameters: serde_json::json!({
"query": "Rust programming"
}),
}).await?;
println!("Tool call result: {:?}", result.result);
Ok(())
}
Text Completion Example
use openrouter_api::{OpenRouterClient, utils, Result};
use openrouter_api::types::completion::CompletionRequest;
use serde_json::json;
#[tokio::main]
async fn main() -> Result<()> {
let api_key = utils::load_api_key_from_env()?;
let client = OpenRouterClient::new()
.with_base_url("https://openrouter.ai/api/v1/")?
.with_api_key(api_key)?;
let request = CompletionRequest {
model: "openai/gpt-3.5-turbo-instruct".to_string(),
prompt: "Once upon a time".to_string(),
extra_params: json!({
"temperature": 0.8,
"max_tokens": 50
}),
};
let completions_api = client.completions()?;
let response = completions_api.text_completion(request).await?;
if let Some(choice) = response.choices.first() {
println!("Text Completion: {}", choice.text);
}
Ok(())
}
Streaming Chat Example
use openrouter_api::{OpenRouterClient, utils, Result};
use openrouter_api::types::chat::{ChatCompletionRequest, Message};
use futures::StreamExt;
use std::io::Write;
#[tokio::main]
async fn main() -> Result<()> {
let api_key = utils::load_api_key_from_env()?;
let client = OpenRouterClient::new()
.with_base_url("https://openrouter.ai/api/v1/")?
.with_api_key(api_key)?;
let request = ChatCompletionRequest {
model: "openai/gpt-4o".to_string(),
messages: vec![Message {
role: "user".to_string(),
content: "Tell me a story.".to_string(),
name: None,
tool_calls: None,
}],
stream: Some(true),
response_format: None,
tools: None,
provider: None,
models: None,
transforms: None,
};
let chat_api = client.chat()?;
let mut stream = chat_api.chat_completion_stream(request);
let mut total_content = String::new();
while let Some(chunk) = stream.next().await {
match chunk {
Ok(c) => {
if let Some(choice) = c.choices.first() {
if let Some(content) = &choice.delta.content {
print!("{}", content);
total_content.push_str(content);
std::io::stdout().flush().unwrap();
}
}
if let Some(usage) = c.usage {
println!("\nUsage: {} prompt + {} completion = {} total tokens",
usage.prompt_tokens, usage.completion_tokens, usage.total_tokens);
}
},
Err(e) => eprintln!("Error during streaming: {}", e),
}
}
println!();
Ok(())
}
Analytics API Example
use openrouter_api::{OpenRouterClient, utils, Result};
use openrouter_api::types::analytics::{AnalyticsQuery, ActivityType, DateRange};
#[tokio::main]
async fn main() -> Result<()> {
let api_key = utils::load_api_key_from_env()?;
let client = OpenRouterClient::new()
.with_base_url("https://openrouter.ai/api/v1/")?
.with_api_key(api_key)?;
let analytics_api = client.analytics()?;
let mut all_activities = Vec::new();
let mut page = 1;
loop {
let query = AnalyticsQuery::new()
.with_page(page)
.with_per_page(100);
let response = analytics_api.query(query).await?;
// Record the page size before extend() takes ownership of response.data.
let fetched = response.data.len();
all_activities.extend(response.data);
if fetched < 100 {
break;
}
page += 1;
}
println!("Retrieved {} total activities", all_activities.len());
let chat_query = AnalyticsQuery::new()
.with_activity_type(vec![ActivityType::ChatCompletion])
.with_per_page(50);
let chat_response = analytics_api.query(chat_query).await?;
println!("Found {} chat completion activities", chat_response.data.len());
let date_range_query = AnalyticsQuery::new()
.with_date_range(DateRange::Custom {
start: "2024-01-01".to_string(),
end: "2024-01-31".to_string(),
});
let january_response = analytics_api.query(date_range_query).await?;
println!("January activities: {}", january_response.data.len());
let usage_stats = analytics_api.usage().await?;
println!("Total requests: {}", usage_stats.total_requests);
println!("Total tokens: {}", usage_stats.total_tokens);
let daily_activity = analytics_api.daily_activity().await?;
for day in daily_activity {
println!("{}: {} requests, {} tokens",
day.date, day.request_count, day.token_count);
}
Ok(())
}
Providers API Example
use openrouter_api::{OpenRouterClient, utils, Result};
use openrouter_api::types::providers::{ProvidersQuery, ProviderSort};
#[tokio::main]
async fn main() -> Result<()> {
let api_key = utils::load_api_key_from_env()?;
let client = OpenRouterClient::new()
.with_base_url("https://openrouter.ai/api/v1/")?
.with_api_key(api_key)?;
let providers_api = client.providers()?;
let all_providers = providers_api.list().await?;
println!("Found {} providers", all_providers.len());
for provider in &all_providers {
println!("{}: {} models", provider.name, provider.model_count);
}
let search_query = ProvidersQuery::new()
.with_search("openai")
.with_sort(ProviderSort::Name);
let search_results = providers_api.search(search_query).await?;
println!("Found {} providers matching 'openai'", search_results.len());
if let Some(openai) = providers_api.get_by_name("OpenAI").await? {
println!("OpenAI provider details:");
println!(" Models: {}", openai.model_count);
println!(" Status: {:?}", openai.status);
if let Some(first_model) = openai.models.first() {
if let Some(domain) = first_model.extract_domain() {
println!(" Domain: {}", domain);
}
}
}
let capability_query = ProvidersQuery::new()
.with_capability("chat");
let chat_providers = providers_api.query(capability_query).await?;
println!("{} providers support chat", chat_providers.len());
Ok(())
}
Enhanced Models API Example
use openrouter_api::{OpenRouterClient, utils, Result};
use openrouter_api::types::models::{ModelsQuery, ModelSort, ModelArchitecture};
#[tokio::main]
async fn main() -> Result<()> {
let api_key = utils::load_api_key_from_env()?;
let client = OpenRouterClient::new()
.with_base_url("https://openrouter.ai/api/v1/")?
.with_api_key(api_key)?;
let models_api = client.models()?;
let all_models = models_api.list().await?;
println!("Found {} models", all_models.len());
let search_query = ModelsQuery::new()
.with_search("gpt-4")
.with_capability("chat")
.with_sort(ModelSort::Name);
let search_results = models_api.search(search_query).await?;
println!("Found {} GPT-4 models with chat capability", search_results.len());
let architecture_query = ModelsQuery::new()
.with_architecture(ModelArchitecture::Transformer);
let transformer_models = models_api.query(architecture_query).await?;
println!("Found {} transformer models", transformer_models.len());
let openai_models = models_api.get_by_provider("OpenAI").await?;
println!("OpenAI has {} models", openai_models.len());
let context_query = ModelsQuery::new()
.with_min_context_length(32000)
.with_max_context_length(128000);
let high_context_models = models_api.query(context_query).await?;
println!("Found {} models with 32k-128k context", high_context_models.len());
let free_models = models_api.get_free_models().await?;
println!("Found {} free models", free_models.len());
if let Some(gpt4) = models_api.get_by_id("openai/gpt-4").await? {
println!("GPT-4 Details:");
println!(" Name: {}", gpt4.name);
println!(" Context Length: {}", gpt4.context_length);
println!(" Pricing: ${}/1M tokens", gpt4.pricing.prompt);
if let Some(description) = gpt4.description {
println!(" Description: {}", description);
}
}
Ok(())
}
Model Context Protocol (MCP) Client
The library includes a client implementation for the Model Context Protocol, which is an open protocol that standardizes how applications provide context to LLMs.
Key features of the MCP client include:
- JSON-RPC Communication: Implements the JSON-RPC 2.0 protocol for MCP
- Resource Access: Retrieve resources from MCP servers
- Tool Invocation: Call tools provided by MCP servers
- Prompt Execution: Execute prompts on MCP servers
- Server Capabilities: Discover and leverage server capabilities
- Proper Authentication: Handle initialization and authentication flows
// Connect to the server and negotiate capabilities.
let client = MCPClient::new("https://mcp-server.example.com/mcp")?;
let server_capabilities = client.initialize(ClientCapabilities {
protocolVersion: MCP_PROTOCOL_VERSION.to_string(),
supportsSampling: Some(true),
}).await?;
let resource = client.get_resource(GetResourceParams {
id: "some-resource-id".to_string(),
parameters: None,
}).await?;
See the Model Context Protocol specification for more details.
Implementation Status
This is a production-ready library with comprehensive functionality:
✅ Core Features (Completed)
- Client Framework: Type‑state builder pattern with compile‑time validation
- Security: Secure API key handling with memory zeroing and error redaction
- Chat Completions: Full OpenRouter chat API support with streaming
- Text Completions: Traditional text completion endpoint
- Web Search: Integrated web search capabilities
- Tool Calling: Function calling with validation
- Structured Outputs: JSON Schema validation
- Provider Preferences: Model routing and fallback configuration
- Analytics API: Comprehensive activity data retrieval with filtering and pagination
- Providers API: Provider information management with search and filtering
- Enhanced Models API: Advanced model discovery with filtering, sorting, and search
- Model Context Protocol: Complete MCP client implementation
✅ Quality Infrastructure (Completed)
- 100% Test Coverage: 147 comprehensive unit and integration tests
- Security Auditing: Automated security vulnerability scanning
- CI/CD Pipeline: GitHub Actions with quality gates
- Documentation: Complete API documentation with examples
- Developer Experience: Contributing guidelines, issue templates, PR templates
✅ Ergonomic Improvements (Completed)
- Convenience Constructors:
from_env(), from_api_key(), production(), quick()
- Flexible Configuration: Timeout, retry, and header management
- Error Handling: Comprehensive error types with context
- Memory Safety: Automatic sensitive data cleanup
- Advanced Filtering: Sophisticated query builders for analytics, providers, and models
- Convenience Methods: Helper methods for common operations like domain extraction
🔄 Future Enhancements
- Credits API: Account credit and usage tracking
- Performance Optimizations: Connection pooling and caching
- Extended MCP Features: Additional MCP protocol capabilities
- Generation API Enhancements: Additional generation endpoints and features
Contributing
Contributions are welcome! Please open an issue or submit a pull request with your ideas or fixes. Follow the code style guidelines and ensure that all tests pass.
License
Distributed under either the MIT license or the Apache License, Version 2.0. See LICENSE for details.
OpenRouter API Rust Crate Documentation
Version: 0.1.6 • License: MIT / Apache‑2.0
The openrouter_api crate is a comprehensive client for interacting with the OpenRouter API and Model Context Protocol servers. It provides strongly‑typed endpoints for chat completions, text completions, web search, and MCP connections. The crate is built using asynchronous Rust and leverages advanced patterns for safe and flexible API usage.
Table of Contents
- Core Concepts
- Architecture & Module Overview
- Client Setup & Type‑State Pattern
- API Endpoints
- Error Handling
- Best Practices
- Additional Resources
Core Concepts
- Type‑State Client Configuration: The client is built using a type‑state pattern to ensure that required parameters are set before making any API calls.
- Provider Preferences: Strongly-typed configuration for model routing, fallbacks, and provider selection.
- Asynchronous Streaming: Support for streaming responses via asynchronous streams.
- Model Context Protocol: Client implementation for connecting to MCP servers to access resources, tools, and prompts.
- Error Handling & Validation: Comprehensive error handling with detailed context and validation utilities.
Architecture & Module Overview
The crate is organized into several modules:
- client: Type-state client implementation with builder pattern
- api: API endpoint implementations (chat, completions, web search, etc.)
- models: Domain models for structured outputs, provider preferences, tools
- types: Type definitions for requests and responses
- mcp: Model Context Protocol client implementation
- error: Centralized error handling
- utils: Utility functions and helpers
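These modules map directly onto the import paths used throughout the examples in this document:

use openrouter_api::{OpenRouterClient, MCPClient, Result};
use openrouter_api::types::chat::{ChatCompletionRequest, Message};
use openrouter_api::models::provider_preferences::ProviderPreferences;
use openrouter_api::utils; // helpers such as load_api_key_from_env()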
Client Setup & Type‑State Pattern
// Option 1: zero-configuration setup from the environment.
let client = OpenRouterClient::from_env()?;

// Option 2: production preset with app identification headers.
let client = OpenRouterClient::production(
    "sk-or-v1-...",
    "My App",
    "https://myapp.com",
)?;

// Option 3: fully manual configuration.
let client = OpenRouterClient::new()
    .with_base_url("https://openrouter.ai/api/v1/")?
    .with_timeout(Duration::from_secs(30))
    .with_http_referer("https://your-app.com/")
    .with_api_key(std::env::var("OPENROUTER_API_KEY")?)?;
API Endpoints
Chat Completions
let response = client.chat()?.chat_completion(
ChatCompletionRequest {
model: "openai/gpt-4o".to_string(),
messages: vec![Message {
role: "user".to_string(),
content: "Explain quantum computing".to_string(),
name: None,
tool_calls: None,
}],
stream: None,
response_format: None,
tools: None,
provider: None,
models: None,
transforms: None,
}
).await?;
Tool Calling
let weather_tool = Tool::Function {
function: FunctionDescription {
name: "get_weather".to_string(),
description: Some("Get weather information for a location".to_string()),
parameters: serde_json::json!({
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City and state"
}
},
"required": ["location"]
}),
}
};
let response = client.chat()?.chat_completion(
ChatCompletionRequest {
model: "openai/gpt-4o".to_string(),
messages: vec![Message {
role: "user".to_string(),
content: "What's the weather in Boston?".to_string(),
name: None,
tool_calls: None,
}],
tools: Some(vec![weather_tool]),
stream: None,
response_format: None,
provider: None,
models: None,
transforms: None,
}
).await?;
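Structured Outputs
Structured outputs are driven by the response_format field on ChatCompletionRequest. The typed wrapper varies by crate version, so the sketch below shows the JSON-Schema payload OpenRouter expects on the wire, built with serde_json; treat the mapping onto the crate's own response-format type as an assumption to verify against the API documentation.

// Sketch: the json_schema response format OpenRouter accepts.
// How this maps onto the crate's typed response_format is version-dependent.
let response_format = serde_json::json!({
    "type": "json_schema",
    "json_schema": {
        "name": "weather_report",
        "strict": true,
        "schema": {
            "type": "object",
            "properties": {
                "location": { "type": "string" },
                "temperature": { "type": "number" }
            },
            "required": ["location", "temperature"]
        }
    }
});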
Model Context Protocol
let mcp_client = MCPClient::new("https://mcp-server.example.com/mcp")?;
let server_capabilities = mcp_client.initialize(ClientCapabilities {
protocolVersion: MCP_PROTOCOL_VERSION.to_string(),
supportsSampling: Some(true),
}).await?;
let resource = mcp_client.get_resource(GetResourceParams {
id: "document-123".to_string(),
parameters: None,
}).await?;
Error Handling
match client.chat()?.chat_completion(request).await {
Ok(response) => {
println!("Success: {}", response.choices[0].message.content);
},
Err(e) => match e {
Error::ApiError { code, message, .. } => {
eprintln!("API Error ({}): {}", code, message);
},
Error::HttpError(ref err) if err.is_timeout() => {
eprintln!("Request timed out!");
},
Error::ConfigError(msg) => {
eprintln!("Configuration error: {}", msg);
},
_ => eprintln!("Other error: {:?}", e),
}
}
Best Practices
- Use the Type‑State Pattern: Let the compiler ensure your client is properly configured.
- Set Appropriate Timeouts & Headers: Configure reasonable timeouts and identify your application.
- Handle Errors Appropriately: Implement proper error handling for each error type.
- Use Provider Preferences: Configure provider routing for optimal model selection.
- Secure Your API Keys: Store keys in environment variables or secure storage.
Additional Resources