
OpenRouter Rust SDK

A type-safe, async Rust SDK for the OpenRouter API - Access 200+ AI models with ease


πŸ“š Documentation | 🎯 Examples | πŸ“¦ Crates.io

🌟 What makes this special?

  • πŸ”’ Type Safety: Leverages Rust's type system for compile-time error prevention
  • ⚑ Async/Await: Built on tokio for high-performance concurrent operations
  • 🧠 Reasoning Tokens: First-class support for chain-of-thought reasoning output
  • πŸ“‘ Streaming: Real-time response streaming with futures
  • πŸ—οΈ Builder Pattern: Ergonomic client and request construction
  • βš™οΈ Smart Presets: Curated model groups for programming, reasoning, and free tiers
  • 🎯 Complete Coverage: All OpenRouter API endpoints supported

πŸš€ Quick Start

Installation

Add to your Cargo.toml:

[dependencies]
openrouter-rs = "0.4.5"
tokio = { version = "1", features = ["full"] }

30-Second Example

use openrouter_rs::{OpenRouterClient, api::chat::*, types::Role};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create client
    let client = OpenRouterClient::builder()
        .api_key("your_api_key")
        .build()?;

    // Send chat completion
    let request = ChatCompletionRequest::builder()
        .model("anthropic/claude-sonnet-4")
        .messages(vec![
            Message::new(Role::User, "Explain Rust ownership in simple terms")
        ])
        .build()?;

    let response = client.send_chat_completion(&request).await?;
    println!("{}", response.choices[0].content().unwrap_or(""));

    Ok(())
}
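
Hard-coding a key is fine for a first test, but in real projects you'll usually read it from the environment (the runnable examples further below assume OPENROUTER_API_KEY is set). A minimal sketch, assuming the builder's api_key accepts a &str as shown above:

// Read the API key from the environment instead of hard-coding it
let api_key = std::env::var("OPENROUTER_API_KEY")?;
let client = OpenRouterClient::builder()
    .api_key(api_key.as_str())
    .build()?;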

✨ Key Features

🧠 Advanced Reasoning Support

Leverage chain-of-thought processing with reasoning tokens:

use openrouter_rs::{api::chat::*, types::{Effort, Role}};

let request = ChatCompletionRequest::builder()
    .model("deepseek/deepseek-r1")
    .messages(vec![Message::new(Role::User, "What's bigger: 9.9 or 9.11?")])
    .reasoning_effort(Effort::High)    // Enable deep reasoning
    .reasoning_max_tokens(2000)        // Control reasoning depth
    .build()?;

let response = client.send_chat_completion(&request).await?;

// Access both reasoning and final answer
println!("🧠 Reasoning: {}", response.choices[0].reasoning().unwrap_or(""));
println!("πŸ’‘ Answer: {}", response.choices[0].content().unwrap_or(""));

πŸ“‘ Real-time Streaming

Process responses as they arrive:

use futures_util::StreamExt;

let stream = client.stream_chat_completion(&request).await?;

stream
    .filter_map(|event| async { event.ok() })
    .for_each(|chunk| async move {
        if let Some(content) = chunk.choices[0].content() {
            print!("{}", content);  // Print as it arrives
        }
    })
    .await;
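
Note that print! does not flush stdout by itself, so tokens can appear in bursts. Here is a sketch of an equivalent loop with an explicit flush (pin_mut! covers the case where the stream type is not Unpin):

use std::io::Write;
use futures_util::{StreamExt, pin_mut};

let stream = client.stream_chat_completion(&request).await?;
pin_mut!(stream); // allows .next() even if the stream is not Unpin

while let Some(Ok(chunk)) = stream.next().await {
    if let Some(content) = chunk.choices[0].content() {
        print!("{}", content);
        std::io::stdout().flush().ok(); // show each token immediately
    }
}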

βš™οΈ Smart Model Presets

Use curated model collections:

use openrouter_rs::config::OpenRouterConfig;

let config = OpenRouterConfig::default();

// Three built-in presets:
// β€’ programming: Code generation and development
// β€’ reasoning: Advanced problem-solving models  
// β€’ free: Free-tier models for experimentation

println!("Available models: {:?}", config.get_resolved_models());

πŸ›‘οΈ Comprehensive Error Handling

use openrouter_rs::error::OpenRouterError;

match client.send_chat_completion(&request).await {
    Ok(response) => println!("{}", response.choices[0].content().unwrap_or("")),
    Err(OpenRouterError::ModerationError { reasons, .. }) => {
        eprintln!("Content flagged: {:?}", reasons);
    }
    Err(OpenRouterError::ApiError { code, message }) => {
        eprintln!("API error {}: {}", code, message);
    }
    Err(e) => eprintln!("Other error: {}", e),
}
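
Automatic retries are still on the roadmap below; until then, a hand-rolled backoff loop is easy to sketch. send_with_retry is a hypothetical helper, and the response type name ChatCompletionResponse is an assumption — adjust it to whatever api::chat exports in your version:

use std::time::Duration;
use openrouter_rs::{OpenRouterClient, api::chat::*, error::OpenRouterError};

// Hypothetical helper: retry API-level failures with exponential backoff.
async fn send_with_retry(
    client: &OpenRouterClient,
    request: &ChatCompletionRequest,
    max_attempts: u32,
) -> Result<ChatCompletionResponse, OpenRouterError> {
    let mut delay = Duration::from_millis(500);
    let mut attempt = 1;
    loop {
        match client.send_chat_completion(request).await {
            // Retry API errors (e.g. rate limits) until attempts run out
            Err(OpenRouterError::ApiError { .. }) if attempt < max_attempts => {
                tokio::time::sleep(delay).await;
                delay *= 2; // exponential backoff
                attempt += 1;
            }
            other => return other,
        }
    }
}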

πŸ“Š API Coverage

Feature              Status   Module
Chat Completions     βœ…       api::chat
Text Completions     βœ…       api::completion
Reasoning Tokens     βœ…       api::chat
Streaming Responses  βœ…       api::chat
Model Information    βœ…       api::models
API Key Management   βœ…       api::api_keys
Credit Management    βœ…       api::credits
Authentication       βœ…       api::auth

🎯 More Examples

Filter Models by Category

use openrouter_rs::types::ModelCategory;

let models = client
    .list_models_by_category(ModelCategory::Programming)
    .await?;

println!("Found {} programming models", models.len());

Advanced Client Configuration

let client = OpenRouterClient::builder()
    .api_key("your_key")
    .http_referer("https://yourapp.com")
    .x_title("My AI App")
    .base_url("https://openrouter.ai/api/v1")  // Custom endpoint
    .build()?;
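
The http_referer and x_title values map to OpenRouter's HTTP-Referer and X-Title attribution headers, which identify your app in OpenRouter's usage rankings.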

Streaming with Reasoning

use futures_util::{StreamExt, pin_mut};

let stream = client.stream_chat_completion(
    &ChatCompletionRequest::builder()
        .model("anthropic/claude-sonnet-4")
        .messages(vec![Message::new(Role::User, "Solve this step by step: 2x + 5 = 13")])
        .reasoning_effort(Effort::High)
        .build()?
).await?;
pin_mut!(stream);

let mut reasoning_buffer = String::new();
let mut content_buffer = String::new();

// A while-let loop is used instead of for_each: the async closures passed
// to for_each cannot mutably borrow the buffers, but a plain loop can.
while let Some(Ok(chunk)) = stream.next().await {
    if let Some(reasoning) = chunk.choices[0].reasoning() {
        reasoning_buffer.push_str(reasoning);
        print!("🧠");  // Show reasoning progress
    }
    if let Some(content) = chunk.choices[0].content() {
        content_buffer.push_str(content);
        print!("πŸ’¬");  // Show content progress
    }
}

println!("\n🧠 Reasoning: {}", reasoning_buffer);
println!("πŸ’‘ Answer: {}", content_buffer);

πŸ“š Documentation & Resources

Run Examples Locally

# Set your API key
export OPENROUTER_API_KEY="your_key_here"

# Basic chat completion
cargo run --example send_chat_completion

# Reasoning tokens demo
cargo run --example chat_with_reasoning

# Streaming responses
cargo run --example stream_chat_completion

# Streaming with reasoning tokens
cargo run --example stream_chat_with_reasoning

🀝 Community & Support

πŸ› Found a Bug?

Please open an issue with:

  • Your Rust version (rustc --version)
  • SDK version you're using
  • Minimal code example
  • Expected vs actual behavior

πŸ’‘ Feature Requests

We love hearing your ideas! Start a discussion to:

  • Suggest new features
  • Share use cases
  • Get help with implementation

πŸ› οΈ Contributing

Contributions are welcome! Please see our contributing guidelines:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Follow the existing code style
  5. Submit a pull request

⭐ Show Your Support

If this SDK helps your project, consider:

  • ⭐ Starring the repository
  • 🐦 Sharing on social media
  • πŸ“ Writing about your experience
  • 🀝 Contributing improvements

πŸ“‹ Requirements

  • Rust: 1.85+ (2024 edition)
  • Tokio: 1.0+ (for async runtime)
  • OpenRouter API Key: create one in your OpenRouter account

πŸ—ΊοΈ Roadmap

  • WebSocket Support - Real-time bidirectional communication
  • Retry Strategies - Automatic retry with exponential backoff
  • Caching Layer - Response caching for improved performance
  • CLI Tool - Command-line interface for quick testing
  • Middleware System - Request/response interceptors

πŸ“œ License

This project is licensed under the MIT License - see the LICENSE file for details.

⚠️ Disclaimer

This is a third-party SDK not officially affiliated with OpenRouter. Use at your own discretion.


πŸ“ˆ Release History

Version 0.4.5 (Latest)

  • 🧠 New: Complete reasoning tokens implementation with chain-of-thought support
  • βš™οΈ Updated: Model presets restructured to programming/reasoning/free categories
  • πŸ“š Enhanced: Professional-grade documentation with comprehensive examples
  • πŸ—οΈ Improved: Configuration system with better model management

Version 0.4.4

  • Added: Support for listing models by supported parameters
  • Note: the OpenRouter API does not support filtering by category and by supported parameters at the same time

Version 0.4.3

  • Added: Support for listing models by category
  • Thanks to the OpenRouter team for the API enhancement!

Made with ❀️ for the Rust community

⭐ Star us on GitHub | πŸ“¦ Find us on Crates.io | πŸ“š Read the Docs