//! # OpenRouter Rust SDK
//!
//! `openrouter-rs` is a type-safe, async Rust SDK for the [OpenRouter API](https://openrouter.ai/),
//! providing easy access to 200+ AI models from providers like OpenAI, Anthropic, Google, and more.
//!
//! ## ✨ Key Features
//!
//! - **🔒 Type Safety**: Leverages Rust's type system for compile-time error prevention
//! - **⚡ Async/Await**: Built on `tokio` for high-performance async operations
//! - **🏗️ Builder Pattern**: Ergonomic client and request construction
//! - **🧭 Domain Clients**: Grouped API access via `chat()`, `responses()`, `messages()`, `models()`, `management()`
//! - **📡 Streaming Support**: Real-time response streaming with `futures`
//! - **🧩 Unified Streaming Events**: Shared stream event model across chat/responses/messages
//! - **🧠 Reasoning Tokens**: Advanced support for chain-of-thought reasoning
//! - **⚙️ Model Presets**: Pre-configured model groups for different use cases
//! - **🎯 Full API Coverage**: Complete OpenRouter API endpoint support
//!
//! ## 🚀 Quick Start
//!
//! Add to your `Cargo.toml`:
//! ```toml
//! [dependencies]
//! openrouter-rs = "0.5.2"
//! tokio = { version = "1", features = ["full"] }
//! ```
//!
//! ### Basic Chat Completion
//!
//! ```rust,no_run
//! use openrouter_rs::{
//! OpenRouterClient,
//! api::chat::{ChatCompletionRequest, Message},
//! types::Role,
//! };
//!
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error>> {
//! // Create client with builder pattern
//! let client = OpenRouterClient::builder()
//! .api_key("your_api_key")
//! .http_referer("https://yourapp.com")
//! .x_title("My App")
//! .build()?;
//!
//! // Build chat request
//! let request = ChatCompletionRequest::builder()
//! .model("anthropic/claude-sonnet-4")
//! .messages(vec![
//! Message::new(Role::System, "You are a helpful assistant"),
//! Message::new(Role::User, "Explain Rust ownership in simple terms"),
//! ])
//! .temperature(0.7)
//! .max_tokens(500)
//! .build()?;
//!
//! // Send request and get response
//! let response = client.chat().create(&request).await?;
//! println!("Response: {}", response.choices[0].content().unwrap_or(""));
//!
//! Ok(())
//! }
//! ```
//!
//! ### Streaming Responses
//!
//! ```rust
//! use futures_util::StreamExt;
//! use openrouter_rs::{OpenRouterClient, api::chat::*, types::Role};
//!
//! # async fn example() -> Result<(), Box<dyn std::error::Error>> {
//! let client = OpenRouterClient::builder()
//! .api_key("your_api_key")
//! .build()?;
//!
//! let request = ChatCompletionRequest::builder()
//! .model("google/gemini-2.5-flash")
//! .messages(vec![Message::new(Role::User, "Write a haiku about Rust")])
//! .build()?;
//!
//! let mut stream = client.chat().stream(&request).await?;
//!
//! while let Some(result) = stream.next().await {
//! if let Ok(response) = result {
//! if let Some(content) = response.choices[0].content() {
//! print!("{}", content);
//! }
//! }
//! }
//! # Ok(())
//! # }
//! ```
//!
//! ### Reasoning Tokens (Chain-of-Thought)
//!
//! ```rust
//! use openrouter_rs::{OpenRouterClient, api::chat::*, types::{Role, Effort}};
//!
//! # async fn example() -> Result<(), Box<dyn std::error::Error>> {
//! let client = OpenRouterClient::builder()
//! .api_key("your_api_key")
//! .build()?;
//!
//! let request = ChatCompletionRequest::builder()
//! .model("deepseek/deepseek-r1")
//! .messages(vec![Message::new(Role::User, "What's bigger: 9.9 or 9.11?")])
//! .reasoning_effort(Effort::High) // Enable high-effort reasoning
//! .reasoning_max_tokens(1000) // Limit reasoning tokens
//! .build()?;
//!
//! let response = client.chat().create(&request).await?;
//!
//! println!("Reasoning: {}", response.choices[0].reasoning().unwrap_or(""));
//! println!("Answer: {}", response.choices[0].content().unwrap_or(""));
//! # Ok(())
//! # }
//! ```
//!
//! ## 📚 Core Modules
//!
//! - [`client`] - Client configuration and HTTP operations
//! - [`api`] - OpenRouter API endpoints (chat, models, credits, etc.)
//! - [`types`] - Request/response types and enums
//! - [`config`] - Configuration management and model presets
//! - [`error`] - Error types and handling
//!
//! ## 🎯 Model Presets
//!
//! The SDK includes curated model presets for different use cases:
//!
//! - **`programming`**: Code generation and software development
//! - **`reasoning`**: Advanced reasoning and problem-solving
//! - **`free`**: Free-tier models for experimentation
//!
//! ```rust
//! use openrouter_rs::config::OpenRouterConfig;
//!
//! let config = OpenRouterConfig::default();
//! println!("Available models: {:?}", config.get_resolved_models());
//! ```
//!
//! ## 🔗 API Coverage
//!
//! | Feature | Status | Module |
//! |---------|--------|---------|
//! | Domain-Oriented Client API | ✅ | [`client::OpenRouterClient`] |
//! | Chat Completions | ✅ | [`api::chat`] |
//! | Legacy Text Completions (`legacy-completions`) | ✅ | `api::legacy::completion` |
//! | Model Information | ✅ | [`api::models`] |
//! | Streaming | ✅ | [`api::chat`] |
//! | Unified Streaming Events | ✅ | [`types::stream`] |
//! | Reasoning Tokens | ✅ | [`api::chat`] |
//! | API Key Management | ✅ | [`api::api_keys`] |
//! | Credit Management | ✅ | [`api::credits`] |
//! | Generation Data | ✅ | [`api::generation`] |
//! | Authentication | ✅ | [`api::auth`] |
//! | Guardrails | ✅ | [`api::guardrails`] |
//!
//! ## 📖 Examples
//!
//! Check out the [`examples/`](https://github.com/realmorrisliu/openrouter-rs/tree/main/examples)
//! directory for comprehensive usage examples:
//!
//! - Basic chat completion
//! - Streaming responses
//! - Reasoning tokens
//! - Model management
//! - Error handling
//! - Advanced configurations
//!
//! ## 🤝 Contributing
//!
//! Contributions are welcome! Please see our
//! [GitHub repository](https://github.com/realmorrisliu/openrouter-rs) for issues and pull requests.
pub mod api;
pub mod client;
pub mod config;
pub mod error;
pub mod types;

// Convenience re-exports at the crate root. Paths follow the module layout
// documented above (`Model` lives in `api::models` per the API coverage table).
pub use api::models::Model;
pub use client::OpenRouterClient;