# openai-oxide

Idiomatic Rust client for the OpenAI API, aiming for 1:1 parity with the official Python SDK.
## Features
- Async-first (tokio + reqwest)
- Strongly typed requests and responses (serde)
- SSE streaming support
- Automatic retries with exponential backoff
- Builder pattern for requests (see the sketch after Quick Start)
- Same resource structure as the Python SDK: `client.chat().completions().create()`
## Quick Start
Add to `Cargo.toml` (`futures-util` is only needed for the streaming example below):

```toml
[dependencies]
openai-oxide = "0.1"
tokio = { version = "1", features = ["full"] }
futures-util = "0.3" # for StreamExt in the streaming example
```
```rust
use openai_oxide::{OpenAI, types::chat::*};

#[tokio::main]
async fn main() -> Result<(), openai_oxide::OpenAIError> {
    // Reads the API key from the environment, as `from_env` suggests
    // (conventionally OPENAI_API_KEY, matching the Python SDK).
    let client = OpenAI::from_env()?;

    let request = ChatCompletionRequest::new(
        "gpt-4o-mini",
        vec![
            ChatCompletionMessageParam::System {
                content: "You are a helpful assistant.".into(),
                name: None,
            },
            ChatCompletionMessageParam::User {
                content: UserContent::Text("Hello!".into()),
                name: None,
            },
        ],
    );

    let response = client.chat().completions().create(request).await?;
    println!("{}", response.choices[0].message.content.as_deref().unwrap_or(""));
    Ok(())
}
```
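Note that `message.content` is optional in the OpenAI schema (responses carrying tool calls can omit it), hence the `as_deref().unwrap_or("")`.

The Features list also mentions a builder pattern for requests. Only `ChatCompletionRequest::new` is shown in this README, so the chainable setters in this sketch (`temperature`, `max_tokens`) are hypothetical names, included purely to illustrate the intended style:

```rust
// Hypothetical chainable setters: the builder-pattern feature is
// documented above, but these method names are not confirmed.
let request = ChatCompletionRequest::new(
    "gpt-4o-mini",
    vec![ChatCompletionMessageParam::User {
        content: UserContent::Text("Hello!".into()),
        name: None,
    }],
)
.temperature(0.7)
.max_tokens(256);
```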
## Streaming
```rust
use std::io::{self, Write};

use futures_util::StreamExt;
use openai_oxide::{OpenAI, types::chat::*};

#[tokio::main]
async fn main() -> Result<(), openai_oxide::OpenAIError> {
    let client = OpenAI::from_env()?;

    let request = ChatCompletionRequest::new(
        "gpt-4o-mini",
        vec![ChatCompletionMessageParam::User {
            content: UserContent::Text("Tell me a joke".into()),
            name: None,
        }],
    );

    let mut stream = client.chat().completions().create_stream(request).await?;
    while let Some(chunk) = stream.next().await {
        // Each item is a Result, so transport errors surface per chunk.
        let chunk = chunk?;
        if let Some(delta) = chunk.choices.first().and_then(|c| c.delta.content.as_deref()) {
            // Print each token delta as it arrives; flush so it shows immediately.
            print!("{delta}");
            io::stdout().flush().ok();
        }
    }
    Ok(())
}
```
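To capture the complete message instead of printing it incrementally, accumulate the deltas. This reuses only the calls shown above; swap it in for the `while` loop:

```rust
let mut full = String::new();
while let Some(chunk) = stream.next().await {
    let chunk = chunk?;
    if let Some(delta) = chunk.choices.first().and_then(|c| c.delta.content.as_deref()) {
        full.push_str(delta); // append each delta to the running reply
    }
}
println!("{full}");
```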
## Configuration
```rust
use openai_oxide::{OpenAI, ClientConfig};

// From the environment (conventionally OPENAI_API_KEY):
let client = OpenAI::from_env()?;

// With an explicit API key:
let client = OpenAI::new("sk-...");

// Full control over endpoint, timeout, and retry behavior:
let config = ClientConfig::new("sk-...")
    .base_url("https://api.openai.com/v1")
    .timeout_secs(30)
    .max_retries(3);
let client = OpenAI::with_config(config);
```
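`max_retries` caps the automatic exponential-backoff retries listed under Features. Since all requests are made relative to `base_url`, pointing it at an OpenAI-compatible proxy or gateway should work the same way.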
## Implemented APIs
| API | Method | Status |
| --- | --- | --- |
| Chat Completions | `client.chat().completions().create()` | Done |
| Chat Completions (streaming) | `client.chat().completions().create_stream()` | Done |
More endpoints coming soon: Embeddings, Images, Audio, Files, Models, Fine-tuning, Moderations, Responses.
## Development

```sh
cargo test                        # unit tests
cargo test --features live-tests  # integration tests against the live API
cargo clippy -- -D warnings       # lints, with warnings promoted to errors
cargo fmt -- --check              # formatting check
```
## License
MIT