§openai-ergonomic
Ergonomic Rust wrapper for the OpenAI API, providing type-safe builder patterns and async/await support.
§Features
- 🛡️ Type-safe - Full type safety with builder patterns using bon
- ⚡ Async/await - Built on tokio and reqwest for modern async Rust
- 🔄 Streaming - First-class support for streaming responses
- 📝 Comprehensive - Covers all OpenAI API endpoints
- 🧪 Well-tested - Extensive test coverage with mock support
- 📚 Well-documented - Rich documentation with examples
§Status
🚧 Under Construction - This crate is currently being developed and is not yet ready for production use.
§Quick Start
Add openai-ergonomic to your Cargo.toml:

```toml
[dependencies]
openai-ergonomic = "0.1"
tokio = { version = "1.0", features = ["full"] }
# `futures` supplies the `StreamExt` trait used in the streaming examples below.
futures = "0.3"
```
§Basic Usage
```rust
use openai_ergonomic::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads the API key from the OPENAI_API_KEY environment variable.
    let client = Client::from_env()?;

    let response = client
        .chat_completions()
        .model("gpt-4")
        .message("user", "Hello, world!")
        .send()
        .await?;

    println!("{}", response.choices[0].message.content);
    Ok(())
}
```
§Streaming Example
```rust
use openai_ergonomic::Client;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads the API key from the OPENAI_API_KEY environment variable.
    let client = Client::from_env()?;

    let mut stream = client
        .chat_completions()
        .model("gpt-4")
        .message("user", "Tell me a story")
        .stream()
        .await?;

    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        if let Some(content) = &chunk.choices[0].delta.content {
            print!("{}", content);
        }
    }
    Ok(())
}
```
§Examples
The examples/ directory contains comprehensive examples for all major OpenAI API features:
- Basic Usage: Simple chat completions and responses
- Streaming: Real-time response streaming
- Function Calling: Tool integration and function calling (see the sketch after the run command below)
- Vision: Image understanding and analysis
- Audio: Speech-to-text and text-to-speech
- Assistants: Assistant API with file handling
- Embeddings: Vector embeddings generation
- Images: Image generation and editing
Run an example:

```sh
cargo run --example quickstart
```
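The crate root re-exports a `tool_function` helper alongside the `Tool` and `ToolCall` types (see the re-exports at the bottom of this page). As a rough, speculative sketch of the function-calling flow, the snippet below assumes a `tool_function(name, description, parameters)` signature, a `.tool(...)` builder method, a `tool_calls` field on the response message, and a `serde_json` dependency for the parameter schema; the `examples/` directory has the authoritative version:

```rust
use openai_ergonomic::{tool_function, Client};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?;

    // Assumed signature: tool_function(name, description, parameter_schema).
    let weather_tool = tool_function(
        "get_weather",
        "Look up the current weather for a city",
        json!({
            "type": "object",
            "properties": { "city": { "type": "string" } },
            "required": ["city"]
        }),
    );

    // `.tool(...)` is assumed; the real builder method may differ.
    let response = client
        .chat_completions()
        .model("gpt-4")
        .message("user", "What's the weather in Paris?")
        .tool(weather_tool)
        .send()
        .await?;

    // Assumed field: the response message carries any requested tool calls.
    if let Some(tool_calls) = &response.choices[0].message.tool_calls {
        for call in tool_calls {
            println!("model requested: {}", call.function.name);
        }
    }
    Ok(())
}
```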
§Contributing
We welcome contributions! Please see our Contributing Guide for details.
§License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
§Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
§openai-ergonomic
An ergonomic Rust wrapper for the OpenAI API, providing type-safe builder patterns and async/await support for all OpenAI endpoints.
§Features
- Type-safe builders - Use builder patterns with compile-time validation
- Async/await support - Built on tokio and reqwest for modern async Rust
- Streaming responses - First-class support for real-time streaming
- Comprehensive coverage - Support for all OpenAI API endpoints
- Error handling - Structured error types for robust applications
- Testing support - Mock-friendly design for unit testing
§Quick Start
```rust
use openai_ergonomic::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a client from environment variables
    let client = Client::from_env()?;

    // Simple chat completion
    let response = client
        .chat_simple("Hello, how are you?")
        .await?;

    println!("{}", response);
    Ok(())
}
```
§Streaming Example
```rust
use openai_ergonomic::Client;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?;

    // Stream chat completions
    let mut stream = client
        .chat()
        .user("Tell me a story")
        .stream()
        .await?;

    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content());
    }
    Ok(())
}
```
§Error Handling
```rust
use openai_ergonomic::{Client, Error};

#[tokio::main]
async fn main() {
    let client = Client::from_env().expect("API key required");

    match client.chat_simple("Hello").await {
        Ok(response) => println!("{}", response),
        Err(Error::RateLimit { .. }) => {
            println!("Rate limited, please retry later");
        }
        Err(e) => eprintln!("Error: {}", e),
    }
}
```
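Since `Error` is a structured enum, callers can layer their own retry policies on top of it. The following is a minimal sketch, assuming `chat_simple` returns the crate's `Result<String, Error>` and reusing the `Error::RateLimit` variant from the example above; it is not an API provided by the crate:

```rust
use std::time::Duration;
use openai_ergonomic::{Client, Error};

/// Illustrative sketch (not a crate API): retry a simple chat call with
/// exponential backoff when the server reports rate limiting.
/// Assumes `chat_simple` returns `Result<String, Error>`.
async fn chat_with_retry(
    client: &Client,
    prompt: &str,
    max_attempts: u32,
) -> Result<String, Error> {
    let mut attempt = 0;
    loop {
        match client.chat_simple(prompt).await {
            Err(Error::RateLimit { .. }) if attempt + 1 < max_attempts => {
                attempt += 1;
                // Back off 2, 4, 8, ... seconds between attempts.
                tokio::time::sleep(Duration::from_secs(2u64.pow(attempt))).await;
            }
            other => return other,
        }
    }
}
```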
§Custom Configuration
```rust
use openai_ergonomic::{Client, Config};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::builder()
        .api_key("your-api-key")
        .organization_id("org-123")
        .timeout(Duration::from_secs(30))
        .max_retries(5)
        .build();

    let client = Client::new(config)?;
    Ok(())
}
```
§Testing with Mocks
```rust
#[cfg(test)]
mod tests {
    use openai_ergonomic::test_utils::MockOpenAIServer;

    #[tokio::test]
    async fn test_chat_completion() {
        let mock = MockOpenAIServer::new();
        mock.mock_chat_completion("Hello!", "Hi there!");

        let client = mock.client();
        let response = client.chat_simple("Hello!").await.unwrap();
        assert_eq!(response, "Hi there!");
    }
}
```
§Re-exports
pub use client::Client;
pub use config::Config;
pub use config::ConfigBuilder;
pub use errors::Error;
pub use errors::Result;
pub use builders::chat::system_user;
pub use builders::chat::user_message;
pub use builders::Builder;
pub use builders::ChatCompletionBuilder;
pub use builders::Sendable;
pub use responses::chat::ChatChoice;
pub use responses::chat::ChatCompletionResponse;
pub use responses::chat::ChatMessage as ResponseChatMessage;
pub use responses::chat::FunctionCall;
pub use responses::chat::ToolCall;
pub use responses::tool_function;
pub use responses::tool_web_search;
pub use responses::Response;
pub use responses::ResponseBuilder;
pub use responses::Tool;
pub use responses::ToolChoice;
pub use responses::ToolFunction;
pub use responses::Usage;
pub use bon;
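The `system_user` and `user_message` re-exports hint at shorthand helpers for building message lists. A speculative usage sketch, assuming `system_user(system, user)` returns messages that a `.messages(...)` builder method accepts (neither signature is documented above):

```rust
use openai_ergonomic::{system_user, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?;

    // Assumed: pairs a system prompt with a user prompt.
    let messages = system_user("You are a terse assistant.", "Summarize Rust in one line.");

    let response = client
        .chat_completions()
        .model("gpt-4")
        .messages(messages) // assumed builder method
        .send()
        .await?;

    println!("{}", response.choices[0].message.content);
    Ok(())
}
```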