openai-ergonomic

Ergonomic Rust wrapper for the OpenAI API, providing type-safe builder patterns and async/await support.

Features

  • Type-safe - compile-time-checked builder patterns powered by bon
  • Async/await - built on tokio and reqwest for modern async Rust
  • Streaming - first-class support for streaming responses
  • Comprehensive - covers all OpenAI API endpoints
  • Azure OpenAI - seamless support for Azure OpenAI deployments
  • Well-tested - extensive test coverage with mock support
  • Well-documented - rich documentation with examples

Status

Under construction: the crate is in active development and not yet ready for production use.

Quick Start

Add openai-ergonomic to your Cargo.toml:

[dependencies]
openai-ergonomic = "0.5"
tokio = { version = "1.0", features = ["full"] }
futures = "0.3"  # needed for the streaming example below

Basic Usage

use openai_ergonomic::{Client, Config};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build client from environment variables
    let client = Client::from_env()?.build();

    let response = client
        .chat_completions()
        .model("gpt-4")
        .message("user", "Hello, world!")
        .send()
        .await?;

    println!("{}", response.choices[0].message.content);
    Ok(())
}
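
For multi-turn prompts, the same builder pattern should apply. The sketch below assumes that repeated .message() calls append turns to the conversation (an assumption about the API, not something confirmed by the crate's documentation):

use openai_ergonomic::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::from_env()?.build();

    let response = client
        .chat_completions()
        .model("gpt-4")
        // Assumed behavior: each .message() call appends one turn.
        .message("system", "You are a concise assistant.")
        .message("user", "Summarize Rust's ownership model in one sentence.")
        .send()
        .await?;

    println!("{}", response.choices[0].message.content);
    Ok(())
}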

Streaming Example

use openai_ergonomic::Client;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build client from environment variables
    let client = Client::from_env()?.build();

    let builder = client
        .chat()
        .user("Tell me a story");

    let mut stream = client.send_chat_stream(builder).await?;

    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        if let Some(content) = chunk.content() {
            print!("{}", content);
        }
    }
    Ok(())
}
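
The content() accessor returns an Option because, in OpenAI's streaming protocol, some chunks carry only metadata (for example the initial role marker or the final finish reason) and no text delta; the loop above simply skips those.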

Custom HTTP Client with Retry Logic

You can provide your own reqwest::Client with custom retry, timeout, and middleware configuration. Note: When using a custom HTTP client, you must configure the timeout on the reqwest::Client itself:

use openai_ergonomic::{Client, Config};
use reqwest_middleware::ClientBuilder;
use reqwest_retry::{RetryTransientMiddleware, policies::ExponentialBackoff};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a retry policy with exponential backoff
    let retry_policy = ExponentialBackoff::builder()
        .build_with_max_retries(3);

    // Build a reqwest client with custom timeout
    let reqwest_client = reqwest::Client::builder()
        .timeout(Duration::from_secs(60))  // Configure timeout here
        .build()?;

    // Add retry middleware
    let http_client = ClientBuilder::new(reqwest_client)
        .with(RetryTransientMiddleware::new_with_policy(retry_policy))
        .build();

    // Create OpenAI client with custom HTTP client
    let config = Config::builder()
        .api_key("your-api-key")
        .http_client(http_client)
        .build();

    let client = Client::new(config)?.build();

    // Use the client normally - retries and timeout are handled automatically
    let response = client.chat_simple("Hello!").await?;
    println!("{}", response);
    Ok(())
}
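
With this configuration, RetryTransientMiddleware re-issues requests that fail transiently (by default, connection errors and HTTP 5xx responses), while the 60-second reqwest timeout applies to each individual attempt rather than to the whole retry sequence. Note that beyond openai-ergonomic and tokio, this example also pulls in the reqwest, reqwest-middleware, and reqwest-retry crates (the versions below are indicative, not pinned by this crate):

[dependencies]
reqwest = "0.12"
reqwest-middleware = "0.4"
reqwest-retry = "0.7"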

Azure OpenAI Support

The crate seamlessly supports Azure OpenAI deployments. Azure-specific configuration can be provided through environment variables or programmatically.

Using Environment Variables

export AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_DEPLOYMENT="gpt-4"
export AZURE_OPENAI_API_VERSION="2024-02-01"  # Optional, defaults to 2024-02-01

Then, in your code:

use openai_ergonomic::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build client from Azure environment variables
    let client = Client::from_env()?.build();

    // Use exactly the same API as standard OpenAI
    let response = client.chat_simple("Hello from Azure!").await?;
    println!("{}", response);
    Ok(())
}
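
Since the same Client::from_env() call serves both providers, the client presumably selects Azure mode when the AZURE_OPENAI_* variables are present (an assumption based on the examples above, not on documented behavior).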

Manual Configuration

use openai_ergonomic::{Client, Config};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::builder()
        .api_key("your-azure-api-key")
        .api_base("https://my-resource.openai.azure.com")
        .azure_deployment("gpt-4")
        .azure_api_version("2024-02-01")
        .build();

    let client = Client::new(config)?.build();

    let response = client.chat_simple("Hello!").await?;
    println!("{}", response);
    Ok(())
}

Note: The library automatically handles the differences between Azure OpenAI and standard OpenAI (authentication, URL paths, API versioning). You use the same API regardless of the provider.
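
For reference, here is a sketch of the endpoint shapes the two providers conventionally expect. This illustrates general Azure OpenAI conventions, not this crate's internals:

fn chat_completions_url(azure: Option<(&str, &str, &str)>) -> String {
    match azure {
        // Azure OpenAI: per-deployment path plus a required api-version query
        // parameter; requests authenticate with an `api-key` header.
        Some((endpoint, deployment, api_version)) => format!(
            "{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"
        ),
        // Standard OpenAI: fixed /v1 path; requests authenticate with an
        // `Authorization: Bearer <key>` header.
        None => "https://api.openai.com/v1/chat/completions".to_string(),
    }
}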

Examples

The examples/ directory contains comprehensive examples for all major OpenAI API features, grouped into three categories: Core Examples, Media & AI Capabilities, and Advanced APIs.

Run any example with:

# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key-here"

# Run an example
cargo run --example quickstart
cargo run --example responses_streaming
cargo run --example vision_chat

Each example includes:

  • Comprehensive documentation and inline comments
  • Error handling best practices
  • Real-world use cases and patterns
  • Progressive complexity from basic to advanced usage

Contributing

We welcome contributions! Please see our Contributing Guide for details.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.