# ferrous-llm-openai
OpenAI provider implementation for the ferrous-llm ecosystem. This crate provides a complete implementation of OpenAI's API, including chat completions, text completions, embeddings, streaming, and tool calling capabilities.
## Features

- Chat Completions - Full support for OpenAI's chat completions API
- Text Completions - Legacy completions API support
- Streaming - Real-time streaming responses for chat and completions
- Embeddings - Text embedding generation using OpenAI's embedding models
- Tool Calling - Function calling and tool use capabilities
- Flexible Configuration - Environment-based and programmatic configuration
- Error Handling - Comprehensive error types with retry logic
- Type Safety - Full Rust type safety with serde serialization
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
ferrous-llm-openai = "0.2.0"
```

Or use the main `ferrous-llm` crate with the OpenAI feature:

```toml
[dependencies]
ferrous-llm = { version = "0.2.0", features = ["openai"] }
```
## Quick Start

### Basic Chat

```rust
use ferrous_llm_openai::{OpenAIConfig, OpenAIProvider};
use ferrous_llm::ChatProvider; // trait providing `chat`

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the API key and optional settings from the environment.
    let config = OpenAIConfig::from_env()?;
    let provider = OpenAIProvider::new(config)?;

    // Build a request (illustrative; see the examples directory for the
    // exact ChatRequest fields).
    let request = ChatRequest::builder()
        .message(Message::user("Hello, how are you?"))
        .build();

    let response = provider.chat(request).await?;
    println!("{}", response);
    Ok(())
}
```
### Streaming Chat

```rust
use ferrous_llm_openai::{OpenAIConfig, OpenAIProvider};
use ferrous_llm::StreamingProvider; // trait providing the streaming call
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = OpenAIConfig::from_env()?;
    let provider = OpenAIProvider::new(config)?;

    // Build a chat request as in the basic example.
    let request = ChatRequest::builder()
        .message(Message::user("Tell me a story"))
        .build();

    // The streaming call yields response chunks as they arrive.
    let mut stream = provider.chat_stream(request).await?;
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?);
    }
    Ok(())
}
```
## Configuration

### Environment Variables

Set these environment variables for automatic configuration:

```bash
export OPENAI_API_KEY=sk-your-key  # Required
export OPENAI_MODEL=gpt-4          # Optional, defaults to gpt-3.5-turbo
export OPENAI_BASE_URL=...         # Optional
export OPENAI_ORGANIZATION=...     # Optional
export OPENAI_PROJECT=...          # Optional
```
### Programmatic Configuration

```rust
use ferrous_llm_openai::OpenAIConfig;
use std::time::Duration;

// Simple configuration (argument values are placeholders)
let config = OpenAIConfig::new("sk-your-key", "gpt-4");

// Using the builder pattern
let config = OpenAIConfig::builder()
    .api_key("sk-your-key")
    .model("gpt-4")
    .organization("org-...")
    .timeout(Duration::from_secs(30))
    .max_retries(3)
    .header("X-Custom-Header", "value")
    .build();

// From environment with validation
let config = OpenAIConfig::from_env()?;
```
### Custom Base URL

For OpenAI-compatible APIs (such as Azure OpenAI):

```rust
let config = OpenAIConfig::builder()
    .api_key("your-api-key")
    .model("gpt-4")
    .base_url("https://your-resource.openai.azure.com/")? // parsed as a URL, hence the `?`
    .build();
```
## Supported Models

### Chat Models

- `gpt-4` - Most capable model
- `gpt-4-turbo` - Latest GPT-4 with improved performance
- `gpt-3.5-turbo` - Fast and efficient for most tasks
- `gpt-3.5-turbo-16k` - Extended context length

### Embedding Models

- `text-embedding-ada-002` - Most capable embedding model
- `text-embedding-3-small` - Smaller, faster embedding model
- `text-embedding-3-large` - Larger, more capable embedding model

### Image Models

- `dall-e-3` - Latest image generation model
- `dall-e-2` - Previous generation image model

### Audio Models

- `whisper-1` - Speech-to-text transcription
- `tts-1` - Text-to-speech synthesis
- `tts-1-hd` - High-definition text-to-speech
## Advanced Usage

### Tool Calling

```rust
use ferrous_llm_openai::{OpenAIConfig, OpenAIProvider};
use ferrous_llm::ToolProvider; // trait providing `chat_with_tools`

let provider = OpenAIProvider::new(OpenAIConfig::from_env()?)?;

// Define the tools the model may call; the tool type comes from the
// core crate (definitions elided here).
let tools = vec![/* tool definitions */];

// `request` is built as in the chat examples above.
let response = provider.chat_with_tools(request, &tools).await?;
```
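On the wire, each tool is described to OpenAI as a JSON function schema. A representative definition (the `get_weather` tool here is a made-up example, following OpenAI's standard chat-completions tool format rather than anything specific to this crate):

```json
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
      "type": "object",
      "properties": {
        "city": { "type": "string", "description": "City name" }
      },
      "required": ["city"]
    }
  }
}
```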
### Embeddings

```rust
use ferrous_llm_openai::{OpenAIConfig, OpenAIProvider};
use ferrous_llm::EmbeddingProvider; // trait providing `embed`

let provider = OpenAIProvider::new(OpenAIConfig::from_env()?)?;

let texts = vec![
    "The quick brown fox".to_string(),
    "jumps over the lazy dog".to_string(),
];
let embeddings = provider.embed(&texts).await?;

for embedding in embeddings {
    println!("embedding with {} dimensions", embedding.len());
}
```
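Once you have embedding vectors, they are usually compared with cosine similarity. A minimal, crate-independent helper (the `cosine_similarity` function below is an illustration, not part of this crate's API):

```rust
/// Cosine similarity between two equal-length embedding vectors:
/// dot(a, b) / (|a| * |b|). Returns a value in [-1.0, 1.0].
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Identical directions score 1.0; orthogonal vectors score 0.0.
    println!("{}", cosine_similarity(&[1.0, 2.0], &[1.0, 2.0]));
    println!("{}", cosine_similarity(&[1.0, 0.0], &[0.0, 1.0]));
}
```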
### Image Generation

```rust
use ferrous_llm_openai::{OpenAIConfig, OpenAIProvider, ImageRequest};

let provider = OpenAIProvider::new(OpenAIConfig::from_env()?)?;

// Field names shown are illustrative; see the ImageRequest docs for
// the full set (model, size, quality, ...).
let request = ImageRequest {
    prompt: "A watercolor painting of a lighthouse at dusk".to_string(),
    ..Default::default()
};

let response = provider.generate_image(request).await?;

for image in response.images {
    println!("{:?}", image);
}
```
## Error Handling

The crate provides comprehensive error handling:

```rust
use ferrous_llm_openai::{OpenAIConfig, OpenAIProvider};
use ferrous_llm::ErrorKind;

match provider.chat(request).await {
    Ok(response) => println!("{}", response),
    // Match on the error kind to decide how to recover; the available
    // variants (rate limiting, authentication, ...) are documented on
    // ErrorKind.
    Err(err) => eprintln!("chat failed: {err}"),
}
```
## Testing

Run the test suite:

```bash
# Unit tests
cargo test

# Integration tests (requires API key; test target names follow the
# workspace layout)
OPENAI_API_KEY=sk-your-key cargo test --test integration

# End-to-end tests
OPENAI_API_KEY=sk-your-key cargo test --test e2e
```
## Examples

See the `examples` directory for complete working examples:

- `openai_chat.rs` - Basic chat example
- `openai_chat_streaming.rs` - Streaming chat example

Run examples:

```bash
cargo run --example openai_chat
cargo run --example openai_chat_streaming
```
## Rate Limiting

The provider includes automatic retry logic with exponential backoff for rate-limited requests. Configure retry behavior:

```rust
let config = OpenAIConfig::builder()
    .api_key("sk-your-key")
    .max_retries(5)                    // Maximum retry attempts
    .timeout(Duration::from_secs(60))  // Request timeout
    .build();
```
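The backoff schedule itself can be sketched as follows. This is a standalone illustration of capped exponential backoff (delay doubles per attempt up to a maximum), not this crate's internal implementation:

```rust
use std::time::Duration;

/// Capped exponential backoff: base * 2^attempt, clamped to `max`.
fn backoff_delay(base: Duration, attempt: u32, max: Duration) -> Duration {
    base.checked_mul(2u32.saturating_pow(attempt))
        .unwrap_or(max) // overflow means we are past the cap anyway
        .min(max)
}

fn main() {
    // With base = 500 ms and max = 8 s, delays grow 500 ms, 1 s, 2 s, 4 s, 8 s, 8 s, ...
    for attempt in 0..6 {
        println!(
            "attempt {attempt}: wait {:?}",
            backoff_delay(Duration::from_millis(500), attempt, Duration::from_secs(8))
        );
    }
}
```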
## Compatibility

This crate is compatible with:

- OpenAI API v1
- Azure OpenAI Service
- OpenAI-compatible APIs (with custom base URL)
## Contributing
This crate is part of the ferrous-llm workspace. See the main repository for contribution guidelines.
## License
Licensed under the Apache License 2.0. See LICENSE for details.