§OpenAI Provider Implementation
Production-ready OpenAI provider implementing the AI SDK provider specification. This crate delivers complete access to OpenAI’s model portfolio including GPT language models, DALL-E image generation, Whisper transcription, embeddings, and text-to-speech capabilities.
§Supported Models
- GPT Language Models - GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, o1 (reasoning models)
- Text Embeddings - text-embedding-3-small, text-embedding-3-large, ada-002
- Image Generation - DALL-E 2, DALL-E 3 with quality and style controls
- Speech Synthesis - TTS-1 (standard), TTS-1-HD (high definition)
- Speech Transcription - Whisper-1 with timestamps and translations
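To make the model list concrete, here is a minimal embeddings sketch using the provider factory covered later in this page. The `embed` method name and its return shape are assumptions for illustration, not taken from this crate's API; check the embedding model trait for the actual signature.

```rust
use ai_sdk_openai::OpenAIProvider;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Factory usage as shown in "Using the Provider Factory" below.
    let provider = OpenAIProvider::new(std::env::var("OPENAI_API_KEY")?);
    let embeddings = provider.embedding_model("text-embedding-3-small");

    // Hypothetical call name; the real trait method may differ.
    let vector = embeddings.embed("The quick brown fox").await?;
    println!("dimensions: {}", vector.len());
    Ok(())
}
```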
§Features
- Full Specification Compliance: Implements all v3 provider traits
- Streaming Support: Server-sent events for real-time token streaming
- Tool Calling: Native function calling with parallel tool execution
- Vision Support: Multimodal inputs with image URLs and base64 data
- Structured Output: JSON mode and response format constraints
- Error Handling: Comprehensive error types with retry guidance
- Request Inspection: Access to raw request/response bodies for debugging
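As a hedged sketch of the streaming feature listed above: the `stream` method name, the chunk type, and its `delta` accessor are assumptions for illustration only, and the real crate may expose server-sent events differently.

```rust
use ai_sdk_openai::OpenAIChatModel;
use futures::StreamExt; // assumed: the token stream implements futures::Stream

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = OpenAIChatModel::new("gpt-4", std::env::var("OPENAI_API_KEY")?);

    // Hypothetical streaming entry point; the actual method may differ.
    let mut stream = model
        .generate("Write a haiku about Rust")
        .stream()
        .await?;

    // Print tokens as they arrive over the SSE connection.
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.delta());
    }
    Ok(())
}
```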
§Quick Start
§Basic Text Generation
```rust
use ai_sdk_openai::OpenAIChatModel;
use ai_sdk_provider::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let model = OpenAIChatModel::new("gpt-4", api_key);

    let response = model
        .generate("Explain photosynthesis in simple terms")
        .temperature(0.7)
        .max_tokens(200)
        .await?;

    println!("{}", response.text());
    Ok(())
}
```
§Using the Provider Factory
```rust
use ai_sdk_openai::OpenAIProvider;
use ai_sdk_provider::ProviderV3;

let provider = OpenAIProvider::new("your-api-key");
let chat_model = provider.language_model("gpt-4");
let embedding_model = provider.embedding_model("text-embedding-3-small");
```
§Configuration
§API Key
Obtain your OpenAI API key from: https://platform.openai.com/api-keys
Set it via environment variable:

```sh
export OPENAI_API_KEY=sk-...
```

Or pass it directly to model constructors:
```rust
let model = OpenAIChatModel::new("gpt-4", "sk-...");
```
§Custom Base URL
Use OpenAI-compatible endpoints:
```rust
use ai_sdk_openai::{OpenAIChatModel, OpenAIConfig};

let config = OpenAIConfig::new("your-api-key")
    .base_url("https://your-proxy.com/v1");
let model = OpenAIChatModel::from_config("gpt-4", config);
```
Modules§
- model_detection - Model detection utilities for OpenAI models.
- responses - OpenAI Responses API implementation.
Structs§
- OpenAIChatModel - OpenAI chat completion implementation of the language model interface.
- OpenAIConfig - Configuration for OpenAI API endpoints and authentication.
- OpenAIEmbeddingModel - OpenAI implementation of the embedding model interface.
- OpenAIImageModel - OpenAI implementation of the image model interface.
- OpenAIProvider - OpenAI provider for creating and managing all OpenAI API model instances.
- OpenAISpeechModel - OpenAI implementation of the speech model interface.
- OpenAITranscriptionModel - OpenAI implementation of the transcription model interface.
- OpenAIUrlOptions - Parameters for dynamic URL construction in OpenAI API requests.
Enums§
- MultimodalError - Error types for multimodal conversion.
- OpenAIContentPart - OpenAI content part (text, image, or audio).
- OpenAIError - Errors that can occur when using the OpenAI provider.
Functions§
- convert_audio_part - Converts an audio file to OpenAI format.
- convert_image_part - Converts an image file to OpenAI format.