pub struct OpenAIChatModel { /* private fields */ }
OpenAI chat completion implementation of the language model interface.
This struct provides support for OpenAI’s chat completion API, enabling text generation, streaming, tool calling, and multi-modal capabilities. It handles conversion between the unified SDK format and OpenAI’s API specifications, including model-specific behaviors for reasoning models, search preview models, and flex processing.
§Supported Features
- Text generation and streaming responses
- Function calling and tool use
- Multi-modal inputs (images, audio)
- JSON schema-based structured outputs
- Vision capabilities for supported models
- Reasoning effort customization for o1/o3 models
- Logit bias and probability tracking
- Token usage tracking including cached and reasoning tokens
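The cached- and reasoning-token tracking mentioned above extends the usual prompt/completion accounting. The sketch below is illustrative only; the field and type names are assumptions, not the crate's actual usage type:

```rust
// Hypothetical token-usage accounting with cached and reasoning tokens.
// On OpenAI's API, reasoning tokens are part of the completion count and
// cached tokens are a subset of the prompt count.
#[derive(Debug, Default, Clone, Copy, PartialEq)]
struct TokenUsage {
    prompt_tokens: u32,
    completion_tokens: u32,
    /// Prompt tokens served from the provider's prompt cache.
    cached_tokens: u32,
    /// Hidden chain-of-thought tokens billed by reasoning models.
    reasoning_tokens: u32,
}

impl TokenUsage {
    /// Total billed tokens (reasoning tokens are already included in
    /// `completion_tokens`, cached tokens in `prompt_tokens`).
    fn total(&self) -> u32 {
        self.prompt_tokens + self.completion_tokens
    }

    /// Prompt tokens that were actually processed rather than cache hits.
    fn uncached_prompt(&self) -> u32 {
        self.prompt_tokens.saturating_sub(self.cached_tokens)
    }
}

fn main() {
    let usage = TokenUsage {
        prompt_tokens: 1200,
        completion_tokens: 300,
        cached_tokens: 1024,
        reasoning_tokens: 128,
    };
    assert_eq!(usage.total(), 1500);
    assert_eq!(usage.uncached_prompt(), 176);
    println!("{usage:?}");
}
```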
§OpenAI-Specific Behaviors
- System messages are converted to developer messages for reasoning models (o1, o3)
- Temperature configuration is ignored for search preview models with a warning
- Reasoning models use max_completion_tokens instead of max_tokens
- Flex processing service tier validation for supported models
- Streaming includes usage information in the final chunk
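The model-specific behaviors above amount to a dispatch on the model ID. This standalone sketch shows the idea; it is not the crate's internals, and the helper names are hypothetical:

```rust
// Illustrative sketch of the request mapping described above: reasoning
// models (o1/o3) take `max_completion_tokens` and a `developer` role in
// place of `system`. Helper names are invented for this example.
fn is_reasoning_model(model_id: &str) -> bool {
    model_id.starts_with("o1") || model_id.starts_with("o3")
}

/// Pick the JSON field name the token limit should be sent under.
fn token_limit_field(model_id: &str) -> &'static str {
    if is_reasoning_model(model_id) {
        "max_completion_tokens"
    } else {
        "max_tokens"
    }
}

/// Pick the role a system message is converted to before sending.
fn system_role(model_id: &str) -> &'static str {
    if is_reasoning_model(model_id) {
        "developer"
    } else {
        "system"
    }
}

fn main() {
    assert_eq!(token_limit_field("o1-mini"), "max_completion_tokens");
    assert_eq!(token_limit_field("gpt-4o"), "max_tokens");
    assert_eq!(system_role("o3-mini"), "developer");
    assert_eq!(system_role("gpt-3.5-turbo"), "system");
    println!("ok");
}
```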
Implementations§
impl OpenAIChatModel
pub fn new(model_id: impl Into<String>, config: impl Into<OpenAIConfig>) -> Self
Creates a new OpenAI chat model instance.
Initializes a language model that communicates with OpenAI’s chat completion API. The model automatically detects capabilities and applies model-specific configurations based on the provided model ID (e.g., reasoning-parameter handling for o1/o3, special handling for vision models).
§Arguments
- model_id - The OpenAI model identifier (e.g., “gpt-4o”, “gpt-4-turbo”, “gpt-4”, “gpt-3.5-turbo”, “o1”, “o1-mini”, “o3-mini”)
- config - OpenAI configuration containing the API endpoint, authentication headers, and optional customizations
§OpenAI Model Categories
- GPT-4o Series: Latest high-capability models with vision and reasoning
- Reasoning Models (o1/o3): Enhanced reasoning with special parameter handling
- GPT-4 Series: Advanced reasoning and code generation
- GPT-3.5 Turbo: Balanced performance and cost-effectiveness
- Search Models: Integration with web search capabilities
§Example
use ai_sdk_openai::{OpenAIConfig, OpenAIChatModel};
let api_key = std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY must be set");
let config = OpenAIConfig::from_api_key(api_key);
let model = OpenAIChatModel::new("gpt-4o", config);
Trait Implementations§
impl LanguageModel for OpenAIChatModel
fn supported_urls<'life0, 'async_trait>(
    &'life0 self,
) -> Pin<Box<dyn Future<Output = HashMap<String, Vec<String>>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
Returns URLs supported by this model for various operations.
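The return type `HashMap<String, Vec<String>>` suggests a map from a media-type key to the URL patterns the model accepts natively. The concrete keys and patterns below are assumptions for illustration, not values the crate guarantees:

```rust
use std::collections::HashMap;

// Hypothetical shape of a supported-URLs map: media type -> URL patterns.
fn example_supported_urls() -> HashMap<String, Vec<String>> {
    let mut map = HashMap::new();
    // Assumed entry: image inputs accepted from any http(s) URL.
    map.insert("image/*".to_string(), vec![r"^https?://.*$".to_string()]);
    map
}

fn main() {
    let urls = example_supported_urls();
    assert!(urls.contains_key("image/*"));
    println!("{} media type(s) supported", urls.len());
}
```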
fn do_generate<'life0, 'async_trait>(
    &'life0 self,
    options: CallOptions,
) -> Pin<Box<dyn Future<Output = Result<GenerateResponse>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
Executes a non-streaming generation request.
fn do_stream<'life0, 'async_trait>(
    &'life0 self,
    options: CallOptions,
) -> Pin<Box<dyn Future<Output = Result<StreamResponse>> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
Executes a streaming generation request.
fn specification_version(&self) -> &str
Returns the specification version implemented by this model.
fn generate<P>(&self, prompt: P) -> GenerateBuilder<'_, Self>
Creates a builder for a non-streaming generation request.
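The `'life0`/`'async_trait` lifetimes and `Pin<Box<dyn Future ...>>` return types above are the shape produced by the `#[async_trait]` macro, which desugars async trait methods into boxed futures. A minimal hand-written illustration of that pattern (the trait and types here are invented for the example, not part of this crate):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A trait whose async method is hand-desugared into the same shape
// rustdoc shows for `LanguageModel`: a pinned, boxed, Send future
// borrowing `self`.
trait Greeter {
    fn greet<'a>(&'a self) -> Pin<Box<dyn Future<Output = String> + Send + 'a>>;
}

struct English;

impl Greeter for English {
    fn greet<'a>(&'a self) -> Pin<Box<dyn Future<Output = String> + Send + 'a>> {
        Box::pin(async { "hello".to_string() })
    }
}

// Minimal no-op waker so we can poll an immediately-ready future
// without pulling in an async runtime.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let greeter = English;
    let mut fut = greeter.greet();
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(s) => println!("{s}"), // prints "hello"
        Poll::Pending => unreachable!("the future is immediately ready"),
    }
}
```

In real use the boxed future would simply be `.await`ed inside an async runtime; the manual polling here exists only to keep the example dependency-free.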
Auto Trait Implementations§
impl Freeze for OpenAIChatModel
impl !RefUnwindSafe for OpenAIChatModel
impl Send for OpenAIChatModel
impl Sync for OpenAIChatModel
impl Unpin for OpenAIChatModel
impl !UnwindSafe for OpenAIChatModel
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.