pub struct ChatCompletion { /* private fields */ }
OpenAI Chat Completions API client
This structure manages interactions with the OpenAI Chat Completions API and Azure OpenAI API. It handles authentication, request parameter configuration, and API calls.
§Providers
The client supports two providers:
- OpenAI: Standard OpenAI API (default)
- Azure: Azure OpenAI Service
§Examples
§OpenAI
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;
let mut chat = ChatCompletion::new();
let messages = vec![Message::from_string(Role::User, "Hello!")];
let response = chat
.model_id("gpt-4o-mini")
.messages(messages)
.chat()
.await?;
§Azure OpenAI
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;
// From environment variables
let mut chat = ChatCompletion::azure()?;
let messages = vec![Message::from_string(Role::User, "Hello!")];
let response = chat.messages(messages).chat().await?;
Implementations§
impl ChatCompletion
pub fn new() -> Self
Creates a new ChatCompletion instance for OpenAI API
Loads the API key from the OPENAI_API_KEY environment variable.
If a .env file exists, it will also be loaded.
§Panics
Panics if the OPENAI_API_KEY environment variable is not set.
§Returns
A new ChatCompletion instance configured for OpenAI API
§Example
use openai_tools::chat::request::ChatCompletion;
let mut chat = ChatCompletion::new();
pub fn with_model(model: ChatModel) -> Self
Creates a new ChatCompletion instance with a specified model
This is the recommended constructor as it enables parameter validation
at setter time. When you set parameters like temperature(), the model’s
parameter support is checked and warnings are logged for unsupported values.
§Arguments
model - The model to use for chat completion
§Panics
Panics if the OPENAI_API_KEY environment variable is not set.
§Returns
A new ChatCompletion instance with the specified model
§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
// Recommended: specify model at creation time
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
// For reasoning models, unsupported parameters are validated at setter time
let mut reasoning_chat = ChatCompletion::with_model(ChatModel::O3Mini);
reasoning_chat.temperature(0.5); // Warning logged, value ignored
pub fn with_auth(auth: AuthProvider) -> Self
Creates a new ChatCompletion instance with a custom authentication provider
Use this to explicitly configure OpenAI or Azure authentication.
§Arguments
auth - The authentication provider
§Returns
A new ChatCompletion instance with the specified auth provider
§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::auth::{AuthProvider, AzureAuth};
// Explicit Azure configuration with complete base URL
let auth = AuthProvider::Azure(
AzureAuth::new(
"api-key",
"https://my-resource.openai.azure.com/openai/deployments/gpt-4o?api-version=2024-08-01-preview"
)
);
let mut chat = ChatCompletion::with_auth(auth);
pub fn azure() -> Result<Self>
Creates a new ChatCompletion instance for Azure OpenAI API
Loads configuration from Azure-specific environment variables.
§Returns
Result<ChatCompletion> - Configured for Azure or error if env vars missing
§Environment Variables
| Variable | Required | Description |
|---|---|---|
| AZURE_OPENAI_API_KEY | Yes | Azure API key |
| AZURE_OPENAI_BASE_URL | Yes | Complete endpoint URL including deployment, API path, and api-version |
§Example
use openai_tools::chat::request::ChatCompletion;
// With environment variables:
// AZURE_OPENAI_API_KEY=xxx
// AZURE_OPENAI_BASE_URL=https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview
let mut chat = ChatCompletion::azure()?;
pub fn detect_provider() -> Result<Self>
Creates a new ChatCompletion instance by auto-detecting the provider
Tries Azure first (if AZURE_OPENAI_API_KEY is set), then falls back to OpenAI.
§Returns
Result<ChatCompletion> - Auto-configured client or error
§Example
use openai_tools::chat::request::ChatCompletion;
// Uses Azure if AZURE_OPENAI_API_KEY is set, otherwise OpenAI
let mut chat = ChatCompletion::detect_provider()?;
pub fn with_url<S: Into<String>>(base_url: S, api_key: S) -> Self
Creates a new ChatCompletion instance with URL-based provider detection
Analyzes the URL pattern to determine the provider:
- URLs containing .openai.azure.com → Azure
- All other URLs → OpenAI-compatible
§Arguments
base_url - The complete base URL for API requests
api_key - The API key or token
§Returns
ChatCompletion - Configured client
§Example
use openai_tools::chat::request::ChatCompletion;
// OpenAI-compatible API (e.g., local Ollama)
let chat = ChatCompletion::with_url(
"http://localhost:11434/v1",
"ollama",
);
// Azure OpenAI (complete base URL)
let azure_chat = ChatCompletion::with_url(
"https://my-resource.openai.azure.com/openai/deployments/gpt-4o?api-version=2024-08-01-preview",
"azure-key",
);
pub fn from_url<S: Into<String>>(base_url: S) -> Result<Self>
Creates a new ChatCompletion instance from URL using environment variables
Analyzes the URL pattern to determine the provider, then loads credentials from the appropriate environment variables.
§Arguments
base_url - The complete base URL for API requests
§Environment Variables
For Azure URLs (*.openai.azure.com):
AZURE_OPENAI_API_KEY (required)
For other URLs:
OPENAI_API_KEY (required)
§Returns
Result<ChatCompletion> - Configured client or error
§Example
use openai_tools::chat::request::ChatCompletion;
// Uses OPENAI_API_KEY from environment
let chat = ChatCompletion::from_url("https://api.openai.com/v1")?;
// Uses AZURE_OPENAI_API_KEY from environment (complete base URL)
let azure = ChatCompletion::from_url(
"https://my-resource.openai.azure.com/openai/deployments/gpt-4o?api-version=2024-08-01-preview"
)?;
pub fn auth(&self) -> &AuthProvider
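Returns a reference to the configured authentication provider. A minimal inspection sketch (assuming AuthProvider's variants can be matched publicly, as the Azure variant shown above suggests):
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::auth::AuthProvider;
let chat = ChatCompletion::new();
match chat.auth() {
    AuthProvider::Azure(_) => println!("using Azure OpenAI"),
    _ => println!("using OpenAI or a compatible endpoint"),
}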
pub fn base_url<T: AsRef<str>>(&mut self, url: T) -> &mut Self
Sets a custom API endpoint URL (OpenAI only)
Use this to point to alternative OpenAI-compatible APIs (e.g., proxy servers).
For Azure, use azure() or with_auth() instead.
§Arguments
url - The base URL (e.g., “https://my-proxy.example.com/v1”)
§Returns
A mutable reference to self for method chaining
§Note
This method only works with OpenAI authentication. For Azure, the endpoint comes from the complete base URL configured via azure() or with_auth().
§Example
use openai_tools::chat::request::ChatCompletion;
let mut chat = ChatCompletion::new();
chat.base_url("https://my-proxy.example.com/v1");
pub fn model(&mut self, model: ChatModel) -> &mut Self
Sets the model to use for chat completion.
§Arguments
model - The model to use (e.g., ChatModel::Gpt4oMini, ChatModel::Gpt4o)
§Returns
A mutable reference to self for method chaining
§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
let mut chat = ChatCompletion::new();
chat.model(ChatModel::Gpt4oMini);
pub fn model_id<T: AsRef<str>>(&mut self, model_id: T) -> &mut Self
👎 Deprecated since 0.2.0: Use model(ChatModel) instead for type safety
Sets the model using a string ID (for backward compatibility).
Prefer using model() with the ChatModel enum for type safety.
§Arguments
model_id - OpenAI model ID string (e.g., “gpt-4o-mini”)
§Returns
A mutable reference to self for method chaining
§Example
use openai_tools::chat::request::ChatCompletion;
let mut chat = ChatCompletion::new();
chat.model_id("gpt-4o-mini");Sourcepub fn timeout(&mut self, timeout: Duration) -> &mut Self
pub fn timeout(&mut self, timeout: Duration) -> &mut Self
Sets the request timeout duration
§Arguments
timeout - The maximum time to wait for a response
§Returns
A mutable reference to self for method chaining
§Example
use std::time::Duration;
use openai_tools::chat::request::ChatCompletion;
let mut chat = ChatCompletion::new();
chat.model_id("gpt-4o-mini")
.timeout(Duration::from_secs(30));
pub fn add_message(&mut self, message: Message) -> &mut Self
Adds a single message to the conversation history
This method appends a new message to the existing conversation history. It’s useful for building conversations incrementally.
§Arguments
message - The message to add to the conversation
§Returns
A mutable reference to self for method chaining
§Examples
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;
let mut chat = ChatCompletion::new();
chat.add_message(Message::from_string(Role::User, "Hello!"))
.add_message(Message::from_string(Role::Assistant, "Hi there!"))
.add_message(Message::from_string(Role::User, "How are you?"));Sourcepub fn frequency_penalty(&mut self, frequency_penalty: f32) -> &mut Self
pub fn frequency_penalty(&mut self, frequency_penalty: f32) -> &mut Self
Sets the frequency penalty
Penalizes tokens based on how often they have already appeared, reducing repetition. Positive values decrease repetition; negative values increase it.
Note: Reasoning models (GPT-5, o-series) only support frequency_penalty=0. For these models, non-zero values will be ignored with a warning.
§Arguments
frequency_penalty - Frequency penalty value (range: -2.0 to 2.0)
§Returns
A mutable reference to self for method chaining
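§Example
A brief usage sketch with a non-reasoning model (the penalty value is illustrative):
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
chat.frequency_penalty(0.5); // mildly discourage repeated tokens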
pub fn logit_bias<T: AsRef<str>>(&mut self, logit_bias: HashMap<T, i32>) -> &mut Self
Sets logit bias to adjust the probability of specific tokens
Note: Reasoning models (GPT-5, o-series) do not support logit_bias. For these models, this parameter will be ignored with a warning.
§Arguments
logit_bias - A map of token IDs to adjustment values
§Returns
A mutable reference to self for method chaining
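§Example
A sketch; the token ID "50256" is a placeholder, since real token IDs depend on the model's tokenizer:
use std::collections::HashMap;
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
let mut bias = HashMap::new();
bias.insert("50256", -100); // -100 effectively bans a token; +100 strongly favors it
chat.logit_bias(bias);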
pub fn logprobs(&mut self, logprobs: bool) -> &mut Self
Sets whether to include probability information for each token
Note: Reasoning models (GPT-5, o-series) do not support logprobs. For these models, this parameter will be ignored with a warning.
§Arguments
logprobs - true to include probability information
§Returns
A mutable reference to self for method chaining
pub fn top_logprobs(&mut self, top_logprobs: u8) -> &mut Self
Sets the number of top probabilities to return for each token
Note: Reasoning models (GPT-5, o-series) do not support top_logprobs. For these models, this parameter will be ignored with a warning.
§Arguments
top_logprobs - Number of top probabilities (range: 0-20)
§Returns
A mutable reference to self for method chaining
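§Example
A combined sketch for logprobs and top_logprobs; per the OpenAI API, logprobs must be enabled for top_logprobs to take effect:
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
chat.logprobs(true)   // include per-token probability information
    .top_logprobs(5); // and the 5 most likely alternatives per token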
pub fn max_completion_tokens(&mut self, max_completion_tokens: u64) -> &mut Self
pub fn modalities<T: AsRef<str>>(&mut self, modalities: Vec<T>) -> &mut Self
pub fn presence_penalty(&mut self, presence_penalty: f32) -> &mut Self
Sets the presence penalty
Controls the model's tendency to introduce new content into the response. Positive values encourage talking about new topics; negative values encourage staying on existing topics.
Note: Reasoning models (GPT-5, o-series) only support presence_penalty=0. For these models, non-zero values will be ignored with a warning.
§Arguments
presence_penalty - Presence penalty value (range: -2.0 to 2.0)
§Returns
A mutable reference to self for method chaining
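§Example
A brief usage sketch (the penalty value is illustrative):
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
chat.presence_penalty(0.6); // nudge the model toward new topics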
pub fn temperature(&mut self, temperature: f32) -> &mut Self
Sets the temperature parameter to control response randomness
Higher values (e.g., 1.0) produce more creative and diverse outputs, while lower values (e.g., 0.2) produce more deterministic and consistent outputs.
Note: Reasoning models (GPT-5, o-series) only support temperature=1.0. For these models, other values will be ignored with a warning.
§Arguments
temperature - Temperature parameter (range: 0.0 to 2.0)
§Returns
A mutable reference to self for method chaining
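§Example
A brief usage sketch (the value is illustrative):
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
chat.temperature(0.2); // low temperature for more deterministic output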
pub fn json_schema(&mut self, json_schema: Schema) -> &mut Self
pub fn tools(&mut self, tools: Vec<Tool>) -> &mut Self
Sets the tools that can be called by the model
Enables function calling by providing a list of tools that the model can choose to call. When tools are provided, the model may generate tool calls instead of or in addition to regular text responses.
§Arguments
tools - Vector of tools available for the model to use
§Returns
A mutable reference to self for method chaining
pub fn safety_identifier<T: AsRef<str>>(&mut self, safety_id: T) -> &mut Self
Sets the safety identifier for end-user tracking
A stable identifier used to help OpenAI detect users of your application that may be violating usage policies. This enables per-user safety monitoring and abuse detection.
§Arguments
safety_id - A unique, stable identifier for the end user (recommended: hash of email or internal user ID)
§Returns
A mutable reference to self for method chaining
§Examples
use openai_tools::chat::request::ChatCompletion;
let mut chat = ChatCompletion::new();
chat.safety_identifier("user_abc123");
pub fn get_message_history(&self) -> Vec<Message>
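Returns the conversation history accumulated so far. For example:
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;
let mut chat = ChatCompletion::new();
chat.add_message(Message::from_string(Role::User, "Hello!"));
let history = chat.get_message_history();
assert_eq!(history.len(), 1);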
pub async fn chat(&mut self) -> Result<Response>
Sends the chat completion request to the configured API
This method validates the request parameters, constructs the HTTP request, and sends it to the Chat Completions endpoint of the configured provider.
§Returns
A Result containing the API response on success, or an error on failure.
§Errors
Returns an error if:
- API key is not set
- Model ID is not set
- Messages are empty
- Network request fails
- Response parsing fails
§Parameter Validation
For reasoning models (GPT-5, o-series), certain parameters have restrictions:
- temperature: only 1.0 supported
- frequency_penalty: only 0 supported
- presence_penalty: only 0 supported
- logprobs, top_logprobs, logit_bias: not supported
- n: only 1 supported
Validation occurs at two points:
- At setter time (when using the with_model() constructor) - immediate warning
- At API call time (fallback) - for cases where the model is changed after parameters are set
Unsupported parameter values are ignored with a warning and the request proceeds.
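For instance, when the model is chosen after parameters are set, only the call-time fallback can catch the conflict (a sketch using the setters documented above):
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;
let mut chat = ChatCompletion::new();
chat.temperature(0.2);         // no model known yet, so no setter-time check
chat.model(ChatModel::O3Mini); // reasoning model set afterwards
// at chat().await time, temperature 0.2 is ignored with a warning and the request proceeds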
§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;
let mut chat = ChatCompletion::new();
let messages = vec![Message::from_string(Role::User, "Hello!")];
let response = chat
.model_id("gpt-4o-mini")
.messages(messages)
.temperature(1.0)
.chat()
.await?;
println!("{}", response.choices[0].message.content.as_ref().unwrap().text.as_ref().unwrap());
Trait Implementations§
impl Clone for ChatCompletion
fn clone(&self) -> ChatCompletion
fn clone_from(&mut self, source: &Self)
impl Debug for ChatCompletion
Auto Trait Implementations§
impl Freeze for ChatCompletion
impl RefUnwindSafe for ChatCompletion
impl Send for ChatCompletion
impl Sync for ChatCompletion
impl Unpin for ChatCompletion
impl UnwindSafe for ChatCompletion
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
impl<T> IntoEither for T
impl<T> Pointable for T
impl<T> PolicyExt for T where T: ?Sized
impl<R, P> ReadPrimitive<R> for P