
Struct ChatCompletion

pub struct ChatCompletion { /* private fields */ }

OpenAI Chat Completions API client

This structure manages interactions with the OpenAI Chat Completions API and Azure OpenAI API. It handles authentication, request parameter configuration, and API calls.

§Providers

The client supports two providers:

  • OpenAI: Standard OpenAI API (default)
  • Azure: Azure OpenAI Service

§Examples

§OpenAI (default)

use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;

let mut chat = ChatCompletion::new();
let messages = vec![Message::from_string(Role::User, "Hello!")];

let response = chat
    .model_id("gpt-4o-mini")
    .messages(messages)
    .chat()
    .await?;

§Azure OpenAI

use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;

// From environment variables
let mut chat = ChatCompletion::azure()?;

let messages = vec![Message::from_string(Role::User, "Hello!")];
let response = chat.messages(messages).chat().await?;

Implementations§

impl ChatCompletion

pub fn new() -> Self

Creates a new ChatCompletion instance for OpenAI API

Loads the API key from the OPENAI_API_KEY environment variable. If a .env file exists, it will also be loaded.

§Panics

Panics if the OPENAI_API_KEY environment variable is not set.

§Returns

A new ChatCompletion instance configured for OpenAI API

§Example
use openai_tools::chat::request::ChatCompletion;

let mut chat = ChatCompletion::new();

pub fn with_model(model: ChatModel) -> Self

Creates a new ChatCompletion instance with a specified model

This is the recommended constructor as it enables parameter validation at setter time. When you set parameters like temperature(), the model’s parameter support is checked and warnings are logged for unsupported values.

§Arguments
  • model - The model to use for chat completion
§Panics

Panics if the OPENAI_API_KEY environment variable is not set.

§Returns

A new ChatCompletion instance with the specified model

§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;

// Recommended: specify model at creation time
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);

// For reasoning models, unsupported parameters are validated at setter time
let mut reasoning_chat = ChatCompletion::with_model(ChatModel::O3Mini);
reasoning_chat.temperature(0.5); // Warning logged, value ignored

pub fn with_auth(auth: AuthProvider) -> Self

Creates a new ChatCompletion instance with a custom authentication provider

Use this to explicitly configure OpenAI or Azure authentication.

§Arguments
  • auth - The authentication provider
§Returns

A new ChatCompletion instance with the specified auth provider

§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::auth::{AuthProvider, AzureAuth};

// Explicit Azure configuration with complete base URL
let auth = AuthProvider::Azure(
    AzureAuth::new(
        "api-key",
        "https://my-resource.openai.azure.com/openai/deployments/gpt-4o?api-version=2024-08-01-preview"
    )
);
let mut chat = ChatCompletion::with_auth(auth);

pub fn azure() -> Result<Self>

Creates a new ChatCompletion instance for Azure OpenAI API

Loads configuration from Azure-specific environment variables.

§Returns

Result<ChatCompletion> - Configured for Azure or error if env vars missing

§Environment Variables
  • AZURE_OPENAI_API_KEY (required): Azure API key
  • AZURE_OPENAI_BASE_URL (required): Complete endpoint URL including deployment, API path, and api-version
§Example
use openai_tools::chat::request::ChatCompletion;

// With environment variables:
// AZURE_OPENAI_API_KEY=xxx
// AZURE_OPENAI_BASE_URL=https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-08-01-preview
let mut chat = ChatCompletion::azure()?;

pub fn detect_provider() -> Result<Self>

Creates a new ChatCompletion instance by auto-detecting the provider

Tries Azure first (if AZURE_OPENAI_API_KEY is set), then falls back to OpenAI.

§Returns

Result<ChatCompletion> - Auto-configured client or error

§Example
use openai_tools::chat::request::ChatCompletion;

// Uses Azure if AZURE_OPENAI_API_KEY is set, otherwise OpenAI
let mut chat = ChatCompletion::detect_provider()?;

pub fn with_url<S: Into<String>>(base_url: S, api_key: S) -> Self

Creates a new ChatCompletion instance with URL-based provider detection

Analyzes the URL pattern to determine the provider:

  • URLs containing .openai.azure.com → Azure
  • All other URLs → OpenAI-compatible
§Arguments
  • base_url - The complete base URL for API requests
  • api_key - The API key or token
§Returns

ChatCompletion - Configured client

§Example
use openai_tools::chat::request::ChatCompletion;

// OpenAI-compatible API (e.g., local Ollama)
let chat = ChatCompletion::with_url(
    "http://localhost:11434/v1",
    "ollama",
);

// Azure OpenAI (complete base URL)
let azure_chat = ChatCompletion::with_url(
    "https://my-resource.openai.azure.com/openai/deployments/gpt-4o?api-version=2024-08-01-preview",
    "azure-key",
);

pub fn from_url<S: Into<String>>(base_url: S) -> Result<Self>

Creates a new ChatCompletion instance from URL using environment variables

Analyzes the URL pattern to determine the provider, then loads credentials from the appropriate environment variables.

§Arguments
  • base_url - The complete base URL for API requests
§Environment Variables

For Azure URLs (*.openai.azure.com):

  • AZURE_OPENAI_API_KEY (required)

For other URLs:

  • OPENAI_API_KEY (required)
§Returns

Result<ChatCompletion> - Configured client or error

§Example
use openai_tools::chat::request::ChatCompletion;

// Uses OPENAI_API_KEY from environment
let chat = ChatCompletion::from_url("https://api.openai.com/v1")?;

// Uses AZURE_OPENAI_API_KEY from environment (complete base URL)
let azure = ChatCompletion::from_url(
    "https://my-resource.openai.azure.com/openai/deployments/gpt-4o?api-version=2024-08-01-preview"
)?;

pub fn auth(&self) -> &AuthProvider

Returns the authentication provider

§Returns

Reference to the authentication provider


pub fn base_url<T: AsRef<str>>(&mut self, url: T) -> &mut Self

Sets a custom API endpoint URL (OpenAI only)

Use this to point to alternative OpenAI-compatible APIs (e.g., proxy servers). For Azure, use azure() or with_auth() instead.

§Arguments
  • url - The base URL (e.g., “https://my-proxy.example.com/v1”)
§Returns

A mutable reference to self for method chaining

§Note

This method only works with OpenAI authentication. For Azure, the complete endpoint URL is supplied when the client is created via azure() or with_auth().

§Example
use openai_tools::chat::request::ChatCompletion;

let mut chat = ChatCompletion::new();
chat.base_url("https://my-proxy.example.com/v1");

pub fn model(&mut self, model: ChatModel) -> &mut Self

Sets the model to use for chat completion.

§Arguments
  • model - The model to use (e.g., ChatModel::Gpt4oMini, ChatModel::Gpt4o)
§Returns

A mutable reference to self for method chaining

§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;

let mut chat = ChatCompletion::new();
chat.model(ChatModel::Gpt4oMini);

pub fn model_id<T: AsRef<str>>(&mut self, model_id: T) -> &mut Self

👎Deprecated since 0.2.0: Use model(ChatModel) instead for type safety

Sets the model using a string ID (for backward compatibility).

Prefer using model() with the ChatModel enum for type safety.

§Arguments
  • model_id - OpenAI model ID string (e.g., “gpt-4o-mini”)
§Returns

A mutable reference to self for method chaining

§Example
use openai_tools::chat::request::ChatCompletion;

let mut chat = ChatCompletion::new();
chat.model_id("gpt-4o-mini");

pub fn timeout(&mut self, timeout: Duration) -> &mut Self

Sets the request timeout duration

§Arguments
  • timeout - The maximum time to wait for a response
§Returns

A mutable reference to self for method chaining

§Example
use std::time::Duration;
use openai_tools::chat::request::ChatCompletion;

let mut chat = ChatCompletion::new();
chat.model_id("gpt-4o-mini")
    .timeout(Duration::from_secs(30));

pub fn messages(&mut self, messages: Vec<Message>) -> &mut Self

Sets the chat message history

§Arguments
  • messages - Vector of chat messages representing the conversation history
§Returns

A mutable reference to self for method chaining
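
§Example

A minimal sketch using the constructors shown elsewhere on this page. Unlike add_message, which appends, messages sets the entire conversation at once:

```rust
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;

let mut chat = ChatCompletion::new();
// Build the full conversation up front and set it in one call.
let history = vec![
    Message::from_string(Role::User, "Hello!"),
    Message::from_string(Role::Assistant, "Hi there!"),
    Message::from_string(Role::User, "How are you?"),
];
chat.messages(history);
```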


pub fn add_message(&mut self, message: Message) -> &mut Self

Adds a single message to the conversation history

This method appends a new message to the existing conversation history. It’s useful for building conversations incrementally.

§Arguments
  • message - The message to add to the conversation
§Returns

A mutable reference to self for method chaining

§Examples
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;

let mut chat = ChatCompletion::new();
chat.add_message(Message::from_string(Role::User, "Hello!"))
    .add_message(Message::from_string(Role::Assistant, "Hi there!"))
    .add_message(Message::from_string(Role::User, "How are you?"));

pub fn store(&mut self, store: bool) -> &mut Self

Sets whether to store the request and response at OpenAI

§Arguments
  • store - true to store, false to not store
§Returns

A mutable reference to self for method chaining
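
§Example

A minimal sketch; here store(false) opts out of retaining the exchange at OpenAI:

```rust
use openai_tools::chat::request::ChatCompletion;

let mut chat = ChatCompletion::new();
chat.store(false); // do not retain this request/response at OpenAI
```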


pub fn frequency_penalty(&mut self, frequency_penalty: f32) -> &mut Self

Sets the frequency penalty

A parameter that penalizes tokens based on how often they have already appeared in the text. Positive values reduce repetition; negative values increase it.

Note: Reasoning models (GPT-5, o-series) only support frequency_penalty=0. For these models, non-zero values will be ignored with a warning.

§Arguments
  • frequency_penalty - Frequency penalty value (range: -2.0 to 2.0)
§Returns

A mutable reference to self for method chaining
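
§Example

A sketch of the reasoning-model restriction described above, using the with_model constructor so the value is validated at setter time:

```rust
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;

// Standard model: the value is applied as given.
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
chat.frequency_penalty(0.8); // mildly discourage repetition

// Reasoning model: only 0 is supported, so this logs a warning and is ignored.
let mut reasoning_chat = ChatCompletion::with_model(ChatModel::O3Mini);
reasoning_chat.frequency_penalty(0.8);
```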


pub fn logit_bias<T: AsRef<str>>( &mut self, logit_bias: HashMap<T, i32>, ) -> &mut Self

Sets logit bias to adjust the probability of specific tokens

Note: Reasoning models (GPT-5, o-series) do not support logit_bias. For these models, this parameter will be ignored with a warning.

§Arguments
  • logit_bias - A map of token IDs to adjustment values
§Returns

A mutable reference to self for method chaining
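
§Example

A sketch; each key is a token ID rendered as a string, and the IDs below are illustrative rather than real vocabulary entries:

```rust
use std::collections::HashMap;
use openai_tools::chat::request::ChatCompletion;

let mut chat = ChatCompletion::new();
let mut bias: HashMap<&str, i32> = HashMap::new();
bias.insert("50256", -100); // strongly suppress this token (hypothetical ID)
bias.insert("15339", 5);    // mildly favor this token (hypothetical ID)
chat.logit_bias(bias);
```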


pub fn logprobs(&mut self, logprobs: bool) -> &mut Self

Sets whether to include probability information for each token

Note: Reasoning models (GPT-5, o-series) do not support logprobs. For these models, this parameter will be ignored with a warning.

§Arguments
  • logprobs - true to include probability information
§Returns

A mutable reference to self for method chaining


pub fn top_logprobs(&mut self, top_logprobs: u8) -> &mut Self

Sets the number of top probabilities to return for each token

Note: Reasoning models (GPT-5, o-series) do not support top_logprobs. For these models, this parameter will be ignored with a warning.

§Arguments
  • top_logprobs - Number of top probabilities (range: 0-20)
§Returns

A mutable reference to self for method chaining


pub fn max_completion_tokens(&mut self, max_completion_tokens: u64) -> &mut Self

Sets the maximum number of tokens to generate

§Arguments
  • max_completion_tokens - Maximum number of tokens
§Returns

A mutable reference to self for method chaining
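
§Example

A minimal sketch capping the length of the generated output:

```rust
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;

let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
chat.max_completion_tokens(256); // cap output at 256 tokens
```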


pub fn n(&mut self, n: u32) -> &mut Self

Sets the number of responses to generate

Note: Reasoning models (GPT-5, o-series) only support n=1. For these models, values other than 1 will be ignored with a warning.

§Arguments
  • n - Number of responses to generate
§Returns

A mutable reference to self for method chaining


pub fn modalities<T: AsRef<str>>(&mut self, modalities: Vec<T>) -> &mut Self

Sets the available modalities for the response

§Arguments
  • modalities - List of modalities (e.g., ["text", "audio"])
§Returns

A mutable reference to self for method chaining


pub fn presence_penalty(&mut self, presence_penalty: f32) -> &mut Self

Sets the presence penalty

A parameter that penalizes tokens that have already appeared at all, regardless of how often. Positive values encourage the model to move on to new topics; negative values encourage it to stay on existing ones.

Note: Reasoning models (GPT-5, o-series) only support presence_penalty=0. For these models, non-zero values will be ignored with a warning.

§Arguments
  • presence_penalty - Presence penalty value (range: -2.0 to 2.0)
§Returns

A mutable reference to self for method chaining


pub fn temperature(&mut self, temperature: f32) -> &mut Self

Sets the temperature parameter to control response randomness

Higher values (e.g., 1.0) produce more creative and diverse outputs, while lower values (e.g., 0.2) produce more deterministic and consistent outputs.

Note: Reasoning models (GPT-5, o-series) only support temperature=1.0. For these models, other values will be ignored with a warning.

§Arguments
  • temperature - Temperature parameter (range: 0.0 to 2.0)
§Returns

A mutable reference to self for method chaining
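
§Example

A sketch contrasting a standard model with a reasoning model, using the with_model constructor so validation happens at setter time:

```rust
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::models::ChatModel;

// Standard model: low temperature for more deterministic output.
let mut chat = ChatCompletion::with_model(ChatModel::Gpt4oMini);
chat.temperature(0.2);

// Reasoning model: only 1.0 is supported; other values log a warning and are ignored.
let mut reasoning_chat = ChatCompletion::with_model(ChatModel::O3Mini);
reasoning_chat.temperature(0.2);
```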


pub fn json_schema(&mut self, json_schema: Schema) -> &mut Self

Sets structured output using JSON schema

Enables receiving responses in a structured JSON format according to the specified JSON schema.

§Arguments
  • json_schema - JSON schema defining the response structure
§Returns

A mutable reference to self for method chaining


pub fn tools(&mut self, tools: Vec<Tool>) -> &mut Self

Sets the tools that can be called by the model

Enables function calling by providing a list of tools that the model can choose to call. When tools are provided, the model may generate tool calls instead of or in addition to regular text responses.

§Arguments
  • tools - Vector of tools available for the model to use
§Returns

A mutable reference to self for method chaining


pub fn safety_identifier<T: AsRef<str>>(&mut self, safety_id: T) -> &mut Self

Sets the safety identifier for end-user tracking

A stable identifier used to help OpenAI detect users of your application that may be violating usage policies. This enables per-user safety monitoring and abuse detection.

§Arguments
  • safety_id - A unique, stable identifier for the end user (recommended: hash of email or internal user ID)
§Returns

A mutable reference to self for method chaining

§Examples
use openai_tools::chat::request::ChatCompletion;

let mut chat = ChatCompletion::new();
chat.safety_identifier("user_abc123");

pub fn get_message_history(&self) -> Vec<Message>

Gets the current message history

§Returns

A vector containing the message history
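
§Example

A minimal sketch; the returned vector reflects the messages added so far:

```rust
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;

let mut chat = ChatCompletion::new();
chat.add_message(Message::from_string(Role::User, "Hello!"));

let history = chat.get_message_history();
assert_eq!(history.len(), 1);
```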


pub async fn chat(&mut self) -> Result<Response>

Sends the chat completion request to OpenAI API

This method validates the request parameters, constructs the HTTP request, and sends it to the OpenAI Chat Completions endpoint.

§Returns

A Result containing the API response on success, or an error on failure.

§Errors

Returns an error if:

  • API key is not set
  • Model ID is not set
  • Messages are empty
  • Network request fails
  • Response parsing fails
§Parameter Validation

For reasoning models (GPT-5, o-series), certain parameters have restrictions:

  • temperature: only 1.0 supported
  • frequency_penalty: only 0 supported
  • presence_penalty: only 0 supported
  • logprobs, top_logprobs, logit_bias: not supported
  • n: only 1 supported

Validation occurs at two points:

  1. At setter time (when using with_model() constructor) - immediate warning
  2. At API call time (fallback) - for cases where model is changed after setting params

Unsupported parameter values are ignored with a warning and the request proceeds.

§Example
use openai_tools::chat::request::ChatCompletion;
use openai_tools::common::message::Message;
use openai_tools::common::role::Role;

let mut chat = ChatCompletion::new();
let messages = vec![Message::from_string(Role::User, "Hello!")];

let response = chat
    .model_id("gpt-4o-mini")
    .messages(messages)
    .temperature(1.0)
    .chat()
    .await?;

println!("{}", response.choices[0].message.content.as_ref().unwrap().text.as_ref().unwrap());

Trait Implementations§

impl Clone for ChatCompletion

fn clone(&self) -> ChatCompletion

Returns a duplicate of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for ChatCompletion

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for ChatCompletion

fn default() -> Self

Returns the “default value” for a type.
