pub enum ChatModel {
Gpt5_2,
Gpt5_2ChatLatest,
Gpt5_2Pro,
Gpt5_1,
Gpt5_1ChatLatest,
Gpt5_1CodexMax,
Gpt5Mini,
Gpt4_1,
Gpt4_1Mini,
Gpt4_1Nano,
Gpt4o,
Gpt4oMini,
Gpt4oAudioPreview,
Gpt4Turbo,
Gpt4,
Gpt3_5Turbo,
O1,
O1Pro,
O3,
O3Mini,
O4Mini,
Custom(String),
}
Models available for Chat Completions and Responses APIs.
This enum covers all models that can be used with the Chat Completions API
(/v1/chat/completions) and the Responses API (/v1/responses).
§Model Categories
§GPT-5 Series (Latest Flagship)
- [Gpt5_2]: GPT-5.2 Thinking - flagship model for coding and agentic tasks
- [Gpt5_2ChatLatest]: GPT-5.2 Instant - fast workhorse for everyday work
- [Gpt5_2Pro]: GPT-5.2 Pro - smartest for difficult questions (Responses API only)
- [Gpt5_1]: GPT-5.1 - configurable reasoning and non-reasoning
- [Gpt5_1ChatLatest]: GPT-5.1 Chat Latest - chat-optimized GPT-5.1
- [Gpt5_1CodexMax]: GPT-5.1 Codex Max - powers Codex CLI
- [Gpt5Mini]: GPT-5 Mini - smaller, faster variant
§GPT-4.1 Series
- [Gpt4_1]: 1M context window flagship
- [Gpt4_1Mini]: Balanced performance and cost
- [Gpt4_1Nano]: Fastest and most cost-efficient
§GPT-4o Series
- [Gpt4o]: High-intelligence flagship model
- [Gpt4oMini]: Cost-effective GPT-4o variant
- [Gpt4oAudioPreview]: Audio-capable GPT-4o
§Reasoning Models (o-series)
- [O1], [O1Pro]: Full reasoning models
- [O3], [O3Mini]: Latest reasoning models
- [O4Mini]: Fast, cost-efficient reasoning
§Reasoning Model Restrictions
Reasoning models (GPT-5 series, o1, o3, o4 series) have parameter restrictions:
- temperature: Only 1.0 supported
- top_p: Only 1.0 supported
- frequency_penalty: Only 0 supported
- presence_penalty: Only 0 supported
GPT-5 models support the reasoning.effort parameter:
- none: No reasoning (GPT-5.1 default)
- minimal: Very few reasoning tokens
- low, medium, high: Increasing reasoning depth
- xhigh: Maximum reasoning (GPT-5.2 Pro, GPT-5.1 Codex Max)
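As a rough illustration, the sketch below assembles a Responses API payload by hand with serde_json, using as_str() for the model ID and attaching reasoning.effort only for reasoning models. The helper is hypothetical and not part of this crate; the JSON field names follow the OpenAI HTTP API.

use openai_tools::common::models::ChatModel;
use serde_json::json;

// Hypothetical helper: build a /v1/responses payload that respects the
// restrictions above. Field names follow the OpenAI HTTP API.
fn reasoning_request(model: &ChatModel, prompt: &str) -> serde_json::Value {
    let mut body = json!({
        "model": model.as_str(),
        "input": prompt,
    });
    if model.is_reasoning_model() {
        // Reasoning models pin temperature/top_p/penalties to their defaults,
        // so only reasoning.effort is tuned here.
        body["reasoning"] = json!({ "effort": "high" });
    } else {
        body["temperature"] = json!(0.7);
    }
    body
}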
§Example
use openai_tools::common::models::ChatModel;
// Check if a model is a reasoning model
let model = ChatModel::O3Mini;
assert!(model.is_reasoning_model());
// GPT-5 models are also reasoning models
let gpt5 = ChatModel::Gpt5_2;
assert!(gpt5.is_reasoning_model());
// Get the API model ID string
assert_eq!(model.as_str(), "o3-mini");Variants§
Gpt5_2
GPT-5.2 Thinking - Flagship model for coding and agentic tasks
- Context: 128K tokens (256K with thinking)
- Supports: reasoning.effort (none, minimal, low, medium, high, xhigh)
- Supports: verbosity parameter (low, medium, high)
Gpt5_2ChatLatest
GPT-5.2 Instant - Fast workhorse for everyday work
Points to the GPT-5.2 snapshot used in ChatGPT
Gpt5_2Pro
GPT-5.2 Pro - Smartest for difficult questions
- Available in Responses API only
- Supports: xhigh reasoning effort
Gpt5_1
GPT-5.1 - Configurable reasoning and non-reasoning
- Defaults to no reasoning (effort: none)
- Supports: reasoning.effort (none, low, medium, high)
Gpt5_1ChatLatest
GPT-5.1 Chat Latest - Chat-optimized GPT-5.1
Gpt5_1CodexMax
GPT-5.1 Codex Max - Powers Codex and Codex CLI
- Available in Responses API only
- Supports: reasoning.effort (none, medium, high, xhigh)
Gpt5Mini
GPT-5 Mini - Smaller, faster GPT-5 variant
Gpt4_1
GPT-4.1 - Smartest non-reasoning model with 1M token context
Gpt4_1Mini
GPT-4.1 Mini - Balanced performance and cost
Gpt4_1Nano
GPT-4.1 Nano - Fastest and most cost-efficient
Gpt4o
GPT-4o - High-intelligence flagship model (multimodal)
Gpt4oMini
GPT-4o Mini - Cost-effective GPT-4o variant
Gpt4oAudioPreview
GPT-4o Audio Preview - Audio-capable GPT-4o
Gpt4Turbo
GPT-4 Turbo - High capability with faster responses
Gpt4
GPT-4 - Original GPT-4 model
Gpt3_5Turbo
GPT-3.5 Turbo - Fast and cost-effective
O1
O1 - Full reasoning model for complex tasks
O1Pro
O1 Pro - O1 with more compute for complex problems
O3
O3 - Latest full reasoning model
O3Mini
O3 Mini - Smaller, faster reasoning model
O4Mini
O4 Mini - Fast, cost-efficient reasoning model
Custom(String)
Custom model ID for fine-tuned models or new models not yet in enum
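Since Gpt5_2Pro and Gpt5_1CodexMax are documented above as available in the Responses API only, a caller may want to route requests accordingly. A minimal sketch; the endpoint function and path constants are illustrative, not part of the crate:

use openai_tools::common::models::ChatModel;

// Illustrative routing: send Responses-API-only variants to /v1/responses.
fn endpoint(model: &ChatModel) -> &'static str {
    match model {
        ChatModel::Gpt5_2Pro | ChatModel::Gpt5_1CodexMax => "/v1/responses",
        _ => "/v1/chat/completions",
    }
}

assert_eq!(endpoint(&ChatModel::Gpt5_2Pro), "/v1/responses");
assert_eq!(endpoint(&ChatModel::Gpt4oMini), "/v1/chat/completions");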
Implementations§
impl ChatModel
pub fn as_str(&self) -> &str
Returns the model identifier string for API requests.
§Example
use openai_tools::common::models::ChatModel;
assert_eq!(ChatModel::Gpt4oMini.as_str(), "gpt-4o-mini");
assert_eq!(ChatModel::O3Mini.as_str(), "o3-mini");
assert_eq!(ChatModel::Gpt5_2.as_str(), "gpt-5.2");
pub fn is_reasoning_model(&self) -> bool
Checks if this is a reasoning model with parameter restrictions.
Reasoning models (GPT-5 series, o1, o3, o4 series) only support:
- temperature = 1.0
- top_p = 1.0
- frequency_penalty = 0
- presence_penalty = 0
§Example
use openai_tools::common::models::ChatModel;
assert!(ChatModel::O3Mini.is_reasoning_model());
assert!(ChatModel::Gpt5_2.is_reasoning_model());
assert!(!ChatModel::Gpt4oMini.is_reasoning_model());
assert!(!ChatModel::Gpt4_1.is_reasoning_model());
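A common use is clamping user-supplied sampling settings before sending a request. The helper below is a hypothetical sketch, not a crate API:

use openai_tools::common::models::ChatModel;

// Hypothetical guard: reasoning models only accept temperature = 1.0,
// so override whatever the caller requested.
fn effective_temperature(model: &ChatModel, requested: f32) -> f32 {
    if model.is_reasoning_model() { 1.0 } else { requested }
}

assert_eq!(effective_temperature(&ChatModel::O3Mini, 0.2), 1.0);
assert_eq!(effective_temperature(&ChatModel::Gpt4oMini, 0.2), 0.2);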
pub fn parameter_support(&self) -> ParameterSupport
Returns parameter support information for this model.
This method provides detailed information about which parameters are supported by the model and any restrictions that apply.
§Example
use openai_tools::common::models::{ChatModel, ParameterRestriction};
// Standard model supports all parameters
let standard = ChatModel::Gpt4oMini;
let support = standard.parameter_support();
assert_eq!(support.temperature, ParameterRestriction::Any);
assert!(support.logprobs);
// Reasoning model has restrictions
let reasoning = ChatModel::O3Mini;
let support = reasoning.parameter_support();
assert_eq!(support.temperature, ParameterRestriction::FixedValue(1.0));
assert!(!support.logprobs);
assert!(support.reasoning);
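Building on the fields demonstrated above (temperature, logprobs, reasoning), a pre-flight check might look like this hypothetical sketch:

use openai_tools::common::models::{ChatModel, ParameterRestriction};

// Hypothetical pre-flight report using only the fields shown above.
fn describe(model: &ChatModel) {
    let support = model.parameter_support();
    if support.temperature == ParameterRestriction::FixedValue(1.0) {
        println!("{}: temperature pinned to 1.0", model.as_str());
    }
    if !support.logprobs {
        println!("{}: logprobs unavailable", model.as_str());
    }
    if support.reasoning {
        println!("{}: accepts reasoning.effort", model.as_str());
    }
}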
pub fn custom(model_id: impl Into<String>) -> Self
Creates a custom model from a string.
Use this for fine-tuned models or new models not yet in the enum.
§Example
use openai_tools::common::models::ChatModel;
let model = ChatModel::custom("ft:gpt-4o-mini:my-org::abc123");
assert_eq!(model.as_str(), "ft:gpt-4o-mini:my-org::abc123");Trait Implementations§
impl<'de> Deserialize<'de> for ChatModel
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
impl Eq for ChatModel
impl StructuralPartialEq for ChatModel
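Given the Deserialize impl above and the string IDs returned by as_str(), a round-trip presumably looks like the following. The assumption that ChatModel deserializes from its plain string form (with unknown IDs landing in Custom) is ours, not stated by the crate:

use openai_tools::common::models::ChatModel;

// Assumption: the enum deserializes from the API's string form.
let model: ChatModel = serde_json::from_str("\"o3-mini\"").unwrap();
assert_eq!(model.as_str(), "o3-mini");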
Auto Trait Implementations§
impl Freeze for ChatModel
impl RefUnwindSafe for ChatModel
impl Send for ChatModel
impl Sync for ChatModel
impl Unpin for ChatModel
impl UnwindSafe for ChatModel