Enum Model

pub enum Model {
    GPT_4_0125_PREVIEW, GPT_4_TURBO_PREVIEW, GPT_4_1106_PREVIEW,
    GPT_4_TURBO_WITH_VISION, GPT_4_TURBO_1106_WITH_VISION,
    GPT_4, GPT_4_0314, GPT_4_0613,
    GPT_4_32K, GPT_4_32K_0613, GPT_4_32K_0314,
    GPT_3_5_TURBO_0125, GPT_3_5_TURBO, GPT_3_5_TURBO_1106,
    GPT_3_5_TURBO_16K, GPT_3_5_TURBO_16K_0613,
    GPT_3_5_TURBO_0613, GPT_3_5_TURBO_0301,
    TEXT_DAVINCI_003, TEXT_DAVINCI_002,
    TEXT_DAVINCI_EDIT_001, CODE_DAVINCI_EDIT_001,
    WHISPER_1,
    TEXT_EMBEDDING_ADA_002, TEXT_EMBEDDING_ADA_002_v2, TEXT_SEARCH_ADA_DOC_001,
    CODE_DAVINCI_002, CODE_DAVINCI_001, CODE_CUSHMAN_002, CODE_CUSHMAN_001,
    TEXT_MODERATION_LATEST, TEXT_MODERATION_004, TEXT_MODERATION_003,
    TEXT_MODERATION_002, TEXT_MODERATION_001, TEXT_MODERATION_STABLE,
    TEXT_CURIE_001, TEXT_BABBAGE_001, TEXT_ADA_001,
    DAVINCI, CURIE, BABBAGE, ADA,
    DALL_E_3, DALL_E_2,
    TTS_1, TTS_1_HD,
    UNKNOWN,
}

An enum of OpenAI models

Note: GPT-4 models were not publicly available yet (Mar 22, 2023).
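
A minimal usage sketch. The pick_model helper is hypothetical and the import path for Model depends on how the crate is laid out; the enum implements Clone, Debug, PartialEq, and Eq, as listed under Trait Implementations below.

    // use your_crate::Model; // adjust to wherever this enum lives

    fn pick_model(needs_vision: bool) -> Model {
        if needs_vision {
            Model::GPT_4_TURBO_WITH_VISION
        } else {
            Model::GPT_3_5_TURBO
        }
    }

    fn main() {
        let model = pick_model(false);
        assert_eq!(model, Model::GPT_3_5_TURBO); // PartialEq/Eq
        println!("{:?}", model.clone());         // Debug + Clone
    }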

Variants

GPT_4_0125_PREVIEW

GPT-4 Turbo

The latest GPT-4 model intended to reduce cases of “laziness” where the model doesn’t complete a task. Returns a maximum of 4,096 output tokens. Learn more.

Max tokens: 128,000 | Training data: Up to Dec 2023 | Series: GPT-4

GPT_4_TURBO_PREVIEW

Currently points to gpt-4-0125-preview.

Max tokens: 128,000 | Training data: Up to Dec 2023 | Series: GPT-4

GPT_4_1106_PREVIEW

GPT-4 Turbo model featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This is a preview model. Learn more.

Max tokens: 128,000 | Training data: Up to Apr 2023 | Series: GPT-4

GPT_4_TURBO_WITH_VISION

GPT-4 Turbo with the ability to understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version and not yet suited for production traffic. Learn more.

Max tokens: 128,000 | Training data: Up to Apr 2023 | Series: GPT-4

GPT_4_TURBO_1106_WITH_VISION

GPT-4 with the ability to understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version. Learn more.

Max tokens: 128,000 | Training data: Up to Apr 2023 | Series: GPT-4

GPT_4

More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with our latest model iteration.

Note: on June 27th, 2023, gpt-4 will be updated to point from gpt-4-0314 to gpt-4-0613, the latest model iteration.

Max tokens: 8,192 | Training data: Up to Sep 2021 | Series: GPT-4

GPT_4_0314

👎Deprecated: Discontinuation date 2024-06-13, use gpt-4-0613 instead

Snapshot of gpt-4 from March 14th 2023. Unlike gpt-4, this model will not receive updates, and will only be supported for a three month period ending on June 13th 2023.

Max tokens: 8,192 | Training data: Up to Sep 2021 | Series: GPT-4

GPT_4_0613

Snapshot of gpt-4 from June 13th 2023 with function calling data. Unlike gpt-4, this model will not receive updates, and will be deprecated 3 months after a new version is released.

Max tokens: 8,192 | Training data: Up to Sep 2021 | Series: GPT-4

GPT_4_32K

Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration.

Max tokens: 32,768 | Training data: Up to Sep 2021 | Series: GPT-4

GPT_4_32K_0613

Snapshot of gpt-4-32k from June 13th 2023. Unlike gpt-4-32k, this model will not receive updates, and will be deprecated 3 months after a new version is released.

Max tokens: 32,768 | Training data: Up to Sep 2021 | Series: GPT-4

GPT_4_32K_0314

👎Deprecated: Discontinuation date 2023-09-13, use gpt-4-32k-0613 instead

Snapshot of gpt-4-32k from March 14th 2023. Unlike gpt-4-32k, this model will not receive updates, and will be deprecated 3 months after a new version is released.

Max tokens: 32,768 | Training data: Up to Sep 2021 | Series: GPT-4

GPT_3_5_TURBO_0125

Updated GPT-3.5 Turbo

The latest GPT-3.5 Turbo model with higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls. Returns a maximum of 4,096 output tokens. Learn more.

Max tokens: 16,385 | Training data: Up to Sep 2021 | Series: GPT-3.5

GPT_3_5_TURBO

Most capable GPT-3.5 model and optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with our latest model iteration.

Note: on June 27th, 2023, gpt-3.5-turbo will be updated to point from gpt-3.5-turbo-0301 to gpt-3.5-turbo-0613.

Max tokens: 4,096 | Training data: Up to Sep 2021 | Series: GPT-3.5

GPT_3_5_TURBO_1106

The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. Learn more.

Max tokens: 16,385 | Training data: Up to Sep 2021 | Series: GPT-3.5

GPT_3_5_TURBO_16K

Same capabilities as the standard gpt-3.5-turbo model but with 4 times the context.

Note: currently points to gpt-3.5-turbo-16k-0613

Max tokens: 16,385 | Training data: Up to Sep 2021 | Series: GPT-3.5

GPT_3_5_TURBO_16K_0613

👎Deprecated: Discontinuation date 2024-06-13

Snapshot of gpt-3.5-turbo-16k from June 13th 2023. Unlike gpt-3.5-turbo-16k, this model will not receive updates, and will be deprecated 3 months after a new version is released.

Note: Will be deprecated on June 13, 2024.

Max tokens: 16,385 | Training data: Up to Sep 2021 | Series: GPT-3.5

GPT_3_5_TURBO_0613

👎Deprecated: Discontinuation date 2024-06-13

Snapshot of gpt-3.5-turbo from June 13th 2023 with function calling data. Unlike gpt-3.5-turbo, this model will not receive updates, and will be deprecated 3 months after a new version is released.

Note: Will be deprecated on June 13, 2024.

Max tokens: 4,096 | Training data: Up to Sep 2021 | Series: GPT-3.5

GPT_3_5_TURBO_0301

👎Deprecated: Discontinuation date 2023-09-13, use gpt-3.5-turbo-0613 instead

Snapshot of gpt-3.5-turbo from March 1st 2023. Unlike gpt-3.5-turbo, this model will not receive updates, and will only be supported for a three month period ending on June 1st 2023.

Max tokens: 4,096 | Training data: Up to Sep 2021 | Series: GPT-3.5
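
Several snapshot variants above carry deprecation notes that name a replacement. A hypothetical helper (not part of this crate) that maps those deprecated snapshots to the replacements suggested in the deprecation notes:

    fn upgrade_deprecated(m: Model) -> Model {
        match m {
            // replacements taken from the deprecation notes above
            Model::GPT_4_0314 => Model::GPT_4_0613,
            Model::GPT_4_32K_0314 => Model::GPT_4_32K_0613,
            Model::GPT_3_5_TURBO_0301 => Model::GPT_3_5_TURBO_0613,
            other => other,
        }
    }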

TEXT_DAVINCI_003

Can do any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models. Also supports some additional features such as inserting text.

Max tokens: 4,097 | Training data: Up to Sep 2021 | Series: GPT-3.5

TEXT_DAVINCI_002

Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning.

Max tokens: 4,097 | Training data: Up to Sep 2021 | Series: GPT-3.5

TEXT_DAVINCI_EDIT_001

CODE_DAVINCI_EDIT_001

Optimized for code-completion tasks.

Max tokens: 8,001 | Training data: Up to Sep 2021 | Series: GPT-3.5

WHISPER_1

TEXT_EMBEDDING_ADA_002

TEXT_EMBEDDING_ADA_002_v2

TEXT_SEARCH_ADA_DOC_001

CODE_DAVINCI_002

👎Deprecated: The Codex models are now deprecated.

Most capable Codex model. Particularly good at translating natural language to code. In addition to completing code, also supports inserting completions within code.

Max tokens: 8,001 | Training data: Up to Jun 2021 | Series: Codex

CODE_DAVINCI_001

👎Deprecated: The Codex models are now deprecated.

Earlier version of code-davinci-002.

Max tokens: 8,001 | Training data: Up to Jun 2021 | Series: Codex

CODE_CUSHMAN_002

👎Deprecated: The Codex models are now deprecated.

Almost as capable as Davinci Codex, but slightly faster. This speed advantage may make it preferable for real-time applications.

Max tokens: 2,048 | Training data: - | Series: Codex

CODE_CUSHMAN_001

👎Deprecated: The Codex models are now deprecated.

Earlier version of code-cushman-002.

Max tokens: 2,048 | Training data: - | Series: Codex

TEXT_MODERATION_LATEST

Most capable moderation model. Accuracy will be slightly higher than the stable model.

Series: Moderation

TEXT_MODERATION_004

TEXT_MODERATION_003

TEXT_MODERATION_002

TEXT_MODERATION_001

TEXT_MODERATION_STABLE

Almost as capable as the latest model, but slightly older.

Series: Moderation


TEXT_CURIE_001

Very capable, faster and lower cost than Davinci.

Max tokens: 2,049 | Training data: Up to Oct 2019 | Series: GPT-3

TEXT_BABBAGE_001

Capable of straightforward tasks, very fast, and lower cost.

Max tokens: 2,049 | Training data: Up to Oct 2019 | Series: GPT-3

TEXT_ADA_001

Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.

Max tokens: 2,049 | Training data: Up to Oct 2019 | Series: GPT-3

DAVINCI

Most capable GPT-3 model. Can do any task the other models can do, often with higher quality.

Max tokens: 2,049 | Training data: Up to Oct 2019 | Series: GPT-3

CURIE

Very capable, but faster and lower cost than Davinci.

Max tokens: 2,049 | Training data: Up to Oct 2019 | Series: GPT-3

BABBAGE

Capable of straightforward tasks, very fast, and lower cost.

Max tokens: 2,049 | Training data: Up to Oct 2019 | Series: GPT-3

ADA

Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.

Max tokens: 2,049 | Training data: Up to Oct 2019 | Series: GPT-3

DALL_E_3

The latest DALL·E model released in Nov 2023. Learn more.


DALL_E_2

The previous DALL·E model released in Nov 2022. The 2nd iteration of DALL·E with more realistic, accurate, and 4x greater resolution images than the original model.


TTS_1

The latest text-to-speech model, optimized for speed.

TTS_1_HD

The latest text-to-speech model, optimized for quality.

UNKNOWN
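
The docs do not say when UNKNOWN is produced; presumably it acts as a catch-all for model identifiers the crate does not recognize (an assumption). A defensive sketch for callers that want to reject it:

    fn ensure_known(m: Model) -> Result<Model, &'static str> {
        match m {
            Model::UNKNOWN => Err("unrecognized model identifier"), // assumed semantics
            known => Ok(known),
        }
    }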

Trait Implementations

impl Clone for Model

    fn clone(&self) -> Model
    Returns a copy of the value.

    fn clone_from(&mut self, source: &Self)
    Performs copy-assignment from source.

impl Debug for Model

    fn fmt(&self, f: &mut Formatter<'_>) -> Result
    Formats the value using the given formatter.

impl<'de> Deserialize<'de> for Model

    fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
    where __D: Deserializer<'de>
    Deserialize this value from the given Serde deserializer.

impl Into<&'static str> for Model

    fn into(self) -> &'static str
    Converts this type into the (usually inferred) input type.
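
A sketch of the conversion. The exact &'static str each variant maps to is defined by the crate; the value in the comment below is an assumption based on the variant name.

    fn model_name(m: Model) -> &'static str {
        m.into() // uses the Into<&'static str> impl above
    }

    // model_name(Model::GPT_4_0613) presumably returns "gpt-4-0613",
    // but consult the crate source for the authoritative strings.
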
impl PartialEq for Model

    fn eq(&self, other: &Model) -> bool
    Tests for self and other values to be equal, and is used by ==.

    fn ne(&self, other: &Rhs) -> bool
    Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl Serialize for Model

    fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>
    where __S: Serializer
    Serialize this value into the given Serde serializer.
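
A round-trip sketch through serde_json (choosing serde_json as the format is an assumption; any Serde serializer works). The exact serialized representation of each variant depends on the crate's serde attributes.

    fn round_trip() -> serde_json::Result<()> {
        let json = serde_json::to_string(&Model::GPT_3_5_TURBO)?;
        let back: Model = serde_json::from_str(&json)?;
        assert_eq!(back, Model::GPT_3_5_TURBO);
        Ok(())
    }
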
impl Eq for Model

impl StructuralPartialEq for Model

Auto Trait Implementations

impl Freeze for Model
impl RefUnwindSafe for Model
impl Send for Model
impl Sync for Model
impl Unpin for Model
impl UnwindSafe for Model

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized

    fn type_id(&self) -> TypeId
    Gets the TypeId of self.

impl<T> Borrow<T> for T where T: ?Sized

    fn borrow(&self) -> &T
    Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized

    fn borrow_mut(&mut self) -> &mut T
    Mutably borrows from an owned value.

impl<T> CloneToUninit for T where T: Clone

    unsafe fn clone_to_uninit(&self, dest: *mut u8)
    🔬 Nightly-only experimental API (clone_to_uninit). Performs copy-assignment from self to dest.

impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized

    fn equivalent(&self, key: &K) -> bool
    Checks if this value is equivalent to the given key.

impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized

    fn equivalent(&self, key: &K) -> bool
    Compare self to key and return true if they are equal.

impl<T> From<T> for T

    fn from(t: T) -> T
    Returns the argument unchanged.

impl<T> Instrument for T

    fn instrument(self, span: Span) -> Instrumented<Self>
    Instruments this type with the provided Span, returning an Instrumented wrapper.

    fn in_current_span(self) -> Instrumented<Self>
    Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T where U: From<T>

    fn into(self) -> U
    Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T where T: Clone

    type Owned = T
    The resulting type after obtaining ownership.

    fn to_owned(&self) -> T
    Creates owned data from borrowed data, usually by cloning.

    fn clone_into(&self, target: &mut T)
    Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T where U: Into<T>

    type Error = Infallible
    The type returned in the event of a conversion error.

    fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
    Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>

    type Error = <U as TryFrom<T>>::Error
    The type returned in the event of a conversion error.

    fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
    Performs the conversion.

impl<T> WithSubscriber for T

    fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>
    Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

    fn with_current_subscriber(self) -> WithDispatch<Self>
    Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.

impl<T> DeserializeOwned for T where T: for<'de> Deserialize<'de>

impl<T> ErasedDestructor for T where T: 'static

impl<T> MaybeSendSync for T