pub enum Model {
GPT_4_0125_PREVIEW,
GPT_4_TURBO_PREVIEW,
GPT_4_1106_PREVIEW,
GPT_4_TURBO_WITH_VISION,
GPT_4_TURBO_1106_WITH_VISION,
GPT_4,
GPT_4_0314,
GPT_4_0613,
GPT_4_32K,
GPT_4_32K_0613,
GPT_4_32K_0314,
GPT_3_5_TURBO_0125,
GPT_3_5_TURBO,
GPT_3_5_TURBO_1106,
GPT_3_5_TURBO_16K,
GPT_3_5_TURBO_16K_0613,
GPT_3_5_TURBO_0613,
GPT_3_5_TURBO_0301,
TEXT_DAVINCI_003,
TEXT_DAVINCI_002,
TEXT_DAVINCI_EDIT_001,
CODE_DAVINCI_EDIT_001,
WHISPER_1,
TEXT_EMBEDDING_ADA_002,
TEXT_EMBEDDING_ADA_002_v2,
TEXT_SEARCH_ADA_DOC_001,
CODE_DAVINCI_002,
CODE_DAVINCI_001,
CODE_CUSHMAN_002,
CODE_CUSHMAN_001,
TEXT_MODERATION_LATEST,
TEXT_MODERATION_004,
TEXT_MODERATION_003,
TEXT_MODERATION_002,
TEXT_MODERATION_001,
TEXT_MODERATION_STABLE,
TEXT_CURIE_001,
TEXT_BABBAGE_001,
TEXT_ADA_001,
DAVINCI,
CURIE,
BABBAGE,
ADA,
DALL_E_3,
DALL_E_2,
TTS_1,
TTS_1_HD,
UNKNOWN,
}
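As a sketch of how a table-driven enum like this might be consumed, the standalone snippet below pairs a trimmed-down copy of the enum with a hypothetical `max_tokens` helper (not part of the crate) returning the documented context lengths from the variant tables below.

```rust
// Standalone sketch, not the crate's actual code: a trimmed-down copy of
// the enum with a hypothetical helper mapping variants to the documented
// context lengths listed in the variant tables.
#[derive(Debug, PartialEq)]
enum Model {
    Gpt4_0125Preview,
    Gpt4,
    Gpt3_5Turbo,
    Unknown,
}

fn max_tokens(model: &Model) -> Option<u32> {
    match model {
        Model::Gpt4_0125Preview => Some(128_000),
        Model::Gpt4 => Some(8_192),
        Model::Gpt3_5Turbo => Some(4_096),
        Model::Unknown => None, // no documented limit for unknown models
    }
}

fn main() {
    assert_eq!(max_tokens(&Model::Gpt4), Some(8_192));
    assert_eq!(max_tokens(&Model::Unknown), None);
    println!("gpt-4 max tokens: {:?}", max_tokens(&Model::Gpt4));
}
```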
An enum of OpenAI models
Note: GPT-4 models are not publicly available yet (Mar 22, 2023).
Variants
GPT_4_0125_PREVIEW
GPT-4 Turbo
The latest GPT-4 model intended to reduce cases of “laziness” where the model doesn’t complete a task. Returns a maximum of 4,096 output tokens. Learn more.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
128,000 tokens | Up to Dec 2023 | GPT-4 |
GPT_4_TURBO_PREVIEW
Currently points to gpt-4-0125-preview.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
128,000 tokens | Up to Dec 2023 | GPT-4 |
GPT_4_1106_PREVIEW
GPT-4 Turbo model featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. This is a preview model. Learn more.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
128,000 tokens | Up to Apr 2023 | GPT-4 |
GPT_4_TURBO_WITH_VISION
Ability to understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version and not yet suited for production traffic. Learn more.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
128,000 tokens | Up to Apr 2023 | GPT-4 |
GPT_4_TURBO_1106_WITH_VISION
GPT-4 with the ability to understand images, in addition to all other GPT-4 Turbo capabilities. Returns a maximum of 4,096 output tokens. This is a preview model version. Learn more.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
128,000 tokens | Up to Apr 2023 | GPT-4 |
GPT_4
More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with our latest model iteration.
Note: on June 27th, 2023, gpt-4 will be updated to point from gpt-4-0314 to gpt-4-0613, the latest model iteration.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8,192 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_0314
Deprecated: use gpt-4-0613 instead.
Snapshot of gpt-4 from March 14th 2023. Unlike gpt-4, this model will not receive updates, and will only be supported for a three month period ending on June 13th 2023.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8,192 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_0613
Snapshot of gpt-4 from June 13th 2023 with function calling data. Unlike gpt-4, this model will not receive updates, and will be deprecated 3 months after a new version is released.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8,192 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_32K
Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
32,768 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_32K_0613
Snapshot of gpt-4-32k from June 13th 2023. Unlike gpt-4-32k, this model will not receive updates, and will be deprecated 3 months after a new version is released.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
32,768 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_32K_0314
Deprecated: use gpt-4-32k-0613 instead.
Snapshot of gpt-4-32k from March 14th 2023. Unlike gpt-4-32k, this model will not receive updates, and will be deprecated 3 months after a new version is released.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
32,768 tokens | Up to Sep 2021 | GPT-4 |
GPT_3_5_TURBO_0125
Updated GPT 3.5 Turbo
The latest GPT-3.5 Turbo model with higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls. Returns a maximum of 4,096 output tokens. Learn more.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
16,385 tokens | Up to Sep 2021 | GPT-3.5 |
GPT_3_5_TURBO
Most capable GPT-3.5 model and optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with our latest model iteration.
Note: on June 27th, 2023, gpt-3.5-turbo will be updated to point from gpt-3.5-turbo-0301 to gpt-3.5-turbo-0613.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4,096 tokens | Up to Sep 2021 | GPT-3.5 |
GPT_3_5_TURBO_1106
The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Returns a maximum of 4,096 output tokens. Learn more.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
16,385 tokens | Up to Sep 2021 | GPT-3.5 |
GPT_3_5_TURBO_16K
Same capabilities as the standard gpt-3.5-turbo model but with 4 times the context.
Note: currently points to gpt-3.5-turbo-16k-0613.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
16,385 tokens | Up to Sep 2021 | GPT-3.5 |
GPT_3_5_TURBO_16K_0613
Snapshot of gpt-3.5-turbo-16k from June 13th 2023. Unlike gpt-3.5-turbo-16k, this model will not receive updates, and will be deprecated 3 months after a new version is released.
Note: Will be deprecated on June 13, 2024.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
16,385 tokens | Up to Sep 2021 | GPT-3.5 |
GPT_3_5_TURBO_0613
Snapshot of gpt-3.5-turbo from June 13th 2023 with function calling data. Unlike gpt-3.5-turbo, this model will not receive updates, and will be deprecated 3 months after a new version is released.
Note: Will be deprecated on June 13, 2024.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4,096 tokens | Up to Sep 2021 | GPT-3.5 |
GPT_3_5_TURBO_0301
Deprecated: use gpt-3.5-turbo-0613 instead.
Snapshot of gpt-3.5-turbo from March 1st 2023. Unlike gpt-3.5-turbo, this model will not receive updates, and will only be supported for a three month period ending on June 1st 2023.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4,096 tokens | Up to Sep 2021 | GPT-3.5 |
TEXT_DAVINCI_003
Can do any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models. Also supports some additional features such as inserting text.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4,097 tokens | Up to Sep 2021 | GPT-3.5 |
TEXT_DAVINCI_002
Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4,097 tokens | Up to Sep 2021 | GPT-3.5 |
TEXT_DAVINCI_EDIT_001
CODE_DAVINCI_EDIT_001
Optimized for code-completion tasks
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8,001 tokens | Up to Sep 2021 | GPT-3.5 |
WHISPER_1
TEXT_EMBEDDING_ADA_002
TEXT_EMBEDDING_ADA_002_v2
TEXT_SEARCH_ADA_DOC_001
CODE_DAVINCI_002
Most capable Codex model. Particularly good at translating natural language to code. In addition to completing code, also supports inserting completions within code.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8,001 tokens | Up to Jun 2021 | Codex |
CODE_DAVINCI_001
Earlier version of code-davinci-002
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8,001 tokens | Up to Jun 2021 | Codex |
CODE_CUSHMAN_002
Almost as capable as Davinci Codex, but slightly faster. This speed advantage may make it preferable for real-time applications.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,048 tokens | - | Codex |
CODE_CUSHMAN_001
Earlier version of code-cushman-002
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,048 tokens | - | Codex |
TEXT_MODERATION_LATEST
Most capable moderation model. Accuracy will be slightly higher than the stable model.
Series: Moderation
TEXT_MODERATION_004
TEXT_MODERATION_003
TEXT_MODERATION_002
TEXT_MODERATION_001
TEXT_MODERATION_STABLE
Almost as capable as the latest model, but slightly older.
Series: Moderation
TEXT_CURIE_001
Very capable, faster and lower cost than Davinci.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,049 tokens | Up to Oct 2019 | GPT-3 |
TEXT_BABBAGE_001
Capable of straightforward tasks, very fast, and lower cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,049 tokens | Up to Oct 2019 | GPT-3 |
TEXT_ADA_001
Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,049 tokens | Up to Oct 2019 | GPT-3 |
DAVINCI
Most capable GPT-3 model. Can do any task the other models can do, often with higher quality.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,049 tokens | Up to Oct 2019 | GPT-3 |
CURIE
Very capable, but faster and lower cost than Davinci.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,049 tokens | Up to Oct 2019 | GPT-3 |
BABBAGE
Capable of straightforward tasks, very fast, and lower cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,049 tokens | Up to Oct 2019 | GPT-3 |
ADA
Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2,049 tokens | Up to Oct 2019 | GPT-3 |
DALL_E_3
The latest DALL·E model released in Nov 2023. Learn more.
DALL_E_2
The previous DALL·E model released in Nov 2022. The 2nd iteration of DALL·E with more realistic, accurate, and 4x greater resolution images than the original model.
TTS_1
The latest text to speech model, optimized for speed.
TTS_1_HD
The latest text to speech model, optimized for quality.
UNKNOWN
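The UNKNOWN variant implies the crate's deserializer falls back to it for model ids it does not recognize. A minimal standalone sketch of that fallback pattern follows; the `parse_model` helper and its string mapping are illustrative, not the crate's actual serde code.

```rust
// Illustrative fallback pattern on a trimmed-down copy of the enum:
// unrecognized model ids collapse into Unknown instead of failing.
#[derive(Debug, PartialEq)]
enum Model {
    Gpt4,
    Gpt3_5Turbo,
    DallE3,
    Unknown,
}

fn parse_model(id: &str) -> Model {
    match id {
        "gpt-4" => Model::Gpt4,
        "gpt-3.5-turbo" => Model::Gpt3_5Turbo,
        "dall-e-3" => Model::DallE3,
        _ => Model::Unknown, // forward compatible with new model ids
    }
}

fn main() {
    assert_eq!(parse_model("gpt-4"), Model::Gpt4);
    assert_eq!(parse_model("some-future-model"), Model::Unknown);
}
```

This keeps the enum forward compatible: callers can match on Unknown explicitly rather than having deserialization error out when the API introduces a new model.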
Trait Implementations
impl<'de> Deserialize<'de> for Model
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where __D: Deserializer<'de>,
impl Eq for Model
impl StructuralPartialEq for Model
Auto Trait Implementations
impl Freeze for Model
impl RefUnwindSafe for Model
impl Send for Model
impl Sync for Model
impl Unpin for Model
impl UnwindSafe for Model
Blanket Implementations
impl<T> BorrowMut<T> for T
where T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where T: Clone,
impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.