pub enum Model {
    GPT_4,
    GPT_4_0314,
    GPT_4_32K,
    GPT_4_32K_0314,
    GPT_3_5_TURBO,
    GPT_3_5_TURBO_0301,
    TEXT_DAVINCI_003,
    TEXT_DAVINCI_002,
    TEXT_DAVINCI_EDIT_001,
    CODE_DAVINCI_EDIT_001,
    WHISPER_1,
    TEXT_EMBEDDING_ADA_002,
    TEXT_EMBEDDING_ADA_002_v2,
    TEXT_SEARCH_ADA_DOC_001,
    CODE_DAVINCI_002,
    CODE_CUSHMAN_001,
    TEXT_MODERATION_LATEST,
    TEXT_MODERATION_004,
    TEXT_MODERATION_003,
    TEXT_MODERATION_002,
    TEXT_MODERATION_001,
    TEXT_MODERATION_STABLE,
    TEXT_CURIE_001,
    TEXT_BABBAGE_001,
    TEXT_ADA_001,
    DAVINCI,
    CURIE,
    BABBAGE,
    ADA,
    UNKNOWN,
}
An enum of OpenAI models.
Note: GPT-4 models are not publicly available yet (Mar 22, 2023).
Variants
GPT_4
More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with our latest model iteration.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8192 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_0314
Snapshot of gpt-4 from March 14th 2023. Unlike gpt-4, this model will not receive updates, and will only be supported for a three month period ending on June 14th 2023.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8192 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_32K
Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
32768 tokens | Up to Sep 2021 | GPT-4 |
GPT_4_32K_0314
Snapshot of gpt-4-32k from March 14th 2023. Unlike gpt-4-32k, this model will not receive updates, and will only be supported for a three month period ending on June 14th 2023.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
32768 tokens | Up to Sep 2021 | GPT-4 |
GPT_3_5_TURBO
Most capable GPT-3.5 model and optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with our latest model iteration.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4096 tokens | Up to Sep 2021 | GPT-3.5 |
GPT_3_5_TURBO_0301
Snapshot of gpt-3.5-turbo from March 1st 2023. Unlike gpt-3.5-turbo, this model will not receive updates, and will only be supported for a three month period ending on June 1st 2023.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4096 tokens | Up to Sep 2021 | GPT-3.5 |
TEXT_DAVINCI_003
Can do any language task with better quality, longer output, and consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4097 tokens | Up to Sep 2021 | GPT-3.5 |
TEXT_DAVINCI_002
Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
4097 tokens | Up to Sep 2021 | GPT-3.5 |
TEXT_DAVINCI_EDIT_001
CODE_DAVINCI_EDIT_001
Optimized for code-completion tasks
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8001 tokens | Up to Sep 2021 | GPT-3.5 |
WHISPER_1
TEXT_EMBEDDING_ADA_002
TEXT_EMBEDDING_ADA_002_v2
TEXT_SEARCH_ADA_DOC_001
CODE_DAVINCI_002
Most capable Codex model. Particularly good at translating natural language to code. In addition to completing code, also supports inserting completions within code.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
8001 tokens | Up to Jun 2021 | Codex |
CODE_CUSHMAN_001
Almost as capable as Davinci Codex, but slightly faster. This speed advantage may make it preferable for real-time applications.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2048 tokens | - | Codex |
TEXT_MODERATION_LATEST
Most capable moderation model. Accuracy will be slightly higher than the stable model.
Series: Moderation
TEXT_MODERATION_004
TEXT_MODERATION_003
TEXT_MODERATION_002
TEXT_MODERATION_001
TEXT_MODERATION_STABLE
Almost as capable as the latest model, but slightly older.
Series: Moderation
TEXT_CURIE_001
Very capable, faster and lower cost than Davinci.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2049 tokens | Up to Oct 2019 | GPT-3 |
TEXT_BABBAGE_001
Capable of straightforward tasks, very fast, and lower cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2049 tokens | Up to Oct 2019 | GPT-3 |
TEXT_ADA_001
Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2049 tokens | Up to Oct 2019 | GPT-3 |
DAVINCI
Most capable GPT-3 model. Can do any task the other models can do, often with higher quality.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2049 tokens | Up to Oct 2019 | GPT-3 |
CURIE
Very capable, but faster and lower cost than Davinci.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2049 tokens | Up to Oct 2019 | GPT-3 |
BABBAGE
Capable of straightforward tasks, very fast, and lower cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2049 tokens | Up to Oct 2019 | GPT-3 |
ADA
Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost.
MAX TOKENS | TRAINING DATA | SERIES |
---|---|---|
2049 tokens | Up to Oct 2019 | GPT-3 |
UNKNOWN
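The context sizes in the tables above lend themselves to a simple lookup helper. A minimal sketch with a local stand-in for a few variants (the `max_tokens` function is hypothetical, not part of the crate; the values come from the MAX TOKENS column above):

```rust
// Hypothetical stand-in for a few `Model` variants; token limits are
// taken from the MAX TOKENS column of the tables above.
#[allow(non_camel_case_types)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Model {
    GPT_4,
    GPT_4_32K,
    GPT_3_5_TURBO,
    DAVINCI,
}

// Maps each variant to its maximum context length in tokens.
pub fn max_tokens(model: Model) -> u32 {
    match model {
        Model::GPT_4 => 8192,
        Model::GPT_4_32K => 32768,
        Model::GPT_3_5_TURBO => 4096,
        Model::DAVINCI => 2049,
    }
}

fn main() {
    println!("gpt-4 context: {} tokens", max_tokens(Model::GPT_4));
}
```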
Trait Implementations
impl<'de> Deserialize<'de> for Model
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where __D: Deserializer<'de>,
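The `Deserialize` impl maps the model-name strings returned by the API onto enum variants. A stdlib-only sketch of the same idea using `FromStr` (the string spellings follow OpenAI's published model names; the fallback to `UNKNOWN` mirrors the catch-all variant above):

```rust
use std::str::FromStr;

// Minimal stand-in for a few variants of the `Model` enum above.
#[allow(non_camel_case_types)]
#[derive(Debug, PartialEq, Eq)]
pub enum Model {
    GPT_4,
    GPT_3_5_TURBO,
    TEXT_DAVINCI_003,
    UNKNOWN,
}

impl FromStr for Model {
    type Err = std::convert::Infallible;

    // Unrecognized names fall back to `UNKNOWN` rather than erroring,
    // so new server-side model names never break parsing.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(match s {
            "gpt-4" => Model::GPT_4,
            "gpt-3.5-turbo" => Model::GPT_3_5_TURBO,
            "text-davinci-003" => Model::TEXT_DAVINCI_003,
            _ => Model::UNKNOWN,
        })
    }
}

fn main() {
    let m: Model = "gpt-3.5-turbo".parse().unwrap();
    println!("{:?}", m);
}
```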
impl PartialEq<Model> for Model
impl Eq for Model
impl StructuralEq for Model
impl StructuralPartialEq for Model
Auto Trait Implementations
impl RefUnwindSafe for Model
impl Send for Model
impl Sync for Model
impl Unpin for Model
impl UnwindSafe for Model
Blanket Implementations
impl<Q, K> Equivalent<K> for Q where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.
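This blanket `Equivalent` impl generalizes the `Borrow`-based comparison that std's `HashMap` already uses: any borrowed form of the key can be compared against the stored key without building an owned value. A sketch with stdlib types only (the map contents are illustrative):

```rust
use std::collections::HashMap;

// Look up a per-model token limit by `&str`, even though the map's keys
// are owned `String`s. This works because `String: Borrow<str>` and
// `str: Eq` -- the same bounds the blanket impl above requires.
pub fn limit_for(limits: &HashMap<String, u32>, name: &str) -> Option<u32> {
    limits.get(name).copied()
}

fn main() {
    let mut limits = HashMap::new();
    limits.insert("gpt-4".to_string(), 8192_u32);
    println!("{:?}", limit_for(&limits, "gpt-4"));
}
```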