Type Alias TokenizerModel

pub type TokenizerModel = TokenModel;

Backward-compatible alias for TokenModel.
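Because this is a type alias rather than a wrapper type, `TokenizerModel` and `TokenModel` are the same type and can be used interchangeably. A minimal self-contained sketch of the pattern, using a two-variant stub enum since only the alias line appears on this page:

```rust
// Stub standing in for the real TokenModel enum (the actual type
// has 27 variants; this subset is for illustration only).
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum TokenModel {
    Gpt4o,
    Claude,
}

// The alias from this page: existing code that names `TokenizerModel`
// keeps compiling against the renamed enum.
pub type TokenizerModel = TokenModel;

fn main() {
    // Variants are reachable through either name; the values are identical.
    let via_alias: TokenizerModel = TokenizerModel::Gpt4o;
    let via_enum: TokenModel = TokenModel::Gpt4o;
    assert_eq!(via_alias, via_enum);
    println!("{:?}", via_alias); // prints "Gpt4o"
}
```

Since the alias introduces no new type, pattern matches, trait impls, and function signatures written against `TokenModel` accept a `TokenizerModel` unchanged.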

Aliased Type

pub enum TokenizerModel {
    Gpt52,
    Gpt52Pro,
    Gpt51,
    Gpt51Mini,
    Gpt51Codex,
    Gpt5,
    Gpt5Mini,
    Gpt5Nano,
    O4Mini,
    O3,
    O3Mini,
    O1,
    O1Mini,
    O1Preview,
    Gpt4o,
    Gpt4oMini,
    Gpt4,
    Gpt35Turbo,
    Claude,
    Gemini,
    Llama,
    CodeLlama,
    Mistral,
    DeepSeek,
    Qwen,
    Cohere,
    Grok,
}
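The variant docs below describe three tokenization strategies: the o200k_base encoding, the legacy cl100k_base encoding, and character-based estimation. A sketch of how a caller might classify variants by strategy; the `Strategy` enum and `strategy` function are illustrative, not part of the crate's API, and the stub enum covers only a representative subset:

```rust
// Stub subset of the enum above (the real type has 27 variants).
#[derive(Debug, Clone, Copy)]
pub enum TokenizerModel {
    Gpt52,
    Gpt4,
    Gpt35Turbo,
    Claude,
}

// Hypothetical classification of variants by tokenization strategy,
// following the groupings given in the variant docs.
#[derive(Debug, PartialEq)]
pub enum Strategy {
    O200kBase,           // modern OpenAI models
    Cl100kBase,          // legacy GPT-4 / GPT-3.5-turbo
    CharEstimation(f64), // non-OpenAI models: approximate chars per token
}

pub fn strategy(model: TokenizerModel) -> Strategy {
    match model {
        TokenizerModel::Gpt52 => Strategy::O200kBase,
        TokenizerModel::Gpt4 | TokenizerModel::Gpt35Turbo => Strategy::Cl100kBase,
        TokenizerModel::Claude => Strategy::CharEstimation(3.5),
    }
}

fn main() {
    println!("{:?}", strategy(TokenizerModel::Gpt4)); // prints "Cl100kBase"
}
```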

Variants

Gpt52

GPT-5.2 - Latest flagship model (Dec 2025), uses o200k_base

Gpt52Pro

GPT-5.2 Pro - Enhanced GPT-5.2 variant, uses o200k_base

Gpt51

GPT-5.1 - Previous flagship (Nov 2025), uses o200k_base

Gpt51Mini

GPT-5.1 Mini - Smaller GPT-5.1 variant, uses o200k_base

Gpt51Codex

GPT-5.1 Codex - Code-specialized variant, uses o200k_base

Gpt5

GPT-5 - Original GPT-5 (Aug 2025), uses o200k_base

Gpt5Mini

GPT-5 Mini - Smaller GPT-5 variant, uses o200k_base

Gpt5Nano

GPT-5 Nano - Smallest GPT-5 variant, uses o200k_base

O4Mini

O4 Mini - Latest reasoning model, uses o200k_base

O3

O3 - Reasoning model, uses o200k_base

O3Mini

O3 Mini - Smaller O3 variant, uses o200k_base

O1

O1 - Original reasoning model, uses o200k_base

O1Mini

O1 Mini - Smaller O1 variant, uses o200k_base

O1Preview

O1 Preview - O1 preview version, uses o200k_base

Gpt4o

GPT-4o - Omni model, uses o200k_base encoding (most efficient)

Gpt4oMini

GPT-4o Mini - Smaller GPT-4o variant, uses o200k_base encoding

Gpt4

GPT-4/GPT-4 Turbo - uses cl100k_base encoding (legacy)

Gpt35Turbo

GPT-3.5-turbo - uses cl100k_base encoding (legacy)

Claude

Claude (all versions) - uses estimation based on ~3.5 chars/token

Gemini

Gemini (all versions including 3, 2.5, 1.5) - estimation ~3.8 chars/token

Llama

Llama 3/4 - estimation based on ~3.5 chars/token

CodeLlama

CodeLlama - more granular tokenization for code (~3.2 chars/token)

Mistral

Mistral (Large, Medium, Small, Codestral) - estimation ~3.5 chars/token

DeepSeek

DeepSeek (V3, R1, Coder) - estimation ~3.5 chars/token

Qwen

Qwen (Qwen3, Qwen2.5) - estimation ~3.5 chars/token

Cohere

Cohere (Command R+, Command R) - estimation ~3.6 chars/token

Grok

Grok (Grok 2, Grok 3) - estimation ~3.5 chars/token
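For the estimation-based variants above, a chars-per-token ratio implies a simple approximate count: divide the character count by the ratio and round up. A hypothetical sketch using the ratios quoted in the variant docs; the function names `chars_per_token` and `estimate_tokens` are illustrative, not part of the crate's API:

```rust
// Approximate chars-per-token ratios quoted in the variant docs above.
fn chars_per_token(model: &str) -> f64 {
    match model {
        "CodeLlama" => 3.2, // denser tokenization for code
        "Gemini" => 3.8,
        "Cohere" => 3.6,
        // Claude, Llama, Mistral, DeepSeek, Qwen, Grok: ~3.5
        _ => 3.5,
    }
}

// Estimated token count: ceil(char count / chars-per-token ratio).
fn estimate_tokens(model: &str, text: &str) -> usize {
    let chars = text.chars().count() as f64;
    (chars / chars_per_token(model)).ceil() as usize
}

fn main() {
    let sample = "x".repeat(10);
    println!("{}", estimate_tokens("Claude", &sample));    // 10 / 3.5 -> 3
    println!("{}", estimate_tokens("CodeLlama", &sample)); // 10 / 3.2 -> 4
}
```

Such character-ratio estimates are heuristics: actual token counts from a model's real tokenizer vary with language, whitespace, and content, so they suit budgeting rather than exact billing.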