pub enum ModelId {
Gemini25FlashPreview,
Gemini25Flash,
Gemini25FlashLite,
Gemini25Pro,
GPT5,
GPT5Codex,
GPT5Mini,
GPT5Nano,
CodexMiniLatest,
OpenAIGptOss20b,
OpenAIGptOss120b,
ClaudeOpus41,
ClaudeSonnet45,
ClaudeHaiku45,
ClaudeSonnet4,
DeepSeekChat,
DeepSeekReasoner,
XaiGrok4,
XaiGrok4Mini,
XaiGrok4Code,
XaiGrok4CodeLatest,
XaiGrok4Vision,
ZaiGlm46,
ZaiGlm45,
ZaiGlm45Air,
ZaiGlm45X,
ZaiGlm45Airx,
ZaiGlm45Flash,
ZaiGlm432b0414128k,
MoonshotKimiK2TurboPreview,
MoonshotKimiK20905Preview,
MoonshotKimiK20711Preview,
MoonshotKimiLatest,
MoonshotKimiLatest8k,
MoonshotKimiLatest32k,
MoonshotKimiLatest128k,
OllamaGptOss20b,
OllamaGptOss120bCloud,
OllamaQwen317b,
LmStudioMetaLlama38BInstruct,
LmStudioMetaLlama318BInstruct,
LmStudioQwen257BInstruct,
LmStudioGemma22BIt,
LmStudioGemma29BIt,
LmStudioPhi31Mini4kInstruct,
OpenRouterGrokCodeFast1,
OpenRouterGrok4Fast,
OpenRouterGrok4,
OpenRouterZaiGlm46,
OpenRouterMoonshotaiKimiK20905,
OpenRouterMoonshotaiKimiK2Free,
OpenRouterQwen3Max,
OpenRouterQwen3235bA22b,
OpenRouterQwen3235bA22bFree,
OpenRouterQwen3235bA22b2507,
OpenRouterQwen3235bA22bThinking2507,
OpenRouterQwen332b,
OpenRouterQwen330bA3b,
OpenRouterQwen330bA3bFree,
OpenRouterQwen330bA3bInstruct2507,
OpenRouterQwen330bA3bThinking2507,
OpenRouterQwen314b,
OpenRouterQwen314bFree,
OpenRouterQwen38b,
OpenRouterQwen38bFree,
OpenRouterQwen34bFree,
OpenRouterQwen3Next80bA3bInstruct,
OpenRouterQwen3Next80bA3bThinking,
OpenRouterQwen3Coder,
OpenRouterQwen3CoderFree,
OpenRouterQwen3CoderPlus,
OpenRouterQwen3CoderFlash,
OpenRouterQwen3Coder30bA3bInstruct,
OpenRouterDeepSeekV32Exp,
OpenRouterDeepSeekChatV31,
OpenRouterDeepSeekR1,
OpenRouterDeepSeekChatV31Free,
OpenRouterNvidiaNemotronNano9bV2Free,
OpenRouterOpenAIGptOss120b,
OpenRouterOpenAIGptOss20b,
OpenRouterOpenAIGptOss20bFree,
OpenRouterOpenAIGpt5,
OpenRouterOpenAIGpt5Codex,
OpenRouterOpenAIGpt5Chat,
OpenRouterOpenAIGpt4oSearchPreview,
OpenRouterOpenAIGpt4oMiniSearchPreview,
OpenRouterOpenAIChatgpt4oLatest,
OpenRouterAnthropicClaudeSonnet45,
OpenRouterAnthropicClaudeHaiku45,
OpenRouterAnthropicClaudeOpus41,
OpenRouterMinimaxM2Free,
}
Centralized enum for all supported model identifiers
Variants
Gemini25FlashPreview
Gemini 2.5 Flash Preview - Latest fast model with advanced capabilities
Gemini25Flash
Gemini 2.5 Flash - Legacy alias for flash preview
Gemini25FlashLite
Gemini 2.5 Flash Lite - Legacy alias for flash preview (lite)
Gemini25Pro
Gemini 2.5 Pro - Latest most capable Gemini model
GPT5
GPT-5 - Latest most capable OpenAI model (2025-08-07)
GPT5Codex
GPT-5 Codex - Code-focused GPT-5 variant using the Responses API
GPT5Mini
GPT-5 Mini - Latest efficient OpenAI model (2025-08-07)
GPT5Nano
GPT-5 Nano - Latest most cost-effective OpenAI model (2025-08-07)
CodexMiniLatest
Codex Mini Latest - Latest Codex model for code generation (2025-05-16)
OpenAIGptOss20b
GPT-OSS 20B - OpenAI’s open-source 20B parameter model using harmony
OpenAIGptOss120b
GPT-OSS 120B - OpenAI’s open-source 120B parameter model using harmony
ClaudeOpus41
Claude Opus 4.1 - Latest most capable Anthropic model (2025-08-05)
ClaudeSonnet45
Claude Sonnet 4.5 - Latest balanced Anthropic model (2025-10-15)
ClaudeHaiku45
Claude Haiku 4.5 - Latest efficient Anthropic model (2025-10-15)
ClaudeSonnet4
Claude Sonnet 4 - Previous balanced Anthropic model (2025-05-14)
DeepSeekChat
DeepSeek V3.2-Exp Chat - Non-thinking mode
DeepSeekReasoner
DeepSeek V3.2-Exp Reasoner - Thinking mode with deliberate reasoning output
XaiGrok4
Grok-4 - Flagship xAI model with advanced reasoning
XaiGrok4Mini
Grok-4 Mini - Efficient xAI model variant
XaiGrok4Code
Grok-4 Code - Code-focused Grok deployment
XaiGrok4CodeLatest
Grok-4 Code Latest - Latest Grok code model with enhanced reasoning tools
XaiGrok4Vision
Grok-4 Vision - Multimodal Grok model
ZaiGlm46
GLM-4.6 - Latest flagship GLM reasoning model
ZaiGlm45
GLM-4.5 - Balanced GLM release for general tasks
ZaiGlm45Air
GLM-4.5-Air - Efficient GLM variant
ZaiGlm45X
GLM-4.5-X - Enhanced capability GLM variant
ZaiGlm45Airx
GLM-4.5-AirX - Hybrid efficient GLM variant
ZaiGlm45Flash
GLM-4.5-Flash - Low-latency GLM variant
ZaiGlm432b0414128k
GLM-4-32B-0414-128K - Legacy long-context GLM deployment
MoonshotKimiK2TurboPreview
Kimi K2 Turbo Preview - Recommended high-speed K2 deployment
MoonshotKimiK20905Preview
Kimi K2 0905 Preview - Flagship 256K K2 release with enhanced coding agents
MoonshotKimiK20711Preview
Kimi K2 0711 Preview - Long-context K2 release tuned for balanced workloads
MoonshotKimiLatest
Kimi Latest - Auto-tier alias that selects 8K/32K/128K variants automatically
MoonshotKimiLatest8k
Kimi Latest 8K - Vision-enabled 8K tier with automatic context caching
MoonshotKimiLatest32k
Kimi Latest 32K - Vision-enabled mid-tier with extended context
MoonshotKimiLatest128k
Kimi Latest 128K - Vision-enabled flagship tier with maximum context
OllamaGptOss20b
GPT-OSS 20B - Open-weight GPT-OSS 20B model served via Ollama locally
OllamaGptOss120bCloud
GPT-OSS 120B Cloud - Cloud-hosted GPT-OSS 120B served via Ollama Cloud
OllamaQwen317b
Qwen3 1.7B - Qwen3 1.7B model served via Ollama
LmStudioMetaLlama38BInstruct
Meta Llama 3 8B Instruct served locally via LM Studio
LmStudioMetaLlama318BInstruct
Meta Llama 3.1 8B Instruct served locally via LM Studio
LmStudioQwen257BInstruct
Qwen2.5 7B Instruct served locally via LM Studio
LmStudioGemma22BIt
Gemma 2 2B IT served locally via LM Studio
LmStudioGemma29BIt
Gemma 2 9B IT served locally via LM Studio
LmStudioPhi31Mini4kInstruct
Phi-3.1 Mini 4K Instruct served locally via LM Studio
OpenRouterGrokCodeFast1
Grok Code Fast 1 - Fast OpenRouter coding model powered by xAI Grok
OpenRouterGrok4Fast
Grok 4 Fast - Reasoning-focused Grok endpoint with transparent traces
OpenRouterGrok4
Grok 4 - Flagship Grok 4 endpoint exposed through OpenRouter
OpenRouterZaiGlm46
GLM 4.6 - Z.AI GLM 4.6 long-context reasoning model
OpenRouterMoonshotaiKimiK20905
Kimi K2 0905 - MoonshotAI Kimi K2 0905 MoE release optimized for coding agents
OpenRouterMoonshotaiKimiK2Free
Kimi K2 (free) - Community tier for MoonshotAI Kimi K2
OpenRouterQwen3Max
Qwen3 Max - Flagship Qwen3 mixture for general reasoning
OpenRouterQwen3235bA22b
Qwen3 235B A22B - Mixture-of-experts Qwen3 235B general model
OpenRouterQwen3235bA22bFree
Qwen3 235B A22B (free) - Community tier for Qwen3 235B A22B
OpenRouterQwen3235bA22b2507
Qwen3 235B A22B Instruct 2507 - Instruction-tuned Qwen3 235B A22B
OpenRouterQwen3235bA22bThinking2507
Qwen3 235B A22B Thinking 2507 - Deliberative Qwen3 235B A22B reasoning release
OpenRouterQwen332b
Qwen3 32B - Dense 32B Qwen3 deployment
OpenRouterQwen330bA3b
Qwen3 30B A3B - Active-parameter 30B Qwen3 model
OpenRouterQwen330bA3bFree
Qwen3 30B A3B (free) - Community tier for Qwen3 30B A3B
OpenRouterQwen330bA3bInstruct2507
Qwen3 30B A3B Instruct 2507 - Instruction-tuned Qwen3 30B A3B
OpenRouterQwen330bA3bThinking2507
Qwen3 30B A3B Thinking 2507 - Deliberative Qwen3 30B A3B release
OpenRouterQwen314b
Qwen3 14B - Lightweight Qwen3 14B model
OpenRouterQwen314bFree
Qwen3 14B (free) - Community tier for Qwen3 14B
OpenRouterQwen38b
Qwen3 8B - Compact Qwen3 8B deployment
OpenRouterQwen38bFree
Qwen3 8B (free) - Community tier for Qwen3 8B
OpenRouterQwen34bFree
Qwen3 4B (free) - Entry-level Qwen3 4B deployment
OpenRouterQwen3Next80bA3bInstruct
Qwen3 Next 80B A3B Instruct - Next-generation Qwen3 instruction model
OpenRouterQwen3Next80bA3bThinking
Qwen3 Next 80B A3B Thinking - Next-generation Qwen3 reasoning release
OpenRouterQwen3Coder
Qwen3 Coder - Qwen3-based coding model tuned for IDE workflows
OpenRouterQwen3CoderFree
Qwen3 Coder (free) - Community tier for Qwen3 Coder
OpenRouterQwen3CoderPlus
Qwen3 Coder Plus - Premium Qwen3 coding model with long context
OpenRouterQwen3CoderFlash
Qwen3 Coder Flash - Latency-optimized Qwen3 coding model
OpenRouterQwen3Coder30bA3bInstruct
Qwen3 Coder 30B A3B Instruct - Large Mixture-of-Experts coding deployment
OpenRouterDeepSeekV32Exp
DeepSeek V3.2 Exp - Experimental DeepSeek V3.2 listing
OpenRouterDeepSeekChatV31
DeepSeek Chat v3.1 - Advanced DeepSeek model via OpenRouter
OpenRouterDeepSeekR1
DeepSeek R1 - DeepSeek R1 reasoning model with chain-of-thought
OpenRouterDeepSeekChatV31Free
DeepSeek Chat v3.1 (free) - Community tier for DeepSeek Chat v3.1
OpenRouterNvidiaNemotronNano9bV2Free
Nemotron Nano 9B v2 (free) - NVIDIA Nemotron Nano 9B v2 community tier
OpenRouterOpenAIGptOss120b
OpenAI gpt-oss-120b - Open-weight 120B reasoning model via OpenRouter
OpenRouterOpenAIGptOss20b
OpenAI gpt-oss-20b - Open-weight 20B deployment via OpenRouter
OpenRouterOpenAIGptOss20bFree
OpenAI gpt-oss-20b (free) - Community tier for OpenAI gpt-oss-20b
OpenRouterOpenAIGpt5
OpenAI GPT-5 - OpenAI GPT-5 model accessed through OpenRouter
OpenRouterOpenAIGpt5Codex
OpenAI GPT-5 Codex - OpenRouter listing for GPT-5 Codex
OpenRouterOpenAIGpt5Chat
OpenAI GPT-5 Chat - Chat-optimized GPT-5 endpoint without tool use
OpenRouterOpenAIGpt4oSearchPreview
OpenAI GPT-4o Search Preview - GPT-4o search preview endpoint via OpenRouter
OpenRouterOpenAIGpt4oMiniSearchPreview
OpenAI GPT-4o Mini Search Preview - GPT-4o mini search preview endpoint
OpenRouterOpenAIChatgpt4oLatest
OpenAI ChatGPT-4o Latest - ChatGPT 4o latest listing via OpenRouter
OpenRouterAnthropicClaudeSonnet45
Claude Sonnet 4.5 - Anthropic Claude Sonnet 4.5 listing
OpenRouterAnthropicClaudeHaiku45
Claude Haiku 4.5 - Anthropic Claude Haiku 4.5 listing
OpenRouterAnthropicClaudeOpus41
Claude Opus 4.1 - Anthropic Claude Opus 4.1 listing
OpenRouterMinimaxM2Free
MiniMax-M2 (free) - Community tier for MiniMax-M2
Implementations
impl ModelId
pub fn as_str(&self) -> &'static str
Convert the model identifier to its string representation used in API calls and configurations
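A minimal sketch of how `as_str` can map variants to wire identifiers. The variant subset and the exact identifier strings below are illustrative assumptions, not the crate's actual mappings:

```rust
// Hypothetical subset of ModelId, for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ModelId {
    Gemini25Pro,
    GPT5Mini,
    ClaudeSonnet45,
}

impl ModelId {
    /// Map each variant to the identifier string sent in API calls.
    /// Returning &'static str keeps the lookup allocation-free.
    fn as_str(&self) -> &'static str {
        match self {
            ModelId::Gemini25Pro => "gemini-2.5-pro",
            ModelId::GPT5Mini => "gpt-5-mini",
            ModelId::ClaudeSonnet45 => "claude-sonnet-4-5",
        }
    }
}

fn main() {
    assert_eq!(ModelId::Gemini25Pro.as_str(), "gemini-2.5-pro");
    assert_eq!(ModelId::GPT5Mini.as_str(), "gpt-5-mini");
}
```

An exhaustive `match` over the enum means the compiler flags any variant added without a string mapping.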
pub fn supports_reasoning_effort(&self) -> bool
Whether this model supports configurable reasoning effort levels
pub fn display_name(&self) -> &'static str
Get the display name for the model (human-readable)
pub fn description(&self) -> &'static str
Get a description of the model’s characteristics
pub fn openrouter_vendor(&self) -> Option<&'static str>
Return the OpenRouter vendor slug when this identifier maps to a marketplace listing
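A sketch of the `Option`-returning vendor lookup: OpenRouter listings yield a vendor slug, while native-provider variants yield `None`. The slugs and variants here are assumptions for illustration:

```rust
// Hypothetical subset of ModelId, for illustration only.
#[derive(Clone, Copy, Debug)]
enum ModelId {
    ClaudeOpus41,
    OpenRouterAnthropicClaudeOpus41,
    OpenRouterQwen3Coder,
}

impl ModelId {
    /// Return the marketplace vendor slug for OpenRouter listings,
    /// and None for models reached through their native provider.
    fn openrouter_vendor(&self) -> Option<&'static str> {
        match self {
            ModelId::OpenRouterAnthropicClaudeOpus41 => Some("anthropic"),
            ModelId::OpenRouterQwen3Coder => Some("qwen"),
            ModelId::ClaudeOpus41 => None,
        }
    }
}

fn main() {
    assert_eq!(ModelId::ClaudeOpus41.openrouter_vendor(), None);
    assert_eq!(
        ModelId::OpenRouterAnthropicClaudeOpus41.openrouter_vendor(),
        Some("anthropic")
    );
}
```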
pub fn all_models() -> Vec<ModelId>
Get all available models as a vector
pub fn models_for_provider(provider: Provider) -> Vec<ModelId>
Get all models for a specific provider
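One plausible implementation of `models_for_provider` is a filter over `all_models` keyed on a per-variant provider lookup. The `Provider` enum, `provider` helper, and variant subset below are assumptions, not the crate's actual code:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Provider { Gemini, OpenAI, Anthropic }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ModelId { Gemini25Pro, GPT5, ClaudeSonnet45 }

impl ModelId {
    /// Hypothetical helper: which provider owns each variant.
    fn provider(&self) -> Provider {
        match self {
            ModelId::Gemini25Pro => Provider::Gemini,
            ModelId::GPT5 => Provider::OpenAI,
            ModelId::ClaudeSonnet45 => Provider::Anthropic,
        }
    }

    fn all_models() -> Vec<ModelId> {
        vec![ModelId::Gemini25Pro, ModelId::GPT5, ModelId::ClaudeSonnet45]
    }

    /// Filter the full catalogue down to one provider's models.
    fn models_for_provider(provider: Provider) -> Vec<ModelId> {
        ModelId::all_models()
            .into_iter()
            .filter(|m| m.provider() == provider)
            .collect()
    }
}

fn main() {
    let openai = ModelId::models_for_provider(Provider::OpenAI);
    assert_eq!(openai, vec![ModelId::GPT5]);
}
```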
pub fn fallback_models() -> Vec<ModelId>
Get recommended fallback models in order of preference
pub fn default_orchestrator() -> Self
Get the default orchestrator model (more capable)
pub fn default_subagent() -> Self
Get the default subagent model (fast and efficient)
pub fn default_orchestrator_for_provider(provider: Provider) -> Self
Get provider-specific defaults for orchestrator
pub fn default_subagent_for_provider(provider: Provider) -> Self
Get provider-specific defaults for subagent
pub fn default_single_for_provider(provider: Provider) -> Self
Get provider-specific defaults for single agent
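The orchestrator/subagent split pairs a capable model for coordination with a fast one for subtasks. A sketch of how the provider-specific defaults might look; the pairings chosen here are assumptions for illustration:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Provider { OpenAI, Anthropic }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum ModelId { GPT5, GPT5Mini, ClaudeOpus41, ClaudeHaiku45 }

impl ModelId {
    /// A capable model coordinates the overall task.
    fn default_orchestrator_for_provider(provider: Provider) -> Self {
        match provider {
            Provider::OpenAI => ModelId::GPT5,
            Provider::Anthropic => ModelId::ClaudeOpus41,
        }
    }

    /// A fast, efficient model executes delegated subtasks.
    fn default_subagent_for_provider(provider: Provider) -> Self {
        match provider {
            Provider::OpenAI => ModelId::GPT5Mini,
            Provider::Anthropic => ModelId::ClaudeHaiku45,
        }
    }
}

fn main() {
    assert_eq!(
        ModelId::default_orchestrator_for_provider(Provider::Anthropic),
        ModelId::ClaudeOpus41
    );
    assert_eq!(
        ModelId::default_subagent_for_provider(Provider::OpenAI),
        ModelId::GPT5Mini
    );
}
```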
pub fn is_flash_variant(&self) -> bool
Check if this is a “flash” variant (optimized for speed)
pub fn is_pro_variant(&self) -> bool
Check if this is a “pro” variant (optimized for capability)
pub fn is_efficient_variant(&self) -> bool
Check if this is an optimized/efficient variant
pub fn is_top_tier(&self) -> bool
Check if this is a top-tier model
pub fn is_reasoning_variant(&self) -> bool
Determine whether the model is a reasoning-capable variant
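Capability predicates like these are commonly implemented either as substring checks over the wire identifier or as explicit variant matches. A sketch under that assumption (the variants, identifiers, and classification below are illustrative, not the crate's actual logic):

```rust
#[derive(Clone, Copy, Debug)]
enum ModelId { Gemini25Flash, Gemini25Pro, DeepSeekReasoner }

impl ModelId {
    fn as_str(&self) -> &'static str {
        match self {
            ModelId::Gemini25Flash => "gemini-2.5-flash",
            ModelId::Gemini25Pro => "gemini-2.5-pro",
            ModelId::DeepSeekReasoner => "deepseek-reasoner",
        }
    }

    /// One common approach: a substring check over the identifier.
    fn is_flash_variant(&self) -> bool {
        self.as_str().contains("flash")
    }

    /// Another: an explicit allow-list of reasoning-capable variants.
    fn is_reasoning_variant(&self) -> bool {
        matches!(self, ModelId::DeepSeekReasoner)
    }
}

fn main() {
    assert!(ModelId::Gemini25Flash.is_flash_variant());
    assert!(!ModelId::Gemini25Pro.is_flash_variant());
    assert!(ModelId::DeepSeekReasoner.is_reasoning_variant());
}
```

The explicit `matches!` allow-list is safer when identifiers don't encode the capability in their name.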
pub fn supports_tool_calls(&self) -> bool
Determine whether the model supports tool calls/function execution
pub fn generation(&self) -> &'static str
Get the generation/version string for this model
Trait Implementations
impl<'de> Deserialize<'de> for ModelId
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
impl JsonSchema for ModelId
fn schema_name() -> String
fn schema_id() -> Cow<'static, str>
fn json_schema(generator: &mut SchemaGenerator) -> Schema
fn is_referenceable() -> bool
impl Copy for ModelId
impl Eq for ModelId
impl StructuralPartialEq for ModelId
Auto Trait Implementations
impl Freeze for ModelId
impl RefUnwindSafe for ModelId
impl Send for ModelId
impl Sync for ModelId
impl Unpin for ModelId
impl UnwindSafe for ModelId
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.