Enum ModelId

Source
pub enum ModelId {
Gemini3ProPreview, Gemini3FlashPreview, GPT5, GPT52, GPT52Codex, GPT5Codex, GPT5Mini, GPT5Nano, GPT51, GPT51Codex, GPT51CodexMax, GPT51Mini, CodexMiniLatest, OpenAIGptOss20b, OpenAIGptOss120b, ClaudeOpus46, ClaudeOpus45, ClaudeOpus41, ClaudeSonnet45, ClaudeHaiku45, ClaudeSonnet4, ClaudeOpus4, ClaudeSonnet37, ClaudeHaiku35, DeepSeekChat, DeepSeekReasoner, HuggingFaceDeepseekV32, HuggingFaceOpenAIGptOss20b, HuggingFaceOpenAIGptOss120b, HuggingFaceDeepseekV32Novita, HuggingFaceXiaomiMimoV2FlashNovita, HuggingFaceMinimaxM25Novita, HuggingFaceGlm5Novita, HuggingFaceQwen3CoderNextNovita, XaiGrok4, XaiGrok4Mini, XaiGrok4Code, XaiGrok4CodeLatest, XaiGrok4Vision, ZaiGlm5, MoonshotMinimaxM25, MoonshotQwen3CoderNext, OllamaGptOss20b, OllamaGptOss20bCloud, OllamaGptOss120bCloud, OllamaQwen317b, OllamaDeepseekV32Cloud, OllamaQwen3Next80bCloud, OllamaMistralLarge3675bCloud, OllamaQwen3Coder480bCloud, OllamaGemini3ProPreviewLatestCloud, OllamaDevstral2123bCloud, OllamaMinimaxM2Cloud, OllamaGlm5Cloud, OllamaMinimaxM25Cloud, OllamaGemini3FlashPreviewCloud, OllamaNemotron3Nano30bCloud, MinimaxM25, MinimaxM2, LmStudioMetaLlama38BInstruct, LmStudioMetaLlama318BInstruct, LmStudioQwen257BInstruct, LmStudioGemma22BIt, LmStudioGemma29BIt, LmStudioPhi31Mini4kInstruct, OpenRouterGrokCodeFast1, OpenRouterGrok4Fast, OpenRouterGrok41Fast, OpenRouterGrok4, OpenRouterQwen3Max, OpenRouterQwen3235bA22b, OpenRouterQwen3235bA22b2507, OpenRouterQwen3235bA22bThinking2507, OpenRouterQwen332b, OpenRouterQwen330bA3b, OpenRouterQwen330bA3bInstruct2507, OpenRouterQwen330bA3bThinking2507, OpenRouterQwen314b, OpenRouterQwen38b, OpenRouterQwen3Next80bA3bInstruct, OpenRouterQwen3Next80bA3bThinking, OpenRouterQwen35Plus0215, OpenRouterQwen3Coder, OpenRouterQwen3CoderPlus, OpenRouterQwen3CoderFlash, OpenRouterQwen3Coder30bA3bInstruct, OpenRouterQwen3CoderNext, OpenRouterDeepseekChat, OpenRouterDeepSeekV32, OpenRouterDeepseekReasoner, OpenRouterDeepSeekV32Speciale, OpenRouterDeepSeekV32Exp,
OpenRouterDeepSeekChatV31, OpenRouterDeepSeekR1, OpenRouterOpenAIGptOss120b, OpenRouterOpenAIGptOss120bFree, OpenRouterOpenAIGptOss20b, OpenRouterOpenAIGpt5, OpenRouterOpenAIGpt5Codex, OpenRouterOpenAIGpt5Chat, OpenRouterAnthropicClaudeSonnet45, OpenRouterAnthropicClaudeHaiku45, OpenRouterAnthropicClaudeOpus41, OpenRouterAmazonNova2LiteV1, OpenRouterMistralaiMistralLarge2512, OpenRouterNexAgiDeepseekV31NexN1, OpenRouterOpenAIGpt51, OpenRouterOpenAIGpt51Codex, OpenRouterOpenAIGpt51CodexMax, OpenRouterOpenAIGpt51CodexMini, OpenRouterOpenAIGpt51Chat, OpenRouterOpenAIGpt52, OpenRouterOpenAIGpt52Chat, OpenRouterOpenAIGpt52Codex, OpenRouterOpenAIGpt52Pro, OpenRouterOpenAIO1Pro, OpenRouterStepfunStep35FlashFree, OpenRouterZaiGlm5, OpenRouterMoonshotaiKimiK20905, OpenRouterMoonshotaiKimiK2Thinking, OpenRouterMoonshotaiKimiK25, OpenRouterMinimaxM25,
}

Centralized enum for all supported model identifiers
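The identifier's string form is what goes over the wire; `as_str` (documented below) performs that mapping. A minimal self-contained sketch of the pattern, using a two-variant stand-in; the string values here are illustrative assumptions, not the crate's real mappings:

```rust
/// Stand-in mirroring the documented `ModelId::as_str` API
/// (the real crate defines 122 variants).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ModelId {
    GPT5,
    ClaudeSonnet45,
}

impl ModelId {
    /// String representation used in API calls and configurations.
    pub fn as_str(&self) -> &'static str {
        match self {
            ModelId::GPT5 => "gpt-5",
            ModelId::ClaudeSonnet45 => "claude-sonnet-4-5",
        }
    }
}

fn main() {
    // A request body carries the string form, not the enum.
    let model = ModelId::GPT5;
    println!("model = {}", model.as_str());
}
```

Because `as_str` returns `&'static str`, the mapping costs nothing at runtime and the enum stays `Copy`.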

Variants§

§

Gemini3ProPreview

Gemini 3 Pro Preview - Preview of next-generation Gemini model

§

Gemini3FlashPreview

Gemini 3 Flash Preview - Google's most intelligent model built for speed, combining frontier intelligence with superior search and grounding

§

GPT5

GPT-5 - Latest most capable OpenAI model (2025-08-07)

§

GPT52

GPT-5.2 - Latest flagship general-purpose OpenAI model (2025-12-11)

§

GPT52Codex

GPT-5.2 Codex - Code-focused GPT-5.2 variant optimized for agentic coding

§

GPT5Codex

GPT-5 Codex - Code-focused GPT-5 variant using the Responses API

§

GPT5Mini

GPT-5 Mini - Latest efficient OpenAI model (2025-08-07)

§

GPT5Nano

GPT-5 Nano - Latest most cost-effective OpenAI model (2025-08-07)

§

GPT51

GPT-5.1 - Enhanced latest most capable OpenAI model with improved reasoning (2025-11-14)

§

GPT51Codex

GPT-5.1 Codex - Code-focused GPT-5.1 variant using the Responses API

§

GPT51CodexMax

GPT-5.1 Codex Max - Maximum context code-focused GPT-5.1 variant

§

GPT51Mini

GPT-5.1 Mini - Enhanced efficient OpenAI model with improved capabilities (2025-11-14)

§

CodexMiniLatest

Codex Mini Latest - Latest Codex model for code generation (2025-05-16)

§

OpenAIGptOss20b

GPT-OSS 20B - OpenAI’s open-source 20B-parameter model using the harmony response format

§

OpenAIGptOss120b

GPT-OSS 120B - OpenAI’s open-source 120B-parameter model using the harmony response format

§

ClaudeOpus46

Claude Opus 4.6 - Next-gen flagship Anthropic model with extended thinking

§

ClaudeOpus45

Claude Opus 4.5 - Latest flagship Anthropic model with exceptional reasoning (2025-11-01)

§

ClaudeOpus41

Claude Opus 4.1 - Previous most capable Anthropic model (2025-08-05)

§

ClaudeSonnet45

Claude Sonnet 4.5 - Latest balanced Anthropic model (2025-10-15)

§

ClaudeHaiku45

Claude Haiku 4.5 - Latest efficient Anthropic model (2025-10-15)

§

ClaudeSonnet4

Claude Sonnet 4 - Previous balanced Anthropic model (2025-05-14)

§

ClaudeOpus4

Claude Opus 4 - Previous flagship model (2025-05-14)

§

ClaudeSonnet37

Claude Sonnet 3.7 - Latest Claude 3 Sonnet (2025-02-19)

§

ClaudeHaiku35

Claude Haiku 3.5 - Latest Claude 3 Haiku (2024-10-22)

§

DeepSeekChat

DeepSeek V3.2 Chat - Fast non-thinking mode

§

DeepSeekReasoner

DeepSeek V3.2 Reasoner - Thinking mode with structured reasoning output

§

HuggingFaceDeepseekV32

DeepSeek V3.2 via Hugging Face router

§

HuggingFaceOpenAIGptOss20b

OpenAI GPT-OSS 20B via Hugging Face router

§

HuggingFaceOpenAIGptOss120b

OpenAI GPT-OSS 120B via Hugging Face router

§

HuggingFaceDeepseekV32Novita

DeepSeek V3.2 via Novita on Hugging Face router

§

HuggingFaceXiaomiMimoV2FlashNovita

Xiaomi MiMo-V2-Flash via Novita on Hugging Face router

§

HuggingFaceMinimaxM25Novita

MiniMax M2.5 via Novita on Hugging Face router

§

HuggingFaceGlm5Novita

Z.AI GLM-5 via Novita on Hugging Face router

§

HuggingFaceQwen3CoderNextNovita

Qwen3-Coder-Next via Novita inference provider on Hugging Face router

§

XaiGrok4

Grok-4 - Flagship xAI model with advanced reasoning

§

XaiGrok4Mini

Grok-4 Mini - Efficient xAI model variant

§

XaiGrok4Code

Grok-4 Code - Code-focused Grok deployment

§

XaiGrok4CodeLatest

Grok-4 Code Latest - Latest Grok code model with enhanced reasoning tools

§

XaiGrok4Vision

Grok-4 Vision - Multimodal Grok model

§

ZaiGlm5

GLM-5 - Flagship Z.ai foundation model for complex systems

§

MoonshotMinimaxM25

MiniMax-M2.5 - MiniMax model served via Moonshot API

§

MoonshotQwen3CoderNext

Qwen3-Coder-Next - Qwen3 Coder Next model served via Moonshot API

§

OllamaGptOss20b

GPT-OSS 20B - Open-weight GPT-OSS 20B model served via Ollama locally

§

OllamaGptOss20bCloud

GPT-OSS 20B Cloud - Cloud-hosted GPT-OSS 20B served via Ollama Cloud

§

OllamaGptOss120bCloud

GPT-OSS 120B Cloud - Cloud-hosted GPT-OSS 120B served via Ollama Cloud

§

OllamaQwen317b

Qwen3 1.7B - Qwen3 1.7B model served via Ollama

§

OllamaDeepseekV32Cloud

DeepSeek V3.2 Cloud - DeepSeek V3.2 reasoning deployment via Ollama Cloud

§

OllamaQwen3Next80bCloud

Qwen3 Next 80B Cloud - Next-generation Qwen3 80B via Ollama Cloud

§

OllamaMistralLarge3675bCloud

Mistral Large 3 675B Cloud - Mistral Large 3 reasoning model via Ollama Cloud

§

OllamaQwen3Coder480bCloud

Qwen3 Coder 480B Cloud - Cloud-hosted Qwen3 Coder model served via Ollama Cloud

§

OllamaGemini3ProPreviewLatestCloud

Gemini 3 Pro Preview Latest Cloud - Google Gemini 3 Pro Preview via Ollama Cloud

§

OllamaDevstral2123bCloud

Devstral 2 123B Cloud - Mistral Devstral 2 123B model via Ollama Cloud

§

OllamaMinimaxM2Cloud

MiniMax-M2 Cloud - Cloud-hosted MiniMax-M2 model served via Ollama Cloud

§

OllamaGlm5Cloud

GLM-5 Cloud - Cloud-hosted GLM-5 model served via Ollama Cloud

§

OllamaMinimaxM25Cloud

MiniMax-M2.5 Cloud - Cloud-hosted MiniMax-M2.5 model served via Ollama Cloud

§

OllamaGemini3FlashPreviewCloud

Gemini 3 Flash Preview Cloud - Google Gemini 3 Flash Preview via Ollama Cloud

§

OllamaNemotron3Nano30bCloud

Nemotron-3-Nano 30B Cloud - NVIDIA Nemotron-3-Nano 30B via Ollama Cloud

§

MinimaxM25

MiniMax-M2.5 - Latest MiniMax model with further improvements in reasoning and coding

§

MinimaxM2

MiniMax-M2 - MiniMax reasoning-focused model

§

LmStudioMetaLlama38BInstruct

Meta Llama 3 8B Instruct served locally via LM Studio

§

LmStudioMetaLlama318BInstruct

Meta Llama 3.1 8B Instruct served locally via LM Studio

§

LmStudioQwen257BInstruct

Qwen2.5 7B Instruct served locally via LM Studio

§

LmStudioGemma22BIt

Gemma 2 2B IT served locally via LM Studio

§

LmStudioGemma29BIt

Gemma 2 9B IT served locally via LM Studio

§

LmStudioPhi31Mini4kInstruct

Phi-3.1 Mini 4K Instruct served locally via LM Studio

§

OpenRouterGrokCodeFast1

Grok Code Fast 1 - Fast OpenRouter coding model powered by xAI Grok

§

OpenRouterGrok4Fast

Grok 4 Fast - Reasoning-focused Grok endpoint with transparent traces

§

OpenRouterGrok41Fast

Grok 4.1 Fast - Enhanced Grok 4.1 fast inference with improved reasoning

§

OpenRouterGrok4

Grok 4 - Flagship Grok 4 endpoint exposed through OpenRouter

§

OpenRouterQwen3Max

Qwen3 Max - Flagship Qwen3 mixture for general reasoning

§

OpenRouterQwen3235bA22b

Qwen3 235B A22B - Mixture-of-experts Qwen3 235B general model

§

OpenRouterQwen3235bA22b2507

Qwen3 235B A22B Instruct 2507 - Instruction-tuned Qwen3 235B A22B

§

OpenRouterQwen3235bA22bThinking2507

Qwen3 235B A22B Thinking 2507 - Deliberative Qwen3 235B A22B reasoning release

§

OpenRouterQwen332b

Qwen3 32B - Dense 32B Qwen3 deployment

§

OpenRouterQwen330bA3b

Qwen3 30B A3B - Active-parameter 30B Qwen3 model

§

OpenRouterQwen330bA3bInstruct2507

Qwen3 30B A3B Instruct 2507 - Instruction-tuned Qwen3 30B A3B

§

OpenRouterQwen330bA3bThinking2507

Qwen3 30B A3B Thinking 2507 - Deliberative Qwen3 30B A3B release

§

OpenRouterQwen314b

Qwen3 14B - Lightweight Qwen3 14B model

§

OpenRouterQwen38b

Qwen3 8B - Compact Qwen3 8B deployment

§

OpenRouterQwen3Next80bA3bInstruct

Qwen3 Next 80B A3B Instruct - Next-generation Qwen3 instruction model

§

OpenRouterQwen3Next80bA3bThinking

Qwen3 Next 80B A3B Thinking - Next-generation Qwen3 reasoning release

§

OpenRouterQwen35Plus0215

Qwen3.5-397B-A17B - Native vision-language model with linear attention and sparse MoE, 1M context window

§

OpenRouterQwen3Coder

Qwen3 Coder - Qwen3-based coding model tuned for IDE workflows

§

OpenRouterQwen3CoderPlus

Qwen3 Coder Plus - Premium Qwen3 coding model with long context

§

OpenRouterQwen3CoderFlash

Qwen3 Coder Flash - Latency-optimized Qwen3 coding model

§

OpenRouterQwen3Coder30bA3bInstruct

Qwen3 Coder 30B A3B Instruct - Large Mixture-of-Experts coding deployment

§

OpenRouterQwen3CoderNext

Qwen3 Coder Next - Next-generation Qwen3 coding model with enhanced reasoning

§

OpenRouterDeepseekChat

DeepSeek V3.2 Chat - Official chat model via OpenRouter

§

OpenRouterDeepSeekV32

DeepSeek V3.2 - Standard model with thinking support via OpenRouter

§

OpenRouterDeepseekReasoner

DeepSeek V3.2 Reasoner - Thinking mode via OpenRouter

§

OpenRouterDeepSeekV32Speciale

DeepSeek V3.2 Speciale - Enhanced reasoning model (no tool-use)

§

OpenRouterDeepSeekV32Exp

DeepSeek V3.2 Exp - Experimental DeepSeek V3.2 listing

§

OpenRouterDeepSeekChatV31

DeepSeek Chat v3.1 - Advanced DeepSeek model via OpenRouter

§

OpenRouterDeepSeekR1

DeepSeek R1 - DeepSeek R1 reasoning model with chain-of-thought

§

OpenRouterOpenAIGptOss120b

OpenAI gpt-oss-120b - Open-weight 120B reasoning model via OpenRouter

§

OpenRouterOpenAIGptOss120bFree

OpenAI gpt-oss-120b:free - Open-weight 120B reasoning model free tier via OpenRouter

§

OpenRouterOpenAIGptOss20b

OpenAI gpt-oss-20b - Open-weight 20B deployment via OpenRouter

§

OpenRouterOpenAIGpt5

OpenAI GPT-5 - OpenAI GPT-5 model accessed through OpenRouter

§

OpenRouterOpenAIGpt5Codex

OpenAI GPT-5 Codex - OpenRouter listing for GPT-5 Codex

§

OpenRouterOpenAIGpt5Chat

OpenAI GPT-5 Chat - Chat-optimized GPT-5 endpoint without tool use

§

OpenRouterAnthropicClaudeSonnet45

Claude Sonnet 4.5 - Anthropic Claude Sonnet 4.5 listing

§

OpenRouterAnthropicClaudeHaiku45

Claude Haiku 4.5 - Anthropic Claude Haiku 4.5 listing

§

OpenRouterAnthropicClaudeOpus41

Claude Opus 4.1 - Anthropic Claude Opus 4.1 listing

§

OpenRouterAmazonNova2LiteV1

Amazon Nova 2 Lite - Amazon Nova 2 Lite model via OpenRouter

§

OpenRouterMistralaiMistralLarge2512

Mistral Large 3 2512 - Mistral Large 3 2512 model via OpenRouter

§

OpenRouterNexAgiDeepseekV31NexN1

DeepSeek V3.1 Nex N1 - Nex AGI DeepSeek V3.1 Nex N1 model via OpenRouter

§

OpenRouterOpenAIGpt51

OpenAI GPT-5.1 - OpenAI GPT-5.1 model accessed through OpenRouter

§

OpenRouterOpenAIGpt51Codex

OpenAI GPT-5.1-Codex - OpenRouter listing for GPT-5.1 Codex

§

OpenRouterOpenAIGpt51CodexMax

OpenAI GPT-5.1-Codex-Max - OpenRouter listing for GPT-5.1 Codex Max

§

OpenRouterOpenAIGpt51CodexMini

OpenAI GPT-5.1-Codex-Mini - OpenRouter listing for GPT-5.1 Codex Mini

§

OpenRouterOpenAIGpt51Chat

OpenAI GPT-5.1 Chat - Chat-optimized GPT-5.1 endpoint without tool use

§

OpenRouterOpenAIGpt52

OpenAI GPT-5.2 - OpenAI GPT-5.2 model accessed through OpenRouter

§

OpenRouterOpenAIGpt52Chat

OpenAI GPT-5.2 Chat - Chat-optimized GPT-5.2 endpoint without tool use

§

OpenRouterOpenAIGpt52Codex

OpenAI GPT-5.2-Codex - OpenRouter listing for GPT-5.2 Codex

§

OpenRouterOpenAIGpt52Pro

OpenAI GPT-5.2 Pro - Professional tier GPT-5.2 model accessed through OpenRouter

§

OpenRouterOpenAIO1Pro

OpenAI o1-pro - OpenAI o1-pro advanced reasoning model via OpenRouter

§

OpenRouterStepfunStep35FlashFree

Step 3.5 Flash (free) - StepFun’s most capable open-source reasoning model via OpenRouter

§

OpenRouterZaiGlm5

GLM-5 - Z.AI GLM-5 flagship foundation model via OpenRouter

§

OpenRouterMoonshotaiKimiK20905

MoonshotAI: Kimi K2 0905 - MoonshotAI Kimi K2 0905 MoE release optimized for coding agents

§

OpenRouterMoonshotaiKimiK2Thinking

MoonshotAI: Kimi K2 Thinking - MoonshotAI reasoning-tier Kimi K2 release optimized for long-horizon agents

§

OpenRouterMoonshotaiKimiK25

MoonshotAI: Kimi K2.5 - MoonshotAI Kimi K2.5 multimodal model with long-context and reasoning capabilities via OpenRouter

§

OpenRouterMinimaxM25

MiniMax-M2.5 - MiniMax flagship model via OpenRouter

Implementations§

Source§

impl ModelId

Source

pub fn as_str(&self) -> &'static str

Convert the model identifier to its string representation used in API calls and configurations

Source§

impl ModelId

Source

pub fn is_flash_variant(&self) -> bool

Check if this is a “flash” variant (optimized for speed)

Source

pub fn is_pro_variant(&self) -> bool

Check if this is a “pro” variant (optimized for capability)

Source

pub fn is_efficient_variant(&self) -> bool

Check if this is an optimized/efficient variant

Source

pub fn is_top_tier(&self) -> bool

Check if this is a top-tier model

Source

pub fn is_reasoning_variant(&self) -> bool

Determine whether the model is a reasoning-capable variant

Source

pub fn supports_tool_calls(&self) -> bool

Determine whether the model supports tool calls/function execution

Source

pub fn generation(&self) -> &'static str

Get the generation/version string for this model
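The predicate helpers above let callers branch on model characteristics without matching 122 variants themselves. A self-contained sketch of how such predicates compose, on a trimmed-down stand-in enum; the substring-based classification here is an illustrative assumption, not the crate's actual implementation:

```rust
#[derive(Debug, Clone, Copy)]
enum ModelId {
    Gemini3FlashPreview,
    GPT5Mini,
    ClaudeOpus45,
}

impl ModelId {
    fn as_str(&self) -> &'static str {
        match self {
            ModelId::Gemini3FlashPreview => "gemini-3-flash-preview",
            ModelId::GPT5Mini => "gpt-5-mini",
            ModelId::ClaudeOpus45 => "claude-opus-4-5",
        }
    }

    /// Mirrors `is_flash_variant`: speed-optimized tiers.
    fn is_flash_variant(&self) -> bool {
        self.as_str().contains("flash")
    }

    /// Mirrors `is_efficient_variant`: flash plus mini/nano tiers.
    fn is_efficient_variant(&self) -> bool {
        let s = self.as_str();
        self.is_flash_variant() || s.contains("mini") || s.contains("nano")
    }
}

fn main() {
    for m in [ModelId::Gemini3FlashPreview, ModelId::GPT5Mini, ModelId::ClaudeOpus45] {
        println!("{} efficient: {}", m.as_str(), m.is_efficient_variant());
    }
}
```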

Source§

impl ModelId

Source

pub fn openrouter_vendor(&self) -> Option<&'static str>

Return the OpenRouter vendor slug when this identifier maps to a marketplace listing

Source

pub fn all_models() -> Vec<ModelId>

Get all available models as a vector

Source

pub fn models_for_provider(provider: Provider) -> Vec<ModelId>

Get all models for a specific provider
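`models_for_provider` is naturally expressible as a filter over `all_models` using each model's `provider()`. A self-contained sketch of that relationship with a three-variant stand-in; the `Provider` variants and pairings below are illustrative assumptions:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Provider {
    OpenAI,
    Anthropic,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ModelId {
    GPT5,
    GPT5Mini,
    ClaudeOpus45,
}

impl ModelId {
    /// Mirrors `all_models`: the full registry as a vector.
    fn all_models() -> Vec<ModelId> {
        vec![ModelId::GPT5, ModelId::GPT5Mini, ModelId::ClaudeOpus45]
    }

    /// Mirrors `provider`: which backend serves this model.
    fn provider(&self) -> Provider {
        match self {
            ModelId::GPT5 | ModelId::GPT5Mini => Provider::OpenAI,
            ModelId::ClaudeOpus45 => Provider::Anthropic,
        }
    }

    /// Mirrors `models_for_provider`: filter the registry.
    fn models_for_provider(provider: Provider) -> Vec<ModelId> {
        Self::all_models()
            .into_iter()
            .filter(|m| m.provider() == provider)
            .collect()
    }
}

fn main() {
    let openai = ModelId::models_for_provider(Provider::OpenAI);
    println!("{} OpenAI models", openai.len());
}
```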

Source§

impl ModelId

Source

pub fn fallback_models() -> Vec<ModelId>

Get recommended fallback models in order of preference

Source

pub fn default_orchestrator() -> Self

Get the default orchestrator model (more capable)

Source

pub fn default_subagent() -> Self

Get the default subagent model (fast and efficient)

Source

pub fn default_orchestrator_for_provider(provider: Provider) -> Self

Get provider-specific defaults for orchestrator

Source

pub fn default_subagent_for_provider(provider: Provider) -> Self

Get provider-specific defaults for subagent

Source

pub fn default_single_for_provider(provider: Provider) -> Self

Get provider-specific defaults for single agent
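The orchestrator/subagent split pairs a capable model for planning with a cheap, fast model for delegated work. A self-contained sketch of the provider-specific default pattern; the concrete model choices per provider are illustrative assumptions, not the crate's real defaults:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Provider {
    OpenAI,
    Anthropic,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ModelId {
    GPT5,
    GPT5Mini,
    ClaudeOpus45,
    ClaudeHaiku45,
}

impl ModelId {
    /// Mirrors `default_orchestrator_for_provider`: a capable
    /// model for planning and coordination.
    fn default_orchestrator_for_provider(p: Provider) -> Self {
        match p {
            Provider::OpenAI => ModelId::GPT5,
            Provider::Anthropic => ModelId::ClaudeOpus45,
        }
    }

    /// Mirrors `default_subagent_for_provider`: a fast,
    /// cost-effective model for delegated subtasks.
    fn default_subagent_for_provider(p: Provider) -> Self {
        match p {
            Provider::OpenAI => ModelId::GPT5Mini,
            Provider::Anthropic => ModelId::ClaudeHaiku45,
        }
    }
}

fn main() {
    let p = Provider::Anthropic;
    println!(
        "orchestrator: {:?}, subagent: {:?}",
        ModelId::default_orchestrator_for_provider(p),
        ModelId::default_subagent_for_provider(p)
    );
}
```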

Source§

impl ModelId

Source

pub fn description(&self) -> &'static str

Get a description of the model’s characteristics

Source§

impl ModelId

Source

pub fn display_name(&self) -> &'static str

Get the display name for the model (human-readable)

Source§

impl ModelId

Source

pub fn provider(&self) -> Provider

Get the provider for this model

Source

pub fn supports_reasoning_effort(&self) -> bool

Whether this model supports configurable reasoning effort levels

Trait Implementations§

Source§

impl Clone for ModelId

Source§

fn clone(&self) -> ModelId

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for ModelId

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Default for ModelId

Source§

fn default() -> ModelId

Returns the “default value” for a type. Read more
Source§

impl<'de> Deserialize<'de> for ModelId

Source§

fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where __D: Deserializer<'de>,

Deserialize this value from the given Serde deserializer. Read more
Source§

impl Display for ModelId

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl FromStr for ModelId

Source§

type Err = ModelParseError

The associated error which can be returned from parsing.
Source§

fn from_str(s: &str) -> Result<Self, Self::Err>

Parses a string s to return a value of this type. Read more
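The `FromStr` impl (with `ModelParseError` as its error type, per the associated `Err` above) lets callers go from a configuration string back to a `ModelId` via `str::parse`. A self-contained sketch of that pattern; the accepted strings and the shape of `ModelParseError` here are illustrative assumptions:

```rust
use std::str::FromStr;

/// Stand-in for the crate's `ModelParseError`, carrying the
/// unrecognized input (assumed shape, for illustration).
#[derive(Debug, PartialEq, Eq)]
struct ModelParseError(String);

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ModelId {
    GPT5,
    ClaudeOpus45,
}

impl FromStr for ModelId {
    type Err = ModelParseError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "gpt-5" => Ok(ModelId::GPT5),
            "claude-opus-4-5" => Ok(ModelId::ClaudeOpus45),
            other => Err(ModelParseError(other.to_string())),
        }
    }
}

fn main() {
    // `str::parse` dispatches through the `FromStr` impl.
    let id: ModelId = "gpt-5".parse().expect("known model id");
    println!("{:?}", id);
}
```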
Source§

impl Hash for ModelId

Source§

fn hash<__H: Hasher>(&self, state: &mut __H)

Feeds this value into the given Hasher. Read more
1.3.0 · Source§

fn hash_slice<H>(data: &[Self], state: &mut H)
where H: Hasher, Self: Sized,

Feeds a slice of this type into the given Hasher. Read more
Source§

impl JsonSchema for ModelId

Source§

fn schema_name() -> Cow<'static, str>

The name of the generated JSON Schema. Read more
Source§

fn schema_id() -> Cow<'static, str>

Returns a string that uniquely identifies the schema produced by this type. Read more
Source§

fn json_schema(generator: &mut SchemaGenerator) -> Schema

Generates a JSON Schema for this type. Read more
Source§

fn inline_schema() -> bool

Whether JSON Schemas generated for this type should be included directly in parent schemas, rather than being re-used where possible using the $ref keyword. Read more
Source§

impl PartialEq for ModelId

Source§

fn eq(&self, other: &ModelId) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl Serialize for ModelId

Source§

fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>
where __S: Serializer,

Serialize this value into the given Serde serializer. Read more
Source§

impl Copy for ModelId

Source§

impl Eq for ModelId

Source§

impl StructuralPartialEq for ModelId

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> DynClone for T
where T: Clone,

Source§

fn __clone_box(&self, _: Private) -> *mut ()

Source§

impl<Q, K> Equivalent<K> for Q
where Q: Eq + ?Sized, K: Borrow<Q> + ?Sized,

Source§

fn equivalent(&self, key: &K) -> bool

Compare self to key and return true if they are equal.
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T> Instrument for T

Source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<T> PolicyExt for T
where T: ?Sized,

Source§

fn and<P, B, E>(self, other: P) -> And<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Create a new Policy that returns Action::Follow only if self and other return Action::Follow. Read more
Source§

fn or<P, B, E>(self, other: P) -> Or<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Create a new Policy that returns Action::Follow if either self or other returns Action::Follow. Read more
Source§

impl<T> Same for T

Source§

type Output = T

Should always be Self
Source§

impl<T> ToCompactString for T
where T: Display,

Source§

impl<T> ToLine for T
where T: Display,

Source§

fn to_line(&self) -> Line<'_>

Converts the value to a Line.
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T> ToSpan for T
where T: Display,

Source§

fn to_span(&self) -> Span<'_>

Converts the value to a Span.
Source§

impl<T> ToString for T
where T: Display + ?Sized,

Source§

fn to_string(&self) -> String

Converts the given value to a String. Read more
Source§

impl<T> ToStringFallible for T
where T: Display,

Source§

fn try_to_string(&self) -> Result<String, TryReserveError>

ToString::to_string, but without panic on OOM.

Source§

impl<T> ToText for T
where T: Display,

Source§

fn to_text(&self) -> Text<'_>

Converts the value to a Text.
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

Source§

fn vzip(self) -> V

Source§

impl<T> WithSubscriber for T

Source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

impl<T> DeserializeOwned for T
where T: for<'de> Deserialize<'de>,