Trait Model 

pub trait Model: Send + Sync {
    // Required methods
    fn name(&self) -> &'static str;
    fn max_context_tokens(&self) -> usize;
    fn max_output_tokens(&self) -> usize;
    fn estimate_token_count(&self, text: &str) -> usize;

    // Provided methods
    fn estimate_message_tokens(&self, messages: &[Message]) -> usize { ... }
    fn estimate_content_block_tokens(&self, block: &ContentBlock) -> usize { ... }
}

Core model metadata trait

All models implement this trait to describe their capabilities. The trait is provider-agnostic: the same model has the same context window whether it is accessed via Bedrock or Anthropic.
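A minimal sketch of implementing the trait for a hypothetical model type. `ExampleModel` and its limits are illustrative, not a real model from this crate, and the message-level provided methods are omitted for brevity; the token estimate uses the simple ~4-characters-per-token heuristic described below.

```rust
// Required portion of the trait, as shown above (message-level
// provided methods omitted for brevity).
trait Model: Send + Sync {
    fn name(&self) -> &'static str;
    fn max_context_tokens(&self) -> usize;
    fn max_output_tokens(&self) -> usize;
    fn estimate_token_count(&self, text: &str) -> usize;
}

// Hypothetical model type; the limits below are illustrative only.
struct ExampleModel;

impl Model for ExampleModel {
    fn name(&self) -> &'static str {
        "Example Model"
    }
    fn max_context_tokens(&self) -> usize {
        200_000
    }
    fn max_output_tokens(&self) -> usize {
        8_192
    }
    fn estimate_token_count(&self, text: &str) -> usize {
        // Simple heuristic: ~4 characters per token, rounded up.
        text.len().div_ceil(4)
    }
}

fn main() {
    let model = ExampleModel;
    assert_eq!(model.name(), "Example Model");
    // "hello world" is 11 characters -> ceil(11 / 4) = 3 tokens
    assert_eq!(model.estimate_token_count("hello world"), 3);
    println!("ok");
}
```

Because the trait is object-safe, such implementors can also be used behind `&dyn Model` or `Box<dyn Model>` when the concrete model is chosen at runtime.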

Required Methods

fn name(&self) -> &'static str

Human-readable model name (e.g., “Claude Sonnet 4.5”)

fn max_context_tokens(&self) -> usize

Maximum input context tokens

fn max_output_tokens(&self) -> usize

Maximum output tokens the model can generate

fn estimate_token_count(&self, text: &str) -> usize

Estimate token count for text

Models should implement this to provide accurate token estimation. A simple heuristic (~4 characters per token) works reasonably well for most models but can be overridden with actual tokenization.

Provided Methods

fn estimate_message_tokens(&self, messages: &[Message]) -> usize

Estimate tokens for a conversation

Default implementation sums token estimates for all content blocks plus overhead for message structure.
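A sketch of how such a default could work, using simplified stand-in `Message` and `ContentBlock` types. The per-message overhead constant (4 tokens here) and the text-only content block are assumptions for illustration, not the crate's actual values.

```rust
// Simplified stand-in types; the real crate's Message/ContentBlock
// carry more variants (images, tool use, etc.).
enum ContentBlock {
    Text(String),
}

struct Message {
    content: Vec<ContentBlock>,
}

// ~4 characters per token heuristic, rounded up.
fn estimate_token_count(text: &str) -> usize {
    text.len().div_ceil(4)
}

fn estimate_content_block_tokens(block: &ContentBlock) -> usize {
    match block {
        ContentBlock::Text(t) => estimate_token_count(t),
    }
}

fn estimate_message_tokens(messages: &[Message]) -> usize {
    // Hypothetical structural overhead per message (role markers,
    // delimiters); the actual constant is an assumption here.
    const PER_MESSAGE_OVERHEAD: usize = 4;
    messages
        .iter()
        .map(|m| {
            PER_MESSAGE_OVERHEAD
                + m.content
                    .iter()
                    .map(estimate_content_block_tokens)
                    .sum::<usize>()
        })
        .sum()
}

fn main() {
    let msgs = vec![Message {
        content: vec![ContentBlock::Text("hello world".into())],
    }];
    // ceil(11 / 4) = 3 content tokens + 4 overhead = 7
    assert_eq!(estimate_message_tokens(&msgs), 7);
    println!("ok");
}
```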

fn estimate_content_block_tokens(&self, block: &ContentBlock) -> usize

Estimate tokens for a single content block

Implementors

impl Model for Claude3_7Sonnet
impl Model for ClaudeHaiku4_5
impl Model for ClaudeOpus4
impl Model for ClaudeOpus4_1
impl Model for ClaudeOpus4_5
impl Model for ClaudeOpus4_6
impl Model for ClaudeSonnet4
impl Model for ClaudeSonnet4_5
impl Model for CohereCommandRPlus
impl Model for DeepSeekR1
impl Model for DeepSeekV3_1
impl Model for DeepSeekV3_2
impl Model for Gemma3_4B
impl Model for Gemma3_12B
impl Model for Gemma3_27B
impl Model for KimiK2Thinking
impl Model for KimiK2_5
impl Model for Llama3_1_8B
impl Model for Llama3_1_70B
impl Model for Llama3_1_405B
impl Model for Llama3_2_1B
impl Model for Llama3_2_3B
impl Model for Llama3_2_11B
impl Model for Llama3_2_90B
impl Model for Llama3_3_70B
impl Model for Llama4Maverick17B
impl Model for Llama4Scout17B
impl Model for MagistralSmall
impl Model for Ministral3B
impl Model for Ministral8B
impl Model for Ministral14B
impl Model for MistralLarge3
impl Model for Nova2Lite
impl Model for Nova2Sonic
impl Model for NovaLite
impl Model for NovaMicro
impl Model for NovaPremier
impl Model for NovaPro
impl Model for PixtralLarge
impl Model for Qwen3Coder30B
impl Model for Qwen3Coder480B
impl Model for Qwen3Next80B
impl Model for Qwen3VL235B
impl Model for Qwen3_32B
impl Model for Qwen3_235B
impl Model for VoxtralMini3B
impl Model for VoxtralSmall24B