
Module llm


LLM Provider and Model Configuration

This module provides the configuration types for LLM providers and models.

§Provider Configuration

Providers are configured in a providers HashMap, where each key becomes the model prefix used to route requests to the correct provider.

§Built-in Providers

  • openai - OpenAI API
  • anthropic - Anthropic API (supports OAuth via access_token)
  • gemini - Google Gemini API

For built-in providers, you can use the model name directly without a prefix:

  • claude-sonnet-4-5 → auto-detected as Anthropic
  • gpt-4 → auto-detected as OpenAI
  • gemini-2.5-pro → auto-detected as Gemini

§Custom Providers

Any OpenAI-compatible API can be configured using type = "custom". The provider key becomes the model prefix.

§Model Routing

Models can be specified with or without a provider prefix:

  • claude-sonnet-4-5 → auto-detected as anthropic provider
  • anthropic/claude-sonnet-4-5 → explicit anthropic provider
  • offline/llama3 → routes to the offline custom provider, sends llama3 to the API
  • custom/anthropic/claude-opus → routes to custom provider, sends anthropic/claude-opus to the API
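The routing rules above can be sketched as follows. This is a simplified illustration, not the module's actual implementation: the route_model function, its signature, and the name-based detection prefixes are assumptions for the sake of the example.

```rust
/// Resolve a model string to a (provider key, model name) pair.
///
/// If the text before the first '/' matches a configured provider key,
/// that prefix is stripped and the remainder is sent to the API.
/// Otherwise the whole string is treated as a model name and a
/// built-in provider is auto-detected from it.
fn route_model<'a>(model: &'a str, provider_keys: &[&str]) -> (String, &'a str) {
    // split_once splits at the FIRST '/', so "custom/anthropic/claude-opus"
    // yields prefix "custom" and rest "anthropic/claude-opus".
    if let Some((prefix, rest)) = model.split_once('/') {
        if provider_keys.contains(&prefix) {
            return (prefix.to_string(), rest);
        }
    }
    // No explicit prefix: auto-detect a built-in provider by model name.
    let provider = if model.starts_with("claude") {
        "anthropic"
    } else if model.starts_with("gpt") {
        "openai"
    } else if model.starts_with("gemini") {
        "gemini"
    } else {
        "custom"
    };
    (provider.to_string(), model)
}

fn main() {
    let keys = ["anthropic", "offline", "custom"];
    // Auto-detected: ("anthropic", "claude-sonnet-4-5")
    println!("{:?}", route_model("claude-sonnet-4-5", &keys));
    // Explicit custom provider: ("offline", "llama3")
    println!("{:?}", route_model("offline/llama3", &keys));
    // Only the first segment is a prefix: ("custom", "anthropic/claude-opus")
    println!("{:?}", route_model("custom/anthropic/claude-opus", &keys));
}
```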

§Example Configuration

```toml
[profiles.default]
provider = "local"
smart_model = "claude-sonnet-4-5"  # auto-detected as anthropic
eco_model = "offline/llama3"       # custom provider

[profiles.default.providers.anthropic]
type = "anthropic"
# api_key from auth.toml or ANTHROPIC_API_KEY env var

[profiles.default.providers.offline]
type = "custom"
api_endpoint = "http://localhost:11434/v1"
```

Structs§

GenerationDeltaToolUse
LLMAnthropicOptions
Anthropic-specific options
LLMChoice
LLMCompletionResponse
LLMCompletionStreamResponse
LLMGoogleOptions
Google/Gemini-specific options
LLMInput
LLMMessage
LLMMessageImageSource
LLMOpenAIOptions
OpenAI-specific options
LLMProviderConfig
Aggregated provider configuration for LLM operations
LLMProviderOptions
Provider-specific options for LLM requests
LLMStreamChoice
LLMStreamDelta
LLMStreamInput
LLMThinkingOptions
Thinking/reasoning options
LLMTokenUsage
LLMTool
PromptTokensDetails
SimpleLLMMessage

Enums§

GenerationDelta
LLMMessageContent
LLMMessageTypedContent
LLMModel
ProviderConfig
Unified provider configuration enum
SimpleLLMRole
TokenType