LLM abstraction layer for multiple providers
Structs§
- AssistantMessage - Assistant message
- CacheControl - Cache control for prompt caching
- ChatAnthropic - Anthropic Chat Model
- ChatCompletion - Chat completion response
- ChatDeepSeek - DeepSeek Chat Model
- ChatGoogle - Google Gemini Chat Model
- ChatGroq - Groq Chat Model
- ChatMistral - Mistral Chat Model
- ChatOllama - Ollama Chat Model
- ChatOpenAI - OpenAI Chat Model
- ChatOpenAICompatible - OpenAI-compatible Chat Model base implementation
- ChatOpenRouter - OpenRouter Chat Model
- ContentPartDocument - Document content part (for PDFs etc.)
- ContentPartImage - Image content part
- ContentPartRedactedThinking - Redacted thinking content
- ContentPartRefusal - Refusal content part
- ContentPartText - Text content part
- ContentPartThinking - Thinking content part
- DeveloperMessage - Developer message (for o1+ models)
- DocumentSource
- Function - Function call from the LLM
- ImageUrl - Image URL structure
- ModelBuilder - Builder pattern helpers for common model configurations
- SchemaOptimizer - Schema optimizer for creating LLM-compatible JSON schemas
- SystemMessage - System message
- ToolCall - Tool call from the LLM
- ToolDefinition - Definition of a tool that can be called by the LLM
- ToolMessage - Tool result message
- Usage - Token usage information
- UserMessage - User message
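The `ModelBuilder` struct above provides builder-pattern helpers for model configuration. Its actual methods are not shown in this index, so the following is a minimal, hypothetical sketch of the pattern in plain Rust; the field and method names (`model`, `temperature`, `max_tokens`, `build`) are assumptions for illustration, not the crate's real API:

```rust
// Hypothetical builder sketch; names are illustrative, not the crate's API.
#[derive(Debug, Clone, PartialEq)]
struct ModelConfig {
    model: String,
    temperature: f32,
    max_tokens: u32,
}

#[derive(Default)]
struct ModelBuilder {
    model: Option<String>,
    temperature: Option<f32>,
    max_tokens: Option<u32>,
}

impl ModelBuilder {
    fn model(mut self, name: &str) -> Self {
        self.model = Some(name.to_string());
        self
    }
    fn temperature(mut self, t: f32) -> Self {
        self.temperature = Some(t);
        self
    }
    fn max_tokens(mut self, n: u32) -> Self {
        self.max_tokens = Some(n);
        self
    }
    // Unset fields fall back to defaults (values chosen for illustration).
    fn build(self) -> ModelConfig {
        ModelConfig {
            model: self.model.unwrap_or_else(|| "default-model".to_string()),
            temperature: self.temperature.unwrap_or(1.0),
            max_tokens: self.max_tokens.unwrap_or(1024),
        }
    }
}

fn main() {
    let cfg = ModelBuilder::default()
        .model("example-model")
        .temperature(0.2)
        .build();
    assert_eq!(cfg.temperature, 0.2);
    assert_eq!(cfg.max_tokens, 1024);
}
```

The chaining works because each setter takes `self` by value and returns it, so a configuration can be assembled in one expression.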
Enums§
- CacheControlType
- ContentPart - Union type for all content parts
- LlmError - Error types for LLM operations
- Message - Union type for all messages
- ReasoningEffort - Reasoning effort levels for o1+ models
- StopReason - Stop reason for completion
- ToolChoice - Tool choice strategy
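The `Message` enum is described as a union over the message structs listed above, which lets callers build one heterogeneous conversation history and lets providers dispatch per variant. A simplified, hypothetical sketch of that shape (the structs are reduced to a single `content` field; the real definitions are richer):

```rust
// Hypothetical sketch of a message union; struct bodies are simplified.
struct SystemMessage { content: String }
struct UserMessage { content: String }
struct AssistantMessage { content: String }
struct ToolMessage { content: String }

enum Message {
    System(SystemMessage),
    User(UserMessage),
    Assistant(AssistantMessage),
    Tool(ToolMessage),
}

// Providers can match on the variant to map each message to a wire role.
fn role(msg: &Message) -> &'static str {
    match msg {
        Message::System(_) => "system",
        Message::User(_) => "user",
        Message::Assistant(_) => "assistant",
        Message::Tool(_) => "tool",
    }
}

fn main() {
    let history = vec![
        Message::System(SystemMessage { content: "Be terse.".into() }),
        Message::User(UserMessage { content: "Hello".into() }),
    ];
    let roles: Vec<&str> = history.iter().map(role).collect();
    assert_eq!(roles, vec!["system", "user"]);
}
```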
Traits§
- BaseChatModel - Base trait for chat model implementations
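`BaseChatModel` is the base trait every provider struct (`ChatAnthropic`, `ChatOpenAI`, etc.) implements, which is what makes the providers interchangeable behind a trait object. The crate's real method names and signatures are not shown in this index, so this is a hypothetical sketch of the pattern:

```rust
// Hypothetical trait sketch; the method name and types are assumptions.
#[derive(Debug)]
struct LlmError(String);

struct ChatCompletion { text: String }

trait BaseChatModel {
    fn complete(&self, prompt: &str) -> Result<ChatCompletion, LlmError>;
}

// A toy provider: callers stay provider-agnostic behind `dyn BaseChatModel`.
struct EchoModel;

impl BaseChatModel for EchoModel {
    fn complete(&self, prompt: &str) -> Result<ChatCompletion, LlmError> {
        Ok(ChatCompletion { text: format!("echo: {prompt}") })
    }
}

fn main() {
    let model: Box<dyn BaseChatModel> = Box::new(EchoModel);
    let out = model.complete("hi").unwrap();
    assert_eq!(out.text, "echo: hi");
}
```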
Type Aliases§
- ChatStream - Type alias for boxed stream
- JsonSchema - JSON Schema for tool parameters
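`ChatStream` is described as a type alias for a boxed stream, presumably of incremental completion chunks. The real alias likely boxes an async `Stream`; in this dependency-free sketch, std's `Iterator` stands in so the example compiles on its own, and the item type is an assumption:

```rust
// Hypothetical sketch: Iterator stands in for an async Stream here.
#[derive(Debug, PartialEq)]
struct LlmError(String);

// Boxing erases the concrete type, so every provider can return
// the same alias regardless of how its chunks are produced.
type ChatStream = Box<dyn Iterator<Item = Result<String, LlmError>> + Send>;

fn tokens() -> ChatStream {
    Box::new(vec![Ok("Hel".to_string()), Ok("lo".to_string())].into_iter())
}

fn main() {
    // Consumers drain the stream chunk by chunk and accumulate text.
    let text: String = tokens().filter_map(Result::ok).collect();
    assert_eq!(text, "Hello");
}
```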