LLM provider abstraction layer.
This module defines the Provider trait and shared request/response types used by all
backends (Anthropic/OpenAI/Gemini/etc).
Providers are responsible for:
- Translating crate::model::Message history into provider-specific HTTP requests.
- Emitting StreamEvent values as SSE/HTTP chunks arrive.
- Advertising tool schemas to the model (so it can call crate::tools by name).
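The responsibilities above can be sketched as a minimal trait plus a toy backend. This is an illustrative assumption, not the crate's actual API: the real Provider trait is async and streams events incrementally, and the Message/StreamEvent stand-ins here are simplified versions of the types in crate::model.

```rust
// Hypothetical sketch of the Provider abstraction described above.
// All names and signatures here are assumptions for illustration;
// the crate's real trait is richer (async, streaming, tool calls).

// Stand-in for crate::model::Message.
#[derive(Debug, Clone)]
pub enum Message {
    User(String),
    Assistant(String),
}

// Stand-in for crate::model::StreamEvent.
#[derive(Debug)]
pub enum StreamEvent {
    TextDelta(String),
    Done,
}

// A provider turns a message history into a sequence of stream events.
pub trait Provider {
    fn name(&self) -> &str;
    fn stream(&self, history: &[Message]) -> Vec<StreamEvent>;
}

// A toy backend that echoes the last user message back as a delta.
pub struct EchoProvider;

impl Provider for EchoProvider {
    fn name(&self) -> &str {
        "echo"
    }

    fn stream(&self, history: &[Message]) -> Vec<StreamEvent> {
        let mut events = Vec::new();
        if let Some(Message::User(text)) = history.last() {
            events.push(StreamEvent::TextDelta(text.clone()));
        }
        events.push(StreamEvent::Done);
        events
    }
}

fn main() {
    let provider = EchoProvider;
    let history = vec![Message::User("hello".to_string())];
    let events = provider.stream(&history);
    println!("{} produced {} events", provider.name(), events.len());
}
```

A real backend would perform the HTTP request in stream and emit each StreamEvent as SSE chunks arrive, rather than returning a completed Vec.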
Re-exports§
pub use crate::model::StreamEvent;
Structs§
- Context
- Inputs to a single completion request.
- Model
- A model definition loaded from the models registry.
- ModelCost
- Model pricing per million tokens.
- StreamOptions
- Options that control streaming completion behavior.
- ThinkingBudgets
- Custom thinking token budgets per level.
- ToolDef
- A tool definition exposed to the model.
Enums§
- Api
- Known API types.
- CacheRetention
- Cache retention policy.
- InputType
- Input types supported by a model.
- KnownProvider
- Known providers.
Traits§
- Provider
- An LLM backend capable of streaming assistant output (and tool calls).
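Because every backend implements the same trait, callers can select one at runtime behind a trait object. The sketch below is a hedged illustration of that pattern: the KnownProvider variants mirror the enum listed above, but the constructor types and the make_provider helper are assumptions, not part of this crate.

```rust
// Hypothetical sketch of dispatching over backends behind the Provider
// trait. The unit structs and make_provider() are illustrative
// assumptions; only the trait-object pattern itself is the point.

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum KnownProvider {
    Anthropic,
    OpenAi,
    Gemini,
}

pub trait Provider {
    fn name(&self) -> &str;
}

struct Anthropic;
struct OpenAi;
struct Gemini;

impl Provider for Anthropic {
    fn name(&self) -> &str { "anthropic" }
}
impl Provider for OpenAi {
    fn name(&self) -> &str { "openai" }
}
impl Provider for Gemini {
    fn name(&self) -> &str { "gemini" }
}

// Resolve a known provider to a boxed trait object so callers can
// stream completions without caring which backend is in use.
pub fn make_provider(kind: KnownProvider) -> Box<dyn Provider> {
    match kind {
        KnownProvider::Anthropic => Box::new(Anthropic),
        KnownProvider::OpenAi => Box::new(OpenAi),
        KnownProvider::Gemini => Box::new(Gemini),
    }
}

fn main() {
    let provider = make_provider(KnownProvider::Gemini);
    println!("selected backend: {}", provider.name());
}
```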