Model crate: LLM provider implementations, enum dispatch, configuration, construction, and runtime management.

Merges all provider backends (OpenAI, Claude, Local) with the Provider enum, ProviderManager, and ProviderConfig into a single crate. Configuration uses a flat ProviderConfig with model-prefix kind detection. DeepSeek and other OpenAI-compatible providers route through the OpenAI backend.
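Model-prefix kind detection can be sketched roughly as below. This is a minimal illustration, not the crate's actual ProviderKind or detection logic; the specific prefixes ("claude-", "local/") are assumptions.

```rust
// Hypothetical sketch of model-prefix kind detection; the real
// ProviderConfig / ProviderKind API may differ.
#[derive(Debug, PartialEq)]
enum ProviderKind {
    OpenAi,
    Claude,
    Local,
}

/// Guess the provider kind from a model-name prefix. DeepSeek and
/// other OpenAI-compatible models fall through to the OpenAI backend.
fn kind_from_model(model: &str) -> ProviderKind {
    if model.starts_with("claude-") {
        ProviderKind::Claude
    } else if model.starts_with("local/") {
        ProviderKind::Local
    } else {
        // "gpt-", "deepseek-", and other OpenAI-compatible prefixes.
        ProviderKind::OpenAi
    }
}

fn main() {
    assert_eq!(kind_from_model("claude-3-5-sonnet"), ProviderKind::Claude);
    assert_eq!(kind_from_model("deepseek-chat"), ProviderKind::OpenAi);
    println!("ok");
}
```

Routing everything unrecognized to the OpenAI backend keeps the config flat: a new OpenAI-compatible vendor needs no new kind, only a base URL.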
Re-exports

- pub use config::ProviderConfig;
- pub use config::ProviderKind;
- pub use http::HttpProvider;
- pub use manager::ProviderManager;
Modules
- claude: Claude (Anthropic) LLM provider.
- config: Provider configuration.
- http: Shared HTTP transport for OpenAI-compatible LLM providers.
- manager: ProviderManager, a concurrent-safe named provider registry with model routing and active-provider swapping.
- openai: OpenAI-compatible LLM provider.
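The manager module's registry pattern can be sketched with standard-library locks. This is a simplified stand-in, assuming RwLock-guarded maps; the field and method names (register, set_active, active_model) are illustrative, not the real ProviderManager API, and a String stands in for a full Provider value.

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Minimal sketch of a concurrent-safe named provider registry with
// active-provider swapping; not the crate's actual ProviderManager.
struct Manager {
    providers: RwLock<HashMap<String, String>>, // name -> model (stand-in for Provider)
    active: RwLock<String>,
}

impl Manager {
    fn new() -> Self {
        Manager {
            providers: RwLock::new(HashMap::new()),
            active: RwLock::new(String::new()),
        }
    }

    fn register(&self, name: &str, model: &str) {
        self.providers
            .write()
            .unwrap()
            .insert(name.to_string(), model.to_string());
    }

    /// Swap the active provider; refuses unknown names so readers
    /// never observe an active name with no backing provider.
    fn set_active(&self, name: &str) -> bool {
        if self.providers.read().unwrap().contains_key(name) {
            *self.active.write().unwrap() = name.to_string();
            true
        } else {
            false
        }
    }

    fn active_model(&self) -> Option<String> {
        let active = self.active.read().unwrap();
        self.providers.read().unwrap().get(&*active).cloned()
    }
}

fn main() {
    let m = Manager::new();
    m.register("openai", "gpt-4o");
    m.register("claude", "claude-3-5-sonnet");
    assert!(m.set_active("claude"));
    assert!(!m.set_active("missing"));
    assert_eq!(m.active_model().as_deref(), Some("claude-3-5-sonnet"));
    println!("ok");
}
```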
Structs
- Client: An asynchronous Client to make Requests with.
Enums
- Provider: Unified LLM provider enum.
Functions
- build_provider: Construct a Provider from config and a shared HTTP client.
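The enum-dispatch shape behind build_provider can be sketched as follows. The backend struct names, fields, and the (kind, url) parameters are assumptions standing in for the real build_provider(config, http_client) signature.

```rust
// Hedged sketch of enum dispatch: a unified Provider enum forwarding
// calls to backend structs via match. Names are illustrative only.
struct OpenAiBackend { base_url: String }
struct ClaudeBackend { base_url: String }

enum Provider {
    OpenAi(OpenAiBackend),
    Claude(ClaudeBackend),
}

impl Provider {
    /// Static dispatch to the matching backend, no trait objects needed.
    fn base_url(&self) -> &str {
        match self {
            Provider::OpenAi(b) => &b.base_url,
            Provider::Claude(b) => &b.base_url,
        }
    }
}

/// Stand-in for build_provider(config, http_client): pick a backend
/// variant from the detected kind.
fn build_provider(kind: &str, url: &str) -> Provider {
    match kind {
        "claude" => Provider::Claude(ClaudeBackend { base_url: url.into() }),
        // DeepSeek and other OpenAI-compatible kinds share one backend.
        _ => Provider::OpenAi(OpenAiBackend { base_url: url.into() }),
    }
}

fn main() {
    let p = build_provider("claude", "https://api.anthropic.com");
    assert_eq!(p.base_url(), "https://api.anthropic.com");
    println!("ok");
}
```

Enum dispatch trades the open-endedness of `Box<dyn Trait>` for exhaustive matching and zero allocation, which fits a fixed, known set of backends.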