AI Provider abstraction layer.
Unified interface for multiple AI providers (OpenAI, Anthropic, Google, StepFun, Bedrock, etc.).
§Architecture
- [
types] — shared data types (Message,StreamChunk, etc.) - [
traits] — theProvidertrait andModelInfo - [
registry] —ProviderRegistry(name → provider map) - [
parse] — model-string parser ("openai/gpt-4o"→(provider, model)) - [
init_vault] — Vault-based provider initialization - [
init_config] — TOML-config-based initialization - [
init_env] — environment-variable fallback - [
init_dispatch] / [init_dispatch_impl] — per-provider constructors
Provider implementations live in their own modules (openai, anthropic, bedrock/, etc.).
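The trait/registry split described above can be illustrated with a minimal standalone sketch. The real Provider trait is async and far richer (completions, streaming, embeddings), and the field and method names below are assumptions, not the crate's actual API.

```rust
use std::collections::HashMap;

// Hypothetical, minimal stand-in for the `Provider` trait; the real trait
// exposes completion/streaming/embedding methods.
trait Provider {
    fn name(&self) -> &str;
}

struct OpenAi;
impl Provider for OpenAi {
    fn name(&self) -> &str {
        "openai"
    }
}

// `ProviderRegistry` modeled as the doc describes it: a name → provider map.
struct ProviderRegistry {
    providers: HashMap<String, Box<dyn Provider>>,
}

impl ProviderRegistry {
    fn new() -> Self {
        Self { providers: HashMap::new() }
    }

    // Register a provider under the name it reports for itself.
    fn register(&mut self, p: Box<dyn Provider>) {
        self.providers.insert(p.name().to_string(), p);
    }

    fn get(&self, name: &str) -> Option<&dyn Provider> {
        self.providers.get(name).map(|b| &**b)
    }
}

fn main() {
    let mut reg = ProviderRegistry::new();
    reg.register(Box::new(OpenAi));
    assert!(reg.get("openai").is_some());
    assert!(reg.get("anthropic").is_none());
}
```

Keying the map by the provider's self-reported name lets the init_* modules register whatever subset of providers they can construct from Vault, TOML config, or the environment.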
Modules§
- anthropic — Anthropic provider implementation using the Messages API.
- bedrock — Amazon Bedrock provider for the Converse API.
- copilot — GitHub Copilot provider implementation using raw HTTP.
- gemini_web — Gemini Web provider; drives the Gemini chat UI's undocumented BardChatUi endpoint using browser cookies stored in HashiCorp Vault.
- glm5 — GLM-5 FP8 provider for Vast.ai serverless deployments.
- Google Gemini provider implementation.
- limits — Canonical context-window limits for known LLM models.
- local_cuda — Stub when CUDA is not compiled in.
- metrics — Provider metrics wrapper.
- models — Model catalog from CodeTether API.
- moonshot — Moonshot AI provider implementation (direct API).
- openai — OpenAI provider implementation.
- openai_codex — OpenAI Codex provider using a ChatGPT Plus/Pro subscription via OAuth.
- openrouter — OpenRouter provider implementation using raw HTTP.
- retry — Provider HTTP retry logic.
- stepfun — StepFun provider implementation (direct API, not via OpenRouter).
- util — Provider-level re-exports of shared crate utilities.
- vertex_anthropic — Vertex AI Anthropic provider implementation.
- vertex_glm — Vertex AI GLM provider implementation (MaaS endpoint).
- zai — Z.AI provider implementation (direct API).
Structs§
- CompletionRequest — Request to generate a completion.
- CompletionResponse — Response from a completion request.
- EmbeddingRequest — Request to generate embeddings.
- EmbeddingResponse — Response from an embedding request.
- Message — A message in a conversation.
- ModelInfo — Metadata about a model offered by a provider.
- ProviderRegistry — Registry of available providers.
- ToolDefinition — Schema-driven tool definition passed to the model.
- Usage — Token usage statistics.
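To make the request shape concrete, here is a hedged sketch of what building a CompletionRequest from Messages might look like. The field names (model, messages, max_tokens, role, content) are assumptions inferred from the type names above, not the crate's actual definitions.

```rust
// Hypothetical shapes for `Role`, `Message`, and `CompletionRequest`;
// the real structs may carry more fields (tools, temperature, etc.).
#[derive(Debug, Clone, PartialEq)]
enum Role {
    System,
    User,
    Assistant,
}

#[derive(Debug, Clone)]
struct Message {
    role: Role,
    content: String,
}

struct CompletionRequest {
    // Full model string, e.g. "openai/gpt-4o"; the registry resolves the
    // provider half, the provider receives the model half.
    model: String,
    messages: Vec<Message>,
    max_tokens: Option<u32>,
}

fn main() {
    let req = CompletionRequest {
        model: "openai/gpt-4o".to_string(),
        messages: vec![
            Message { role: Role::System, content: "You are helpful.".into() },
            Message { role: Role::User, content: "Hello".into() },
        ],
        max_tokens: Some(256),
    };
    assert_eq!(req.messages.len(), 2);
    assert_eq!(req.messages[1].role, Role::User);
}
```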
Enums§
- ContentPart — One content block within a Message.
- FinishReason — Reason the model stopped generating.
- Role — Participant role in a conversation.
- StreamChunk — A streaming chunk produced by Provider::complete_stream.
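A consumer of Provider::complete_stream typically folds chunks into the final text and a finish reason. The variant names below (Text, Done) are assumptions for illustration; the real StreamChunk enum likely has more variants (tool calls, usage, etc.).

```rust
// Hypothetical two-variant sketch of `StreamChunk`.
enum StreamChunk {
    Text(String),
    Done { finish_reason: String },
}

// Fold a finished stream of chunks into (accumulated text, finish reason).
fn collect_stream(
    chunks: impl IntoIterator<Item = StreamChunk>,
) -> (String, Option<String>) {
    let mut text = String::new();
    let mut reason = None;
    for chunk in chunks {
        match chunk {
            StreamChunk::Text(t) => text.push_str(&t),
            StreamChunk::Done { finish_reason } => reason = Some(finish_reason),
        }
    }
    (text, reason)
}

fn main() {
    let (text, reason) = collect_stream(vec![
        StreamChunk::Text("Hel".into()),
        StreamChunk::Text("lo".into()),
        StreamChunk::Done { finish_reason: "stop".into() },
    ]);
    assert_eq!(text, "Hello");
    assert_eq!(reason.as_deref(), Some("stop"));
}
```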
Traits§
- Provider — Trait that all AI providers must implement.
Functions§
- parse_model_string — Parse a model string into (provider, model).
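The documented behavior ("openai/gpt-4o" → (provider, model)) can be sketched as a split on the first '/'. This is a standalone illustration under that assumption; the real function's signature and error type may differ.

```rust
// Sketch of the parse described above: split "provider/model" on the FIRST
// '/' only, since model ids can themselves contain '/' (e.g. OpenRouter ids).
fn parse_model_string(s: &str) -> Option<(&str, &str)> {
    s.split_once('/')
}

fn main() {
    assert_eq!(
        parse_model_string("openai/gpt-4o"),
        Some(("openai", "gpt-4o"))
    );
    // The model half keeps any further separators intact.
    assert_eq!(
        parse_model_string("openrouter/anthropic/claude-3.5-sonnet"),
        Some(("openrouter", "anthropic/claude-3.5-sonnet"))
    );
    // No provider prefix: the sketch signals this with `None`.
    assert_eq!(parse_model_string("gpt-4o"), None);
}
```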