# Provider Layer
Service vendor facades — the primary entry point for users. Each provider is a thin wrapper that delegates to a [Protocol](../protocol/README.md) implementation.
## Overview
The provider layer is the **recommended API surface** for application developers. Providers abstract away wire-format details and offer a consistent interface:
```rust
use tiycore::provider::get_provider;
use tiycore::types::*;
// Providers are auto-registered on first access — just get and use
let provider = get_provider(&Provider::OpenAI).unwrap();
let stream = provider.stream(&model, &context, options);
```
All providers implement the same `LLMProtocol` trait, so switching from OpenAI to Anthropic is a one-line change.
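Switching looks like this in practice (a sketch; `model`, `context`, and `options` are assumed to be built as in the usage example below):

```rust
// The call shape is identical for every vendor; only the Provider variant differs.
let provider = get_provider(&Provider::OpenAI).unwrap();
// Switching vendors is this one line:
let provider = get_provider(&Provider::Anthropic).unwrap();
// Everything downstream is unchanged:
let stream = provider.stream(&model, &context, options);
```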
## Architecture
```
┌──────────────────────────────────────────────────────────────────┐
│                         Your Application                          │
└──────────────┬──────────────────────────────────┬────────────────┘
               │                                  │
       ┌───────▼───────┐                  ┌───────▼───────┐
       │     Agent     │                  │  Direct Call  │
       └───────┬───────┘                  └───────┬───────┘
               │                                  │
       ┌───────▼──────────────────────────────────▼────────┐
       │           Provider Layer (this module)            │
       │  ┌──────────────────────┐  ┌────────────────────┐ │
       │  │   Direct Providers   │  │Delegation Providers│ │
       │  │  OpenAI, Anthropic,  │  │ OpenAI-Compatible, │ │
       │  │    Google, Ollama    │  │  xAI, Groq, ZAI,   │ │
       │  │                      │  │OpenRouter, DeepSeek│ │
       │  │                      │  │MiniMax, Kimi Coding│ │
       │  │                      │  │ Zenmux, OpenCode Go│ │
       │  └──────────┬───────────┘  └──────────┬─────────┘ │
       └─────────────┼─────────────────────────┼───────────┘
                     │                         │
       ┌─────────────▼─────────────────────────▼───────────┐
       │           Protocol Layer (wire format)            │
       │ OpenAI Completions │ OpenAI Responses │ Anthropic  │
       │                Google GenAI/Vertex                 │
       └───────────────────────────────────────────────────┘
```
## Direct Providers
Thin facades that delegate to a single protocol implementation:
| Provider | File | Struct | Protocol | Default Base URL |
|----------|------|--------|----------|------------------|
| OpenAI | `openai.rs` | `OpenAIProvider` | `protocol::openai_responses` | `https://api.openai.com/v1` |
| Anthropic | `anthropic.rs` | `AnthropicProvider` | `protocol::anthropic` | `https://api.anthropic.com/v1` |
| Google | `google.rs` | `GoogleProvider` | `protocol::google` | `https://generativelanguage.googleapis.com/v1beta` |
| Ollama | `ollama.rs` | `OllamaProvider` | `protocol::openai_completions` | `http://localhost:11434/v1` |
### Usage Example
```rust
use tiycore::provider::get_provider;
use tiycore::types::*;
let model = Model::builder()
    .id("claude-sonnet-4-20250514")
    .name("Claude Sonnet 4")
    .provider(Provider::Anthropic)
    .context_window(200000)
    .max_tokens(8192)
    .build()
    .unwrap();
// Provider is auto-registered on first access
let provider = get_provider(&model.provider).unwrap();
// `context` is the conversation context, built elsewhere
let stream = provider.stream(&model, &context, StreamOptions {
    api_key: Some("sk-...".into()),
    ..Default::default()
});
```
## Delegation Providers
Providers that inject API keys, compat settings, and/or custom base URLs, then delegate to an existing protocol. Most are generated by macros in `delegation.rs`.
### OpenAI-Compatible (→ OpenAI Completions Protocol)
| Provider | File | Struct | Env Var | Notes |
|----------|------|--------|---------|-------|
| OpenAI-Compatible | `openai_compatible.rs` | `OpenAICompatibleProvider` | `OPENAI_API_KEY` | Generic facade; uses caller-supplied `model.base_url` or `StreamOptions.base_url` |
| xAI | `xai.rs` | `XAIProvider` | `XAI_API_KEY` | `supports_store: false`, `supports_developer_role: false`, `thinking_format: "openai"` |
| Groq | `groq.rs` | `GroqProvider` | `GROQ_API_KEY` | Model-aware: custom `reasoning_effort_map` for `qwen/qwen3-32b` |
| OpenRouter | `openrouter.rs` | `OpenRouterProvider` | `OPENROUTER_API_KEY` | No compat injection; supports routing extensions via `open_router_routing` |
| ZAI | `zai.rs` | `ZAIProvider` | `ZAI_API_KEY` | `thinking_format: "zai"` (uses `enable_thinking` parameter), `supports_developer_role: false` |
| DeepSeek | `deepseek.rs` | `DeepSeekProvider` | `DEEPSEEK_API_KEY` | `supports_store: false`, `supports_developer_role: false`, `thinking_format: "openai"` |
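Reading one row concretely, xAI's static compat from the table can be written out as follows (a sketch; the remaining fields are assumed to come from `Default`):

```rust
use tiycore::types::*;

// xAI's compat flags as listed above (sketch; other fields assumed Default).
let xai_compat = OpenAICompletionsCompat {
    supports_store: false,
    supports_developer_role: false,
    thinking_format: "openai".to_string(),
    ..Default::default()
};
```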
### Anthropic-Compatible (→ Anthropic Messages Protocol)
| Provider | File | Struct | Env Var | Notes |
|----------|------|--------|---------|-------|
| MiniMax | `minimax.rs` | `MiniMaxProvider` | `MINIMAX_API_KEY` | Hand-written (dual env var: `MINIMAX_API_KEY` / `MINIMAX_CN_API_KEY` based on provider variant) |
| Kimi Coding | `kimi_coding.rs` | `KimiCodingProvider` | `KIMI_API_KEY` | Macro-generated |
### Zenmux (Adaptive Multi-Protocol)
| Provider | File | Struct | Env Var |
|----------|------|--------|---------|
| Zenmux | `zenmux.rs` | `ZenmuxProvider` | `ZENMUX_API_KEY` |

Zenmux is a multi-protocol proxy that routes to one of three protocols based on the model ID:
| Model ID | Protocol | Base URL |
|----------|----------|----------|
| Contains `google` or `gemini` | Google (Vertex AI) | `https://zenmux.ai/api/vertex-ai` |
| Contains `openai` or `gpt` | OpenAI Responses | `https://zenmux.ai/api/v1` |
| Everything else | Anthropic Messages | `https://zenmux.ai/api/anthropic/v1` |
When a custom (non-zenmux) base URL is provided, the provider falls back to the OpenAI Completions protocol.
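In sketch form, the routing reduces to substring checks on the model ID plus the base-URL fallback (a hypothetical standalone helper, not the actual `ZenmuxProvider` internals):

```rust
/// Hypothetical sketch of Zenmux's model-ID routing.
fn zenmux_route(model_id: &str, has_custom_base_url: bool) -> (&'static str, &'static str) {
    if has_custom_base_url {
        // Custom (non-zenmux) base URL: fall back to OpenAI Completions.
        ("OpenAI Completions", "<caller-supplied base URL>")
    } else if model_id.contains("google") || model_id.contains("gemini") {
        ("Google (Vertex AI)", "https://zenmux.ai/api/vertex-ai")
    } else if model_id.contains("openai") || model_id.contains("gpt") {
        ("OpenAI Responses", "https://zenmux.ai/api/v1")
    } else {
        ("Anthropic Messages", "https://zenmux.ai/api/anthropic/v1")
    }
}
```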
### OpenCode Go (Adaptive Multi-Protocol)
| Provider | File | Struct | Env Var |
|----------|------|--------|---------|
| OpenCode Go | `opencode_go.rs` | `OpenCodeGoProvider` | `OPENCODE_GO_API_KEY` |

OpenCode Go is a multi-protocol proxy that routes to one of two protocols based on the model ID:
| Model ID | Protocol | Base URL |
|----------|----------|----------|
| Contains `minimax` (case-insensitive) | Anthropic Messages | `https://opencode.ai/zen/go/v1` |
| Everything else | OpenAI Completions | `https://opencode.ai/zen/go/v1` |
OpenCode Go supports GLM, Kimi, Mimo, and MiniMax models through adaptive protocol selection.
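The same pattern in sketch form (hypothetical helper; only the protocol differs, the base URL is shared):

```rust
/// Hypothetical sketch of OpenCode Go's 2-way protocol selection.
fn opencode_go_protocol(model_id: &str) -> &'static str {
    if model_id.to_lowercase().contains("minimax") {
        "Anthropic Messages" // MiniMax models
    } else {
        "OpenAI Completions" // GLM, Kimi, Mimo, everything else
    }
}
```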
## API Key Resolution
Keys are resolved in priority order:
1. `StreamOptions.api_key` — per-request override
2. Provider's `default_api_key` — set via `with_api_key()` constructor
3. Environment variable — provider-specific (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
Base URLs follow the same 3-level fallback: `StreamOptions.base_url` > `model.base_url` > provider default.
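A sketch of the key fallback chain (parameter names here are illustrative, not the crate's actual signatures):

```rust
use std::env;

/// Illustrative sketch of the 3-level API key resolution.
fn resolve_api_key(
    request_key: Option<&str>, // 1. StreamOptions.api_key
    default_key: Option<&str>, // 2. provider's default_api_key (set via with_api_key())
    env_var: &str,             // 3. e.g. "OPENAI_API_KEY"
) -> Option<String> {
    request_key
        .map(str::to_owned)
        .or_else(|| default_key.map(str::to_owned))
        .or_else(|| env::var(env_var).ok())
}
```

The base URL fallback has the same shape, with the provider default as the final arm.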
## OpenAICompletionsCompat
Delegation providers that target the OpenAI Completions protocol can inject `OpenAICompletionsCompat` flags to control protocol-level behavior differences:
| Field | Type | Description |
|-------|------|-------------|
| `supports_store` | `bool` | Whether the provider supports the `store` parameter |
| `supports_developer_role` | `bool` | Whether `developer` role messages are supported |
| `supports_reasoning_effort` | `bool` | Whether the `reasoning_effort` parameter is supported |
| `thinking_format` | `String` | Thinking format variant (`"openai"`, `"zai"`, etc.) |
| `reasoning_effort_map` | `HashMap` | Custom mapping of thinking levels to provider-specific values |
| `open_router_routing` | `Option` | OpenRouter-specific routing preferences |
Compat is injected only when `model.compat.is_none()` — explicitly set compat on the model takes precedence.
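In sketch form (assuming `Model.compat` is an `Option<OpenAICompletionsCompat>`):

```rust
use tiycore::types::*;

/// Sketch: compat set explicitly on the model wins; the provider's
/// default is used only as a fallback.
fn effective_compat(
    model_compat: Option<OpenAICompletionsCompat>,
    provider_default: OpenAICompletionsCompat,
) -> OpenAICompletionsCompat {
    model_compat.unwrap_or(provider_default)
}
```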
## File Structure
```
provider/
├── mod.rs                  # Module declarations, re-exports protocol traits + registry API
├── registry.rs             # ProtocolRegistry + global static + auto-registration + convenience functions
├── delegation.rs           # Macros for generating delegation providers (define_openai/anthropic_delegation_provider!)
├── openai.rs               # OpenAI → protocol::openai_responses
├── anthropic.rs            # Anthropic → protocol::anthropic
├── google.rs               # Google → protocol::google (GenAI + Vertex dual-mode)
├── ollama.rs               # Ollama → protocol::openai_completions (localhost)
├── openai_compatible.rs    # OpenAI-Compatible → OpenAI Completions (macro-generated, generic facade)
├── xai.rs                  # xAI → OpenAI Completions (macro-generated, static compat)
├── groq.rs                 # Groq → OpenAI Completions (macro-generated, model-aware compat)
├── openrouter.rs           # OpenRouter → OpenAI Completions (macro-generated, no compat)
├── zai.rs                  # ZAI → OpenAI Completions (macro-generated, static compat)
├── deepseek.rs             # DeepSeek → OpenAI Completions (macro-generated, static compat)
├── minimax.rs              # MiniMax → Anthropic (hand-written, dual env var)
├── kimi_coding.rs          # Kimi Coding → Anthropic (macro-generated)
├── zenmux.rs               # Zenmux → adaptive 3-way routing (hand-written)
└── opencode_go.rs          # OpenCode Go → adaptive 2-way routing (hand-written, MiniMax → Anthropic, others → OpenAI)
```
## Adding a New Provider
### Delegation Provider (most common)
Use the macros in `delegation.rs` to generate a provider:
```rust
// In src/provider/my_provider.rs
use crate::stream::AssistantMessageEventStream;
use crate::types::*;
define_openai_delegation_provider! {
    name: MyProvider,
    doc: "My provider (OpenAI-compatible).",
    provider_type: Provider::MyProvider,
    env_var: "MY_API_KEY",
    default_compat: || OpenAICompletionsCompat {
        supports_store: false,
        ..Default::default()
    },
}
```
Then add `pub mod my_provider;` to `mod.rs`.
### Direct Provider (facade)
For providers needing custom logic (like Ollama's no-API-key setup or MiniMax's dual env var), follow these steps; a skeleton sketch appears after the list:
1. Create `src/provider/<name>.rs` with a struct wrapping the protocol
2. Implement `LLMProtocol` by delegating to the inner protocol
3. Add `pub mod <name>;` to `mod.rs`
4. Add tests in `tests/`
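A skeleton for steps 1 and 2 (a sketch: the protocol type path, the `Context` type, and the exact `LLMProtocol` method set are assumptions based on the usage examples above):

```rust
// src/provider/my_provider.rs (sketch)
use crate::provider::LLMProtocol; // re-exported by provider/mod.rs
use crate::stream::AssistantMessageEventStream;
use crate::types::*;

pub struct MyProvider {
    // Assumed path; delegate to whichever protocol the vendor speaks.
    inner: crate::protocol::openai_completions::OpenAICompletionsProtocol,
}

impl LLMProtocol for MyProvider {
    fn stream(
        &self,
        model: &Model,
        context: &Context,
        options: StreamOptions,
    ) -> AssistantMessageEventStream {
        // Custom logic (key lookup, base URL, compat injection) goes here,
        // then hand off to the inner protocol.
        self.inner.stream(model, context, options)
    }
}
```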
### When to Write a New Protocol Instead
Only when the target API uses a **completely different** HTTP/SSE wire format than the existing four protocols. See the [Protocol README](../protocol/README.md) for details.