tiycore 0.1.7

Unified LLM API and stateful Agent runtime in Rust
# Provider Layer


Service vendor facades — the primary entry point for users. Each provider is a thin wrapper that delegates to a [Protocol](../protocol/README.md) implementation.

## Overview

The provider layer is the **recommended API surface** for application developers. Providers abstract away wire-format details and offer a consistent interface:

```rust
use tiycore::provider::get_provider;
use tiycore::types::*;

// Providers are auto-registered on first access — just get and use
let provider = get_provider(&Provider::OpenAI).unwrap();
let stream = provider.stream(&model, &context, options);
```

All providers implement the same `LLMProtocol` trait, so switching from OpenAI to Anthropic is a one-line change.

## Architecture

```
┌────────────────────────────────────────────────────┐
│                  Your Application                  │
└──────────────┬───────────────────────┬─────────────┘
               │                       │
       ┌───────▼───────┐       ┌───────▼───────┐
       │     Agent     │       │  Direct Call  │
       └───────┬───────┘       └───────┬───────┘
               │                       │
       ┌───────▼───────────────────────▼───────────────────┐
       │           Provider Layer (this module)            │
       │  ┌─────────────────────┐  ┌─────────────────────┐ │
       │  │  Direct Providers   │  │ Delegation Providers│ │
       │  │ OpenAI, Anthropic,  │  │ OpenAI-Compatible,  │ │
       │  │ Google, Ollama      │  │ xAI, Groq, ZAI,     │ │
       │  │                     │  │ OpenRouter, MiniMax,│ │
       │  │                     │  │ Kimi Coding, Zenmux │ │
       │  └──────────┬──────────┘  └──────────┬──────────┘ │
       └─────────────┼────────────────────────┼────────────┘
                     │                        │
       ┌─────────────▼────────────────────────▼────────────┐
       │           Protocol Layer (wire format)            │
       │ OpenAI Completions │ OpenAI Responses │ Anthropic │
       │  Google GenAI/Vertex                              │
       └───────────────────────────────────────────────────┘
```

## Direct Providers

Thin facades that delegate to a single protocol implementation:

| Provider | Module | Struct | Delegates To | Default Base URL |
|---|---|---|---|---|
| OpenAI | `openai.rs` | `OpenAIProvider` | `protocol::openai_responses` | `https://api.openai.com/v1` |
| Anthropic | `anthropic.rs` | `AnthropicProvider` | `protocol::anthropic` | `https://api.anthropic.com/v1` |
| Google | `google.rs` | `GoogleProvider` | `protocol::google` | `https://generativelanguage.googleapis.com/v1beta` |
| Ollama | `ollama.rs` | `OllamaProvider` | `protocol::openai_completions` | `http://localhost:11434/v1` |

### Usage Example

```rust
use tiycore::provider::get_provider;
use tiycore::types::*;

let model = Model::builder()
    .id("claude-sonnet-4-20250514")
    .name("Claude Sonnet 4")
    .provider(Provider::Anthropic)
    .context_window(200000)
    .max_tokens(8192)
    .build()
    .unwrap();

// Provider is auto-registered on first access
let provider = get_provider(&model.provider).unwrap();
let stream = provider.stream(&model, &context, StreamOptions {
    api_key: Some("sk-...".into()),
    ..Default::default()
});
```

## Delegation Providers

Providers that inject API keys, compat settings, and/or custom base URLs, then delegate to an existing protocol. Most are generated by macros in `delegation.rs`.

### OpenAI-Compatible (→ OpenAI Completions Protocol)

| Provider | Module | Struct | Env Var | Compat Notes |
|---|---|---|---|---|
| OpenAI-Compatible | `openai_compatible.rs` | `OpenAICompatibleProvider` | `OPENAI_API_KEY` | Generic facade; uses caller-supplied `model.base_url` or `StreamOptions.base_url` |
| xAI | `xai.rs` | `XAIProvider` | `XAI_API_KEY` | `supports_store: false`, `supports_developer_role: false`, `thinking_format: "openai"` |
| Groq | `groq.rs` | `GroqProvider` | `GROQ_API_KEY` | Model-aware: custom `reasoning_effort_map` for `qwen/qwen3-32b` |
| OpenRouter | `openrouter.rs` | `OpenRouterProvider` | `OPENROUTER_API_KEY` | No compat injection; supports routing extensions via `open_router_routing` |
| ZAI | `zai.rs` | `ZAIProvider` | `ZAI_API_KEY` | `thinking_format: "zai"` (uses `enable_thinking` parameter), `supports_developer_role: false` |
| DeepSeek | `deepseek.rs` | `DeepSeekProvider` | `DEEPSEEK_API_KEY` | `supports_store: false`, `supports_developer_role: false`, `thinking_format: "openai"` |

### Anthropic-Compatible (→ Anthropic Messages Protocol)

| Provider | Module | Struct | Env Var | Notes |
|---|---|---|---|---|
| MiniMax | `minimax.rs` | `MiniMaxProvider` | `MINIMAX_API_KEY` | Hand-written (dual env var: `MINIMAX_API_KEY` / `MINIMAX_CN_API_KEY` based on provider variant) |
| Kimi Coding | `kimi_coding.rs` | `KimiCodingProvider` | `KIMI_API_KEY` | Macro-generated |

### Zenmux (Adaptive Multi-Protocol)

| Provider | Module | Struct | Env Var |
|---|---|---|---|
| Zenmux | `zenmux.rs` | `ZenmuxProvider` | `ZENMUX_API_KEY` |

Zenmux is a unique multi-protocol proxy that routes to different protocols based on model ID:

| Model ID Pattern | Routed Protocol | Base URL |
|---|---|---|
| Contains `google` or `gemini` | Google (Vertex AI) | `https://zenmux.ai/api/vertex-ai` |
| Contains `openai` or `gpt` | OpenAI Responses | `https://zenmux.ai/api/v1` |
| Everything else | Anthropic Messages | `https://zenmux.ai/api/anthropic/v1` |

When a custom (non-Zenmux) base URL is provided, Zenmux falls back to the OpenAI Completions protocol.
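The routing table above amounts to a substring match on the model ID. A self-contained sketch (the `route` function and `Route` enum are illustrative, not part of the crate's API; the case-insensitive match is an assumption):

```rust
#[derive(Debug, PartialEq)]
enum Route {
    Google,          // Vertex AI endpoint
    OpenAIResponses, // OpenAI Responses endpoint
    Anthropic,       // Anthropic Messages endpoint (default)
}

// Mirrors the routing table: google/gemini → Google, openai/gpt → OpenAI
// Responses, everything else → Anthropic Messages.
fn route(model_id: &str) -> Route {
    let id = model_id.to_lowercase();
    if id.contains("google") || id.contains("gemini") {
        Route::Google
    } else if id.contains("openai") || id.contains("gpt") {
        Route::OpenAIResponses
    } else {
        Route::Anthropic
    }
}

fn main() {
    assert_eq!(route("google/gemini-2.0-flash"), Route::Google);
    assert_eq!(route("openai/gpt-4o"), Route::OpenAIResponses);
    assert_eq!(route("claude-sonnet-4-20250514"), Route::Anthropic);
}
```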

## API Key Resolution

Keys are resolved in priority order:

1. `StreamOptions.api_key` — per-request override
2. Provider's `default_api_key` — set via `with_api_key()` constructor
3. Environment variable — provider-specific (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)

Base URLs follow the same 3-level fallback: `StreamOptions.base_url` > `model.base_url` > provider default.
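The fallback chain maps naturally onto `Option::or_else`. A self-contained sketch of the key-resolution order (the `resolve_api_key` helper and its signature are illustrative, not the crate's actual internals):

```rust
// Hypothetical illustration of the 3-level fallback described above.
fn resolve_api_key(
    request_key: Option<&str>,  // 1. StreamOptions.api_key
    provider_key: Option<&str>, // 2. default set via with_api_key()
    env_var: &str,              // 3. e.g. "OPENAI_API_KEY"
) -> Option<String> {
    request_key
        .map(str::to_owned)
        .or_else(|| provider_key.map(str::to_owned))
        .or_else(|| std::env::var(env_var).ok())
}

fn main() {
    // A per-request key wins over the provider default.
    assert_eq!(
        resolve_api_key(Some("sk-req"), Some("sk-default"), "UNSET_VAR"),
        Some("sk-req".into())
    );
    // Without a request key, the provider default is used.
    assert_eq!(
        resolve_api_key(None, Some("sk-default"), "UNSET_VAR"),
        Some("sk-default".into())
    );
}
```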

## OpenAICompletionsCompat

Delegation providers that target the OpenAI Completions protocol can inject `OpenAICompletionsCompat` flags to control protocol-level behavior differences:

| Field | Type | Description |
|---|---|---|
| `supports_store` | `bool` | Whether the provider supports the `store` parameter |
| `supports_developer_role` | `bool` | Whether `developer` role messages are supported |
| `supports_reasoning_effort` | `bool` | Whether `reasoning_effort` parameter is supported |
| `thinking_format` | `String` | Thinking format variant (`"openai"`, `"zai"`, etc.) |
| `reasoning_effort_map` | `HashMap` | Custom mapping of thinking levels to provider-specific values |
| `open_router_routing` | `Option` | OpenRouter-specific routing preferences |

Compat is injected only when `model.compat.is_none()` — compat set explicitly on the model takes precedence.
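The "inject only if unset" rule is plain `Option` handling. A sketch with a stand-in `Compat` type (the real struct is `OpenAICompletionsCompat`; `inject_compat` is illustrative):

```rust
#[derive(Debug, PartialEq)]
struct Compat {
    supports_store: bool,
}

// Fill in the provider default only when the model carries no explicit compat.
fn inject_compat(model_compat: &mut Option<Compat>, default: impl Fn() -> Compat) {
    if model_compat.is_none() {
        *model_compat = Some(default());
    }
}

fn main() {
    // Unset compat receives the provider default.
    let mut unset = None;
    inject_compat(&mut unset, || Compat { supports_store: false });
    assert_eq!(unset, Some(Compat { supports_store: false }));

    // Explicitly set compat is left untouched.
    let mut explicit = Some(Compat { supports_store: true });
    inject_compat(&mut explicit, || Compat { supports_store: false });
    assert_eq!(explicit, Some(Compat { supports_store: true }));
}
```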

## File Structure

```
provider/
├── mod.rs           # Module declarations, re-exports protocol traits + registry API
├── registry.rs      # ProtocolRegistry + global static + auto-registration + convenience functions
├── delegation.rs    # Macros for generating delegation providers (define_openai/anthropic_delegation_provider!)
├── openai.rs        # OpenAI → protocol::openai_responses
├── anthropic.rs     # Anthropic → protocol::anthropic
├── google.rs        # Google → protocol::google (GenAI + Vertex dual-mode)
├── ollama.rs        # Ollama → protocol::openai_completions (localhost)
├── openai_compatible.rs # OpenAI-Compatible → OpenAI Completions (macro-generated, generic facade)
├── xai.rs           # xAI → OpenAI Completions (macro-generated, static compat)
├── groq.rs          # Groq → OpenAI Completions (macro-generated, model-aware compat)
├── openrouter.rs    # OpenRouter → OpenAI Completions (macro-generated, no compat)
├── zai.rs           # ZAI → OpenAI Completions (macro-generated, static compat)
├── deepseek.rs      # DeepSeek → OpenAI Completions (macro-generated, static compat)
├── minimax.rs       # MiniMax → Anthropic (hand-written, dual env var)
├── kimi_coding.rs   # Kimi Coding → Anthropic (macro-generated)
└── zenmux.rs        # Zenmux → adaptive 3-way routing (hand-written)
```

## Adding a New Provider

### Delegation Provider (most common)

Use the macros in `delegation.rs` to generate a provider:

```rust
// In src/provider/my_provider.rs
use crate::stream::AssistantMessageEventStream;
use crate::types::*;

define_openai_delegation_provider! {
    name: MyProvider,
    doc: "My provider (OpenAI-compatible).",
    provider_type: Provider::MyProvider,
    env_var: "MY_API_KEY",
    default_compat: || OpenAICompletionsCompat {
        supports_store: false,
        ..Default::default()
    },
}
```

Then add `pub mod my_provider;` to `mod.rs`.

### Direct Provider (facade)

For providers that need custom logic (such as Ollama's no-API-key setup or MiniMax's dual environment variables):

1. Create `src/provider/<name>.rs` with a struct wrapping the protocol
2. Implement `LLMProtocol` by delegating to the inner protocol
3. Add `pub mod <name>;` to `mod.rs`
4. Add tests in `tests/`
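The delegation in step 2 is plain method forwarding. A self-contained sketch of the pattern with stand-in types (in the crate, the trait is `LLMProtocol` and streaming returns an `AssistantMessageEventStream`; the names below are illustrative):

```rust
// Stand-in trait modelling the protocol interface from step 2.
trait Protocol {
    fn stream(&self, model_id: &str) -> String;
}

// The inner protocol that actually speaks the wire format.
struct InnerProtocol;

impl Protocol for InnerProtocol {
    fn stream(&self, model_id: &str) -> String {
        format!("streaming {model_id}")
    }
}

// The facade owns the protocol and forwards every call, which is where
// custom logic (default keys, base URLs) would be injected first.
struct MyProvider {
    inner: InnerProtocol,
}

impl Protocol for MyProvider {
    fn stream(&self, model_id: &str) -> String {
        self.inner.stream(model_id)
    }
}

fn main() {
    let provider = MyProvider { inner: InnerProtocol };
    assert_eq!(provider.stream("my-model"), "streaming my-model");
}
```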

### When to Write a New Protocol Instead

Only when the target API uses a **completely different** HTTP/SSE wire format from the existing four protocols. See the [Protocol README](../protocol/README.md) for details.
