§ai-lib-rust
Protocol Runtime for AI-Protocol: a high-performance Rust reference implementation of the AI-Protocol specification that provides a unified, provider-agnostic interface for AI model interactions.
§Overview
This library implements the AI-Protocol specification as a runtime, where all logic is operators and all configuration is protocol. It provides a unified interface for interacting with AI models across different providers without hardcoding provider-specific logic.
§Core Philosophy
- Protocol-Driven: All behavior is configured through protocol manifests, not code
- Provider-Agnostic: Unified interface across OpenAI, Anthropic, Google, and others
- Streaming-First: Native support for Server-Sent Events (SSE) streaming
- Type-Safe: Strongly typed request/response handling with comprehensive error types
§Key Features
- Unified Client: `AiClient` provides a single entry point for all AI interactions
- Protocol Loading: Load and validate protocol manifests from local files or remote URLs
- Streaming Pipeline: Configurable operator pipeline for response processing
- Batching: Efficient request batching with `batch::BatchCollector` (requires the `batch` feature)
- Caching: Response caching with pluggable backends via the `cache` module
- Resilience: Circuit breaker and rate limiting via the `resilience` module
- Content Safety: Guardrails for content filtering via the `guardrails` module
- Telemetry: Optional feedback collection via the `telemetry` module
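Because the client is streaming-first, responses arrive as Server-Sent Events. As a rough illustration of the framing such a pipeline has to handle (a generic SSE sketch, not the crate's actual parser), extracting the payload of a `data:` line looks like this:

```rust
/// Extract the payload of an SSE `data:` field line, if the line is one.
/// Generic SSE framing sketch; ai-lib-rust's internal parser may differ.
fn sse_data_payload(line: &str) -> Option<&str> {
    // Per the SSE format, a field is `data:` optionally followed by one space.
    let rest = line.strip_prefix("data:")?;
    Some(rest.strip_prefix(' ').unwrap_or(rest))
}
```

Comment lines (starting with `:`) and non-`data` fields yield `None`, which is why a streaming pipeline can skip heartbeats without special-casing them.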
§Quick Start
```rust
use ai_lib_rust::{AiClient, AiClientBuilder, Message, MessageRole};

#[tokio::main]
async fn main() -> ai_lib_rust::Result<()> {
    let client = AiClientBuilder::new()
        .with_protocol_path("protocols/openai.yaml")?
        .with_api_key("your-api-key")
        .build()?;

    let messages = vec![Message::user("Hello, how are you?")];

    // Streaming response
    let mut stream = client.chat_stream(&messages, None).await?;
    // Process stream events...

    Ok(())
}
```
§Module Organization
| Module | Description |
|---|---|
| `protocol` | Protocol specification loading and validation |
| `client` | AI client implementation and builders |
| `pipeline` | Streaming response pipeline operators |
| `types` | Core type definitions (messages, events, tools) |
| `batch` | Request batching and parallel execution (requires the `batch` feature) |
| `cache` | Response caching with multiple backends |
| `embeddings` | Embedding generation and vector operations (requires the `embeddings` feature) |
| `resilience` | Circuit breaker and rate limiting |
| `guardrails` | Content filtering and safety checks (requires the `guardrails` feature) |
| `tokens` | Token counting and cost estimation (requires the `tokens` feature) |
| `telemetry` | Optional feedback and telemetry collection (requires the `telemetry` feature) |
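The `resilience` module combines a circuit breaker with rate limiting. The breaker pattern itself can be sketched in a few lines (illustrative only; the struct, method names, and threshold policy here are assumptions, not the crate's API):

```rust
/// Minimal circuit-breaker sketch: trips open after `threshold` consecutive
/// failures so callers can fail fast instead of hammering a broken provider.
/// Illustrative only; the real `resilience` module's API may differ.
struct CircuitBreaker {
    threshold: u32,
    consecutive_failures: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { threshold, consecutive_failures: 0 }
    }

    /// While open, requests should be rejected without hitting the provider.
    fn is_open(&self) -> bool {
        self.consecutive_failures >= self.threshold
    }

    /// Record the outcome of a call; any success resets the failure count.
    /// (Production breakers add a half-open probe state; omitted here.)
    fn record(&mut self, success: bool) {
        if success {
            self.consecutive_failures = 0;
        } else {
            self.consecutive_failures += 1;
        }
    }
}
```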
§Re-exports
pub use client::CallStats;
pub use client::CancelHandle;
pub use client::ClientMetrics;
pub use client::ChatBatchRequest;
pub use client::EndpointExt;
pub use client::AiClient;
pub use client::AiClientBuilder;
pub use feedback::FeedbackEvent;
pub use feedback::FeedbackSink;
pub use types::events::StreamingEvent;
pub use types::message::Message;
pub use types::message::MessageRole;
pub use types::tool::ToolCall;
pub use error::Error;
pub use error::ErrorContext;
pub use error_code::StandardErrorCode;
§Modules
- cache
  - Response caching module: pluggable cache backends that reduce repeated API calls.
- client
  - Unified client interface: the protocol-driven entry point for AI model interactions.
- drivers
  - Provider driver abstraction layer: trait-based dynamic dispatch over multi-vendor API adapters.
- error
  - Error handling module: the library's unified error type and structured error context.
- error_code
  - V2 standard error codes: defines the 13 canonical error codes and their retry/fallback semantics.
- feedback
  - Core feedback types: the FeedbackSink trait and feedback event variants (always compiled).
- pipeline
  - Pipeline processing module: the core operator execution engine for streaming responses.
- plugins
  - Plugin and middleware system.
- protocol
  - Protocol specification layer: loads, validates, and manages AI-Protocol specification files.
- registry
  - Capability registry: dynamically loads and manages runtime modules according to manifest declarations.
- resilience
  - Resilience patterns module: reliability mechanisms such as circuit breakers and rate limiters.
- structured
  - Structured output module for ai-lib-rust.
- transport
- types
  - Type system module: core data types based on the AI-Protocol specification.
- utils
  - Utility modules.
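The trait-based dynamic dispatch that the `drivers` and `registry` modules describe can be sketched as follows. Note the trait, method names, and selection logic are assumptions for illustration, not the crate's actual definitions:

```rust
/// Hypothetical sketch of trait-object dispatch over provider drivers.
/// The real `drivers` module defines its own abstraction.
trait ProviderDriver {
    fn name(&self) -> &'static str;
    fn endpoint(&self) -> String;
}

struct OpenAiDriver;
struct AnthropicDriver;

impl ProviderDriver for OpenAiDriver {
    fn name(&self) -> &'static str { "openai" }
    fn endpoint(&self) -> String { "https://api.openai.com/v1/chat/completions".into() }
}

impl ProviderDriver for AnthropicDriver {
    fn name(&self) -> &'static str { "anthropic" }
    fn endpoint(&self) -> String { "https://api.anthropic.com/v1/messages".into() }
}

/// A registry-style lookup: pick a driver at runtime from the provider id
/// declared in the protocol manifest, returning a boxed trait object.
fn select(provider: &str) -> Option<Box<dyn ProviderDriver>> {
    match provider {
        "openai" => Some(Box::new(OpenAiDriver)),
        "anthropic" => Some(Box::new(AnthropicDriver)),
        _ => None,
    }
}
```

Dispatching through `Box<dyn ProviderDriver>` is what lets the client stay provider-agnostic: no provider-specific type appears in the calling code.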
§Type Aliases
- BoxStream
  - A unified pinned, boxed stream that emits `PipeResult<T>`
- PipeResult
  - A specialized Result for pipeline operations
- Result
  - Result type alias for the library
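Aliases like these are typically thin wrappers over `std::result::Result`. A dependency-free sketch of how they might be shaped (illustrative; the crate's actual definitions differ, and its `BoxStream` is described as a pinned, boxed stream rather than an iterator):

```rust
// Illustrative shapes for the aliases above; names other than the aliases
// themselves (e.g. this Error enum) are assumptions for the sketch.
#[derive(Debug, PartialEq)]
enum Error {
    Protocol(String),
}

/// Library-wide result alias.
type Result<T> = std::result::Result<T, Error>;

/// Specialized result for pipeline operators.
type PipeResult<T> = Result<T>;

/// Stand-in for the crate's pinned, boxed stream of `PipeResult<T>`;
/// an iterator trait object keeps this sketch free of async dependencies.
type BoxEvents<T> = Box<dyn Iterator<Item = PipeResult<T>>>;

/// A pipeline stage would hand back items of this shape.
fn demo() -> PipeResult<&'static str> {
    Ok("token")
}
```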