§ai-lib-rust
Protocol Runtime for AI-Protocol: a high-performance Rust reference implementation of the AI-Protocol specification, providing a unified, provider-agnostic interface for interacting with AI models across multiple vendors.
§Overview
This library implements the AI-Protocol specification as a runtime, where all logic is expressed as operators and all configuration as protocol manifests. It provides a unified interface for interacting with AI models across different providers without hardcoding provider-specific logic.
§Core Philosophy
- Protocol-Driven: All behavior is configured through protocol manifests, not code
- Provider-Agnostic: Unified interface across OpenAI, Anthropic, Google, and others
- Streaming-First: Native support for Server-Sent Events (SSE) streaming
- Type-Safe: Strongly typed request/response handling with comprehensive error types
§Key Features
- Unified Client: `AiClient` provides a single entry point for all AI interactions
- Protocol Loading: Load and validate protocol manifests from local files or remote URLs
- Streaming Pipeline: Configurable operator pipeline for response processing
- Batching: Efficient request batching with `batch::BatchCollector`
- Caching: Response caching with pluggable backends via the `cache` module
- Resilience: Circuit breaker and rate limiting via the `resilience` module
- Content Safety: Guardrails for content filtering via the `guardrails` module
- Telemetry: Optional feedback collection via the `telemetry` module
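For intuition, the batching feature can be illustrated with a minimal, self-contained collector sketch. This is not the real `batch::BatchCollector` API; the type and method names below are illustrative only, showing the general collect-until-full-then-flush pattern:

```rust
// A simplified batching collector: queue requests until a size limit is
// reached, then hand the whole batch out at once. (Illustrative only;
// the crate's actual `batch::BatchCollector` may differ.)
struct BatchCollector<T> {
    max_size: usize,
    pending: Vec<T>,
}

impl<T> BatchCollector<T> {
    fn new(max_size: usize) -> Self {
        Self { max_size, pending: Vec::new() }
    }

    /// Queue a request; returns a full batch once the size limit is hit.
    fn push(&mut self, item: T) -> Option<Vec<T>> {
        self.pending.push(item);
        if self.pending.len() >= self.max_size {
            Some(std::mem::take(&mut self.pending))
        } else {
            None
        }
    }

    /// Flush whatever is pending, e.g. on a timer tick.
    fn flush(&mut self) -> Vec<T> {
        std::mem::take(&mut self.pending)
    }
}

fn main() {
    let mut collector = BatchCollector::new(3);
    assert!(collector.push("req-1").is_none());
    assert!(collector.push("req-2").is_none());
    // The third push reaches the limit and yields a full batch.
    let batch = collector.push("req-3").expect("batch should be full");
    assert_eq!(batch.len(), 3);
    assert!(collector.flush().is_empty());
}
```

Batching amortizes per-request overhead (connection setup, auth, scheduling) across many requests, which is why a size-or-timeout flush policy is the common design.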
§Quick Start
```rust
use ai_lib_rust::{AiClient, AiClientBuilder, Message, MessageRole};

#[tokio::main]
async fn main() -> ai_lib_rust::Result<()> {
    let client = AiClientBuilder::new()
        .with_protocol_path("protocols/openai.yaml")?
        .with_api_key("your-api-key")
        .build()?;

    let messages = vec![Message::user("Hello, how are you?")];

    // Streaming response
    let mut stream = client.chat_stream(&messages, None).await?;
    // Process stream events...

    Ok(())
}
```
§Module Organization
| Module | Description |
|---|---|
| protocol | Protocol specification loading and validation |
| client | AI client implementation and builders |
| pipeline | Streaming response pipeline operators |
| types | Core type definitions (messages, events, tools) |
| batch | Request batching and parallel execution |
| cache | Response caching with multiple backends |
| embeddings | Embedding generation and vector operations |
| resilience | Circuit breaker and rate limiting |
| guardrails | Content filtering and safety checks |
| tokens | Token counting and cost estimation |
| telemetry | Optional feedback and telemetry collection |
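The `// Process stream events...` placeholder in the Quick Start typically becomes a match over streaming events. The sketch below uses a simplified local stand-in for `types::events::StreamingEvent` (the real enum's variants may differ; these names are illustrative) to show the accumulate-deltas-until-done pattern:

```rust
// Simplified stand-in for the crate's streaming event type.
// (Illustrative only; see `types::events::StreamingEvent` for the real enum.)
enum StreamingEvent {
    ContentDelta(String),
    Done,
}

// Accumulate incremental text deltas into the full reply.
fn collect_text(events: impl IntoIterator<Item = StreamingEvent>) -> String {
    let mut text = String::new();
    for event in events {
        match event {
            // Append each incremental chunk to the running reply.
            StreamingEvent::ContentDelta(delta) => text.push_str(&delta),
            // The terminal event ends the stream.
            StreamingEvent::Done => break,
        }
    }
    text
}

fn main() {
    let events = vec![
        StreamingEvent::ContentDelta("Hello".into()),
        StreamingEvent::ContentDelta(", world".into()),
        StreamingEvent::Done,
    ];
    assert_eq!(collect_text(events), "Hello, world");
}
```

In the real async setting the loop would instead poll the stream (e.g. with `StreamExt::next().await`) rather than iterate a `Vec`, but the per-event match is the same shape.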
Re-exports§
pub use client::CallStats;
pub use client::CancelHandle;
pub use client::ChatBatchRequest;
pub use client::EndpointExt;
pub use client::AiClient;
pub use client::AiClientBuilder;
pub use telemetry::FeedbackEvent;
pub use telemetry::FeedbackSink;
pub use types::events::StreamingEvent;
pub use types::message::Message;
pub use types::message::MessageRole;
pub use types::tool::ToolCall;
pub use error::Error;
pub use error::ErrorContext;
Modules§
- batch
- Request batching: efficient collection and execution of batched requests.
- cache
- Response caching: pluggable cache backends that reduce duplicate API calls.
- client
- Unified client interface for AI-Protocol runtime.
- embeddings
- Embeddings: text embedding generation and vector similarity computation.
- error
- Error handling: unified error types and structured error context for the library.
- guardrails
- Content safety: configurable content filtering and sensitive-information detection.
- pipeline
- Pipeline processing: the core operator execution engine for streaming responses.
- plugins
- Plugin and middleware system.
- protocol
- Protocol specification layer: loads, validates, and manages AI-Protocol specification files.
- resilience
- Resilience patterns: reliability mechanisms such as circuit breakers and rate limiters.
- telemetry
- Telemetry and feedback: optional, application-controlled collection of user feedback.
- tokens
- Token counting and cost estimation: multiple token-counting methods and price calculation.
- transport
- types
- Type system: core data types based on the AI-Protocol specification.
- utils
- Utility modules
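The circuit-breaker pattern named under `resilience` can be sketched in a few lines: after a threshold of consecutive failures the breaker opens and rejects calls outright, protecting a struggling upstream. This is a minimal, self-contained illustration of the pattern, not the crate's actual `resilience` API:

```rust
// Minimal circuit-breaker sketch. (Illustrative only; the crate's
// `resilience` module may expose a different interface.)
#[derive(PartialEq)]
enum State {
    Closed,
    Open,
}

struct CircuitBreaker {
    state: State,
    failures: u32,
    threshold: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { state: State::Closed, failures: 0, threshold }
    }

    /// Run `op` through the breaker. Returns `None` (fail fast) while open.
    fn call<T, E>(&mut self, op: impl FnOnce() -> Result<T, E>) -> Option<Result<T, E>> {
        if self.state == State::Open {
            return None; // reject immediately while the breaker is open
        }
        let result = op();
        match &result {
            // A success resets the consecutive-failure counter.
            Ok(_) => self.failures = 0,
            // Enough consecutive failures trip the breaker open.
            Err(_) => {
                self.failures += 1;
                if self.failures >= self.threshold {
                    self.state = State::Open;
                }
            }
        }
        Some(result)
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(2);
    assert!(cb.call(|| Err::<(), _>("boom")).is_some());
    assert!(cb.call(|| Err::<(), _>("boom")).is_some());
    // Two consecutive failures opened the breaker: calls are now rejected.
    assert!(cb.call(|| Ok::<_, &str>(42)).is_none());
}
```

A production breaker would also add a half-open state that probes the upstream after a cooldown, which is the usual companion to the open/closed states shown here.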
Type Aliases§
- BoxStream
- A unified pinned, boxed stream that emits PipeResult<T>
- PipeResult
- A specialized Result type for pipeline operations
- Result
- Result type alias for the library