# langchainrust
A LangChain-inspired Rust framework for building LLM applications. Provides abstractions for agents, chains, memory, RAG pipelines, and tool-calling workflows.
## ✨ Features
| Component | Description |
|---|---|
| LLM | OpenAI-compatible API with streaming support |
| Agents | ReActAgent for reasoning + acting |
| Prompts | PromptTemplate and ChatPromptTemplate |
| Memory | Conversation history management |
| Chains | LLMChain and SequentialChain workflows |
| RAG | Document splitting, vector stores, semantic retrieval |
| Tools | Built-in: Calculator, DateTime, Math, URLFetch |
### Key Benefits
- 🚀 Fully Async - Tokio-based async/await support
- 🔒 Type-Safe - Leverage Rust's type system
- 📦 Zero-Cost Abstractions - High-performance design
- 🎯 Simple API - Intuitive interfaces
- 🔌 Extensible - Easy to add custom tools
## 📦 Installation

Add to `Cargo.toml`:

```toml
[dependencies]
langchainrust = "0.1.2"
tokio = { version = "1.0", features = ["full"] }
```
## 🚀 Quick Start

### Basic Chat

```rust
use langchainrust::language_models::openai::OpenAI;
use langchainrust::schema::Message;

// Construction and method names are illustrative; see examples/hello_llm for the exact API.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = OpenAI::default();
    println!("{}", llm.invoke(vec![Message::human("Hello!")]).await?);
    Ok(())
}
```
### Prompt Templates

```rust
use langchainrust::prompts::{ChatPromptTemplate, PromptTemplate};
use langchainrust::schema::Message;
use std::collections::HashMap;

// Template strings, variable names, and constructors are illustrative;
// see examples/prompt_template for the exact API.

// String template
let template = PromptTemplate::new("Tell me a {adjective} joke about {topic}.");
let mut vars = HashMap::new();
vars.insert("adjective".to_string(), "short".to_string());
vars.insert("topic".to_string(), "Rust".to_string());
let prompt = template.format(&vars)?;

// Chat template
let chat_template = ChatPromptTemplate::new(vec![
    Message::system("You are {name}, a {role}."),
    Message::human("{question} (answer in {language})"),
]);
let mut vars = HashMap::new();
vars.insert("name".to_string(), "Ferris".to_string());
vars.insert("role".to_string(), "Rust tutor".to_string());
vars.insert("question".to_string(), "What is ownership?".to_string());
vars.insert("language".to_string(), "English".to_string());
let messages = chat_template.format(&vars)?;
```
### Agent with Tools

```rust
use langchainrust::agents::react::ReActAgent;
use langchainrust::core::tools::Tool;
use langchainrust::tools::{Calculator, DateTime};
use std::sync::Arc;

// Type names and arguments are illustrative; see examples/agent_with_tools.
// `llm` is an OpenAI client built as in Basic Chat.
let tools: Vec<Arc<dyn Tool>> = vec![Arc::new(Calculator::new()), Arc::new(DateTime::new())];
let agent = ReActAgent::new(llm, tools);
let executor = AgentExecutor::new(agent)
    .with_max_iterations(5);
let result = executor.invoke("What is 12 * 34, and what day is it today?").await?;
println!("{result}");
```
### Memory

```rust
// The history type name is illustrative; see examples/memory_conversation.
use langchainrust::memory::ChatMessageHistory;
use langchainrust::schema::Message;

let mut history = ChatMessageHistory::new();
// Add messages
history.add_message(Message::human("Hi, I'm Alice."));
history.add_message(Message::ai("Nice to meet you, Alice!"));
// Retrieve messages
for msg in history.messages() {
    println!("{msg:?}");
}
```
### Chain Pipelines

```rust
use langchainrust::chains::{LLMChain, SequentialChain};
use langchainrust::prompts::PromptTemplate;
use std::collections::HashMap;
use std::sync::Arc;
use serde_json::Value;

// Constructor arguments and key names are illustrative; see examples/chain_pipeline.
// Single chain
let chain1 = LLMChain::new(llm.clone(), PromptTemplate::new("Summarize: {text}"));
// Sequential chains
let chain2 = LLMChain::new(llm.clone(), PromptTemplate::new("Translate into French: {summary}"));
let pipeline = SequentialChain::new()
    .add_chain(Arc::new(chain1))
    .add_chain(Arc::new(chain2));
let mut inputs = HashMap::new();
inputs.insert("text".to_string(), Value::String("Rust is a systems programming language.".to_string()));
let results = pipeline.invoke(inputs).await?;
```
### RAG Pipeline

```rust
use langchainrust::embeddings::OpenAIEmbeddings;
use langchainrust::retrieval::{Retriever, TextSplitter};
use langchainrust::schema::Document;
use langchainrust::vector_stores::InMemoryVectorStore;
use std::sync::Arc;

// Type names and arguments are illustrative; see examples/rag_demo for the exact API.
// Create documents
let docs = vec![Document::new("Rust is a systems programming language focused on safety and speed.")];
// Split documents (chunk size / overlap values are illustrative)
let splitter = TextSplitter::new(500, 50);
let chunks = splitter.split_document(&docs[0]);
// Create retriever
let store = InMemoryVectorStore::new();
let embeddings = OpenAIEmbeddings::default();
let retriever = Retriever::new(Arc::new(store), Arc::new(embeddings));
// Index documents
retriever.add_documents(chunks).await?;
// Search
let relevant_docs = retriever.retrieve("What is Rust?").await?;
```
## 📚 Examples
See examples/ for complete code:
### Basic

- `hello_llm` - Basic LLM chat
- `streaming` - Streaming output
- `prompt_template` - Using templates
- `tools` - Built-in tools

### Intermediate

- `agent_with_tools` - Agent with tool calling
- `memory_conversation` - Multi-turn conversations
- `chain_pipeline` - Chain workflows

### Advanced

- `rag_demo` - Full RAG pipeline
- `multi_tool_agent` - Agent with multiple tools
- `full_pipeline` - Complete AI application
Run examples:

```bash
# Without API key
cargo run --example tools

# With API key
OPENAI_API_KEY=sk-... cargo run --example hello_llm
```
## 🧪 Testing

```bash
# Run all tests
cargo test

# Run specific module
cargo test prompts

# Show test output
cargo test -- --nocapture
```
## 📁 Project Structure

```text
src/
├── core/                # Core abstractions
│   ├── language_models/ # Base LLM traits
│   ├── runnables/       # Runnable trait
│   └── tools/           # Tool trait
├── language_models/     # LLM implementations
│   └── openai/          # OpenAI client
├── agents/              # Agent framework
│   └── react/           # ReActAgent
├── prompts/             # Prompt templates
├── memory/              # Memory management
├── chains/              # Chain workflows
├── retrieval/           # RAG components
├── embeddings/          # Text embeddings
├── vector_stores/       # Vector databases
├── tools/               # Built-in tools
└── schema/              # Data structures
```
## 🔧 Configuration

### Environment Variables

```bash
# Variable names assume the common OpenAI convention
export OPENAI_API_KEY="sk-..."

# Optional: custom endpoint
export OPENAI_BASE_URL="https://api.example.com/v1"
```
### OpenAIConfig Options

| Field | Type | Description |
|---|---|---|
| `api_key` | `String` | OpenAI API key |
| `base_url` | `String` | API endpoint (supports proxies) |
| `model` | `String` | Model name (e.g., `"gpt-3.5-turbo"`) |
| `streaming` | `bool` | Enable streaming responses |
| `temperature` | `Option<f32>` | Sampling temperature (0.0-2.0) |
| `max_tokens` | `Option<usize>` | Maximum tokens to generate |
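For illustration, the options above can be collected into a struct literal along these lines. Note this is a self-contained stand-in mirroring the table, not the crate's actual definition, which may differ in derives and defaults:

```rust
// Stand-in mirroring the OpenAIConfig fields from the table above;
// the crate's real definition may differ.
#[derive(Debug, Clone, PartialEq)]
pub struct OpenAIConfig {
    pub api_key: String,
    pub base_url: String,
    pub model: String,
    pub streaming: bool,
    pub temperature: Option<f32>,
    pub max_tokens: Option<usize>,
}

pub fn example_config() -> OpenAIConfig {
    OpenAIConfig {
        // Never hard-code real keys; read them from the environment.
        api_key: std::env::var("OPENAI_API_KEY").unwrap_or_default(),
        base_url: "https://api.openai.com/v1".to_string(),
        model: "gpt-3.5-turbo".to_string(),
        streaming: false,
        temperature: Some(0.7),
        max_tokens: Some(512),
    }
}

fn main() {
    println!("{:?}", example_config());
}
```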
## 🔐 Security
- Never commit API keys to version control
- Use environment variables for secrets
- Support for proxy/custom endpoints
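A minimal sketch of the environment-variable pattern above. The variable name `OPENAI_API_KEY` follows the common OpenAI convention and is an assumption here, not something the crate mandates:

```rust
use std::env;

/// Treat empty values the same as unset ones.
pub fn sanitize_key(raw: Option<String>) -> Option<String> {
    raw.filter(|k| !k.is_empty())
}

/// Load the API key from the environment, never from source code.
pub fn api_key() -> Option<String> {
    sanitize_key(env::var("OPENAI_API_KEY").ok())
}

fn main() {
    match api_key() {
        Some(key) => println!("key loaded ({} chars)", key.len()),
        None => eprintln!("OPENAI_API_KEY is not set"),
    }
}
```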
## 📖 Documentation
## 🤝 Contributing
Contributions are welcome! See CONTRIBUTING.md for details.
## 📄 License
Licensed under either of:
- Apache License, Version 2.0
- MIT License
## 🙏 Acknowledgments
Inspired by LangChain, implemented in Rust.