§echo-agent
§The Production-Grade AI Agent Framework for Rust
ReAct Engine • Multi-Agent • Memory • Streaming • MCP • IM Channels • Workflows
Chinese Documentation · Documentation · Examples · Changelog
§Quick Start
Add to Cargo.toml:
[dependencies]
echo-agent = "0.1.4"
tokio = { version = "1", features = ["full"] }
Define a tool and run an agent — in under 20 lines:
use echo_agent::prelude::*;
use echo_agent::{agent, tool};
#[tool(name = "add", description = "Add two numbers")]
async fn add(a: f64, b: f64) -> Result<ToolResult> {
Ok(ToolResult::success(format!("{}", a + b)))
}
#[tokio::main]
async fn main() -> Result<()> {
let mut agent = agent! {
model: "qwen3-max",
system_prompt: "You are a helpful math assistant",
tools: [AddTool],
}?;
let answer = agent.execute("What is 1337 * 42?").await?;
println!("{answer}");
Ok(())
}
§Why echo-agent?
Most AI agent frameworks live in Python. echo-agent brings full-featured agent development to Rust — feature parity with LangGraph, CrewAI, and AutoGen, plus the performance, type safety, and reliability that Rust delivers.
| | echo-agent | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Language | Rust | Python | Python | Python |
| Memory safety | Compile-time | GC | GC | GC |
| ReAct loop | Built-in | Built-in | Built-in | Built-in |
| Tool system | #[tool] macro + JSON Schema | Decorator | Decorator | Function calling |
| Multi-agent | SubAgent + Handoff | Graph | Crew | Conversation |
| Streaming | Native async streams | Callback | Limited | Callback |
| MCP protocol | Native (stdio/SSE/HTTP) | Via LangChain | No | No |
| IM channels | QQ + Feishu built-in | No | No | No |
| Workflow | Graph + DAG + Sequential | StateGraph | Sequential | Sequential |
| Context compression | SlidingWindow + LLM + Hybrid | No | No | No |
| Guardrails | Rule + LLM filtering | No | No | No |
| Sandbox | Local + Docker + K8s | No | No | Docker |
| Single binary deploy | Yes | No | No | No |
§Deploy to IM in 5 lines
// Requires feature: channels
use echo_agent::channels::{ChannelManager, QqChannel, QqConfig, FeishuChannel, FeishuConfig};
let mut manager = ChannelManager::new();
manager.register(Box::new(QqChannel::new(QqConfig::new("app_id", "secret"))?));
manager.register(Box::new(FeishuChannel::new(FeishuConfig::new_long_poll("app_id".into(), "secret".into()))?));
manager.start_all(handler).await?; // `handler` is your incoming-message handler
§Run examples
cargo run --example demo01_tools # Custom tools
cargo run --example demo25_macros # Macro system
cargo run --example demo34_workflow_stream # Workflow streaming
cargo run --example demo36_multimodal # Multi-modal messages
cargo run --example demo38_im_channels --features channels # IM channels
§Feature Flags
| Feature | Default | Description |
|---|---|---|
| full | on | All features enabled |
| web | on | Web search + page fetch (DuckDuckGo/Brave/Tavily) |
| mcp | on | Model Context Protocol client |
| media | on | PDF/Excel/Word/image extraction |
| channels | on | QQ Bot + Feishu IM integrations |
| human-loop | on | Human-in-the-loop approval (Console/Webhook/WebSocket) |
| plan-execute | on | Plan-then-execute agent pattern |
| self-reflection | on | LLM self-critique and refinement |
| subagent | on | Multi-agent orchestration |
| sqlite | on | SQLite-backed persistent memory |
| telemetry | on | OpenTelemetry tracing + metrics |
| a2a | on | Agent-to-Agent protocol (server + client) |
| handoff | on | Agent handoff/collaboration |
| topology | on | Multi-agent topology tracking |
| tasks | on | DAG task scheduling |
| data | on | Polars-powered data tools |
| rag | on | Retrieval-Augmented Generation |
| chart | on | Chart generation tools |
| git | on | Git operations tools |
| database | — | SQL database tools (requires sqlx) |
| content-guard | — | Content filtering guardrails |
| project-rules | — | .claude/rules project rule parsing |
§Architecture
┌─────────────┐
│ Your App │
└──────┬──────┘
│
┌────────────────▼────────────────┐
│ ReactAgent │
│ │
│ ┌──────────┐ ┌──────────────┐ │
│ │ Context │ │ Tools │ │
│ │ Manager │ │ Manager │ │
│ │(compress) │ │(retry/limit) │ │
│ └──────────┘ └──────────────┘ │
│ │
│ ┌──────────┐ ┌──────────────┐ │
│ │ Memory │ │ Human │ │
│ │Store+Cp │ │ Approval │ │
│ └──────────┘ └──────────────┘ │
│ │
│ ┌──────────┐ ┌──────────────┐ │
│ │ Skills │ │ SubAgent │ │
│ │ Registry │ │ Registry │ │
│ └──────────┘ └──────────────┘ │
└────────────────┬────────────────┘
│
┌──────────────────────▼──────────────────────┐
│ LLM Providers │
│ OpenAI · Anthropic · DeepSeek · Qwen · Ollama │
└─────────────────────────────────────────────┘
§Feature Matrix
echo-agent ships with 28+ capabilities across 6 crates, all accessible through a single import: use echo_agent::prelude::*;
§Core
| Feature | Description | API Preview |
|---|---|---|
| ReAct Engine | Thought → Action → Observation loop with CoT | agent.execute("task").await? |
| Tool System | #[tool] macro with auto JSON Schema, timeout + retry | #[tool(name = "calc")] async fn calc(...) |
| Dual-layer Memory | Store (long-term KV) + Checkpointer (session) | .with_memory_tools(store) |
| Context Compression | SlidingWindow / LLM Summary / Hybrid | SlidingWindowCompressor::new(4096) |
| Token Budget | Auto-truncation + pre-think compression trigger | .max_tool_output_tokens(2000) |
| Unified Retry | One RetryPolicy for LLM, MCP, A2A, sandbox | with_retry(&policy, || ...) |
| Dynamic Tools | Add / remove / replace tools mid-conversation | agent.remove_tool("old") |
| Streaming | Real-time AgentEvent stream (tokens + tool calls) | agent.execute_stream(task).await? |
| Structured Output | LLM output → typed Rust structs via JSON Schema | agent.extract::<Contact>(text) |
| Multi-Modal | Text + images (base64/URL) + files in one message | Message::user_with_image(...) |
| Guard System | Rule-based / LLM-powered content filtering | #[guard(name = "safety")] async fn ... |
| Permission Model | Declarative tool permissions with pluggable policies | DefaultPermissionPolicy::new() |
| Audit Logging | Structured events with pluggable backends | agent.set_audit_logger(...) |
| Macro System | 11 macros: #[tool], agent!{}, messages![], … | agent! { model: "..", tools: [...] } |
§Multi-Agent & Orchestration
| Feature | Description | API Preview |
|---|---|---|
| SubAgent | Sync / Fork / Teammate execution modes | agent.register_agent(sub) |
| Agent Handoff | Context-aware transfer between agents | HandoffManager::new() |
| Plan-and-Execute | Explicit planning phase → step-by-step execution | PlanExecuteAgent::new(...) |
| Self-Reflection | LLM-based self-critique and refinement loops | SelfReflectionAgent::new(...) |
| Graph Workflow | Linear, conditional, loop, parallel fan-out/fan-in | GraphBuilder::new("pipeline") |
| DAG Tasks | Dependency-aware task scheduling with hooks | TaskManager::default() |
| Declarative Workflow | Define graphs in YAML/JSON — no Rust code needed | Graph::from_yaml("wf.yaml")? |
§Integrations
| Feature | Description | API Preview |
|---|---|---|
| MCP Protocol | Connect any MCP server (stdio / SSE / HTTP) | mcp.connect(McpServerConfig::stdio(...)) |
| A2A Protocol | Agent Card publishing, cross-framework collaboration | A2AServer::bind("0.0.0.0:3000") |
| Skill System | Progressive disclosure: discover → activate → use | agent.load_skill("web_research") |
| IM Channels | QQ Bot (WebSocket) & Feishu (Webhook) built-in | ChannelManager::new() |
| Web Tools | Search (DuckDuckGo/Brave/Tavily) + Page Fetch | WebSearchTool::auto() |
| Media Tools | PDF, Excel, Word, Image analysis built-in | ImageAnalysisTool |
| Data Tools | Polars-powered filter, aggregate, transform, stats | DataReadTool |
| Sandbox | Local / Docker / K8s code execution with limits | LocalSandbox::new() |
| OpenTelemetry | Distributed tracing and metrics via OTLP | init_telemetry(&config) |
| Snapshot/Rollback | Capture & restore agent state at any point | agent.snapshot() / agent.rollback(1) |
| Circuit Breaker | Auto-fail-fast when LLM is down | agent.set_circuit_breaker(config) |
§Feature Flags
# Minimal — just the ReAct engine
echo-agent = { version = "0.1.4", default-features = false }
# Full (default) — all features enabled
echo-agent = "0.1.4"
# Pick only what you need
echo-agent = { version = "0.1.4", default-features = false, features = ["mcp", "web"] }
| Feature | Enables | Key Dependencies |
|---|---|---|
| mcp | MCP protocol client | echo-mcp, tokio-tungstenite |
| web | Web search + fetch tools | scraper, html2text |
| media | PDF, Excel, Word, Image tools | lopdf, calamine, docx-rs |
| data | Polars data analysis | polars |
| sqlite | SQLite memory persistence | rusqlite |
| channels | QQ Bot + Feishu integrations | echo-channels |
| human-loop | Human-in-the-loop approvals | tokio-tungstenite |
| tasks | DAG task management | — |
| workflow | Graph workflow engine | — |
| plan-execute | Plan-and-Execute agent | — |
| self-reflection | Self-critique agent | — |
| subagent | Multi-agent orchestration | — |
| handoff | Agent handoff | — |
| a2a | Agent-to-Agent protocol | — |
| topology | Agent topology visualization | — |
| telemetry | OpenTelemetry tracing | opentelemetry |
§Workspace Structure
echo-agent/
├── echo-core/ Core traits: Tool, Agent, LlmClient, Guard, Error, Retry
├── echo-macros/ Procedural macros: #[tool], #[callback], #[guard], #[handler]
├── echo-execution/ Sandbox, skills, and tool execution
├── echo-state/ Memory, compression, and audit logging
├── echo-orchestration/ Workflow, human-loop, and DAG tasks
├── echo-integration/ LLM providers, MCP, and IM channels (QQ/Feishu)
├── src/ Agent engine, re-exports, and facade layer
├── examples/ 40+ runnable demos
├── docs/ Bilingual documentation (en + zh)
├── skills/ External skill packs (Markdown-based)
└── echo-agent.yaml Example configuration
Note: echo-agent is a library framework. For a ready-to-use application with CLI, Web UI, and WebSocket, see echo-agent-cli.
§Configuration
Create echo-agent.yaml in your project root:
# Provider / model registry (used by ProviderFactory and config-backed clients)
models:
qwen3-max:
provider: dashscope
api_key: ${DASHSCOPE_API_KEY}
deepseek-chat:
provider: deepseek
api_key: ${DEEPSEEK_API_KEY}
# Embedding config (used by semantic memory / vector search demos)
embedding:
base_url: https://api.openai.com
api_key: ${OPENAI_API_KEY}
model: text-embedding-3-small
timeout_secs: 30
# Runtime app config (used by examples such as IM channels)
model:
name: qwen3-max
max_tokens: 4096
temperature: 0.7
agent:
name: my-assistant
system_prompt: "You are a helpful assistant."
max_iterations: 10
enable_tools: true
enable_memory: true
channels:
qq:
enabled: false
app_id: ${QQ_APP_ID}
client_secret: ${QQ_CLIENT_SECRET}
feishu:
enabled: false
app_id: ${FEISHU_APP_ID}
app_secret: ${FEISHU_APP_SECRET}
mode: long_poll
session:
timeout_minutes: 60
reset_keywords: ["重置对话", "新对话", "清除记忆"]
reset_commands: ["/reset", "/clear", "/new"]
mcp:
config_path: ./mcp.json
server:
host: 0.0.0.0
port: 3000
logging:
level: info
Notes:
- models: is the registry used by ProviderFactory, LlmConfig::from_model(), and config-backed LLM clients.
- embedding: is used by semantic memory / vector search examples.
- model: / agent: / channels: / mcp: / server: / logging: are the framework runtime settings loaded by echo_agent::config.
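Values such as ${DASHSCOPE_API_KEY} are resolved from environment variables when the config is loaded. The expansion can be sketched in plain Rust (a hypothetical illustration; the real loader lives in echo_agent::config and may behave differently, e.g. for unset variables):

```rust
// Hypothetical sketch of `${VAR}` expansion in config values (ASCII input assumed).
// Unset variables keep their placeholder so the error surfaces visibly.

fn expand_env(input: &str) -> String {
    let mut out = String::new();
    let mut rest = input;
    while let Some(start) = rest.find("${") {
        out.push_str(&rest[..start]);
        match rest[start + 2..].find('}') {
            Some(end) => {
                let name = &rest[start + 2..start + 2 + end];
                match std::env::var(name) {
                    Ok(val) => out.push_str(&val),
                    // Unset: keep the `${NAME}` placeholder verbatim.
                    Err(_) => out.push_str(&rest[start..start + 2 + end + 1]),
                }
                rest = &rest[start + 2 + end + 1..];
            }
            None => {
                // No closing brace: keep the remainder verbatim.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    // With DASHSCOPE_API_KEY set in the environment this would print the key;
    // unset variables keep their placeholder.
    println!("{}", expand_env("api_key: ${DASHSCOPE_API_KEY}"));
}
```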
Set secrets via environment variables:
export DASHSCOPE_API_KEY=sk-xxx # Alibaba Qwen
export DEEPSEEK_API_KEY=sk-xxx # DeepSeek
export OPENAI_API_KEY=sk-xxx # OpenAI
export ANTHROPIC_API_KEY=sk-ant-xxx # Anthropic
export QQ_APP_ID=your-qq-app-id
export QQ_CLIENT_SECRET=your-qq-client-secret
export FEISHU_APP_ID=your-feishu-app-id
export FEISHU_APP_SECRET=your-feishu-app-secret
§Highlights
- 40+ capabilities — ReAct loop, tools, memory, streaming, multi-agent, skills, MCP, IM channels, guards, audit, and more
- 40 runnable examples — every feature has a demo you can cargo run immediately
- 629+ unit tests — comprehensive coverage across all modules
- 6 crates, 1 import — modular workspace, but use echo_agent::prelude::* is all you need
- Multi-modal — text, images (base64 & URL), and file attachments in a single message
- IM integration — QQ Bot (WebSocket) & Feishu (Webhook) out of the box
- Declarative workflows — define agent graphs in YAML/JSON, no Rust code required
- Unified retry — one RetryPolicy for all external calls (LLM, MCP, A2A, sandbox)
- Zero-cost abstractions — compiled to native code, no runtime overhead
§Core Concepts
echo-agent is built around several key concepts that enable flexible, production-ready agent development:
§1. ReAct Engine — Thought → Action → Observation loop
The foundation of echo-agent is the ReAct (Reasoning + Acting) pattern with built-in Chain-of-Thought prompting. Agents think step-by-step, decide which tool to call, observe results, and continue until they reach a final answer.
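The control flow can be sketched in plain Rust. This is a hypothetical stand-in, not the engine's internals: a scripted closure plays the role of the LLM, and the real engine adds prompting, CoT, tool schemas, and streaming on top.

```rust
// Minimal sketch of a ReAct loop: the "model" decides between calling a tool
// and producing a final answer; observations are fed back into the transcript.

#[derive(Debug, PartialEq)]
enum ModelOutput {
    // The model wants to call a tool with some input.
    Action { tool: String, input: String },
    // The model is done and produces the final answer.
    Final(String),
}

/// Ask the model, execute requested tools, feed observations back,
/// until a final answer or the iteration cap is reached.
fn react_loop(
    mut model: impl FnMut(&[String]) -> ModelOutput,
    tools: impl Fn(&str, &str) -> String,
    max_iterations: usize,
) -> Option<String> {
    let mut transcript: Vec<String> = Vec::new();
    for _ in 0..max_iterations {
        match model(&transcript) {
            ModelOutput::Final(answer) => return Some(answer),
            ModelOutput::Action { tool, input } => {
                let observation = tools(&tool, &input);
                transcript.push(format!("Action: {tool}({input})"));
                transcript.push(format!("Observation: {observation}"));
            }
        }
    }
    None // iteration budget exhausted without a final answer
}

fn main() {
    // Scripted model: call a tool first, then answer from the observation.
    let model = |transcript: &[String]| {
        if transcript.is_empty() {
            ModelOutput::Action { tool: "add".into(), input: "1337 42".into() }
        } else {
            ModelOutput::Final(transcript.last().unwrap().clone())
        }
    };
    let tools = |name: &str, input: &str| match name {
        "add" => {
            let sum: f64 = input
                .split_whitespace()
                .filter_map(|n| n.parse::<f64>().ok())
                .sum();
            sum.to_string()
        }
        _ => format!("unknown tool: {name}"),
    };
    let answer = react_loop(model, tools, 10);
    println!("{answer:?}"); // Some("Observation: 1379")
}
```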
use echo_agent::prelude::*;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let agent = ReactAgentBuilder::new()
.model("qwen3-max")
.system_prompt("You are a helpful assistant")
.build()?;
let answer = agent.execute("What is 42 * 1337?").await?;
println!("{answer}");
Ok(())
}
Three builder presets for different needs:
use echo_agent::prelude::*;
fn main() -> echo_agent::error::Result<()> {
// Minimal — no tools, no memory, just chat
let _agent = ReactAgentBuilder::simple("qwen3-max", "Be helpful")?;
// Standard — tools + CoT enabled
let _agent = ReactAgentBuilder::standard("qwen3-max", "assistant", "Be helpful")?;
// Full-featured — tools + memory + tasks + CoT
let _agent = ReactAgentBuilder::full_featured("qwen3-max", "assistant", "Be helpful")?;
Ok(())
}
§2. Tool System — #[tool] macro + auto JSON Schema
Define tools as simple async functions. The #[tool] macro generates parameter schemas, descriptions, and the TypedTool implementation automatically.
use echo_agent::{tool, prelude::*};
#[tool(name = "weather", description = "Get weather for a city")]
async fn weather(city: String) -> Result<ToolResult> {
Ok(ToolResult::success(format!("Sunny in {city}")))
}
// Use it: agent.add_tool(Box::new(WeatherTool));
Built-in media tools (feature media): PDF extract/info, Excel read/info/to_csv, Word read/info/structure, Image analysis, Text read/search/stats/process/export.
Built-in data tools (feature data): Polars-powered read/filter/aggregate/stats/transform/export.
§3. Dual-layer Memory — Store + Checkpointer
- Store: Long-term key-value storage with namespace isolation (InMemoryStore, FileStore, SqliteStore)
- Checkpointer: Session history preservation across restarts (FileCheckpointer, InMemoryCheckpointer)
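The Store side can be pictured as a map keyed by (namespace, key). The sketch below is hypothetical, not the echo-agent implementation, but it shows why namespace isolation prevents one user's memories from leaking into another's:

```rust
use std::collections::HashMap;

// Hypothetical in-memory sketch of a namespace-isolated key-value store
// (e.g. one namespace per user or per agent).

#[derive(Default)]
struct NamespacedStore {
    // (namespace, key) → value
    data: HashMap<(String, String), String>,
}

impl NamespacedStore {
    fn put(&mut self, ns: &str, key: &str, value: &str) {
        self.data
            .insert((ns.to_string(), key.to_string()), value.to_string());
    }
    fn get(&self, ns: &str, key: &str) -> Option<&String> {
        self.data.get(&(ns.to_string(), key.to_string()))
    }
}

fn main() {
    let mut store = NamespacedStore::default();
    store.put("user:alice", "favorite_color", "blue");
    store.put("user:bob", "favorite_color", "green");
    // Namespaces do not leak into each other:
    println!("{:?}", store.get("user:alice", "favorite_color")); // Some("blue")
    println!("{:?}", store.get("user:carol", "favorite_color")); // None
}
```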
One line to give your agent persistent memory — no manual tool wiring:
use echo_agent::prelude::*;
use std::sync::Arc;
fn main() -> echo_agent::error::Result<()> {
let store = Arc::new(InMemoryStore::new());
let _agent = ReactAgentBuilder::new()
.model("qwen3-max")
.with_memory_tools(store) // registers remember + recall + search_memory + forget
.build()?;
Ok(())
}
§4. Multi-Modal Messages — Text, images, files in one message
Send and receive images (base64 or URLs) and file attachments alongside text, compatible with OpenAI Vision and Anthropic APIs.
use echo_agent::prelude::*;
fn main() {
let base64_data = "..."; // your base64-encoded image
let _msg = Message::user_with_image(
"What's in this image?",
"image/png",
base64_data,
);
}
§5. Context Compression — Sliding window, LLM summary, hybrid
Manage token limits with configurable compression strategies that preserve conversation context.
use echo_agent::prelude::*;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
agent.set_compressor(SlidingWindowCompressor::new(4096)).await;
Ok(())
}
Three strategies:
- SlidingWindow — keeps the most recent messages within token budget
- SummaryCompressor — uses LLM to summarize older messages
- HybridCompressor — combines both for best quality
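The sliding-window strategy amounts to keeping the longest suffix of the history that fits the token budget. A hypothetical sketch over (message, token count) pairs; the real SlidingWindowCompressor operates on the agent's message types:

```rust
// Sketch of sliding-window compression: keep the most recent messages whose
// estimated token counts fit the budget, dropping the oldest first.

/// Returns the suffix of `messages` (as (text, token_count) pairs)
/// that fits within `budget` tokens, in chronological order.
fn sliding_window<'a>(messages: &[(&'a str, usize)], budget: usize) -> Vec<&'a str> {
    let mut kept = Vec::new();
    let mut used = 0;
    for &(text, tokens) in messages.iter().rev() {
        if used + tokens > budget {
            break; // everything older than this point is dropped
        }
        used += tokens;
        kept.push(text);
    }
    kept.reverse(); // restore chronological order
    kept
}

fn main() {
    let history = [("msg1", 300), ("msg2", 500), ("msg3", 400), ("msg4", 200)];
    // With a 700-token budget only the two most recent messages fit:
    println!("{:?}", sliding_window(&history, 700)); // ["msg3", "msg4"]
}
```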
Token counting — estimate token usage before calling the LLM:
use echo_agent::prelude::*;
fn main() {
let tokenizer = HeuristicTokenizer;
let count = tokenizer.count_tokens("Hello, world!");
println!("~{count} tokens"); // ~4 tokens
// For cost tracking across requests:
use echo_agent::tokenizer::TokenUsageTracker;
let tracker = TokenUsageTracker::new("gpt-4o");
tracker.record(1500, 800, Some(2300));
println!("{}", tracker.summary());
}
§6. Unified Retry Policy — One policy for all external calls
Configure retry, timeout, and backoff once, apply to LLM calls, MCP requests, A2A communication, and sandbox execution.
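Under such a policy the wait before each attempt typically grows exponentially up to the cap. A deterministic sketch of that schedule (jitter omitted for reproducibility; this illustrates the idea, not the RetryPolicy internals):

```rust
use std::time::Duration;

// Sketch of exponential backoff: base * 2^attempt, capped at max_delay.
// Real policies add random jitter on top to avoid thundering herds.

fn backoff_delay(base: Duration, max_delay: Duration, attempt: u32) -> Duration {
    let exp = base.saturating_mul(2u32.saturating_pow(attempt));
    exp.min(max_delay)
}

fn main() {
    let base = Duration::from_millis(500);
    let cap = Duration::from_secs(30);
    for attempt in 0..8 {
        println!("attempt {attempt}: wait {:?}", backoff_delay(base, cap, attempt));
    }
    // 500ms, 1s, 2s, 4s, 8s, 16s, 30s (capped), 30s (capped)
}
```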
use echo_agent::prelude::*;
use std::time::Duration;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let policy = RetryPolicy::new(3, Duration::from_millis(500))
.max_delay(Duration::from_secs(30))
.jitter(true);
// Apply to any fallible async operation:
let _response = with_retry(&policy, || async { Ok::<_, &str>("done") }).await.unwrap();
Ok(())
}
§7. Dynamic Tool Management — Add/remove/replace tools mid-conversation
Adapt toolset based on conversation phase or user needs without restarting the agent.
use echo_agent::{prelude::*, tool};
// User-defined tools (via #[tool] macro):
#[tool(name = "search_web", description = "Search the web")]
async fn search_web(query: String) -> Result<ToolResult> {
Ok(ToolResult::success(format!("Results for: {query}")))
}
fn main() -> echo_agent::error::Result<()> {
let mut agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
agent.add_tool(Box::new(SearchWebTool));
agent.remove_tool("search_web");
agent.replace_tool(Box::new(SearchWebTool));
Ok(())
}
§8. Human-in-the-Loop — Approval gates for critical actions
Require human approval before executing sensitive tools via Console, Webhook, or WebSocket interfaces.
// Requires feature: human-loop
use echo_agent::prelude::*;
use echo_agent::advanced::ConsoleHumanLoopProvider;
use std::sync::Arc;
fn main() -> echo_agent::error::Result<()> {
let agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
let approval: Arc<ConsoleHumanLoopProvider> = Arc::new(ConsoleHumanLoopProvider);
agent.set_human_loop_provider(approval);
Ok(())
}
Full 7-stage permission pipeline (inspired by Claude Code):
Bypass → Plan → Rules (deny-first) → ProtectedPaths → Cache (TTL) → DenialTracker → Mode dispatch
- SessionApprovalCache with configurable TTL (default 30 min)
- Audit Trail: PermissionAuditSink trait + InMemory/Logging/Composite implementations
- ProtectedPathChecker: .git / .env / .ssh always protected
- AI Classifier: RuleClassifier / LlmClassifier / CompositeClassifier for Auto mode
- DenialTracker: auto-fallback after consecutive denials
- PermissionMode: Default / Plan / Auto / AcceptEdits / BypassPermissions / DontAsk / Bubble
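The deny-first semantics of the Rules stage can be sketched as follows. This is a hypothetical exact-match illustration; the real stage sits inside the larger pipeline and presumably supports richer patterns:

```rust
// Sketch of a deny-first rule check: deny rules are consulted before allow
// rules, so an explicit deny always wins even if an allow rule also matches.

#[derive(Debug, PartialEq)]
enum RuleDecision {
    Deny,
    Allow,
    NoMatch, // fall through to the next pipeline stage
}

fn check_rules(tool: &str, deny: &[&str], allow: &[&str]) -> RuleDecision {
    if deny.iter().any(|pat| *pat == tool) {
        RuleDecision::Deny
    } else if allow.iter().any(|pat| *pat == tool) {
        RuleDecision::Allow
    } else {
        RuleDecision::NoMatch
    }
}

fn main() {
    let deny = ["shell"];
    let allow = ["shell", "read_file"]; // deny still wins for "shell"
    println!("{:?}", check_rules("shell", &deny, &allow));      // Deny
    println!("{:?}", check_rules("read_file", &deny, &allow));  // Allow
    println!("{:?}", check_rules("web_search", &deny, &allow)); // NoMatch
}
```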
use echo_agent::prelude::*;
#[tokio::main]
async fn main() {
// Grant read access, require approval for execute
let policy = DefaultPermissionPolicy::new()
.grant(ToolPermission::Read)
.require_approval(ToolPermission::Execute);
let decision = policy.check("shell", &[ToolPermission::Execute]).await;
assert!(decision.requires_approval());
}
§9. Multi-Agent Orchestration — Orchestrator + SubAgent teams
Coordinate multiple specialized agents with context isolation and handoff protocols.
Three execution modes:
- Sync — parent blocks until subagent returns
- Fork — subagent runs in background, parent continues
- Teammate — collaborative mode with shared Mailbox
// Requires feature: subagent
use echo_agent::prelude::*;
fn main() -> echo_agent::error::Result<()> {
let math_agent = ReactAgentBuilder::new()
.model("qwen3-max")
.name("math_expert")
.system_prompt("You solve math problems.")
.build()?;
let mut agent = ReactAgentBuilder::new()
.model("qwen3-max")
.enable_subagent()
.build()?;
agent.register_agent(Box::new(math_agent));
Ok(())
}
§10. Skill System — Progressive capability disclosure
Packages of related tools and prompts that can be discovered, activated, and used on demand.
use echo_agent::prelude::*;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
// Discover and activate file-based skills (SKILL.md packs):
agent.load_skills_from_dir("./skills/web_research").await?;
Ok(())
}
Pre-built skills: code_review, data_analyst, project-stats, python-linter, web_researcher.
§11. MCP Protocol — Connect any Model Context Protocol server
Integrate filesystem, databases, browsers, and other resources via standardized MCP servers.
// Requires feature: mcp
use echo_agent::prelude::*;
use echo_agent::advanced::{McpManager, McpServerConfig};
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let mut mcp = McpManager::new();
let tools = mcp.connect(McpServerConfig::stdio(
"filesystem", "npx", vec!["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
)).await?;
let mut agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
agent.add_tools(tools);
Ok(())
}
Supports three transports: stdio, SSE, HTTP.
§12. Plan-and-Execute — Explicit planning phase before execution
Planner agent creates a task DAG, Executor agent follows it step-by-step with optional replanning.
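Dependency-aware ordering of the planned steps amounts to a topological sort over the DAG. A Kahn-style sketch (illustrative only; the step names are invented and the real executor adds replanning on failure):

```rust
use std::collections::BTreeMap;

// Sketch of dependency-aware step ordering: each planned step runs only after
// all of its dependencies. BTreeMap keeps the tie-breaking deterministic.

/// Topological order over (step → dependencies); None if the graph has a cycle.
fn execution_order(deps: &BTreeMap<&str, Vec<&str>>) -> Option<Vec<String>> {
    let mut remaining: BTreeMap<&str, Vec<&str>> =
        deps.iter().map(|(k, v)| (*k, v.clone())).collect();
    let mut order = Vec::new();
    while !remaining.is_empty() {
        // Pick the first step whose dependencies are all already scheduled.
        let ready = remaining
            .iter()
            .find(|(_, d)| d.iter().all(|dep| order.contains(&dep.to_string())))
            .map(|(k, _)| *k)?; // no ready step left → cycle
        remaining.remove(ready);
        order.push(ready.to_string());
    }
    Some(order)
}

fn main() {
    let mut plan = BTreeMap::new();
    plan.insert("search", vec![]);
    plan.insert("summarize", vec!["search"]);
    plan.insert("write_report", vec!["summarize"]);
    println!("{:?}", execution_order(&plan)); // Some(["search", "summarize", "write_report"])
}
```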
// Requires feature: plan-execute
use echo_agent::prelude::*;
use echo_agent::advanced::PlanExecuteAgent;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
// planner_config / executor_config: your LLM configs for the planning and execution phases
let planner = PlanExecuteAgent::new("research_agent", planner_config, executor_config);
let result = planner.execute("Research quantum computing trends").await?;
println!("{result}");
Ok(())
}
§13. Streaming — Real-time token-by-token output
Receive AgentEvent streams including tokens, tool calls, and final answers as they happen.
use echo_agent::prelude::*;
use futures::StreamExt;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
let mut stream = agent.execute_stream("Explain quantum entanglement").await?;
while let Some(event) = stream.next().await {
match event? {
AgentEvent::Token(t) => print!("{t}"),
AgentEvent::FinalAnswer(a) => { println!("\n{a}"); break; }
_ => {}
}
}
Ok(())
}
§14. Structured Output — LLM responses to typed Rust structs
Extract structured data from LLM responses using JSON Schema validation.
use echo_agent::prelude::*;
use echo_agent::llm::ResponseFormat;
use serde::Deserialize;
use serde_json::json;
#[derive(Deserialize, Debug)]
struct Contact { name: String, email: String, phone: String }
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let agent = ReactAgentBuilder::new()
.model("qwen3-max")
.system_prompt("You are an extraction assistant")
.build()?;
let contacts: Vec<Contact> = agent.extract(
"Extract contacts from this text...",
ResponseFormat::json_schema("contacts", json!({
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"email": {"type": "string"},
"phone": {"type": "string"}
},
"required": ["name", "email", "phone"]
}
})),
).await?;
println!("{:?}", contacts);
Ok(())
}
§15. Declarative Workflow — YAML/JSON workflow definitions
Define agent graphs without writing Rust code.
name: research_pipeline
nodes:
- name: researcher
type: agent
model: qwen3-max
system_prompt: "You are a research assistant"
input_key: task
output_key: research
- name: writer
type: agent
model: qwen3-max
system_prompt: "You are a writing assistant"
input_key: research
output_key: result
edges:
- from: researcher
to: writer
entry: researcher
finish: [writer]
use echo_agent::prelude::*;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let wf = WorkflowDefinition::from_yaml("workflow.yaml")?;
let graph = wf.build_graph()?;
let state = SharedState::new();
let _result = graph.run(state).await?;
Ok(())
}
§16. Guard System — Rule-based and LLM-powered content filtering
Block or modify unsafe content on input and output with customizable guard pipelines.
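A guard pipeline boils down to folding the content through each guard in order and short-circuiting on a block. A hypothetical sketch; the guard functions and the Modify variant shown here are illustrative assumptions, not echo-agent's exact GuardResult API:

```rust
// Sketch of a guard pipeline: each guard passes, rewrites the content, or
// blocks it with a reason. A block stops the chain immediately.

enum GuardResult {
    Pass,
    Modify(String),              // assumed variant, for illustration
    Block { reason: String },
}

fn run_guards(guards: &[fn(&str) -> GuardResult], input: &str) -> Result<String, String> {
    let mut content = input.to_string();
    for guard in guards {
        match guard(&content) {
            GuardResult::Pass => {}
            GuardResult::Modify(new) => content = new,
            GuardResult::Block { reason } => return Err(reason),
        }
    }
    Ok(content)
}

// Example guards (hypothetical):
fn length_limit(content: &str) -> GuardResult {
    if content.len() > 50 {
        GuardResult::Block { reason: "content too long".to_string() }
    } else {
        GuardResult::Pass
    }
}

fn redact_secret(content: &str) -> GuardResult {
    if content.contains("sk-") {
        GuardResult::Modify(content.replace("sk-", "[REDACTED-"))
    } else {
        GuardResult::Pass
    }
}

fn main() {
    let guards: [fn(&str) -> GuardResult; 2] = [length_limit, redact_secret];
    println!("{:?}", run_guards(&guards, "my key is sk-abc123"));
    // Ok("my key is [REDACTED-abc123")
}
```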
use echo_agent::{guard, prelude::*};
#[guard(name = "length-limit")]
async fn check_length(content: &str, _: GuardDirection) -> Result<GuardResult> {
if content.len() > 50000 {
Ok(GuardResult::Block { reason: "Content too long".into() })
} else {
Ok(GuardResult::Pass)
}
}
§17. Graph Workflow Engine — LangGraph-style state machines
Build complex workflows with linear pipelines, conditional branches, loops, and parallel fan-out/fan-in.
use echo_agent::prelude::*;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let state = SharedState::new();
let graph = GraphBuilder::new("etl_pipeline")
.add_function_node("extract", |state| Box::pin(async move {
state.set("data", vec!["hello", "world"])?;
Ok(())
}))
.add_function_node("transform", |_state| Box::pin(async move {
// transform data...
Ok(())
}))
.add_edge("extract", "transform")
.add_edge("transform", Graph::END)
.build()?;
let _result = graph.run(state).await?;
Ok(())
}
Also supports streaming execution: graph.run_stream(state).await? yields WorkflowEvent per node.
§18. IM Channels — Deploy agents to messaging platforms
Connect your agent to QQ (WebSocket) and Feishu (Webhook) with automatic token management and reconnection.
// Requires feature: channels
use echo_agent::channels::{ChannelManager, QqChannel, QqConfig, FeishuChannel, FeishuConfig};
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
// QQ Bot — WebSocket gateway
let qq = QqChannel::new(QqConfig::new("your_app_id", "your_client_secret"))?;
// Feishu — HTTP webhook
let feishu = FeishuChannel::new(FeishuConfig::new_webhook(
"your_app_id".into(),
"your_app_secret".into(),
"0.0.0.0:8080".into(),
"/webhook".into(),
None,
))?;
let mut manager = ChannelManager::new();
manager.register(Box::new(qq));
manager.register(Box::new(feishu));
manager.start_all(handler).await?; // `handler` is your incoming-message handler (see demo38_im_channels)
Ok(())
}
Features:
- Unified ChannelPlugin interface — add new platforms by implementing one trait
- Automatic token management — OAuth caching and refresh, no manual handling
- WebSocket reconnection — exponential backoff, never drops silently
- Message queuing — async mpsc channel prevents lost messages under load
- Whitelist support — ChatConfig::with_allow_from() for access control
§19. Macro System — Declarative APIs for common patterns
#[tool], #[callback], #[guard], #[handler], agent!{}, messages![] and more.
use echo_agent::callback;
struct MyCallback;
#[callback]
impl MyCallback {
async fn on_tool_start(&self, _agent: &str, tool: &str, _args: &serde_json::Value) {
println!("[tool] {tool}");
}
}
§20. Web Tools — Search the internet and fetch web pages
Give your Agent real-time internet access with web search and page fetching.
// Requires feature: web
use echo_agent::prelude::*;
use echo_agent::tools::web::{WebSearchTool, WebFetchTool};
fn main() -> echo_agent::error::Result<()> {
let mut agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
// Auto-select best provider: Tavily > Brave > DuckDuckGo
agent.add_tool(Box::new(WebSearchTool::auto()));
agent.add_tool(Box::new(WebFetchTool::new()));
Ok(())
}
| Provider | Cost | Quality | Notes |
|---|---|---|---|
| DuckDuckGo | Free | Medium | HTML scraping, no API key needed |
| Brave | Free 2k/mo | High | Official API |
| Tavily | Paid (free tier) | Highest | AI-optimized for agents |
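The auto() selection order (Tavily, then Brave, then DuckDuckGo) can be sketched as a key-based fallback. This is a hypothetical illustration; where the real WebSearchTool reads its API keys from is not specified here:

```rust
// Sketch of "auto" provider selection: prefer providers with an API key
// configured, falling back to keyless DuckDuckGo scraping.

fn pick_search_provider(tavily_key: Option<&str>, brave_key: Option<&str>) -> &'static str {
    if tavily_key.is_some() {
        "tavily" // highest quality, AI-optimized
    } else if brave_key.is_some() {
        "brave" // official API, free tier
    } else {
        "duckduckgo" // no key required
    }
}

fn main() {
    // In practice the keys would come from configuration or env vars.
    println!("{}", pick_search_provider(None, Some("brave-key"))); // brave
    println!("{}", pick_search_provider(None, None));              // duckduckgo
}
```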
§21. Self-Reflection Agent — LLM self-critique and refinement
// Requires feature: self-reflection
use echo_agent::prelude::*;
use echo_agent::advanced::LlmCritic;
use echo_agent::agent::self_reflection::SelfReflectionAgent;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let generator = ReactAgentBuilder::new()
.model("qwen3-max")
.system_prompt("You are a technical writer.")
.build()?;
let critic = LlmCritic::new("qwen3-max");
let agent = SelfReflectionAgent::new("reflection_agent", generator, critic)
.max_reflections(3);
let result = agent.execute("Write a summary of quantum computing").await?;
println!("{result}");
Ok(())
}
§22. Snapshot & Rollback — Time-travel debugging
use echo_agent::prelude::*;
#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
let agent = ReactAgentBuilder::new()
.model("qwen3-max")
.snapshot_policy(SnapshotPolicy::default())
.build()?;
let snapshot_id = agent.snapshot().await; // Option<String>
// ... some operations that go wrong ...
if let Some(id) = snapshot_id {
agent.rollback_to(&id).await; // rollback to specific snapshot
}
agent.rollback(1).await; // go back 1 step
Ok(())
}
§23. Circuit Breaker — Auto-fail-fast when LLM is down
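Conceptually the breaker is a small state machine: count consecutive failures, open at the threshold, and fail fast while open. A minimal sketch (hypothetical; the real breaker presumably uses the configured timeout for a half-open cool-down probe, which is elided here):

```rust
// Sketch of a circuit breaker: after `failure_threshold` consecutive failures
// it opens and rejects requests immediately instead of calling the LLM.

#[derive(Debug, PartialEq)]
enum State {
    Closed, // calls flow normally
    Open,   // fail fast without calling the LLM
}

struct CircuitBreaker {
    failure_threshold: u32,
    consecutive_failures: u32,
    state: State,
}

impl CircuitBreaker {
    fn new(failure_threshold: u32) -> Self {
        Self { failure_threshold, consecutive_failures: 0, state: State::Closed }
    }

    fn allow_request(&self) -> bool {
        self.state == State::Closed
    }

    fn record(&mut self, success: bool) {
        if success {
            self.consecutive_failures = 0;
            self.state = State::Closed;
        } else {
            self.consecutive_failures += 1;
            if self.consecutive_failures >= self.failure_threshold {
                self.state = State::Open;
            }
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    for _ in 0..3 {
        cb.record(false); // three consecutive LLM failures
    }
    println!("allow? {}", cb.allow_request()); // allow? false
    cb.record(true); // a successful (half-open) probe closes the breaker
    println!("allow? {}", cb.allow_request()); // allow? true
}
```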
use echo_agent::prelude::*;
use std::time::Duration;
fn main() -> echo_agent::error::Result<()> {
let mut agent = ReactAgentBuilder::new()
.model("qwen3-max")
.build()?;
let cb_config = CircuitBreakerConfig {
failure_threshold: 5,
timeout: Duration::from_secs(30),
..Default::default()
};
agent.set_circuit_breaker(cb_config);
Ok(())
}
§Macro Reference
| Macro | Type | Generates |
|---|---|---|
| #[tool] | Proc | TypedTool from async fn |
| #[callback] | Proc | AgentCallback from impl block |
| #[guard] | Proc | Guard from async fn |
| #[handler] | Proc | HumanLoopHandler from impl block |
| #[compressor] | Proc | ContextCompressor from async fn |
| #[permission_policy] | Proc | PermissionPolicy from async fn |
| #[audit_logger] | Proc | AuditLogger from impl block |
| agent!{} | Decl | Agent construction |
| messages![] | Decl | Message list builder |
| tool_params!{} | Decl | JSON Schema builder |
| chat_request!{} | Decl | ChatRequest construction |
§Examples
Examples are classified into Acceptance, Conditional acceptance, and Teaching contracts.
See examples/README.md for the full bucketed inventory and maintenance rules.
| # | Example | Demonstrates |
|---|---|---|
| 01 | demo01_tools | Custom tools with #[tool] |
| 02 | demo02_tasks | DAG task planning |
| 03 | demo03_approval | Human-in-the-loop |
| 04 | demo04_suagent | Multi-agent orchestration |
| 05 | demo05_compressor | Context compression |
| 06 | demo06_mcp | MCP tool server |
| 07 | demo07_skills | Built-in skills |
| 08 | demo08_external_skills | External skill loading |
| 09 | demo09_file_shell | File & shell tools |
| 10 | demo10_streaming | Streaming output |
| 11 | demo11_callbacks | Lifecycle callbacks |
| 12 | demo12_resilience | Retry & fault tolerance |
| 13 | demo13_tool_execution | Tool execution config |
| 14 | demo14_memory_isolation | Memory isolation |
| 15 | demo15_structured_output | JSON Schema output |
| 16 | demo16_testing | Mock testing |
| 17 | demo17_chat | Interactive chat |
| 18 | demo18_semantic_memory | Semantic memory |
| 19 | demo19_guard | Guard system |
| 20 | demo20_audit | Audit logging |
| 21 | demo21_handoff | Agent handoff |
| 22 | demo22_plan_execute | Plan-and-Execute |
| 23 | demo23_a2a | A2A protocol |
| 24 | demo24_topology | Topology visualization |
| 25 | demo25_macros | Macro system showcase |
| 26 | demo26_provider_factory | Dynamic LLM factory |
| 27 | demo27_sqlite_memory | SQLite persistence |
| 28 | demo28_workflow | Workflow pipeline |
| 29 | demo29_sandbox | Sandbox execution |
| 30 | demo30_mcp_server | MCP server mode |
| 31 | demo31_memory_tools | Memory tool injection |
| 32 | demo32_token_budget | Token budget control |
| 33 | demo33_retry_policy | Unified retry |
| 34 | demo34_workflow_stream | Workflow streaming |
| 35 | demo35_dynamic_tools | Dynamic tool management |
| 36 | demo36_multimodal | Multi-modal messages |
| 37 | demo37_declarative_workflow | YAML/JSON workflows |
| 38 | demo38_im_channels | IM channel integration |
| 39 | demo39_workflow | Graph workflow engine |
| 40 | demo40_snapshot | Snapshot & rollback |
| 41 | demo41_web_tools | Web search + fetch |
| 42 | demo42_playwright_mcp | Playwright MCP browser automation |
| 43 | demo43_data_tools | Excel / CSV / Word / Text processing |
Plus 6 comprehensive examples demonstrating real-world use cases:
| Example | Scenario |
|---|---|
| comprehensive_code_laboratory | Code execution assistant |
| comprehensive_customer_service | Intelligent customer service |
| comprehensive_data_analyst | Data analysis assistant |
| comprehensive_enterprise | Enterprise workflow automation |
| comprehensive_personal_assistant | Personal smart assistant |
| comprehensive_research_agent | Research & report assistant |
§Compatibility
Any OpenAI-compatible API, plus native Anthropic and Ollama:
| Provider | Endpoint | Notes |
|---|---|---|
| OpenAI | https://api.openai.com/v1 | GPT-4o, GPT-4-turbo |
| Anthropic | https://api.anthropic.com/v1 | Native Claude API |
| DeepSeek | https://api.deepseek.com/v1 | DeepSeek-V3/R1 |
| Alibaba Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 | Qwen3-max, Qwen-plus |
| Ollama (local) | http://localhost:11434 | Native protocol |
| LM Studio | http://localhost:1234/v1 | Any GGUF model |
§Documentation
| Topic | English | Chinese |
|---|---|---|
| ReAct Agent | EN | ZH |
| Tool System | EN | ZH |
| Memory System | EN | ZH |
| Context Compression | EN | ZH |
| Human-in-the-Loop | EN | ZH |
| Multi-Agent | EN | ZH |
| Skill System | EN | ZH |
| MCP Protocol | EN | ZH |
| DAG Tasks | EN | ZH |
| Streaming | EN | ZH |
| Structured Output | EN | ZH |
| Mock Testing | EN | ZH |
| IM Channels | EN | ZH |
| Plan-and-Execute | EN | ZH |
| Graph Workflow | EN | ZH |
| Guard System | EN | ZH |
| Self-Reflection | EN | ZH |
§Contributing
Contributions are welcome! See CONTRIBUTING.md for guidelines.
Before submitting a PR, please run locally:
git clone https://github.com/EchoYue-lp/echo-agent
cd echo-agent
# Code formatting
cargo fmt --check
# Linting
cargo clippy --workspace --all-targets
# Tests
cargo test --workspace
§Changelog
See CHANGELOG.md for release history.
§License
MIT © echo-agent contributors
Modules§
- a2a — A2A (Agent-to-Agent) protocol support (feature a2a)
- advanced — Advanced type re-exports for optional modules (requires corresponding features)
- agent — Agent module
- audit — Structured audit logging for agent actions and decisions
- channels — IM channel integration module (feature channels)
- compression — Context compression strategies to manage token budget
- config — Unified configuration management
- error — Unified error types for the echo-agent framework
- guard — Content filtering and safety guardrails
- handoff — Handoff module: control transfer between agents (feature handoff)
- human_loop — Human-loop facade (feature human-loop)
- llm — LLM client facade: provider abstraction and chat APIs
- mcp — MCP (Model Context Protocol) facade (feature mcp)
- memory — Dual-layer memory system for persistent agent state
- prelude — Common type re-exports
- project_rules — Project-level rules file loading (feature project-rules)
- retry — Retry facade
- sandbox — Multi-layer code execution sandbox
- skills — Skill system, agentskills.io aligned (core re-export from echo_core + echo_execution)
- tasks — Tasks facade (feature tasks)
- telemetry — OpenTelemetry integration (feature telemetry)
- tokenizer — Tokenizer facade
- tools — Tool system: define and register tools for agents to call
- topology — Agent topology visualization (feature topology)
- utils — Common utility modules (re-exported from echo_core)
- workflow — Graph-based workflow engine (LangGraph-style state machines)
- workspace — Direct access to split workspace crates during migration
Macros§
- agent — Quickly create an Agent (declarative syntax, replaces builder chaining)
- chat_request — Quickly build a chat request
- messages — Quickly build a message list
- tool_params — Quickly build tool parameter JSON Schema
Attribute Macros§
- audit_logger — Generate an AuditLogger implementation from an impl block
- callback — Generate an AgentCallback implementation from an impl block, overriding only the methods you define
- compressor — Generate a ContextCompressor implementation from an async function
- guard — Generate a Guard implementation from an async function
- handler — Generate a HumanLoopHandler implementation from an impl block
- permission_policy — Generate a PermissionPolicy implementation from an async function
- tool — Generate a Tool implementation from an async function, auto-creating the parameter struct and JSON Schema