Crate echo_agent

§echo-agent

§The Production-Grade AI Agent Framework for Rust

ReAct Engine • Multi-Agent • Memory • Streaming • MCP • IM Channels • Workflows



§Quick Start

Add to Cargo.toml:

[dependencies]
echo-agent = "0.1.4"
tokio = { version = "1", features = ["full"] }

Define a tool and run an agent — in under 20 lines:

use echo_agent::prelude::*;
use echo_agent::{agent, tool};

#[tool(name = "add", description = "Add two numbers")]
async fn add(a: f64, b: f64) -> Result<ToolResult> {
    Ok(ToolResult::success(format!("{}", a + b)))
}

#[tokio::main]
async fn main() -> Result<()> {
    let mut agent = agent! {
        model: "qwen3-max",
        system_prompt: "You are a helpful math assistant",
        tools: [AddTool],
    }?;

    let answer = agent.execute("What is 1337 * 42?").await?;
    println!("{answer}");
    Ok(())
}

§Why echo-agent?

Most AI agent frameworks live in Python. echo-agent brings full-featured agent development to Rust — matching the feature sets of LangGraph, CrewAI, and AutoGen, with the performance, type safety, and reliability that Rust delivers.

|  | echo-agent | LangGraph | CrewAI | AutoGen |
|---|---|---|---|---|
| Language | Rust | Python | Python | Python |
| Memory safety | Compile-time | GC | GC | GC |
| ReAct loop | Built-in | Built-in | Built-in | Built-in |
| Tool system | `#[tool]` macro + JSON Schema | Decorator | Decorator | Function calling |
| Multi-agent | SubAgent + Handoff | Graph | Crew | Conversation |
| Streaming | Native async streams | Callback | Limited | Callback |
| MCP protocol | Native (stdio/SSE/HTTP) | Via LangChain | No | No |
| IM channels | QQ + Feishu built-in | No | No | No |
| Workflow | Graph + DAG + Sequential | StateGraph | Sequential | Sequential |
| Context compression | SlidingWindow + LLM + Hybrid | No | No | No |
| Guardrails | Rule + LLM filtering | No | No | No |
| Sandbox | Local + Docker + K8s | No | No | Docker |
| Single binary deploy | Yes | No | No | No |

§Deploy to IM in 5 lines

// Requires feature: channels
use echo_agent::channels::{ChannelManager, QqChannel, QqConfig, FeishuChannel, FeishuConfig};

let mut manager = ChannelManager::new();
manager.register(Box::new(QqChannel::new(QqConfig::new("app_id", "secret"))?));
manager.register(Box::new(FeishuChannel::new(FeishuConfig::new_long_poll("app_id".into(), "secret".into()))?));
manager.start_all(handler).await?;  // `handler` is your message handler (see demo38_im_channels)

§Run examples

cargo run --example demo01_tools          # Custom tools
cargo run --example demo25_macros         # Macro system
cargo run --example demo34_workflow_stream # Workflow streaming
cargo run --example demo36_multimodal     # Multi-modal messages
cargo run --example demo38_im_channels --features channels  # IM channels

§Feature Flags

| Feature | Default | Description |
|---|---|---|
| full | on | All features enabled |
| web | on | Web search + page fetch (DuckDuckGo/Brave/Tavily) |
| mcp | on | Model Context Protocol client |
| media | on | PDF/Excel/Word/image extraction |
| channels | on | QQ Bot + Feishu IM integrations |
| human-loop | on | Human-in-the-loop approval (Console/Webhook/WebSocket) |
| plan-execute | on | Plan-then-execute agent pattern |
| self-reflection | on | LLM self-critique and refinement |
| subagent | on | Multi-agent orchestration |
| sqlite | on | SQLite-backed persistent memory |
| telemetry | on | OpenTelemetry tracing + metrics |
| a2a | on | Agent-to-Agent protocol (server + client) |
| handoff | on | Agent handoff/collaboration |
| topology | on | Multi-agent topology tracking |
| tasks | on | DAG task scheduling |
| data | on | Polars-powered data tools |
| rag | on | Retrieval-Augmented Generation |
| chart | on | Chart generation tools |
| git | on | Git operations tools |
| database |  | SQL database tools (requires sqlx) |
| content-guard |  | Content filtering guardrails |
| project-rules |  | `.claude/rules` project rule parsing |

§Architecture

                              ┌──────────────┐
                              │   Your App   │
                              └───────┬──────┘
                                      │
                    ┌─────────────────▼──────────────────┐
                    │             ReactAgent             │
                    │                                    │
                    │  ┌─────────────┐  ┌─────────────┐  │
                    │  │   Context   │  │    Tools    │  │
                    │  │   Manager   │  │   Manager   │  │
                    │  │ (compress)  │  │(retry/limit)│  │
                    │  └─────────────┘  └─────────────┘  │
                    │                                    │
                    │  ┌─────────────┐  ┌─────────────┐  │
                    │  │   Memory    │  │    Human    │  │
                    │  │  Store+Cp   │  │  Approval   │  │
                    │  └─────────────┘  └─────────────┘  │
                    │                                    │
                    │  ┌─────────────┐  ┌─────────────┐  │
                    │  │   Skills    │  │  SubAgent   │  │
                    │  │  Registry   │  │  Registry   │  │
                    │  └─────────────┘  └─────────────┘  │
                    └─────────────────┬──────────────────┘
                                      │
              ┌───────────────────────▼─────────────────────────┐
              │                  LLM Providers                  │
              │  OpenAI · Anthropic · DeepSeek · Qwen · Ollama  │
              └─────────────────────────────────────────────────┘

§Feature Matrix

echo-agent ships with 28+ capabilities across 6 crates, all accessible through a single use echo_agent::prelude::*.

§Core

| Feature | Description | API Preview |
|---|---|---|
| ReAct Engine | Thought → Action → Observation loop with CoT | `agent.execute("task").await?` |
| Tool System | `#[tool]` macro with auto JSON Schema, timeout + retry | `#[tool(name = "calc")] async fn calc(...)` |
| Dual-layer Memory | Store (long-term KV) + Checkpointer (session) | `.with_memory_tools(store)` |
| Context Compression | SlidingWindow / LLM Summary / Hybrid | `SlidingWindowCompressor::new(4096)` |
| Token Budget | Auto-truncation + pre-think compression trigger | `.max_tool_output_tokens(2000)` |
| Unified Retry | One RetryPolicy for LLM, MCP, A2A, sandbox | `with_retry(&policy, \|\| ...)` |
| Dynamic Tools | Add / remove / replace tools mid-conversation | `agent.remove_tool("old")` |
| Streaming | Real-time AgentEvent stream (tokens + tool calls) | `agent.execute_stream(task).await?` |
| Structured Output | LLM output → typed Rust structs via JSON Schema | `agent.extract::<Contact>(text)` |
| Multi-Modal | Text + images (base64/URL) + files in one message | `Message::user_with_image(...)` |
| Guard System | Rule-based / LLM-powered content filtering | `#[guard(name = "safety")] async fn ...` |
| Permission Model | Declarative tool permissions with pluggable policies | `DefaultPermissionPolicy::new()` |
| Audit Logging | Structured events with pluggable backends | `agent.set_audit_logger(...)` |
| Macro System | 11 macros: `#[tool]`, `agent!{}`, `messages![]`, … | `agent! { model: "..", tools: [...] }` |

§Multi-Agent & Orchestration

| Feature | Description | API Preview |
|---|---|---|
| SubAgent | Sync / Fork / Teammate execution modes | `agent.register_agent(sub)` |
| Agent Handoff | Context-aware transfer between agents | `HandoffManager::new()` |
| Plan-and-Execute | Explicit planning phase → step-by-step execution | `PlanExecuteAgent::new(...)` |
| Self-Reflection | LLM-based self-critique and refinement loops | `SelfReflectionAgent::new(...)` |
| Graph Workflow | Linear, conditional, loop, parallel fan-out/fan-in | `GraphBuilder::new("pipeline")` |
| DAG Tasks | Dependency-aware task scheduling with hooks | `TaskManager::default()` |
| Declarative Workflow | Define graphs in YAML/JSON, no Rust code needed | `Graph::from_yaml("wf.yaml")?` |

§Integrations

| Feature | Description | API Preview |
|---|---|---|
| MCP Protocol | Connect any MCP server (stdio / SSE / HTTP) | `mcp.connect(McpServerConfig::stdio(...))` |
| A2A Protocol | Agent Card publishing, cross-framework collaboration | `A2AServer::bind("0.0.0.0:3000")` |
| Skill System | Progressive disclosure: discover → activate → use | `agent.load_skill("web_research")` |
| IM Channels | QQ Bot (WebSocket) & Feishu (Webhook) built-in | `ChannelManager::new()` |
| Web Tools | Search (DuckDuckGo/Brave/Tavily) + Page Fetch | `WebSearchTool::auto()` |
| Media Tools | PDF, Excel, Word, Image analysis built-in | `ImageAnalysisTool` |
| Data Tools | Polars-powered filter, aggregate, transform, stats | `DataReadTool` |
| Sandbox | Local / Docker / K8s code execution with limits | `LocalSandbox::new()` |
| OpenTelemetry | Distributed tracing and metrics via OTLP | `init_telemetry(&config)` |
| Snapshot/Rollback | Capture & restore agent state at any point | `agent.snapshot()` / `agent.rollback(1)` |
| Circuit Breaker | Auto-fail-fast when LLM is down | `agent.set_circuit_breaker(config)` |

§Feature Flags

# Minimal — just the ReAct engine
echo-agent = { version = "0.1.4", default-features = false }

# Full (default) — all features enabled
echo-agent = "0.1.4"

# Pick only what you need
echo-agent = { version = "0.1.4", default-features = false, features = ["mcp", "web"] }

| Feature | Enables | Key Dependencies |
|---|---|---|
| mcp | MCP protocol client | echo-mcp, tokio-tungstenite |
| web | Web search + fetch tools | scraper, html2text |
| media | PDF, Excel, Word, Image tools | lopdf, calamine, docx-rs |
| data | Polars data analysis | polars |
| sqlite | SQLite memory persistence | rusqlite |
| channels | QQ Bot + Feishu integrations | echo-channels |
| human-loop | Human-in-the-loop approvals | tokio-tungstenite |
| tasks | DAG task management |  |
| workflow | Graph workflow engine |  |
| plan-execute | Plan-and-Execute agent |  |
| self-reflection | Self-critique agent |  |
| subagent | Multi-agent orchestration |  |
| handoff | Agent handoff |  |
| a2a | Agent-to-Agent protocol |  |
| topology | Agent topology visualization |  |
| telemetry | OpenTelemetry tracing | opentelemetry |

§Workspace Structure

echo-agent/
├── echo-core/           Core traits: Tool, Agent, LlmClient, Guard, Error, Retry
├── echo-macros/         Procedural macros: #[tool], #[callback], #[guard], #[handler]
├── echo-execution/      Sandbox, skills, and tool execution
├── echo-state/          Memory, compression, and audit logging
├── echo-orchestration/  Workflow, human-loop, and DAG tasks
├── echo-integration/    LLM providers, MCP, and IM channels (QQ/Feishu)
├── src/                 Agent engine, re-exports, and facade layer
├── examples/            40+ runnable demos
├── docs/                Bilingual documentation (en + zh)
├── skills/              External skill packs (Markdown-based)
└── echo-agent.yaml      Example configuration

Note: echo-agent is a library framework. For a ready-to-use application with CLI, Web UI, and WebSocket, see echo-agent-cli.


§Configuration

Create echo-agent.yaml in your project root:

# Provider / model registry (used by ProviderFactory and config-backed clients)
models:
  qwen3-max:
    provider: dashscope
    api_key: ${DASHSCOPE_API_KEY}

  deepseek-chat:
    provider: deepseek
    api_key: ${DEEPSEEK_API_KEY}

# Embedding config (used by semantic memory / vector search demos)
embedding:
  base_url: https://api.openai.com
  api_key: ${OPENAI_API_KEY}
  model: text-embedding-3-small
  timeout_secs: 30

# Runtime app config (used by examples such as IM channels)
model:
  name: qwen3-max
  max_tokens: 4096
  temperature: 0.7

agent:
  name: my-assistant
  system_prompt: "You are a helpful assistant."
  max_iterations: 10
  enable_tools: true
  enable_memory: true

channels:
  qq:
    enabled: false
    app_id: ${QQ_APP_ID}
    client_secret: ${QQ_CLIENT_SECRET}
  feishu:
    enabled: false
    app_id: ${FEISHU_APP_ID}
    app_secret: ${FEISHU_APP_SECRET}
    mode: long_poll
  session:
    timeout_minutes: 60
    reset_keywords: ["重置对话", "新对话", "清除记忆"]  # "reset chat", "new chat", "clear memory"
    reset_commands: ["/reset", "/clear", "/new"]

mcp:
  config_path: ./mcp.json

server:
  host: 0.0.0.0
  port: 3000

logging:
  level: info

Notes:

  • models: is the registry used by ProviderFactory, LlmConfig::from_model(), and config-backed LLM clients.
  • embedding: is used by semantic memory / vector search examples.
  • model: / agent: / channels: / mcp: / server: / logging: are the framework runtime settings loaded by echo_agent::config.

Set secrets via environment variables:

export DASHSCOPE_API_KEY=sk-xxx      # Alibaba Qwen
export DEEPSEEK_API_KEY=sk-xxx       # DeepSeek
export OPENAI_API_KEY=sk-xxx         # OpenAI
export ANTHROPIC_API_KEY=sk-ant-xxx  # Anthropic
export QQ_APP_ID=your-qq-app-id
export QQ_CLIENT_SECRET=your-qq-client-secret
export FEISHU_APP_ID=your-feishu-app-id
export FEISHU_APP_SECRET=your-feishu-app-secret

§Highlights

  • 40+ capabilities — ReAct loop, tools, memory, streaming, multi-agent, skills, MCP, IM channels, guards, audit, and more
  • 40 runnable examples — every feature has a demo you can cargo run immediately
  • 629+ unit tests — comprehensive coverage across all modules
  • 6 crates, 1 import — modular workspace, but a single use echo_agent::prelude::* is all you need
  • Multi-modal — text, images (base64 & URL), and file attachments in a single message
  • IM integration — QQ Bot (WebSocket) & Feishu (Webhook) out of the box
  • Declarative workflows — define agent graphs in YAML/JSON, no Rust code required
  • Unified retry — one RetryPolicy for all external calls (LLM, MCP, A2A, sandbox)
  • Zero-cost abstractions — compiled to native code, no runtime overhead

§Core Concepts

echo-agent is built around several key concepts that enable flexible, production-ready agent development:

§1. ReAct Engine — Thought → Action → Observation loop

The foundation of echo-agent is the ReAct (Reasoning + Acting) pattern with built-in Chain-of-Thought prompting. Agents think step-by-step, decide which tool to call, observe results, and continue until they reach a final answer.

use echo_agent::prelude::*;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .system_prompt("You are a helpful assistant")
        .build()?;
    let answer = agent.execute("What is 42 * 1337?").await?;
    println!("{answer}");
    Ok(())
}

Three builder presets for different needs:

use echo_agent::prelude::*;

fn main() -> echo_agent::error::Result<()> {
    // Minimal — no tools, no memory, just chat
    let _agent = ReactAgentBuilder::simple("qwen3-max", "Be helpful")?;

    // Standard — tools + CoT enabled
    let _agent = ReactAgentBuilder::standard("qwen3-max", "assistant", "Be helpful")?;

    // Full-featured — tools + memory + tasks + CoT
    let _agent = ReactAgentBuilder::full_featured("qwen3-max", "assistant", "Be helpful")?;
    Ok(())
}

§2. Tool System — #[tool] macro + auto JSON Schema

Define tools as simple async functions. The #[tool] macro generates parameter schemas, descriptions, and the TypedTool implementation automatically.

use echo_agent::{tool, prelude::*};

#[tool(name = "weather", description = "Get weather for a city")]
async fn weather(city: String) -> Result<ToolResult> {
    Ok(ToolResult::success(format!("Sunny in {city}")))
}

// Use it: agent.add_tool(Box::new(WeatherTool));

Built-in media tools (feature media): PDF extract/info, Excel read/info/to_csv, Word read/info/structure, Image analysis, Text read/search/stats/process/export.

Built-in data tools (feature data): Polars-powered read/filter/aggregate/stats/transform/export.

§3. Dual-layer Memory — Store + Checkpointer

  • Store: Long-term key-value storage with namespace isolation (InMemoryStore, FileStore, SqliteStore)
  • Checkpointer: Session history preservation across restarts (FileCheckpointer, InMemoryCheckpointer)

One line to give your agent persistent memory — no manual tool wiring:

use echo_agent::prelude::*;
use std::sync::Arc;

fn main() -> echo_agent::error::Result<()> {
    let store = Arc::new(InMemoryStore::new());
    let _agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .with_memory_tools(store)  // registers remember + recall + search_memory + forget
        .build()?;
    Ok(())
}

§4. Multi-Modal Messages — Text, images, files in one message

Send and receive images (base64 or URLs) and file attachments alongside text, compatible with OpenAI Vision and Anthropic APIs.

use echo_agent::prelude::*;

fn main() {
    let base64_data = "...";  // your base64-encoded image
    let _msg = Message::user_with_image(
        "What's in this image?",
        "image/png",
        base64_data,
    );
}

§5. Context Compression — Sliding window, LLM summary, hybrid

Manage token limits with configurable compression strategies that preserve conversation context.

use echo_agent::prelude::*;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;
    agent.set_compressor(SlidingWindowCompressor::new(4096)).await;
    Ok(())
}

Three strategies:

  • SlidingWindow — keeps the most recent messages within token budget
  • SummaryCompressor — uses LLM to summarize older messages
  • HybridCompressor — combines both for best quality
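The sliding-window idea can be sketched in a few lines of plain Rust. This is an illustrative toy, not the crate's implementation; the ~4-characters-per-token heuristic and function names here are assumptions for the sketch:

```rust
// Toy sliding-window compressor: keep the most recent messages whose
// combined heuristic token count fits a fixed budget.
fn estimated_tokens(text: &str) -> usize {
    // Rough heuristic: ~4 characters per token.
    text.len().div_ceil(4)
}

fn sliding_window(messages: &[&str], budget: usize) -> Vec<String> {
    let mut kept: Vec<String> = Vec::new();
    let mut used = 0;
    // Walk from newest to oldest, stopping once the budget is spent.
    for msg in messages.iter().rev() {
        let cost = estimated_tokens(msg);
        if used + cost > budget {
            break;
        }
        used += cost;
        kept.push(msg.to_string());
    }
    kept.reverse(); // restore chronological order
    kept
}

fn main() {
    let history = ["a long, old message that will be dropped", "recent", "newest"];
    println!("{:?}", sliding_window(&history, 4));
}
```

The LLM-summary and hybrid strategies replace the "drop oldest" step with "summarize oldest", trading one extra LLM call for better recall of early context.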

Token counting — estimate token usage before calling the LLM:

use echo_agent::prelude::*;

fn main() {
    let tokenizer = HeuristicTokenizer;
    let count = tokenizer.count_tokens("Hello, world!");
    println!("~{count} tokens");  // ~4 tokens

    // For cost tracking across requests:
    use echo_agent::tokenizer::TokenUsageTracker;
    let tracker = TokenUsageTracker::new("gpt-4o");
    tracker.record(1500, 800, Some(2300));
    println!("{}", tracker.summary());
}

§6. Unified Retry Policy — One policy for all external calls

Configure retry, timeout, and backoff once, apply to LLM calls, MCP requests, A2A communication, and sandbox execution.

use echo_agent::prelude::*;
use std::time::Duration;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let policy = RetryPolicy::new(3, Duration::from_millis(500))
        .max_delay(Duration::from_secs(30))
        .jitter(true);
    // Apply to any fallible async operation:
    let _response = with_retry(&policy, || async { Ok::<_, &str>("done") }).await.unwrap();
    Ok(())
}

§7. Dynamic Tool Management — Add/remove/replace tools mid-conversation

Adapt toolset based on conversation phase or user needs without restarting the agent.

use echo_agent::{prelude::*, tool};

// User-defined tools (via #[tool] macro):
#[tool(name = "search_web", description = "Search the web")]
async fn search_web(query: String) -> Result<ToolResult> {
    Ok(ToolResult::success(format!("Results for: {query}")))
}

fn main() -> echo_agent::error::Result<()> {
    let mut agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;
    agent.add_tool(Box::new(SearchWebTool));
    agent.remove_tool("search_web");
    agent.replace_tool(Box::new(SearchWebTool));
    Ok(())
}

§8. Human-in-the-Loop — Approval gates for critical actions

Require human approval before executing sensitive tools via Console, Webhook, or WebSocket interfaces.

// Requires feature: human-loop
use echo_agent::prelude::*;
use echo_agent::advanced::ConsoleHumanLoopProvider;
use std::sync::Arc;

fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;
    let approval: Arc<ConsoleHumanLoopProvider> = Arc::new(ConsoleHumanLoopProvider);
    agent.set_human_loop_provider(approval);
    Ok(())
}

Full 7-stage permission pipeline (inspired by Claude Code):

Bypass → Plan → Rules (deny-first) → ProtectedPaths → Cache (TTL) → DenialTracker → Mode dispatch

  • SessionApprovalCache with configurable TTL (default 30 min)
  • Audit Trail: PermissionAuditSink trait + InMemory/Logging/Composite implementations
  • ProtectedPathChecker: .git/.env/.ssh always protected
  • AI Classifier: RuleClassifier/LlmClassifier/CompositeClassifier for Auto mode
  • DenialTracker: auto-fallback after consecutive denials
  • PermissionMode: Default/Plan/Auto/AcceptEdits/BypassPermissions/DontAsk/Bubble

use echo_agent::prelude::*;

#[tokio::main]
async fn main() {
    // Grant read access, require approval for execute
    let policy = DefaultPermissionPolicy::new()
        .grant(ToolPermission::Read)
        .require_approval(ToolPermission::Execute);

    let decision = policy.check("shell", &[ToolPermission::Execute]).await;
    assert!(decision.requires_approval());
}

§9. Multi-Agent Orchestration — Orchestrator + SubAgent teams

Coordinate multiple specialized agents with context isolation and handoff protocols.

Three execution modes:

  • Sync — parent blocks until subagent returns
  • Fork — subagent runs in background, parent continues
  • Teammate — collaborative mode with shared Mailbox

// Requires feature: subagent
use echo_agent::prelude::*;

fn main() -> echo_agent::error::Result<()> {
    let math_agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .name("math_expert")
        .system_prompt("You solve math problems.")
        .build()?;

    let mut agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .enable_subagent()
        .build()?;

    agent.register_agent(Box::new(math_agent));
    Ok(())
}

§10. Skill System — Progressive capability disclosure

Packages of related tools and prompts that can be discovered, activated, and used on demand.

use echo_agent::prelude::*;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;

    // Discover and activate file-based skills (SKILL.md packs):
    agent.load_skills_from_dir("./skills/web_research").await?;
    Ok(())
}

Pre-built skills: code_review, data_analyst, project-stats, python-linter, web_researcher.

§11. MCP Protocol — Connect any Model Context Protocol server

Integrate filesystem, databases, browsers, and other resources via standardized MCP servers.

// Requires feature: mcp
use echo_agent::prelude::*;
use echo_agent::advanced::{McpManager, McpServerConfig};

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let mut mcp = McpManager::new();
    let tools = mcp.connect(McpServerConfig::stdio(
        "filesystem", "npx", vec!["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
    )).await?;

    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;
    agent.add_tools(tools);
    Ok(())
}

Supports three transports: stdio, SSE, HTTP.

§12. Plan-and-Execute — Explicit planning phase before execution

Planner agent creates a task DAG, Executor agent follows it step-by-step with optional replanning.

// Requires feature: plan-execute
use echo_agent::prelude::*;
use echo_agent::advanced::PlanExecuteAgent;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    // planner_config / executor_config: LLM configs for the two phases (defined elsewhere)
    let planner = PlanExecuteAgent::new("research_agent", planner_config, executor_config);
    let result = planner.execute("Research quantum computing trends").await?;
    println!("{result}");
    Ok(())
}

§13. Streaming — Real-time token-by-token output

Receive AgentEvent streams including tokens, tool calls, and final answers as they happen.

use echo_agent::prelude::*;
use futures::StreamExt;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;
    let mut stream = agent.execute_stream("Explain quantum entanglement").await?;
    while let Some(event) = stream.next().await {
        match event? {
            AgentEvent::Token(t) => print!("{t}"),
            AgentEvent::FinalAnswer(a) => { println!("\n{a}"); break; }
            _ => {}
        }
    }
    Ok(())
}

§14. Structured Output — LLM responses to typed Rust structs

Extract structured data from LLM responses using JSON Schema validation.

use echo_agent::prelude::*;
use echo_agent::llm::ResponseFormat;
use serde::Deserialize;
use serde_json::json;

#[derive(Deserialize, Debug)]
struct Contact { name: String, email: String, phone: String }

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .system_prompt("You are an extraction assistant")
        .build()?;
    let contacts: Vec<Contact> = agent.extract(
        "Extract contacts from this text...",
        ResponseFormat::json_schema("contacts", json!({
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "email": {"type": "string"},
                    "phone": {"type": "string"}
                },
                "required": ["name", "email", "phone"]
            }
        })),
    ).await?;
    println!("{:?}", contacts);
    Ok(())
}

§15. Declarative Workflow — YAML/JSON workflow definitions

Define agent graphs without writing Rust code.

name: research_pipeline
nodes:
  - name: researcher
    type: agent
    model: qwen3-max
    system_prompt: "You are a research assistant"
    input_key: task
    output_key: research
  - name: writer
    type: agent
    model: qwen3-max
    system_prompt: "You are a writing assistant"
    input_key: research
    output_key: result
edges:
  - from: researcher
    to: writer
entry: researcher
finish: [writer]

Load and run it from Rust:

use echo_agent::prelude::*;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let wf = WorkflowDefinition::from_yaml("workflow.yaml")?;
    let graph = wf.build_graph()?;
    let state = SharedState::new();
    let result = graph.run(state).await?;
    Ok(())
}

§16. Guard System — Rule-based and LLM-powered content filtering

Block or modify unsafe content on input and output with customizable guard pipelines.

use echo_agent::{guard, prelude::*};

#[guard(name = "length-limit")]
async fn check_length(content: &str, _: GuardDirection) -> Result<GuardResult> {
    if content.len() > 50000 {
        Ok(GuardResult::Block { reason: "Content too long".into() })
    } else {
        Ok(GuardResult::Pass)
    }
}

§17. Graph Workflow Engine — LangGraph-style state machines

Build complex workflows with linear pipelines, conditional branches, loops, and parallel fan-out/fan-in.

use echo_agent::prelude::*;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let state = SharedState::new();
    let graph = GraphBuilder::new("etl_pipeline")
        .add_function_node("extract", |state| Box::pin(async move {
            state.set("data", vec!["hello", "world"])?;
            Ok(())
        }))
        .add_function_node("transform", |state| Box::pin(async move {
            // transform data...
            Ok(())
        }))
        .add_edge("extract", "transform")
        .add_edge("transform", Graph::END)
        .build()?;
    
    let result = graph.run(state).await?;
    Ok(())
}

Also supports streaming execution: graph.run_stream(state).await? yields WorkflowEvent per node.

§18. IM Channels — Deploy agents to messaging platforms

Connect your agent to QQ (WebSocket) and Feishu (Webhook) with automatic token management and reconnection.

// Requires feature: channels
use echo_agent::channels::{ChannelManager, QqChannel, QqConfig, FeishuChannel, FeishuConfig};

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    // QQ Bot — WebSocket gateway
    let qq = QqChannel::new(QqConfig::new("your_app_id", "your_client_secret"))?;

    // Feishu — HTTP webhook
    let feishu = FeishuChannel::new(FeishuConfig::new_webhook(
        "your_app_id".into(),
        "your_app_secret".into(),
        "0.0.0.0:8080".into(),
        "/webhook".into(),
        None,
    ))?;

    let mut manager = ChannelManager::new();
    manager.register(Box::new(qq));
    manager.register(Box::new(feishu));
    manager.start_all(handler).await?;
    Ok(())
}

Features:

  • Unified ChannelPlugin interface — add new platforms by implementing one trait
  • Automatic token management — OAuth caching and refresh, no manual handling
  • WebSocket reconnection — exponential backoff, never drops silently
  • Message queuing — async mpsc channel prevents lost messages under load
  • Whitelist support — ChatConfig::with_allow_from() for access control
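The exponential-backoff schedule mentioned above can be sketched as a pure function. This is a hypothetical illustration of the pattern, not the crate's internals; the base and cap values are assumptions:

```rust
use std::time::Duration;

// Backoff schedule for a reconnecting client: double the delay on
// every failed attempt, capped so long outages don't grow unbounded.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    // Clamp the shift so the multiplication cannot overflow.
    let exp = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(exp.min(max_ms))
}

fn main() {
    for attempt in 0..6 {
        // 500ms, 1s, 2s, 4s, 8s, 16s ... capped at 30s
        println!("attempt {attempt}: wait {:?}", backoff_delay(attempt, 500, 30_000));
    }
}
```

Adding random jitter on top of each delay (as RetryPolicy's `jitter(true)` does for LLM calls) avoids thundering-herd reconnects when many clients drop at once.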

§19. Macro System — Declarative APIs for common patterns

#[tool], #[callback], #[guard], #[handler], agent!{}, messages![] and more.

use echo_agent::callback;

struct MyCallback;

#[callback]
impl MyCallback {
    async fn on_tool_start(&self, _agent: &str, tool: &str, _args: &serde_json::Value) {
        println!("[tool] {tool}");
    }
}

§20. Web Tools — Search the internet and fetch web pages

Give your Agent real-time internet access with web search and page fetching.

// Requires feature: web
use echo_agent::prelude::*;
use echo_agent::tools::web::{WebSearchTool, WebFetchTool};

fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;

    // Auto-select best provider: Tavily > Brave > DuckDuckGo
    agent.add_tool(Box::new(WebSearchTool::auto()));
    agent.add_tool(Box::new(WebFetchTool::new()));
    Ok(())
}

| Provider | Cost | Quality | Notes |
|---|---|---|---|
| DuckDuckGo | Free | Medium | HTML scraping, no API key needed |
| Brave | Free 2k/mo | High | Official API |
| Tavily | Paid (free tier) | Highest | AI-optimized for agents |
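The `auto()` selection order (Tavily > Brave > DuckDuckGo) amounts to a simple priority check. A hedged sketch of that logic — the function and the idea of keying off configured API keys are illustrative assumptions, not the crate's code:

```rust
// Pick the best available search provider: prefer paid AI-optimized
// Tavily, then Brave's official API, and fall back to keyless DuckDuckGo.
fn pick_provider(tavily_key: Option<&str>, brave_key: Option<&str>) -> &'static str {
    if tavily_key.is_some() {
        "tavily"
    } else if brave_key.is_some() {
        "brave"
    } else {
        "duckduckgo" // always works: HTML scraping, no API key needed
    }
}

fn main() {
    // With only a Brave key configured, Brave wins.
    println!("{}", pick_provider(None, Some("brave-key")));
}
```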

§21. Self-Reflection Agent — LLM self-critique and refinement

// Requires feature: self-reflection
use echo_agent::prelude::*;
use echo_agent::advanced::LlmCritic;
use echo_agent::agent::self_reflection::SelfReflectionAgent;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let generator = ReactAgentBuilder::new()
        .model("qwen3-max")
        .system_prompt("You are a technical writer.")
        .build()?;

    let critic = LlmCritic::new("qwen3-max");

    let agent = SelfReflectionAgent::new("reflection_agent", generator, critic)
        .max_reflections(3);

    let result = agent.execute("Write a summary of quantum computing").await?;
    println!("{result}");
    Ok(())
}

§22. Snapshot & Rollback — Time-travel debugging

use echo_agent::prelude::*;

#[tokio::main]
async fn main() -> echo_agent::error::Result<()> {
    let agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .snapshot_policy(SnapshotPolicy::default())
        .build()?;
    let snapshot_id = agent.snapshot().await;  // Option<String>
    // ... some operations that go wrong ...
    if let Some(id) = snapshot_id {
        agent.rollback_to(&id).await;          // rollback to specific snapshot
    }
    agent.rollback(1).await;                   // go back 1 step
    Ok(())
}

§23. Circuit Breaker — Auto-fail-fast when LLM is down

use echo_agent::prelude::*;
use std::time::Duration;

fn main() -> echo_agent::error::Result<()> {
    let mut agent = ReactAgentBuilder::new()
        .model("qwen3-max")
        .build()?;
    let cb_config = CircuitBreakerConfig {
        failure_threshold: 5,
        timeout: Duration::from_secs(30),
        ..Default::default()
    };
    agent.set_circuit_breaker(cb_config);
    Ok(())
}
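The failure-threshold behavior behind that config can be sketched as a tiny state machine. This is a minimal illustration of the circuit-breaker pattern under stated assumptions (consecutive-failure counting, no half-open probe), not the crate's implementation:

```rust
// Minimal circuit breaker: after `failure_threshold` consecutive
// failures the breaker opens and callers fail fast instead of
// waiting on a dead LLM endpoint.
struct CircuitBreaker {
    failure_threshold: u32,
    consecutive_failures: u32,
    open: bool,
}

impl CircuitBreaker {
    fn new(failure_threshold: u32) -> Self {
        Self { failure_threshold, consecutive_failures: 0, open: false }
    }

    fn allow_request(&self) -> bool {
        !self.open
    }

    fn record(&mut self, success: bool) {
        if success {
            // Any success resets the streak and closes the breaker.
            self.consecutive_failures = 0;
            self.open = false;
        } else {
            self.consecutive_failures += 1;
            if self.consecutive_failures >= self.failure_threshold {
                self.open = true; // fail fast from now on
            }
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(5);
    for _ in 0..5 {
        cb.record(false);
    }
    println!("allows requests: {}", cb.allow_request()); // prints "allows requests: false"
}
```

A production breaker (including the one configured above with `timeout`) additionally re-probes after the timeout elapses, closing again on the first success.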

§Macro Reference

| Macro | Type | Generates |
|---|---|---|
| `#[tool]` | Proc | `TypedTool` from async fn |
| `#[callback]` | Proc | `AgentCallback` from impl block |
| `#[guard]` | Proc | `Guard` from async fn |
| `#[handler]` | Proc | `HumanLoopHandler` from impl block |
| `#[compressor]` | Proc | `ContextCompressor` from async fn |
| `#[permission_policy]` | Proc | `PermissionPolicy` from async fn |
| `#[audit_logger]` | Proc | `AuditLogger` from impl block |
| `agent!{}` | Decl | Agent construction |
| `messages![]` | Decl | Message list builder |
| `tool_params!{}` | Decl | JSON Schema builder |
| `chat_request!{}` | Decl | ChatRequest construction |

§Examples

Examples are classified into Acceptance, Conditional acceptance, and Teaching contracts. See examples/README.md for the full bucketed inventory and maintenance rules.

| # | Example | Demonstrates |
|---|---|---|
| 01 | demo01_tools | Custom tools with `#[tool]` |
| 02 | demo02_tasks | DAG task planning |
| 03 | demo03_approval | Human-in-the-loop |
| 04 | demo04_suagent | Multi-agent orchestration |
| 05 | demo05_compressor | Context compression |
| 06 | demo06_mcp | MCP tool server |
| 07 | demo07_skills | Built-in skills |
| 08 | demo08_external_skills | External skill loading |
| 09 | demo09_file_shell | File & shell tools |
| 10 | demo10_streaming | Streaming output |
| 11 | demo11_callbacks | Lifecycle callbacks |
| 12 | demo12_resilience | Retry & fault tolerance |
| 13 | demo13_tool_execution | Tool execution config |
| 14 | demo14_memory_isolation | Memory isolation |
| 15 | demo15_structured_output | JSON Schema output |
| 16 | demo16_testing | Mock testing |
| 17 | demo17_chat | Interactive chat |
| 18 | demo18_semantic_memory | Semantic memory |
| 19 | demo19_guard | Guard system |
| 20 | demo20_audit | Audit logging |
| 21 | demo21_handoff | Agent handoff |
| 22 | demo22_plan_execute | Plan-and-Execute |
| 23 | demo23_a2a | A2A protocol |
| 24 | demo24_topology | Topology visualization |
| 25 | demo25_macros | Macro system showcase |
| 26 | demo26_provider_factory | Dynamic LLM factory |
| 27 | demo27_sqlite_memory | SQLite persistence |
| 28 | demo28_workflow | Workflow pipeline |
| 29 | demo29_sandbox | Sandbox execution |
| 30 | demo30_mcp_server | MCP server mode |
| 31 | demo31_memory_tools | Memory tool injection |
| 32 | demo32_token_budget | Token budget control |
| 33 | demo33_retry_policy | Unified retry |
| 34 | demo34_workflow_stream | Workflow streaming |
| 35 | demo35_dynamic_tools | Dynamic tool management |
| 36 | demo36_multimodal | Multi-modal messages |
| 37 | demo37_declarative_workflow | YAML/JSON workflows |
| 38 | demo38_im_channels | IM channel integration |
| 39 | demo39_workflow | Graph workflow engine |
| 40 | demo40_snapshot | Snapshot & rollback |
| 41 | demo41_web_tools | Web search + fetch |
| 42 | demo42_playwright_mcp | Playwright MCP browser automation |
| 43 | demo43_data_tools | Excel / CSV / Word / Text processing |

Plus 6 comprehensive examples demonstrating real-world use cases:

| Example | Scenario |
|---------|----------|
| comprehensive_code_laboratory | Code execution assistant |
| comprehensive_customer_service | Intelligent customer service |
| comprehensive_data_analyst | Data analysis assistant |
| comprehensive_enterprise | Enterprise workflow automation |
| comprehensive_personal_assistant | Personal smart assistant |
| comprehensive_research_agent | Research & report assistant |

§Compatibility

Any OpenAI-compatible API, plus native Anthropic and Ollama:

| Provider | Endpoint | Notes |
|----------|----------|-------|
| OpenAI | https://api.openai.com/v1 | GPT-4o, GPT-4-turbo |
| Anthropic | https://api.anthropic.com/v1 | Native Claude API |
| DeepSeek | https://api.deepseek.com/v1 | DeepSeek-V3/R1 |
| Alibaba Qwen | https://dashscope.aliyuncs.com/compatible-mode/v1 | Qwen3-max, Qwen-plus |
| Ollama (local) | http://localhost:11434 | Native protocol |
| LM Studio | http://localhost:1234/v1 | Any GGUF model |
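Every OpenAI-compatible provider in the table accepts the same chat-completions wire format, so switching providers is just a matter of changing the base URL and model name (Ollama's native protocol is the exception). A std-only sketch of the request body sent via POST to `{base_url}/chat/completions`; the helper name `chat_completions_body` is ours for illustration:

```rust
/// Build a minimal JSON body for `POST {base_url}/chat/completions`,
/// the endpoint shared by the OpenAI-compatible providers above.
/// Simplified sketch: real code should use a JSON serializer, since this
/// version does not escape quotes inside `user_message`.
fn chat_completions_body(model: &str, user_message: &str) -> String {
    format!(
        r#"{{"model":"{model}","messages":[{{"role":"user","content":"{user_message}"}}]}}"#
    )
}

fn main() {
    // The same body works against api.deepseek.com, DashScope, LM Studio, etc.
    let body = chat_completions_body("qwen3-max", "What is 1337 * 42?");
    println!("{body}");
}
```

This shared wire format is what makes the `model` plus base-URL configuration style shown in the Quick Start portable across providers.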

§Documentation

| Topic | English | Chinese |
|-------|---------|---------|
| ReAct Agent | EN | ZH |
| Tool System | EN | ZH |
| Memory System | EN | ZH |
| Context Compression | EN | ZH |
| Human-in-the-Loop | EN | ZH |
| Multi-Agent | EN | ZH |
| Skill System | EN | ZH |
| MCP Protocol | EN | ZH |
| DAG Tasks | EN | ZH |
| Streaming | EN | ZH |
| Structured Output | EN | ZH |
| Mock Testing | EN | ZH |
| IM Channels | EN | ZH |
| Plan-and-Execute | EN | ZH |
| Graph Workflow | EN | ZH |
| Guard System | EN | ZH |
| Self-Reflection | EN | ZH |

§Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.

Before submitting a PR, please run locally:

git clone https://github.com/EchoYue-lp/echo-agent
cd echo-agent

# Code formatting
cargo fmt --check

# Linting
cargo clippy --workspace --all-targets

# Tests
cargo test --workspace

§Changelog

See CHANGELOG.md for release history.


§License

MIT © echo-agent contributors


Modules§

- `a2a` (feature `a2a`): A2A (Agent-to-Agent) protocol support.
- `advanced`: Advanced type re-exports for optional modules (requires the corresponding features).
- `agent`: Agent module.
- `audit`: Structured audit logging for agent actions and decisions.
- `channels` (feature `channels`): IM channel integration module.
- `compression`: Context compression strategies to manage the token budget.
- `config`: Unified configuration management.
- `error`: Unified error types for the echo-agent framework.
- `guard`: Content filtering and safety guardrails.
- `handoff` (feature `handoff`): Handoff module for control transfer between Agents.
- `human_loop` (feature `human-loop`): Human-loop facade.
- `llm`: LLM client facade with provider abstraction and chat APIs.
- `mcp` (feature `mcp`): MCP (Model Context Protocol) facade.
- `memory`: Dual-layer memory system for persistent agent state.
- `prelude`: Common type re-exports.
- `project_rules` (feature `project-rules`): Project-level rules file loading.
- `retry`: Retry facade.
- `sandbox`: Multi-layer code execution sandbox.
- `skills`: Skill system aligned with agentskills.io (core re-export from echo_core and echo_execution).
- `tasks` (feature `tasks`): Tasks facade.
- `telemetry` (feature `telemetry`): OpenTelemetry integration.
- `tokenizer`: Tokenizer facade.
- `tools`: Tool system for defining and registering tools that agents can call.
- `topology` (feature `topology`): Agent topology visualization.
- `utils`: Common utility modules (re-exported from echo_core).
- `workflow`: Graph-based workflow engine (LangGraph-style state machines).
- `workspace`: Direct access to split workspace crates during migration.

Macros§

- `agent`: Quickly create an Agent (declarative syntax that replaces builder chaining).
- `chat_request`: Quickly build a chat request.
- `messages`: Quickly build a message list.
- `tool_params`: Quickly build a tool-parameter JSON Schema.

Attribute Macros§

- `audit_logger`: Generate an AuditLogger implementation from an impl block.
- `callback`: Generate an AgentCallback implementation from an impl block, overriding only the methods you define.
- `compressor`: Generate a ContextCompressor implementation from an async function.
- `guard`: Generate a Guard implementation from an async function.
- `handler`: Generate a HumanLoopHandler implementation from an impl block.
- `permission_policy`: Generate a PermissionPolicy implementation from an async function.
- `tool`: Generate a Tool implementation from an async function, auto-creating the parameter struct and JSON Schema.