Agent SDK
A Rust SDK for building AI agents powered by large language models (LLMs). Create agents that can reason, use tools, and take actions through a streaming, event-driven architecture.
⚠️ Early Development: This library is in active development (v0.4.x). APIs may change between versions and there may be bugs. Use in production at your own risk. Feedback and contributions are welcome!
What is an Agent?
An agent is an LLM that can do more than just chat—it can use tools to interact with the world. This SDK provides the infrastructure to:
- Send messages to an LLM and receive streaming responses
- Define tools the LLM can call (APIs, file operations, databases, etc.)
- Execute tool calls and feed results back to the LLM
- Control the agent loop with hooks for logging, security, and approval workflows
Features
- Agent Loop - Core orchestration that handles the LLM conversation and tool execution cycle
- Provider Agnostic - Built-in support for Anthropic (Claude), OpenAI, and Google Gemini, plus a trait for custom providers
- Tool System - Define tools with JSON schema validation and typed tool names; the LLM decides when to use them
- Async Tools - Long-running operations with progress streaming via the `AsyncTool` trait
- Lifecycle Hooks - Intercept tool calls for logging, user confirmation, rate limiting, or security checks
- Streaming Events - Real-time event stream for building responsive UIs
- Extended Thinking - Support for Anthropic's extended thinking feature via `ThinkingConfig`
- Primitive Tools - Ready-to-use tools for file operations (Read, Write, Edit, Glob, Grep, Bash, Notebooks)
- Web Tools - Web search and URL fetching with SSRF protection
- Subagents - Spawn isolated child agents for complex subtasks
- MCP Support - Model Context Protocol integration for external tool servers
- Task Tracking - Built-in todo system for tracking multi-step tasks
- User Interaction - Tools for asking questions and requesting confirmations
- Security Model - Capability-based permissions and tool tiers (Observe, Confirm)
- Yield/Resume Pattern - Pause agent execution for tool confirmation and resume with user decision
- Single-Turn Execution - Run one turn at a time for external orchestration (e.g., message queues)
- Persistence - Trait-based storage for conversation history, agent state, and tool execution tracking
- Context Compaction - Automatic token management to handle long conversations
Requirements
- Rust 1.85+ (2024 edition)
- An API key for your chosen LLM provider
Installation
Add to your Cargo.toml:
```toml
[dependencies]
agent-sdk = "0.4"
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
serde_json = "1" # assumed; used for tool inputs and outputs in the examples below
```
Or to install the latest development version from git:
```toml
[dependencies]
agent-sdk = { git = "https://github.com/bipa-app/agent-sdk", branch = "main" }
```
Quick Start
The snippet below sketches the minimal setup: create a provider, build an agent, run a prompt, and stream the events.
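This is an illustrative sketch rather than the crate's exact API: module paths, builder methods, and event payloads are assumptions, so check the crate docs and the examples directory for working code.

```rust
use agent_sdk::{AgentEvent, AnthropicProvider}; // illustrative paths

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = AnthropicProvider::sonnet(std::env::var("ANTHROPIC_API_KEY")?);

    // Build an agent with no tools.
    let agent = agent_sdk::builder()
        .provider(provider)
        .build();

    // Run a prompt and stream the resulting events.
    // (run() may also take a tool context; see "Custom Context" below.)
    let mut events = agent.run("Explain what an AI agent is in one paragraph").await?;
    while let Some(event) = events.recv().await {
        match event {
            AgentEvent::TextDelta(chunk) => print!("{chunk}"),
            AgentEvent::Done => break,
            _ => {}
        }
    }
    Ok(())
}
```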
Examples
Clone the repo and run the examples:
```sh
# Substitute an example name from the examples/ directory for <example>.

# Basic conversation (no tools)
ANTHROPIC_API_KEY=your_key cargo run --example <example>

# Agent with custom tools
ANTHROPIC_API_KEY=your_key cargo run --example <example>

# Using lifecycle hooks for logging and rate limiting
ANTHROPIC_API_KEY=your_key cargo run --example <example>

# Agent with file operation tools
ANTHROPIC_API_KEY=your_key cargo run --example <example>
```
Creating Custom Tools
Tools let your agent interact with external systems. Implement the `Tool` trait for your type and register it with the agent. The sketch below uses illustrative paths and registry/builder names; see the `Tool` trait docs for the exact methods to implement.

```rust
use agent_sdk::{Tool, ToolRegistry}; // illustrative paths
use serde_json::Value;

/// A tool that fetches the current weather for a city
struct WeatherTool;

// impl Tool<...> for WeatherTool { ... }
// The implementation declares the tool's name, description, and JSON schema,
// and performs the actual weather lookup; the LLM decides when to call it.

// Register tools with the agent
let mut tools = ToolRegistry::new();
tools.register(WeatherTool);

let agent = agent_sdk::builder()
    .provider(provider)
    .tools(tools)
    .build();
```
Async Tools (Long-Running Operations)
For operations that take time (API calls, file processing, etc.), implement `AsyncTool` instead of `Tool`; async tools can stream progress updates while they run. Illustrative sketch (type and method names are assumptions apart from `register_async`):

```rust
use agent_sdk::{AsyncTool, ToolRegistry}; // illustrative paths
use futures::Stream;

// Define progress stages for your operation (e.g. Queued -> Running -> Finished)
// and emit them from your AsyncTool implementation as a Stream of updates.
struct MyAsyncTool;

// impl AsyncTool<...> for MyAsyncTool { ... }

// Register async tools separately
let mut tools = ToolRegistry::new();
tools.register_async(MyAsyncTool);
```
Extended Thinking
Enable Anthropic's extended thinking for complex reasoning (illustrative sketch; the exact configuration API may differ):

```rust
use agent_sdk::ThinkingConfig; // illustrative path

let agent = agent_sdk::builder()
    .provider(provider)
    .config(ThinkingConfig::default()) // enable extended thinking
    .build();
```
When enabled, the agent emits AgentEvent::Thinking events with the model's reasoning process.
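A consumer can surface that reasoning separately from the final answer; a minimal sketch, assuming the `Thinking` variant carries the reasoning text:

```rust
while let Some(event) = events.recv().await {
    match event {
        AgentEvent::Thinking(reasoning) => eprintln!("[thinking] {reasoning}"),
        AgentEvent::TextDelta(chunk) => print!("{chunk}"),
        _ => {}
    }
}
```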
Lifecycle Hooks
Hooks let you intercept and control agent behavior before and after each tool call. The hooks trait's name and method signatures are not reproduced here; the sketch only shows where hooks plug into the builder.

```rust
use async_trait::async_trait;
use serde_json::Value;

// A hook that could log every tool call, ask for user confirmation,
// rate-limit expensive tools, or block calls that fail a security check.
struct LoggingHooks;

// #[async_trait]
// impl /* hooks trait */ for LoggingHooks { /* pre/post tool-call callbacks */ }

let agent = agent_sdk::builder()
    .provider(provider)
    .hooks(LoggingHooks)
    .build();
```
Custom Context
The generic parameter `T` in `Tool<T>` and `builder::<T>()` lets you pass custom data (database handles, API clients, user info) to your tools. Illustrative sketch; the `run` signature is an assumption:

```rust
// Define your context type
struct AppContext {
    user_id: String,
    // database pools, API clients, etc.
}

// Implement your tools as Tool<AppContext> so they can read this data.

// Build the agent with your context type
let agent = agent_sdk::builder::<AppContext>()
    .provider(provider)
    .tools(tools)
    .build();

// Pass the context when running
let tool_ctx = AppContext { user_id: "user-123".into() };
agent.run(tool_ctx, "Look up my account").await?;
```
Architecture
```text
┌──────────────────────────────────────────────────────────────────────────┐
│ Agent Loop │
│ Orchestrates: prompt → LLM → tool calls → results → LLM │
├──────────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ LlmProvider │ │ Tools │ │ Hooks │ │ Events │ │
│ │ (trait) │ │ Registry │ │ (pre/post) │ │ (stream) │ │
│ │ │ │ │ │ │ │ │ │
│ │ - Anthropic │ │ - Tool │ │ - Default │ │ - Text │ │
│ │ - OpenAI │ │ - AsyncTool │ │ - AllowAll │ │ - ToolCall │ │
│ │ - Gemini │ │ - MCP Bridge │ │ - Logging │ │ - Progress │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ │
├──────────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ MessageStore │ │ StateStore │ │ ToolExec │ │ Environment │ │
│ │ (trait) │ │ (trait) │ │ Store │ │ (trait) │ │
│ │ │ │ │ │ (trait) │ │ │ │
│ │ Conversation │ │ Agent state │ │ Idempotency │ │ File + exec │ │
│ │ history │ │ checkpoints │ │ tracking │ │ operations │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ │
├──────────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Subagents │ │ MCP │ │ Web Tools │ │ Todo │ │
│ │ │ │ │ │ │ │ System │ │
│ │ Nested agent │ │ External │ │ Search + │ │ │ │
│ │ execution │ │ tool servers │ │ fetch URLs │ │ Task track │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ └──────────────┘ │
└──────────────────────────────────────────────────────────────────────────┘
```
Streaming Events
The agent emits events during execution for real-time UI updates.
Event Channel Behavior
The agent uses a bounded channel (capacity 100) for events. The SDK is designed to be resilient to slow consumers:
- Non-blocking sends: Events are sent with `try_send` first. If the channel is full, the SDK waits up to 30 seconds before timing out.
- Consumer disconnection: If the event receiver is dropped, the agent continues processing the LLM response without blocking.
- Backpressure handling: If your consumer is slow, you'll see warnings like `Event channel full, waiting for consumer...` in the logs.
Best practices for consuming events (sketches; `ui_tx` and `handle_event` stand in for your own code):

```rust
// GOOD: process events quickly and hand heavy work off elsewhere
while let Some(event) = events.recv().await {
    ui_tx.send(event).ok(); // cheap hand-off to a UI task
}
// GOOD: spawn heavy processing so the loop keeps draining the channel
while let Some(event) = events.recv().await {
    tokio::spawn(handle_event(event));
}
// BAD: blocking I/O inside the event loop stalls the channel
while let Some(event) = events.recv().await {
    std::thread::sleep(std::time::Duration::from_secs(1));
}
```
Event Types
| Event | Description |
|---|---|
| `Start` | Agent begins processing a turn |
| `Thinking` | Extended thinking output (when enabled) |
| `TextDelta` | Streaming text chunk from the LLM |
| `Text` | Complete text block from the LLM |
| `ToolCallStart` | Tool execution starting |
| `ToolCallEnd` | Tool execution completed |
| `ToolProgress` | Progress update from an async tool |
| `ToolRequiresConfirmation` | Tool needs user approval |
| `TurnComplete` | One LLM round-trip finished |
| `ContextCompacted` | Conversation was summarized to save tokens |
| `SubagentProgress` | Progress from a nested subagent |
| `Done` | Agent completed successfully |
| `Error` | An error occurred |
A typical consumer loop matches on these variants, as sketched below.
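The variant payloads below are assumptions; check the `AgentEvent` definition for the actual data each variant carries.

```rust
while let Some(event) = events.recv().await {
    match event {
        AgentEvent::TextDelta(chunk) => print!("{chunk}"),
        AgentEvent::ToolCallStart { name, .. } => println!("\n-> calling {name}"),
        AgentEvent::ToolCallEnd { name, .. } => println!("<- {name} finished"),
        AgentEvent::Error(err) => eprintln!("agent error: {err}"),
        AgentEvent::Done => break,
        _ => {}
    }
}
```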
Built-in Providers
| Provider | Models | Usage |
|---|---|---|
| Anthropic | Claude Sonnet, Opus, Haiku | AnthropicProvider::sonnet(api_key) |
| OpenAI | GPT-4, GPT-3.5, etc. | OpenAiProvider::new(api_key, model) |
| Google | Gemini Pro, etc. | GeminiProvider::new(api_key, model) |
Implement the `LlmProvider` trait to add your own.
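For example, constructing a provider from an environment variable (a minimal sketch; only the constructor names come from the table above, and the import path is an assumption):

```rust
use agent_sdk::AnthropicProvider; // illustrative path

let api_key = std::env::var("ANTHROPIC_API_KEY").expect("set ANTHROPIC_API_KEY");
let provider = AnthropicProvider::sonnet(api_key);
// Or: OpenAiProvider::new(api_key, model) / GeminiProvider::new(api_key, model)
```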
Built-in Primitive Tools
For agents that need file system access:
| Tool | Description |
|---|---|
| `ReadTool` | Read file contents |
| `WriteTool` | Create or overwrite files |
| `EditTool` | Make targeted edits to files |
| `GlobTool` | Find files matching patterns |
| `GrepTool` | Search file contents with regex |
| `BashTool` | Execute shell commands |
| `NotebookReadTool` | Read Jupyter notebook contents |
| `NotebookEditTool` | Edit Jupyter notebook cells |
These require an Environment (use InMemoryFileSystem for sandboxed testing or LocalFileSystem for real file access).
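A sandboxed setup for tests might look roughly like this (a sketch; the constructors and the way the environment is passed to each tool are assumptions):

```rust
use std::sync::Arc;
use agent_sdk::{InMemoryFileSystem, ReadTool, ToolRegistry, WriteTool}; // illustrative paths

// In-memory environment: file operations never touch the real disk.
let env = Arc::new(InMemoryFileSystem::new());

let mut tools = ToolRegistry::new();
tools.register(ReadTool::new(env.clone()));
tools.register(WriteTool::new(env));
```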
Web Tools
For agents that need internet access:
| Tool | Description |
|---|---|
| `WebSearchTool` | Search the web via pluggable providers |
| `LinkFetchTool` | Fetch URL content with SSRF protection |
```rust
// Illustrative sketch: the Brave provider type and constructor arguments are assumptions.
use agent_sdk::web::{BraveSearchProvider, LinkFetchTool, WebSearchTool};

// Web search with Brave
let search_provider = BraveSearchProvider::new(brave_api_key);
let search_tool = WebSearchTool::new(search_provider);

// URL fetching with built-in SSRF protection
let fetch_tool = LinkFetchTool::new();
```
Task Tracking
Built-in todo system for tracking multi-step tasks:
```rust
// Illustrative sketch: the todo state/tool type names and constructors are assumptions.
use agent_sdk::todo::{TodoReadTool, TodoState, TodoWriteTool};
use std::sync::Arc;
use tokio::sync::RwLock;

// Shared state that both tools operate on
let state = Arc::new(RwLock::new(TodoState::default()));
let write_tool = TodoWriteTool::new(state.clone());
let read_tool = TodoReadTool::new(state);
```
Task statuses: Pending (○), InProgress (⚡), Completed (✓)
Subagents
Spawn isolated child agents for complex subtasks:
```rust
// Illustrative sketch: type names and constructor arguments are assumptions.
use agent_sdk::subagents::{SubagentConfig, SubagentFactory, SubagentTool};

let factory = SubagentFactory::new(provider.clone());
let config = SubagentConfig::default(); // limits, system prompt, etc.
let subagent_tool = SubagentTool::new(factory, config);
```
Subagents run in isolated threads with their own context and stream progress events back to the parent.
MCP Support
Integrate external tools via the Model Context Protocol:
```rust
// Illustrative sketch: the transport/client type names and spawn arguments are assumptions.
use agent_sdk::mcp::{register_mcp_tools, McpClient, StdioTransport};

// Connect to an MCP server (here: spawned as a child process)
let transport = StdioTransport::spawn("path/to/mcp-server")?;
let client = McpClient::new(transport);

// Register all tools from the MCP server into the agent's tool registry
register_mcp_tools(&mut tools, &client).await?;
```
User Interaction
Tools for agent-initiated questions and confirmations:
```rust
use agent_sdk::interaction::AskUserQuestionTool; // illustrative path

let question_tool = AskUserQuestionTool::new(); // constructor arguments (if any) are assumptions
```
Persistence
The SDK provides trait-based storage for production deployments:
| Store | Purpose |
|---|---|
| `MessageStore` | Conversation history per thread |
| `StateStore` | Agent state checkpoints for recovery |
| `ToolExecutionStore` | Write-ahead tool execution tracking (idempotency) |
InMemoryStore and InMemoryExecutionStore are provided for testing. For production, implement the traits with your database (Postgres, Redis, etc.):
```rust
// Illustrative sketch: store constructors and builder method names are assumptions.
use agent_sdk::{InMemoryExecutionStore, InMemoryStore};

// Use in-memory stores for development
let message_store = InMemoryStore::new();
let state_store = InMemoryStore::new();
let exec_store = InMemoryExecutionStore::new();

let agent = agent_sdk::builder()
    .provider(provider)
    .message_store(message_store)
    .state_store(state_store)
    .execution_store(exec_store)
    .build();
```
The ToolExecutionStore enables crash recovery by recording tool calls before execution, ensuring idempotency on retry.
Security Considerations
- `#[forbid(unsafe_code)]` - No unsafe Rust anywhere in the codebase
- Capability-based permissions - Control read/write/exec access via `AgentCapabilities`
- Tool tiers - Classify tools by risk level; use hooks to require confirmation
- Sandboxing - Use `InMemoryFileSystem` for testing without real file access
See SECURITY.md for the full security policy.
Contributing
Contributions are welcome! Please read CONTRIBUTING.md for:
- Development setup
- Code quality requirements
- Pull request process
License
Licensed under the Apache License, Version 2.0. See LICENSE for details.