§claude-agent-rs
A lightweight AI coding agent in Rust that runs Claude on your Max subscription for $0.
§Quick Start
use claude_agent::{Agent, AgentConfig};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let agent = Agent::new(AgentConfig::default()).await?;
    let response = agent.chat("what is 2+2?").await?;
    println!("{}", response.text());
    Ok(())
}

§Two Inference Backends
- CLI mode (default) — pipes through claude -p --output-format json, using your Claude Max subscription. Zero per-token cost.
- API mode — direct POST to api.anthropic.com/v1/messages. Set ANTHROPIC_API_KEY and AGENT_BACKEND=api.
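The documented switch (AGENT_BACKEND=api selects API mode, anything else falls back to CLI mode) can be sketched as plain environment-variable dispatch. The enum and function below are illustrative stand-ins, not the crate's actual InferenceBackend type:

```rust
// Stand-in for the crate's backend enum, for illustration only.
#[derive(Debug, PartialEq)]
enum Backend {
    Cli, // default: shell out to `claude -p` (zero per-token cost)
    Api, // direct POST to api.anthropic.com/v1/messages
}

// AGENT_BACKEND=api selects API mode; any other value (or no value)
// falls back to the CLI path, matching the documented default.
fn select_backend(agent_backend: Option<&str>) -> Backend {
    match agent_backend {
        Some("api") => Backend::Api,
        _ => Backend::Cli,
    }
}
```

A caller would feed it the real environment, e.g. `select_backend(std::env::var("AGENT_BACKEND").ok().as_deref())`.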
§Features
- Zero API cost — calls claude -p using your Max subscription
- 8 built-in tools — bash, read, write, edit, glob, grep, web_fetch, agent
- SSE streaming — real-time token output for both API and CLI backends
- MCP integration — full JSON-RPC 2.0 client, connects to all your MCP servers
- Session persistence — SQLite-backed save/restore with --resume
- Multi-model routing — auto-route Opus (complex) vs Haiku (simple) with --auto-route
- Sub-agents — spawn parallel claude -p processes with optional git worktree isolation
- Hooks & skills — lifecycle hooks and /command dispatch
- 4.5MB binary — single static binary with SQLite bundled
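Multi-model routing above implies some complexity heuristic behind --auto-route. The sketch below is a hypothetical heuristic for illustration only; the crate's actual routing criteria are not documented here:

```rust
// Hypothetical auto-routing heuristic, for illustration only.
#[derive(Debug, PartialEq)]
enum Model {
    Opus,  // larger model, routed to for complex tasks
    Haiku, // cheaper model, routed to for simple tasks
}

fn route(prompt: &str) -> Model {
    // Treat long prompts or code-editing verbs as "complex".
    let lower = prompt.to_lowercase();
    let looks_complex = prompt.len() > 200
        || ["refactor", "implement", "debug"]
            .iter()
            .any(|kw| lower.contains(kw));
    if looks_complex { Model::Opus } else { Model::Haiku }
}
```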
§Architecture
The agent is organized into 11 layers:
| Layer | Name | Purpose |
|---|---|---|
| 1 | Crown | Skill registry + /command dispatch |
| 2 | Skull | Lifecycle hooks (pre/post tool use) |
| 3 | Security Cage | Permission-gated tool access |
| 4 | Brain Core | Dual-backend inference (CLI + API) |
| 5 | Frontal Lobe | Context engine + CLAUDE.md injection |
| 6 | Temporal Lobe | Sub-agent spawner with worktree isolation |
| 7 | Auth Gate | Authentication routing |
| 8 | Brainstem | 8 built-in tools |
| 9 | Root System | MCP server registry (JSON-RPC 2.0) |
| 10 | Pedestal | HTTP client (reqwest + rustls) |
| 11 | Workbench | Feature flags + session metrics |
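Layer 9's MCP registry speaks JSON-RPC 2.0. The request framing can be sketched with the standard library alone; this is a simplification for illustration — the real client presumably uses a JSON library and also handles responses, notifications, and transport:

```rust
// Minimal JSON-RPC 2.0 request envelope, hand-built for illustration.
// `params` is expected to already be a JSON value (e.g. "{}").
fn jsonrpc_request(id: u64, method: &str, params: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"{}","params":{}}}"#,
        id, method, params
    )
}
```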
§Using as a Library
use claude_agent::{InferenceEngine, InferenceBackend, Message, Role, MessageContent};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let engine = InferenceEngine::new(Some("sk-ant-..."), InferenceBackend::Api);
    let messages = vec![Message {
        role: Role::User,
        content: MessageContent::Text("Hello!".into()),
    }];
    let request = engine.build_request(&messages, None, &[], None);
    // Tokens stream to the callback as they arrive
    let response = engine.chat_stream(&request, &mut |text| {
        print!("{}", text);
    }).await?;
    println!("\nTokens used: {}", response.usage.output_tokens);
    Ok(())
}

Re-exports§
pub use types::Message;
pub use types::Role;
pub use types::MessageContent;
pub use types::ContentBlock;
pub use types::InferenceRequest;
pub use types::InferenceResponse;
pub use types::Usage;
pub use types::ToolDefinition;
pub use types::ToolCall;
pub use types::ToolResult;
pub use types::HookEvent;
pub use types::HookResult;
pub use types::FeatureFlags;
pub use types::AgentStats;
pub use types::AgentConfig;
pub use auth::AuthGate;
pub use inference::InferenceEngine;
pub use inference::InferenceBackend;
pub use context::ContextEngine;
pub use permissions::PermissionEngine;
pub use hooks::HookSystem;
pub use skills::SkillRegistry;
pub use tools::Tool;
pub use tools::ToolRegistry;
pub use mcp::McpRegistry;
Modules§
- agents
- auth
- context
- flags
- hooks
- inference
- mcp
- memory
- permissions
- plugins
- sessions
- skills
- task_queue
- telemetry
- tools
- types
Structs§
- Agent — High-level agent interface for library consumers.
- AgentResponse — Response from a single agent turn.