
Crate claude_agent


§claude-agent-rs

A lightweight AI coding agent in Rust that runs Claude on your Max subscription for $0.

§Quick Start

use claude_agent::{Agent, AgentConfig};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let agent = Agent::new(AgentConfig::default()).await?;
    let response = agent.chat("what is 2+2?").await?;
    println!("{}", response.text());
    Ok(())
}

§Two Inference Backends

CLI mode (default) — pipes through claude -p --output-format json, using your Claude Max subscription. Zero per-token cost.

API mode — direct POST to api.anthropic.com/v1/messages. Set ANTHROPIC_API_KEY and AGENT_BACKEND=api.
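The selection rule described above can be sketched in plain Rust. This is an illustration only, not the crate's actual code: the enum and function names here are hypothetical, standing in for the crate's `InferenceBackend` selection.

```rust
use std::env;

/// Sketch of the two backends (the crate's real type is `InferenceBackend`).
#[derive(Debug, PartialEq)]
enum Backend {
    Cli, // pipes through `claude -p --output-format json`
    Api, // direct POST to api.anthropic.com/v1/messages
}

/// CLI is the default; `AGENT_BACKEND=api` opts into the API backend,
/// which additionally requires ANTHROPIC_API_KEY to be set.
fn select_backend(agent_backend: Option<&str>, api_key: Option<&str>) -> Result<Backend, String> {
    match agent_backend {
        Some("api") if api_key.is_some() => Ok(Backend::Api),
        Some("api") => Err("AGENT_BACKEND=api requires ANTHROPIC_API_KEY".into()),
        _ => Ok(Backend::Cli), // zero per-token cost on a Max subscription
    }
}

fn main() {
    let choice = select_backend(
        env::var("AGENT_BACKEND").ok().as_deref(),
        env::var("ANTHROPIC_API_KEY").ok().as_deref(),
    );
    println!("{:?}", choice);
}
```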

§Features

  • Zero API cost — calls claude -p using your Max subscription
  • 8 built-in tools — bash, read, write, edit, glob, grep, web_fetch, agent
  • SSE streaming — real-time token output for both API and CLI backends
  • MCP integration — full JSON-RPC 2.0 client, connects to all your MCP servers
  • Session persistence — SQLite-backed save/restore with --resume
  • Multi-model routing — auto-route Opus (complex) vs Haiku (simple) with --auto-route
  • Sub-agents — spawn parallel claude -p processes with optional git worktree isolation
  • Hooks & skills — lifecycle hooks and /command dispatch
  • 4.5MB binary — single static binary with SQLite bundled
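As a toy illustration of the `--auto-route` feature above: the crate's actual routing criteria are not documented on this page, so the heuristic below is invented purely to show the Opus-vs-Haiku split.

```rust
/// Hypothetical routing heuristic: long or implementation-heavy prompts go to
/// Opus, everything else to Haiku. Not the crate's real logic.
fn route_model(prompt: &str) -> &'static str {
    let looks_complex = prompt.len() > 200
        || prompt.contains("refactor")
        || prompt.contains("implement");
    if looks_complex {
        "opus" // complex: multi-step coding work
    } else {
        "haiku" // simple: cheap, fast answers
    }
}

fn main() {
    println!("{}", route_model("what is 2+2?"));
}
```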

§Architecture

The agent is organized into 11 layers:

Layer  Name           Purpose
1      Crown          Skill registry + /command dispatch
2      Skull          Lifecycle hooks (pre/post tool use)
3      Security Cage  Permission-gated tool access
4      Brain Core     Dual-backend inference (CLI + API)
5      Frontal Lobe   Context engine + CLAUDE.md injection
6      Temporal Lobe  Sub-agent spawner with worktree isolation
7      Auth Gate      Authentication routing
8      Brainstem      8 built-in tools
9      Root System    MCP server registry (JSON-RPC 2.0)
10     Pedestal       HTTP client (reqwest + rustls)
11     Workbench      Feature flags + session metrics
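Layer 2's pre/post tool-use hooks can be sketched as a list of callbacks, any of which may veto an event. The types below are hypothetical stand-ins; the crate's real surface is `HookSystem`, `HookEvent`, and `HookResult`.

```rust
/// Hypothetical hook events (the crate's real type is `HookEvent`).
#[allow(dead_code)]
enum Event {
    PreToolUse { tool: String },
    PostToolUse { tool: String, output: String },
}

/// Hypothetical stand-in for `HookSystem`: handlers run in registration
/// order, and any handler returning `false` blocks the event.
struct Hooks {
    handlers: Vec<Box<dyn Fn(&Event) -> bool>>,
}

impl Hooks {
    fn new() -> Self {
        Self { handlers: Vec::new() }
    }

    fn register(&mut self, f: impl Fn(&Event) -> bool + 'static) {
        self.handlers.push(Box::new(f));
    }

    /// Fire an event through every handler; `false` means it was blocked.
    fn fire(&self, ev: &Event) -> bool {
        self.handlers.iter().all(|h| h(ev))
    }
}

fn main() {
    let mut hooks = Hooks::new();
    // Example policy: veto bash before it runs, allow everything else.
    hooks.register(|ev| !matches!(ev, Event::PreToolUse { tool } if tool.as_str() == "bash"));
    println!("{}", hooks.fire(&Event::PreToolUse { tool: "read".to_string() }));
}
```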

§Using as a Library

use claude_agent::{InferenceEngine, InferenceBackend, Message, Role, MessageContent};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let engine = InferenceEngine::new(Some("sk-ant-..."), InferenceBackend::Api);
    let messages = vec![Message {
        role: Role::User,
        content: MessageContent::Text("Hello!".into()),
    }];
    let request = engine.build_request(&messages, None, &[], None);

    // Tokens stream to the callback as they arrive
    let response = engine.chat_stream(&request, &mut |text| {
        print!("{}", text);
    }).await?;

    println!("\nTokens used: {}", response.usage.output_tokens);
    Ok(())
}

Re-exports§

pub use types::Message;
pub use types::Role;
pub use types::MessageContent;
pub use types::ContentBlock;
pub use types::InferenceRequest;
pub use types::InferenceResponse;
pub use types::Usage;
pub use types::ToolDefinition;
pub use types::ToolCall;
pub use types::ToolResult;
pub use types::HookEvent;
pub use types::HookResult;
pub use types::FeatureFlags;
pub use types::AgentStats;
pub use types::AgentConfig;
pub use auth::AuthGate;
pub use inference::InferenceEngine;
pub use inference::InferenceBackend;
pub use context::ContextEngine;
pub use permissions::PermissionEngine;
pub use hooks::HookSystem;
pub use skills::SkillRegistry;
pub use tools::Tool;
pub use tools::ToolRegistry;
pub use mcp::McpRegistry;

Modules§

agents
auth
context
flags
hooks
inference
mcp
memory
permissions
plugins
sessions
skills
task_queue
telemetry
tools
types

Structs§

Agent
High-level agent interface for library consumers.
AgentResponse
Response from a single agent turn.