§Agent Development Kit (ADK) for Rust
A flexible and modular framework for developing and deploying AI agents in Rust. While optimized for Gemini and the Google ecosystem, ADK is model-agnostic, deployment-agnostic, and compatible with other frameworks.
§Quick Start
Create your first AI agent in minutes:
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;
#[tokio::main]
async fn main() -> Result<()> {
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = GeminiModel::new(&api_key, "gemini-2.0-flash-exp")?;
let agent = LlmAgentBuilder::new("assistant")
.description("A helpful AI assistant")
.instruction("You are a friendly assistant. Answer questions concisely.")
.model(Arc::new(model))
.build()?;
// Run in interactive console mode
Launcher::new(Arc::new(agent)).run().await?;
Ok(())
}
§Installation
Add to your Cargo.toml:
[dependencies]
adk-rust = "0.1"
tokio = { version = "1.40", features = ["full"] }
dotenv = "0.15" # For loading .env files§Feature Presets
# Full (default) - Everything included
adk-rust = "0.1"
# Minimal - Agents + Gemini + Runner only
adk-rust = { version = "0.1", default-features = false, features = ["minimal"] }
# Custom - Pick exactly what you need
adk-rust = { version = "0.1", default-features = false, features = [
"agents", "gemini", "tools", "sessions"
] }
§Agent Types
ADK-Rust provides several agent types for different use cases:
§LlmAgent - AI-Powered Reasoning
The core agent type that uses Large Language Models for intelligent reasoning:
use adk_rust::prelude::*;
use std::sync::Arc;
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = GeminiModel::new(&api_key, "gemini-2.0-flash-exp")?;
let agent = LlmAgentBuilder::new("researcher")
.description("Research assistant with web search")
.instruction("Search for information and provide detailed summaries.")
.model(Arc::new(model))
.tool(Arc::new(GoogleSearchTool::new())) // Add tools
.build()?;
§Workflow Agents - Deterministic Pipelines
For predictable, multi-step workflows:
use adk_rust::prelude::*;
use std::sync::Arc;
// Sequential: Execute agents in order
let pipeline = SequentialAgent::new(
"content_pipeline",
vec![researcher, writer, reviewer]
);
// Parallel: Execute agents concurrently
let parallel = ParallelAgent::new(
"multi_analysis",
vec![analyst1, analyst2]
);
// Loop: Iterate until condition met
let loop_agent = LoopAgent::new("iterative_refiner", refiner, 5);
§Multi-Agent Systems
Build hierarchical agent systems with automatic delegation:
use adk_rust::prelude::*;
use std::sync::Arc;
let coordinator = LlmAgentBuilder::new("coordinator")
.description("Development team coordinator")
.instruction("Delegate coding tasks to specialists.")
.model(model)
.sub_agent(code_agent) // Delegate to sub-agents
.sub_agent(test_agent)
.build()?;
§Tools
Give your agents capabilities beyond conversation:
§Function Tools - Custom Operations
Convert any async function into a tool:
use adk_rust::prelude::*;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Debug, Deserialize, JsonSchema)]
struct WeatherInput {
/// City name to get weather for
city: String,
}
#[derive(Debug, Serialize)]
struct WeatherOutput {
temperature: f64,
conditions: String,
}
async fn get_weather(_ctx: ToolContext, input: WeatherInput) -> Result<WeatherOutput> {
// Your weather API call here
Ok(WeatherOutput {
temperature: 72.0,
conditions: "Sunny".to_string(),
})
}
let weather_tool = FunctionTool::new(
"get_weather",
"Get current weather for a city",
get_weather,
);
§Built-in Tools
Ready-to-use tools included with ADK:
- GoogleSearchTool - Web search via Google
- ExitLoopTool - Control loop termination
- LoadArtifactsTool - Access stored artifacts
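Built-in tools plug into the builder the same way as the custom function tool above. A minimal sketch, reusing the model and weather_tool values from the earlier examples:
use adk_rust::prelude::*;
use std::sync::Arc;
// Combine a built-in tool with the custom weather tool defined earlier
let agent = LlmAgentBuilder::new("search_assistant")
    .description("Answers questions using web search and weather data")
    .instruction("Use the available tools to answer questions.")
    .model(Arc::new(model))
    .tool(Arc::new(GoogleSearchTool::new())) // built-in web search
    .tool(Arc::new(weather_tool))            // FunctionTool from the previous example
    .build()?;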
§MCP Tools - External Integrations
Connect to Model Context Protocol servers:
use adk_rust::prelude::*;
use std::sync::Arc;
// Connect to an MCP server (e.g., filesystem, database)
let mcp_tools = McpToolset::from_command("npx", &[
"-y", "@anthropic/mcp-server-filesystem", "/path/to/dir"
]).await?;
// Add all MCP tools to your agent
let agent = builder.toolset(Arc::new(mcp_tools)).build()?;
§Sessions & State
Manage conversation context and working memory:
use adk_rust::prelude::*;
// Create a session
let session = session_service.create("user_123", None).await?;
// Store state with scoped prefixes
let state = session.state();
state.set("app:config", "production"); // App-level config
state.set("user:preference", "dark_mode"); // User preferences
state.set("temp:cache", "computed_value"); // Temporary data
// State persists across conversation turns
let config = state.get::<String>("app:config")?;
§Callbacks
Intercept and customize agent behavior:
use adk_rust::prelude::*;
use std::sync::Arc;
let agent = LlmAgentBuilder::new("monitored_agent")
.model(model)
// Log all agent invocations
.before_agent(|ctx| {
Box::pin(async move {
println!("Agent starting: {}", ctx.agent_name);
Ok(None) // Continue execution
})
})
// Modify or cache model responses
.after_model(|ctx, response| {
Box::pin(async move {
println!("Model responded with {} tokens", response.usage.output_tokens);
Ok(response)
})
})
// Track tool usage
.before_tool(|ctx, name, args| {
Box::pin(async move {
println!("Calling tool: {} with {:?}", name, args);
Ok(None)
})
})
.build()?;
§Artifacts
Store and retrieve binary data (images, files, etc.):
use adk_rust::prelude::*;
// Save an artifact
let image_data = std::fs::read("chart.png")?;
artifact_service.save(
"reports", // namespace
"sales_chart.png", // filename
&image_data,
"image/png", // MIME type
).await?;
// Load an artifact
let artifact = artifact_service.load("reports", "sales_chart.png", None).await?;
§Deployment Options
§Console Mode (Interactive CLI)
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;
// Interactive chat in terminal
Launcher::new(agent).run().await?;
§Server Mode (REST API)
# Run your agent as a web server
cargo run -- serve --port 8080
Provides endpoints:
- POST /chat - Send messages
- GET /sessions - List sessions
- GET /health - Health check
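Any HTTP client can call these endpoints. A minimal sketch using reqwest (not part of ADK); the JSON payload below is a hypothetical shape, so check the server module for the actual request schema:
use serde_json::json;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Hypothetical payload shape; consult the server module docs for the real schema
    let response = reqwest::Client::new()
        .post("http://localhost:8080/chat")
        .json(&json!({ "message": "Hello, agent!" }))
        .send()
        .await?;
    println!("{}", response.text().await?);
    Ok(())
}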
§Agent-to-Agent (A2A) Protocol
Expose your agent for inter-agent communication:
use adk_rust::server::{A2AServer, AgentCard};
let card = AgentCard::new("my_agent", "https://my-agent.example.com")
.with_description("A helpful assistant")
.with_skill("research", "Can search and summarize information");
let server = A2AServer::new(agent, card, session_service, artifact_service);
server.serve(8080).await?;
§Observability
Built-in OpenTelemetry support for production monitoring:
use adk_rust::telemetry::{TelemetryConfig, init_telemetry};
let config = TelemetryConfig::new("my-agent-service")
.with_otlp_endpoint("http://localhost:4317");
init_telemetry(config)?;
// All agent operations now emit traces and metrics
§Architecture
ADK-Rust uses a layered architecture for modularity:
┌─────────────────────────────────────────────────────────────┐
│                      Application Layer                       │
│                 Launcher • REST Server • A2A                 │
├─────────────────────────────────────────────────────────────┤
│                         Runner Layer                         │
│              Agent Execution • Event Streaming               │
├─────────────────────────────────────────────────────────────┤
│                          Agent Layer                         │
│    LlmAgent • CustomAgent • Sequential • Parallel • Loop     │
├─────────────────────────────────────────────────────────────┤
│                         Service Layer                        │
│        Models • Tools • Sessions • Artifacts • Memory        │
└─────────────────────────────────────────────────────────────┘
§Feature Flags
| Feature | Description | Default |
|---|---|---|
| agents | Agent implementations | ✅ |
| models | Model integrations | ✅ |
| gemini | Gemini model support | ✅ |
| tools | Tool system | ✅ |
| mcp | MCP integration | ✅ |
| sessions | Session management | ✅ |
| artifacts | Artifact storage | ✅ |
| memory | Semantic memory | ✅ |
| runner | Execution runtime | ✅ |
| server | HTTP server | ✅ |
| telemetry | OpenTelemetry | ✅ |
| cli | CLI launcher | ✅ |
§Examples
The examples directory contains working examples for every feature:
- Agents: LLM agent, workflow agents, multi-agent systems
- Tools: Function tools, Google Search, MCP integration
- Sessions: State management, conversation history
- Callbacks: Logging, guardrails, caching
- Deployment: Console, server, A2A protocol
§Related Crates
ADK-Rust is composed of modular crates that can be used independently:
- adk-core - Core traits and types
- adk-agent - Agent implementations
- adk-model - LLM integrations
- adk-tool - Tool system
- adk-session - Session management
- adk-artifact - Artifact storage
- adk-runner - Execution runtime
- adk-server - HTTP server
- adk-telemetry - Observability
Re-exports§
pub use anyhow;
pub use futures;
pub use serde;
pub use serde_json;
pub use tokio;
Modules§
- agent (feature agents) - Agent implementations (LLM, Custom, Workflow agents).
- agent_loader - Core traits and types.
- artifact (feature artifacts) - Artifact storage.
- callbacks - Core traits and types.
- context - Core traits and types.
- error - Core traits and types.
- event - Core traits and types.
- instruction_template - Core traits and types.
- memory (feature memory) - Memory system with semantic search.
- model (feature models) - Model integrations (Gemini, etc.).
- prelude - Convenience prelude for common imports.
- runner (feature runner) - Agent execution runtime.
- server (feature server) - HTTP server (REST + A2A).
- session (feature sessions) - Session management.
- telemetry (feature telemetry) - Telemetry (OpenTelemetry integration).
- tool (feature tools) - Tool system and built-in tools.
- types - Core traits and types.
Structs§
- Content - Core traits and types.
- Event - Core traits and types.
- EventActions - Core traits and types.
- GenerateContentConfig - Core traits and types.
- Launcher (feature cli) - CLI launcher for running agents.
- LlmRequest - Core traits and types.
- LlmResponse - Core traits and types.
- MemoryEntry - Core traits and types.
- MultiAgentLoader - Core traits and types.
- RunConfig - Core traits and types.
- SingleAgentLoader (feature cli) - CLI launcher for running agents.
- UsageMetadata - Core traits and types.
Enums§
- AdkError - Core traits and types.
- BeforeModelResult - Core traits and types.
- FinishReason - Core traits and types.
- IncludeContents - Core traits and types.
- Part - Core traits and types.
- StreamingMode - Core traits and types.
Constants§
- KEY_PREFIX_APP - Core traits and types.
- KEY_PREFIX_TEMP - Core traits and types.
- KEY_PREFIX_USER - Core traits and types.
Traits§
- Agent - Core traits and types.
- AgentLoader - Core traits and types.
- Artifacts - Core traits and types.
- CallbackContext - Core traits and types.
- InvocationContext - Core traits and types.
- Llm - Core traits and types.
- Memory - Core traits and types.
- ReadonlyContext - Core traits and types.
- ReadonlyState - Core traits and types.
- Session - Core traits and types.
- State - Core traits and types.
- Tool - Core traits and types.
- ToolContext - Core traits and types.
- Toolset - Core traits and types.
Functions§
- inject_session_state - Core traits and types.
Type Aliases§
- AfterAgentCallback - Core traits and types.
- AfterModelCallback - Core traits and types.
- AfterToolCallback - Core traits and types.
- BeforeAgentCallback - Core traits and types.
- BeforeModelCallback - Core traits and types.
- BeforeToolCallback - Core traits and types.
- EventStream - Core traits and types.
- GlobalInstructionProvider - Core traits and types.
- InstructionProvider - Core traits and types.
- LlmResponseStream - Core traits and types.
- Result - Core traits and types.
- ToolPredicate - Core traits and types.