# adk-agent

Agent implementations for ADK-Rust (LLM, Custom, Workflow agents).
## Overview

`adk-agent` provides ready-to-use agent implementations for ADK-Rust:

- `LlmAgent` - Core agent powered by LLM reasoning with tools and callbacks
- `CustomAgent` - Define custom logic without an LLM
- `SequentialAgent` - Execute agents in sequence
- `ParallelAgent` - Execute agents concurrently
- `LoopAgent` - Iterate until an exit condition or max iterations
- `ConditionalAgent` - Branch based on conditions
- `LlmConditionalAgent` - LLM-powered routing to sub-agents
- `LlmEventSummarizer` - LLM-based context compaction for long conversations
## Installation

```toml
[dependencies]
adk-agent = "0.4"
```

Or use the meta-crate:

```toml
[dependencies]
adk-rust = { version = "0.4", features = ["agents"] }
```
## Quick Start

### LLM Agent

```rust
use adk_agent::LlmAgentBuilder;
use adk_model::GeminiModel;
use std::sync::Arc;

let model = GeminiModel::new(/* api key, model name */)?;
let agent = LlmAgentBuilder::new("assistant")
    .description("A helpful assistant")
    .instruction("You are a helpful assistant.")
    .model(Arc::new(model))
    .tool(Arc::new(/* your tool */))
    .build()?;
```
## LlmAgentBuilder Methods

| Method | Description |
|---|---|
| `new(name)` | Create builder with agent name |
| `description(desc)` | Set agent description |
| `model(llm)` | Set the LLM model (required) |
| `instruction(text)` | Set static instruction |
| `instruction_provider(fn)` | Set dynamic instruction provider |
| `global_instruction(text)` | Set global instruction (shared across agents) |
| `with_skills(index)` | Attach a preloaded skills index |
| `with_auto_skills()` | Auto-load skills from `.skills/` in the current directory |
| `with_skills_from_root(path)` | Auto-load skills from `.skills/` under a specific root |
| `with_skill_policy(policy)` | Configure matching policy (top_k, threshold, tags) |
| `with_skill_budget(chars)` | Cap injected skill content length |
| `tool(tool)` | Add a tool |
| `sub_agent(agent)` | Add a sub-agent for transfers |
| `max_iterations(n)` | Set maximum LLM round-trips (default: 100) |
| `tool_timeout(duration)` | Set per-tool execution timeout (default: 5 min) |
| `require_tool_confirmation(names)` | Require user confirmation for specific tools |
| `require_tool_confirmation_for_all()` | Require user confirmation for all tools |
| `tool_confirmation_policy(policy)` | Set custom tool confirmation policy |
| `input_schema(json)` | Set input JSON schema |
| `output_schema(json)` | Set output JSON schema |
| `output_key(key)` | Set state key for output |
| `input_guardrails(set)` | Add input validation guardrails |
| `output_guardrails(set)` | Add output validation guardrails |
| `before_callback(fn)` | Add before-agent callback |
| `after_callback(fn)` | Add after-agent callback |
| `before_model_callback(fn)` | Add before-model callback |
| `after_model_callback(fn)` | Add after-model callback |
| `before_tool_callback(fn)` | Add before-tool callback |
| `after_tool_callback(fn)` | Add after-tool callback |
| `toolset(toolset)` | Add a dynamic toolset for per-invocation tool resolution |
| `default_retry_budget(budget)` | Set default retry policy for all tools |
| `tool_retry_budget(name, budget)` | Set retry policy for a specific tool |
| `circuit_breaker_threshold(n)` | Disable tools after N consecutive failures |
| `on_tool_error(callback)` | Add fallback handler for tool failures |
| `build()` | Build the `LlmAgent` |
### Backward Compatibility

Existing builder paths remain valid and unchanged:

```rust
let agent = LlmAgentBuilder::new("assistant")
    .description(/* ... */)
    .instruction(/* ... */)
    .model(model)
    .build()?;
```

Skills are opt-in. No skill content is injected unless you call a skills method.
### Minimal Skills Usage

```rust
let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .with_auto_skills()? // loads .skills/**/*.md when present
    .build()?;
```
## Workflow Agents

```rust
use adk_agent::{SequentialAgent, ParallelAgent, LoopAgent};
use std::sync::Arc;

// Sequential: A -> B -> C
let pipeline = SequentialAgent::new(/* name, sub-agents */);

// Parallel: A, B, C simultaneously
let team = ParallelAgent::new(/* name, sub-agents */);

// Loop: repeat until exit or max iterations
let iterator = LoopAgent::new(/* name, sub-agents */)
    .with_max_iterations(/* n */);
// Default max iterations is 1000 (DEFAULT_LOOP_MAX_ITERATIONS)
```
## Conditional Agents

```rust
use adk_agent::{ConditionalAgent, LlmConditionalAgent};

// Function-based condition
let conditional = ConditionalAgent::new(/* name, condition fn, then-agent */)
    .with_else(/* else-agent */);

// LLM-powered routing
let llm_router = LlmConditionalAgent::builder(/* name, model */)
    .instruction(/* routing instruction */)
    .route(/* label, sub-agent */)
    .route(/* label, sub-agent */)
    .build()?;
```
## Multi-Agent Systems

```rust
// Agent with sub-agents for transfer
let coordinator = LlmAgentBuilder::new("coordinator")
    .instruction(/* delegation instruction */)
    .model(model)
    .sub_agent(/* specialist agent */)
    .sub_agent(/* specialist agent */)
    .build()?;
```
## Guardrails

```rust
use adk_agent::LlmAgentBuilder;
use adk_guardrail::*; // import your guardrail types

let input_guardrails = /* guardrail set type */::new()
    .with(/* guardrail */)
    .with(/* guardrail */);

let agent = LlmAgentBuilder::new("guarded")
    .model(model)
    .input_guardrails(input_guardrails)
    .build()?;
```
## Custom Agent

```rust
use adk_agent::CustomAgentBuilder;

let custom = CustomAgentBuilder::new("custom")
    .description(/* ... */)
    .handler(/* custom handler fn */)
    .build()?;
```
## Toolset Support

Use `.toolset()` for context-dependent tools that need per-invocation resolution — for example, per-user browser sessions from a pool. Toolsets are resolved at the start of each `run()` call using the invocation's `ReadonlyContext`.

```rust
use adk_agent::LlmAgentBuilder;
use std::sync::Arc;

let pool = /* browser pool type */::new(/* pool config */);

// Pool-backed toolset: each user gets their own browser session
let browser_toolset = /* toolset type */::with_pool_and_profile(/* pool, profile */);

let agent = LlmAgentBuilder::new("browser_agent")
    .description(/* ... */)
    .instruction(/* ... */)
    .model(model)
    .toolset(Arc::new(browser_toolset))
    .build()?;
```
Static tools (.tool()) and dynamic toolsets (.toolset()) can be mixed on the same agent. Duplicate tool names across static tools and toolsets produce a deterministic error at resolution time.
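The duplicate-name rule above can be sketched as a plain merge over tool names. This is a minimal illustration of the behavior described, not adk-agent's actual resolution code; the function and error format are hypothetical.

```rust
use std::collections::HashSet;

/// Merge static tool names with toolset-resolved names, failing
/// deterministically on the first duplicate encountered.
fn merge_tool_names(static_tools: &[&str], toolset_tools: &[&str]) -> Result<Vec<String>, String> {
    let mut seen = HashSet::new();
    let mut merged = Vec::new();
    for name in static_tools.iter().chain(toolset_tools) {
        if !seen.insert(*name) {
            return Err(format!("duplicate tool name: {name}"));
        }
        merged.push(name.to_string());
    }
    Ok(merged)
}

fn main() {
    // Distinct names across static tools and toolsets are fine.
    assert!(merge_tool_names(&["search"], &["browser"]).is_ok());
    // The same name appearing in both sources is rejected.
    assert_eq!(
        merge_tool_names(&["search"], &["search"]),
        Err("duplicate tool name: search".to_string())
    );
    println!("duplicate detection is deterministic");
}
```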
## Retry Budget

Configure automatic retries for transient tool failures with `RetryBudget`. Set a default policy for all tools and override per tool name:

```rust
use adk_agent::RetryBudget; // import path assumed
use std::time::Duration;

let agent = LlmAgentBuilder::new("resilient")
    .model(model)
    .tool(/* your tool */)
    .default_retry_budget(/* RetryBudget for all tools */)
    .tool_retry_budget(/* tool name, RetryBudget */)
    .build()?;
```
Per-tool budgets take precedence over the default. When no budget is configured, tools execute once (current behavior).
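The retry semantics described above can be sketched in plain Rust. This is a concept sketch under assumed semantics (fixed delay between attempts, last error returned on exhaustion), not the crate's `RetryBudget` implementation; the type and function names are illustrative.

```rust
use std::time::Duration;

/// A simple retry budget: maximum attempts plus a fixed delay between them.
#[derive(Clone, Copy)]
pub struct RetryBudget {
    pub max_attempts: u32,
    pub delay: Duration,
}

/// Run a fallible operation, retrying until it succeeds or the budget is spent.
/// With no budget, a tool would simply execute once (attempts = 1).
pub fn run_with_retries<T, E>(
    budget: RetryBudget,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for attempt in 0..budget.max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                if attempt + 1 < budget.max_attempts {
                    std::thread::sleep(budget.delay);
                }
            }
        }
    }
    Err(last_err.expect("max_attempts must be at least 1"))
}

fn main() {
    // A flaky "tool" that fails twice, then succeeds on the third attempt.
    let mut calls = 0;
    let budget = RetryBudget { max_attempts: 3, delay: Duration::from_millis(1) };
    let result = run_with_retries(budget, || {
        calls += 1;
        if calls < 3 { Err("transient failure") } else { Ok("tool output") }
    });
    assert_eq!(result, Ok("tool output"));
    assert_eq!(calls, 3);
    println!("succeeded after {calls} attempts");
}
```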
## Circuit Breaker

Temporarily disable tools after repeated consecutive failures within an invocation. This prevents the agent from wasting LLM iterations on a consistently failing tool:

```rust
let agent = LlmAgentBuilder::new("guarded")
    .model(model)
    .toolset(/* your toolset */)
    .circuit_breaker_threshold(5)
    .build()?;
```
After 5 consecutive failures for a given tool, the circuit breaker opens and returns an immediate error to the LLM without executing the tool. The breaker resets at the start of each new invocation.
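The breaker behavior described above amounts to a per-tool consecutive-failure counter. The sketch below illustrates that idea only; the type and method names are hypothetical, not adk-agent's internals.

```rust
use std::collections::HashMap;

/// Tracks consecutive failures per tool and opens after a threshold.
pub struct CircuitBreaker {
    threshold: u32,
    failures: HashMap<String, u32>,
}

impl CircuitBreaker {
    pub fn new(threshold: u32) -> Self {
        Self { threshold, failures: HashMap::new() }
    }

    /// True when the tool has failed `threshold` times in a row:
    /// the call should be skipped and an error returned to the LLM.
    pub fn is_open(&self, tool: &str) -> bool {
        self.failures.get(tool).copied().unwrap_or(0) >= self.threshold
    }

    pub fn record_failure(&mut self, tool: &str) {
        *self.failures.entry(tool.to_string()).or_insert(0) += 1;
    }

    /// A success resets the consecutive-failure count for that tool.
    pub fn record_success(&mut self, tool: &str) {
        self.failures.remove(tool);
    }

    /// Called at the start of each new invocation: all breakers reset.
    pub fn reset(&mut self) {
        self.failures.clear();
    }
}

fn main() {
    let mut breaker = CircuitBreaker::new(5);
    for _ in 0..5 {
        breaker.record_failure("search");
    }
    assert!(breaker.is_open("search")); // 5 consecutive failures: open
    breaker.reset(); // new invocation
    assert!(!breaker.is_open("search"));
    println!("breaker opens and resets as described");
}
```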
## Tool Error Callbacks

Register `on_tool_error` callbacks to provide fallback results when tools fail (after retries are exhausted):

```rust
let agent = LlmAgentBuilder::new("fallback")
    .model(model)
    .tool(/* your tool */)
    .on_tool_error(/* fallback callback */)
    .build()?;
```
Multiple callbacks can be registered. They are tried in order — the first to return Some(value) wins.
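The first-`Some`-wins chaining above can be sketched with a list of closures. This is an illustration of the ordering rule only; the callback signature shown (tool name and error message in, optional fallback string out) is an assumption, not the crate's actual API.

```rust
/// Fallback handlers tried in order; the first to return Some(value) wins.
type ErrorCallback = Box<dyn Fn(&str, &str) -> Option<String>>;

fn resolve_fallback(callbacks: &[ErrorCallback], tool: &str, error: &str) -> Option<String> {
    // find_map stops at the first callback that returns Some.
    callbacks.iter().find_map(|cb| cb(tool, error))
}

fn main() {
    let callbacks: Vec<ErrorCallback> = vec![
        // Only handles the "weather" tool; declines everything else.
        Box::new(|tool, _err| (tool == "weather").then(|| "forecast unavailable".to_string())),
        // Generic catch-all fallback.
        Box::new(|_tool, err| Some(format!("tool failed: {err}"))),
    ];

    // First callback declines, so the second provides the result.
    let result = resolve_fallback(&callbacks, "search", "timeout");
    assert_eq!(result.as_deref(), Some("tool failed: timeout"));

    // First callback handles "weather", so it wins over the catch-all.
    let result = resolve_fallback(&callbacks, "weather", "timeout");
    assert_eq!(result.as_deref(), Some("forecast unavailable"));
    println!("first Some wins");
}
```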
## Features

- Dynamic toolset resolution for per-invocation tool provisioning
- Automatic tool execution loop (with configurable timeout: `DEFAULT_TOOL_TIMEOUT` = 5 min)
- Configurable retry budgets (default and per-tool)
- Circuit breaker for consecutive tool failures
- Tool error callbacks with fallback result substitution
- Configurable max iterations (`DEFAULT_MAX_ITERATIONS` = 100 for `LlmAgent`, `DEFAULT_LOOP_MAX_ITERATIONS` = 1000 for `LoopAgent`)
- Agent transfer between sub-agents (with validation against registered sub-agents)
- Streaming event output
- Callback hooks at every stage
- Input/output guardrails
- Schema validation
- Tool confirmation policies (`ToolConfirmationPolicy::Never`, `Always`, `PerTool`)
- Context compaction via `LlmEventSummarizer`
## Context Compaction

`LlmEventSummarizer` uses an LLM to summarize older conversation events, reducing context size for long-running sessions. This is the Rust equivalent of ADK Python's `LlmEventSummarizer`.

```rust
use adk_agent::LlmEventSummarizer;
use adk_core::EventsCompactionConfig; // import path assumed
use std::sync::Arc;

let summarizer = LlmEventSummarizer::new(Arc::new(model));
// Optionally customize the prompt template:
// let summarizer = summarizer.with_prompt_template("Custom: {conversation_history}");

let compaction_config = EventsCompactionConfig { /* fields elided */ };
```

Pass `compaction_config` to `RunnerConfig` to enable automatic compaction. See the Context Compaction documentation for full details.
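The compaction idea can be sketched without an LLM: older events collapse into a single summary entry while recent events are kept verbatim. This is a concept sketch only; the real `LlmEventSummarizer` produces the summary text by calling an LLM, and the function and parameter names here are hypothetical.

```rust
/// Compact older events into one summary entry once the history
/// exceeds `keep_recent`, keeping the most recent events verbatim.
fn compact_events(events: Vec<String>, keep_recent: usize) -> Vec<String> {
    if events.len() <= keep_recent {
        return events; // nothing to compact
    }
    let split = events.len() - keep_recent;
    // In the real summarizer this placeholder would be LLM-generated text.
    let summary = format!("[summary of {split} earlier events]");
    let mut compacted = vec![summary];
    compacted.extend(events.into_iter().skip(split));
    compacted
}

fn main() {
    let history: Vec<String> = (1..=10).map(|i| format!("event {i}")).collect();
    let compacted = compact_events(history, 3);
    // 7 old events collapse into one summary; 3 recent events survive.
    assert_eq!(compacted.len(), 4);
    assert_eq!(compacted[0], "[summary of 7 earlier events]");
    assert_eq!(compacted[3], "event 10");
    println!("{} entries after compaction", compacted.len());
}
```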
## Related Crates

- adk-rust - Meta-crate with all components
- adk-core - Core `Agent` trait
- adk-model - LLM integrations
- adk-tool - Tool system
- adk-guardrail - Guardrails
## License

Apache-2.0

## Part of ADK-Rust

This crate is part of the ADK-Rust framework for building AI agents in Rust.