# adk-agent

Agent implementations for ADK-Rust (LLM, Custom, Workflow agents).

## Overview

`adk-agent` provides ready-to-use agent implementations for ADK-Rust:

- `LlmAgent` — core agent powered by LLM reasoning with tools, callbacks, guardrails, and skills
- `CustomAgent` — define custom logic without an LLM
- `SequentialAgent` — execute agents in sequence
- `ParallelAgent` — execute agents concurrently
- `LoopAgent` — iterate until an exit condition or max iterations
- `ConditionalAgent` — branch based on a function condition
- `LlmConditionalAgent` — LLM-powered multi-way routing to sub-agents
- `LlmEventSummarizer` — LLM-based context compaction for long conversations
## Installation

```toml
[dependencies]
adk-agent = "0.5.0"
```

Or use the umbrella crate:

```toml
[dependencies]
adk-rust = { version = "0.5.0", features = ["agents"] }
```
## Quick Start

### LLM Agent

```rust
use adk_agent::LlmAgentBuilder;
use adk_model::GeminiModel;
use std::sync::Arc;

// Names and argument values below are illustrative.
let model = Arc::new(GeminiModel::new("gemini-2.0-flash"));

let agent = LlmAgentBuilder::new("assistant")
    .description("A general-purpose assistant")
    .instruction("You are a helpful assistant.")
    .model(model)
    .tool(my_tool)
    .build()?;
```
### LlmAgentBuilder Methods

| Method | Description |
|---|---|
| `new(name)` | Create builder with agent name |
| `description(desc)` | Set agent description |
| `model(llm)` | Set the LLM model (required) |
| `instruction(text)` | Set static instruction |
| `instruction_provider(fn)` | Set dynamic instruction provider |
| `global_instruction(text)` | Set global instruction (shared across agents) |
| `global_instruction_provider(fn)` | Set dynamic global instruction provider |
| `generate_content_config(config)` | Set full `GenerateContentConfig` (temperature, top_p, etc.) |
| `temperature(f32)` | Shorthand for setting temperature only |
| `top_p(f32)` | Shorthand for setting top_p only |
| `top_k(i32)` | Shorthand for setting top_k only |
| `max_output_tokens(i32)` | Shorthand for setting max output tokens only |
| `with_skills(index)` | Attach a preloaded skills index |
| `with_auto_skills()` | Auto-load skills from `.skills/` in the current directory |
| `with_skills_from_root(path)` | Auto-load skills from `.skills/` under a specific root |
| `with_skill_policy(policy)` | Configure matching policy (top_k, threshold, tags) |
| `with_skill_budget(chars)` | Cap injected skill content length (default: 2000) |
| `tool(tool)` | Add a static tool |
| `toolset(toolset)` | Add a dynamic toolset for per-invocation tool resolution |
| `sub_agent(agent)` | Add a sub-agent for transfers |
| `max_iterations(n)` | Set maximum LLM round-trips (default: 100) |
| `tool_timeout(duration)` | Set per-tool execution timeout (default: 5 min) |
| `default_retry_budget(budget)` | Set default retry policy for all tools |
| `tool_retry_budget(name, budget)` | Set retry policy for a specific tool |
| `circuit_breaker_threshold(n)` | Disable tools after N consecutive failures |
| `on_tool_error(callback)` | Add fallback handler for tool failures |
| `require_tool_confirmation(name)` | Require user confirmation for a specific tool |
| `require_tool_confirmation_for_all()` | Require user confirmation for all tools |
| `tool_confirmation_policy(policy)` | Set custom tool confirmation policy |
| `disallow_transfer_to_parent(bool)` | Prevent agent from transferring back to parent |
| `disallow_transfer_to_peers(bool)` | Prevent agent from transferring to sibling agents |
| `include_contents(mode)` | Control content inclusion in sub-agent context |
| `input_schema(json)` | Set input JSON schema |
| `output_schema(json)` | Set output JSON schema |
| `output_key(key)` | Set state key for output |
| `input_guardrails(set)` | Add input validation guardrails |
| `output_guardrails(set)` | Add output validation guardrails |
| `before_callback(fn)` | Add before-agent callback |
| `after_callback(fn)` | Add after-agent callback |
| `before_model_callback(fn)` | Add before-model callback |
| `after_model_callback(fn)` | Add after-model callback |
| `before_tool_callback(fn)` | Add before-tool callback |
| `after_tool_callback(fn)` | Add after-tool callback |
| `after_tool_callback_full(fn)` | Rich after-tool callback with tool, args, and response |
| `build()` | Build the `LlmAgent` |
## Generation Config

Control LLM generation parameters per-agent. Use the shorthand methods for common settings, or provide a full config:

```rust
use adk_model::GenerateContentConfig;

// Shorthand
let agent = LlmAgentBuilder::new("assistant")
    .model(model.clone())
    .temperature(0.2)
    .max_output_tokens(1024)
    .build()?;

// Full config (field values illustrative)
let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .generate_content_config(GenerateContentConfig {
        temperature: Some(0.2),
        top_p: Some(0.9),
        ..Default::default()
    })
    .build()?;
```
## Skills

Skills are opt-in. No skill content is injected unless you call a skills method:

```rust
let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .with_auto_skills()? // loads .skills/**/*.md when present
    .build()?;
```

Skills are also supported on all workflow agents (`LoopAgent`, `SequentialAgent`, `ParallelAgent`, `ConditionalAgent`, `LlmConditionalAgent`).
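The skill budget listed in the builder table caps injected content by character count. A minimal sketch of that idea (a hypothetical helper, not adk-skill's actual code):

```rust
/// Truncate skill content to a character budget, cutting on char
/// boundaries (hypothetical helper, not adk-skill's actual code).
fn cap_skill_content(content: &str, budget_chars: usize) -> String {
    content.chars().take(budget_chars).collect()
}
```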
## Workflow Agents

```rust
use adk_agent::{LoopAgent, ParallelAgent, SequentialAgent};
use std::sync::Arc;

// Constructor arguments are illustrative.
// Sequential: A -> B -> C
let pipeline = SequentialAgent::new("pipeline", vec![agent_a, agent_b, agent_c]);

// Parallel: A, B, C simultaneously
let team = ParallelAgent::new("team", vec![agent_a, agent_b, agent_c]);

// Loop: repeat until exit or max iterations
let iterator = LoopAgent::new("iterator", vec![worker])
    .with_max_iterations(10);
// Default max iterations is 1000 (DEFAULT_LOOP_MAX_ITERATIONS)
```

All workflow agents support `.with_description()`, `.before_callback()`, `.after_callback()`, and the full skills API (`with_skills`, `with_auto_skills`, `with_skill_policy`, `with_skill_budget`).
## Conditional Agents

```rust
use adk_agent::{ConditionalAgent, LlmConditionalAgent};

// Function-based condition (`is_premium` is a user-defined predicate;
// arguments here are illustrative)
let conditional = ConditionalAgent::new("router", is_premium, premium_agent)
    .with_else(standard_agent);

// LLM-powered routing
let llm_router = LlmConditionalAgent::builder("support_router", model)
    .instruction("Classify the request as billing or technical.")
    .route("billing", billing_agent)
    .route("technical", tech_agent)
    .default_route(general_agent)
    .build()?;
```

`LlmConditionalAgent` normalizes the LLM's classification to lowercase and does substring matching against route labels, so the LLM doesn't need to produce an exact match.
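That normalization rule can be sketched in a few lines of plain Rust (a hypothetical helper, not the crate's actual implementation):

```rust
/// Pick the first route whose label appears as a substring of the
/// lowercased LLM classification (a sketch of the matching rule
/// described above, not adk-agent's actual code).
fn match_route<'a>(classification: &str, labels: &[&'a str]) -> Option<&'a str> {
    let normalized = classification.trim().to_lowercase();
    labels
        .iter()
        .copied()
        .find(|label| normalized.contains(&label.to_lowercase()))
}
```

So a classification like `"The user asks about Billing."` still resolves to the `"billing"` route even though it is not an exact label match.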
## Multi-Agent Systems

```rust
// Agent with sub-agents for transfer (arguments illustrative)
let coordinator = LlmAgentBuilder::new("coordinator")
    .instruction("Route requests to the right specialist.")
    .model(model.clone())
    .sub_agent(billing_agent)
    .sub_agent(support_agent)
    .build()?;
```

Control transfer behavior:

```rust
let agent = LlmAgentBuilder::new("specialist")
    .model(model)
    .disallow_transfer_to_parent(true) // can't transfer back up
    .disallow_transfer_to_peers(true)  // can't transfer to siblings
    .build()?;
```
## Toolset Support

Use `.toolset()` for context-dependent tools that need per-invocation resolution, such as per-user browser sessions from a pool. Toolsets are resolved at the start of each `run()` call using the invocation's `ReadonlyContext`.

```rust
use adk_agent::LlmAgentBuilder;
use adk_tool::{BrowserSessionPool, BrowserToolset};
use std::sync::Arc;

// Type names and arguments are illustrative.
let pool = Arc::new(BrowserSessionPool::new());
let browser_toolset = BrowserToolset::with_pool_and_profile(pool, profile);

let agent = LlmAgentBuilder::new("browser_agent")
    .description("Automates browser tasks")
    .instruction("Use the browser tools to complete the user's task.")
    .model(model)
    .toolset(Arc::new(browser_toolset))
    .build()?;
```

Static tools (`.tool()`) and dynamic toolsets (`.toolset()`) can be mixed on the same agent. Duplicate tool names across static tools and toolsets produce a deterministic error at resolution time.
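The duplicate-name check described above amounts to scanning resolved tool names in a fixed order and failing on the first repeat. A self-contained sketch (hypothetical helper; adk-agent's actual resolution logic may differ):

```rust
use std::collections::HashSet;

/// Return the first tool name that appears twice in the resolved
/// list, scanning in order so the error is deterministic
/// (hypothetical helper, not adk-agent's actual code).
fn find_duplicate(names: &[&str]) -> Option<String> {
    let mut seen = HashSet::new();
    for name in names {
        if !seen.insert(*name) {
            return Some((*name).to_string());
        }
    }
    None
}
```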
## Retry Budget

Configure automatic retries for transient tool failures:

```rust
use adk_tool::RetryBudget;
use std::time::Duration;

// RetryBudget construction is illustrative.
let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .tool(flaky_tool)
    .default_retry_budget(RetryBudget::new(2, Duration::from_millis(200)))
    .tool_retry_budget("flaky_tool", RetryBudget::new(5, Duration::from_millis(500)))
    .build()?;
```

Per-tool budgets take precedence over the default. When no budget is configured, tools execute once.
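In outline, a retry budget is a bounded retry loop around tool execution. A self-contained sketch of that semantics (a hypothetical helper, not adk-agent's actual implementation):

```rust
/// Run a fallible operation, retrying up to `max_retries` extra
/// attempts on failure (sketch of retry-budget semantics, not
/// adk-agent's actual code).
fn run_with_budget<T, E>(
    max_retries: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(_) if attempt < max_retries => attempt += 1, // transient: retry
            Err(e) => return Err(e),                         // budget exhausted
        }
    }
}
```

With `max_retries = 0` the operation runs exactly once, matching the "no budget configured" behavior above.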
## Circuit Breaker

Temporarily disable tools after repeated consecutive failures within an invocation:

```rust
let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .toolset(toolset)
    .circuit_breaker_threshold(5)
    .build()?;
```

After 5 consecutive failures for a given tool, the circuit breaker opens and returns an immediate error to the LLM without executing the tool. It resets at the start of each new invocation.
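The mechanism described above reduces to a per-tool counter of consecutive failures. A minimal sketch (hypothetical helper, not adk-agent's actual implementation):

```rust
/// Minimal per-tool circuit breaker: opens once consecutive
/// failures reach the threshold (sketch of the behavior described
/// above, not adk-agent's actual code).
struct CircuitBreaker {
    threshold: u32,
    consecutive_failures: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { threshold, consecutive_failures: 0 }
    }

    /// True when the breaker is open and the tool should be skipped.
    fn is_open(&self) -> bool {
        self.consecutive_failures >= self.threshold
    }

    fn record(&mut self, success: bool) {
        if success {
            self.consecutive_failures = 0; // a success resets the streak
        } else {
            self.consecutive_failures += 1;
        }
    }
}
```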
## Tool Error Callbacks

Register `on_tool_error` callbacks to provide fallback results when tools fail (after retries are exhausted):

```rust
// Callback signature is illustrative.
let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .tool(search_tool)
    .on_tool_error(|tool_name, _error| {
        // Fall back to an empty result for the search tool only.
        (tool_name == "search").then(|| serde_json::json!({ "results": [] }))
    })
    .build()?;
```

Multiple callbacks can be registered. They are tried in order, and the first to return `Some(value)` wins.
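The first-`Some`-wins chain is the standard `find_map` pattern. A self-contained sketch (hypothetical helpers, not adk-agent's actual code):

```rust
/// Try fallback handlers in registration order; the first to return
/// Some wins (sketch of the callback chain, not adk-agent's code).
fn first_fallback(
    handlers: &[fn(&str) -> Option<String>],
    tool_name: &str,
) -> Option<String> {
    handlers.iter().find_map(|h| h(tool_name))
}

// Hypothetical handlers for illustration:
fn search_fallback(name: &str) -> Option<String> {
    (name == "search").then(|| "cached results".to_string())
}

fn generic_fallback(_name: &str) -> Option<String> {
    Some("generic fallback".to_string())
}
```

Because handlers run in registration order, a specific fallback registered before a catch-all takes priority for the tools it covers.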
## Rich After-Tool Callbacks

`after_tool_callback_full` receives the tool, arguments, and response value, aligned with the Python/Go ADK callback model. Return `Ok(None)` to keep the original response, or `Ok(Some(value))` to replace it:

```rust
// Callback signature is illustrative.
let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .after_tool_callback_full(|tool, args, response| {
        // Inspect or rewrite the tool response here.
        Ok(None) // keep the original response
    })
    .build()?;
```

These run after the legacy `after_tool_callback` chain.
## Guardrails

```rust
use adk_agent::LlmAgentBuilder;
use adk_guardrail::GuardrailSet;

// Type and guardrail names are illustrative; see adk-guardrail
// for concrete guardrails.
let input_guardrails = GuardrailSet::new()
    .with(profanity_filter)
    .with(max_length_check);

let agent = LlmAgentBuilder::new("assistant")
    .model(model)
    .input_guardrails(input_guardrails)
    .build()?;
```
## Custom Agent

```rust
use adk_agent::CustomAgentBuilder;

// Handler signature is illustrative.
let custom = CustomAgentBuilder::new("custom_logic")
    .description("Runs custom logic without an LLM")
    .handler(|ctx| async move {
        // Your custom, LLM-free logic goes here.
        todo!()
    })
    .build()?;
```
## Tool Call Markup Normalization

The `tool_call_markup` module handles LLMs that emit tool calls as text markup (e.g., `<tool_call>...</tool_call>`) instead of structured function calls. `normalize_content` parses these text blocks into proper `Part::FunctionCall` parts so the tool execution loop can handle them:

```rust
use adk_agent::tool_call_markup::normalize_content;

// Exact signature is illustrative.
normalize_content(&mut content);
```

This is applied automatically inside `LlmAgent`; you only need it if building custom agent logic.
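The core of such normalization is extracting the payload between the markup tags before converting it into a structured function call. A simplified, self-contained sketch of that step (not the crate's actual parser):

```rust
/// Extract the raw payloads of `<tool_call>...</tool_call>` blocks
/// from model text (a simplified sketch of the normalization idea,
/// not adk-agent's actual parser).
fn extract_tool_calls(text: &str) -> Vec<&str> {
    let (open, close) = ("<tool_call>", "</tool_call>");
    let mut calls = Vec::new();
    let mut rest = text;
    while let Some(start) = rest.find(open) {
        let after = &rest[start + open.len()..];
        match after.find(close) {
            Some(end) => {
                calls.push(after[..end].trim());
                rest = &after[end + close.len()..];
            }
            None => break, // unterminated block: stop scanning
        }
    }
    calls
}
```

The real normalization would then parse each extracted payload (typically JSON) into a `Part::FunctionCall`.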
## Context Compaction

`LlmEventSummarizer` uses an LLM to summarize older conversation events, reducing context size for long-running sessions:

```rust
use adk_agent::LlmEventSummarizer;
use adk_core::EventsCompactionConfig;
use std::sync::Arc;

let summarizer = LlmEventSummarizer::new(model.clone());

// Optionally customize the prompt template:
// let summarizer = summarizer.with_prompt_template("Custom: {conversation_history}");

// Module path and field names are illustrative.
let compaction_config = EventsCompactionConfig {
    summarizer: Arc::new(summarizer),
    ..Default::default()
};
```

Pass `compaction_config` to `RunnerConfig` to enable automatic compaction.
## Features

| Feature | Description |
|---|---|
| (default) | All agent types, callbacks, skills, toolsets, retry/circuit breaker |
| `guardrails` | Input/output guardrails via `adk-guardrail` |
## Related Crates

- `adk-rust` — Umbrella crate
- `adk-core` — Core `Agent` trait
- `adk-model` — LLM integrations
- `adk-tool` — Tool system
- `adk-guardrail` — Guardrails
- `adk-skill` — Skill discovery and injection
## License

Apache-2.0

## Part of ADK-Rust

This crate is part of the ADK-Rust framework for building AI agents in Rust.