A Rust SDK for building reliable AI agent systems with first-class A2A protocol support.
Radkit prioritizes developer experience and control above all else.
Developers maintain complete control over agent behavior, execution flow, context management, and state.
While the library provides abstractions, developers can always drop down to lower-level APIs when needed.

Features
- 🤝 A2A Protocol First - Native support for the Agent-to-Agent (A2A) communication standard
- 🔄 Unified LLM Interface - Single API for Anthropic, OpenAI, Gemini, Grok, DeepSeek
- 🛠️ Tool Execution - Automatic tool calling with multi-turn loops and state management
- 📝 Structured Outputs - Type-safe response deserialization with JSON Schema
- 🔒 Type Safety - Leverage Rust's type system for reliability and correctness
Installation
Add radkit to your Cargo.toml.
Default (Minimal)
To use core types and helpers like LlmFunction and LlmWorker without the agent server runtime:
[dependencies]
radkit = "0.0.3"
tokio = { version = "1", features = ["rt-multi-thread", "sync", "net", "process", "macros"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
schemars = "1"
With Agent Server Runtime
To include the DefaultRuntime and enable the full A2A agent server capabilities (on native targets), enable the runtime feature:
[dependencies]
radkit = { version = "0.0.3", features = ["runtime"] }
tokio = { version = "1", features = ["rt-multi-thread", "sync", "net", "process", "macros"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
schemars = "1"
Feature Flags
Radkit ships optional capabilities that you can opt into per target:
- runtime: Enables the native DefaultRuntime, HTTP server, tracing, and the other dependencies required to run A2A-compliant agents locally.
- dev-ui: Builds on top of runtime and serves an interactive UI (native-only) where you can trigger tasks and inspect streaming output; see the example below.
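For example, a Cargo.toml sketch that enables both flags (runtime is listed explicitly even though dev-ui builds on it):

[dependencies]
radkit = { version = "0.0.3", features = ["runtime", "dev-ui"] }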
Core Concepts
Thread - Conversation Context
A Thread represents the complete conversation history with the LLM, including system prompts and message exchanges.
use radkit::models::{Thread, Event};

// A single user message
let thread = Thread::from_user("Hello, world!");

// A system prompt followed by a user message
let thread = Thread::from_system("You are a helpful coding assistant")
    .add_event(Event::user("Explain Rust ownership"));

// Seed a thread with prior conversation history
let thread = Thread::new(vec![
    Event::user("What is 2+2?"),
    Event::assistant("2+2 equals 4."),
    Event::user("What about 3+3?"),
]);

// Builder style: attach a system prompt, then add events
let thread = Thread::new(vec![])
    .with_system("You are an expert in mathematics")
    .add_event(Event::user("Calculate the area of a circle with radius 5"));
Type Conversions:
let thread: Thread = "Hello".into();
let thread: Thread = String::from("World").into();
let thread: Thread = Event::user("Question").into();
let thread: Thread = vec![
Event::user("First"),
Event::assistant("Response"),
].into();
Content - Multi-Modal Messages
Content represents the payload of a message, supporting text, images, documents, tool calls, and tool responses.
use radkit::models::{Content, ContentPart};

// Text-only content
let content = Content::from_text("Hello!");

// Multi-modal content: text plus a base64-encoded image
let content = Content::from_parts(vec![
    ContentPart::Text("Check this image:".to_string()),
    ContentPart::from_data(
        "image/png",
        "base64_encoded_image_data_here",
        Some("photo.png".to_string()),
    )?,
]);

// Accessors
for text in content.texts() {
    println!("{}", text);
}
if content.has_text() {
    println!("First text: {}", content.first_text().unwrap());
}
if content.has_tool_calls() {
    println!("Tool calls: {}", content.tool_calls().len());
}
if let Some(combined) = content.joined_texts() {
    println!("Combined: {}", combined);
}
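As a sketch of how this composes with real files (this assumes the third-party base64 crate in Cargo.toml, and that from_data accepts any string-like payload, by analogy with the literal above):

use base64::{engine::general_purpose::STANDARD, Engine as _};
use radkit::models::{Content, ContentPart};

// Load an image from disk and attach it alongside a text prompt.
let bytes = std::fs::read("photo.png")?;
let content = Content::from_parts(vec![
    ContentPart::Text("Describe this image:".to_string()),
    ContentPart::from_data(
        "image/png",
        STANDARD.encode(&bytes), // base64-encode the raw bytes
        Some("photo.png".to_string()),
    )?,
]);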
Event - Conversation Messages
Event represents a single message in a conversation with an associated role.
use radkit::models::{Event, Role};

let system_event = Event::system("You are a helpful assistant");
let user_event = Event::user("What is Rust?");
let assistant_event = Event::assistant("Rust is a systems programming language...");

// Inspect an event's role
let event = &user_event;
match event.role() {
    Role::System => println!("System message"),
    Role::User => println!("User message"),
    Role::Assistant => println!("Assistant message"),
    Role::Tool => println!("Tool response"),
}

// Access the event's content
let content = event.content();
println!("Message: {}", content.first_text().unwrap_or(""));
LLM Providers
Radkit supports multiple LLM providers with a unified interface.
Anthropic (Claude)
use radkit::models::providers::AnthropicLlm;
use radkit::models::{BaseLlm, Thread};

// Read the API key from the environment
let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;

// Or pass it explicitly
let llm = AnthropicLlm::new("claude-sonnet-4-5-20250929", "sk-ant-...");

// Configure generation parameters
let llm = AnthropicLlm::from_env("claude-opus-4-1-20250805")?
    .with_max_tokens(8192)
    .with_temperature(0.7);

let thread = Thread::from_user("Explain quantum computing");
let response = llm.generate_content(thread, None).await?;
println!("Response: {}", response.content().first_text().unwrap());
println!("Tokens used: {}", response.usage().total_tokens());
OpenAI (GPT)
use radkit::models::providers::OpenAILlm;

let llm = OpenAILlm::from_env("gpt-4o")?;
let llm = OpenAILlm::from_env("gpt-4o-mini")?
    .with_max_tokens(2000)
    .with_temperature(0.5);
let response = llm.generate("What is machine learning?", None).await?;
Google Gemini
use radkit::models::providers::GeminiLlm;
let llm = GeminiLlm::from_env("gemini-2.0-flash-exp")?;
let response = llm.generate("Explain neural networks", None).await?;
Grok (xAI)
use radkit::models::providers::GrokLlm;
let llm = GrokLlm::from_env("grok-2-latest")?;
let response = llm.generate("What is the meaning of life?", None).await?;
DeepSeek
use radkit::models::providers::DeepSeekLlm;
let llm = DeepSeekLlm::from_env("deepseek-chat")?;
let response = llm.generate("Code review best practices", None).await?;
LlmFunction - Simple Structured Outputs
LlmFunction<T> is ideal when you want structured, typed responses without tool execution.
Basic Usage
use radkit::agent::LlmFunction;
use radkit::models::providers::AnthropicLlm;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct MovieRecommendation {
    title: String,
    year: u16,
    genre: String,
    rating: f32,
    reason: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
    let movie_fn = LlmFunction::<MovieRecommendation>::new(llm);
    let recommendation = movie_fn
        .run("Recommend a sci-fi movie for someone who loves The Matrix")
        .await?;
    println!("🎬 {}", recommendation.title);
    println!("📅 Year: {}", recommendation.year);
    println!("🎭 Genre: {}", recommendation.genre);
    println!("⭐ Rating: {}/10", recommendation.rating);
    println!("💡 {}", recommendation.reason);
    Ok(())
}
With System Instructions
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct CodeReview {
    issues: Vec<String>,
    suggestions: Vec<String>,
    severity: String,
}

let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
let review_fn = LlmFunction::<CodeReview>::new_with_system_instructions(
    llm,
    "You are a senior code reviewer. Be thorough but constructive.",
);

let code = r#"
fn divide(a: i32, b: i32) -> i32 {
    a / b
}
"#;

let review = review_fn.run(format!("Review this code:\n{}", code)).await?;
println!("Severity: {}", review.severity);
println!("\nIssues:");
for issue in review.issues {
    println!("  - {}", issue);
}
println!("\nSuggestions:");
for suggestion in review.suggestions {
    println!("  - {}", suggestion);
}
Multi-Turn Conversations
use radkit::models::Event;

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct Answer {
    response: String,
    confidence: f32,
}

let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
let qa_fn = LlmFunction::<Answer>::new(llm);

// Each call returns the typed answer plus the updated thread
let (answer1, thread) = qa_fn
    .run_and_continue("What is Rust?")
    .await?;
println!("Q1: {}", answer1.response);

let (answer2, thread) = qa_fn
    .run_and_continue(thread.add_event(Event::user("What are its main benefits?")))
    .await?;
println!("Q2: {}", answer2.response);

let (answer3, _) = qa_fn
    .run_and_continue(thread.add_event(Event::user("Give me a code example")))
    .await?;
println!("Q3: {}", answer3.response);
Complex Data Structures
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct Recipe {
    name: String,
    prep_time_minutes: u32,
    cook_time_minutes: u32,
    servings: u8,
    ingredients: Vec<Ingredient>,
    instructions: Vec<String>,
    tags: Vec<String>,
}

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct Ingredient {
    name: String,
    amount: String,
    unit: String,
}

let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
let recipe_fn = LlmFunction::<Recipe>::new_with_system_instructions(
    llm,
    "You are a professional chef. Provide detailed, accurate recipes.",
);

let recipe = recipe_fn
    .run("Create a recipe for chocolate chip cookies")
    .await?;

println!("🍪 {}", recipe.name);
println!("⏱️ Prep: {}min, Cook: {}min", recipe.prep_time_minutes, recipe.cook_time_minutes);
println!("👥 Servings: {}", recipe.servings);
println!("\n📋 Ingredients:");
for ingredient in recipe.ingredients {
    println!("  - {} {} {}", ingredient.amount, ingredient.unit, ingredient.name);
}
println!("\n👨‍🍳 Instructions:");
for (i, instruction) in recipe.instructions.iter().enumerate() {
    println!("  {}. {}", i + 1, instruction);
}
LlmWorker - Tool Execution
LlmWorker<T> adds automatic tool calling and multi-turn execution loops to LlmFunction.
use radkit::agent::LlmWorker;
use radkit::models::providers::AnthropicLlm;
use radkit::tools::{tool, ToolResult};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::json;

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct WeatherReport {
    location: String,
    temperature: f64,
    condition: String,
    forecast: String,
}

#[derive(Deserialize, JsonSchema)]
struct GetWeatherArgs {
    location: String,
}

#[tool(description = "Get current weather for a location")]
async fn get_weather(args: GetWeatherArgs) -> ToolResult {
    // In a real tool, call a weather API here
    let weather_data = json!({
        "temperature": 72.5,
        "condition": "Sunny",
        "humidity": 65,
        "location": args.location,
    });
    ToolResult::success(weather_data)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
    let worker = LlmWorker::<WeatherReport>::builder(llm)
        .with_system_instructions("You are a weather assistant")
        .with_tool(get_weather)
        .build();
    let report = worker.run("What's the weather in San Francisco?").await?;
    println!("📍 Location: {}", report.location);
    println!("🌡️ Temperature: {}°F", report.temperature);
    println!("☀️ Condition: {}", report.condition);
    println!("📅 {}", report.forecast);
    Ok(())
}
Multiple Tools
use radkit::tools::{tool, ToolResult};
use serde_json::json;

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct TravelPlan {
    destination: String,
    weather_summary: String,
    hotel_recommendation: String,
    estimated_cost: f64,
}

#[derive(Deserialize, JsonSchema)]
struct WeatherArgs {
    location: String,
}

#[derive(Deserialize, JsonSchema)]
struct HotelArgs {
    location: String,
}

#[derive(Deserialize, JsonSchema)]
struct TripCostArgs {
    hotel_price: f64,
    nights: i64,
}

#[tool(description = "Get weather forecast")]
async fn get_weather(args: WeatherArgs) -> ToolResult {
    ToolResult::success(json!({
        "forecast": format!("Sunny and 75°F in {}", args.location)
    }))
}

#[tool(description = "Search for hotels in a location")]
async fn search_hotels(args: HotelArgs) -> ToolResult {
    ToolResult::success(json!({
        "hotels": [{
            "name": "Grand Hotel",
            "price": 150,
            "rating": 4.5,
            "location": args.location
        }]
    }))
}

#[tool(description = "Calculate estimated trip cost")]
async fn calculate_trip_cost(args: TripCostArgs) -> ToolResult {
    let total = args.hotel_price * args.nights as f64 + 500.0;
    ToolResult::success(json!({ "total": total }))
}

let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
let worker = LlmWorker::<TravelPlan>::builder(llm)
    .with_system_instructions("You are a travel planning assistant")
    .with_tools(vec![get_weather, search_hotels, calculate_trip_cost])
    .build();

let plan = worker.run("Plan a 3-day trip to Tokyo").await?;
println!("🗺️ {}", plan.destination);
println!("🌤️ {}", plan.weather_summary);
println!("🏨 {}", plan.hotel_recommendation);
println!("💰 Estimated cost: ${:.2}", plan.estimated_cost);
Stateful Tools
Tools can maintain state across calls using ToolContext.
use radkit::tools::{tool, ToolContext, ToolResult};
use serde_json::json;

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct ShoppingCart {
    items: Vec<String>,
    total_items: u32,
    estimated_total: f64,
}

#[derive(Deserialize, JsonSchema)]
struct AddToCartArgs {
    item: String,
    price: f64,
}

#[tool(description = "Add an item to the shopping cart")]
async fn add_to_cart(args: AddToCartArgs, ctx: ToolContext) -> ToolResult {
    // Read the current cart state (empty on first call)
    let mut items: Vec<String> = ctx
        .state()
        .get_state("items")
        .and_then(|v| serde_json::from_value(v).ok())
        .unwrap_or_default();
    let total_price: f64 = ctx
        .state()
        .get_state("total_price")
        .and_then(|v| v.as_f64())
        .unwrap_or(0.0);

    items.push(args.item.clone());
    let new_total = total_price + args.price;

    // Persist state for subsequent tool calls
    ctx.state().set_state("items", json!(items));
    ctx.state().set_state("total_price", json!(new_total));

    ToolResult::success(json!({
        "item_added": args.item,
        "cart_size": items.len(),
        "total": new_total
    }))
}

let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
let worker = LlmWorker::<ShoppingCart>::builder(llm)
    .with_tool(add_to_cart)
    .build();

let cart = worker.run("Add a laptop for $999 and a mouse for $25").await?;
println!("🛒 Cart:");
for item in cart.items {
    println!("  - {}", item);
}
println!("📦 Total items: {}", cart.total_items);
println!("💵 Total: ${:.2}", cart.estimated_total);
A2A Agents
Radkit provides first-class support for building Agent-to-Agent (A2A) protocol compliant agents. The framework ensures that if your code compiles, it's automatically A2A compliant.
What is A2A?
The A2A protocol is an open standard that enables seamless communication and collaboration between AI agents. It provides:
- Standardized agent discovery via Agent Cards
- Task lifecycle management (submitted, working, completed, etc.)
- Multi-turn conversations with input-required states
- Streaming support for long-running operations
- Artifact generation for tangible outputs
Building A2A Agents
Agents in radkit are composed of skills. Each skill handles a specific capability and is annotated with the #[skill] macro to provide A2A metadata.
Defining a Skill
use radkit::agent::{Artifact, LlmFunction, OnRequestResult, SkillHandler};
use radkit::errors::AgentError;
use radkit::macros::skill;
use radkit::models::{BaseLlm, Content};
use radkit::runtime::context::{Context, TaskContext};
use radkit::runtime::Runtime;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, JsonSchema)]
struct UserProfile {
    name: String,
    email: String,
    role: String,
}

#[skill(
    id = "extract_profile",
    name = "Profile Extractor",
    description = "Extracts structured user profiles from text",
    tags = ["extraction", "profiles"],
    examples = [
        "Extract profile: John Doe, john@example.com, Software Engineer",
        "Parse this resume into a profile"
    ],
    input_modes = ["text/plain", "application/pdf"],
    output_modes = ["application/json"]
)]
pub struct ProfileExtractorSkill;

#[cfg_attr(all(target_os = "wasi", target_env = "p1"), async_trait::async_trait(?Send))]
#[cfg_attr(
    not(all(target_os = "wasi", target_env = "p1")),
    async_trait::async_trait
)]
impl SkillHandler for ProfileExtractorSkill {
    async fn on_request(
        &self,
        task_context: &mut TaskContext,
        context: &Context,
        runtime: &dyn Runtime,
        content: Content,
    ) -> Result<OnRequestResult, AgentError> {
        let llm = runtime.llm_provider().default_llm()?;

        task_context.send_intermediate_update("Analyzing text...").await?;

        let profile = extract_profile_data(llm)
            .run(content.first_text().unwrap())
            .await?;

        let artifact = Artifact::from_json("user_profile.json", &profile)?;
        Ok(OnRequestResult::Completed {
            message: Some(Content::from_text("Profile extracted successfully")),
            artifacts: vec![artifact],
        })
    }
}

fn extract_profile_data(llm: impl BaseLlm + 'static) -> LlmFunction<UserProfile> {
    LlmFunction::new_with_system_instructions(
        llm,
        "Extract name, email, and role from the provided text.",
    )
}
Multi-Turn Conversations
Skills can request additional input from users when needed. Use slot enums to track different input states:
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// This example extends the earlier UserProfile with the extra fields the skill
// collects. It otherwise builds on the imports from the previous example;
// OnInputResult and SkillSlot are assumed to be exported from radkit::agent
// alongside OnRequestResult.
#[derive(Serialize, Deserialize, JsonSchema)]
struct UserProfile {
    name: String,
    email: String,
    role: String,
    phone: String,
    department: String,
}

#[derive(Serialize, Deserialize)]
enum ProfileSlot {
    Email,
    PhoneNumber,
    Department,
}

#[cfg_attr(all(target_os = "wasi", target_env = "p1"), async_trait::async_trait(?Send))]
#[cfg_attr(
    not(all(target_os = "wasi", target_env = "p1")),
    async_trait::async_trait
)]
impl SkillHandler for ProfileExtractorSkill {
    async fn on_request(
        &self,
        task_context: &mut TaskContext,
        context: &Context,
        runtime: &dyn Runtime,
        content: Content,
    ) -> Result<OnRequestResult, AgentError> {
        let llm = runtime.llm_provider().default_llm()?;
        let profile = extract_profile_data(llm)
            .run(content.first_text().unwrap())
            .await?;

        if profile.email.is_empty() {
            task_context.save_data("partial_profile", &profile)?;
            return Ok(OnRequestResult::InputRequired {
                message: Content::from_text("Please provide the user's email address"),
                slot: SkillSlot::new(ProfileSlot::Email),
            });
        }

        if profile.phone.is_empty() {
            task_context.save_data("partial_profile", &profile)?;
            return Ok(OnRequestResult::InputRequired {
                message: Content::from_text("Please provide the user's phone number"),
                slot: SkillSlot::new(ProfileSlot::PhoneNumber),
            });
        }

        let artifact = Artifact::from_json("user_profile.json", &profile)?;
        Ok(OnRequestResult::Completed {
            message: Some(Content::from_text("Profile complete!")),
            artifacts: vec![artifact],
        })
    }

    async fn on_input_received(
        &self,
        task_context: &mut TaskContext,
        context: &Context,
        runtime: &dyn Runtime,
        content: Content,
    ) -> Result<OnInputResult, AgentError> {
        let slot: ProfileSlot = task_context.load_slot()?.expect("a slot was saved");
        let mut profile: UserProfile = task_context
            .load_data("partial_profile")?
            .expect("a partial profile was saved"); // simplified; return a proper AgentError in production

        match slot {
            ProfileSlot::Email => {
                profile.email = content.first_text().unwrap().to_string();
                if profile.phone.is_empty() {
                    task_context.save_data("partial_profile", &profile)?;
                    return Ok(OnInputResult::InputRequired {
                        message: Content::from_text("Please provide your phone number"),
                        slot: SkillSlot::new(ProfileSlot::PhoneNumber),
                    });
                }
            }
            ProfileSlot::PhoneNumber => {
                profile.phone = content.first_text().unwrap().to_string();
                // is_valid_phone is a validation helper you define elsewhere
                if !is_valid_phone(&profile.phone) {
                    return Ok(OnInputResult::Failed {
                        error: "Invalid phone number format".to_string(),
                    });
                }
            }
            ProfileSlot::Department => {
                profile.department = content.first_text().unwrap().to_string();
            }
        }

        let artifact = Artifact::from_json("user_profile.json", &profile)?;
        Ok(OnInputResult::Completed {
            message: Some(Content::from_text("Profile completed!")),
            artifacts: vec![artifact],
        })
    }
}
Intermediate Updates and Partial Artifacts
For long-running operations, send progress updates and partial results:
#[cfg_attr(all(target_os = "wasi", target_env = "p1"), async_trait::async_trait(?Send))]
#[cfg_attr(
    not(all(target_os = "wasi", target_env = "p1")),
    async_trait::async_trait
)]
impl SkillHandler for ReportGeneratorSkill {
    async fn on_request(
        &self,
        task_context: &mut TaskContext,
        context: &Context,
        runtime: &dyn Runtime,
        content: Content,
    ) -> Result<OnRequestResult, AgentError> {
        let llm = runtime.llm_provider().default_llm()?;

        // analyze_data returns an LlmFunction, like extract_profile_data above;
        // generate_charts and compile_report are assumed async helpers defined elsewhere.
        task_context.send_intermediate_update("Analyzing data...").await?;
        let analysis = analyze_data(llm.clone())
            .run(content.first_text().unwrap())
            .await?;
        let partial = Artifact::from_json("analysis.json", &analysis)?;
        task_context.send_partial_artifact(partial).await?;

        task_context.send_intermediate_update("Generating visualizations...").await?;
        let charts = generate_charts(llm.clone(), &analysis).await?;
        let charts_artifact = Artifact::from_json("charts.json", &charts)?;
        task_context.send_partial_artifact(charts_artifact).await?;

        task_context.send_intermediate_update("Compiling final report...").await?;
        let report = compile_report(llm, &analysis, &charts).await?;
        let final_artifact = Artifact::from_json("report.json", &report)?;
        Ok(OnRequestResult::Completed {
            message: Some(Content::from_text("Report complete!")),
            artifacts: vec![final_artifact],
        })
    }
}
Composing an Agent
use radkit::agent::{Agent, AgentDefinition};
use radkit::models::providers::AnthropicLlm;
use radkit::runtime::DefaultRuntime;

pub fn configure_agents() -> Vec<AgentDefinition> {
    let my_agent = Agent::builder()
        .with_id("my-agent-v1")
        .with_name("My A2A Agent")
        .with_description("An intelligent agent with multiple skills")
        .with_skill(ProfileExtractorSkill)
        .with_skill(ReportGeneratorSkill)
        .with_skill(DataAnalysisSkill)
        .build();
    vec![my_agent]
}

#[cfg(not(all(target_os = "wasi", target_env = "p1")))]
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let llm = AnthropicLlm::from_env("claude-sonnet-4-5-20250929")?;
    let runtime = DefaultRuntime::new(llm);
    runtime
        .agents(configure_agents())
        .serve("127.0.0.1:8080")
        .await?;
    Ok(())
}
How Radkit Guarantees A2A Compliance
Radkit ensures A2A compliance through compile-time guarantees and automatic protocol mapping:
1. Typed State Management
pub enum OnRequestResult {
    InputRequired { message: Content, slot: SkillSlot },
    Completed { message: Option<Content>, artifacts: Vec<Artifact> },
    Failed { error: String },
    Rejected { reason: String },
}
Guarantee: You can only return valid A2A task states. Invalid states won't compile.
2. Intermediate Updates
task_context.send_intermediate_update("Processing...").await?;
task_context.send_partial_artifact(artifact).await?;
Guarantee: You cannot accidentally send terminal states or mark intermediate updates as final.
3. Automatic Metadata Generation
The #[skill] macro automatically generates:
- A2A AgentSkill entries for the Agent Card
- MIME type validation based on input_modes/output_modes
- Proper skill discovery metadata
Guarantee: Your Agent Card is always consistent with your skill implementations.
4. Protocol Type Mapping
The framework automatically converts between radkit types and A2A protocol types:
| Radkit Type | A2A Protocol Type |
| --- | --- |
| Content | Message with Part[] |
| Artifact::from_json() | Artifact with DataPart |
| Artifact::from_text() | Artifact with TextPart |
| OnRequestResult::Completed | Task with state=completed |
| OnRequestResult::InputRequired | Task with state=input-required |
Guarantee: You never handle A2A protocol types directly. The framework ensures correct serialization.
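For instance, the Artifact constructors in the table are plain Rust calls; a sketch (the exact from_text signature is assumed by analogy with the from_json calls used throughout this README):

use radkit::agent::Artifact;
use serde_json::json;

// Becomes an A2A Artifact carrying a DataPart
let data_artifact = Artifact::from_json("report.json", &json!({ "status": "ok" }))?;
// Becomes an A2A Artifact carrying a TextPart (signature assumed)
let text_artifact = Artifact::from_text("summary.txt", "All checks passed");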
5. Lifecycle Enforcement
task_context.send_intermediate_update("Working...").await?;
task_context.send_partial_artifact(artifact).await?;
Ok(OnRequestResult::Completed {
artifacts: vec![final_artifact],
..
})
Guarantee: The type system prevents protocol violations at compile time.
How These Guarantees Work
Radkit enforces A2A compliance through several type-level mechanisms:
1. Unrepresentable Invalid States
The OnRequestResult and OnInputResult enums only expose valid A2A states as variants. There's no way to construct an invalid state because the type system doesn't allow it:
// ✅ Compiles: a valid A2A terminal state
Ok(OnRequestResult::Completed { message: None, artifacts: vec![] })
// ❌ Does not compile: the enum has no such variant
Ok(OnRequestResult::InvalidState { ... })
2. Restricted Method APIs
Methods like task_context.send_intermediate_update() are internally hardcoded to use TaskState::Working with final=false. The API doesn't expose parameters that would allow setting invalid combinations:
pub async fn send_intermediate_update(&mut self, message: impl Into<Content>) -> Result<()> {
    // Internally fixed to TaskState::Working with final = false;
    // no parameter exposes these fields to the caller.
}
3. Separation of Concerns via Return Types
Intermediate updates go through task_context methods, while final states are only set via return values from on_request() and on_input_received(). This architectural separation, enforced by Rust's type system, makes it impossible to accidentally mark an intermediate update as final or send a terminal state mid-execution:
// Intermediate updates: only through TaskContext methods
task_context.send_intermediate_update("Working...").await?;
// Terminal states: only through the handler's return value
Ok(OnRequestResult::Completed { ... })
4. Compile-Time WASM Compatibility
The library uses conditional compilation and the compat module to ensure WASM portability while maintaining the same API surface. The ?Send trait bound is conditionally applied based on target:
#[cfg_attr(all(target_os = "wasi", target_env = "p1"), async_trait(?Send))]
#[cfg_attr(not(all(target_os = "wasi", target_env = "p1")), async_trait)]
This means WASM compatibility is verified at compile time - if your agent compiles for native targets, it will compile for WASM without code changes.
Example: Complete A2A Agent
See the hr_agent example for a complete multi-skill A2A agent with:
- Resume processing with multi-turn input handling
- Onboarding plan generation with intermediate updates
- IT account creation via remote agent delegation
- Full A2A protocol compliance
Contributing
Contributions are welcome!
We love agentic coding and use Claude Code, Gemini, and Codex.
That doesn't mean this is a random vibe-coded project: everything here is carefully crafted.
We expect your contributions to be equally well thought out, with clear reasons for the changes you submit.
- Follow the AGENTS.md
- Add tests for new features
- Update documentation
- Ensure cargo fmt and cargo clippy pass
License
MIT