A Rust SDK for building reliable AI agent systems with first-class A2A protocol support.
Radkit prioritizes developer experience and control above all else. Developers maintain complete control over agent behavior, execution flow, context management, and state.
While the library provides abstractions, developers can always drop down to lower-level APIs when needed.
## Features
- A2A Protocol First - Native support for Agent-to-Agent communication standard
- Unified LLM Interface - Single API for Anthropic, OpenAI, Gemini, Grok, DeepSeek
- Tool Execution - Automatic tool calling with multi-turn loops and state management
- Structured Outputs - Type-safe response deserialization with JSON Schema
- Type Safety - Leverage Rust's type system for reliability and correctness
## Installation

Add `radkit` to your `Cargo.toml`.

### Default (Minimal)

For using core types and helpers like `LlmFunction` and `LlmWorker` without the agent server runtime:

```toml
[dependencies]
radkit = "0.0.4"
tokio = { version = "1", features = ["rt-multi-thread", "sync", "net", "process", "macros"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
schemars = "1"
```
### With Agent Server Runtime

To include the runtime server handle and enable the full A2A agent server capabilities (on native targets), enable the `runtime` feature:

```toml
[dependencies]
radkit = { version = "0.0.4", features = ["runtime"] }
tokio = { version = "1", features = ["rt-multi-thread", "sync", "net", "process", "macros"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
schemars = "1"
```
### Feature Flags

Radkit ships optional capabilities that you can opt into per target:

- `runtime`: Enables the native runtime handle, HTTP server, tracing, and other dependencies required to run A2A-compliant agents locally.
- `agentskill`: Enables `AgentSkillDef`, `include_skill!`, and `with_skill_dir` for loading skills from `SKILL.md` files. Included in the `macros` feature by default.
- `dev-ui`: Builds on top of `runtime` and serves an interactive UI (native-only) where you can trigger tasks and inspect streaming output.
- `task-store-sqlite`: Enables a native SQLite-backed `TaskStore` for persistent task, event, and state storage.
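For example, a development build that needs the server runtime, the inspection UI, and persistent task storage might enable several flags together (illustrative; pick only the features you need):

```toml
[dependencies]
radkit = { version = "0.0.4", features = ["runtime", "dev-ui", "task-store-sqlite"] }
```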
## Core Concepts
### Thread - Conversation Context

A `Thread` represents the complete conversation history with the LLM, including system prompts and message exchanges.

```rust
use radkit::models::{Event, Thread}; // import paths are illustrative

// Simple thread from user message
let thread = Thread::from_user("Hello, how are you?");

// Thread with system prompt
let thread = Thread::from_system("You are a helpful assistant.")
    .add_event(Event::user("Hello!"));

// Multi-turn conversation (constructor arguments are illustrative)
let thread = Thread::new(vec![
    Event::user("What is Rust?"),
    Event::assistant("Rust is a systems programming language."),
]);

// Builder pattern
let thread = Thread::new(vec![])
    .with_system("You are a helpful assistant.")
    .add_event(Event::user("Hello!"));
```

Type Conversions:

```rust
// From string slice
let thread: Thread = "Hello".into();

// From String
let thread: Thread = String::from("Hello").into();

// From Event
let thread: Thread = Event::user("Hello").into();

// From Vec<Event>
let thread: Thread = vec![Event::user("Hi"), Event::assistant("Hello!")].into();
```
### Content - Multi-Modal Messages

`Content` represents the payload of a message, supporting text, images, documents, tool calls, and tool responses.

```rust
use radkit::models::Content; // import path is illustrative
use serde_json::json;

// Simple text content
let content = Content::from_text("Hello, world!");

// Multi-part content
let content = Content::from_parts(vec![/* text, image, or document parts */]);

// Access text parts
for text in content.texts() {
    println!("{text}");
}

// Query content
if content.has_text() { /* ... */ }
if content.has_tool_calls() { /* ... */ }

// Join all text parts
if let Some(joined) = content.joined_texts() {
    println!("{joined}");
}
```
### Event - Conversation Messages

`Event` represents a single message in a conversation with an associated role.

```rust
use radkit::models::{Event, Role}; // import path and `Role` enum name are illustrative

// Create events with different roles
let system_event = Event::system("You are a helpful assistant.");
let user_event = Event::user("Hello!");
let assistant_event = Event::assistant("Hi there!");

// Access event properties
match event.role {
    Role::System => { /* ... */ }
    Role::User => { /* ... */ }
    Role::Assistant => { /* ... */ }
}
let content = event.content;
println!("{content:?}");
```
## LLM Providers

Radkit supports multiple LLM providers with a unified interface.

### Anthropic (Claude)

```rust
use radkit::models::{AnthropicLlm, Thread}; // import paths are illustrative

// From environment variable (ANTHROPIC_API_KEY)
let llm = AnthropicLlm::from_env()?;

// With explicit API key (constructor arguments are illustrative)
let llm = AnthropicLlm::new("your-api-key");

// With configuration
let llm = AnthropicLlm::from_env()?
    .with_max_tokens(1024)
    .with_temperature(0.7);

// Generate content
let thread = Thread::from_user("Explain ownership in Rust.");
let response = llm.generate_content(thread).await?;
println!("{response:?}"); // response accessors were lost in formatting; Debug-print for brevity
```
### OpenAI (GPT)

```rust
use radkit::models::OpenAILlm; // import path is illustrative

// From environment variable (OPENAI_API_KEY)
let llm = OpenAILlm::from_env()?;

// With configuration
let llm = OpenAILlm::from_env()?
    .with_max_tokens(1024)
    .with_temperature(0.7);

let response = llm.generate(thread).await?;
```
### OpenRouter

OpenRouter exposes an OpenAI-compatible endpoint that can route calls to hosted Anthropic, Google, Cohere, and other marketplace models behind a single API key.

```rust
use radkit::models::OpenRouterLlm; // import path is illustrative

// From environment variable (OPENROUTER_API_KEY)
let llm = OpenRouterLlm::from_env()?
    .with_site_url("https://example.com") // optional attribution headers
    .with_app_name("my-app");

let response = llm.generate(thread).await?;
```
### Google Gemini

```rust
use radkit::models::GeminiLlm; // import path is illustrative

// From environment variable (GEMINI_API_KEY)
let llm = GeminiLlm::from_env()?;
let response = llm.generate(thread).await?;
```
### Grok (xAI)

```rust
use radkit::models::GrokLlm; // import path is illustrative

// From environment variable (XAI_API_KEY)
let llm = GrokLlm::from_env()?;
let response = llm.generate(thread).await?;
```
### DeepSeek

```rust
use radkit::models::DeepSeekLlm; // import path is illustrative

// From environment variable (DEEPSEEK_API_KEY)
let llm = DeepSeekLlm::from_env()?;
let response = llm.generate(thread).await?;
```
## LlmFunction - Simple Structured Outputs

`LlmFunction<T>` is perfect for when you want structured, typed responses without tool execution.

### Basic Usage

```rust
use radkit::LlmFunction; // import paths are illustrative
use radkit::models::AnthropicLlm;
use schemars::JsonSchema;
use serde::Deserialize;

// Target type derives Deserialize + JsonSchema (fields are illustrative)
#[derive(Debug, Deserialize, JsonSchema)]
struct Sentiment { label: String, confidence: f64 }

async fn run() -> anyhow::Result<()> {
    let llm = AnthropicLlm::from_env()?;
    let sentiment_fn: LlmFunction<Sentiment> = LlmFunction::new(llm);
    println!("{:?}", sentiment_fn.run("I love this library!").await?);
    Ok(())
}
```
### With System Instructions

```rust
// `CodeReview` derives Deserialize + JsonSchema (definition omitted)
let llm = AnthropicLlm::from_env()?;
let review_fn: LlmFunction<CodeReview> = LlmFunction::new_with_system_instructions(
    llm,
    "You are a strict Rust code reviewer.", // instructions are illustrative
);

let code = r#"
fn divide(a: i32, b: i32) -> i32 {
    a / b
}
"#;

let review = review_fn.run(code).await?;
// Field names other than `issues`/`suggestions` are illustrative
println!("{:?}", review.summary);
for issue in review.issues {
    println!("Issue: {issue}");
}
for suggestion in review.suggestions {
    println!("Suggestion: {suggestion}");
}
```
### Multi-Turn Conversations

```rust
// `Answer` derives Deserialize + JsonSchema (definition omitted);
// the exact return shape of `run_and_continue` is illustrative
let llm = AnthropicLlm::from_env()?;
let qa_fn: LlmFunction<Answer> = LlmFunction::new(llm);

// First question
let (answer, thread) = qa_fn
    .run_and_continue("What is the capital of France?")
    .await?;
println!("{answer:?}");

// Follow-up question (continues conversation)
let (answer, thread) = qa_fn
    .run_and_continue(thread.add_event(Event::user("What is its population?")))
    .await?;
println!("{answer:?}");

// Another follow-up
let (answer, _thread) = qa_fn
    .run_and_continue(thread.add_event(Event::user("And its land area?")))
    .await?;
println!("{answer:?}");
```
### Complex Data Structures

```rust
// `Recipe` derives Deserialize + JsonSchema (definition omitted)
let llm = AnthropicLlm::from_env()?;
let recipe_fn: LlmFunction<Recipe> = LlmFunction::new_with_system_instructions(
    llm,
    "You are a professional chef.", // instructions are illustrative
);

let recipe = recipe_fn
    .run("Give me a recipe for pasta carbonara.")
    .await?;

// Field names other than `ingredients`/`instructions` are illustrative
println!("Name: {}", recipe.name);
println!("Servings: {}", recipe.servings);
for ingredient in recipe.ingredients {
    println!("- {ingredient}");
}
for (i, step) in recipe.instructions.iter().enumerate() {
    println!("{}. {step}", i + 1);
}
```
## LlmWorker - Tool Execution

`LlmWorker<T>` adds automatic tool calling and multi-turn execution loops to `LlmFunction`.

```rust
use radkit::LlmWorker; // import paths are illustrative
use radkit::models::AnthropicLlm;
use radkit::tool;
use schemars::JsonSchema;
use serde::Deserialize;
use serde_json::json;

// Define tool arguments
#[derive(Deserialize, JsonSchema)]
struct WeatherArgs { city: String }

// Define the weather tool using the #[tool] macro
// (macro attribute arguments, if any, are omitted here)
#[tool]
async fn get_weather(args: WeatherArgs) -> serde_json::Value {
    json!({ "city": args.city, "temperature_c": 21 }) // stubbed result
}

async fn run() -> anyhow::Result<()> {
    let llm = AnthropicLlm::from_env()?;
    let worker = LlmWorker::<String>::builder(llm)
        .with_tool(get_weather)
        .build();
    println!("{}", worker.run("What's the weather in Paris?").await?);
    Ok(())
}
```
### Multiple Tools

```rust
use radkit::tool; // import path is illustrative

// Define tool argument structs (shapes are illustrative)
#[derive(Deserialize, JsonSchema)]
struct FlightArgs { from: String, to: String }
#[derive(Deserialize, JsonSchema)]
struct HotelArgs { city: String }
#[derive(Deserialize, JsonSchema)]
struct RateArgs { from_currency: String, to_currency: String }

// Define tools using the #[tool] macro (bodies stubbed)
#[tool]
async fn search_flights(args: FlightArgs) -> serde_json::Value { json!([]) }
#[tool]
async fn search_hotels(args: HotelArgs) -> serde_json::Value { json!([]) }
#[tool]
async fn get_exchange_rate(args: RateArgs) -> serde_json::Value { json!(1.0) }

let llm = AnthropicLlm::from_env()?;
let worker = LlmWorker::<TripPlan>::builder(llm)
    .with_system_instructions("You are a travel planner.") // illustrative
    .with_tools(vec![search_flights, search_hotels, get_exchange_rate])
    .build();

let plan = worker.run("Plan a weekend trip to Tokyo.").await?;
println!("{plan:?}"); // `TripPlan` fields were lost in formatting; Debug-print instead
```
### Stateful Tools

Tools can maintain state across calls using `ToolContext`.

```rust
use radkit::ToolContext; // import path is illustrative

// Define tool arguments
#[derive(Deserialize, JsonSchema)]
struct AddToCartArgs { item: String, quantity: u32 }

// Add to cart tool with state management (ToolContext API is illustrative)
#[tool]
async fn add_to_cart(args: AddToCartArgs, ctx: ToolContext) -> serde_json::Value {
    // Read and update state shared across tool calls via `ctx`
    json!({ "added": args.item, "quantity": args.quantity })
}

let llm = AnthropicLlm::from_env()?;
let worker = LlmWorker::<Cart>::builder(llm)
    .with_tool(add_to_cart)
    .build();

// The worker can call add_to_cart multiple times, maintaining state
let cart = worker.run("Add two apples and a loaf of bread to my cart.").await?;
for item in cart.items {
    println!("- {item:?}");
}
```
## A2A Agents
Radkit provides first-class support for building Agent-to-Agent (A2A) protocol compliant agents. The framework ensures that if your code compiles, it's automatically A2A compliant.
### What is A2A?
The A2A protocol is an open standard that enables seamless communication and collaboration between AI agents. It provides:
- Standardized agent discovery via Agent Cards
- Task lifecycle management (submitted, working, completed, etc.)
- Multi-turn conversations with input-required states
- Streaming support for long-running operations
- Artifact generation for tangible outputs
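For orientation, an Agent Card is a small JSON document advertising an agent and its skills. A trimmed, illustrative card (field names follow the A2A specification; the values are made up) looks like:

```json
{
  "name": "hr-agent",
  "description": "Handles resume processing and onboarding",
  "url": "https://agents.example.com/hr",
  "version": "1.0.0",
  "capabilities": { "streaming": true },
  "skills": [
    {
      "id": "process_resume",
      "name": "Resume Processing",
      "description": "Extracts structured data from uploaded resumes"
    }
  ]
}
```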
### Building A2A Agents
Agents in radkit are composed of skills. Each skill handles a specific capability and is annotated with the #[skill] macro to provide A2A metadata.
#### Defining a Skill

```rust
use radkit::a2a::{AgentError, OnRequestResult, ProgressSender, SkillHandler}; // import paths are illustrative
use radkit::skill;
use radkit::runtime::Runtime;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// Define your output types
#[derive(Serialize, Deserialize, JsonSchema)]
struct Summary { text: String }

// Annotate with A2A metadata (attribute fields and target are illustrative)
#[skill(
    id = "summarise",
    description = "Summarises user-provided text"
)]
struct SummariseSkill;

// Implement the SkillHandler trait (trait shape is illustrative; see the crate docs)
impl SkillHandler for SummariseSkill {
    // async fn on_request(&self, request: Content, progress: ProgressSender)
    //     -> Result<OnRequestResult, AgentError> { ... }
}
```
#### Multi-Turn Conversations

Skills can request additional input from users when needed. Use slot enums to track different input states:

```rust
use serde::{Deserialize, Serialize};

// Define slot enum to track different input requirements (variant names are illustrative)
#[derive(Serialize, Deserialize)]
enum Slot {
    AwaitingResume,
    AwaitingStartDate,
}
```
#### Intermediate Updates and Partial Artifacts

For long-running operations, send progress updates and partial results with `progress.send_update(...)` and `progress.send_artifact(...)` (see "How Radkit Guarantees A2A Compliance" below for examples).
#### Composing an Agent

```rust
use radkit::models::AnthropicLlm; // import paths are illustrative
use radkit::runtime::Runtime;

// Local development (builder and runtime APIs are illustrative)
async fn serve_local() -> anyhow::Result<()> {
    let llm = AnthropicLlm::from_env()?;
    let agent = Agent::builder(llm)
        .with_name("hr-agent")
        .with_skill(SummariseSkill)
        .build();
    Runtime::serve(agent, "127.0.0.1:8080").await
}
```
## How Radkit Guarantees A2A Compliance

Radkit ensures A2A compliance through compile-time guarantees and automatic protocol mapping:

### 1. Typed State Management

Guarantee: You can only return valid A2A task states. Invalid states won't compile.
### 2. Intermediate Updates

```rust
// Always maps to A2A TaskState::Working with final=false
progress.send_update("Analyzing resume...").await?; // argument is illustrative

// Always creates A2A TaskArtifactUpdateEvent
progress.send_artifact(partial_artifact).await?;
```

Guarantee: You cannot accidentally send terminal states or mark intermediate updates as final.
### 3. Automatic Metadata Generation

The #[skill] macro automatically generates:

- A2A `AgentSkill` entries for the Agent Card
- MIME type validation based on `input_modes`/`output_modes`
- Proper skill discovery metadata

Guarantee: Your Agent Card is always consistent with your skill implementations.
### 4. Protocol Type Mapping

The framework automatically converts between radkit types and A2A protocol types:

| Radkit Type | A2A Protocol Type |
|---|---|
| `Content` | `Message` with `Part[]` |
| `Artifact::from_json()` | `Artifact` with `DataPart` |
| `Artifact::from_text()` | `Artifact` with `TextPart` |
| `OnRequestResult::Completed` | `Task` with `state=TASK_STATE_COMPLETED` |
| `OnRequestResult::InputRequired` | `Task` with `state=TASK_STATE_INPUT_REQUIRED` |

Guarantee: You never handle A2A protocol types directly. The framework ensures correct serialization.
### 5. Lifecycle Enforcement

```rust
// ✅ Allowed: Send intermediate updates during execution
progress.send_update("Working...").await?; // argument is illustrative

// ✅ Allowed: Send partial artifacts any time
progress.send_artifact(partial_artifact).await?;

// ✅ Allowed: Return terminal state with final artifacts
Ok(OnRequestResult::Completed { /* final artifacts */ })

// ❌ Not possible: Can't send "completed" state during execution
// ❌ Not possible: Can't mark intermediate update as final
// ❌ Not possible: Can't send invalid task states
```

Guarantee: The type system prevents protocol violations at compile time.
## How These Guarantees Work

Radkit enforces A2A compliance through several type-level mechanisms:

### 1. Unrepresentable Invalid States

The OnRequestResult and OnInputResult enums only expose valid A2A states as variants. There's no way to construct an invalid state because the type system doesn't allow it:

```rust
// ✅ This compiles - valid A2A state
Ok(OnRequestResult::Completed { /* ... */ })

// ❌ This doesn't compile - InvalidState doesn't exist
Ok(OnRequestResult::InvalidState) // Compilation error!
```
### 2. Restricted Method APIs

Methods like progress.send_update() are internally hardcoded to use TASK_STATE_WORKING with no final flag. The API doesn't expose parameters that would allow setting invalid combinations:

```rust
// Implementation detail (in radkit internals; signature is illustrative):
pub async fn send_update(&self, message: impl Into<Content>) -> Result<(), AgentError> {
    // Always emits TaskState::Working with final = false
}
```
### 3. Separation of Concerns via Return Types

Intermediate updates go through ProgressSender methods, while final states are only set via return values from on_request() and on_input_received(). This architectural separation, enforced by Rust's type system, makes it impossible to accidentally mark an intermediate update as final or send a terminal state mid-execution:

```rust
// During execution: Only intermediate methods available via ProgressSender
progress.send_update("Still working...").await?; // Always non-final

// At completion: Only way to set final state is via return
Ok(OnRequestResult::Completed { /* ... */ }) // Compiler ensures this ends execution
```
### 4. Compile-Time WASM Compatibility

The library uses conditional compilation and the `compat` module to ensure WASM portability while maintaining the same API surface. The `?Send` trait bound is conditionally applied based on target.
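The pattern looks roughly like this (a self-contained sketch of the technique, not radkit's actual `compat` code; the `CompatFuture` alias name is made up):

```rust
use std::future::Future;
use std::pin::Pin;

// On native targets the boxed future must be Send so it can move between
// worker threads of a multi-threaded runtime; on single-threaded wasm32
// the bound is dropped, but the API surface stays identical.
#[cfg(not(target_arch = "wasm32"))]
pub type CompatFuture<T> = Pin<Box<dyn Future<Output = T> + Send>>;
#[cfg(target_arch = "wasm32")]
pub type CompatFuture<T> = Pin<Box<dyn Future<Output = T>>>;

// Handler code is written once against the alias and compiles for both targets.
pub fn ready(value: u32) -> CompatFuture<u32> {
    Box::pin(async move { value })
}
```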
This means WASM compatibility is verified at compile time — if your agent compiles for native targets, it will compile for WASM without code changes.
## AgentSkills — File-Based LLM Skills
In addition to programmatic Rust skills, radkit supports AgentSkills — skills defined entirely in a SKILL.md file, with no Rust code required. The LLM reads the instructions and drives the task.
AgentSkills follow the AgentSkills specification. A skill is a directory containing a SKILL.md file:
```text
skills/
└── text-summariser/
    └── SKILL.md
```
The SKILL.md file has YAML frontmatter followed by Markdown instructions:
```markdown
---
name: text-summariser
description: Summarises text. Use when the user asks to summarise or condense text.
license: MIT
---

You are a precise text summariser.

1. Read the text the user provides.
2. Reply with a concise summary.

Respond with a JSON object:

{ "status": "complete", "message": "Your summary here." }

If no text has been provided yet:

{ "status": "needs_input", "message": "Please provide the text to summarise." }
```
### Registering AgentSkills

There are two ways to register an AgentSkill:

Compile-time embedding — SKILL.md is baked into the binary (like include_str!). No filesystem I/O at startup. Works on WASM.

```rust
use radkit::include_skill; // import path is illustrative

let agent = builder
    .with_name("summary-agent") // arguments are illustrative
    .with_skill_def(include_skill!("skills/text-summariser"))
    .build();
```

Runtime loading — SKILL.md is read from disk at startup. Useful when you want to update skills without recompiling.

```rust
let agent = builder
    .with_name("summary-agent")
    .with_skill_dir("skills/text-summariser")?
    .build();
```
Both produce identical SkillRegistrations at runtime. You can mix programmatic Rust skills and AgentSkills freely:
```rust
// Arguments are illustrative
builder
    .with_name("hybrid-agent")
    .with_skill(SummariseSkill)                                // Rust skill
    .with_skill_def(include_skill!("skills/text-summariser"))  // compile-time AgentSkill
    .with_skill_dir("skills/translator")?                      // runtime AgentSkill
    .build()
```
### Multi-turn AgentSkills
AgentSkills support multi-turn conversations out of the box. If the LLM responds with "status": "needs_input", the task enters InputRequired state and the full conversation thread is preserved in the slot. When the user replies, the thread is replayed and the LLM continues from where it left off.
The WorkStatus enum drives this:
| LLM responds with | Task state |
|---|---|
| `{ "status": "complete", "message": "..." }` | Completed |
| `{ "status": "needs_input", "message": "..." }` | InputRequired (multi-turn continues) |
| `{ "status": "failed", "reason": "..." }` | Failed |
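The mapping above can be sketched as a plain Rust enum (illustrative stand-in; radkit's actual `WorkStatus` definition may differ):

```rust
// Illustrative stand-in for radkit's WorkStatus; the real definition may differ.
#[derive(Debug, PartialEq)]
enum WorkStatus {
    Complete { message: String },
    NeedsInput { message: String },
    Failed { reason: String },
}

// Map each status to the A2A task state named in the table above.
fn task_state(status: &WorkStatus) -> &'static str {
    match status {
        WorkStatus::Complete { .. } => "Completed",
        WorkStatus::NeedsInput { .. } => "InputRequired",
        WorkStatus::Failed { .. } => "Failed",
    }
}
```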
### Feature flag

AgentSkill support requires the `agentskill` feature (included in `macros` by default):

```toml
[dependencies]
radkit = { version = "0.0.4", features = ["runtime", "agentskill"] }
```
## Example: Complete A2A Agent
See the hr_agent example for a complete multi-skill A2A agent with:
- Resume processing with multi-turn input handling
- Onboarding plan generation with intermediate updates
- IT account creation via remote agent delegation
- Full A2A protocol compliance
## Contributing

Contributions welcome!

We love agentic coding. We use Claude Code, Gemini, and Codex. That doesn't mean this is a random vibe-coded project: everything here is carefully crafted, and we expect contributions to be well thought out, with clear reasons for the changes you submit.

- Follow the AGENTS.md
- Add tests for new features
- Update documentation
- Ensure `cargo fmt` and `cargo clippy` pass
## License
MIT