⚠️ Early Stage Warning: This project is in its early stages of development. Breaking changes are expected as we iterate and improve the API. Please use pinned versions in production environments and be prepared to update your code when upgrading versions.
Table of Contents
- Table of Contents
- Overview
- 🚀 Quick Start
- ✨ Key Features
- 🛠️ Development
- 📖 API Reference
- 🔧 Advanced Usage
- 🌐 A2A Ecosystem
- 📋 Requirements
- 🐳 Docker Support
- 🧪 Testing
- 📄 License
- 🤝 Contributing
- 📞 Support
- 📚 Resources
Overview
The A2A ADK (Agent Development Kit) is a Rust library that simplifies building Agent-to-Agent (A2A) protocol compatible agents. A2A enables seamless communication between AI agents, allowing them to collaborate, delegate tasks, and share capabilities across different systems and providers.
What is A2A?
Agent-to-Agent (A2A) is a standardized protocol that enables AI agents to:
- Communicate with each other using a unified JSON-RPC interface
- Delegate tasks to specialized agents with specific capabilities
- Stream responses in real-time for better user experience
- Authenticate securely using OIDC/OAuth2
- Discover capabilities through standardized agent cards
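To make the unified JSON-RPC interface concrete, here is a minimal sketch of the request envelope an A2A call travels in. The method name `message/send` follows the public A2A specification, not this crate's API, and the helper below is purely illustrative:

```rust
// Build a JSON-RPC 2.0 request envelope as a plain string
// (method name "message/send" follows the public A2A specification)
fn jsonrpc_request(id: u64, method: &str, params: &str) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{id},"method":"{method}","params":{params}}}"#
    )
}

fn main() {
    let req = jsonrpc_request(1, "message/send", r#"{"message":{"role":"user"}}"#);
    println!("{req}");
}
```

Every A2A call, regardless of which agent or provider handles it, uses this same envelope, which is what allows agents from different systems to interoperate.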
🚀 Quick Start
Installation
Add the ADK to your Cargo.toml:
```toml
[dependencies]
# Substitute the crate's published name; pin the version, since breaking changes are expected
a2a-adk = "0.1.0"
```
Basic Usage (Minimal Server)
```rust
// Crate/module paths and method names are illustrative;
// see the minimal-server example in the repository for exact imports.
use a2a_adk::A2AServerBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a minimal A2A server without AI capabilities
    let server = A2AServerBuilder::new()
        .with_agent_card_from_file("./.well-known/agent.json")
        .build()
        .await?;

    // Start serving (method name illustrative)
    server.serve().await?;
    Ok(())
}
```
AI-Powered Server
```rust
// Illustrative sketch; see the ai-powered-server example for exact imports and APIs.
use a2a_adk::{A2AServerBuilder, AgentBuilder};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build an LLM-backed agent (provider/model come from environment variables)
    let agent = AgentBuilder::new()
        .with_system_prompt("You are a helpful assistant")
        .build()
        .await?;

    // Attach the agent to an A2A server
    let server = A2AServerBuilder::new()
        .with_agent(agent)
        .with_agent_card_from_file("./.well-known/agent.json")
        .build()
        .await?;

    server.serve().await?; // method name illustrative
    Ok(())
}
```
Health Check Example
Monitor the health status of A2A agents for service discovery and load balancing:
```rust
// Illustrative sketch; see the health-check example for exact imports and APIs.
use a2a_adk::A2AClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = A2AClient::new("http://localhost:8080")?;

    // Query the agent's health endpoint
    let health = client.get_health().await?;
    match health.status.as_str() {
        "healthy" => println!("agent is fully operational"),
        "degraded" => println!("agent is partially operational"),
        _ => println!("agent is not operational"),
    }
    Ok(())
}
```
Examples
For complete working examples, see the examples directory:
- Minimal Server - Basic A2A server without AI capabilities
- AI-Powered Server - Full A2A server with LLM integration
- JSON AgentCard Server - A2A server with agent metadata loaded from JSON file
- Client Example - A2A client implementation
- Health Check Example - Monitor agent health status
✨ Key Features
Core Capabilities
- 🤖 A2A Protocol Compliance: Full implementation of the Agent-to-Agent communication standard
- 🔌 Multi-Provider Support: Works with OpenAI, Ollama, Groq, Cohere, and other LLM providers
- 📡 Real-time Streaming: Stream responses as they're generated from language models
- 🔧 Custom Tools: Easy integration of custom tools and capabilities
- 🔐 Secure Authentication: Built-in OIDC/OAuth2 authentication support
- 📨 Push Notifications: Webhook notifications for real-time task state updates
Developer Experience
- ⚙️ Environment Configuration: Simple setup through environment variables
- 📋 Task Management: Built-in task queuing, polling, and lifecycle management
- 🏗️ Extensible Architecture: Pluggable components for custom business logic
- 🔒 Type-Safe: Generated types from A2A schema for compile-time safety
- 🧪 Well Tested: Comprehensive test coverage with table-driven tests
Production Ready
- 💿 Lightweight: Optimized binary size with Rust's zero-cost abstractions
- 🛡️ Production Hardened: Configurable timeouts, TLS support, and error handling
- 🐳 Containerized: OCI compliant and works with Docker and Docker Compose
- ☸️ Kubernetes Native: Ready for cloud-native deployments
- 📊 Observability: OpenTelemetry integration for monitoring and tracing
🛠️ Development
Prerequisites
- Rust 1.88 or later
- Task for build automation (optional; you can use `cargo` directly)
Development Workflow
1. Download the latest A2A schema: `task a2a:download-schema`
2. Generate types from the schema: `task a2a:generate-types`
3. Run linting: `task lint`
4. Run tests: `task test`
Available Tasks
| Task | Description |
| --- | --- |
| `task a2a:download-schema` | Download the latest A2A schema |
| `task a2a:generate-types` | Generate Rust types from A2A schema |
| `task lint` | Run static analysis and linting with clippy |
| `task test` | Run all tests |
| `task build` | Build the project |
| `task clean` | Clean up build artifacts |
Build-Time Agent Metadata
The ADK supports injecting agent metadata at build time using Rust's build script and environment variables. This makes agent information immutable and embedded in the binary, which is useful for production deployments.
Available Build-Time Variables
The following build-time metadata variables can be set:
- `AGENT_NAME` - The agent's display name
- `AGENT_DESCRIPTION` - A description of the agent's capabilities
- `AGENT_VERSION` - The agent's version number
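A build script is the usual mechanism for forwarding such variables to the compiler so the crate can read them with `env!` at compile time. The sketch below uses only standard cargo directives; the variable names come from the list above, while the `"unknown"` fallback is an assumption, not necessarily this crate's default:

```rust
// build.rs — re-export build-time metadata so the crate can read it with env!()

// Produce the cargo directives for one metadata variable
// (the "unknown" fallback default is an assumption)
fn directives_for(key: &str) -> Vec<String> {
    let value = std::env::var(key).unwrap_or_else(|_| "unknown".to_string());
    vec![
        // re-run the build script when this variable changes
        format!("cargo:rerun-if-env-changed={key}"),
        // embed the value so the crate can read it with env!(key)
        format!("cargo:rustc-env={key}={value}"),
    ]
}

fn main() {
    for key in ["AGENT_NAME", "AGENT_DESCRIPTION", "AGENT_VERSION"] {
        for line in directives_for(key) {
            println!("{line}");
        }
    }
}
```

Because the values are baked in at compile time, the resulting binary carries its metadata with no runtime dependency on the environment.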
Usage Examples
Direct Cargo Build:
```sh
# Build your application with custom metadata
AGENT_NAME="MyAgent" \
AGENT_DESCRIPTION="My custom agent description" \
AGENT_VERSION="1.2.3" \
cargo build --release
```
Docker Build:
```dockerfile
# Build with custom metadata in Docker
FROM rust:1.88 AS builder

ARG AGENT_NAME="Production Agent"
ARG AGENT_DESCRIPTION="Production deployment agent with enhanced capabilities"
ARG AGENT_VERSION="1.0.0"

WORKDIR /app
COPY Cargo.toml Cargo.lock ./
RUN cargo fetch
COPY . .
RUN AGENT_NAME="${AGENT_NAME}" \
    AGENT_DESCRIPTION="${AGENT_DESCRIPTION}" \
    AGENT_VERSION="${AGENT_VERSION}" \
    cargo build --release

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/target/release/rust-adk .
CMD ["./rust-adk"]
```
📖 API Reference
Core Components
A2AServer
The main server trait that handles A2A protocol communication.
```rust
use a2a_adk::A2AServerBuilder; // crate/module path illustrative

// Create a default A2A server
let server = A2AServerBuilder::new()
    .build()
    .await?;

// Create a server with agent integration
let server = A2AServerBuilder::new()
    .with_agent(agent)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;

// Create a server with custom configuration
let server = A2AServerBuilder::new()
    .with_config(config)
    .with_task_handler(task_handler)
    .with_task_processor(task_processor)
    .build()
    .await?;
```
A2AServerBuilder
Build A2A servers with custom configurations using a fluent interface:
```rust
use a2a_adk::A2AServerBuilder; // crate/module path illustrative

// Basic server with agent
let server = A2AServerBuilder::new()
    .with_agent(agent)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;

// Server with custom task handler
let server = A2AServerBuilder::new()
    .with_task_handler(task_handler)
    .with_task_processor(task_processor)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;

// Server with custom configuration
let server = A2AServerBuilder::new()
    .with_config(config)
    .with_agent(agent)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;
```
AgentBuilder
Build OpenAI-compatible agents that live inside the A2A server using a fluent interface:
```rust
use a2a_adk::{A2AServerBuilder, AgentBuilder}; // crate/module paths illustrative

// Basic agent with custom configuration and toolbox
let agent = AgentBuilder::new()
    .with_config(agent_config)
    .with_toolbox(tools)
    .build()
    .await?;

// Agent with system prompt and completion limit (argument values illustrative)
let agent = AgentBuilder::new()
    .with_system_prompt("You are a helpful assistant")
    .with_max_chat_completion(10)
    .build()
    .await?;

// Use with the A2A server builder
let server = A2AServerBuilder::new()
    .with_agent(agent)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;
```
A2AClient
The client struct for communicating with A2A servers:
```rust
use a2a_adk::A2AClient; // crate/module path illustrative

// Basic client creation
let client = A2AClient::new("http://localhost:8080")?;

// Client with custom configuration (field names illustrative)
let config = ClientConfig {
    base_url: "http://localhost:8080".to_string(),
    ..Default::default()
};
let client = A2AClient::with_config(config)?;

// Using the client
let agent_card = client.get_agent_card().await?;
let health = client.get_health().await?;
let response = client.send_task(task).await?;
client.send_task_streaming(task).await?;
```
Agent Health Monitoring
Monitor the health status of A2A agents to ensure they are operational:
```rust
use a2a_adk::A2AClient; // crate/module path illustrative

// Check agent health
let health = client.get_health().await?;

// Process health status
match health.status.as_str() {
    "healthy" => println!("agent is fully operational"),
    "degraded" => println!("agent is partially operational"),
    "unhealthy" => println!("agent is not operational"),
    other => println!("unknown status: {other}"),
}
```
Health Status Values:
- `healthy`: Agent is fully operational
- `degraded`: Agent is partially operational (some functionality may be limited)
- `unhealthy`: Agent is not operational or experiencing significant issues
Use Cases:
- Monitor agent availability in distributed systems
- Implement health checks for load balancers
- Detect and respond to agent failures
- Service discovery and routing decisions
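For the load-balancing and routing use cases above, the three status values can be mapped to a routing decision. The sketch below is independent of the ADK's API; the `Route` type and the degraded-means-drain policy are assumptions:

```rust
// Routing decision derived from an agent's reported health status
#[derive(Debug, PartialEq)]
enum Route {
    Primary, // send traffic normally
    Drain,   // keep existing sessions, stop routing new ones
    Evict,   // remove the agent from the pool
}

// Map an A2A health status string to a routing decision
fn route_for(status: &str) -> Route {
    match status {
        "healthy" => Route::Primary,
        "degraded" => Route::Drain,
        // treat "unhealthy" and anything unrecognized as failure
        _ => Route::Evict,
    }
}

fn main() {
    for status in ["healthy", "degraded", "unhealthy"] {
        println!("{status} -> {:?}", route_for(status));
    }
}
```

Treating unknown statuses as failures is a deliberate fail-safe choice: a misbehaving agent should fall out of rotation rather than keep receiving traffic.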
LLM Client
Create OpenAI-compatible LLM clients for agents:
```rust
use a2a_adk::{AgentBuilder, OpenAICompatibleClient}; // crate/module paths illustrative

// Create an LLM client from configuration
let llm_client = OpenAICompatibleClient::new(llm_config).await?;

// Use it with the agent builder
let agent = AgentBuilder::new()
    .with_llm_client(llm_client)
    .build()
    .await?;
```
Configuration
The configuration is managed through environment variables and the config module:
```rust
use a2a_adk::config::Config; // crate/module path illustrative

// Load configuration from environment variables
let config = Config::from_env()?;
```
🔧 Advanced Usage
Building Custom Agents with AgentBuilder
The AgentBuilder provides a fluent interface for creating highly customized agents with specific configurations, LLM clients, and toolboxes.
Basic Agent Creation
```rust
use a2a_adk::AgentBuilder; // crate/module path illustrative

// Create a simple agent with defaults
let agent = AgentBuilder::new()
    .build()
    .await?;

// Or use the builder pattern for more control (argument values illustrative)
let agent = AgentBuilder::new()
    .with_system_prompt("You are a helpful assistant")
    .with_max_chat_completion(10)
    .with_max_conversation_history(20)
    .build()
    .await?;
```
Agent with Custom Configuration
```rust
use a2a_adk::{AgentBuilder, AgentConfig}; // crate/module paths illustrative
use std::time::Duration;

// Field names and values are illustrative
let config = AgentConfig {
    max_tokens: 4096,
    temperature: 0.7,
    timeout: Duration::from_secs(30),
    ..Default::default()
};

let agent = AgentBuilder::new()
    .with_config(config)
    .build()
    .await?;
```
Agent with Custom LLM Client
```rust
use a2a_adk::{AgentBuilder, OpenAICompatibleClient}; // crate/module paths illustrative

// Create a custom LLM client
let llm_client = OpenAICompatibleClient::new(llm_config).await?;

// Build an agent with the custom client
let agent = AgentBuilder::new()
    .with_llm_client(llm_client)
    .with_system_prompt("You are a helpful assistant")
    .build()
    .await?;
```
Fully Configured Agent
```rust
use a2a_adk::{A2AServerBuilder, AgentBuilder}; // crate/module paths illustrative

// Create tools for the agent's toolbox
// (tool definitions elided; see the Custom Tools section below)
let tools = vec![/* ... */];

// Build a fully configured agent (argument values illustrative)
let agent = AgentBuilder::new()
    .with_config(agent_config)
    .with_llm_client(llm_client)
    .with_toolbox(tools)
    .with_system_prompt("You are a helpful assistant")
    .with_max_chat_completion(10)
    .with_max_conversation_history(20)
    .build()
    .await?;

// Use the agent in your server
let server = A2AServerBuilder::new()
    .with_agent(agent)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;
```
Custom Tools
Create custom tools to extend your agent's capabilities using the Inference Gateway SDK's tool system:
```rust
use a2a_adk::AgentBuilder; // crate/module path illustrative
use inference_gateway_sdk::{Tool, ToolType}; // SDK type names illustrative
use serde_json::json;

// Define tools for your agent's toolbox (schema shape illustrative)
let tools = vec![Tool {
    r#type: ToolType::Function,
    function: json!({
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": { "location": { "type": "string" } },
            "required": ["location"]
        }
    }),
}];

// Create an agent with the toolbox
let agent = AgentBuilder::new()
    .with_config(agent_config)
    .with_system_prompt("You are a helpful assistant")
    .with_toolbox(tools)
    .build()
    .await?;
```
The toolbox integrates with the Inference Gateway SDK's function calling system. When the LLM decides to use a tool, the tool call information is automatically sent through the gateway to the configured LLM provider, which will return tool call requests that can be processed by your application logic.
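When a tool-call request comes back from the provider, the application typically dispatches it by function name to a local handler. The sketch below is ADK-independent; the handler name, argument format, and registry shape are all hypothetical:

```rust
use std::collections::HashMap;

// A tool handler takes the JSON-encoded arguments and returns a result string
type Handler = fn(&str) -> String;

// Hypothetical local handler for a "get_weather" tool
fn get_weather(args: &str) -> String {
    format!("weather lookup for args: {args}")
}

// Dispatch a tool call by name to the registered handler
fn dispatch(handlers: &HashMap<&str, Handler>, name: &str, args: &str) -> Option<String> {
    handlers.get(name).map(|h| h(args))
}

fn main() {
    let mut handlers: HashMap<&str, Handler> = HashMap::new();
    handlers.insert("get_weather", get_weather);

    let result = dispatch(&handlers, "get_weather", r#"{"location":"Berlin"}"#);
    println!("{result:?}");
}
```

Returning `Option` lets the caller distinguish an unknown tool name from a tool that ran, which matters when the LLM hallucinates a function that was never registered.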
Custom Task Processing
Implement custom business logic for task completion:
```rust
use a2a_adk::{A2AServerBuilder, TaskProcessor}; // crate/module paths illustrative
use a2a_adk::Message;

// Implement the task-processing trait for your own type
// (trait and method names illustrative; see the repository examples)
struct MyTaskProcessor;

// ... implement the TaskProcessor trait for MyTaskProcessor here ...

// Set the processor when building your server
let server = A2AServerBuilder::new()
    .with_task_processor(MyTaskProcessor)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;
```
Push Notifications
Configure webhook notifications to receive real-time updates when task states change:
```rust
use a2a_adk::{HttpPushNotificationSender, TaskPushNotificationConfig}; // paths illustrative
use a2a_adk::TaskManager;

// Create an HTTP push notification sender
let notification_sender = HttpPushNotificationSender::new();

// Create a task manager with push notification support
let task_manager = TaskManager::with_notifications(notification_sender);

// Configure push notification webhooks for a task (field names illustrative)
let config = TaskPushNotificationConfig {
    task_id: task_id.clone(),
    url: "https://example.com/webhook".to_string(),
    ..Default::default()
};

// Set the configuration
task_manager.set_task_push_notification_config(config).await?;
```
Webhook Payload
When a task state changes, your webhook receives a POST request whose JSON body carries the updated task state (see the A2A schema for the exact payload shape).
Agent Metadata
Agent metadata can be configured in two ways: at build-time via environment variables (recommended for production) or at runtime via configuration.
Build-Time Metadata (Recommended)
Agent metadata is embedded directly into the binary during compilation using environment variables. This approach ensures immutable agent information and is ideal for production deployments:
```sh
# Build your application with custom metadata
AGENT_NAME="Weather Assistant" \
AGENT_DESCRIPTION="Specialized weather analysis agent" \
AGENT_VERSION="2.0.0" \
cargo build --release
```
Runtime Metadata Configuration
For development or when dynamic configuration is needed, you can override the build-time metadata through the server's configuration:
```rust
use a2a_adk::config::Config; // crate/module path illustrative

let mut config = Config::from_env()?;

// Override build-time metadata for development (values illustrative)
config.agent_name = Some("Dev Agent".to_string());
config.agent_description = Some("Local development agent".to_string());
config.agent_version = Some("0.0.0-dev".to_string());

let server = A2AServerBuilder::new()
    .with_config(config)
    .with_agent_card_from_file("./.well-known/agent.json")
    .build()
    .await?;
```
Note: Build-time metadata takes precedence as defaults, but can be overridden at runtime using the configuration.
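This precedence rule can be sketched with `option_env!`, which captures the build-time value at compile time, plus a runtime override. The variable name `AGENT_NAME` comes from this document; the merge helper itself is an assumption, not the ADK's actual code:

```rust
// Build-time value captured at compile time; None if AGENT_NAME was unset during the build
const BUILD_TIME_NAME: Option<&str> = option_env!("AGENT_NAME");

// A runtime override wins; otherwise fall back to the build-time default
fn effective_agent_name(runtime_override: Option<&str>) -> String {
    runtime_override
        .map(str::to_string)
        .or_else(|| BUILD_TIME_NAME.map(str::to_string))
        .unwrap_or_else(|| "unknown agent".to_string())
}

fn main() {
    println!("{}", effective_agent_name(Some("Dev Agent")));
}
```

Because `option_env!` is evaluated at compile time, changing the environment variable after the build has no effect; only the runtime configuration path can override the baked-in value.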
Environment Configuration
Key environment variables for configuring your agent:
```sh
# Server configuration
PORT="8080"

# Agent metadata configuration (via build-time environment variables)
AGENT_NAME="My Agent"                              # Build-time only
AGENT_DESCRIPTION="My agent description"           # Build-time only
AGENT_VERSION="1.0.0"                              # Build-time only
AGENT_CARD_FILE_PATH="./.well-known/agent.json"    # Path to JSON AgentCard file (optional)

# LLM client configuration
AGENT_CLIENT_PROVIDER="openai"                     # openai, anthropic, deepseek, ollama
AGENT_CLIENT_MODEL="gpt-4"                         # Model name
AGENT_CLIENT_API_KEY="your-api-key"                # Required for AI features
AGENT_CLIENT_BASE_URL="https://api.openai.com/v1"  # Custom endpoint
AGENT_CLIENT_MAX_TOKENS="4096"                     # Max tokens for completion
AGENT_CLIENT_TEMPERATURE="0.7"                     # Temperature for completion
AGENT_CLIENT_SYSTEM_PROMPT="You are a helpful assistant"

# Capabilities
CAPABILITIES_STREAMING="true"
CAPABILITIES_PUSH_NOTIFICATIONS="true"
CAPABILITIES_STATE_TRANSITION_HISTORY="false"

# Authentication (optional)
AUTH_ENABLE="false"
AUTH_ISSUER_URL="http://keycloak:8080/realms/inference-gateway-realm"
AUTH_CLIENT_ID="inference-gateway-client"
AUTH_CLIENT_SECRET="your-secret"

# TLS (optional)
SERVER_TLS_ENABLE="false"
SERVER_TLS_CERT_PATH="/path/to/cert.pem"
SERVER_TLS_KEY_PATH="/path/to/key.pem"
```
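As a sketch of how such variables are typically parsed with only the standard library (the variable names come from the list above; the defaults and parsing logic are assumptions, not the ADK's actual parser):

```rust
use std::env;

// Read a string variable with a fallback default
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

// Read a boolean flag such as CAPABILITIES_STREAMING
fn env_flag(key: &str, default: bool) -> bool {
    env::var(key)
        .map(|v| v.eq_ignore_ascii_case("true"))
        .unwrap_or(default)
}

fn main() {
    let port = env_or("PORT", "8080");
    let streaming = env_flag("CAPABILITIES_STREAMING", true);
    println!("port={port} streaming={streaming}");
}
```

Centralizing defaults in helpers like these keeps the twelve-factor configuration surface in one place instead of scattering `env::var` calls through the codebase.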
🌐 A2A Ecosystem
This ADK is part of the broader Inference Gateway ecosystem:
Related Projects
- Inference Gateway - Unified API gateway for AI providers
- Go ADK - Go library for building A2A agents
- Go SDK - Go client library for Inference Gateway
- TypeScript SDK - TypeScript/JavaScript client library
- Python SDK - Python client library
A2A Agents
- Awesome A2A - Curated list of A2A-compatible agents
- Google Calendar Agent - Google Calendar integration agent
📋 Requirements
- Rust: 1.88 or later
- Dependencies: See Cargo.toml for full dependency list
🐳 Docker Support
Build and run your A2A agent application in a container. Here's an example Dockerfile for an application using the ADK:
```dockerfile
FROM rust:1.88 AS builder

# Build arguments for agent metadata
ARG AGENT_NAME="My A2A Agent"
ARG AGENT_DESCRIPTION="A custom A2A agent built with the Rust ADK"
ARG AGENT_VERSION="1.0.0"

WORKDIR /app
COPY Cargo.toml Cargo.lock ./
RUN cargo fetch
COPY . .

# Build with custom agent metadata
RUN AGENT_NAME="${AGENT_NAME}" \
    AGENT_DESCRIPTION="${AGENT_DESCRIPTION}" \
    AGENT_VERSION="${AGENT_VERSION}" \
    cargo build --release

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /app/target/release/rust-adk .
CMD ["./rust-adk"]
```
Build with custom metadata:

```sh
docker build \
  --build-arg AGENT_NAME="My A2A Agent" \
  --build-arg AGENT_DESCRIPTION="A custom A2A agent built with the Rust ADK" \
  --build-arg AGENT_VERSION="1.0.0" \
  -t my-a2a-agent .
```
🧪 Testing
The ADK follows table-driven testing patterns and provides comprehensive test coverage:
Run tests with:

```sh
task test
```

Or directly with cargo:

```sh
cargo test
```
📄 License
This project is licensed under the MIT License. See the LICENSE file for details.
🤝 Contributing
We welcome contributions! Here's how you can help:
Getting Started
1. Fork the repository
2. Clone your fork: `git clone <your-fork-url>`
3. Create a feature branch: `git checkout -b feature/my-feature`
Development Guidelines
- Follow the established code style and conventions (use `rustfmt`)
- Write table-driven tests for new functionality
- Use early returns to simplify logic and avoid deep nesting
- Prefer match statements over if-else chains
- Ensure type safety with proper error handling
- Use lowercase log messages for consistency
Before Submitting
- Download latest schema: `task a2a:download-schema`
- Generate types: `task a2a:generate-types`
- Run linting: `task lint`
- All tests pass: `task test`
Pull Request Process
- Update documentation for any new features
- Add tests for new functionality
- Ensure all CI checks pass
- Request review from maintainers
For more details, see CONTRIBUTING.md.
📞 Support
Issues & Questions
- Bug Reports: GitHub Issues
- Documentation: Official Docs