Agents Runtime SDK
Rust SDK for building autonomous AI agents that operate over the MXP (mxp://) protocol. The focus is low-latency planning, secure execution, and predictable behaviour—this SDK is what agents use before they are deployed onto the Relay mesh.
Install once via the bundled facade crate:
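For illustration, assuming the facade crate is published as agents-runtime (the crate name and version here are placeholders; check the repository for the actual ones):

[dependencies]
agents-runtime = "0.1"  # illustrative name and version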
Why it exists
- Provide a unified runtime that wraps LLMs, tools, memory, and governance without depending on QUIC or third-party transports.
- Ensure every agent built for Relay speaks MXP natively and adheres to platform security, observability, and performance rules.
- Offer a developer-friendly path to compose agents locally, then promote them into the Relay platform when ready.
Scope
- In scope: agent lifecycle management, LLM connectors, tool registration, policy hooks, MXP message handling, memory integration (including the upcoming MXP Vector Store).
- Out of scope: Relay deployment tooling, mesh scheduling, or any "deep agents" research-oriented SDK—handled by separate projects.
Supported LLM stacks
- OpenAI, Anthropic, Gemini, Ollama, and future MXP-hosted models via a shared ModelAdapter trait (see the sketch below).
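To give a feel for the abstraction, here is a minimal sketch of what such a trait could look like; the trait shape, placeholder types, and error handling are illustrative, not the published API:

use async_trait::async_trait;

// Placeholder request/response types; the real SDK types are richer.
pub struct ChatRequest { pub messages: Vec<String> }
pub struct ChatResponse { pub text: String }

// Illustrative shape of the shared adapter trait each provider implements.
#[async_trait]
pub trait ModelAdapter: Send + Sync {
    /// Send a request to the underlying provider and return its completion.
    async fn complete(&self, request: ChatRequest) -> anyhow::Result<ChatResponse>;
}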
MXP integration
- The MXP crate (e.g. mxp = "0.2.0") provides the transport primitives. We no longer rely on QUIC; all messaging assumes the custom MXP stack and UDP carrier.
- Helpers for AgentRegister, AgentHeartbeat, Call, Response, Event, and Stream* payloads are part of the SDK surface. MxpRegistryClient handles registry registration, heartbeats (including needs_register responses), and graceful deregistration over MXP so agents can bootstrap themselves inside the mesh, as sketched below.
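A sketch of that bootstrap flow; apart from the MxpRegistryClient name, the method names and descriptor type are assumptions about the API:

// Illustrative bootstrap flow; method names and the AgentDescriptor type
// are assumptions, not the final API.
async fn bootstrap(descriptor: AgentDescriptor) -> anyhow::Result<()> {
    let registry = MxpRegistryClient::connect("mxp://registry.local:7400").await?;
    registry.register(descriptor).await?;   // AgentRegister
    registry.start_heartbeats().await?;     // AgentHeartbeat; re-registers on needs_register
    // ... serve Call / Response / Event / Stream* traffic ...
    registry.deregister().await?;           // graceful shutdown
    Ok(())
}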
Key concepts
- Tools are pure Rust functions annotated with #[tool]; the SDK converts them into schemas consumable by LLMs and enforces capability scopes at runtime (see the sketch after this list).
- Agents can share external state (memory bus, MXP Vector Store) or remain fully isolated.
- Governance and policy enforcement are first-class: hooks exist for allow/deny decisions and human-in-the-loop steps.
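For example, a capability-scoped tool could look like this; the attribute arguments, scope string, and error handling are illustrative:

/// Look up the heartbeat interval (in seconds) for an agent.
/// #[tool] generates an LLM-consumable schema from the signature and doc
/// comment; the scope argument shown here is illustrative.
#[tool(scope = "registry.read")]
async fn heartbeat_interval(agent_id: String) -> anyhow::Result<u64> {
    // A real tool would query the registry; this stub returns a constant.
    let _ = agent_id;
    Ok(30)
}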
Quick Start
A minimal end-to-end sketch; the facade paths and builder methods are illustrative until the API stabilises:

use agents_runtime::AgentKernel;             // facade path is illustrative
use agents_runtime::adapters::OpenAiAdapter; // adapter name is illustrative

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let adapter = OpenAiAdapter::new(std::env::var("OPENAI_API_KEY")?)?;
    let agent = AgentKernel::builder().with_adapter(adapter).build()?;
    let reply = agent.run("ping").await?;
    println!("{reply}");
    Ok(())
}
System Prompts
All adapters support system prompts with provider-native optimizations (the adapter and request type names below are illustrative):
// OpenAI/Ollama: prepends the system prompt as the first message
let openai = OpenAiAdapter::new(api_key.clone())?;

// Anthropic: uses the dedicated 'system' parameter
let anthropic = AnthropicAdapter::new(api_key.clone())?;

// Gemini: uses the 'systemInstruction' field
let gemini = GeminiAdapter::new(api_key)?;

// Same API works across all providers
let request = ChatRequest::new(messages)?
    .with_system_prompt("You are a concise assistant.");
Context Window Management (Optional)
For long conversations, enable automatic context management:
use agents_runtime::ContextWindowConfig; // facade path is illustrative

let adapter = OpenAiAdapter::new(api_key)?
    .with_context_config(ContextWindowConfig::default());
// SDK automatically manages conversation history within the token budget
Getting started
- Model your agent using the runtime primitives (AgentKernel, adapters, tool registry).
- Wire MXP endpoints for discovery and message handling.
- Configure memory providers (in-memory ring buffer today, pluggable MXP Vector Store soon).
- Instrument with tracing spans and policy hooks, as shown in the sketch below.
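For instance, a Call handler wrapped in a span via the tracing crate; the handler itself is illustrative:

use tracing::{info, instrument};

// Each incoming Call gets its own span, so latency and errors are
// attributable per message. The handler body is illustrative.
#[instrument(skip(payload), fields(agent = %agent_id))]
async fn handle_call(agent_id: String, payload: Vec<u8>) -> anyhow::Result<Vec<u8>> {
    info!(len = payload.len(), "received Call payload");
    Ok(payload) // echo; a real handler would dispatch to a registered tool
}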
See docs/overview.md for architectural detail and docs/usage.md for comprehensive examples.
Documentation Map
- docs/architecture.md — crate layout, component contracts, roadmap.
- docs/features.md — current feature set and facade feature flags.
- docs/usage.md — end-to-end setup guide for building an agent, including tooling examples.
- docs/errors.md — error surfaces and troubleshooting tips.
Future
- Move memory providers (embeddings, vector stores, etc.) into external GitHub projects, one repo each, and treat them as external dependencies the runtime pulls in as required.
- By default, agents do not require memory to run and can remain stateless. For now, memory stays in this project for the sake of simplicity.