Core runtime for AI agent execution with integrated safety and cost controls.
Provides agent lifecycle management and a local LLM proxy for request interception. Orchestrates all Iron Runtime subsystems (budget, PII detection, analytics, circuit breakers).
§Purpose
This crate is the execution engine for Iron Runtime:
- Agent lifecycle management (spawn, monitor, stop agents)
- LLM Router: Local proxy intercepting OpenAI/Anthropic API calls
- Integrated safety controls (PII detection, budget enforcement)
- Real-time metrics and state management
- Dashboard integration via REST API and WebSocket
§Architecture
Iron Runtime uses a modular architecture with clear separation:
§Core Components
- Agent Runtime: Manages agent processes and lifecycle
- LLM Router: Transparent proxy for LLM API requests
- State Manager: Persists agent state and metrics
- Telemetry: Structured logging for all operations
§Integration Layer
Runtime coordinates between modules:
- iron_cost: Budget validation before LLM requests
- iron_safety: PII scanning on LLM responses
- iron_runtime_analytics: Event tracking for dashboard
- iron_reliability: Circuit breakers for provider failures
- iron_runtime_state: Agent state persistence
§Python Bindings
Python bindings are provided by the iron_sdk crate (see ADR-010).
This crate (iron_runtime) is pure Rust with no PyO3 dependencies.
§Key Types
- AgentRuntime - Main runtime managing agent lifecycle
- RuntimeConfig - Runtime configuration (budget, verbosity)
- AgentHandle - Handle to running agent for control
- llm_router::LlmRouter - Local LLM proxy server
§Public API
§Rust API
use iron_runtime::{AgentRuntime, RuntimeConfig};
use std::path::Path;

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // Configure runtime
    let config = RuntimeConfig {
        budget: 100.0, // $100 budget
        verbose: true,
    };

    // Create runtime
    let runtime = AgentRuntime::new(config);

    // Start agent from Python script
    let handle = runtime.start_agent(Path::new("agent.py")).await?;
    println!("Agent started: {}", handle.agent_id.as_str());

    // Monitor metrics
    if let Some(metrics) = runtime.get_metrics(handle.agent_id.as_str()) {
        println!("Budget spent: ${}", metrics.budget_spent);
        println!("PII detections: {}", metrics.pii_detections);
    }

    // Stop agent
    runtime.stop_agent(handle.agent_id.as_str()).await?;
    Ok(())
}
§Safety Controls
Runtime enforces multiple safety layers:
§Budget Enforcement
- Pre-request budget validation
- Request blocked if budget exceeded
- Real-time cost tracking
- Budget alerts at configurable thresholds
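The pre-request validation described above can be sketched as a simple reserve-before-send guard. This is an illustrative sketch only: `BudgetGuard`, `try_reserve`, and `BudgetError` are hypothetical names, not the actual iron_cost API.

```rust
// Hypothetical sketch of pre-request budget validation (not the real
// iron_cost API): the router would call try_reserve with an estimated
// request cost before forwarding anything to the provider.
#[derive(Debug, PartialEq)]
pub enum BudgetError {
    BudgetExceeded { limit: f64, spent: f64 },
}

pub struct BudgetGuard {
    limit: f64, // total budget in dollars
    spent: f64, // running cost total
}

impl BudgetGuard {
    pub fn new(limit: f64) -> Self {
        Self { limit, spent: 0.0 }
    }

    /// Validate an estimated cost before the LLM request is sent.
    /// Returns Err (request blocked) if the budget would be exceeded.
    pub fn try_reserve(&mut self, estimated_cost: f64) -> Result<(), BudgetError> {
        if self.spent + estimated_cost > self.limit {
            return Err(BudgetError::BudgetExceeded {
                limit: self.limit,
                spent: self.spent,
            });
        }
        self.spent += estimated_cost;
        Ok(())
    }
}

fn main() {
    let mut guard = BudgetGuard::new(1.0); // $1 budget
    assert!(guard.try_reserve(0.6).is_ok());  // allowed
    assert!(guard.try_reserve(0.6).is_err()); // would exceed: blocked
}
```

Reserving before the request (rather than recording after the response) is what makes the enforcement a hard block rather than a best-effort alert.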
§PII Detection
- Scans all LLM responses for PII
- Automatic redaction of sensitive data
- Compliance audit logging
- Configurable detection patterns
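As a rough illustration of scan-and-redact on a response body, the toy function below flags email-like tokens. This is not iron_safety's detector (which uses configurable patterns); `redact_pii` and its heuristic are invented for this sketch.

```rust
// Toy response-scanning sketch (NOT the iron_safety implementation):
// flags whitespace-separated tokens that look like email addresses and
// replaces them with a redaction marker, counting detections.
fn looks_like_email(token: &str) -> bool {
    // Crude heuristic: one '@' with a '.' somewhere after it.
    match token.find('@') {
        Some(at) => token[at + 1..].contains('.'),
        None => false,
    }
}

fn redact_pii(response: &str) -> (String, usize) {
    let mut detections = 0;
    let redacted = response
        .split_whitespace()
        .map(|tok| {
            if looks_like_email(tok) {
                detections += 1;
                "[REDACTED]"
            } else {
                tok
            }
        })
        .collect::<Vec<_>>()
        .join(" ");
    (redacted, detections)
}

fn main() {
    let (clean, n) = redact_pii("Contact alice@example.com for details");
    assert_eq!(n, 1);
    assert_eq!(clean, "Contact [REDACTED] for details");
}
```

The detection count is what would feed the `pii_detections` metric shown in the API example above, and each redaction event is what the compliance audit log would record.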
§Circuit Breakers
- Detects failing LLM providers
- Fast-fail on known-bad endpoints
- Automatic recovery after timeout
- Per-provider state isolation
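The behavior above (fast-fail on a known-bad provider, automatic retry after a timeout) is a classic circuit-breaker state machine. A minimal sketch, with names invented here rather than taken from iron_reliability, keeping one breaker per provider for state isolation:

```rust
use std::time::{Duration, Instant};

// Minimal per-provider circuit breaker sketch (illustrative names, not
// the iron_reliability API). Closed = requests flow; Open = fast-fail
// until the recovery timeout elapses, then a trial request is allowed.
enum State {
    Closed { consecutive_failures: u32 },
    Open { since: Instant },
}

struct CircuitBreaker {
    state: State,
    failure_threshold: u32,     // consecutive failures before opening
    recovery_timeout: Duration, // how long to fast-fail before retrying
}

impl CircuitBreaker {
    fn new(failure_threshold: u32, recovery_timeout: Duration) -> Self {
        Self {
            state: State::Closed { consecutive_failures: 0 },
            failure_threshold,
            recovery_timeout,
        }
    }

    /// False while open (fast-fail on a known-bad endpoint); allows a
    /// trial request once the recovery timeout has elapsed.
    fn allow_request(&mut self) -> bool {
        match self.state {
            State::Closed { .. } => true,
            State::Open { since } => {
                if since.elapsed() >= self.recovery_timeout {
                    self.state = State::Closed { consecutive_failures: 0 };
                    true
                } else {
                    false
                }
            }
        }
    }

    fn record_failure(&mut self) {
        if let State::Closed { consecutive_failures } = &mut self.state {
            *consecutive_failures += 1;
            if *consecutive_failures >= self.failure_threshold {
                self.state = State::Open { since: Instant::now() };
            }
        }
    }

    fn record_success(&mut self) {
        self.state = State::Closed { consecutive_failures: 0 };
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(2, Duration::from_millis(10));
    assert!(cb.allow_request());
    cb.record_failure();
    cb.record_failure(); // threshold reached: breaker opens
    assert!(!cb.allow_request()); // fast-fail
    std::thread::sleep(Duration::from_millis(15));
    assert!(cb.allow_request()); // timeout elapsed: retry allowed
    cb.record_success();
}
```

Keeping one `CircuitBreaker` instance per provider is what gives the per-provider state isolation: one failing endpoint trips its own breaker without affecting traffic to healthy providers.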
§Feature Flags
- enabled - Enable full runtime (disabled for library-only builds)
- analytics - Enable analytics recording via iron_runtime_analytics
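Assuming the crate is pulled in as a Cargo dependency named iron_runtime (the version number below is a placeholder), enabling both features would look like:

```toml
[dependencies]
iron_runtime = { version = "0.1", features = ["enabled", "analytics"] }
```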
§Performance
Runtime overhead on LLM requests:
- Budget check: <1ms
- PII detection: <5ms per KB
- Circuit breaker check: <0.1ms
- Analytics recording: <0.5ms
- Total proxy overhead: <10ms per request
Streaming responses have near-zero buffering latency.
Modules§
- llm_router - LLM Router - Local proxy for LLM API requests
Structs§
- AgentHandle - Agent runtime handle
- AgentRuntime - Main agent runtime
- RuntimeConfig - Runtime configuration