# rustora
"The sharpest knife in the drawer." β A Rust-native, type-safe foundation for AI Agents, inspired by Pydantic AI.
rustora brings the "Type-First" development experience of Pydantic AI to the Rust ecosystem. It is designed for production-grade, high-performance, and strictly validated AI applications.
Built on top of `llm-connector`, it supports 11+ LLM providers (OpenAI, Anthropic, DeepSeek, Ollama, etc.) out of the box.
## 🚀 Why rustora?
- Type Safety as a First-Class Citizen: Leverages Rust's type system, `serde`, and `schemars` to guarantee that LLM outputs match your code's expectations. No more `try-except` guessing games.
- Built-in Reflection Loop: Automatically catches validation errors (JSON schema violations, type mismatches) and feeds them back to the model for self-correction.
- Production Ready: Async-first, zero-overhead abstractions, and built-in Tracing for full observability.
- Model Agnostic: Powered by `llm-connector`; switch between OpenAI, Claude, DeepSeek, or local Ollama models with a single line of code.
- Developer Experience: Use the `#[tool]` macro to turn any Rust function into an LLM-compatible tool with auto-generated JSON Schema.
## 📦 Installation
Add `rustora` to your `Cargo.toml` (the crate names below are matched to the versions and imports used throughout this README):

```toml
[dependencies]
rustora = "0.2.1"
llm-connector = "0.5.19"
schemars = "0.8"
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.0", features = ["full"] }
futures = "0.3"
```
## ⚡ Quick Start
Define your output structure, pick a model, and let rustora handle the rest. Constructor and method names in this sketch are illustrative; check the crate docs for exact signatures.

```rust
use rustora::{Agent, LlmClient, Validator};
use schemars::JsonSchema;
use serde::Deserialize;
use futures::StreamExt; // For streaming support

// Derive Validator for empty/default validation
#[derive(Debug, Deserialize, JsonSchema, Validator)]
struct CityInfo {
    city: String,
    country: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Client construction is sketched here; see the crate docs for the exact builder.
    let agent: Agent<CityInfo> = Agent::new(LlmClient::openai("gpt-4o-mini")?);
    let info = agent.run("What is the capital of France?").await?;
    println!("{info:?}");
    Ok(())
}
```
## 🛠️ Features
### 1. `#[tool]` Macro
Automatically generate JSON Schemas for your functions.
```rust
use rustora::tool;

/// Get the latest price for a stock ticker. (Function body sketched for illustration.)
#[tool]
async fn get_stock_price(symbol: String) -> f64 {
    // ... call your market-data API here ...
    0.0
}

// Generates: ToolGetStockPrice::input_schema()
```
### 2. Custom Logic Validation
Go beyond JSON Schema. Implement the Validator trait to enforce business logic. If validation fails, rustora feeds the error back to the LLM for correction.
```rust
use rustora::Validator;

// The trait shape shown here is illustrative; see the crate docs for the exact signature.
impl Validator for CityInfo {
    fn validate(&self) -> Result<(), String> {
        if self.city.is_empty() {
            return Err("city must not be empty".into()); // fed back to the LLM for correction
        }
        Ok(())
    }
}
```
### 3. Conversation History (State)

Maintain context across multiple turns with `ChatSession`.
```rust
// Construction as in the Quick Start; method names are illustrative.
let agent = Agent::new(client);
let mut session = agent.chat_session();

// Turn 1
let response1 = session.send("My name is Alice.").await?;

// Turn 2 (Agent remembers context)
let response2 = session.send("What is my name?").await?;
```
### 4. Reflection Loop
If the LLM returns invalid JSON (e.g., Markdown blocks or missing fields) OR fails your custom Validator logic, rustora intercepts the error, feeds it back to the model, and retries automatically.
### 5. Observability

rustora emits structured `tracing` events.
```rust
// Initialize a subscriber (e.g. via the standard tracing-subscriber crate).
tracing_subscriber::fmt::init();

// Logs: INFO rustora: Starting agent run
// Logs: WARN rustora: Validation failed attempt=0 error=expected value...
// Logs: INFO rustora: Successfully validated output
```
### 6. Streaming with Validation
Stream tokens in real-time for low latency, then validate the final result against your schema and logic.
```rust
// Stream tokens (prompt and chunk handling are illustrative).
let mut stream = agent.stream("Write a short poem about Rust").await?;
while let Some(chunk) = stream.next().await {
    print!("{}", chunk?); // display tokens as they arrive
}

// Get validated struct after stream ends.
// This automatically parses JSON and runs your validators.
let poem: Poem = stream.finish().await?;
println!("{poem:?}");
```
## 🗺️ Roadmap
- Core: Generic `Agent<Deps, Output, Model>` with Reflection Loop.
- Integration: Full `llm-connector` support (v0.5.19+).
- Macros: `#[tool]` for automatic Schema generation.
- State: Conversation history management.
- Validation: Custom logic validators for output verification.
- Streaming: Real-time structured output streaming.
## 📄 License
MIT