# rustora
> **"The sharpest knife in the drawer."** A Rust-native, type-safe foundation for AI Agents, inspired by Pydantic AI.
`rustora` brings the "Type-First" development experience of Pydantic AI to the Rust ecosystem. It is designed for production-grade, high-performance, and strictly validated AI applications.
Built on top of [llm-connector](https://github.com/lipish/llm-connector), it supports 11+ LLM providers (OpenAI, Anthropic, DeepSeek, Ollama, etc.) out of the box.
## Why rustora?
- **Type Safety as a First-Class Citizen**: Leverages Rust's type system, `serde`, and `schemars` to guarantee that LLM outputs match your code's expectations. No more `try-except` guessing games.
- **Built-in Reflection Loop**: Automatically catches validation errors (JSON schema violations, type mismatches) and feeds them back to the model for self-correction.
- **Production Ready**: Async-first, zero-overhead abstractions, and built-in **Tracing** for full observability.
- **Model Agnostic**: Powered by `llm-connector`, switch between OpenAI, Claude, DeepSeek, or local Ollama models with a single line of code.
- **Developer Experience**: Use the `#[tool]` macro to turn any Rust function into an LLM-compatible tool with auto-generated JSON Schema.
## Installation
Add `rustora` to your `Cargo.toml`:
```toml
[dependencies]
rustora = "0.2.1"
llm-connector = "0.5.19"
schemars = "0.8"
serde = { version = "1.0", features = ["derive"] }
tokio = { version = "1.0", features = ["full"] }
futures = "0.3"
```
## Quick Start
Define your output structure, pick a model, and let `rustora` handle the rest.
```rust
use llm_connector::LlmClient;
use rustora::{Agent, RustoraLlmClient, Validator};
use schemars::JsonSchema;
use serde::Deserialize;
use futures::StreamExt; // For streaming support

// Derive Validator for empty/default validation
#[derive(Debug, Deserialize, JsonSchema, Validator)]
struct WeatherInfo {
    city: String,
    temperature: f64,
    condition: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Set up the client (e.g., DeepSeek, OpenAI, or Ollama)
    //    using the builder pattern (recommended)
    let client = LlmClient::builder()
        .deepseek("sk-...")
        .build()?;

    // 2. Create the agent
    let agent: Agent<(), WeatherInfo, _> = Agent::new(client);

    // 3. Run with auto-validation & retry
    let output = agent.run("What's the weather in Tokyo?", &()).await?;
    println!("City: {}", output.city);
    println!("Temp: {}°C", output.temperature);

    Ok(())
}
```
## Features
### 1. `#[tool]` Macro
Automatically generate JSON Schemas for your functions.
```rust
use rustora::tool;
#[tool]
fn get_stock_price(ticker: String) -> String {
    // Fetch the latest price for `ticker`...
    format!("{ticker}: $150.00")
}
// Generates: ToolGetStockPrice::input_schema()
```
### 2. Custom Logic Validation
Go beyond JSON Schema. Implement the `Validator` trait to enforce business logic. If validation fails, `rustora` feeds the error back to the LLM for correction.
```rust
use rustora::Validator;
use schemars::JsonSchema;
use serde::Deserialize;

#[derive(Deserialize, JsonSchema)]
struct CodeGen {
    code: String,
}

#[rustora::async_trait]
impl Validator<()> for CodeGen {
    async fn validate(&self, _deps: &()) -> Result<(), String> {
        if !self.code.contains("fn main") {
            return Err("Code must contain a main function".to_string());
        }
        Ok(())
    }
}
```
### 3. Conversation History (State)
Maintain context across multiple turns with `ChatSession`.
```rust
let agent = Agent::new(client);
let mut session = agent.chat_session();
// Turn 1
let response1 = session.send("My name is Rustora.", &()).await?;
// Turn 2 (Agent remembers context)
let response2 = session.send("What is my name?", &()).await?;
```
### 4. Reflection Loop
If the LLM returns invalid JSON (e.g., Markdown code fences or missing fields) or fails your custom `Validator` logic, `rustora` intercepts the error, feeds it back to the model, and retries automatically.
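The core idea can be sketched as a generic, dependency-free retry helper. This is illustrative only, not `rustora`'s internals: `run_with_reflection`, the mock model, and the parse closure are all hypothetical stand-ins.

```rust
// A dependency-free sketch of the reflection-loop idea (names are
// illustrative, not rustora's API): call the model, try to parse and
// validate, and on failure feed the error back for the next attempt.
fn run_with_reflection<M, P, T>(
    mut model: M,
    parse: P,
    prompt: &str,
    max_retries: usize,
) -> Result<T, String>
where
    M: FnMut(&str) -> String,         // stand-in for an LLM call
    P: Fn(&str) -> Result<T, String>, // schema parsing + custom validation
{
    let mut request = prompt.to_string();
    for _ in 0..=max_retries {
        let raw = model(&request);
        match parse(&raw) {
            Ok(value) => return Ok(value),
            // Reflection: append the validation error so the model can
            // self-correct on the next attempt.
            Err(err) => {
                request =
                    format!("{prompt}\nPrevious output was invalid: {err}. Please fix it.");
            }
        }
    }
    Err("exceeded retry budget".to_string())
}

fn main() {
    // Mock model: fails the first attempt, succeeds once it sees the error.
    let model = |req: &str| {
        if req.contains("invalid") {
            "42".to_string()
        } else {
            "not a number".to_string()
        }
    };
    let parse = |s: &str| s.trim().parse::<i32>().map_err(|e| e.to_string());
    let value = run_with_reflection(model, parse, "Return only a number", 3).unwrap();
    assert_eq!(value, 42);
    println!("validated: {value}");
}
```

In `rustora` the equivalent happens inside `agent.run`, with the JSON Schema check and your `Validator` impl supplying the error messages.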
### 5. Observability
`rustora` emits structured `tracing` events.
```rust
tracing_subscriber::fmt::init();
// Logs: INFO rustora: Starting agent run
// Logs: WARN rustora: Validation failed attempt=0 error=expected value...
// Logs: INFO rustora: Successfully validated output
```
### 6. Streaming with Validation
Stream tokens in real-time for low latency, then validate the final result against your schema and logic.
```rust
// Stream tokens
let mut stream = agent.stream("Write a poem", &()).await?;
while let Some(chunk_res) = stream.next().await {
    // Real-time token output
    if let Ok(token) = chunk_res {
        print!("{}", token);
    }
}
// Get validated struct after stream ends
// This automatically parses JSON and runs your validators
let poem: Poem = stream.finish().await?;
println!("Validated title: {}", poem.title);
```
## Roadmap
- [x] **Core**: Generic `Agent<Deps, Output, Model>` with Reflection Loop.
- [x] **Integration**: Full `llm-connector` support (v0.5.19+).
- [x] **Macros**: `#[tool]` for automatic Schema generation.
- [x] **State**: Conversation history management.
- [x] **Validation**: Custom logic validators for output verification.
- [x] **Streaming**: Real-time structured output streaming.
## License
MIT