# reson-agentic

Agents are just functions: a production-grade LLM agent framework for Rust.
## Features

- Multi-provider support: Anthropic, OpenAI, Google Gemini, OpenRouter, AWS Bedrock
- Native tool calling with structured outputs via `#[derive(Tool)]`
- Agent macro for ergonomic agent definitions with `#[agentic]`
- Streaming responses with reasoning/thinking support
- OpenAI/OpenRouter Responses API support (`openai:resp:*`, `openrouter:resp:*`)
- Google File API for video/large media uploads
- Retry with exponential backoff
- Clone-friendly clients for use in async contexts
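The retry behavior can be sketched as a simple delay schedule; this is an illustrative model, not the crate's actual implementation:

```rust
// Illustrative sketch of exponential backoff (not the crate's actual code):
// the delay doubles with each retry attempt and is capped at a maximum.
fn backoff_delay_ms(attempt: u32, base_ms: u64, max_ms: u64) -> u64 {
    // Cap the shift to avoid overflow on pathological attempt counts.
    let delay = base_ms.saturating_mul(1u64 << attempt.min(16));
    delay.min(max_ms)
}
```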
## Installation

```toml
[dependencies]
reson-agentic = "0.1"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
```
## Quick Start

### Basic Chat

The snippet below is an illustrative reconstruction of the original example; the exact import paths, constructor, and method names are assumptions, so consult the crate documentation for the real API:

```rust
use reson_agentic::GoogleGenAIClient;     // assumed import paths
use reson_agentic::ChatMessage;
use reson_agentic::ConversationMessage;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Constructor and method names below are assumptions.
    let client = GoogleGenAIClient::new(std::env::var("GOOGLE_GEMINI_API_KEY")?);
    let messages = vec![ConversationMessage::user(ChatMessage::text("Hello!"))];
    let response = client.chat("gemini-2.0-flash", messages).await?;
    println!("{}", response.text());
    Ok(())
}
```
## Tool Definitions with `#[derive(Tool)]`

Define type-safe tools that automatically generate JSON schemas for LLM function calling. The struct fields and generated-method calls below are illustrative reconstructions:

```rust
use reson_agentic::Tool;                  // assumed import path
use serde::{Deserialize, Serialize};

/// Search the web for information
#[derive(Tool, Serialize, Deserialize)]
struct WebSearch {
    /// The search query (field is illustrative)
    query: String,
}

/// Get weather for a location
#[derive(Tool, Serialize, Deserialize)]
struct GetWeather {
    /// City name (field is illustrative)
    location: String,
}

// Access generated schema
let schema = WebSearch::schema();         // JSON Schema object
let name = WebSearch::tool_name();        // "web_search"
let desc = WebSearch::description();      // "Search the web for information"
```
The `#[derive(Tool)]` macro:

- Converts the struct name to snake_case for the tool name
- Uses doc comments as descriptions (struct doc -> tool description, field docs -> parameter descriptions)
- Generates proper JSON Schema with types, required fields, and array items
- Supports `String`, `bool`, `i32`/`i64`/`u32`/`u64`, `f32`/`f64`, `Vec<T>`, and `Option<T>`
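The snake_case conversion of the tool name can be sketched roughly as follows; `to_snake_case` is a hypothetical helper, not part of the crate:

```rust
// Illustrative sketch of struct-name -> tool-name conversion:
// insert an underscore before each interior uppercase letter, then lowercase.
fn to_snake_case(name: &str) -> String {
    let mut out = String::new();
    for (i, c) in name.chars().enumerate() {
        if c.is_uppercase() {
            if i > 0 {
                out.push('_');
            }
            out.push(c.to_ascii_lowercase());
        } else {
            out.push(c);
        }
    }
    out
}
```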
## Agent Functions with `#[agentic]`

The `#[agentic]` macro transforms an async function into an agent. It:

- Creates a `Runtime` automatically and injects it into the function
- Validates that `runtime.run()` or `runtime.run_stream()` is called
- Configures the model from the macro attribute
A sketch of the original example; the attribute syntax and `Runtime` API shown here are assumptions:

```rust
use reson_agentic::agentic;               // assumed import paths
use reson_agentic::{Result, Runtime};

/// Analyze text and answer questions
#[agentic(model = "gemini-2.0-flash")]    // attribute syntax is an assumption
async fn analyze_text(runtime: Runtime, text: String) -> Result<String> {
    runtime.run(format!("Analyze this text: {text}")).await
}

// Call the agent - the runtime parameter is NOT passed by the caller
let result = analyze_text("some input".to_string()).await?;
```
## Video/Media Upload (Google Gemini)

Upload and analyze videos using Google's File API. The file paths, MIME handling, and method signatures below are illustrative reconstructions:

```rust
use reson_agentic::GoogleGenAIClient;     // assumed import paths
use reson_agentic::{FileState, MultimodalMessage};

let client = GoogleGenAIClient::new(api_key);

// Upload video
let video_bytes = std::fs::read("video.mp4")?;
let uploaded = client.upload_file(video_bytes, "video/mp4").await?;

// Wait for processing (required for videos)
if uploaded.state == FileState::Processing {
    // poll until the file becomes active before using it
}

// Create multimodal message referencing the uploaded file
let message = MultimodalMessage::from_file(&uploaded); // illustrative constructor

// Clean up when done
client.delete_file(&uploaded.name).await?;
```
### Supported Media Types
| Type | Formats | Max Size |
|---|---|---|
| Video | MP4, MOV, AVI, WebM, MKV, FLV, 3GP | 2GB |
| Image | JPEG, PNG, GIF, WebP, HEIC | 20MB inline |
| Audio | MP3, WAV, FLAC, AAC, OGG, M4A | 2GB |
| Document | PDF, TXT, HTML, CSS, JS, etc. | Varies |
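As a rough guide to the limits above, here is a hypothetical helper (not part of the crate) that decides whether media must go through the File API rather than being sent inline:

```rust
// Illustrative only: images up to 20MB can be sent inline per the table
// above; video and audio go through the File API.
fn needs_file_api(mime: &str, size_bytes: u64) -> bool {
    const MB: u64 = 1024 * 1024;
    match mime.split('/').next() {
        Some("image") => size_bytes > 20 * MB, // inline up to 20MB
        _ => true,                              // video/audio: always upload
    }
}
```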
## Providers

| Provider | Client | Model Format |
|---|---|---|
| Google Gemini | `GoogleGenAIClient` | `gemini-2.0-flash` |
| Anthropic | `AnthropicClient` | `claude-sonnet-4-20250514` |
| OpenAI | `OAIClient` | `gpt-4o` |
| OpenRouter | `OpenRouterClient` | `anthropic/claude-sonnet-4` |
| AWS Bedrock | `BedrockClient` | `anthropic.claude-sonnet-4-20250514-v1:0` |
| Vertex AI (Claude)\* | `GoogleAnthropicClient` | `claude-sonnet-4@20250514` |
\*Requires the `google-adc` feature: `reson-agentic = { version = "0.1", features = ["google-adc"] }`
All clients implement `Clone` for easy use in async contexts.
## Examples

See the examples directory:

- `video_upload.rs` - Video analysis with Google Gemini and the `#[agentic]` macro
- `simple_tools.rs` - Basic tool registration and execution
- `tool_call_chain.rs` - Multi-turn tool calling
- `dynamic_tool_parsing.rs` - Type-safe tool parsing with `Deserializable`
- `templating_example.rs` - Prompt templates with minijinja
- `store_usage.rs` - Context storage patterns
Run examples with:

```sh
GOOGLE_GEMINI_API_KEY=your_key cargo run --example video_upload
```
## Feature Flags

```toml
[dependencies]
reson-agentic = { version = "0.1", features = ["full"] }
```

| Feature | Description |
|---|---|
| `full` | All features enabled |
| `storage` | Redis + SQLx storage backends |
| `bedrock` | AWS Bedrock support |
| `templating` | Minijinja prompt templates |
| `telemetry` | OpenTelemetry tracing |
| `google-adc` | Google Application Default Credentials (Vertex AI) |
## License
Apache-2.0