# oxi-agent

Agent runtime layer built on `oxi-ai` — manages the tool-calling loop, event emission, context compaction, and state.
## Overview

`oxi-agent` provides the core agent loop that drives LLM interactions:

- Sends a user prompt to the LLM
- Streams the response as `AgentEvent`s
- If the LLM requests a tool call, executes the tool and feeds the result back
- Repeats until the LLM produces a final response
- Emits events for every step (thinking, text, tool calls, completion)
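The loop above can be sketched as a small self-contained model. The LLM and the tool are stubbed out, and none of the names below (`LlmReply`, `fake_llm`, `run_loop`) are part of the `oxi-agent` API — this only illustrates the control flow: call the model, execute any requested tool, feed the result back, repeat until a final response.

```rust
// One turn of model output: either a tool request or a final answer.
enum LlmReply {
    ToolCall { name: String, args: String },
    Final(String),
}

// A fake "LLM": requests one tool call, then finishes once it sees a tool result.
fn fake_llm(history: &[String]) -> LlmReply {
    if history.iter().any(|m| m.starts_with("tool:")) {
        LlmReply::Final("done".to_string())
    } else {
        LlmReply::ToolCall { name: "read".into(), args: "notes.txt".into() }
    }
}

// The core loop: returns the final text and how many iterations it took.
fn run_loop(prompt: &str) -> (String, usize) {
    let mut history = vec![format!("user: {prompt}")];
    let mut iterations = 0;
    loop {
        iterations += 1;
        match fake_llm(&history) {
            LlmReply::ToolCall { name, args } => {
                // Execute the tool and feed its result back as a message.
                history.push(format!("tool:{name} -> contents of {args}"));
            }
            LlmReply::Final(text) => return (text, iterations),
        }
    }
}

fn main() {
    let (text, iterations) = run_loop("Summarize notes.txt");
    assert_eq!(text, "done");
    assert_eq!(iterations, 2);
    println!("{text} after {iterations} iterations");
}
```

The real agent additionally streams events at each step and enforces an iteration limit, but the fixed point is the same: the loop ends when the model replies without a tool call.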
## Key Concepts

- `Agent` — the main runtime that holds a provider, config, tool registry, and shared state
- `AgentTool` — trait for defining tools the LLM can invoke
- `AgentEvent` — streaming events emitted during execution
- `ToolRegistry` — manages available tools and dispatches calls
- Compaction — automatic context compaction when conversations get too long
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
oxi-agent = { path = "path/to/oxi-agent" }
```
Basic usage (a sketch — the constructor and `run` signatures shown here are illustrative; check the crate docs for the exact forms):

```rust
use std::sync::Arc;
use oxi_agent::{Agent, AgentConfig};
use oxi_ai::get_provider;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Provider comes from oxi-ai; the provider name is an example.
    let provider = get_provider("anthropic")?;
    let mut agent = Agent::new(Arc::new(provider), AgentConfig::default());

    let response = agent.run("Hello!").await?;
    println!("{response}");
    Ok(())
}
```
### Streaming Events

A sketch of the callback form (the closure signature is illustrative):

```rust
// Run with streaming callback
agent.run_streaming("Summarize this repo", |event| {
    println!("{event:?}");
}).await?;
```
### Channel-Based Events

Events can also be consumed over a channel (the name of the channel-sending run method is illustrative):

```rust
use tokio::sync::mpsc;

let (tx, mut rx) = mpsc::unbounded_channel();
tokio::spawn(async move {
    agent.run_with_events("Hello", tx).await
});
while let Some(event) = rx.recv().await {
    println!("{event:?}");
}
```
## Tool Definition Guide

### The AgentTool Trait

All tools implement the `AgentTool` trait. The method names and signatures below are a sketch; consult the trait definition for the exact shape:

```rust
use async_trait::async_trait;
use oxi_agent::{AgentTool, ToolResult};
use serde_json::Value;
use tokio::sync::oneshot;

pub struct MyTool;

#[async_trait]
impl AgentTool for MyTool {
    fn name(&self) -> &str { "my_tool" }
    fn description(&self) -> &str { "Does one useful thing" }
    // JSON Schema describing the tool's arguments
    fn schema(&self) -> Value { serde_json::json!({ "type": "object" }) }
    async fn execute(&self, _args: Value) -> ToolResult {
        ToolResult::success("ok")
    }
}
```
### Registering Tools

Tools can be registered on a `ToolRegistry` directly or through the agent (argument shapes are illustrative):

```rust
use oxi_agent::ToolRegistry;

let mut registry = ToolRegistry::new();
registry.register(Arc::new(MyTool));

// Or with all built-in tools
let registry = ToolRegistry::with_builtins();
// Registers: ReadTool, WriteTool, EditTool, BashTool

// Register via the agent
agent.add_tool(Arc::new(MyTool));
```
### Built-in Tools

| Tool | Name | Description |
|---|---|---|
| `ReadTool` | `read` | Read file contents |
| `WriteTool` | `write` | Write content to a file |
| `EditTool` | `edit` | Make targeted edits to files |
| `BashTool` | `bash` | Execute shell commands |
### Tool Results

A sketch of the result constructors (the `ToolResult` type name and argument shapes are inferred from context):

```rust
// Success
ToolResult::success("wrote 42 bytes")

// Error
ToolResult::error("file not found")

// With metadata
ToolResult::success("done")
    .with_metadata(serde_json::json!({ "lines": 10 }))
```
### Progress Callbacks

Tools can emit progress updates during long-running operations (the callback's argument shape is illustrative):

```rust
use oxi_agent::ProgressCallback;
use std::sync::Arc;

let callback: ProgressCallback = Arc::new(|message: String| {
    eprintln!("progress: {message}");
});
tool.on_progress(callback);
```
## Event System

### AgentEvent Variants

| Event | Fields | Description |
|---|---|---|
| `Start` | `prompt` | Agent begins processing |
| `Thinking` | — | LLM is reasoning |
| `TextChunk` | `text` | Incremental text output |
| `ToolCall` | `tool_call` | LLM requests tool execution |
| `ToolStart` | `tool_call_id`, `tool_name` | Tool execution begins |
| `ToolProgress` | `tool_call_id`, `message` | Tool progress update |
| `ToolComplete` | `result` | Tool finished |
| `ToolError` | `tool_call_id`, `error` | Tool failed |
| `Complete` | `content`, `stop_reason` | Response finished |
| `Error` | `message` | Error occurred |
| `Iteration` | `number` | Agent loop iteration completed |
| `Usage` | `input_tokens`, `output_tokens` | Token usage update |
| `Compaction` | `event` | Context compaction event |
### Compaction Events

When context compaction is enabled, the agent emits `Compaction` sub-events. They can be handled inside your event match (the sub-event variant names are not reproduced here; see the crate's event definitions):

```rust
AgentEvent::Compaction(event) => match event {
    // Handle compaction sub-events (e.g. start/completion) here.
    _ => println!("compaction: {event:?}"),
},
```
## Agent Configuration

### Model Switching

Switch models mid-conversation with automatic cross-provider message transformation (the argument shape and model names below are illustrative):

```rust
// Switch from Anthropic to OpenAI
agent.switch_model("openai", "gpt-4o").await?;
// Thinking blocks are automatically converted between formats
```
### Agent State

A sketch of inspecting and resetting agent state (method argument shapes are illustrative):

```rust
let state = agent.state();
// state.messages — conversation history
// state.iteration — current loop iteration

// Reset for a new conversation
agent.reset();

// Update system prompt dynamically
agent.set_system_prompt("You are a concise assistant.");
```
## License

MIT