<div align="center">
<img src="assets/logo.png" alt="AutoAgents Logo" width="200" height="200">
# AutoAgents
**A Modern Multi-Agent Framework in Rust**
[Crates.io](https://crates.io/crates/autoagents)
[Documentation](https://liquidos-ai.github.io/AutoAgents)
[License](https://github.com/liquidos-ai/AutoAgents#license)
[CI Status](https://github.com/liquidos-ai/AutoAgents/actions)
[Code Coverage](https://codecov.io/gh/liquidos-ai/AutoAgents)
</div>
---
## 🚀 Overview
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous
agents powered by Large Language Models (LLMs) and [Ractor](https://github.com/slawlor/ractor). Designed for
performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can
reason, act, and collaborate. You can build cloud-native agents, edge-native agents, or hybrid deployments that
combine the two. The architecture is modular with swappable components: memory backends and executors can be
exchanged with minimal rework.
With native WASM compilation support, you can deploy agent orchestration directly to the web browser.
---
## ✨ Key Features
### 🤖 **Agent Execution**
- **Multiple Executors**: ReAct (Reasoning + Acting) and Basic executors with streaming support
- **Structured Outputs**: Type-safe JSON schema validation and custom output types
- **Memory Systems**: Configurable memory backends (sliding window today; persistent storage coming soon), as sketched below
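
Because memory is just another input to the agent builder, backends can be swapped without touching the rest of the agent. A minimal sketch, assuming a hypothetical `MyAgent` type declared with the `#[agent]` macro and an `llm` handle built as in the Quick Start below:

```rust
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::ReActAgent;
use autoagents::core::agent::{AgentBuilder, DirectAgent};

// Keep the 20 most recent messages in context; any other memory backend
// can be dropped in through the same `.memory(...)` call.
let memory = Box::new(SlidingWindowMemory::new(20));
let agent = AgentBuilder::<_, DirectAgent>::new(ReActAgent::new(MyAgent::default()))
    .llm(llm)       // `llm` built via LLMBuilder, as in the Quick Start
    .memory(memory) // swappable memory backend
    .build()
    .await?;
```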
### 🔧 **Tool Integration**
- **Custom Tools**: Easy integration with derive macros, as sketched after this list
- **WASM Runtime for Tool Execution**: Sandboxed tool execution
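
Declaring a tool takes a derive macro for the input schema, a `#[tool]` attribute for the metadata, and a `ToolRuntime` impl for the behavior. A minimal sketch (the `Echo` tool here is illustrative) following the same pattern as the Quick Start below:

```rust
use async_trait::async_trait;
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents_derive::{tool, ToolInput};
use serde::{Deserialize, Serialize};
use serde_json::Value;

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct EchoArgs {
    #[input(description = "Text to echo back")]
    text: String,
}

#[tool(name = "Echo", description = "Echoes the input text back", input = EchoArgs)]
struct Echo {}

#[async_trait]
impl ToolRuntime for Echo {
    async fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        // Deserialize the JSON arguments into the typed input struct.
        let typed: EchoArgs = serde_json::from_value(args)?;
        Ok(typed.text.into())
    }
}
```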
### 🏗️ **Flexible Architecture**
- **Provider Agnostic**: Support for OpenAI, Anthropic, Ollama, and local models
- **Multi-Platform**: Native Rust, WASM for browsers, and server deployments
- **Multi-Agent**: Type-safe pub/sub communication and agent orchestration
### 🌐 **Deployment Options**
- **Native**: High-performance server and desktop applications
- **Browser**: Run agents directly in web browsers via WebAssembly
- **Edge**: Local inference with ONNX models
---
## 🌐 Supported LLM Providers
AutoAgents supports a wide range of LLM providers, allowing you to choose the best fit for your use case:
### Cloud Providers
| Provider | Status |
|----------|--------|
| **OpenAI** | ✅ |
| **OpenRouter** | ✅ |
| **Anthropic** | ✅ |
| **DeepSeek** | ✅ |
| **xAI** | ✅ |
| **Phind** | ✅ |
| **Groq** | ✅ |
| **Google** | ✅ |
| **Azure OpenAI** | ✅ |
### Local Providers
| Provider | Status |
|----------|--------|
| **Mistral-rs** | ⚠️ Under Development |
| **Burn** | ⚠️ Experimental |
| **ONNX** | ⚠️ Experimental |
| **Ollama** | ✅ |
_Provider support is actively expanding based on community needs._
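
Switching providers only changes the backend type handed to `LLMBuilder`; the builder calls stay the same. A minimal sketch using the OpenAI backend from the Quick Start (other backends live under `autoagents::llm::backends`; exact module names are an assumption here, so check the `autoagents-llm` crate docs):

```rust
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;
use std::sync::Arc;

fn build_llm() -> Arc<OpenAI> {
    // Swap `OpenAI` for another backend type (e.g. an Anthropic or Ollama
    // backend; module paths assumed to follow the same `backends::*` layout)
    // to change providers without touching the rest of your agent code.
    LLMBuilder::<OpenAI>::new()
        .api_key(std::env::var("OPENAI_API_KEY").unwrap_or_default())
        .model("gpt-4o-mini")
        .temperature(0.2)
        .build()
        .expect("Failed to build LLM")
}
```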
---
## 📦 Installation
### Development Setup
For contributing to AutoAgents or building from source:
#### Prerequisites
- **Rust** (latest stable recommended)
- **Cargo** package manager
- **Lefthook** for Git hooks management
#### Install Lefthook
**macOS (using Homebrew):**
```bash
brew install lefthook
```
**Linux/Windows:**
```bash
# Using npm
npm install -g lefthook
```
#### Clone and Setup
```bash
# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents
# Install Git hooks using lefthook
lefthook install
# Build the project
cargo build --release
# Run tests to verify setup
cargo test --all-features
```
The lefthook configuration will automatically:
- Format code with `cargo fmt`
- Run linting with `cargo clippy`
- Execute tests before commits
---
## 🚀 Quick Start
### Basic Usage
```rust
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgent, ReActAgentOutput};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT, DirectAgent};
use autoagents::core::error::Error;
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;
use autoagents_derive::{agent, tool, AgentHooks, AgentOutput, ToolInput};
use async_trait::async_trait; // required by the #[async_trait] impl below
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
#[input(description = "Left Operand for addition")]
left: i64,
#[input(description = "Right Operand for addition")]
right: i64,
}
#[tool(
name = "Addition",
description = "Use this tool to Add two numbers",
input = AdditionArgs,
)]
struct Addition {}
#[async_trait]
impl ToolRuntime for Addition {
async fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
println!("execute tool: {:?}", args);
let typed_args: AdditionArgs = serde_json::from_value(args)?;
let result = typed_args.left + typed_args.right;
Ok(result.into())
}
}
/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
#[output(description = "The addition result")]
value: i64,
#[output(description = "Explanation of the logic")]
explanation: String,
#[output(description = "If user asks other than math questions, use this to answer them.")]
generic: Option<String>,
}
#[agent(
name = "math_agent",
description = "You are a Math agent",
tools = [Addition],
output = MathAgentOutput,
)]
#[derive(Default, Clone, AgentHooks)]
pub struct MathAgent {}
impl From<ReActAgentOutput> for MathAgentOutput {
fn from(output: ReActAgentOutput) -> Self {
let resp = output.response;
if output.done && !resp.trim().is_empty() {
// Try to parse as structured JSON first
if let Ok(value) = serde_json::from_str::<MathAgentOutput>(&resp) {
return value;
}
}
// For streaming chunks or unparseable content, create a default response
MathAgentOutput {
value: 0,
explanation: resp,
generic: None,
}
}
}
pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));
let agent_handle = AgentBuilder::<_, DirectAgent>::new(ReActAgent::new(MathAgent {}))
.llm(llm)
.memory(sliding_window_memory)
.build()
.await?;
println!("Running simple_agent with direct run method");
let result = agent_handle.agent.run(Task::new("What is 1 + 1?")).await?;
println!("Result: {:?}", result);
Ok(())
}
#[tokio::main]
async fn main() -> Result<(), Error> {
// Read the API key from the environment (empty if unset)
let api_key = std::env::var("OPENAI_API_KEY").unwrap_or("".into());
// Initialize and configure the LLM client
let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
.api_key(api_key) // Set the API key
.model("gpt-4o") // Use GPT-4o-mini model
.max_tokens(512) // Limit response length
.temperature(0.2) // Control response randomness (0.0-1.0)
.build()
.expect("Failed to build LLM");
simple_agent(llm).await?;
Ok(())
}
```
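To compile this example you'll need the `autoagents`, `autoagents-derive`, `serde`, `serde_json`, `async-trait`, and `tokio` crates in `Cargo.toml` (exact versions and feature flags vary by release; the [examples](examples/) directory contains complete manifests), plus `OPENAI_API_KEY` set in your environment before running.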
### AutoAgents CLI
Command-line interface for running and serving AutoAgents workflows from YAML.
#### Installation
```bash
cargo build --package autoagents-cli --release
```
The binary will be available at `target/release/autoagents`.
#### Usage
##### Run a Workflow
Define a workflow in a YAML file, then execute it:
```yaml
kind: Direct
name: ResearchAgent
stream: false
description: "A research agent designed to search, retrieve, and summarize information from the web."
workflow:
agent:
name: ResearchAgent
description: "A deep research agent capable of gathering accurate information, summarizing sources, and providing references."
instructions: |
You are a research expert. Your task is to find accurate and up-to-date information related to the user's query.
1. Search for relevant sources on the web.
2. Extract key insights and summarize them concisely.
3. Provide references and links to original sources.
4. Make sure to cross-verify facts and avoid unverified information.
5. Present the final answer in a structured and clear manner.
executor: ReAct
memory:
kind: sliding_window
parameters:
window_size: 100
model:
kind: llm
backend:
kind: Cloud
provider: OpenAI
model_name: gpt-4o-mini
parameters:
temperature: 0.2
max_tokens: 1500
tools:
- name: brave_search
output:
type: text
output:
type: text
```
```bash
autoagents run --workflow workflow.yaml --input "What is Rust?"
```
##### Serve Workflows over HTTP
Start an HTTP server to serve workflows via REST API:
```bash
autoagents serve --workflow workflow.yaml --port 8080
```
Optional arguments:
- `--name <NAME>` - Custom name for the workflow (defaults to filename)
- `--host <HOST>` - Host to bind to (default: 127.0.0.1)
- `--port <PORT>` - Port to bind to (default: 8080)
##### Examples
```bash
# Run a direct workflow
autoagents run -w workflow.yaml -i "Tell me about AI"
# Serve a workflow on custom port
autoagents serve -w workflow.yaml -p 9000 --name research
# Serve all workflows from a directory
autoagents serve --directory ./workflows
# Serve with custom name
autoagents serve -w workflow.yaml --name my_agent --host 0.0.0.0 --port 3000
```
---
## 📚 Examples
Explore our comprehensive examples to get started quickly:
### [Basic](examples/basic/)
Demonstrates starter examples: a simple agent with tools, a very basic agent, an edge agent, chaining, the actor-based
model, streaming, and agent hooks.
### [MCP Integration](examples/mcp/)
Demonstrates how to integrate AutoAgents with the Model Context Protocol (MCP).
### [Local Models](examples/mistral_rs)
Demonstrates how to run local models with AutoAgents using Mistral-rs.
### [Design Patterns](examples/design_patterns/)
Demonstrates common design patterns such as chaining, planning, routing, parallelization, and reflection.
### [Providers](examples/providers/)
Contains examples demonstrating how to use different LLM providers with AutoAgents.
### [WASM Tool Execution](examples/wasm_runner/)
A simple agent that runs tools in a sandboxed WASM runtime.
### [Coding Agent](examples/coding_agent/)
A sophisticated ReAct-based coding agent with file manipulation capabilities.
### [Wasm Agent](examples/wasm_agent/)
Compiles the agent runtime to a WASM module and loads it in a browser web app.
---
## 🏗️ Components
AutoAgents is built with a modular architecture:
```
AutoAgents/
├── crates/
│   ├── autoagents/             # Main library entry point
│   ├── autoagents-core/        # Core agent framework
│   ├── autoagents-llm/         # LLM provider implementations
│   ├── autoagents-toolkit/     # Collection of ready-to-use tools
│   ├── autoagents-burn/        # Local LLM provider built on Burn
│   ├── autoagents-mistral-rs/  # Local LLM provider built on Mistral-rs
│   ├── autoagents-onnx/        # Edge runtime implementation using ONNX
│   ├── autoagents-derive/      # Procedural macros
│   ├── autoagents-cli/         # AutoAgents CLI
│   └── autoagents-serve/       # Runs and serves YAML-based workflows
└── examples/                   # Example implementations
```
### Core Components
- **Agent**: The fundamental unit of intelligence
- **Environment**: Manages agent lifecycle and communication
- **Memory**: Configurable memory systems
- **Tools**: External capability integration
- **Executors**: Different reasoning patterns (ReAct, Chain-of-Thought)
---
## 🛠️ Development
### Setup
For development setup instructions, see the [Installation](#-installation) section above.
### Running Tests
```bash
# Run all tests
cargo test --all-features
# Run tests with coverage (requires cargo-tarpaulin)
cargo install cargo-tarpaulin
cargo tarpaulin --all-features --out html
```
### Git Hooks
This project uses Lefthook for Git hooks management. The hooks will automatically:
- Format code with `cargo fmt --check`
- Run linting with `cargo clippy -- -D warnings`
- Execute tests with `cargo test --all-features --workspace --exclude autoagents-burn`
### Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md)
and [Code of Conduct](CODE_OF_CONDUCT.md) for details.
---
## 📖 Documentation
- **[API Documentation](https://liquidos-ai.github.io/AutoAgents)**: Complete Framework Docs
- **[Examples](examples/)**: Practical implementation examples
---
## 🤝 Community
- **GitHub Issues**: Bug reports and feature requests
- **Discussions**: Community Q&A and ideas
- **Discord**: Join our community at https://discord.gg/Ghau8xYn
---
## 📊 Performance
AutoAgents is designed for high performance:
- **Memory Efficient**: Optimized memory usage with configurable backends
- **Concurrent**: Full async/await support with tokio
- **Scalable**: Horizontal scaling with multi-agent coordination
- **Type Safe**: Compile-time guarantees with Rust's type system
---
## 📜 License
AutoAgents is dual-licensed under:
- **MIT License** ([MIT_LICENSE](MIT_LICENSE))
- **Apache License 2.0** ([APACHE_LICENSE](APACHE_LICENSE))
You may choose either license for your use case.
---
## 🙏 Acknowledgments
Built with ❤️ by the [Liquidos AI](https://liquidos.ai) team and our amazing community contributors.
Special thanks to:
- The Rust community for the excellent ecosystem
- OpenAI, Anthropic, and other LLM providers for their APIs
- All contributors who help make AutoAgents better
---
<div align="center">
<strong>Ready to build intelligent agents? Get started with AutoAgents today!</strong>
</div>
## Star History
[Star History Chart](https://www.star-history.com/#liquidos-ai/AutoAgents&Date)