AutoAgents
A production-grade multi-agent framework in Rust
English | 中文 | 日本語 | Español | Français | Deutsch | 한국어 | Português (Brasil) Translations may lag behind the English README.
Documentation | Examples | Contributing
Overview
AutoAgents is a modular, multi-agent framework for building intelligent systems in Rust. It combines a type-safe agent model with structured tool calling, configurable memory, and pluggable LLM backends. The architecture is designed for performance, safety, and composability across server and edge deployments.
Key Features
- Agent execution: ReAct and basic executors, streaming responses, and structured outputs
- Tooling: Derive macros for tools and outputs, plus a sandboxed WASM runtime for tool execution
- Memory: Sliding window memory with extensible backends
- LLM providers: Cloud and local backends behind a unified interface
- LLM Guardrails: Guardrail implementation for safeguarding LLM inference
- LLM Optimization: Build LLM pipelines with optimization passes such as caching and retries for faster, more reliable inference
- Multi-agent orchestration: Typed pub/sub communication and environment management
- Speech processing: Local text-to-speech (TTS) and speech-to-text (STT) support
- Observability: OpenTelemetry tracing and metrics with pluggable exporters
Supported LLM Providers
Cloud Providers
| Provider | Status |
|---|---|
| OpenAI | ✅ |
| OpenRouter | ✅ |
| Anthropic | ✅ |
| DeepSeek | ✅ |
| xAI | ✅ |
| Phind | ✅ |
| Groq | ✅ |
| Azure OpenAI | ✅ |
| MiniMax | ✅ |
Local Providers
| Provider | Status |
|---|---|
| Ollama | ✅ |
| Mistral-rs | ✅ |
| Llama-Cpp | ✅ |
Experimental Providers
See https://github.com/liquidos-ai/AutoAgents-Experimental-Backends
| Provider | Status |
|---|---|
| Burn | ⚠️ Experimental |
| Onnx | ⚠️ Experimental |
Provider support is actively expanding based on community needs.
Benchmarks

More info at GitHub
Installation
Prerequisites
- Rust (latest stable recommended)
- Cargo package manager
- LeftHook for Git hooks management
- Python 3.9+ (required for Python bindings)
- uv for Python environment and package management
- maturin (required to build/install local Python bindings from source)
Install LeftHook
macOS (Homebrew):
Linux/Windows (npm):
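LeftHook is available from both package managers; typical install commands:

```shell
# macOS (Homebrew)
brew install lefthook

# Linux/Windows (npm)
npm install -g lefthook
```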
Clone and Build
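Assuming the repository lives under the liquidos-ai organization referenced elsewhere in this README, a typical clone-and-build looks like:

```shell
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents
cargo build --release
```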
Python Bindings
AutoAgents ships Python bindings to PyPI. Install the base package and add backends via extras:
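For example (the extra names below are assumptions based on the binding directories in this repository; check PyPI for the published list):

```shell
# Base package
pip install autoagents

# Backend extras (extra name assumed for illustration)
pip install "autoagents[llamacpp]"
```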
Development installs from this repo are driven by Make targets: one cleans, builds, and installs all CPU bindings into the active venv, and another does the same for the CPU + CUDA bindings. These targets remove stale editable-install extension artifacts before rebuilding, which avoids loading out-of-date .abi3.so files from the source tree.
Example scripts:
- Core cloud example: bindings/python/autoagents/examples/openai_agent.py
- llama.cpp example: bindings/python/autoagents-llamacpp/examples/llamacpp_agent.py
- mistral-rs example: bindings/python/autoagents-mistralrs/examples/mistral_rs_agent.py
Run Tests
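The test suite runs through Cargo; the invocation below mirrors the one used by the project's Git hooks (see Development):

```shell
cargo test --all-features --workspace --exclude autoagents-burn
```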
Quick Start
The original snippet lost its import paths in formatting; below is a minimal reconstructed sketch (the module paths and builder methods are assumptions, not the verified API — consult the API documentation):

```rust
use std::sync::Arc;

// NOTE: module paths below are reconstructed from the surviving type names
// and may not match the current crate layout; check the API docs.
use autoagents::core::memory::SlidingWindowMemory;
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build an OpenAI-backed provider; the key is read from the environment.
    let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
        .api_key(std::env::var("OPENAI_API_KEY")?)
        .model("gpt-4o")
        .build()?;

    // Keep only the most recent conversation turns in context.
    let memory = SlidingWindowMemory::new(10);

    // ... construct an agent with `llm` and `memory`, then run a `Task` ...
    let _ = (llm, memory);
    Ok(())
}
```
AutoAgents CLI
The AutoAgents CLI runs agentic workflows from YAML configurations and serves them over HTTP. Check it out at https://github.com/liquidos-ai/AutoAgents-CLI.
Examples
Explore the examples to get started quickly:
Basic
Includes examples such as Simple Agent with Tools, Very Basic Agent, Edge Agent, Chaining, Actor Based Model, Streaming, and Adding Agent Hooks.
LLM Pipelines
Demonstrates LLM pipelines with optimization passes such as cache and retry to improve performance and reliability.
Guardrails
Demonstrates configurable input and output guardrails with Block, Sanitize, and Audit policies using an LLMLayer in the pipeline.
MCP Integration
Demonstrates how to integrate AutoAgents with the Model Context Protocol (MCP).
Local Models
Demonstrates how to integrate AutoAgents with Mistral-rs for local models.
Design Patterns
Demonstrates various design patterns like Chaining, Planning, Routing, Parallel and Reflection.
Providers
Contains examples demonstrating how to use different LLM providers with AutoAgents.
WASM Tool Execution
A simple agent which can run tools in WASM runtime.
Coding Agent
A sophisticated ReAct-based coding agent with file manipulation capabilities.
Speech
Run AutoAgents Speech Example with realtime TTS and STT.
Android Local Agent
An example app that runs AutoAgents with local models on Android using the AutoAgents-llamacpp backend.
Components
AutoAgents is built with a modular architecture:
```
AutoAgents/
├── crates/
│   ├── autoagents/            # Main library entry point
│   ├── autoagents-core/       # Core agent framework
│   ├── autoagents-protocol/   # Shared protocol/event types
│   ├── autoagents-llm/        # LLM provider implementations
│   ├── autoagents-telemetry/  # OpenTelemetry integration
│   ├── autoagents-toolkit/    # Collection of ready-to-use tools
│   ├── autoagents-mistral-rs/ # LLM provider implementation using Mistral-rs
│   ├── autoagents-llamacpp/   # LLM provider implementation using LlamaCpp
│   ├── autoagents-speech/     # Speech model support for TTS and STT
│   ├── autoagents-guardrails/ # LLM Guardrails implementation
│   ├── autoagents-qdrant/     # Qdrant vector store
│   └── autoagents-derive/     # Procedural macros
├── examples/                  # Example implementations
```
Core Components
- Agent: The fundamental unit of intelligence
- Environment: Manages agent lifecycle and communication
- Memory: Configurable memory systems
- Tools: External capability integration
- Executors: Different reasoning patterns (ReAct, Chain-of-Thought)
Development
Prerequisite: LeftHook installed (see Installation above).
Running Tests
```shell
cargo test --all-features --workspace --exclude autoagents-burn

# Coverage (requires cargo-tarpaulin)
cargo tarpaulin --workspace
```
Running Benchmarks
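Benchmarks in a Cargo workspace are typically run with:

```shell
cargo bench --workspace
```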
Git Hooks
This project uses LeftHook for Git hooks management. The hooks will automatically:
- Format code with `cargo fmt --check`
- Run linting with `cargo clippy -- -D warnings`
- Execute tests with `cargo test --all-features --workspace --exclude autoagents-burn`
Contributing
We welcome contributions. Please see our Contributing Guidelines and Code of Conduct for details.
Documentation
- API Documentation: Complete framework docs
- Examples: Practical implementation examples
Community
- GitHub Issues: Bug reports and feature requests
- Discussions: Community Q&A and ideas
- Discord: Join our community at https://discord.gg/zfAF9MkEtK
Performance
AutoAgents is designed for high performance:
- Memory Efficient: Optimized memory usage with configurable backends
- Concurrent: Full async/await support with tokio
- Scalable: Horizontal scaling with multi-agent coordination
- Type Safe: Compile-time guarantees with Rust's type system
License
AutoAgents is dual-licensed under:
- MIT License (MIT_LICENSE)
- Apache License 2.0 (APACHE_LICENSE)
You may choose either license for your use case.
Acknowledgments
Built by the Liquidos AI team and wonderful community of researchers and engineers.
Special thanks to:
- The Rust community for the excellent ecosystem
- LLM providers for enabling high-quality model APIs
- All contributors who help improve AutoAgents