A.R.E.S - Agentic Retrieval Enhanced Server

A production-grade agentic chatbot server built in Rust with multi-provider LLM support, tool calling, RAG, MCP integration, and advanced research capabilities.
Features
- Multi-Provider LLM Support: Ollama, OpenAI, LlamaCpp (direct GGUF loading)
- TOML Configuration: Declarative configuration with hot-reloading
- Configurable Agents: Define agents via TOON (Token Oriented Object Notation) with custom models, tools, and prompts
- Workflow Engine: Declarative workflow execution with agent routing
- Local-First Development: Runs entirely locally with Ollama and SQLite by default
- Tool Calling: Type-safe function calling with automatic schema generation
- Per-Agent Tool Filtering: Restrict which tools each agent can access
- Streaming: Real-time streaming responses from all providers
- Authentication: JWT-based auth with Argon2 password hashing
- Database: Local SQLite (libsql) by default, optional Turso and Qdrant
- MCP Support: Pluggable Model Context Protocol server integration
- Agent Framework: Multi-agent orchestration with specialized agents
- RAG: Pure-Rust vector store (ares-vector), multi-strategy search (semantic, BM25, fuzzy, hybrid), reranking
- Memory: User personalization and context management
- Deep Research: Multi-step research with parallel subagents
- Web Search: Built-in web search via daedra (no API keys required)
- OpenAPI: Automatic API documentation generation
- Testing: Comprehensive unit and integration tests
- Config Validation: Circular reference detection and unused config warnings
Installation
A.R.E.S can be used as a standalone server or as a library in your Rust project.
As a Library
Add to your Cargo.toml:
[dependencies]
ares = "0.2"
Basic usage, as a rough sketch (the type names and module paths in the example below are assumptions, not the crate's documented API):
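```rust
// Hypothetical API: `AresConfig` and `AresServer` are illustrative names;
// this also assumes `tokio` and `anyhow` are in your dependencies.
use ares::{AresConfig, AresServer};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Load the declarative configuration and start the server.
    let config = AresConfig::from_file("ares.toml")?;
    AresServer::new(config).serve().await?;
    Ok(())
}
```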
As a Binary
# Install from crates.io (basic installation)
# Install with embedded Web UI
# Initialize a new project (creates ares.toml and config files)
# Run the server
CLI Commands
A.R.E.S provides a full-featured CLI with colored output:
# Initialize a new project with all configuration files
# Initialize with custom options
# Initialize with minimal configuration
# View configuration summary
# Validate configuration
# List all configured agents
# Show details for a specific agent
# Start the server
# Start with verbose logging
# Use a custom config file
# Disable colored output
Init Command Options
| Option | Description |
|---|---|
| `--force, -f` | Overwrite existing files |
| `--minimal, -m` | Create minimal configuration |
| `--no-examples` | Skip creating TOON example files |
| `--provider <NAME>` | LLM provider: `ollama`, `openai`, or both |
| `--host <ADDR>` | Server host address (default: 127.0.0.1) |
| `--port <PORT>` | Server port (default: 3000) |
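For example, assuming the CLI binary is installed as `ares`, a customized initialization might look like this:

```bash
# Hypothetical invocation combining the options documented above
ares init --provider ollama --host 127.0.0.1 --port 3000 --no-examples
```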
Quick Start (Development)
Prerequisites
- Rust 1.91+: Install via rustup
- Ollama (recommended): For local LLM inference - Install Ollama
- just (recommended): Command runner - Install just
1. Clone and Setup
# Or use just to set up everything:
2. Start Ollama (Recommended)
# Install a model
# Or: just ollama-pull
# Ollama runs automatically as a service, or start manually:
3. Build and Run
# Build with default features (local-db + ollama)
# Or: just build
# Run the server
# Or: just run
Server runs on http://localhost:3000
Feature Flags
A.R.E.S uses Cargo features for conditional compilation:
LLM Providers
| Feature | Description | Default |
|---|---|---|
| `ollama` | Ollama local inference | ✅ Yes |
| `openai` | OpenAI API (and compatible) | No |
| `llamacpp` | Direct GGUF model loading | No |
| `llamacpp-cuda` | LlamaCpp with CUDA | No |
| `llamacpp-metal` | LlamaCpp with Metal (macOS) | No |
| `llamacpp-vulkan` | LlamaCpp with Vulkan | No |
Database Backends
| Feature | Description | Default |
|---|---|---|
| `local-db` | Local SQLite via libsql | ✅ Yes |
| `turso` | Remote Turso database | No |
| `qdrant` | Qdrant vector database | No |
| `ares-vector` | Pure-Rust vector store with HNSW indexing | No |
UI & Documentation
| Feature | Description | Default |
|---|---|---|
| `ui` | Embedded Leptos web UI served from backend | No |
| `swagger-ui` | Interactive API documentation at `/swagger-ui/` | No |
Note: `swagger-ui` was made optional in v0.2.5 to reduce binary size and build time. The feature requires network access during build to download Swagger UI assets.
Feature Bundles
| Feature | Includes |
|---|---|
| `all-llm` | ollama + openai + llamacpp |
| `all-db` | local-db + turso + qdrant |
| `full` | All optional features (except UI): ollama, openai, llamacpp, turso, qdrant, mcp, swagger-ui |
| `full-ui` | All optional features + UI |
| `minimal` | No optional features |
Building with Features
# Default (ollama + local-db)
# Or: just build
# With OpenAI support
# Or: just build-features "openai"
# With direct GGUF loading
# With CUDA GPU acceleration
# Full feature set
# Or: just build-all
# With embedded Web UI
# With Swagger UI (interactive API docs)
# Full feature set with UI
# Release build
# Or: just build-release
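For reference, these recipes map onto standard Cargo feature flags using the feature names from the tables above, for example:

```bash
# Default features (ollama + local-db)
cargo build

# With OpenAI support
cargo build --features openai

# Everything except the UI
cargo build --features full

# Release build with the embedded UI bundle
cargo build --release --features full-ui
```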
Configuration
A.R.E.S uses a TOML configuration file (ares.toml) for declarative configuration of all components. The server requires this file to start.
Quick Start
# Copy the example config
# Set required environment variables
Configuration File (ares.toml)
The configuration file defines providers, models, agents, tools, and workflows:
# NOTE: Field names in this example are representative; the exact schema is
# defined by the ares.toml generated by the init command.

# Server settings
[server]
host = "127.0.0.1"
port = 3000
log_level = "info"

# Authentication (secrets loaded from env vars)
[auth]
jwt_secret_env = "JWT_SECRET"
api_key_env = "API_KEY"

# Database
[database]
path = "./data/ares.db"

# LLM Providers (define named providers)
[providers.ollama-local]
type = "ollama"
base_url = "http://localhost:11434"
default_model = "ministral-3:3b"

[providers.openai] # Optional
type = "openai"
api_key_env = "OPENAI_API_KEY"
default_model = "gpt-4"

# Models (reference providers, set parameters)
[models.fast]
provider = "ollama-local"
model = "ministral-3:3b"
temperature = 0.7
max_tokens = 256

[models.balanced]
provider = "ollama-local"
model = "ministral-3:3b"
temperature = 0.7
max_tokens = 512

[models.smart]
provider = "ollama-local"
model = "qwen3-vl:2b"
temperature = 0.3
max_tokens = 1024

# Tools (define available tools)
[tools.calculator]
enabled = true
timeout = 10

[tools.web_search]
enabled = true
timeout = 30

# Agents (reference models and tools)
[agents.router]
model = "fast"
system_prompt = "You route requests to specialized agents..."

[agents.product]
model = "balanced"
tools = ["calculator"] # Tool filtering: only calculator
system_prompt = "You are a Product Agent..."

[agents.research]
model = "smart"
tools = ["web_search", "calculator"] # Multiple tools
system_prompt = "You conduct research..."

# Workflows (define agent routing)
[workflows.default]
entry_agent = "router"
fallback_agent = "product"
max_depth = 5

[workflows.research]
entry_agent = "research"
max_iterations = 10
Per-Agent Tool Filtering
Each agent can specify which tools it has access to:
# Agent names and field names below are illustrative; `tools` controls tool access.
[agents.product]
model = "balanced"
tools = ["calculator"] # Only calculator, no web search

[agents.assistant]
model = "balanced"
tools = ["calculator", "web_search"] # Both tools
If tools is empty or omitted, the agent has no tool access.
Configuration Validation
The configuration is validated on load with:
- Reference checking: Models must reference valid providers, agents must reference valid models
- Circular reference detection: Workflows cannot have circular agent references
- Environment variables: All referenced env vars must be set
For warnings about unused configuration items (providers, models, tools not referenced by anything), the validate_with_warnings() method is available.
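A rough sketch of calling it (only `validate_with_warnings()` is named in this README; the config type, loader, and return shape are placeholders):

```rust
// Placeholder names: the concrete config type and loader may differ.
let config = AresConfig::from_file("ares.toml")?;
for warning in config.validate_with_warnings()? {
    eprintln!("config warning: {warning}");
}
```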
Hot Reloading
Configuration changes are automatically detected and applied without restarting the server. Edit ares.toml and the changes will be picked up within 500ms.
Environment Variables
The following environment variables must be set (referenced by ares.toml):
# Required
JWT_SECRET=your-secret-key-at-least-32-characters
API_KEY=your-api-key
# Optional (for OpenAI provider)
OPENAI_API_KEY=sk-...
Legacy Environment Variables
For backward compatibility, these environment variables can also be used:
# Server
HOST=127.0.0.1
PORT=3000
# Database (local-first)
# Examples: ./data/ares.db | file:./data/ares.db | :memory:
DATABASE_URL=./data/ares.db
# Optional: Turso cloud (set both to enable)
# TURSO_URL=libsql://<your-db>-<your-org>.turso.io
# TURSO_AUTH_TOKEN=...
# LLM Provider - Ollama (default)
OLLAMA_URL=http://localhost:11434
# LLM Provider - OpenAI (optional)
# OPENAI_API_KEY=sk-...
# OPENAI_API_BASE=https://api.openai.com/v1
# OPENAI_MODEL=gpt-4
# LLM Provider - LlamaCpp (optional, highest priority if set)
# LLAMACPP_MODEL_PATH=/path/to/model.gguf
# Authentication
JWT_SECRET=your-secret-key-at-least-32-characters
API_KEY=your-api-key
# Optional: Qdrant for vector search
# QDRANT_URL=http://localhost:6334
# QDRANT_API_KEY=
Provider Priority
When multiple providers are configured, they are selected in this order:
1. LlamaCpp - if LLAMACPP_MODEL_PATH is set
2. OpenAI - if OPENAI_API_KEY is set
3. Ollama - default fallback (no API key required)
Dynamic Configuration (TOON)
In addition to ares.toml, A.R.E.S supports TOON (Token Oriented Object Notation) files for behavioral configuration with hot-reloading:
config/
├── agents/
│   ├── router.toon
│   ├── orchestrator.toon
│   └── product.toon
├── models/
│   ├── fast.toon
│   └── balanced.toon
├── tools/
│   └── calculator.toon
├── workflows/
│   └── default.toon
└── mcps/
    └── filesystem.toon
Example TOON agent config (config/agents/router.toon):
name: router
model: fast
max_tool_iterations: 5
parallel_tools: false
tools[0]:
system_prompt: |
You are a router agent that directs requests to specialized agents.
Enable TOON configs in ares.toml:
# Field names are representative; see the generated ares.toml for the exact schema.
[toon]
agents_dir = "config/agents"
models_dir = "config/models"
tools_dir = "config/tools"
workflows_dir = "config/workflows"
mcps_dir = "config/mcps"
enabled = true
TOON files are automatically hot-reloaded when changed. See docs/DIR-12-research.md for details.
User-Created Agents API
Users can create custom agents stored in the database with TOON import/export:
# Create a custom agent
# Export as TOON
# Import from TOON
Architecture
                         ares.toml (Configuration)
         [providers]  [models]  [agents]  [tools]  [workflows]
                                  |
                                  |  Hot Reload
                                  v
                          AresConfigManager
                     (thread-safe config access)
                                  |
             +--------------------+--------------------+
             |                    |                    |
      Provider Registry     Agent Registry       Tool Registry
             |                    |                    |
             |             ConfigurableAgent <---------+  (filtered tools)
             |                    |
             v                    v
           LLM Clients: Ollama / OpenAI / LlamaCpp
                                  |
                                  v
                           Workflow Engine
        Workflow Config --execute_workflow()--> Agent Execution
                                  |
             +--------------------+--------------------+
             |                    |                    |
      API Layer (Axum)        Tool Calls         Knowledge Bases
      /api/chat               - Calculator       - SQLite
      /api/research           - Web Search       - Qdrant
      /api/workflows
Key Components
- AresConfigManager: Thread-safe configuration management with hot-reloading
- ProviderRegistry: Creates LLM clients based on model configuration
- AgentRegistry: Creates ConfigurableAgents from TOML configuration
- ToolRegistry: Manages available tools and their configurations
- ConfigurableAgent: Generic agent implementation that uses config for behavior
- WorkflowEngine: Executes declarative workflows defined in TOML
API Documentation
Interactive Swagger UI available at: http://localhost:3000/swagger-ui/
Note: Swagger UI requires the `swagger-ui` feature to be enabled at build time:
# Or use the full bundle:
Authentication
Register
Login
Response:
Chat
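As an illustration only (the `/api/chat` path and JWT bearer auth appear elsewhere in this README; the JSON field names are assumptions):

```bash
# Illustrative request; adjust field names to the actual API schema
curl -X POST http://localhost:3000/api/chat \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "What is 2 + 2?"}'
```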
Deep Research
Workflows
Workflows enable multi-agent orchestration. Define workflows in ares.toml:
# Field names are representative; see the generated ares.toml for the exact schema.
[workflows.default]
entry_agent = "router"          # Starting agent
fallback_agent = "orchestrator" # Used if routing fails
max_depth = 5                   # Maximum agent chain depth
max_iterations = 10             # Maximum total iterations
List Available Workflows
Response:
Execute a Workflow
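As a sketch (only the `/api/workflows` prefix is documented here; the concrete route and body shape are assumptions):

```bash
# Illustrative request; the real route and payload may differ
curl -X POST http://localhost:3000/api/workflows/default/execute \
  -H "Authorization: Bearer $JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"input": "Summarize recent feedback about our top product"}'
```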
Response:
Workflow with Context
RAG (Retrieval Augmented Generation)
A.R.E.S includes a comprehensive RAG system with a pure-Rust vector store. Requires the ares-vector feature.
Ingest Documents
Search Documents
Search Strategies:
- semantic: Vector similarity search
- bm25: Traditional keyword matching
- fuzzy: Typo-tolerant search
- hybrid: Weighted combination of semantic + BM25
List Collections
Tool Calling
A.R.E.S supports tool calling with Ollama models that support function calling (ministral-3:3b+, mistral, etc.):
Built-in Tools
- calculator: Basic arithmetic operations
- web_search: Web search via DuckDuckGo (no API key required)
Tool Calling Example
// Sketch only: module paths and tool type names are illustrative;
// ToolRegistry is the registry type named elsewhere in this README.
use ares::tools::{Calculator, WebSearch};
use ares::tools::ToolRegistry;

// Set up tools
let mut registry = ToolRegistry::new();
registry.register(Calculator::default());
registry.register(WebSearch::default());
## Testing
A.R.E.S has comprehensive test coverage with both mocked and live tests.
### Unit & Integration Tests
```bash
# Run all tests
cargo test
# Or: just test
# Run with verbose output
cargo test -- --nocapture
# Or: just test-verbose
```
Live Ollama Tests
Tests that connect to a real Ollama instance are available but ignored by default.
Prerequisites
- Running Ollama server at http://localhost:11434
- A model installed (e.g., ollama pull ministral-3:3b)
Running Live Tests
# Set the environment variable and run ignored tests
OLLAMA_LIVE_TESTS=1 cargo test -- --ignored
# Or: just test-ignored
# All tests (normal + ignored)
# With verbose output
# With custom Ollama URL or model
OLLAMA_URL=http://192.168.1.100:11434 OLLAMA_MODEL=mistral OLLAMA_LIVE_TESTS=1 \
  cargo test -- --ignored
Or add OLLAMA_LIVE_TESTS=1 to your .env file.
API Tests (Hurl)
End-to-end API tests using Hurl:
# Install Hurl
# Run API tests (server must be running)
# Run with verbose output
# Run specific test group
See CONTRIBUTING.md for more testing details.
Common Commands (just)
A.R.E.S uses just as a command runner. Run just --list to see all available commands:
# Show all commands
# Build & Run
# CLI Commands
# Testing
# Code Quality
# Docker
# UI Development
# Ollama
# Info
Troubleshooting
Configuration File Not Found
# Error: Configuration file 'ares.toml' not found!
# Solution: Initialize a new project
Port Already in Use
# Error: Address already in use (os error 48)
# Find the process using port 3000
# Kill the process
Ollama Connection Failed
# Check if Ollama is running
# Start Ollama
# Or start via Docker
Missing Environment Variables
# Error: MissingEnvVar("JWT_SECRET")
# Solution: Set up environment variables
# Edit .env and set JWT_SECRET (min 32 characters) and API_KEY
UI Build Errors (Node.js runtime required)
# Error: npx: command not found
# Solution: Install a Node.js runtime
# Option 1: Install Bun (recommended)
# Option 2: Install Node.js
# or download from https://nodejs.org
WASM Build Errors
# Error: target `wasm32-unknown-unknown` not found
# Solution: Add the WASM target
# Install trunk
Requirements
Minimum Requirements
- Rust: 1.91 or later
- Operating System: Linux, macOS, or Windows
- Memory: 2GB RAM (4GB+ recommended for larger models)
Optional Requirements
- Ollama: For local LLM inference (recommended)
- Node.js runtime: Bun, npm, or Deno (required for UI development)
- Docker: For containerized deployment
- GPU: NVIDIA (CUDA) or Apple Silicon (Metal) for accelerated inference
Security Considerations
- JWT_SECRET: Must be at least 32 characters. Generate with `openssl rand -base64 32` (see the snippet after this list)
- API_KEY: Should be unique per deployment
- Environment Variables: Never commit `.env` files to version control
- HTTPS: Use HTTPS in production (configure via reverse proxy)
- Rate Limiting: Consider adding rate limiting for production deployments
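For example, a compliant secret can be generated straight into your `.env` file:

```bash
# Append a freshly generated 32-byte base64 secret to .env
echo "JWT_SECRET=$(openssl rand -base64 32)" >> .env
```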
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Quick Contribution Guide
# 1. Fork and clone the repository
# 2. Create a feature branch
# 3. Make your changes and run tests
# 4. Commit and push
# 5. Open a Pull Request
Development Setup
# Install development dependencies
# Run pre-commit checks before pushing
Changelog
See CHANGELOG.md for a list of changes in each version.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Ollama - Local LLM inference
- llama.cpp - GGUF model support
- Axum - Web framework
- Leptos - Reactive web UI framework
- TOON Format - Token-optimized configuration format
Support
- Documentation
- Issue Tracker
- Discussions
- Latest Release
Made with ❤️ by Dirmacs