# Helios Engine - LLM Agent Framework
Helios Engine is a powerful and flexible Rust framework for building LLM-powered agents with tool support, streaming chat capabilities, and easy configuration management. Create intelligent agents that can interact with users, call tools, and maintain conversation context - with both online and offline local model support.
## Key Features
- 🆕 ReAct Mode: Enable agents to reason and plan before taking actions with a simple `.react()` call - includes custom reasoning prompts for domain-specific tasks
- 🆕 Forest of Agents: Multi-agent collaboration system where agents can communicate, delegate tasks, and share context
- Agent System: Create multiple agents with different personalities and capabilities
- 🆕 Tool Builder: Simplified tool creation with builder pattern - wrap any function as a tool without manual trait implementation
- Tool Registry: Extensible tool system for adding custom functionality
- Extensive Tool Suite: 16+ built-in tools including web scraping, JSON parsing, timestamp operations, file I/O, shell commands, HTTP requests, system info, and text processing
- 🆕 RAG System: Retrieval-Augmented Generation with vector stores (InMemory and Qdrant)
- 🆕 Custom Endpoints: Ultra-simple API for adding custom HTTP endpoints to your agent server - ~70% less code than before!
- Streaming Support: True real-time response streaming for both remote and local models with immediate token delivery
- Local Model Support: Run local models offline using llama.cpp with HuggingFace integration (optional `local` feature)
- HTTP Server & API: Expose OpenAI-compatible API endpoints with full parameter support
- Dual Mode Support: Auto, online (remote API), and offline (local) modes
- CLI & Library: Use as both a command-line tool and a Rust library crate
- 🆕 Feature Flags: Optional `local` feature for offline model support - build only what you need!
- 🆕 Improved Syntax: Cleaner, more ergonomic API for adding multiple tools and agents - use `.tools(vec![...])` and `.agents(vec![...])` for bulk operations
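The Tool Builder feature above promises that any plain function can be wrapped as a tool without implementing a trait by hand. As a self-contained illustration, this is the kind of ordinary Rust function you would hand to it; the wrapping call itself is framework-specific and omitted here (see the Tools Guide for the real API):

```rust
/// An ordinary function of the kind the Tool Builder can wrap as an
/// agent tool. No framework types are involved; the builder handles
/// the tool-trait plumbing (see the Tools Guide for the wrapping API).
fn word_count(text: &str) -> usize {
    text.split_whitespace().count()
}

fn main() {
    // An agent would pass the text as a tool argument and receive the
    // count back as the tool result.
    println!("{}", word_count("Helios agents can call plain functions"));
}
```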
## Documentation

### Online Resources
- Official Website - Complete interactive documentation with tutorials, guides, and examples
- Official Book - Comprehensive guide to Helios Engine
- API Reference - Detailed API documentation on docs.rs
### Quick Links
- Getting Started - Installation and first steps
- Core Concepts - Agents, LLMs, chat, and error handling
- Tools - Using and creating tools
- Forest of Agents - Multi-agent systems
- RAG System - Retrieval-Augmented Generation
- Examples - Code examples and use cases
### Local Documentation
- Getting Started - Comprehensive guide: installation, configuration, first agent, tools, and CLI
- Tools Guide - Built-in tools, custom tool creation, and Tool Builder
- Forest of Agents - Multi-agent systems, coordination, and communication
- RAG System - Retrieval-Augmented Generation with vector stores
- API Reference - Complete API documentation
- Configuration - Configuration options and local inference setup
- Using as Crate - Library usage guide
Full Documentation Index - Complete navigation and updated structure
## Quick Start
Version 0.5.0
### Install CLI Tool

```shell
# Install without local model support (lighter, faster install)
cargo install helios-engine

# Install with local model support (enables offline mode with llama-cpp-2)
cargo install helios-engine --features local
```
### Basic Usage

```shell
# Initialize configuration
helios-engine init

# Start interactive chat
helios-engine chat

# Ask a quick question
helios-engine ask "What is the capital of France?"
```

(The subcommand names above are illustrative; see the Getting Started guide for the exact CLI commands.)
### As a Library Crate

Add to your `Cargo.toml`:

```toml
[dependencies]
helios-engine = "0.5.0"
tokio = { version = "1.35", features = ["full"] }
```
#### Simplest Agent (3 lines!)

A minimal sketch (the exact builder and method names may differ from the published API; see the API Reference):

```rust
use helios_engine::Agent;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let agent = Agent::builder().build().await?;
    println!("{}", agent.chat("Hello!").await?);
    Ok(())
}
```
#### With Tools & Custom Config

A sketch under the same caveat (type and method names are assumed):

```rust
use helios_engine::{Agent, Config};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::from_file("config.toml")?;
    let agent = Agent::builder()
        .config(config)
        .tools(vec![/* tool instances */])
        .build()
        .await?;
    println!("{}", agent.chat("What time is it?").await?);
    Ok(())
}
```
For local model support:
```toml
[dependencies]
helios-engine = { version = "0.5.0", features = ["local"] }
tokio = { version = "1.35", features = ["full"] }
```
#### Simple Example

A sketch of offline use (method names are assumed; the `local` feature must be enabled):

```rust
use helios_engine::Agent;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Run fully offline against a local llama.cpp model.
    let agent = Agent::builder().offline().build().await?;
    println!("{}", agent.chat("Hello from a local model!").await?);
    Ok(())
}
```
See Getting Started Guide or visit the Official Book for detailed examples and comprehensive tutorials!
## Custom Endpoints Made Simple
Create custom HTTP endpoints with minimal code:
A sketch (function and method names are assumed; see the Custom Endpoints Guide for the real API):

```rust
use helios_engine::Endpoint;

let endpoints = vec![/* Endpoint definitions */];

with_agent(agent)
    .endpoints(endpoints)
    .serve("127.0.0.1:8080")
    .await?;
```
~70% less code than the old API! See the Custom Endpoints Guide for details.
## Use Cases
- Chatbots & Virtual Assistants: Build conversational AI with tool access and memory
- Multi-Agent Systems: Coordinate multiple specialized agents for complex workflows
- Data Analysis: Agents that can read files, process data, and generate reports
- Web Automation: Scrape websites, make API calls, and process responses
- Knowledge Management: Build RAG systems for semantic search and Q&A
- API Services: Expose your agents via OpenAI-compatible HTTP endpoints
- Local AI: Run models completely offline for privacy and security
## Built-in Tools (16+)
Helios Engine includes a comprehensive suite of production-ready tools:
- File Management: Read, write, edit, and search files
- Web & API: Web scraping, HTTP requests
- System Utilities: Shell commands, system information
- Data Processing: JSON parsing, text manipulation, timestamps
- Communication: Agent-to-agent messaging
- Knowledge: RAG tool for semantic search and retrieval
Learn more in the Tools Guide.
## Project Structure

```text
helios-engine/
├── src/          # Source code
├── examples/     # Example applications
├── docs/         # Documentation
├── book/         # mdBook source (deployed to Vercel)
├── tests/        # Integration tests
├── Cargo.toml    # Project configuration
└── README.md     # This file
```
## Contributing
We welcome contributions! See our Contributing Guide for details on:
- Development setup
- Code standards
- Documentation guidelines
- Testing procedures
## Links
- Official Website & Book - Complete documentation and guides
- Crates.io - Package registry
- API Documentation - API reference
- GitHub Repository - Source code
- Examples - Code examples
## License
This project is licensed under the MIT License - see the LICENSE file for details.