helios-engine 0.1.6


🚀 Helios Engine - LLM Agent Framework


Helios Engine is a powerful and flexible Rust framework for building LLM-powered agents with tool support, chat capabilities, and easy configuration management. Create intelligent agents that can interact with users, call tools, and maintain conversation context.

✨ Features

  • 🤖 Agent System: Create multiple agents with different personalities and capabilities
  • 🛠️ Tool Registry: Extensible tool system for adding custom functionality
  • 💬 Chat Management: Built-in conversation history and session management
  • ⚡ Streaming Support: Real-time response streaming with thinking-tag detection
  • ⚙️ Configuration: TOML-based configuration for LLM settings
  • 🔌 LLM Support: Compatible with the OpenAI API and any OpenAI-compatible API
  • 🔄 Async/Await: Built on Tokio for high-performance async operations
  • 🎯 Type-Safe: Leverages Rust's type system for safe and reliable code
  • 📦 Extensible: Easy to add custom tools and extend functionality
  • 💭 Thinking Tags: Automatic detection and display of the model's reasoning process

🔧 Installation

Helios Engine can be used both as a command-line tool and as a library crate in your Rust projects.

As a CLI Tool (Recommended for Quick Start)

Install globally using Cargo:

cargo install helios-engine

Then use anywhere:

# Initialize configuration
helios-engine init

# Start interactive chat
helios-engine chat

# Ask a quick question
helios-engine ask "What is Rust?"

# Get help
helios-engine --help

As a Library Crate

Add Helios-Engine to your Cargo.toml:

[dependencies]
helios-engine = "0.1.6"
tokio = { version = "1.35", features = ["full"] }

Or use a local path during development:

[dependencies]
helios-engine = { path = "../helios" }
tokio = { version = "1.35", features = ["full"] }

Build from Source

git clone https://github.com/Ammar-Alnagar/Helios-Engine.git
cd Helios-Engine
cargo build --release

# Install locally
cargo install --path .

🚀 Quick Start

Using as a Library Crate

The simplest way to use Helios Engine is to call LLM models directly:

use helios_engine::{LLMClient, ChatMessage};
use helios_engine::config::LLMConfig;

#[tokio::main]
async fn main() -> helios_engine::Result<()> {
    // Configure the LLM
    let llm_config = LLMConfig {
        model_name: "gpt-3.5-turbo".to_string(),
        base_url: "https://api.openai.com/v1".to_string(),
        api_key: std::env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY must be set"),
        temperature: 0.7,
        max_tokens: 2048,
    };

    // Create client
    let client = LLMClient::new(llm_config);

    // Make a call
    let messages = vec![
        ChatMessage::system("You are a helpful assistant."),
        ChatMessage::user("What is the capital of France?"),
    ];

    let response = client.chat(messages, None).await?;
    println!("Response: {}", response.content);

    Ok(())
}

📚 For detailed examples of using Helios Engine as a crate, see the guide in docs/USING_AS_CRATE.md

Using with Agent System

For more advanced use cases with tools and persistent conversation:

1. Configure Your LLM

Create a config.toml file:

[llm]
model_name = "gpt-3.5-turbo"
base_url = "https://api.openai.com/v1"
api_key = "your-api-key-here"
temperature = 0.7
max_tokens = 2048

2. Create Your First Agent

use helios_engine::{Agent, Config, CalculatorTool};

#[tokio::main]
async fn main() -> helios_engine::Result<()> {
    // Load configuration
    let config = Config::from_file("config.toml")?;

    // Create an agent with tools
    let mut agent = Agent::builder("HeliosAgent")
        .config(config)
        .system_prompt("You are a helpful AI assistant.")
        .tool(Box::new(CalculatorTool))
        .build()?;

    // Chat with the agent
    let response = agent.chat("What is 15 * 7?").await?;
    println!("Agent: {}", response);

    Ok(())
}

3. Run the Interactive Demo

cargo run

โš™๏ธ Configuration

Helios Engine uses TOML for configuration. Here's a complete configuration example:

[llm]
# The model name (e.g., gpt-3.5-turbo, gpt-4, claude-3, etc.)
model_name = "gpt-3.5-turbo"

# Base URL for the API (OpenAI or compatible)
base_url = "https://api.openai.com/v1"

# Your API key
api_key = "sk-..."

# Temperature for response generation (0.0 - 2.0)
temperature = 0.7

# Maximum tokens in response
max_tokens = 2048

Supported LLM Providers

Helios Engine works with any OpenAI-compatible API:

  • OpenAI: https://api.openai.com/v1
  • Azure OpenAI: https://your-resource.openai.azure.com/openai/deployments/your-deployment
  • Local Models (LM Studio): http://localhost:1234/v1
  • Ollama with OpenAI compatibility: http://localhost:11434/v1
  • Any OpenAI-compatible API
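For example, a config.toml pointing at a local Ollama server could look like the following. The model name is illustrative (use whatever you have pulled locally), and local servers generally ignore the API key, so any placeholder works:

```toml
[llm]
# Any model pulled locally, e.g. via `ollama pull llama3`
model_name = "llama3"

# Ollama's OpenAI-compatible endpoint
base_url = "http://localhost:11434/v1"

# Local servers typically ignore this; a placeholder is fine
api_key = "ollama"

temperature = 0.7
max_tokens = 2048
```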

๐Ÿ—๏ธ Architecture

System Overview

graph TB
    User[User] -->|Input| Agent[Agent]
    Agent -->|Messages| LLM[LLM Client]
    Agent -->|Tool Calls| Registry[Tool Registry]
    Registry -->|Execute| Tools[Tools]
    Tools -->|Results| Agent
    LLM -->|Response| Agent
    Agent -->|Output| User
    Config[Config TOML] -->|Load| Agent

    style Agent fill:#4CAF50
    style LLM fill:#2196F3
    style Registry fill:#FF9800
    style Tools fill:#9C27B0

Component Architecture

classDiagram
    class Agent {
        +name: String
        +llm_client: LLMClient
        +tool_registry: ToolRegistry
        +chat_session: ChatSession
        +chat(message) ChatMessage
        +register_tool(tool) void
        +clear_history() void
    }

    class LLMClient {
        +config: LLMConfig
        +chat(messages, tools) ChatMessage
        +generate(request) LLMResponse
    }

    class ToolRegistry {
        +tools: HashMap
        +register(tool) void
        +execute(name, args) ToolResult
        +get_definitions() Vec
    }

    class Tool {
        <<interface>>
        +name() String
        +description() String
        +parameters() HashMap
        +execute(args) ToolResult
    }

    class ChatSession {
        +messages: Vec
        +system_prompt: Option
        +add_message(msg) void
        +clear() void
    }

    class Config {
        +llm: LLMConfig
        +from_file(path) Config
        +save(path) void
    }

    Agent --> LLMClient
    Agent --> ToolRegistry
    Agent --> ChatSession
    Agent --> Config
    ToolRegistry --> Tool
    Tool <|-- CalculatorTool
    Tool <|-- EchoTool
    Tool <|-- CustomTool

Agent Execution Flow

sequenceDiagram
    participant User
    participant Agent
    participant LLM
    participant ToolRegistry
    participant Tool

    User->>Agent: Send Message
    Agent->>Agent: Add to Chat History

    loop Until No Tool Calls
        Agent->>LLM: Send Messages + Tool Definitions
        LLM->>Agent: Response (with/without tool calls)

        alt Has Tool Calls
            Agent->>ToolRegistry: Execute Tool
            ToolRegistry->>Tool: Call with Arguments
            Tool->>ToolRegistry: Return Result
            ToolRegistry->>Agent: Tool Result
            Agent->>Agent: Add Tool Result to History
        else No Tool Calls
            Agent->>User: Return Final Response
        end
    end

Tool Execution Pipeline

flowchart LR
    A[User Request] --> B{LLM Decision}
    B -->|Need Tool| C[Get Tool Definition]
    C --> D[Parse Arguments]
    D --> E[Execute Tool]
    E --> F[Format Result]
    F --> G[Add to Context]
    G --> B
    B -->|No Tool Needed| H[Return Response]
    H --> I[User]

    style B fill:#FFD700
    style E fill:#4CAF50
    style H fill:#2196F3
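The loop shown in the diagrams above can be sketched in a few lines of plain Rust. This is a generic mock (hand-rolled types, a toy calculator, and a fake "LLM decision"), not the helios-engine API:

```rust
use std::collections::HashMap;

// A tool is just a named function over a string argument.
type Tool = fn(&str) -> String;

// Toy calculator: handles "A * B" expressions only.
fn calculator(expr: &str) -> String {
    let parts: Vec<&str> = expr.split('*').map(str::trim).collect();
    let a: i64 = parts[0].parse().unwrap();
    let b: i64 = parts[1].parse().unwrap();
    (a * b).to_string()
}

// One turn of the agent loop: the mock "LLM" requests the calculator
// for arithmetic-looking input, the tool result is appended to the
// context, and the loop exits once no further tool call is needed.
fn execute_turn(user_input: &str, tools: &HashMap<&str, Tool>) -> String {
    let mut context = vec![format!("user: {}", user_input)];
    loop {
        // Mock decision: call a tool if the input looks like math and
        // no tool result is in the context yet.
        let needs_tool = user_input.contains('*')
            && !context.iter().any(|m| m.starts_with("tool:"));
        if needs_tool {
            let result = tools["calculator"](user_input);
            context.push(format!("tool: {}", result)); // feed result back
        } else {
            // No (more) tool calls: produce the final answer.
            return context
                .iter()
                .find(|m| m.starts_with("tool:"))
                .map(|m| m.trim_start_matches("tool: ").to_string())
                .unwrap_or_else(|| "(no tool needed)".to_string());
        }
    }
}

fn main() {
    let mut tools: HashMap<&str, Tool> = HashMap::new();
    tools.insert("calculator", calculator);
    println!("{}", execute_turn("10 * 5", &tools));
}
```

The real agent replaces the mock decision with the model's actual tool-call output and keeps iterating until the response contains no tool calls (bounded by max_iterations).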

📚 Usage Examples

Basic Chat

use helios_engine::{Agent, Config};

#[tokio::main]
async fn main() -> helios_engine::Result<()> {
    let config = Config::from_file("config.toml")?;

    let mut agent = Agent::builder("Assistant")
        .config(config)
        .system_prompt("You are a helpful assistant.")
        .build()?;

    let response = agent.chat("Hello!").await?;
    println!("{}", response);

    Ok(())
}

Agent with Built-in Tools

use helios_engine::{Agent, Config, CalculatorTool, EchoTool};

#[tokio::main]
async fn main() -> helios_engine::Result<()> {
    let config = Config::from_file("config.toml")?;

    let mut agent = Agent::builder("ToolAgent")
        .config(config)
        .system_prompt("You have access to tools. Use them wisely.")
        .tool(Box::new(CalculatorTool))
        .tool(Box::new(EchoTool))
        .max_iterations(5)
        .build()?;

    // The agent will automatically use the calculator
    let response = agent.chat("What is 123 * 456?").await?;
    println!("{}", response);

    Ok(())
}

Multiple Agents

use helios_engine::{Agent, Config};

#[tokio::main]
async fn main() -> helios_engine::Result<()> {
    let config = Config::from_file("config.toml")?;

    let mut poet = Agent::builder("Poet")
        .config(config.clone())
        .system_prompt("You are a creative poet.")
        .build()?;

    let mut scientist = Agent::builder("Scientist")
        .config(config)
        .system_prompt("You are a knowledgeable scientist.")
        .build()?;

    let poem = poet.chat("Write a haiku about code").await?;
    let fact = scientist.chat("Explain quantum physics").await?;

    println!("Poet: {}\n", poem);
    println!("Scientist: {}", fact);

    Ok(())
}

๐Ÿ› ๏ธ Creating Custom Tools

Implement the Tool trait to create custom tools:

use async_trait::async_trait;
use helios_engine::{Agent, Config, Tool, ToolParameter, ToolResult};
use serde_json::Value;
use std::collections::HashMap;

struct WeatherTool;

#[async_trait]
impl Tool for WeatherTool {
    fn name(&self) -> &str {
        "get_weather"
    }

    fn description(&self) -> &str {
        "Get the current weather for a location"
    }

    fn parameters(&self) -> HashMap<String, ToolParameter> {
        let mut params = HashMap::new();
        params.insert(
            "location".to_string(),
            ToolParameter {
                param_type: "string".to_string(),
                description: "City name".to_string(),
                required: Some(true),
            },
        );
        params
    }

    async fn execute(&self, args: Value) -> helios_engine::Result<ToolResult> {
        let location = args["location"].as_str().unwrap_or("Unknown");

        // Your weather API logic here
        let weather = format!("Weather in {}: Sunny, 72°F", location);

        Ok(ToolResult::success(weather))
    }
}

// Use your custom tool
#[tokio::main]
async fn main() -> helios_engine::Result<()> {
    let config = Config::from_file("config.toml")?;

    let mut agent = Agent::builder("WeatherAgent")
        .config(config)
        .tool(Box::new(WeatherTool))
        .build()?;

    let response = agent.chat("What's the weather in Tokyo?").await?;
    println!("{}", response);

    Ok(())
}

📖 API Documentation

Core Types

Agent

The main agent struct that manages conversation and tool execution.

Methods:

  • builder(name) - Create a new agent builder
  • chat(message) - Send a message and get a response
  • register_tool(tool) - Add a tool to the agent
  • clear_history() - Clear conversation history
  • set_system_prompt(prompt) - Set the system prompt
  • set_max_iterations(max) - Set maximum tool call iterations

Config

Configuration management for LLM settings.

Methods:

  • from_file(path) - Load config from TOML file
  • default() - Create default configuration
  • save(path) - Save config to file

ToolRegistry

Manages and executes tools.

Methods:

  • new() - Create empty registry
  • register(tool) - Register a new tool
  • execute(name, args) - Execute a tool by name
  • get_definitions() - Get all tool definitions
  • list_tools() - List registered tool names

ChatSession

Manages conversation history.

Methods:

  • new() - Create new session
  • with_system_prompt(prompt) - Set system prompt
  • add_message(message) - Add message to history
  • clear() - Clear all messages
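Conceptually, a chat session is an append-only message log with an optional system prompt pinned to the front of every request. A minimal generic sketch with hypothetical types (not the actual helios-engine implementation):

```rust
#[derive(Debug, Clone)]
struct Message {
    role: String,
    content: String,
}

// Hypothetical stand-in for ChatSession: a system prompt plus history.
struct Session {
    system_prompt: Option<String>,
    messages: Vec<Message>,
}

impl Session {
    fn new() -> Self {
        Session { system_prompt: None, messages: Vec::new() }
    }

    fn with_system_prompt(mut self, prompt: &str) -> Self {
        self.system_prompt = Some(prompt.to_string());
        self
    }

    fn add_message(&mut self, role: &str, content: &str) {
        self.messages.push(Message { role: role.into(), content: content.into() });
    }

    // Materialize the full message list sent to the LLM: system prompt
    // first (if any), then the conversation history.
    fn to_request(&self) -> Vec<Message> {
        let mut out = Vec::new();
        if let Some(p) = &self.system_prompt {
            out.push(Message { role: "system".into(), content: p.clone() });
        }
        out.extend(self.messages.iter().cloned());
        out
    }

    fn clear(&mut self) {
        self.messages.clear(); // the system prompt survives a clear
    }
}

fn main() {
    let mut s = Session::new().with_system_prompt("You are helpful.");
    s.add_message("user", "Hello!");
    println!("{} messages in request", s.to_request().len());
}
```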

Built-in Tools

CalculatorTool

Performs basic arithmetic operations.

Parameters:

  • expression (string, required): Mathematical expression

Example:

agent.register_tool(Box::new(CalculatorTool));

EchoTool

Echoes back a message.

Parameters:

  • message (string, required): Message to echo

Example:

agent.register_tool(Box::new(EchoTool));

๐Ÿ“ Project Structure

helios/
├── Cargo.toml              # Project configuration
├── README.md               # This file
├── config.example.toml     # Example configuration
├── .gitignore              # Git ignore rules
│
├── src/
│   ├── lib.rs             # Library entry point
│   ├── main.rs            # Binary entry point (interactive demo)
│   ├── agent.rs           # Agent implementation
│   ├── llm.rs             # LLM client and provider
│   ├── tools.rs           # Tool system and built-in tools
│   ├── chat.rs            # Chat message and session types
│   ├── config.rs          # Configuration management
│   └── error.rs           # Error types
│
├── docs/
│   ├── API.md                    # API reference
│   ├── QUICKSTART.md             # Quick start guide
│   ├── TUTORIAL.md               # Detailed tutorial
│   └── USING_AS_CRATE.md         # Using Helios as a library
│
└── examples/
    ├── basic_chat.rs             # Simple chat example
    ├── agent_with_tools.rs       # Tool usage example
    ├── custom_tool.rs            # Custom tool implementation
    ├── multiple_agents.rs        # Multiple agents example
    └── direct_llm_usage.rs       # Direct LLM client usage

Module Overview

helios-engine/
│
├── 📦 agent           - Agent system and builder pattern
├── 🤖 llm             - LLM client and API communication
├── 🛠️ tools           - Tool registry and implementations
├── 💬 chat            - Chat messages and session management
├── ⚙️ config          - TOML configuration loading/saving
└── ❌ error           - Error types and Result alias

🎯 Examples

Run the included examples:

# Basic chat
cargo run --example basic_chat

# Agent with tools
cargo run --example agent_with_tools

# Custom tool
cargo run --example custom_tool

# Multiple agents
cargo run --example multiple_agents

🧪 Testing

Run tests:

cargo test

Run with logging:

RUST_LOG=debug cargo run

๐Ÿ” Advanced Features

Custom LLM Providers

Implement the LLMProvider trait for custom backends:

use async_trait::async_trait;
use helios_engine::{LLMProvider, LLMRequest, LLMResponse};

struct CustomProvider;

#[async_trait]
impl LLMProvider for CustomProvider {
    async fn generate(&self, request: LLMRequest) -> helios_engine::Result<LLMResponse> {
        // Your custom implementation
        todo!()
    }
}

Tool Chaining

Agents automatically chain tool calls:

// The agent can use multiple tools in sequence
let response = agent.chat(
    "Calculate 10 * 5, then echo the result"
).await?;

Conversation Context

Maintain conversation history:

let mut agent = Agent::builder("Assistant")
    .config(config)
    .build()?;

agent.chat("My name is Alice").await?;
agent.chat("What is my name?").await?; // Agent remembers: "Alice"

๐Ÿค Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Development Setup

  1. Clone the repository:
     git clone https://github.com/Ammar-Alnagar/Helios-Engine.git
     cd Helios-Engine
  2. Build the project:
     cargo build
  3. Run tests:
     cargo test
  4. Format code:
     cargo fmt
  5. Check for issues:
     cargo clippy

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

Made with ❤️ in Rust