The Semantic Infrastructure for Intelligent Applications
Enabling enterprises to build secure, scalable, and intelligent distributed systems
§HOPE Agents - Hierarchical Optimistic Policy Engine
Autonomous AI agents framework for AIngle semantic networks.
This crate implements the Hierarchical Optimistic Policy Engine (HOPE) for autonomous decision-making within the AIngle framework, providing reinforcement learning capabilities for agents.
§Overview
HOPE Agents provides a complete framework for building autonomous AI agents that can:
- Observe their environment (IoT sensors, network events, user inputs)
- Decide based on learned policies and hierarchical goals
- Execute actions in the AIngle network
- Learn and adapt over time using reinforcement learning
This crate is designed for use cases ranging from simple reactive agents to complex multi-agent systems with learning capabilities.
§Architecture
┌─────────────────────────────────────────────────────────────┐
│ HOPE Agent │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────┐ │
│ │ Sensors │ │ Policy │ │ Actuators │ │
│ │ │ │ Engine │ │ │ │
│ │ • IoT data │─►│ │─►│ • Network calls │ │
│ │ • Events │ │ • Goals │ │ • State changes │ │
│ │ • Messages │ │ • Rules │ │ • Messages │ │
│ └──────────────┘ │ • Learning │ └──────────────────┘ │
│ └──────┬───────┘ │
│ │ │
│ ┌──────▼───────┐ │
│ │ Memory │ │
│ │ (Titans) │ │
│ │ │ │
│ │ STM ◄──► LTM │ │
│ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘

§Quick Start
§Simple Reactive Agent
```rust
use hope_agents::{Agent, SimpleAgent, Goal, Observation, Rule, Condition, Action};

// Create a simple reactive agent
let mut agent = SimpleAgent::new("sensor_monitor");

// Add a rule: if temperature > 30, alert
let rule = Rule::new(
    "high_temp",
    Condition::above("temperature", 30.0),
    Action::alert("Temperature too high!"),
);
agent.add_rule(rule);

// Process observations
let obs = Observation::sensor("temperature", 35.0);
agent.observe(obs.clone());
let action = agent.decide();
let result = agent.execute(action.clone());
agent.learn(&obs, &action, &result);
```

§HOPE Agent with Learning
```rust
use hope_agents::{HopeAgent, HopeConfig, Observation, Goal, GoalPriority, Outcome};

// Create a HOPE agent with learning, prediction, and hierarchical goals
let mut agent = HopeAgent::with_default_config();

// Set a goal
let goal = Goal::maintain("temperature", 20.0..25.0)
    .with_priority(GoalPriority::High);
agent.set_goal(goal);

// Agent loop with reinforcement learning
for _episode in 0..100 {
    let obs = Observation::sensor("temperature", 22.0);
    let action = agent.step(obs.clone());

    // Execute the action in the environment and collect the result and reward
    let result = agent.execute(action.clone());
    let reward = 1.0; // Example reward
    let next_obs = Observation::sensor("temperature", 21.0);

    let outcome = Outcome::new(action, result, reward, next_obs, false);
    agent.learn(outcome);
}
```

§Multi-Agent Coordination
```rust
use hope_agents::{AgentCoordinator, HopeAgent, Message, Observation};
use std::collections::HashMap;

// Create coordinator
let mut coordinator = AgentCoordinator::new();

// Register agents
let agent1 = HopeAgent::with_default_config();
let agent2 = HopeAgent::with_default_config();
let id1 = coordinator.register_agent(agent1);
let id2 = coordinator.register_agent(agent2);

// Broadcast a message to every registered agent
coordinator.broadcast(Message::new("update", "System status changed"));

// Step all agents, each with its own observation
let mut observations = HashMap::new();
observations.insert(id1, Observation::sensor("temp", 20.0));
observations.insert(id2, Observation::sensor("humidity", 60.0));
let actions = coordinator.step_all(observations);
```

§State Persistence
```rust
use hope_agents::{HopeAgent, AgentPersistence};
use std::path::Path;

let mut agent = HopeAgent::with_default_config();
// Train the agent...

// Save agent state
agent.save_to_file(Path::new("agent_state.json")).unwrap();

// Later, load agent state
let loaded_agent = HopeAgent::load_from_file(Path::new("agent_state.json")).unwrap();
```

§Agent Types
- ReactiveAgent: Simple stimulus-response behavior
- GoalBasedAgent: Works toward explicit goals
- LearningAgent: Adapts behavior over time
- CooperativeAgent: Coordinates with other agents
Re-exports§
pub use action::{Action, ActionResult, ActionType};
pub use agent::{Agent, AgentId, AgentState, SimpleAgent};
pub use config::AgentConfig;
pub use coordination::{AgentCoordinator, ConsensusResult, CoordinationError, Message, MessageBus, MessageId, MessagePayload, MessagePriority};
pub use error::{Error, Result};
pub use goal::{Goal, GoalPriority, GoalStatus, GoalType};
pub use hierarchical::{default_decomposition_rules, ConflictResolution, ConflictType, DecompositionResult, DecompositionRule, DecompositionStrategy, GoalConflict, GoalTree, GoalTypeFilter, HierarchicalGoalSolver, ParallelStrategy, SequentialStrategy};
pub use hope_agent::{AgentStats, GoalSelectionStrategy, HopeAgent, HopeConfig, OperationMode, Outcome, SerializedState};
pub use learning::{ActionId, Experience, LearningAlgorithm, LearningConfig, LearningEngine, QValue, StateActionPair, StateId};
pub use observation::{Observation, ObservationType, Sensor};
pub use persistence::{AgentPersistence, CheckpointManager, LearningSnapshot, PersistenceError, PersistenceFormat, PersistenceOptions};
pub use policy::{Condition, Policy, PolicyEngine, Rule};
pub use predictive::{AnomalyDetector, PredictedState, PredictiveConfig, PredictiveModel, StateEncoder, StateSnapshot, Trajectory, TransitionModel};
pub use types::*;
Modules§
- action
- Action types for HOPE Agents.
- agent
- The core Agent trait and a simple, concrete implementation.
- config
- Configuration for HOPE Agents.
- coordination
- Multi-Agent Coordination.
- error
- Error types for the HOPE Agents framework.
- goal
- Goal types for HOPE Agents.
- hierarchical
- Hierarchical goal decomposition and management for HOPE agents.
- hope_agent
- The main HOPE Agent orchestrator.
- learning
- Learning module for HOPE Agents.
- observation
- Observation types for HOPE Agents.
- persistence
- Agent State Persistence.
- policy
- Policy engine for HOPE Agents.
- predictive
- Predictive modeling for state and reward prediction in HOPE agents.
- types
- Core, general-purpose data types for the HOPE Agents framework.
Constants§
- VERSION
- HOPE framework version
Functions§
- create_agent
- Creates a simple agent with default configuration.
- create_iot_agent
- Creates an IoT-optimized agent with reduced memory footprint.