pub struct LmmAgent {
pub id: String,
pub persona: String,
pub behavior: String,
pub status: Status,
pub memory: Vec<Message>,
pub long_term_memory: Vec<Message>,
pub knowledge: Knowledge,
pub tools: Vec<Tool>,
pub planner: Option<Planner>,
pub reflection: Option<Reflection>,
pub scheduler: Option<TaskScheduler>,
pub profile: Profile,
pub context: ContextManager,
pub capabilities: HashSet<Capability>,
pub tasks: Vec<Task>,
pub knowledge_index: KnowledgeIndex,
pub learning_engine: Option<LearningEngine>,
pub internal_drive: InternalDrive,
}
The core agent type.
Use LmmAgent::builder() for fluent construction, or
LmmAgent::new() for the quick two-argument form.
§Examples
use lmm_agent::agent::LmmAgent;
let agent = LmmAgent::builder()
.persona("Research Agent")
.behavior("Research quantum computing.")
.build();
assert_eq!(agent.persona.as_str(), "Research Agent");
let agent2 = LmmAgent::new("Scientist".into(), "Do science.".into());
assert_eq!(agent2.persona.as_str(), "Scientist");
Fields§
§id: String
Unique identifier for this agent instance (auto-generated UUIDv4).
§persona: String
The role or persona label (e.g. "Research Agent").
§behavior: String
The primary mission statement for this agent (e.g. "Research quantum computing.").
§status: Status
Current lifecycle state.
§memory: Vec<Message>
Hot memory - recent messages kept in RAM.
§long_term_memory: Vec<Message>
Long-term memory - persisted between task executions (in-memory store).
§knowledge: Knowledge
Structured knowledge facts for reasoning.
§tools: Vec<Tool>
Callable tools available to this agent.
§planner: Option<Planner>
Optional goal planner.
§reflection: Option<Reflection>
Self-reflection / evaluation module.
§scheduler: Option<TaskScheduler>
Time-based task scheduler.
§profile: Profile
Personality traits and behavioural profile.
§context: ContextManager
Recent-message context window.
§capabilities: HashSet<Capability>
Capabilities the agent possesses.
§tasks: Vec<Task>
Active task queue.
§knowledge_index: KnowledgeIndex
Queryable knowledge base built from ingested documents or URLs.
§learning_engine: Option<LearningEngine>
Optional HELM learning engine for in-environment lifelong learning.
§internal_drive: InternalDrive
Internalized drive system for intrinsic motivation signals.
Implementations§
impl LmmAgent
pub fn builder() -> LmmAgentBuilder
Returns a new LmmAgentBuilder.
The builder exposes a setter for every field; call .build() to
produce the final LmmAgent.
pub fn add_message(&mut self, message: Message)
Appends a Message to the agent’s hot memory.
pub fn add_ltm_message(&mut self, message: Message)
Appends a Message to the agent’s long-term memory.
pub fn complete_goal(&mut self, description_substr: &str) -> bool
Marks a goal as completed by its description substring.
Returns true if a matching goal was found and updated.
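The matching behaviour can be illustrated with a small self-contained sketch. The Goal struct and the linear scan below are hypothetical stand-ins for illustration; the crate’s actual goal representation lives in the Planner and may differ.

```rust
// Hypothetical sketch of substring-based goal completion; the real
// goal type inside lmm_agent's Planner may differ.
struct Goal {
    description: String,
    completed: bool,
}

fn complete_goal(goals: &mut [Goal], description_substr: &str) -> bool {
    // Mark the first goal whose description contains the substring.
    if let Some(goal) = goals
        .iter_mut()
        .find(|g| g.description.contains(description_substr))
    {
        goal.completed = true;
        return true;
    }
    false
}

fn main() {
    let mut goals = vec![Goal {
        description: "Summarise the Rust ownership chapter".to_string(),
        completed: false,
    }];
    assert!(complete_goal(&mut goals, "ownership"));
    assert!(goals[0].completed);
    // No goal matches an unrelated substring, so nothing is updated.
    assert!(!complete_goal(&mut goals, "quantum"));
}
```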
pub async fn generate(&mut self, request: &str) -> Result<String>
Generates a textual response to request using lmm::predict::TextPredictor.
TextPredictor fits a tone trajectory and a rhythm trajectory over the
input tokens using symbolic regression, then selects continuation words
from compile-time lexical pools: entirely deterministic, no LLM API
required.
When the net feature is enabled, the seed is enriched with DuckDuckGo
search snippets before feeding it to the predictor.
§Examples
#[tokio::main]
async fn main() {
use lmm_agent::agent::LmmAgent;
let mut agent = LmmAgent::new("Tester".into(), "Rust is fast.".into());
let result = agent.generate("the universe reveals its truth").await;
assert!(result.is_ok());
assert!(!result.unwrap().is_empty());
}
pub async fn search(&self, _query: &str, _limit: usize) -> Result<String>
No-op search when the net feature is disabled.
pub async fn think(&mut self, goal: &str) -> Result<ThinkResult>
Runs the closed-loop ThinkLoop reasoning cycle toward goal.
The agent transitions through Status::Thinking and back to
Status::Completed. At the end of the run the cold-store archive is
serialised into the agent’s long_term_memory so knowledge persists
across multiple think() calls.
§Parameters
goal - natural-language task description (the setpoint).
Defaults used internally:
max_iterations = 10
convergence_threshold = 0.25
k_proportional = 1.0
k_integral = 0.05
Use LmmAgent::think_with for fine-grained control.
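The role of the gains and threshold above can be sketched with a minimal, crate-independent proportional-integral loop. The plant model here (how a correction shrinks the error) is invented purely for illustration; the real ThinkLoop measures a Jaccard error between goal and answer.

```rust
// Minimal PI feedback sketch mirroring the documented defaults.
// The plant model (how the correction reduces the error) is an
// assumption for illustration only.
fn run_loop(
    mut error: f64,
    max_iterations: usize,
    convergence_threshold: f64,
    k_proportional: f64,
    k_integral: f64,
) -> (usize, f64) {
    let mut integral = 0.0;
    for step in 1..=max_iterations {
        // Stop once the error drops below the convergence threshold.
        if error <= convergence_threshold {
            return (step - 1, error);
        }
        integral += error;
        // PI control signal: Kp * e + Ki * sum(e).
        let correction = k_proportional * error + k_integral * integral;
        // Toy plant: each correction removes half its magnitude from
        // the error, floored at zero.
        error = (error - 0.5 * correction).max(0.0);
    }
    (max_iterations, error)
}

fn main() {
    // Documented defaults: 10 iterations, threshold 0.25, Kp = 1.0, Ki = 0.05.
    let (steps, final_error) = run_loop(1.0, 10, 0.25, 1.0, 0.05);
    assert!(steps <= 10);
    assert!(final_error <= 0.25);
    println!("converged in {steps} steps, error {final_error:.3}");
}
```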
§Examples
#[tokio::main]
async fn main() {
use lmm_agent::agent::LmmAgent;
let mut agent = LmmAgent::new("Researcher".into(), "Explore Rust.".into());
let result = agent.think("What is Rust ownership?").await.unwrap();
assert!(result.steps > 0);
assert!(result.final_error >= 0.0 && result.final_error <= 1.0);
}
pub async fn think_with(
&mut self,
goal: &str,
max_iterations: usize,
convergence_threshold: f64,
k_proportional: f64,
k_integral: f64,
) -> Result<ThinkResult>
Like think but exposes all ThinkLoop parameters.
§Arguments
goal - natural-language goal / setpoint.
max_iterations - maximum feedback-loop iterations (≥ 1).
convergence_threshold - Jaccard error threshold ∈ [0, 1].
k_proportional - proportional gain Kp.
k_integral - integral gain Ki.
§Examples
#[tokio::main]
async fn main() {
use lmm_agent::agent::LmmAgent;
let mut agent = LmmAgent::new("Researcher".into(), "Explore Rust.".into());
let result = agent
.think_with("Rust memory safety", 5, 0.3, 1.0, 0.05)
.await
.unwrap();
assert!(result.steps <= 5);
}
pub async fn ingest(&mut self, source: KnowledgeSource) -> Result<usize>
Ingests a KnowledgeSource into this agent’s KnowledgeIndex.
Returns the number of new sentence-level chunks added to the index.
§Examples
use lmm_agent::agent::LmmAgent;
use lmm_agent::cognition::knowledge::KnowledgeSource;
#[tokio::main]
async fn main() {
let mut agent = LmmAgent::new("KA Agent".into(), "Rust ownership.".into());
let n = agent
.ingest(KnowledgeSource::RawText(
"Rust prevents data races at compile time through its ownership system. \
The borrow checker enforces these rules statically.".into(),
))
.await
.unwrap();
assert!(n > 0);
}
pub fn query_knowledge(&self, question: &str, top_k: usize) -> Vec<String>
Returns the top_k most relevant passages from the knowledge index for question.
Returns an empty Vec when the index contains no matching material.
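A self-contained sketch of this kind of top-k retrieval: rank stored passages by Jaccard similarity of token sets and keep the best matches. The scoring scheme is an assumption for illustration; the crate’s KnowledgeIndex may score passages differently.

```rust
use std::collections::HashSet;

// Hypothetical top-k retrieval sketch: rank passages by Jaccard
// similarity between their token set and the question's token set.
fn tokens(text: &str) -> HashSet<String> {
    text.split_whitespace().map(|w| w.to_lowercase()).collect()
}

fn query_knowledge(passages: &[String], question: &str, top_k: usize) -> Vec<String> {
    let q = tokens(question);
    let mut scored: Vec<(f64, &String)> = passages
        .iter()
        .map(|p| {
            let t = tokens(p);
            let inter = q.intersection(&t).count() as f64;
            let union = q.union(&t).count() as f64;
            (if union > 0.0 { inter / union } else { 0.0 }, p)
        })
        .filter(|(score, _)| *score > 0.0)
        .collect();
    // Highest similarity first, then keep the top_k passages.
    scored.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
    scored.into_iter().take(top_k).map(|(_, p)| p.clone()).collect()
}

fn main() {
    let passages = vec![
        "rust enforces ownership at compile time".to_string(),
        "gardens need regular watering".to_string(),
    ];
    let hits = query_knowledge(&passages, "what is rust ownership", 1);
    assert_eq!(hits.len(), 1);
    assert!(hits[0].contains("ownership"));
    // An empty Vec when nothing matches.
    assert!(query_knowledge(&passages, "zzz", 3).is_empty());
}
```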
pub fn answer_from_knowledge(&self, question: &str) -> Option<String>
Produces an extractive answer to question from the knowledge index.
Retrieves the top-5 relevant chunks, concatenates them, and runs
lmm::text::TextSummarizer to select the most informative sentences.
Returns None when the index is empty or no relevant material is found.
§Examples
use lmm_agent::agent::LmmAgent;
use lmm_agent::cognition::knowledge::KnowledgeSource;
#[tokio::main]
async fn main() {
let mut agent = LmmAgent::new("QA Agent".into(), "Rust.".into());
agent
.ingest(KnowledgeSource::RawText(
"Rust prevents data races through ownership. \
The borrow checker ensures memory safety at compile time.".into(),
))
.await
.unwrap();
let answer = agent.answer_from_knowledge("How does Rust handle memory?");
assert!(answer.is_some());
}
pub fn save_learning(&self, path: &Path) -> Result<()>
Saves the current LearningEngine state to path as JSON.
Returns Ok(()) when no learning engine is attached.
§Examples
use lmm_agent::agent::LmmAgent;
use lmm_agent::cognition::learning::engine::LearningEngine;
use lmm_agent::cognition::learning::config::LearningConfig;
let mut agent = LmmAgent::builder()
.persona("Learner")
.behavior("Learn.")
.learning_engine(LearningEngine::new(LearningConfig::default()))
.build();
let path = std::env::temp_dir().join(format!("agent_helm_{}.json", uuid::Uuid::new_v4()));
agent.save_learning(&path).unwrap();
pub fn load_learning(&mut self, path: &Path) -> Result<()>
Loads a previously saved LearningEngine state from path and
attaches it to this agent, replacing any existing engine.
§Examples
use lmm_agent::agent::LmmAgent;
use lmm_agent::cognition::learning::engine::LearningEngine;
use lmm_agent::cognition::learning::config::LearningConfig;
let mut agent = LmmAgent::builder()
.persona("Learner")
.behavior("Learn.")
.learning_engine(LearningEngine::new(LearningConfig::default()))
.build();
let path = std::env::temp_dir().join(format!("agent_helm_load_{}.json", uuid::Uuid::new_v4()));
agent.save_learning(&path).unwrap();
agent.load_learning(&path).unwrap();
pub fn recall_learned(&mut self, query: &str, step: usize) -> Option<ActionKey>
Returns the Q-table–recommended action for the current query string,
or None when no learning engine is attached or the state is unknown.
§Examples
use lmm_agent::agent::LmmAgent;
use lmm_agent::cognition::learning::engine::LearningEngine;
use lmm_agent::cognition::learning::config::LearningConfig;
let mut agent = LmmAgent::builder()
.persona("Learner")
.behavior("Learn.")
.learning_engine(LearningEngine::new(LearningConfig::default()))
.build();
let action = agent.recall_learned("rust memory safety", 0);
// No experience recorded yet, so the engine explores freely.
assert!(action.is_some());
pub fn attribute_causes(
&self,
graph: &CausalGraph,
outcome_var: &str,
) -> Result<AttributionReport>
Attributes the outcome of outcome_var in graph to its causal parents
by running Pearl do-calculus counterfactuals on each parent.
Returns an [AttributionReport] with normalised weights sorted
highest-first, or an error when outcome_var has no parents.
§Examples
use lmm::causal::CausalGraph;
use lmm_agent::agent::LmmAgent;
let mut g = CausalGraph::new();
g.add_node("cause", Some(2.0));
g.add_node("effect", None);
g.add_edge("cause", "effect", Some(1.0)).unwrap();
g.forward_pass().unwrap();
let agent = LmmAgent::new("Analyst".into(), "Causal analysis.".into());
let report = agent.attribute_causes(&g, "effect").unwrap();
assert_eq!(report.weights[0].0, "cause");
pub fn form_hypotheses(
&self,
graph: &CausalGraph,
observed: &HashMap<String, f64>,
max_hypotheses: usize,
) -> Result<Vec<Hypothesis>>
Generates causal hypotheses for variables whose observed values are not
explained by the current graph structure.
Returns up to max_hypotheses candidate new edges ranked by
explanatory power, highest first.
§Examples
use lmm::causal::CausalGraph;
use lmm_agent::agent::LmmAgent;
use std::collections::HashMap;
let mut g = CausalGraph::new();
g.add_node("x", Some(1.0));
g.add_node("y", Some(0.0));
let mut observed = HashMap::new();
observed.insert("y".to_string(), 0.9);
let agent = LmmAgent::new("Scientist".into(), "Discover causal laws.".into());
let hypotheses = agent.form_hypotheses(&g, &observed, 5).unwrap();
assert!(!hypotheses.is_empty());
pub fn drive_state(&mut self) -> DriveState
Emits the current [DriveState] by ticking the agent’s InternalDrive.
If no drive has been accumulated via LmmAgent::record_residual the
returned state will be idle. The drive counters are reset after each
call, matching the semantics of InternalDrive::tick.
§Examples
use lmm_agent::agent::LmmAgent;
let mut agent = LmmAgent::new("Curious".into(), "Learn everything.".into());
agent.record_residual(0.9);
let state = agent.drive_state();
assert!(!state.signals.is_empty());
pub fn record_residual(&mut self, magnitude: f64)
Feeds an unexplained prediction residual into the agent’s internal drive.
Calling this after each world-model error accumulates curiosity that
surfaces on the next drive_state call.
pub fn record_incoherence(&mut self, magnitude: f64)
Feeds an incoherence signal into the agent’s internal drive.
pub fn record_contradiction(&mut self)
Notifies the drive system that a contradiction was detected in memory.
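The accumulate-and-tick pattern shared by the three record_* methods and drive_state can be sketched in plain Rust. The Drive and Signal types below are hypothetical stand-ins; the real InternalDrive and DriveState fields may differ.

```rust
// Hypothetical sketch of the accumulate-and-tick drive pattern: the
// record_* calls accumulate counters, and tick() emits a snapshot of
// the accumulated signals, then resets the counters.
#[derive(Default)]
struct Drive {
    residual: f64,
    incoherence: f64,
    contradictions: u32,
}

#[derive(Debug, PartialEq)]
enum Signal {
    Curiosity(f64),
    Coherence(f64),
    Contradiction(u32),
}

impl Drive {
    fn record_residual(&mut self, magnitude: f64) {
        self.residual += magnitude;
    }
    fn record_incoherence(&mut self, magnitude: f64) {
        self.incoherence += magnitude;
    }
    fn record_contradiction(&mut self) {
        self.contradictions += 1;
    }
    // Emit the accumulated signals, then reset every counter.
    fn tick(&mut self) -> Vec<Signal> {
        let mut signals = Vec::new();
        if self.residual > 0.0 {
            signals.push(Signal::Curiosity(self.residual));
        }
        if self.incoherence > 0.0 {
            signals.push(Signal::Coherence(self.incoherence));
        }
        if self.contradictions > 0 {
            signals.push(Signal::Contradiction(self.contradictions));
        }
        *self = Drive::default();
        signals
    }
}

fn main() {
    let mut drive = Drive::default();
    drive.record_residual(0.9);
    drive.record_contradiction();
    let signals = drive.tick();
    assert_eq!(signals.len(), 2);
    // Counters reset after the tick, so the next state is idle.
    assert!(drive.tick().is_empty());
}
```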
Trait Implementations§
impl Agent for LmmAgent
fn new(persona: Cow<'static, str>, behavior: Cow<'static, str>) -> Self
Constructs a new agent from the given persona and behavior.