# 🤖 lmm-agent
`lmm-agent` is an equation-based, training-free autonomous agent framework built on top of `lmm`. Agents reason through the LMM symbolic engine: no LLM API key, no token quotas, no stochastic black boxes.
## 🤔 What does this crate provide?
- **`LmmAgent`**: the batteries-included core agent with hot memory, long-term memory (LTM), tools, planner, reflection, and a time-based scheduler.
- **`Auto` derive macro**: zero-boilerplate `Agent`, `Functions`, and `AsyncFunctions` implementations. Only `agent: LmmAgent` is required in the struct.
- **`AutoAgent` orchestrator**: manages a heterogeneous pool of agents, running them concurrently with a configurable retry policy.
- **`agents![]` macro**: ergonomic syntax to declare a typed `Vec<Box<dyn Executor>>`.
- **`ThinkLoop`**: a closed-loop PI controller that drives iterative reasoning toward a goal using Jaccard-error feedback.
- **DuckDuckGo search (optional)**: built-in web search via the `duckduckgo` crate (`--features net`). When real snippets are available, they are returned directly as factual output.
- **Symbolic generation**: `AsyncFunctions::generate` uses `TextPredictor`, a symbolic regression engine that fits tone and rhythm trajectories to produce text. No neural model, no weights.
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
lmm-agent = "0.0.2"
```
Or enable it as a feature from the root `lmm` workspace:
```toml
[dependencies]
lmm = { version = "0.2.1", features = ["agent"] }
```
## 🚀 Quick Start
### 1. Define a custom agent
Your struct only needs one field: `agent: LmmAgent`. Everything else is derived automatically by `#[derive(Auto)]`.

```rust
use lmm_agent::*;

// Any struct name works; only the `agent` field is required.
#[derive(Auto)]
struct MyAgent {
    agent: LmmAgent,
}
```
### 2. Run the agent
A minimal sketch, assuming the agent struct from step 1 is named `MyAgent` and a `tokio` runtime; the orchestrator method names shown here are illustrative, not the crate's exact API:

```rust
#[tokio::main]
async fn main() {
    // agents![] builds a typed Vec<Box<dyn Executor>>.
    let pool = agents![MyAgent::default()];
    // AutoAgent runs the pool concurrently with its retry policy.
    let mut orchestrator = AutoAgent::new(pool);
    orchestrator.run().await;
}
```
## 🧠 Core Concepts
| Concept | Description |
|---|---|
| `persona` | The agent's identity / role label (e.g. "Research Agent") |
| `behavior` | The agent's mission or goal description |
| `LmmAgent` | Core struct holding all state (memory, tools, planner, knowledge, profile) |
| `Message` | A single chat-style message (role + content) |
| `Status` | `Idle` → `Active` → `Completed` (or `InUnitTesting`, `Thinking`) |
| `Auto` | Derive macro that auto-implements `Agent`, `Functions`, `AsyncFunctions` |
| `Executor` | The only trait you must implement; contains your custom task logic |
| `AutoAgent` | The orchestrator that runs a pool of `Executor`s |
| `ThinkLoop` | PI-controller feedback loop that drives iterative multi-step reasoning |
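To make the `ThinkLoop` feedback signal concrete, here is a minimal illustrative sketch (not the crate's internal code): the error is one minus the Jaccard similarity between the tokens produced so far and the goal's tokens, and a PI controller accumulates that error so the loop pushes harder the longer the goal stays unmet.

```rust
use std::collections::HashSet;

/// Jaccard error between two token sets: 1 - |A ∩ B| / |A ∪ B|.
/// 0.0 means the output already covers the goal; 1.0 means no overlap.
fn jaccard_error(output: &str, goal: &str) -> f64 {
    let a: HashSet<&str> = output.split_whitespace().collect();
    let b: HashSet<&str> = goal.split_whitespace().collect();
    let union = a.union(&b).count();
    if union == 0 {
        return 0.0;
    }
    let inter = a.intersection(&b).count();
    1.0 - inter as f64 / union as f64
}

/// Minimal PI controller: control = kp * error + ki * Σ error.
struct Pi {
    kp: f64,
    ki: f64,
    integral: f64,
}

impl Pi {
    fn step(&mut self, error: f64) -> f64 {
        self.integral += error;
        self.kp * error + self.ki * self.integral
    }
}

fn main() {
    let err = jaccard_error("symbolic agents reason", "agents reason about goals");
    let mut pi = Pi { kp: 1.0, ki: 0.1, integral: 0.0 };
    println!("error = {err:.3}, control = {:.3}", pi.step(err));
}
```

Because the integral term keeps growing while the error persists, repeated iterations that fail to close the token gap produce an ever-stronger control signal, which is what drives the loop toward the goal.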
## 🔧 LmmAgent Builder API
```rust
use lmm_agent::{LmmAgent, Status};

// Builder sketch: the chained setters mirror the API surface, but the
// argument values and exact parameter types here are illustrative.
let agent = LmmAgent::builder()
    .persona("Research Agent")
    .behavior("Collect and summarize findings, persisting them to LTM")
    .status(Status::Idle)
    .memory(Default::default())   // hot-memory component (placeholder)
    .planner(Default::default())  // planner component (placeholder)
    .build();
```
## 📡 AsyncFunctions Trait
The `Auto` macro generates a full `AsyncFunctions` implementation for your struct:
| Method | Description |
|---|---|
| `generate(prompt)` | Symbolic text generation via `TextPredictor` (tone + rhythm regression). No LLM. |
| `search(query)` | DuckDuckGo web search (`--features net`). Returns real sentences when available. |
| `save_ltm(msg)` | Persist a message to the agent's long-term memory store |
| `get_ltm()` | Retrieve all LTM messages as a `Vec<Message>` |
| `ltm_context()` | Format LTM as a single context string |
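As an illustration of how `search` can "return real sentences when available", a snippet-ranking helper might look like this sketch (the function is hypothetical, not the crate's internal code): pick the search snippet sharing the most whitespace-separated tokens with the query.

```rust
use std::collections::HashSet;

/// Return the snippet sharing the most whitespace tokens with the query,
/// or None when no snippets are available.
fn best_snippet<'a>(query: &str, snippets: &[&'a str]) -> Option<&'a str> {
    let q: HashSet<&str> = query.split_whitespace().collect();
    snippets
        .iter()
        .copied()
        .max_by_key(|s| s.split_whitespace().filter(|t| q.contains(t)).count())
}

fn main() {
    let s = best_snippet(
        "rust agent framework",
        &["a rust agent framework crate", "cooking recipes for summer"],
    );
    println!("{s:?}");
}
```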
## 🔬 How Generation Works
`AsyncFunctions::generate` dispatches to `LmmAgent::generate`, which uses the `TextPredictor` engine:
- **Seed enrichment**: the prompt is enriched with domain-specific words extracted from the agent's own `behavior` field, so generation is topically grounded.
- **Tone trajectory**: symbolic regression fits a mathematical expression mapping `token_position → mean_byte_value` over the input window.
- **Rhythm trajectory**: a second regression fits `token_position → word_length`.
- **Token selection**: for each new token, the expected POS is determined from a grammar transition table; the word scoring lowest on a `tone_diff + length_diff + recency_penalty` score is chosen from curated vocabulary pools.
- **Net mode** (`--features net`): if DuckDuckGo returns snippets, the sentence with the highest token overlap against the request is returned directly, producing factual, real-world text instead of symbolic continuation.
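The token-selection step above can be pictured with a small sketch. Everything here (the penalty weight, the helper names, treating "tone" as the mean byte value of a word) is an illustrative assumption, not the crate's actual scoring code:

```rust
/// Mean byte value of a word, used as a crude "tone" measure.
fn mean_byte(word: &str) -> f64 {
    if word.is_empty() {
        return 0.0;
    }
    word.bytes().map(|b| b as f64).sum::<f64>() / word.len() as f64
}

/// Pick the candidate minimizing tone_diff + length_diff + recency_penalty.
/// `recent` holds recently emitted words; reusing one costs a fixed penalty
/// (the weight 10.0 is an arbitrary illustration). Panics on empty input.
fn pick_token<'a>(
    candidates: &[&'a str],
    target_tone: f64,
    target_len: f64,
    recent: &[&str],
) -> &'a str {
    candidates
        .iter()
        .copied()
        .min_by(|a, b| {
            let score = |w: &str| {
                let tone_diff = (mean_byte(w) - target_tone).abs();
                let length_diff = (w.len() as f64 - target_len).abs();
                let recency_penalty = if recent.contains(&w) { 10.0 } else { 0.0 };
                tone_diff + length_diff + recency_penalty
            };
            score(a).partial_cmp(&score(b)).unwrap()
        })
        .unwrap()
}

fn main() {
    // "ab" matches the tone target exactly, but the recency penalty
    // pushes selection to the next-best candidate.
    println!("picked: {}", pick_token(&["ab", "ac"], 97.5, 2.0, &["ab"]));
}
```

The recency penalty is what keeps the generator from looping on the single best-fitting word: once a word is emitted, a close runner-up wins the next selection.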
## 📄 License
Licensed under the MIT License.