# 🤖 lmm-agent
`lmm-agent` is an equation-based, training-free autonomous agent framework built on top of `lmm`. Agents reason through the LMM symbolic engine: no LLM API key, no token quotas, no stochastic black boxes.
## 🤔 What does this crate provide?
- **`LmmAgent`**: the batteries-included core agent with hot memory, long-term memory (LTM), tools, planner, reflection, and a time-based scheduler.
- **`Auto` derive macro**: zero-boilerplate `Agent`, `Functions`, and `AsyncFunctions` implementations for any custom struct.
- **`AutoAgent` orchestrator**: manages a heterogeneous pool of agents, running them concurrently with a configurable retry policy.
- **`agents![]` macro**: ergonomic syntax to declare a typed `Vec<Box<dyn Executor>>`.
- **DuckDuckGo search**: built-in web search enrichment via the `duckduckgo` crate.
- **Equation-based generation**: `AsyncFunctions::generate` uses n-gram symbolic regression, not a neural LLM.
## 📦 Installation
Add this to your `Cargo.toml`:

```toml
[dependencies]
lmm-agent = "0.0.2"
```
Or enable it as a feature from the root `lmm` workspace:

```toml
[dependencies]
lmm = { version = "0.2.1", features = ["agent"] }
```
## 🚀 Quick Start
### 1. Define a custom agent

```rust
use lmm_agent::*;
```
### 2. Run the agent

Assuming a Tokio async runtime:

```rust
#[tokio::main]
async fn main() {
    // Build your agent and execute its task here.
}
```
## 🧠 Core Concepts
| Concept | Description |
|---|---|
| `persona` | The agent's identity / role label (e.g. "Research Agent") |
| `behavior` | The agent's mission or goal description |
| `LmmAgent` | Core struct holding all state (memory, tools, planner, knowledge, profile) |
| `Message` | A single chat-style message (role + content) |
| `Status` | `Idle` → `Active` → `Completed` (or `InUnitTesting`) |
| `Auto` | Derive macro that auto-implements `Agent`, `Functions`, `AsyncFunctions` |
| `Executor` | The only trait you must implement; contains your custom task logic |
| `AutoAgent` | The orchestrator that runs a pool of `Executor`s |
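The `Status` lifecycle above can be sketched as a plain Rust enum. This is an illustrative analogue, not the crate's actual definition:

```rust
/// Illustrative analogue of the agent status lifecycle (not the crate's type).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Status {
    Idle,
    Active,
    Completed,
    /// Separate state used while the agent runs under unit tests.
    InUnitTesting,
}

impl Status {
    /// Advance along the normal Idle -> Active -> Completed path.
    /// Completed and InUnitTesting are treated as terminal here.
    fn advance(self) -> Status {
        match self {
            Status::Idle => Status::Active,
            Status::Active => Status::Completed,
            other => other,
        }
    }
}

fn main() {
    let mut s = Status::Idle;
    s = s.advance();
    assert_eq!(s, Status::Active);
    s = s.advance();
    assert_eq!(s, Status::Completed);
    println!("final status: {:?}", s); // prints "final status: Completed"
}
```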
## 🔧 LmmAgent Builder API

The builder chains the fields described in Core Concepts (the argument values shown here are illustrative):

```rust
use lmm_agent::LmmAgent;

let agent = LmmAgent::builder()
    .persona("Research Agent")
    .behavior("Answer incoming research questions")
    .status(/* ... */)
    .memory(/* ... */)
    .planner(/* ... */)
    .build();
```
## 📡 AsyncFunctions Trait

The `Auto` macro generates a full `AsyncFunctions` implementation for your struct:
| Method | Description |
|---|---|
| `generate(prompt)` | Equation-based n-gram text generation (no LLM) |
| `search(query)` | DuckDuckGo web search returning structured results |
| `save_ltm(msg)` | Persist a message to the agent's long-term memory store |
| `get_ltm()` | Retrieve all LTM messages as a `Vec<Message>` |
| `ltm_context()` | Format LTM as a single context string for injection into future generations |
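As a rough illustration of the `ltm_context()` idea — turning stored messages into one injectable context string — here is a standalone sketch with a stand-in `Message` type (not the crate's implementation):

```rust
/// Minimal stand-in for the crate's Message type (role + content).
struct Message {
    role: String,
    content: String,
}

/// Join stored LTM messages into one context string,
/// one "role: content" line per message.
fn ltm_context(ltm: &[Message]) -> String {
    ltm.iter()
        .map(|m| format!("{}: {}", m.role, m.content))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let ltm = vec![
        Message { role: "user".into(), content: "What is lmm?".into() },
        Message { role: "agent".into(), content: "A symbolic engine.".into() },
    ];
    println!("{}", ltm_context(&ltm));
}
```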
## 🔬 How Generation Works

`AsyncFunctions::generate` dispatches to `LmmAgent::equation_generate`, which:

- Tokenises the prompt into a word list.
- Builds a reverse bigram index over the seed corpus.
- Walks the index guided by the `simple_ngram_generate` n-gram engine (deterministic, no sampling).
- Returns a coherent continuation: no API call, no model weights.
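The index-building and walking steps above can be sketched with a self-contained deterministic bigram generator. This is an illustration of the technique, not the crate's `simple_ngram_generate`:

```rust
use std::collections::HashMap;

/// Build a bigram index: each word maps to the words that follow it in the corpus.
fn bigram_index(corpus: &str) -> HashMap<&str, Vec<&str>> {
    let words: Vec<&str> = corpus.split_whitespace().collect();
    let mut index: HashMap<&str, Vec<&str>> = HashMap::new();
    for pair in words.windows(2) {
        index.entry(pair[0]).or_default().push(pair[1]);
    }
    index
}

/// Deterministically continue from `seed`: always take the first recorded
/// successor (no sampling), stopping at `max_words` or a dead end.
fn generate(index: &HashMap<&str, Vec<&str>>, seed: &str, max_words: usize) -> String {
    let mut out = vec![seed.to_string()];
    let mut current = seed.to_string();
    for _ in 0..max_words {
        match index.get(current.as_str()).and_then(|next| next.first()) {
            Some(next) => {
                out.push(next.to_string());
                current = next.to_string();
            }
            None => break, // no known successor: stop the walk
        }
    }
    out.join(" ")
}

fn main() {
    let corpus = "agents reason through the symbolic engine";
    let index = bigram_index(corpus);
    println!("{}", generate(&index, "agents", 3)); // prints "agents reason through the"
}
```

Because the walk always picks the first recorded successor, the same seed and corpus always yield the same continuation — the determinism the section describes.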
## 📄 License

Licensed under the MIT License.