# lmm-agent

`lmm-agent` is an equation-based, training-free autonomous agent framework built on top of `lmm`. Agents reason through the LMM symbolic engine: no LLM API key, no token quotas, no stochastic black boxes.
## What does this crate provide?
- **`LmmAgent`**: the batteries-included core agent with hot memory, long-term memory (LTM), tools, planner, reflection, and a time-based scheduler.
- **`Auto` derive macro**: zero-boilerplate `Agent`, `Functions`, and `AsyncFunctions` implementations. Only an `agent: LmmAgent` field is required in the struct.
- **`AutoAgent` orchestrator**: manages a heterogeneous pool of agents, running them concurrently with a configurable retry policy.
- **`agents![]` macro**: ergonomic syntax to declare a typed `Vec<Box<dyn Executor>>`.
- **`ThinkLoop`**: closed-loop PI controller that drives iterative reasoning toward a goal using Jaccard-error feedback.
- **Knowledge Acquisition**: ingest `.txt`, `.md`, `.pdf` (optional) files or URLs into a queryable `KnowledgeIndex`; answer questions with `TextSummarizer` extractive summarisation, zero external AI.
- **DuckDuckGo search** (optional, `--features net`): built-in web search. When real snippets are available, they are returned directly as factual output.
- **Symbolic generation**: `AsyncFunctions::generate` uses `TextPredictor`, a symbolic regression engine that fits tone and rhythm trajectories to produce text. No neural model, no weights.
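To make the `ThinkLoop` idea concrete, here is a minimal self-contained sketch of Jaccard-error feedback driving a PI controller. The gains, update rule, and function names below are illustrative assumptions, not the crate's actual implementation:

```rust
use std::collections::HashSet;

/// Jaccard error over word sets: 1 - |A ∩ B| / |A ∪ B|.
/// 0.0 means the answer's vocabulary matches the goal exactly.
fn jaccard_error(answer: &str, goal: &str) -> f64 {
    let a: HashSet<&str> = answer.split_whitespace().collect();
    let b: HashSet<&str> = goal.split_whitespace().collect();
    let inter = a.intersection(&b).count() as f64;
    let union = a.union(&b).count() as f64;
    if union == 0.0 { 1.0 } else { 1.0 - inter / union }
}

/// Minimal PI controller: effort = kp * error + ki * sum(errors).
struct Pi { kp: f64, ki: f64, integral: f64 }

impl Pi {
    fn step(&mut self, error: f64) -> f64 {
        self.integral += error;
        self.kp * error + self.ki * self.integral
    }
}

fn main() {
    let goal = "rust symbolic agent";
    let mut pi = Pi { kp: 0.8, ki: 0.2, integral: 0.0 };
    // Two iterations of hypothetical answers converging on the goal.
    let e1 = jaccard_error("rust agent", goal);          // partial overlap
    let e2 = jaccard_error("rust symbolic agent", goal); // exact match
    let effort1 = pi.step(e1);
    let effort2 = pi.step(e2);
    assert!(e2 < e1);
    assert!(effort2 < effort1);
    println!("e1={e1:.2} e2={e2:.2}");
}
```

As the answer's word set approaches the goal's, the error (and hence the control effort) shrinks, which is how a closed loop can decide when to stop iterating.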
## Agent Architecture
```mermaid
flowchart TD
    User(["User / Caller"]) -->|"question / prompt"| EXEC
    subgraph Agent["LmmAgent"]
        direction TB
        EXEC["Executor::execute()"]
        EXEC --> GEN["generate(request)"]
        GEN --> KI_CHECK{"KnowledgeIndex\nnon-empty?"}
        KI_CHECK -- yes --> KI_ANSWER["KnowledgeIndex::answer()\n(IDF retrieval + TextSummarizer)"]
        KI_CHECK -- no --> NET_CHECK{"net feature?"}
        NET_CHECK -- yes --> DDG["DuckDuckGo search\nbest_sentence()"]
        NET_CHECK -- no --> SYM["TextPredictor\n(symbolic regression)"]
        DDG --> RESULT["Response text"]
        KI_ANSWER --> RESULT
        SYM --> RESULT
        EXEC --> THINK["think_with()\nThinkLoop PI controller"]
        THINK --> ORACLE["SearchOracle\n(DDG cache)"]
        ORACLE --> THINK
        EXEC --> MEM["Hot Memory\n(Vec<Message>)"]
        EXEC --> LTM["Long-Term Memory"]
        EXEC --> KB["Knowledge facts\n(key → value)"]
        EXEC --> PLAN["Planner\n(Goal priority queue)"]
        EXEC --> REFLECT["Reflection\n(eval_fn)"]
    end
    subgraph Ingestion["Knowledge Acquisition"]
        SRC_FILE["File (.txt / .md / .pdf)"] --> PARSE
        SRC_DIR["Directory"] --> PARSE
        SRC_URL["URL (net feature)"] --> PARSE
        SRC_RAW["RawText"] --> PARSE
        PARSE["DocumentParser\n(PlainText / Markdown / PDF)"] --> CHUNKS["DocumentChunk[]"]
        CHUNKS --> INDEX["KnowledgeIndex\n(IDF inverted index)"]
    end
    INDEX -->|"agent.ingest()"| KI_CHECK
    RESULT -->|"agent.memory"| MEM
    RESULT --> User
```
## Installation
```toml
[dependencies]
lmm-agent = "0.0.4"

# Optional features:
# lmm-agent = { version = "0.0.4", features = ["net", "knowledge"] }
```
## Quick Start
### 1. Define a custom agent

Your struct only needs one field: `agent: LmmAgent`. Everything else is derived automatically by `#[derive(Auto)]`.

```rust
use lmm_agent::*;

// #[derive(Auto)] auto-implements Agent, Functions, and AsyncFunctions.
#[derive(Auto)]
struct ResearchAgent {
    agent: LmmAgent,
}

// Executor is the only trait you implement yourself: it holds your
// custom task logic, invoked via Executor::execute().
```
### 2. Run the agent

```rust
// Illustrative sketch: run a pool of agents concurrently through the
// AutoAgent orchestrator, declared with the agents![] macro.
// Exact constructor and method signatures may differ.
#[tokio::main]
async fn main() {
    let pool = agents![ResearchAgent::default()];
    let auto = AutoAgent::new(pool);
    auto.run().await;
}
```
### 3. Ingest knowledge and ask questions

```rust
// Illustrative sketch: ingest a Markdown file into the KnowledgeIndex,
// then answer a question from it (the path and question are examples).
let chunks = agent.ingest(KnowledgeSource::File("notes.md".into()))?;
println!("indexed {chunks} chunks");
if let Some(answer) = agent.answer_from_knowledge("What is lmm?") {
    println!("{answer}");
}
```
## Core Concepts
| Concept | Description |
|---|---|
| `persona` | The agent's identity / role label (e.g. `"Research Agent"`) |
| `behavior` | The agent's mission or goal description |
| `LmmAgent` | Core struct holding all state (memory, tools, planner, knowledge, profile) |
| `Message` | A single chat-style message (role + content) |
| `Status` | `Idle` → `Active` → `Completed` (or `InUnitTesting`, `Thinking`) |
| `Auto` | Derive macro that auto-implements `Agent`, `Functions`, `AsyncFunctions` |
| `Executor` | The only trait you must implement; it contains your custom task logic |
| `AutoAgent` | The orchestrator that runs a pool of `Executor`s |
| `ThinkLoop` | PI-controller feedback loop that drives iterative multi-step reasoning |
| `KnowledgeIndex` | Inverted, IDF-weighted index over ingested document chunks |
| `KnowledgeSource` | Enum of ingestion origins: `File`, `Dir`, `Url`, `RawText` |
## LmmAgent Builder API
```rust
// Builder sketch; the argument values here are illustrative.
let agent = LmmAgent::builder()
    .persona("Research Agent")
    .behavior("Answer questions about ingested documents")
    .planner(planner)
    .knowledge_index(index)
    .build();
```
## Knowledge Acquisition
| Feature flag | What it enables |
|---|---|
| (none) | `.txt` and `.md` ingestion |
| `knowledge` | `.pdf` ingestion via `lopdf` |
| `net` | URL ingestion via `reqwest` |
### Key methods
| Method | Description |
|---|---|
| `agent.ingest(source)` | Parse and index a `KnowledgeSource`; returns the chunk count |
| `agent.query_knowledge(q, top_k)` | Return the top-k raw passage strings |
| `agent.answer_from_knowledge(q)` | Retrieve + summarise; returns `Option<String>` |
| `agent.generate(prompt)` | Consults the index first, then falls back to DDG / symbolic generation |
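The retrieval behind these methods is an IDF-weighted inverted index. As a rough mental model (the struct, scoring, and tokenisation below are a simplified assumption, not `KnowledgeIndex` itself):

```rust
use std::collections::{HashMap, HashSet};

/// Toy IDF-weighted inverted index over text chunks.
struct Index {
    chunks: Vec<String>,
    postings: HashMap<String, HashSet<usize>>, // term -> chunk ids
}

impl Index {
    fn new(chunks: &[&str]) -> Self {
        let mut postings: HashMap<String, HashSet<usize>> = HashMap::new();
        for (id, c) in chunks.iter().enumerate() {
            for term in c.to_lowercase().split_whitespace() {
                postings.entry(term.to_string()).or_default().insert(id);
            }
        }
        Index { chunks: chunks.iter().map(|s| s.to_string()).collect(), postings }
    }

    /// Score each chunk by summing IDF = ln(N / df) over query terms,
    /// so rare terms dominate the ranking; return the top-k chunks.
    fn query(&self, q: &str, top_k: usize) -> Vec<String> {
        let n = self.chunks.len() as f64;
        let mut scores = vec![0.0; self.chunks.len()];
        for term in q.to_lowercase().split_whitespace() {
            if let Some(ids) = self.postings.get(term) {
                let idf = (n / ids.len() as f64).ln();
                for &id in ids { scores[id] += idf; }
            }
        }
        let mut ranked: Vec<usize> = (0..self.chunks.len()).collect();
        ranked.sort_by(|&a, &b| scores[b].partial_cmp(&scores[a]).unwrap());
        ranked.into_iter().take(top_k).map(|i| self.chunks[i].clone()).collect()
    }
}

fn main() {
    let idx = Index::new(&[
        "the agent uses a planner",
        "symbolic regression fits tone and rhythm",
        "the planner holds a goal priority queue",
    ]);
    // "goal" is rarer than "planner", so the third chunk wins.
    let hits = idx.query("planner goal", 2);
    assert_eq!(hits[0], "the planner holds a goal priority queue");
}
```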
## AsyncFunctions Trait
| Method | Description |
|---|---|
| `generate(prompt)` | Knowledge-grounded → DDG factual → symbolic (`TextPredictor`), in that priority order |
| `search(query)` | DuckDuckGo web search (`--features net`); returns real sentences when available |
| `save_ltm(msg)` | Persist a message to the agent's long-term memory store |
| `get_ltm()` | Retrieve all LTM messages as a `Vec<Message>` |
| `ltm_context()` | Format LTM as a single context string |
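To illustrate what "format LTM as a single context string" means in practice, here is a small sketch over a `Message`-like type. The exact `Message` fields and output format are assumptions for illustration:

```rust
/// Chat-style message mirroring the Message concept (role + content).
struct Message { role: String, content: String }

/// Sketch of an ltm_context()-style helper: fold long-term memory
/// into one prompt-ready string, one "[role] content" line per message.
fn ltm_context(ltm: &[Message]) -> String {
    ltm.iter()
        .map(|m| format!("[{}] {}", m.role, m.content))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let ltm = vec![
        Message { role: "user".into(), content: "prefer short answers".into() },
        Message { role: "agent".into(), content: "noted".into() },
    ];
    assert_eq!(ltm_context(&ltm), "[user] prefer short answers\n[agent] noted");
}
```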
## How Generation Works
`AsyncFunctions::generate` follows this priority chain:

1. **Knowledge index** (highest priority): if the agent has ingested documents, the top-5 chunks are retrieved and fed to `TextSummarizer::summarize_with_query`. If a relevant answer is found, it is returned immediately.
2. **Net mode** (`--features net`): if DuckDuckGo returns snippets, the sentence with the highest token overlap is returned directly, producing factual, real-world text.
3. **Symbolic fallback**: the seed is enriched with domain words from `self.behavior`, then fed to `TextPredictor` (tone + rhythm regression). No API call, no model weights.
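The "highest token overlap" selection in net mode can be sketched as follows; the function name echoes `best_sentence()` from the architecture diagram, but the tokenisation and tie-breaking here are illustrative assumptions:

```rust
use std::collections::HashSet;

/// Pick the candidate sentence sharing the most lowercase tokens with
/// the query; None if there are no candidates.
fn best_sentence<'a>(query: &str, candidates: &[&'a str]) -> Option<&'a str> {
    let q: HashSet<String> = query
        .to_lowercase()
        .split_whitespace()
        .map(String::from)
        .collect();
    candidates.iter().copied().max_by_key(|s| {
        s.to_lowercase()
            .split_whitespace()
            .filter(|t| q.contains(*t))
            .count()
    })
}

fn main() {
    let snippets = [
        "Rust is a systems programming language",
        "DuckDuckGo is a search engine",
    ];
    let best = best_sentence("rust language", &snippets);
    assert_eq!(best, Some("Rust is a systems programming language"));
}
```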
## License
Licensed under the MIT License.