§ LMM
LMM (Large Mathematical Model) is a pure-Rust framework that models higher-dimensional reality through symbolic mathematics and physics simulation, inspired by the Pharaonic model of intelligence: compress the world into durable, universal equations. No training. No GPU. No API key.
| Linux (Recommended) | Windows | Docker |
|---|---|---|
| Download the lmm binary | Download the lmm.exe binary | docker pull wiseaidev/lmm |
| cargo install lmm --features rust-binary | cargo install lmm --features rust-binary | docker run -it wiseaidev/lmm |
| lmm → launches CLI | lmm → launches CLI | Read DOCKER.md |
§ Demo
The following demonstrates the symbolic prediction engine generating coherent English sentences powered entirely by deterministic mathematical equations and structural Subject-Verb-Object grammar: no neural networks, no statistical models. The engine supports a full suite of CLI subcommands, including predict, summarize, sentence, paragraph, essay, and ask, enabling multi-paragraph construction driven entirely by mathematics.
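As a rough illustration of how deterministic, equation-driven generation differs from statistical sampling, the sketch below (plain stdlib Rust, not the crate's API; all names are hypothetical) hashes a seed into indices that fill Subject-Verb-Object slots, so the same seed always produces the same sentence:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical illustration: fill S-V-O slots from a hash of the seed.
// The mapping is a pure function, so identical seeds give identical output.
fn svo_sentence(seed: &str) -> String {
    let subjects = ["The engine", "Mathematics", "The model"];
    let verbs = ["generates", "encodes", "predicts"];
    let objects = ["equations", "sentences", "reality"];
    let mut h = DefaultHasher::new();
    seed.hash(&mut h);
    let n = h.finish() as usize;
    format!(
        "{} {} {}.",
        subjects[n % 3],
        verbs[(n / 3) % 3],
        objects[(n / 9) % 3]
    )
}

fn main() {
    // Deterministic: the same seed always yields the same sentence.
    assert_eq!(svo_sentence("lmm"), svo_sentence("lmm"));
    println!("{}", svo_sentence("lmm"));
}
```

The real engine replaces the hash with discovered equations, but the key property is the same: output is a function of the input, not a sample from a distribution.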
§ What Does LMM Provide?
LMM bridges multimodal perception and actionable scientific discovery through five tightly integrated layers:
| Layer | Modules | Purpose |
|---|---|---|
| Perception | perception.rs, tensor.rs | Raw bytes → normalised tensors |
| Symbolic | equation.rs, symbolic.rs, discovery.rs | GP symbolic regression, differentiation, simplification |
| Physics | physics.rs, simulation.rs | ODE models + Euler / RK4 / RK45 / leapfrog integrators |
| Causal | causal.rs | SCM graphs, do-calculus interventions, counterfactuals |
| Cognition | consciousness.rs, world.rs, operator.rs | Full perceive → encode → predict → act loop |
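To make the Physics layer's integrators concrete, here is a minimal RK4 step for the unit harmonic oscillator x'' = -x in plain stdlib Rust (a conceptual sketch, not the crate's Simulator API):

```rust
// State = (position, velocity); derivative of the unit harmonic oscillator.
fn deriv(s: [f64; 2]) -> [f64; 2] {
    [s[1], -s[0]]
}

// One classical fourth-order Runge-Kutta step of size h.
fn rk4_step(s: [f64; 2], h: f64) -> [f64; 2] {
    let k1 = deriv(s);
    let k2 = deriv([s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]]);
    let k3 = deriv([s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]]);
    let k4 = deriv([s[0] + h * k3[0], s[1] + h * k3[1]]);
    [
        s[0] + h / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]),
        s[1] + h / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]),
    ]
}

fn main() {
    // Integrate from x(0)=1, v(0)=0 to t=1; the analytic solution is cos(1).
    let mut s = [1.0, 0.0];
    for _ in 0..100 {
        s = rk4_step(s, 0.01);
    }
    assert!((s[0] - 1.0f64.cos()).abs() < 1e-6);
    println!("x(1) ≈ {:.6}", s[0]);
}
```

The crate's Simulator wraps the same idea behind a Simulatable trait and also offers Euler, adaptive RK45, and leapfrog variants.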
§ Architecture
flowchart TD
A["Raw Input\n(bytes / sensors)"]
B["MultiModalPerception\n → Tensor"]
C["Consciousness Loop\nperceive → encode → predict\nevaluate → plan (lookahead)"]
D["WorldModel\n(RK4 physics)"]
E["SymbolicRegression\n(GP equation search)"]
F["CausalGraph\nintervention / counterfactual"]
G["Expression AST\ndifferentiate / simplify"]
A --> B --> C
C --> D
C --> E
E --> G
G --> F
D --> F
§ Key Capabilities
- Genetic Programming: population-based symbolic regression with template seeding (linear, quadratic, periodic) and variable-enforcement guards.
- Symbolic Calculus: automatic differentiation (chain rule, product rule, trig) and constant-folding simplification.
- Physics Suite: Harmonic Oscillator, Lorenz Attractor, Pendulum, SIR Epidemic, and N-body Gravity; all implement Simulatable.
- Field Calculus: N-D gradient, Laplacian, divergence, and 3-D curl via central differences.
- Causal Reasoning: structural causal models, do(X=v) interventions, and counterfactual queries.
- Neural Operators: circular convolution with SGD kernel learning and Fourier spectral operators.
- Text → Equation: losslessly encode any text string into a symbolic equation and recover it exactly via integer residuals.
- Symbolic Prediction: equation-native text continuation using sliding-window GP regression and vocabulary anchoring.
- Stochastic Enhancement: synonym-bank word replacement (--stochastic) delivers unique output each run while preserving mathematical sentence structure.
- Spectral Image Synthesis: generate procedural PPM images from a text prompt by hashing it into Fourier wave components.
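For instance, the circular convolution behind the neural-operator capability can be sketched in a few lines of plain stdlib Rust (illustrative only; the crate's operator module may differ in signature):

```rust
// Circular (wrap-around) convolution: indices are taken modulo the
// signal length, so the signal is treated as periodic.
fn circular_conv(signal: &[f64], kernel: &[f64]) -> Vec<f64> {
    let n = signal.len();
    (0..n)
        .map(|i| {
            kernel
                .iter()
                .enumerate()
                .map(|(j, k)| k * signal[(i + n - (j % n)) % n])
                .sum::<f64>()
        })
        .collect()
}

fn main() {
    // A one-step delay kernel rotates the signal by one position.
    let shifted = circular_conv(&[1.0, 2.0, 3.0, 4.0], &[0.0, 1.0]);
    assert_eq!(shifted, vec![4.0, 1.0, 2.0, 3.0]);
    println!("{:?}", shifted);
}
```

In the crate, such a kernel would be learned by SGD against target outputs rather than written by hand.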
§ Installation
The lmm crate ships the following Cargo features:
| Feature | Description |
|---|---|
| rust-binary | Enables the standalone lmm terminal CLI executable |
| cli | Core CLI scaffolding (subset of rust-binary) |
| net | Internet-aware ask command via DuckDuckGo search |
| python | Python extension module via pyo3 / maturin |
| node | Node.js native add-on via napi-derive |
§ Rust
The lmm library is available on crates.io. For the complete API reference, installation guide, and worked examples, see the Rust usage guide.
§ Command-Line Interface
The lmm binary supports 15 subcommands spanning simulation, discovery, encoding, prediction, summarisation, and rich text generation, all powered by pure equations.
For the full option reference and usage examples, see the CLI documentation or run lmm --help after installing with cargo install lmm --features rust-binary.
§ Python
The Python bindings are published to PyPI as lmm-rs and are installed with pip install lmm-rs. Built with maturin, the package ships pre-compiled wheels for major CPython versions and runs a fully embedded Tokio runtime; no asyncio required.
For installation instructions, configuration options, and full method signatures, see the Python usage guide.
§ Node.js
The Node.js bindings are published to npm as @wiseaidev/lmm and are installed with npm install @wiseaidev/lmm. Built with napi-rs, the package ships a pre-compiled .node add-on with TypeScript type definitions.
For installation instructions, type definitions, and examples, see the Node.js usage guide.
§ WebAssembly (WASM)
LMM natively targets wasm32-unknown-unknown. Because reqwest switches to the browser fetch API automatically, you can deploy LMM inside Rust frontend frameworks such as Yew, Dioxus, and Leptos without any additional glue code.
For CORS considerations, build steps, and usage details, see the WASM usage guide.
§ Agent Framework
The lmm-agent crate extends LMM with a fully autonomous, equation-based agent layer; no LLM, no API key, no training data.
| Document | Description |
|---|---|
| AGENT.md | Architecture, quick-start, types, and async API reference |
| DERIVE.md | #[derive(Auto)] macro: generated traits and field contract |
| lmm-agent README | Crate-level API reference, builder, and example |
| lmm-derive README | Macro crate details and field rules |
§ Publications & Research
The architecture, formal mathematics, and paradigm are fully documented in the official whitepaper: Read the Whitepaper (PDF).
§ Blog Posts
- LLMs are Useful. LMMs will Break Reality: the original post that started this project.
- Training Is An Evil Concept. LMMs Eliminates It Altogether: ethical, architectural, and data advantages of training-free models.
§ Citation
If you use LMM in your research, please cite our whitepaper:
@article{harmouch2026lmm,
author = {Mahmoud Harmouch},
title = {Mathematics Is All You Need: Training-Free Language Generation via
Symbolic Regression and Stochastic Determinism},
year = {2026},
url = {https://github.com/wiseaidotdev/lmm}
}
§ Contributing
Contributions are welcome! Feel free to open issues or pull requests on GitHub.
§ License
Licensed under the MIT License.
§ Star Us
If you use or enjoy LMM, please leave us a star on GitHub! It helps others discover the project and keeps the momentum going.
§ LMM Agent Framework
The lmm-agent crate provides an equation-based, training-free autonomous agent framework built on top of the lmm core engine. Agents reason through symbolic mathematics, not neural networks: no GPU, no API key, no token quotas.
§ Installation
# Cargo.toml
[dependencies]
lmm-agent = "0.1.2"
§ Core Architecture
flowchart TD
U["Custom Struct\n#[derive(Auto)]"]
L["LmmAgent\n(core state)"]
E["Executor trait\n(your logic)"]
O["AutoAgent\norchestrator"]
G["TextPredictor\n(symbolic regression)"]
S["DuckDuckGo search\n(optional enrichment)"]
U -->|"delegates to"| L
U -->|"implements"| E
O -->|"runs pool of"| E
L -->|"uses"| G
L -->|"uses"| S
§ Quick Start
use lmm_agent::prelude::*;
use async_trait::async_trait;
// Define your agent struct
// The `Auto` macro only requires one field: `agent: LmmAgent`
#[derive(Debug, Default, Auto)]
pub struct MyAgent {
pub agent: LmmAgent,
}
// Implement only your task logic
#[async_trait]
impl Executor for MyAgent {
async fn execute<'a>(
&'a mut self,
_tasks: &'a mut Task,
_execute: bool, _browse: bool, _max_tries: u64,
) -> Result<()> {
let prompt = self.agent.behavior.clone();
let response = self.generate(&prompt).await?;
self.agent.add_message(Message::new("assistant", response));
self.agent.update(Status::Completed);
Ok(())
}
}
// Run
#[tokio::main]
async fn main() {
let agent = MyAgent::new(
"Research Agent".into(),
"Survey the Rust ecosystem.".into()
);
let _ = AutoAgent::default()
.with(agents![agent]);
}
§ LmmAgent Builder
use lmm_agent::agent::LmmAgent;
use lmm_agent::types::{Message, Planner, Goal};
let agent = LmmAgent::builder()
.persona("My Agent")
.behavior("Summarise Rust papers.")
.memory(vec![Message::new("system", "You are an LMM agent.")])
.planner(Planner {
current_plan: vec![Goal {
description: "Read paper list.".into(),
priority: 1,
completed: false,
}],
})
.build();
§ Key Types
| Type | Purpose |
|---|---|
| LmmAgent | Core agent struct (memory, tools, planner, knowledge, etc.) |
| LmmAgentBuilder | Fluent builder for LmmAgent |
| Message | A chat message with role + content |
| Status | Idle, Active, InUnitTesting, Completed, Thinking |
| Knowledge | Map of fact keys to natural-language descriptions |
| Planner | Ordered list of Goals with priorities and completion flags |
| Profile | Agent personality traits and behavioural script |
| ContextManager | Recent messages + current focus topics |
| Task | Work unit with description, scope, URLs, and code fields |
| AutoAgent | Orchestrator that runs a pool of agents |
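The lifecycle implied by the Status variants can be sketched as a tiny state machine (a hypothetical illustration in plain Rust; the crate's actual transition logic may differ):

```rust
// Variant names taken from the Status row above; the transition
// functions are illustrative, not the crate's implementation.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Status {
    Idle,
    Active,
    Thinking,
    InUnitTesting,
    Completed,
}

// execute() starts work only from Idle.
fn on_execute(s: Status) -> Status {
    match s {
        Status::Idle => Status::Active,
        other => other,
    }
}

// Finishing any in-progress state moves the agent to Completed.
fn on_done(s: Status) -> Status {
    match s {
        Status::Active | Status::Thinking | Status::InUnitTesting => Status::Completed,
        other => other,
    }
}

fn main() {
    let s = on_done(on_execute(Status::Idle));
    assert_eq!(s, Status::Completed);
    println!("{:?}", s);
}
```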
§ AsyncFunctions Trait
The Auto derive macro generates:
// All below methods are generated automatically
async fn generate(&mut self, prompt: &str) -> Result<String>;
async fn search(&self, query: &str) -> Result<Vec<LiteSearchResult>>;
async fn think(&mut self, goal: &str) -> Result<ThinkResult>;
async fn save_ltm(&mut self, msg: Message) -> Result<()>;
async fn get_ltm(&self) -> Result<Vec<Message>>;
async fn ltm_context(&self) -> Result<String>;
§ Agent Lifecycle
Idle → [execute() called] → Active → [task done] → Completed
     → [think() called] → Thinking → Completed
     → [testing] → InUnitTesting
§ Further Reading
§ LMM Derive Macros
The lmm-derive crate provides procedural macros that eliminate agent boilerplate. The primary export is #[derive(Auto)].
§ Installation
Pulled in automatically with lmm-agent. No manual dependency needed.
§ The Auto Macro
§ Required struct shape
use lmm_agent::prelude::*;
use async_trait::async_trait;
#[derive(Debug, Default, Auto)]
pub struct MyAgent {
pub agent: LmmAgent,
}
#[async_trait]
impl Executor for MyAgent {
async fn execute<'a>(
&'a mut self, _tasks: &'a mut Task,
_execute: bool, _browse: bool, _max_tries: u64,
) -> Result<()> { Ok(()) }
}
Auto inspects the required field agent: LmmAgent and generates three trait implementations automatically.
§ Generated traits
§ Agent
Delegates all methods to the inner LmmAgent field:
impl Agent for MyAgent {
fn new(persona: Cow<'static, str>, behavior: Cow<'static, str>) -> Self {
let mut s = Self::default();
s.agent = LmmAgent::new(persona, behavior);
s
}
fn persona(&self) -> &str { &self.agent.persona }
fn behavior(&self) -> &str { &self.agent.behavior }
fn status(&self) -> &Status { &self.agent.status }
fn memory(&self) -> &Vec<Message> { &self.agent.memory }
fn memory_mut(&mut self) -> &mut Vec<Message> { &mut self.agent.memory }
}
§ Functions
impl Functions for MyAgent {
fn get_agent(&self) -> &LmmAgent { &self.agent }
fn get_agent_mut(&mut self) -> &mut LmmAgent { &mut self.agent }
}
§ AsyncFunctions
#[async_trait]
impl AsyncFunctions for MyAgent {
async fn generate(&mut self, prompt: &str) -> Result<String> { unimplemented!() }
async fn search(&self, query: &str) -> Result<Vec<LiteSearchResult>> { unimplemented!() }
async fn save_ltm(&mut self, msg: Message) -> Result<()> { unimplemented!() }
async fn get_ltm(&self) -> Result<Vec<Message>>{ unimplemented!() }
async fn ltm_context(&self) -> Result<String> { unimplemented!() }
}
§ Additional fields
Fields beyond the required agent field are ignored by Auto. You can freely add domain-specific data:
use lmm_agent::prelude::*;
use std::collections::HashMap;
#[derive(Debug, Default, Auto)]
pub struct DataAgent {
pub agent: LmmAgent,
// custom fields, ignored by the macro
pub db_url: String,
pub cache: HashMap<String, String>,
}
#[async_trait]
impl Executor for DataAgent {
async fn execute<'a>(
&'a mut self, _tasks: &'a mut Task,
_execute: bool, _browse: bool, _max_tries: u64,
) -> Result<()> { Ok(()) }
}
§ Field Name Contract
| Field | Type | Must be named |
|---|---|---|
| agent | LmmAgent | exactly agent |
Compile errors will occur if the agent field is missing or misnamed.
§ License
Licensed under the MIT License.
§ LMM Rust Documentation
The lmm library is a pure-Rust symbolic intelligence framework available on crates.io. It requires Rust 1.86+ and exposes the full engine as a library crate with no mandatory runtime dependencies.
§ Installation
Add this to your Cargo.toml:
[dependencies]
lmm = "0.2.7"
§ Cargo Features
| Feature | Description |
|---|---|
| rust-binary | Enables the standalone lmm terminal CLI executable |
| cli | Core CLI scaffolding (subset of rust-binary) |
| net | Internet-aware ask command via DuckDuckGo Lite |
| python | Python extension module (pyo3 / maturin) |
| node | Node.js native add-on (napi-derive) |
Enable all features for a fully featured local build:
cargo build --release --all-features
§ Library Usage
§ Tensor arithmetic
use lmm::prelude::*;
let t = Tensor::new(vec![2, 3], vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
println!("{}", t.norm());
§ Symbolic expressions
use lmm::equation::Expression;
use std::collections::HashMap;
let expr: Expression = "(sin(x) * 2)".parse().unwrap();
let mut vars = HashMap::new();
vars.insert("x".to_string(), std::f64::consts::PI);
println!("{}", expr.evaluate(&vars).unwrap()); // ≈ 0
let deriv = expr.symbolic_diff("x").simplify();
println!("{}", deriv); // (cos(x) * 2)
§ Causal graph + do-calculus
use lmm::causal::CausalGraph;
let mut g = CausalGraph::new();
g.add_node("x", Some(3.0));
g.add_node("y", None);
g.add_edge("x", "y", Some(2.0)).unwrap();
g.forward_pass().unwrap();
let y_cf = g.counterfactual("x", 10.0, "y").unwrap();
println!("do(x=10) → y = {y_cf}"); // 20.0
§ Physics simulation
use lmm::prelude::*;
let osc = HarmonicOscillator::new(1.0, 1.0, 0.0).unwrap();
let sim = Simulator { step_size: 0.01 };
let state = sim.rk4_step(&osc, osc.state()).unwrap();
println!("{:?}", state.data);
§ Text → Symbolic equation (lossless round-trip)
use lmm::encode::{encode_text, decode_message};
let enc = encode_text("The Pharaohs encoded reality.", 80, 4).unwrap();
let recovered = decode_message(&enc).unwrap();
assert_eq!(recovered, "The Pharaohs encoded reality.");
§ Symbolic text continuation
use lmm::predict::TextPredictor;
let predictor = TextPredictor::new(20, 40, 3);
let result = predictor.predict_continuation("Wise AI built the first LMM", 80).unwrap();
println!("{}", result.continuation);
§ Genetic programming symbolic regression
use lmm::prelude::*;
let mut sr = SymbolicRegression::new(3, 100);
let inputs: Vec<Vec<f64>> = (0..10).map(|i| vec![i as f64 * 0.5]).collect();
let targets: Vec<f64> = (0..10).map(|i| 2.0 * i as f64 * 0.5 + 1.0).collect();
let eq = sr.fit(&inputs, &targets).unwrap();
println!("Discovered: {eq}");
§ Consciousness loop
use lmm::prelude::*;
let state = Tensor::zeros(vec![4]);
let mut brain = Consciousness::new(state, 3, 0.01);
let new_state = brain.tick(b"The Pharaohs built the pyramids").unwrap();
println!("{:?}", new_state);
§ Spectral image generation
use lmm::prelude::*;
let params = ImagenParams {
prompt: "ancient egypt mathematics".into(),
width: 512, height: 512, components: 8,
style: StyleMode::Plasma,
palette_name: "warm".into(),
output: "egypt.ppm".into(),
};
let path = render(&params).unwrap();
println!("Saved to {path}");
§ Architecture
flowchart TD
A["Raw Input\n(bytes / sensors)"]
B["MultiModalPerception\n → Tensor"]
C["Consciousness Loop\nperceive → encode → predict\nevaluate → plan (lookahead)"]
D["WorldModel\n(RK4 physics)"]
E["SymbolicRegression\n(GP equation search)"]
F["CausalGraph\nintervention / counterfactual"]
G["Expression AST\ndifferentiate / simplify"]
A --> B --> C
C --> D
C --> E
E --> G
G --> F
D --> F
§ Core Types Reference
| Type / Function | Description |
|---|---|
| Tensor::new(shape, data) | N-D row-major f64 tensor; .norm(), .scale(), .add(), .dot() |
| Expression (impl FromStr) | Symbolic AST; .evaluate(), .symbolic_diff(), .simplify(), Display |
| CausalGraph | SCM with add_node, add_edge, forward_pass, intervene, counterfactual |
| HarmonicOscillator, LorenzSystem, Pendulum, SIRModel | Physics models; all implement Simulatable |
| Simulator | .euler_step_osc(), .rk4_step_osc(), .rk45_adaptive() |
| SymbolicRegression | GP regressor; .fit(inputs, targets) → String |
| TextPredictor | .predict_continuation(text, length) → PredictionResult |
| SentenceGenerator | .generate(seed) → String |
| ParagraphGenerator | .generate(seed) → String |
| TextSummarizer | .summarize(text) → String |
| StochasticEnhancer | .enhance(text) → String |
| Consciousness | .tick(bytes) → Vec<f64> |
| encode_text(text, iters, depth) | Returns EncodedText { expression, length, residuals } |
| decode_message(expr, length, residuals) | Losslessly reconstructs original text |
| mdl_score, compute_mse, r_squared, aic_score, bic_score | Model-fit metrics |
| render_image(prompt, w, h, style, palette, n, out) | Spectral field synthesis → PPM file |
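For reference, the standard definitions behind the compute_mse and r_squared metrics can be written in a few lines of plain Rust (an illustrative stdlib sketch; the crate's exact signatures may differ):

```rust
// Mean squared error between predictions and targets.
fn mse(pred: &[f64], target: &[f64]) -> f64 {
    pred.iter()
        .zip(target)
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f64>()
        / pred.len() as f64
}

// Coefficient of determination: 1 - SS_res / SS_tot.
fn r_squared(pred: &[f64], target: &[f64]) -> f64 {
    let mean = target.iter().sum::<f64>() / target.len() as f64;
    let ss_res: f64 = pred.iter().zip(target).map(|(p, t)| (t - p).powi(2)).sum();
    let ss_tot: f64 = target.iter().map(|t| (t - mean).powi(2)).sum();
    1.0 - ss_res / ss_tot
}

fn main() {
    let target = [1.0, 2.0, 3.0];
    // A perfect fit scores mse = 0 and r_squared = 1.
    assert_eq!(mse(&target, &target), 0.0);
    assert_eq!(r_squared(&target, &target), 1.0);
    println!("perfect fit: mse = 0, r2 = 1");
}
```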
§ License
Licensed under the MIT License.
§ LMM WebAssembly Guide
LMM natively targets wasm32-unknown-unknown. Because the HTTP client (reqwest) automatically switches to the browser fetch API on WASM targets, you can deploy LMM inside Rust frontend frameworks without any additional glue code.
§ Supported Frameworks
LMM WASM integration is actively used and tested with:
- Yew: the most popular Rust / WASM frontend framework
- Dioxus: cross-platform Rust UI framework
- Leptos: fine-grained reactive framework
§ Adding to a WASM Project
Add lmm to your Cargo.toml with the appropriate feature set. Avoid enabling rust-binary, python, or node on WASM targets.
[dependencies]
lmm = { version = "0.2.7", default-features = false, features = ["wasm-net"] }
Build with Trunk (Yew / Leptos):
trunk serve
§ CORS Considerations
DuckDuckGo's endpoints set permissive CORS headers on most responses, but the behaviour can vary by endpoint and region. If you encounter CORS errors:
- Use a server-side proxy to forward requests from your own origin.
- Use the wt-wt (worldwide) region, which tends to have the broadest CORS coverage.
- Consider caching results server-side and serving them from your own API.
§ Feature Flags for WASM
| Feature | WASM-safe | Description |
|---|---|---|
| (default) | ✅ | Core symbolic engine (no networking) |
| wasm-net | ✅ | Enables reqwest fetch-based HTTP on WASM |
| net | ❌ | Native Tokio networking: incompatible with WASM |
| rust-binary | ❌ | CLI binary: incompatible with WASM |
| python | ❌ | pyo3 bindings: incompatible with WASM |
| node | ❌ | napi bindings: incompatible with WASM |
§ Example: Yew Component
use lmm::prelude::*;
use yew::prelude::*;
#[function_component(PredictorDemo)]
fn predictor_demo() -> Html {
let output = use_state(|| String::from("Click to generate..."));
let onclick = {
let output = output.clone();
Callback::from(move |_| {
let predictor = TextPredictor::new(20, 30, 3);
if let Ok(result) = predictor.predict_continuation("Wise AI built the first LMM", 80) {
output.set(result.continuation);
}
})
};
html! {
<div>
<button {onclick}>{ "Generate" }</button>
<p>{ (*output).clone() }</p>
</div>
}
}
§ License
Licensed under the MIT License.
§ Modules
- app: Application Entry-Point
- causal: Causal Graphs
- compression: Information-Theoretic Compression Metrics
- consciousness: Consciousness Perception-Action Loop
- discovery: Symbolic Regression
- encode: Text-Equation Encoding and Decoding
- equation: Symbolic Expression Engine
- error: Error Types
- field: Tensor Field Operations
- imagen: Procedural Image Generation ("Imagen")
- lexicon: System Dictionary Lexicon
- models: Mathematical Models
- operator: Neural and Fourier Operators
- perception: Multi-Modal Perception
- physics: Physics Models
- predict: Symbolic Text Prediction
- prelude: Prelude
- reasoner: Compositional Symbolic Reasoner
- simulation: ODE/PDE Numerical Integration
- stochastic: Stochastic Text Enhancement
- symbolic: Symbolic Expression Tools
- tensor: N-Dimensional Array
- text: Symbolic Text Generation
- traits: Core Traits
- uncertainty: Calibrated Uncertainty Quantification
- world: World Model