Crate lmm

§👁️ LMM 🦀


LMM (Large Mathematical Model) is a pure-Rust framework that models higher-dimensional reality through symbolic mathematics and physics simulation, inspired by the Pharaonic model of intelligence: compress the world into durable, universal equations. No training. No GPU. No API key.

๐Ÿง Linux (Recommended)๐ŸชŸ Windows๐Ÿณ Docker
lmm-linuxlmm-windowslmm-linux
Download lmm binaryDownload lmm.exe binarydocker pull wiseaidev/lmm
cargo install lmm --features rust-binarycargo install lmm --features rust-binarydocker run -it wiseaidev/lmm
lmm โ† launches CLIlmm โ† launches CLIRead DOCKER.md

§🎬 Demo

The following demonstrates the symbolic prediction engine generating coherent English sentences powered entirely by deterministic mathematical equations and structural Subject-Verb-Object grammar: no neural networks, no statistical models. The engine supports a full suite of CLI subcommands, including predict, summarize, sentence, paragraph, essay, and ask, enabling multi-paragraph construction driven entirely by mathematics.

§🧠 What Does LMM Provide?

LMM bridges multimodal perception and actionable scientific discovery through five tightly integrated layers:

| Layer | Modules | Purpose |
|---|---|---|
| Perception | perception.rs, tensor.rs | Raw bytes → normalised tensors |
| Symbolic | equation.rs, symbolic.rs, discovery.rs | GP symbolic regression, differentiation, simplification |
| Physics | physics.rs, simulation.rs | ODE models + Euler / RK4 / RK45 / leapfrog integrators |
| Causal | causal.rs | SCM graphs, do-calculus interventions, counterfactuals |
| Cognition | consciousness.rs, world.rs, operator.rs | Full perceive → encode → predict → act loop |

§⚙️ Architecture

```mermaid
flowchart TD
    A["Raw Input\n(bytes / sensors)"]
    B["MultiModalPerception\n → Tensor"]
    C["Consciousness Loop\nperceive → encode → predict\nevaluate → plan (lookahead)"]
    D["WorldModel\n(RK4 physics)"]
    E["SymbolicRegression\n(GP equation search)"]
    F["CausalGraph\nintervention / counterfactual"]
    G["Expression AST\ndifferentiate / simplify"]

    A --> B --> C
    C --> D
    C --> E
    E --> G
    G --> F
    D --> F
```

§🔬 Key Capabilities

  • 🧬 Genetic Programming: population-based symbolic regression with template seeding (linear, quadratic, periodic) and variable-enforcement guards.
  • 📐 Symbolic Calculus: automatic differentiation (chain rule, product rule, trig) and constant-folding simplification.
  • 🌀 Physics Suite: Harmonic Oscillator, Lorenz Attractor, Pendulum, SIR Epidemic, N-body Gravity; all implement Simulatable.
  • 🔢 Field Calculus: N-D gradient, Laplacian, divergence, and 3-D curl via central differences.
  • 🔗 Causal Reasoning: structural causal models, do(X=v) interventions, and counterfactual queries.
  • 🧩 Neural Operators: circular convolution with SGD kernel learning and Fourier spectral operators.
  • 🔤 Text ↔ Equation: losslessly encode any text string into a symbolic equation and recover it exactly via integer residuals.
  • 🔮 Symbolic Prediction: equation-native text continuation using sliding-window GP regression and vocabulary anchoring.
  • 🎲 Stochastic Enhancement: synonym-bank word replacement (--stochastic) delivers unique output each run while preserving mathematical sentence structure.
  • 🎨 Spectral Image Synthesis: generate procedural PPM images from a text prompt by hashing it into Fourier wave components.
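
The field-calculus capability above boils down to central differences on sampled grids. As a std-only sketch of that scheme (not the crate's `field` API; `gradient` is a hypothetical helper), a 1-D version looks like:

```rust
/// Central-difference gradient of values sampled on a uniform grid with
/// spacing h: df[i] ≈ (f[i+1] - f[i-1]) / (2h), one-sided at the edges.
fn gradient(f: &[f64], h: f64) -> Vec<f64> {
    let n = f.len();
    let mut df = vec![0.0; n];
    for i in 0..n {
        df[i] = match i {
            0 => (f[1] - f[0]) / h,                   // forward difference
            i if i == n - 1 => (f[i] - f[i - 1]) / h, // backward difference
            _ => (f[i + 1] - f[i - 1]) / (2.0 * h),   // central difference
        };
    }
    df
}

fn main() {
    // Sample f(x) = x^2 on x = 0.0, 0.1, ..., 1.0, so f'(x) = 2x.
    let h = 0.1;
    let f: Vec<f64> = (0..11).map(|i| (i as f64 * h).powi(2)).collect();
    let df = gradient(&f, h);
    // Central differences are exact for quadratics at interior points.
    assert!((df[5] - 1.0).abs() < 1e-9);
}
```

The N-D gradient, Laplacian, and curl named in the bullet are built from the same stencil applied per axis.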

§📦 Installation

The lmm crate ships the following Cargo features:

| Feature | Description |
|---|---|
| rust-binary | Enables the standalone lmm terminal CLI executable |
| cli | Core CLI scaffolding (subset of rust-binary) |
| net | Internet-aware ask command via DuckDuckGo search |
| python | Python extension module via pyo3 / maturin |
| node | Node.js native add-on via napi-derive |

§🦀 Rust

The lmm library is available on crates.io. For the complete API reference, installation guide, and worked examples, see the Rust usage guide.

§💻 Command-Line Interface

The lmm binary supports 15 subcommands spanning simulation, discovery, encoding, prediction, summarisation, and rich text generation: all powered by pure equations.

For the full option reference and usage examples, see the CLI documentation or run lmm --help after installing with cargo install lmm --features rust-binary.

§🐍 Python

The Python bindings are published to PyPI as lmm-rs and are installed with pip install lmm-rs. Built with maturin, the package ships pre-compiled wheels for major CPython versions and runs a fully embedded Tokio runtime; no asyncio required.

For installation instructions, configuration options, and full method signatures, see the Python usage guide.

§🟩 Node.js

The Node.js bindings are published to npm as @wiseaidev/lmm and are installed with npm install @wiseaidev/lmm. Built with napi-rs, the package ships a pre-compiled .node add-on with TypeScript type definitions.

For installation instructions, type definitions, and examples, see the Node.js usage guide.

§🌐 WebAssembly (WASM)

LMM natively targets wasm32-unknown-unknown. Because reqwest switches to the browser fetch API automatically, you can deploy LMM inside Rust frontend frameworks such as Yew, Dioxus, and Leptos without any additional glue code.

For CORS considerations, build steps, and usage details, see the WASM usage guide.

§🤖 Agent Framework

The lmm-agent crate extends LMM with a fully autonomous, equation-based agent layer; no LLM, no API key, no training data.

| Document | Description |
|---|---|
| AGENT.md | Architecture, quick-start, types, and async API reference |
| DERIVE.md | #[derive(Auto)] macro: generated traits and field contract |
| lmm-agent README | Crate-level API reference, builder, and example |
| lmm-derive README | Macro crate details and field rules |

§📰 Publications & Research

The architecture, formal mathematics, and paradigm are fully documented in the official whitepaper: Read the Whitepaper (PDF).

§Blog Posts

§📝 Citation

If you use LMM in your research, please cite our whitepaper:

```bibtex
@article{harmouch2026lmm,
  author  = {Mahmoud Harmouch},
  title   = {Mathematics Is All You Need: Training-Free Language Generation via
             Symbolic Regression and Stochastic Determinism},
  year    = {2026},
  url     = {https://github.com/wiseaidotdev/lmm}
}
```

§🤝 Contributing

Contributions are welcome! Feel free to open issues or pull requests on GitHub.

§📄 License

Licensed under the MIT License.

§⭐ Star Us

If you use or enjoy LMM, please leave us a star on GitHub! It helps others discover the project and keeps the momentum going ☕.


§LMM Agent Framework 🤖

The lmm-agent crate provides an equation-based, training-free autonomous agent framework built on top of the lmm core engine. Agents reason through symbolic mathematics, not neural networks: no GPU, no API key, no token quotas.

§📦 Installation

```toml
# Cargo.toml
[dependencies]
lmm-agent = "0.1.2"
```

§🏗️ Core Architecture

```mermaid
flowchart TD
    U["Custom Struct\n#[derive(Auto)]"]
    L["LmmAgent\n(core state)"]
    E["Executor trait\n(your logic)"]
    O["AutoAgent\norchestrator"]
    G["TextPredictor\n(symbolic regression)"]
    S["DuckDuckGo search\n(optional enrichment)"]

    U -->|"delegates to"| L
    U -->|"implements"| E
    O -->|"runs pool of"| E
    L -->|"uses"| G
    L -->|"uses"| S
```

§🚀 Quick Start

```rust
use lmm_agent::prelude::*;
use async_trait::async_trait;

// Define your agent struct
// The `Auto` macro only requires one field: `agent: LmmAgent`
#[derive(Debug, Default, Auto)]
pub struct MyAgent {
    pub agent: LmmAgent,
}

// Implement only your task logic
#[async_trait]
impl Executor for MyAgent {
    async fn execute<'a>(
        &'a mut self,
        _tasks: &'a mut Task,
        _execute: bool, _browse: bool, _max_tries: u64,
    ) -> Result<()> {
        let prompt   = self.agent.behavior.clone();
        let response = self.generate(&prompt).await?;
        self.agent.add_message(Message::new("assistant", response));
        self.agent.update(Status::Completed);
        Ok(())
    }
}

// Run
#[tokio::main]
async fn main() {
    let agent = MyAgent::new(
        "Research Agent".into(),
        "Survey the Rust ecosystem.".into()
    );
    let _ = AutoAgent::default()
        .with(agents![agent]);
}
```

§🔧 LmmAgent Builder

```rust
use lmm_agent::agent::LmmAgent;
use lmm_agent::types::{Message, Planner, Goal};

let agent = LmmAgent::builder()
    .persona("My Agent")
    .behavior("Summarise Rust papers.")
    .memory(vec![Message::new("system", "You are an LMM agent.")])
    .planner(Planner {
        current_plan: vec![Goal {
            description: "Read paper list.".into(),
            priority: 1,
            completed: false,
        }],
    })
    .build();
```

§🧩 Key Types

| Type | Purpose |
|---|---|
| LmmAgent | Core agent struct (memory, tools, planner, knowledge, etc.) |
| LmmAgentBuilder | Fluent builder for LmmAgent |
| Message | A chat message with role + content |
| Status | Idle, Active, InUnitTesting, Completed, Thinking |
| Knowledge | Map of fact keys to natural-language descriptions |
| Planner | Ordered list of Goals with priorities and completion flags |
| Profile | Agent personality traits and behavioural script |
| ContextManager | Recent messages + current focus topics |
| Task | Work unit with description, scope, URLs, and code fields |
| AutoAgent | Orchestrator that runs a pool of agents |

§📡 AsyncFunctions Trait

The Auto derive macro generates:

```rust
// All of the methods below are generated automatically
async fn generate(&mut self, prompt: &str) -> Result<String>;
async fn search(&self, query: &str) -> Result<Vec<LiteSearchResult>>;
async fn think(&mut self, goal: &str) -> Result<ThinkResult>;
async fn save_ltm(&mut self, msg: Message) -> Result<()>;
async fn get_ltm(&self) -> Result<Vec<Message>>;
async fn ltm_context(&self) -> Result<String>;
```

§🔄 Agent Lifecycle

```text
Idle → [execute() called] → Active → [task done] → Completed
                                   ↘ [think() called] → Thinking → Completed
                                   ↘ [testing]        → InUnitTesting
```
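
The lifecycle above reads as a small state machine over the Status variants listed in the Key Types table. A std-only sketch (the transition rules here are an assumption drawn purely from the diagram, not the crate's internals):

```rust
#[derive(Debug, PartialEq)]
enum Status { Idle, Active, Thinking, InUnitTesting, Completed }

/// Advance the agent state on an event, per the lifecycle diagram.
/// Unknown (state, event) pairs leave the state unchanged.
fn step(s: Status, event: &str) -> Status {
    match (s, event) {
        (Status::Idle, "execute")  => Status::Active,
        (Status::Active, "done")   => Status::Completed,
        (Status::Active, "think")  => Status::Thinking,
        (Status::Active, "test")   => Status::InUnitTesting,
        (Status::Thinking, "done") => Status::Completed,
        (s, _) => s,
    }
}

fn main() {
    // Idle -> execute() -> Active -> task done -> Completed
    let s = step(step(Status::Idle, "execute"), "done");
    assert_eq!(s, Status::Completed);
}
```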

§📎 Further Reading

§LMM Derive Macros ⚙️

The lmm-derive crate provides procedural macros that eliminate agent boilerplate. The primary export is #[derive(Auto)].

§📦 Installation

Pulled in automatically with lmm-agent. No manual dependency needed.

§🚀 The Auto Macro

§Required struct shape

```rust
use lmm_agent::prelude::*;
use async_trait::async_trait;

#[derive(Debug, Default, Auto)]
pub struct MyAgent {
    pub agent: LmmAgent,
}

#[async_trait]
impl Executor for MyAgent {
    async fn execute<'a>(
        &'a mut self, _tasks: &'a mut Task,
        _execute: bool, _browse: bool, _max_tries: u64,
    ) -> Result<()> { Ok(()) }
}
```

Auto inspects the required field agent: LmmAgent and generates three trait implementations automatically.

§Generated traits

§Agent

Delegates all methods to the inner LmmAgent field:

```rust
impl Agent for MyAgent {
    fn new(persona: Cow<'static, str>, behavior: Cow<'static, str>) -> Self {
        let mut s = Self::default();
        s.agent = LmmAgent::new(persona, behavior);
        s
    }
    fn persona(&self)  -> &str          { &self.agent.persona }
    fn behavior(&self) -> &str          { &self.agent.behavior }
    fn status(&self)   -> &Status       { &self.agent.status }
    fn memory(&self)   -> &Vec<Message> { &self.agent.memory }
    fn memory_mut(&mut self) -> &mut Vec<Message> { &mut self.agent.memory }
}
```
§Functions

```rust
impl Functions for MyAgent {
    fn get_agent(&self)         -> &LmmAgent     { &self.agent }
    fn get_agent_mut(&mut self) -> &mut LmmAgent { &mut self.agent }
}
```
§AsyncFunctions

```rust
#[async_trait]
impl AsyncFunctions for MyAgent {
    async fn generate(&mut self, prompt: &str) -> Result<String> { unimplemented!() }
    async fn search(&self, query: &str) -> Result<Vec<LiteSearchResult>> { unimplemented!() }
    async fn save_ltm(&mut self, msg: Message) -> Result<()> { unimplemented!() }
    async fn get_ltm(&self) -> Result<Vec<Message>> { unimplemented!() }
    async fn ltm_context(&self) -> Result<String> { unimplemented!() }
}
```

§Additional fields

Fields beyond the required agent field are ignored by Auto. You can freely add domain-specific data:

```rust
use lmm_agent::prelude::*;
use async_trait::async_trait;
use std::collections::HashMap;

#[derive(Debug, Default, Auto)]
pub struct DataAgent {
    pub agent: LmmAgent,
    // custom fields, ignored by the macro
    pub db_url: String,
    pub cache: HashMap<String, String>,
}

#[async_trait]
impl Executor for DataAgent {
    async fn execute<'a>(
        &'a mut self, _tasks: &'a mut Task,
        _execute: bool, _browse: bool, _max_tries: u64,
    ) -> Result<()> { Ok(()) }
}
```

§⚠️ Field Name Contract

| Field | Type | Must be named |
|---|---|---|
| agent | LmmAgent | exactly agent |

Compile errors will occur if the agent field is missing or misnamed.

§📄 License

Licensed under the MIT License.

§LMM Rust Documentation 🦀

The lmm library is a pure-Rust symbolic intelligence framework available on crates.io. It requires Rust 1.86+ and exposes the full engine as a library crate with no mandatory runtime dependencies.

§📦 Installation

Add this to your Cargo.toml:

```toml
[dependencies]
lmm = "0.2.7"
```

§Cargo Features

| Feature | Description |
|---|---|
| rust-binary | Enables the standalone lmm terminal CLI executable |
| cli | Core CLI scaffolding (subset of rust-binary) |
| net | Internet-aware ask command via DuckDuckGo Lite |
| python | Python extension module (pyo3 / maturin) |
| node | Node.js native add-on (napi-derive) |

Enable all features for a fully featured local build:

```sh
cargo build --release --all-features
```

§📚 Library Usage

§Tensor arithmetic

```rust
use lmm::prelude::*;

let t = Tensor::new(vec![2, 3], vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).unwrap();
println!("{}", t.norm());
```

§Symbolic expressions

```rust
use lmm::equation::Expression;
use std::collections::HashMap;

let expr: Expression = "(sin(x) * 2)".parse().unwrap();
let mut vars = HashMap::new();
vars.insert("x".to_string(), std::f64::consts::PI);
println!("{}", expr.evaluate(&vars).unwrap()); // ≈ 0

let deriv = expr.symbolic_diff("x").simplify();
println!("{}", deriv); // (cos(x) * 2)
```
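
A quick numeric cross-check of the derivative above: a central finite difference of sin(x) * 2 should agree with the analytic cos(x) * 2. Std-only, independent of the lmm API:

```rust
// f(x) = sin(x) * 2 and its analytic derivative, as plain functions.
fn f(x: f64) -> f64 { x.sin() * 2.0 }
fn df(x: f64) -> f64 { x.cos() * 2.0 }

/// Central finite difference: (f(x+h) - f(x-h)) / 2h, error O(h^2).
fn central(x: f64, h: f64) -> f64 { (f(x + h) - f(x - h)) / (2.0 * h) }

fn main() {
    let x = 1.3;
    let numeric = central(x, 1e-6);
    // Numeric and symbolic derivatives agree to high precision.
    assert!((numeric - df(x)).abs() < 1e-8);
    println!("numeric ≈ {numeric}, analytic = {}", df(x));
}
```

The same trick is a cheap sanity check for any symbolic_diff result.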

§Causal graph + do-calculus

```rust
use lmm::causal::CausalGraph;

let mut g = CausalGraph::new();
g.add_node("x", Some(3.0));
g.add_node("y", None);
g.add_edge("x", "y", Some(2.0)).unwrap();
g.forward_pass().unwrap();

let y_cf = g.counterfactual("x", 10.0, "y").unwrap();
println!("do(x=10) → y = {y_cf}"); // 20.0
```
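
The intervention above is plain edge-weight propagation in a linear SCM: do(x = v) severs x's own parents and pushes the forced value through the mechanism y = w·x. A std-only illustration of that arithmetic (not CausalGraph internals):

```rust
/// In a linear SCM y = w * x, the intervention do(x = v) replaces x's
/// structural equation with the constant v; y's mechanism is untouched.
fn do_intervention(w: f64, v: f64) -> f64 {
    w * v
}

fn main() {
    let w = 2.0; // edge weight x -> y, as in the example above
    assert_eq!(do_intervention(w, 3.0), 6.0);   // observational x = 3
    assert_eq!(do_intervention(w, 10.0), 20.0); // do(x = 10) -> y = 20
}
```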

§Physics simulation

```rust
use lmm::prelude::*;

let osc = HarmonicOscillator::new(1.0, 1.0, 0.0).unwrap();
let sim = Simulator { step_size: 0.01 };
let state = sim.rk4_step(&osc, osc.state()).unwrap();
println!("{:?}", state.data);
```
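
Under the hood an RK4 step combines four slope evaluations. A std-only sketch for the oscillator system [x, v]' = [v, -(k/m)x] (hypothetical `rk4_step`, not the Simulator API):

```rust
/// One classical RK4 step for the harmonic oscillator [x, v].
fn rk4_step(state: [f64; 2], k_over_m: f64, h: f64) -> [f64; 2] {
    let deriv = |s: [f64; 2]| [s[1], -k_over_m * s[0]];
    let add = |a: [f64; 2], b: [f64; 2], c: f64| [a[0] + c * b[0], a[1] + c * b[1]];
    let k1 = deriv(state);
    let k2 = deriv(add(state, k1, h / 2.0));
    let k3 = deriv(add(state, k2, h / 2.0));
    let k4 = deriv(add(state, k3, h));
    [
        state[0] + h / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]),
        state[1] + h / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]),
    ]
}

fn main() {
    // Start at x = 1, v = 0 with k/m = 1 and integrate one full period (2π).
    let mut s = [1.0, 0.0];
    let h = 0.01;
    let steps = (2.0 * std::f64::consts::PI / h) as usize;
    for _ in 0..steps {
        s = rk4_step(s, 1.0, h);
    }
    // The oscillator should return close to its initial state.
    assert!((s[0] - 1.0).abs() < 1e-2 && s[1].abs() < 1e-2);
}
```

RK4's O(h⁴) accuracy is why the period closes this tightly at h = 0.01.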

§Text → Symbolic equation (lossless round-trip)

```rust
use lmm::encode::{encode_text, decode_message};

let enc = encode_text("The Pharaohs encoded reality.", 80, 4).unwrap();
let recovered = decode_message(&enc).unwrap();
assert_eq!(recovered, "The Pharaohs encoded reality.");
```
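
The lossless round-trip above pairs an approximating expression with exact integer residuals. A toy std-only illustration of that idea (a least-squares line over byte values plus per-byte corrections; explicitly not the crate's encoding scheme):

```rust
/// Toy lossless encoder: fit a line a + b*i to the byte values of `text`
/// and keep the exact integer residual for each position.
fn encode(text: &str) -> (f64, f64, Vec<i32>) {
    let bytes: Vec<f64> = text.bytes().map(|b| b as f64).collect();
    let n = bytes.len() as f64;
    let xm = (n - 1.0) / 2.0;
    let ym = bytes.iter().sum::<f64>() / n;
    let (mut num, mut den) = (0.0, 0.0);
    for (i, y) in bytes.iter().enumerate() {
        num += (i as f64 - xm) * (y - ym);
        den += (i as f64 - xm).powi(2);
    }
    let b = if den == 0.0 { 0.0 } else { num / den };
    let a = ym - b * xm;
    // Residual = byte - rounded prediction, so the round-trip is exact.
    let residuals = bytes.iter().enumerate()
        .map(|(i, y)| *y as i32 - (a + b * i as f64).round() as i32)
        .collect();
    (a, b, residuals)
}

/// Rebuild the exact text: rounded prediction + stored residual.
fn decode(a: f64, b: f64, residuals: &[i32]) -> String {
    residuals.iter().enumerate()
        .map(|(i, r)| (((a + b * i as f64).round() as i32 + r) as u8) as char)
        .collect()
}

fn main() {
    let msg = "The Pharaohs encoded reality.";
    let (a, b, res) = encode(msg);
    assert_eq!(decode(a, b, &res), msg);
}
```

The recovery is exact by construction: the decoder adds back precisely what the encoder subtracted.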

§Symbolic text continuation

```rust
use lmm::predict::TextPredictor;

let predictor = TextPredictor::new(20, 40, 3);
let result = predictor.predict_continuation("Wise AI built the first LMM", 80).unwrap();
println!("{}", result.continuation);
```

§Genetic programming symbolic regression

```rust
use lmm::prelude::*;

let mut sr = SymbolicRegression::new(3, 100);
let inputs: Vec<Vec<f64>> = (0..10).map(|i| vec![i as f64 * 0.5]).collect();
let targets: Vec<f64> = (0..10).map(|i| 2.0 * i as f64 * 0.5 + 1.0).collect();
let eq = sr.fit(&inputs, &targets).unwrap();
println!("Discovered: {eq}");
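
GP regression needs a fitness signal to rank candidate equations against the data; mean-squared error (which the crate also exposes as compute_mse) is the usual choice. A std-only sketch of that scoring step:

```rust
/// Mean-squared error between a candidate equation's predictions and
/// the target values: the core fitness signal in GP symbolic regression.
fn mse(pred: &[f64], target: &[f64]) -> f64 {
    assert_eq!(pred.len(), target.len());
    pred.iter().zip(target)
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f64>() / pred.len() as f64
}

fn main() {
    // Same data shape as above: targets follow y = 2x + 1.
    let xs: Vec<f64> = (0..10).map(|i| i as f64 * 0.5).collect();
    let targets: Vec<f64> = xs.iter().map(|x| 2.0 * x + 1.0).collect();
    // A perfect candidate scores 0; a wrong one scores strictly higher.
    let good: Vec<f64> = xs.iter().map(|x| 2.0 * x + 1.0).collect();
    let bad: Vec<f64> = xs.iter().map(|x| x + 1.0).collect();
    assert_eq!(mse(&good, &targets), 0.0);
    assert!(mse(&bad, &targets) > 0.0);
}
```

Selection then keeps low-MSE (often complexity-penalised) candidates for the next generation.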

§Consciousness loop

```rust
use lmm::prelude::*;

let state = Tensor::zeros(vec![4]);
let mut brain = Consciousness::new(state, 3, 0.01);
let new_state = brain.tick(b"The Pharaohs built the pyramids").unwrap();
println!("{:?}", new_state);
```

§Spectral image generation

```rust
use lmm::prelude::*;

let params = ImagenParams {
    prompt: "ancient egypt mathematics".into(),
    width: 512, height: 512, components: 8,
    style: StyleMode::Plasma,
    palette_name: "warm".into(),
    output: "egypt.ppm".into(),
};
let path = render(&params).unwrap();
println!("Saved to {path}");
```

§🏗️ Architecture

```mermaid
flowchart TD
    A["Raw Input\n(bytes / sensors)"]
    B["MultiModalPerception\n → Tensor"]
    C["Consciousness Loop\nperceive → encode → predict\nevaluate → plan (lookahead)"]
    D["WorldModel\n(RK4 physics)"]
    E["SymbolicRegression\n(GP equation search)"]
    F["CausalGraph\nintervention / counterfactual"]
    G["Expression AST\ndifferentiate / simplify"]

    A --> B --> C
    C --> D
    C --> E
    E --> G
    G --> F
    D --> F
```

§📖 Core Types Reference

| Type / Function | Description |
|---|---|
| Tensor::new(shape, data) | N-D row-major f64 tensor; .norm(), .scale(), .add(), .dot() |
| Expression (impl FromStr) | Symbolic AST; .evaluate(), .symbolic_diff(), .simplify(), Display |
| CausalGraph | SCM with add_node, add_edge, forward_pass, intervene, counterfactual |
| HarmonicOscillator, LorenzSystem, Pendulum, SIRModel | Physics models; all implement Simulatable |
| Simulator | .euler_step_osc(), .rk4_step_osc(), .rk45_adaptive() |
| SymbolicRegression | GP regressor; .fit(inputs, targets) → String |
| TextPredictor | .predict_continuation(text, length) → PredictionResult |
| SentenceGenerator | .generate(seed) → String |
| ParagraphGenerator | .generate(seed) → String |
| TextSummarizer | .summarize(text) → String |
| StochasticEnhancer | .enhance(text) → String |
| Consciousness | .tick(bytes) → Vec<f64> |
| encode_text(text, iters, depth) | Returns EncodedText { expression, length, residuals } |
| decode_message(expr, length, residuals) | Losslessly reconstructs original text |
| mdl_score, compute_mse, r_squared, aic_score, bic_score | Model-fit metrics |
| render_image(prompt, w, h, style, palette, n, out) | Spectral field synthesis → PPM file |
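
The model-fit metrics row above (aic_score, bic_score) trades fit quality against equation complexity. A std-only sketch using the standard Gaussian-error forms these names suggest (an assumption; the crate's exact formulas may differ):

```rust
/// Gaussian-error information criteria: n·ln(mse) plus a complexity
/// penalty (2k for AIC, k·ln(n) for BIC), with k free parameters.
fn aic(n: usize, k: usize, mse: f64) -> f64 {
    n as f64 * mse.ln() + 2.0 * k as f64
}
fn bic(n: usize, k: usize, mse: f64) -> f64 {
    n as f64 * mse.ln() + k as f64 * (n as f64).ln()
}

fn main() {
    // Two candidates with identical fit: the smaller equation wins both.
    let (n, mse_val) = (100, 0.5);
    assert!(aic(n, 2, mse_val) < aic(n, 5, mse_val));
    assert!(bic(n, 2, mse_val) < bic(n, 5, mse_val));
    // For n > e^2 ≈ 7.4, BIC penalises extra terms harder than AIC does.
    assert!(bic(n, 5, mse_val) - bic(n, 2, mse_val)
          > aic(n, 5, mse_val) - aic(n, 2, mse_val));
}
```

Lower scores win, which is how equation search avoids bloated expressions that merely memorise the data.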

§📄 License

Licensed under the MIT License.

§LMM WebAssembly Guide 🌐

LMM natively targets wasm32-unknown-unknown. Because the HTTP client (reqwest) automatically switches to the browser fetch API on WASM targets, you can deploy LMM inside Rust frontend frameworks without any additional glue code.

§Supported Frameworks

LMM WASM integration is actively used and tested with:

  • Yew: the most popular Rust / WASM frontend framework
  • Dioxus: cross-platform Rust UI framework
  • Leptos: fine-grained reactive framework

§📦 Adding to a WASM Project

Add lmm to your Cargo.toml with the appropriate feature set. Avoid enabling rust-binary, python, or node on WASM targets.

```toml
[dependencies]
lmm = { version = "0.2.7", default-features = false, features = ["wasm-net"] }
```

Build with Trunk (Yew / Leptos):

```sh
trunk serve
```

§🌐 CORS Considerations

DuckDuckGo's endpoints set permissive CORS headers on most responses, but the behaviour can vary by endpoint and region. If you encounter CORS errors:

  • Use a server-side proxy to forward requests from your own origin.
  • Use the wt-wt (worldwide) region which tends to have the broadest CORS coverage.
  • Consider caching results server-side and serving them from your own API.

§🔬 Feature Flags for WASM

| Feature | WASM-safe | Description |
|---|---|---|
| (default) | ✅ | Core symbolic engine (no networking) |
| wasm-net | ✅ | Enables reqwest fetch-based HTTP on WASM |
| net | ❌ | Native Tokio networking: incompatible with WASM |
| rust-binary | ❌ | CLI binary: incompatible with WASM |
| python | ❌ | pyo3 bindings: incompatible with WASM |
| node | ❌ | napi bindings: incompatible with WASM |

§📖 Example: Yew Component

```rust
use lmm::prelude::*;
use yew::prelude::*;

#[function_component(PredictorDemo)]
fn predictor_demo() -> Html {
    let output = use_state(|| String::from("Click to generate..."));
    let onclick = {
        let output = output.clone();
        Callback::from(move |_| {
            let predictor = TextPredictor::new(20, 30, 3);
            if let Ok(result) = predictor.predict_continuation("Wise AI built the first LMM", 80) {
                output.set(result.continuation);
            }
        })
    };
    html! {
        <div>
            <button {onclick}>{ "Generate" }</button>
            <p>{ (*output).clone() }</p>
        </div>
    }
}
```

§📄 License

Licensed under the MIT License.

Modules§

- app: Application Entry-Point
- causal: Causal Graphs
- compression: Information-Theoretic Compression Metrics
- consciousness: Consciousness Perception-Action Loop
- discovery: Symbolic Regression
- encode: Text-Equation Encoding and Decoding
- equation: Symbolic Expression Engine
- error: Error Types
- field: Tensor Field Operations
- imagen: Procedural Image Generation ("Imagen")
- lexicon: System Dictionary Lexicon
- models: Mathematical Models
- operator: Neural and Fourier Operators
- perception: Multi-Modal Perception
- physics: Physics Models
- predict: Symbolic Text Prediction
- prelude: Prelude
- reasoner: Compositional Symbolic Reasoner
- simulation: ODE/PDE Numerical Integration
- stochastic: Stochastic Text Enhancement
- symbolic: Symbolic Expression Tools
- tensor: Tensor, N-Dimensional Array
- text: Symbolic Text Generation
- traits: Core Traits
- uncertainty: Calibrated Uncertainty Quantification
- world: World Model