Crate lmm
§ 👁️ LMM 🦀


LMM is a pure-Rust framework that represents higher-dimensional realities through symbolic mathematics and physics simulation, inspired by the Pharaonic model of intelligence: compress the world into durable, universal equations.

§ 🎬 Demo

The following is a proof-of-concept demonstration of the predictive engine generating coherent English sentences. It is powered purely by deterministic mathematical equations and structural Subject-Verb-Object loops, using the standard Linux system dictionary (/usr/share/dict/words) as a vocabulary fallback.

§ 🧠 Framework Overview

LMM bridges multimodal perception and actionable scientific discovery through five tightly integrated layers:

| Layer | Modules | Purpose |
|---|---|---|
| Perception | perception.rs, tensor.rs | Raw bytes → normalised tensors |
| Symbolic | equation.rs, symbolic.rs, discovery.rs | GP symbolic regression, differentiation, simplification |
| Physics | physics.rs, simulation.rs | ODE models + Euler / RK4 / RK45 / leapfrog integrators |
| Causal | causal.rs | SCM graphs, do-calculus interventions, counterfactuals |
| Cognition | consciousness.rs, world.rs, operator.rs | Full perceive → encode → predict → act loop |

§ ⚙️ Architecture

flowchart TD
    A["Raw Input\n(bytes / sensors)"]
    B["MultiModalPerception\n──► Tensor"]
    C["Consciousness Loop\nperceive → encode → predict\nevaluate → plan (lookahead)"]
    D["WorldModel\n(RK4 physics)"]
    E["SymbolicRegression\n(GP equation search)"]
    F["CausalGraph\nintervention / counterfactual"]
    G["Expression AST\ndifferentiate / simplify"]

    A --> B --> C
    C --> D
    C --> E
    E --> G
    G --> F
    D --> F

§ 🔬 Key Capabilities

  • 🧬 Genetic Programming: real population-based symbolic regression that seeds templates (linear, quadratic, periodic) and enforces variable-containing equations.
  • 📐 Symbolic Calculus: automatic differentiation (chain rule, product rule), constant-folding simplification.
  • 🌀 Physics Suite: Harmonic, Lorenz, Pendulum, SIR epidemic, N-body gravity; all implement Simulatable.
  • 🔢 Field Calculus: N-D gradient, Laplacian, divergence, 3-D curl (central differences).
  • 🔗 Causal Reasoning: structural causal models, do(X=v) interventions, counterfactual queries.
  • 🧩 Neural Operators: circular convolution with SGD kernel learning, Fourier spectral operators.
  • 🔤 Text ↔ Equation: encode any text into a symbolic equation; decode it back exactly (lossless via residuals).
  • 🔮 Symbolic Prediction: LMM-native text continuation via sliding-window GP regression and vocabulary anchoring.

§ 📦 Installation

§ From Source

git clone https://github.com/wiseaidotdev/lmm
cd lmm
cargo build --release

The binary is at ./target/release/lmm.

§ Via Cargo

cargo install lmm --all-features

[!NOTE] Requires Rust 1.86+. Install via rustup.

§ 🚀 CLI Usage

lmm <SUBCOMMAND> [OPTIONS]

Subcommands:
  simulate     Run a harmonic oscillator simulation
  physics      Run a named physics model (lorenz | pendulum | sir | harmonic)
  discover     Discover an equation from synthetic data using GP
  consciousness  Run a perceive→predict→act consciousness loop tick
  causal       Build a causal graph and apply do-calculus intervention
  field        Compute gradient or Laplacian of a scalar field
  encode       Encode text into a symbolic mathematical equation
  decode       Decode a symbolic equation back to text
  predict      Predict text continuation via sliding-window symbolic regression

§ 📖 Subcommand Reference

§ 1. simulate: Harmonic Oscillator

Runs a harmonic oscillator using the RK4 integrator.

lmm simulate --step 0.01 --steps 200
Simulated 200 steps with step_size=0.01
Final state: [-0.41614683639502004, -0.9092974268937748]
| Flag | Default | Description |
|---|---|---|
| `-s, --step` | 0.01 | Integration step size (Δt) |
| `-t, --steps` | 100 | Number of integration steps |
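The final state above is consistent with a unit-frequency oscillator started from [1, 0] and integrated to t = 2 (i.e. [cos 2, −sin 2]). A minimal sketch of what such a loop could look like, assuming the state vector [x, v] with x'' = −x and the classic fourth-order Runge-Kutta scheme (illustrative only; the crate's own integrator lives in simulation.rs):

```rust
// Derivative of the harmonic-oscillator state [x, v]: dx/dt = v, dv/dt = -x.
fn deriv(s: [f64; 2]) -> [f64; 2] {
    [s[1], -s[0]]
}

// One classic RK4 step of size h.
fn rk4_step(s: [f64; 2], h: f64) -> [f64; 2] {
    let k1 = deriv(s);
    let k2 = deriv([s[0] + 0.5 * h * k1[0], s[1] + 0.5 * h * k1[1]]);
    let k3 = deriv([s[0] + 0.5 * h * k2[0], s[1] + 0.5 * h * k2[1]]);
    let k4 = deriv([s[0] + h * k3[0], s[1] + h * k3[1]]);
    [
        s[0] + h / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]),
        s[1] + h / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]),
    ]
}

fn simulate(mut s: [f64; 2], h: f64, steps: usize) -> [f64; 2] {
    for _ in 0..steps {
        s = rk4_step(s, h);
    }
    s
}

fn main() {
    // 200 steps of 0.01 from [1, 0] integrates to t = 2.
    let s = simulate([1.0, 0.0], 0.01, 200);
    println!("Final state: {:?}", s);
}
```

With h = 0.01 the RK4 result matches the analytic solution [cos t, −sin t] to well below 1e-6, which is why the printed state above agrees with [cos 2, −sin 2].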

§ 2. physics: Physics Model Simulation

Simulate one of four built-in physics models.

# Lorenz chaotic attractor (σ=10, ρ=28, β=8/3)
lmm physics --model lorenz --steps 500 --step-size 0.01

# Nonlinear pendulum
lmm physics --model pendulum --steps 300 --step-size 0.005

# SIR epidemic model
lmm physics --model sir --steps 1000 --step-size 0.5

# Damped harmonic oscillator (default)
lmm physics --model harmonic --steps 200

Lorenz example:

Lorenz: 500 steps. Final xyz: [-8.900269690476492, -7.413716837503834, 29.311877708359006]

SIR example:

SIR: 1000 steps. Final [S,I,R]: [58.797367656865795, 7.649993277129408e-15, 941.2026323431321]
| Flag | Default | Description |
|---|---|---|
| `-m, --model` | harmonic | Model: lorenz, pendulum, sir, harmonic |
| `-s, --steps` | 200 | Number of integration steps |
| `-z, --step-size` | 0.01 | Step size Δt |
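For reference, the Lorenz vector field with the canonical parameters quoted above (σ = 10, ρ = 28, β = 8/3) can be sketched as follows. A forward-Euler stepper is used here purely for brevity; the crate integrates with RK4.

```rust
// Canonical Lorenz parameters, as in the example above.
const SIGMA: f64 = 10.0;
const RHO: f64 = 28.0;
const BETA: f64 = 8.0 / 3.0;

// The Lorenz vector field: dx = σ(y−x), dy = x(ρ−z)−y, dz = xy−βz.
fn lorenz(s: [f64; 3]) -> [f64; 3] {
    let [x, y, z] = s;
    [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]
}

// Integrate from (1, 1, 1) with forward Euler (illustrative; RK4 in the crate).
fn run(steps: usize, h: f64) -> [f64; 3] {
    let mut s = [1.0, 1.0, 1.0];
    for _ in 0..steps {
        let d = lorenz(s);
        for i in 0..3 {
            s[i] += h * d[i];
        }
    }
    s
}

fn main() {
    // The trajectory stays bounded on the attractor but never settles to a point.
    println!("Final xyz: {:?}", run(500, 0.01));
}
```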

§ 3. discover: Symbolic Regression

Runs Genetic Programming (GP) to discover a symbolic equation from data.

lmm discover --iterations 200
Discovered equation: (x + (1.002465056833142 + x))

The engine fits data points (i*0.5, 2*i*0.5 + 1) by default and finds the underlying linear law. Increase --iterations for more complex datasets.

| Flag | Default | Description |
|---|---|---|
| `-d, --data-path` | synthetic | Data source (synthetic = built-in linear data) |
| `-i, --iterations` | 100 | Number of GP evolution iterations |

§ 4. consciousness: Perceive → Predict → Act Loop

Runs one tick of the full consciousness loop: raw bytes → perception tensor → world model prediction → action plan.

lmm consciousness --lookahead 5
Consciousness ticked. New state: [0.0019607843137254832, -0.24901960784313726, -0.37450980392156863, 0.5]
Mean prediction error: 0
| Flag | Default | Description |
|---|---|---|
| `-l, --lookahead` | 3 | Multi-step lookahead horizon depth |

§ 5. causal: Causal Graph + do-Calculus

Builds a 3-node Structural Causal Model (x → y → z) and applies an intervention do(node = value), printing before/after values.

# Intervene on x: set x = 10, observe how y and z change
lmm causal --intervene-node x --intervene-value 10.0
Before intervention: x=Some(3.0), y=Some(6.0), z=Some(7.0)
After do(x=10): x=Some(10.0), y=Some(20.0), z=Some(21.0)

The SCM is:

  • y = 2 * x
  • z = y + 1
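A minimal sketch of how this SCM and intervention behave, assuming each node is either exogenous or a deterministic function of its parent: do(x = v) replaces x's mechanism with the constant v, and downstream nodes are re-evaluated in topological order (x → y → z). This mirrors the example output, not the crate's actual causal.rs API.

```rust
// Evaluate the 3-node SCM, optionally under an intervention do(x = v).
fn evaluate(x_do: Option<f64>) -> (f64, f64, f64) {
    let x = x_do.unwrap_or(3.0); // exogenous default: x = 3
    let y = 2.0 * x;             // structural equation: y = 2x
    let z = y + 1.0;             // structural equation: z = y + 1
    (x, y, z)
}

fn main() {
    println!("Before intervention: {:?}", evaluate(None));
    println!("After do(x=10):      {:?}", evaluate(Some(10.0)));
}
```

The key property of do-calculus shows up even in this toy: intervening on x severs x from its own causes (here, its default value) but leaves the downstream mechanisms y = 2x and z = y + 1 intact.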
| Flag | Default | Description |
|---|---|---|
| `-n, --intervene-node` | x | Name of the node to intervene on |
| `-v, --intervene-value` | 1.0 | Value to set the node to (do-calculus) |

§ 6. field: Scalar Field Calculus

Computes differential operators on a 1-D scalar field f(i) = i².

# Gradient: should approach 2i (central differences)
lmm field --size 8 --operation gradient

# Laplacian: should be ≈ 2 everywhere (second derivative of x²)
lmm field --size 8 --operation laplacian
Gradient of x²: [1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 13.0]
Laplacian of x²: [0.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 0.0]
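The output above is reproducible with unit grid spacing: central differences in the interior, one-sided differences at the ends for the gradient, and zero at the ends for the Laplacian. A sketch under those assumptions (the crate's field module may handle boundaries differently):

```rust
// 1-D gradient with central differences (one-sided at the two boundaries).
// Assumes unit grid spacing and at least two points.
fn gradient(f: &[f64]) -> Vec<f64> {
    let n = f.len();
    (0..n)
        .map(|i| match i {
            0 => f[1] - f[0],                     // forward difference
            i if i == n - 1 => f[i] - f[i - 1],   // backward difference
            i => (f[i + 1] - f[i - 1]) / 2.0,     // central difference
        })
        .collect()
}

// 1-D Laplacian with the 3-point stencil f[i+1] - 2f[i] + f[i-1] (zero at the ends).
fn laplacian(f: &[f64]) -> Vec<f64> {
    let n = f.len();
    (0..n)
        .map(|i| {
            if i == 0 || i == n - 1 {
                0.0
            } else {
                f[i + 1] - 2.0 * f[i] + f[i - 1]
            }
        })
        .collect()
}

fn main() {
    let f: Vec<f64> = (0..8).map(|i| (i * i) as f64).collect(); // f(i) = i²
    println!("Gradient : {:?}", gradient(&f));   // ≈ 2i in the interior
    println!("Laplacian: {:?}", laplacian(&f));  // exactly 2 in the interior
}
```

Note how the boundary values in the printed gradient (1.0 and 13.0) come from the one-sided differences, while every interior value is exactly 2i because x² has no third-order term for the central stencil to miss.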
| Flag | Default | Description |
|---|---|---|
| `-s, --size` | 10 | Number of field points |
| `-o, --operation` | gradient | Operation: gradient or laplacian |

§ 7. encode: Text → Symbolic Equation

This is the flagship demonstration of LMM's power. Any text is treated as a sequence of byte values indexed by position. The GP engine discovers a symbolic equation f(x) ≈ byte[x]. Integer residuals (byte[x] − round(f(x))) are stored alongside the equation, guaranteeing lossless round-trip recovery.

lmm encode --text "The Pharaohs encoded reality in mathematics." \
           --iterations 150 --depth 5
โ”โ”โ” LMM ENCODER โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
Input text  : "The Pharaohs encoded reality in mathematics."
Characters  : 44
Running GP symbolic regression (150 iterations, depth 5)…

Equation: (95.09620435614187 - cos(x))
Length: 44 chars
MSE: 646.3067
Max residual: 64

โ”โ”โ” ENCODED DATA โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
{"eq":"(95.09620435614187 - cos(x))","len":44,"mse":646.306722,"res":[-10,9,5,-64,-16,9,3,20,2,15,8,20,-62,7,15,3,15,5,7,6,-63,18,5,1,13,11,22,26,-64,9,15,-62,15,2,20,8,6,15,3,21,9,3,20,-49]}

โ”โ”โ” VERIFY ROUND-TRIP โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
Decoded text: "The Pharaohs encoded reality in mathematics."
Round-trip  : ✅ PERFECT

To decode later, run:
  lmm decode --equation "(95.09620435614187 - cos(x))" --length 44 --residuals "-10,9,5,-64,-16,9,3,20,2,15,8,20,-62,7,15,3,15,5,7,6,-63,18,5,1,13,11,22,26,-64,9,15,-62,15,2,20,8,6,15,3,21,9,3,20,-49"

[!NOTE] GP is stochastic: the discovered equation and residual values will differ across runs. The round-trip recovery is always ✅ PERFECT because the integer residuals correct for any approximation error.
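The residual mechanism can be sketched in a few lines, using the equation from the run above as f. Whatever equation the GP finds, storing r[x] = byte[x] − round(f(x)) makes decoding exact by construction: byte[x] = round(f(x)) + r[x]. (This reproduces the printed residuals; the crate's encode module adds the GP search on top.)

```rust
// The discovered equation from the example run above.
fn f(x: f64) -> f64 {
    95.09620435614187 - x.cos()
}

// Residuals: byte[x] - round(f(x)). ASCII input assumed for the char cast below.
fn encode(text: &str) -> Vec<i32> {
    text.bytes()
        .enumerate()
        .map(|(i, b)| b as i32 - f(i as f64).round() as i32)
        .collect()
}

// Exact inverse: byte[x] = round(f(x)) + r[x].
fn decode(residuals: &[i32]) -> String {
    residuals
        .iter()
        .enumerate()
        .map(|(i, r)| (f(i as f64).round() as i32 + r) as u8 as char)
        .collect()
}

fn main() {
    let text = "The Pharaohs encoded reality in mathematics.";
    let res = encode(text);
    assert_eq!(decode(&res), text); // lossless round-trip by construction
    println!("Residuals: {:?}", res);
}
```

Since round(f(0)) = 94 and 'T' = 84, the first residual is −10, matching the JSON payload printed above.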

# Encode from a file
lmm encode --input ./my_message.txt --iterations 200 --depth 5
| Flag | Default | Description |
|---|---|---|
| `-i, --input` | - | Path to a text file to encode (- = use --text) |
| `-t, --text` | Hello, LMM! | Inline text (used when --input is -) |
| `--iterations` | 80 | GP evolution iterations |
| `--depth` | 4 | Maximum expression tree depth |

§ 8. decode: Symbolic Equation → Text

Reconstructs the original text from the equation and residuals printed by encode.

lmm decode \
  --equation "(95.09620435614187 - cos(x))" \
  --length 44 \
  --residuals "-10,9,5,-64,-16,9,3,20,2,15,8,20,-62,7,15,3,15,5,7,6,-63,18,5,1,13,11,22,26,-64,9,15,-62,15,2,20,8,6,15,3,21,9,3,20,-49"
โ”โ”โ” LMM DECODER โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
Equation : (95.09620435614187 - cos(x))
Length   : 44

โ”โ”โ” DECODED TEXT โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
The Pharaohs encoded reality in mathematics.
| Flag | Required | Description |
|---|---|---|
| `-e, --equation` | ✅ | Equation string (from encode output) |
| `-l, --length` | ✅ | Number of characters to recover |
| `-r, --residuals` | ✅ | Comma-separated residuals. Use --residuals="-3,1,..." for negative values |

[!IMPORTANT] Use --residuals="-3,..." (with =) or quote the argument when residuals contain negative values to prevent the shell from treating them as flags.

§ 9. predict: Symbolic Text Continuation

The predict command acts as LMM's continuation engine. Unlike neural-network LLMs, which rely on massive statistical models, LMM strings together coherent English output using pure mathematics.

It does this by operating on three distinct, deterministic signals:

  • GP Trajectory Equation: f(pos) → word_byte_tone (discovers long-range subject themes)
  • GP Rhythm Equation: g(pos) → word_length (discovers alternating phonetic cadence)
  • Dictionary Grammar Engine: maps mathematical values to a curated pool of English nouns, verbs, and adjectives (with system dictionary fallback, /usr/share/dict), cycling through Subject-Verb-Object (SVO) POS grammar loops.

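The word-selection step of the grammar engine can be sketched as below. The pool contents, the linear toy trajectory, and the modulo mapping are all illustrative assumptions, not the crate's actual vocabulary or scorer; the point is that every choice is a deterministic function of position and equation value.

```rust
// Hypothetical POS pools standing in for the curated vocabulary.
const SUBJECTS: &[&str] = &["law", "path", "order", "scope"];
const VERBS: &[&str] = &["is", "holds", "flows", "binds"];
const OBJECTS: &[&str] = &["time", "light", "stone", "truth"];

// Deterministic word choice: the position picks the POS slot via a cyclic
// Subject → Verb → Object loop, and the trajectory equation's value indexes
// into that pool modulo its size.
fn next_word(pos: usize, trajectory_value: f64) -> &'static str {
    let pool = match pos % 3 {
        0 => SUBJECTS, // Subject slot
        1 => VERBS,    // Verb slot
        _ => OBJECTS,  // Object slot
    };
    pool[(trajectory_value.abs() as usize) % pool.len()]
}

fn main() {
    // A toy linear trajectory f(pos) = 100 + 5·pos standing in for the GP equation.
    let sentence: Vec<&str> = (0..6)
        .map(|p| next_word(p, 100.0 + 5.0 * p as f64))
        .collect();
    println!("{}", sentence.join(" "));
}
```

Because everything is a pure function of (pos, f(pos)), re-running the same seed text with the same equations reproduces the same sentence, which is what distinguishes this from sampling-based generation.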
lmm predict --text "Wise AI built the first LMM" --window 10 --predict-length 80
Loaded 63746 dictionary words
โ”โ”โ” LMM PREDICTOR โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
Input text  : "Wise AI built the first LMM"
Window used : 6 words
Trajectory  : (99.77577741824268 + ((x + 3.4804258799212793) + 1.7728570078579993))
Rhythm      : (cos(exp(x)) + 3.851491814600415)

โ”โ”โ” PREDICTED CONTINUATION โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
Wise AI built the first LMM in the true law often long time and a open path of an old scope is the solid order

[!NOTE] Text is parsed completely via pure equations over a carefully constructed English vocabulary pool, mapping geometric relationships and POS states into elegant and mysterious sentences.

| Flag | Default | Description |
|---|---|---|
| `-i, --input` | - | Path to a text file (- = use --text) |
| `-t, --text` | The Pharaohs encoded reality in | Inline text seed |
| `-w, --window` | 32 | Context window in words |
| `-p, --predict-length` | 16 | Approximate character budget for continuation |
| `--iterations` | 80 | GP evolution iterations for the prediction model |
| `--depth` | 4 | Maximum expression tree depth |

§ 🔬 Architecture Deep Dive

§ Genetic Programming Symbolic Regression

flowchart TD
    A["Seeded population\n(linear/quadratic/periodic templates + random)"]
    B["Evaluate fitness\nMDL = n·ln(MSE) + complexity·ln(2)"]
    C["Tournament selection (k=5)"]
    D["Crossover & mutation"]
    E["Reject constant collapse\n(inject fresh random expr 70% of the time)"]
    F{Iterations done?}
    G["Best variable-containing expression (simplified)"]

    A --> B --> C --> D --> E --> F
    F -- No --> B
    F -- Yes --> G
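The MDL fitness in the flowchart trades data fit (n·ln MSE) against expression size (complexity·ln 2), so a slightly worse fit can win if it is much simpler; lower is better. A sketch of that scoring function (the complexity measure and the ε guard for an exact fit are assumptions here):

```rust
// Minimum-description-length fitness: n·ln(MSE) + complexity·ln(2).
// `complexity` would be the expression's node count; lower scores are better.
fn mdl_fitness(predictions: &[f64], targets: &[f64], complexity: usize) -> f64 {
    let n = targets.len() as f64;
    let mse = predictions
        .iter()
        .zip(targets)
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f64>()
        / n;
    // Guard against ln(0) when the fit is exact.
    n * mse.max(f64::EPSILON).ln() + complexity as f64 * 2.0_f64.ln()
}

fn main() {
    let targets = [1.0, 3.0, 5.0, 7.0];
    // A simple expression with a small error beats a bulky one that fits
    // marginally better, because the complexity penalty dominates.
    println!("simple  : {}", mdl_fitness(&[1.1, 3.1, 4.9, 7.1], &targets, 3));
    println!("complex : {}", mdl_fitness(&[1.05, 3.05, 4.95, 7.05], &targets, 30));
}
```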

§ Multi-Signal Prediction Engine

flowchart TD
    In["Context Window\n(Recent Tokens)"]

    In -->|Train| M["2nd-Order Markov Chain\nTransition probabilities"]
    In -->|GP Fit| T["Word-ID Trajectory GP\nf(pos) ≈ word_id"]
    In -->|GP Fit| R["Word-Length Rhythm GP\ng(pos) ≈ length"]

    In --> S["Suffix Pattern Matcher"]

    S -- "Match Found" --> Out["Exact Phrase Continuation"]
    S -- "No Match" --> Score["Composite Scorer"]

    M --> Score
    T --> Score
    R --> Score

    Score -->|Lowest Score| W["Select Best Word from Vocab"]
    W -->|Update Recency| Out

§ RK45 Adaptive Integrator

All Butcher-tableau coefficients are named package-level constants:

const RK45_A41: f64 = 1932.0 / 2197.0;
const RK45_A42: f64 = -7200.0 / 2197.0;
const RK45_A43: f64 = 7296.0 / 2197.0;
const RK45_B5_1: f64 = 16.0 / 135.0;
// ... etc.

Step size is adapted each iteration using the error estimate:

h_new = 0.9 · h · (tol / error)^0.2
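The controller can be sketched directly from that formula. The safety factor 0.9 and the 1/5 exponent come from the text above; the clamp bounds and the ε guard against division by zero are assumptions here, added so a single error estimate cannot grow or shrink the step too aggressively.

```rust
// Adapt the RK45 step size from the embedded error estimate:
// h_new = 0.9 · h · (tol / error)^0.2, with the growth/shrink factor clamped.
fn adapt_step(h: f64, error: f64, tol: f64) -> f64 {
    let factor = 0.9 * (tol / error.max(f64::EPSILON)).powf(0.2);
    h * factor.clamp(0.1, 5.0) // assumed clamp bounds
}

fn main() {
    // Error well under tolerance: the step grows (up to the 5x cap).
    println!("{}", adapt_step(0.01, 1e-9, 1e-6));
    // Error above tolerance: the step shrinks.
    println!("{}", adapt_step(0.01, 1e-3, 1e-6));
}
```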

§ 📰 Whitepaper

LLMs are Useful. LMMs will Break Reality: the blog post that started this project.

§ 🤝 Contributing

Contributions are welcome! Feel free to open issues or pull requests.

§ 📝 License

This project is licensed under the MIT License: see the LICENSE file for details.

Modules

causal
cli
compression
consciousness
discovery
encode
equation
error
field
lexicon
models
operator
perception
physics
predict
prelude
simulation
symbolic
tensor
traits
world