🦀 micro_grad — A tiny autograd + MLP engine in pure Rust
A minimal scalar autograd engine and small MLP framework inspired by micrograd (by Karpathy), implemented entirely in Rust.
It supports basic operations (+, -, *, /, ReLU), backpropagation, and multi-layer perceptron training.
✨ Features
- ✅ Automatic differentiation on scalar computation graphs
- ✅ Topological backward pass (DFS-based)
- ✅ ReLU / LeakyReLU activation
- ✅ Multi-layer perceptron (MLP) abstraction
- ✅ SGD training loop demo
- ✅ Fully safe, `Rc<RefCell<...>>`-based; no `unsafe` code
📦 Installation
Add this to your Cargo.toml:
```toml
[dependencies]
micro_grad = "0.1"
```
Then, in your Rust code (module paths shown here are assumed; check the crate docs for the exact ones):

```rust
use micro_grad::Var;
use micro_grad::MLP; // name/path of the MLP type is assumed
```
🚀 Quick Start
Train an XOR neural network from scratch, using `Var` for the scalar values and the MLP abstraction for the network.
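A minimal sketch of such a training loop is shown below; it is illustrative rather than the crate's exact API. In particular, the import paths, the `MLP::new(&[2, 4, 1])` constructor, and the `forward`, `parameters`, `zero_grad`, `sub`, `data`, `grad`, and `set_data` methods are assumed names and signatures.

```rust
use micro_grad::{Var, MLP}; // import paths are assumed

fn main() {
    // XOR inputs and targets
    let xs = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.1 - 0.1, 1.0]];
    let ys = [0.0, 1.0, 1.0, 0.0];

    // 2 inputs -> 4 hidden units -> 1 output (constructor shape is assumed)
    let mlp = MLP::new(&[2, 4, 1]);
    let lr = 0.1;
    let epochs = 2000;

    for epoch in 1..=epochs {
        // Forward pass: sum of squared errors over the four samples
        let mut loss = Var::new(0.0);
        for (x, y) in xs.iter().zip(ys.iter()) {
            let input: Vec<Var> = x.iter().map(|v| Var::new(*v)).collect();
            let pred = mlp.forward(&input)[0].clone();
            let diff = pred.sub(&Var::new(*y));
            loss = loss.add(&diff.mul(&diff));
        }

        // Backward pass + SGD step
        mlp.zero_grad();
        loss.backward();
        for p in mlp.parameters() {
            p.set_data(p.data() - lr * p.grad());
        }

        if epoch == epochs {
            println!("epoch {}, loss = {:.6}", epoch, loss.data());
        }
    }

    // Inspect the trained network
    println!("== After training ==");
    for (x, y) in xs.iter().zip(ys.iter()) {
        let input: Vec<Var> = x.iter().map(|v| Var::new(*v)).collect();
        let pred = mlp.forward(&input)[0].data();
        println!("x={:?} -> pred≈{:.2} (target={})", x, pred, *y as i32);
    }
}
```

The loop builds a fresh loss graph each epoch, zeroes the gradients, calls `backward()`, and then takes a plain SGD step on every parameter.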
✅ Expected output (after 2000 epochs):
```text
epoch 2000, loss = 0.000000
== After training ==
x=[0.0, 0.0] -> pred≈0.00 (target=0)
x=[0.0, 1.0] -> pred≈1.00 (target=1)
x=[1.0, 0.0] -> pred≈1.00 (target=1)
x=[1.0, 1.0] -> pred≈0.00 (target=0)
```
🧠 How it works
- Each `Var` stores:
  - `data`: the scalar value
  - `grad`: the accumulated gradient
  - `op`: a reference to how it was created (`Add`, `Mul`, etc.)
- `Var::backward()` performs a reverse topological traversal (DFS-based), applying the chain rule at each node; one possible implementation is sketched after the example below.
A minimal graph example (exact method signatures may differ):

```rust
let a = Var::new(2.0);
let b = Var::new(3.0);
let c = a.mul(&b); // c = a * b
let d = c.add(&a); // d = a*b + a
d.backward();
println!("∂d/∂a = {}, ∂d/∂b = {}", a.grad(), b.grad());
// ∂d/∂a = b + 1 = 4, ∂d/∂b = a = 2
```
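For intuition, here is a self-contained sketch of one way such a graph could be represented and traversed in safe Rust. It is not the crate's actual source: the `Op` variants, field names, and method bodies below are illustrative, and only `Add` and `Mul` are covered.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// How a node was produced. Only Add and Mul are shown here;
// a full engine would also cover Sub, Div, and ReLU.
#[derive(Clone)]
enum Op {
    Leaf,
    Add(Var, Var),
    Mul(Var, Var),
}

// The shared node payload: value, accumulated gradient, provenance.
struct Inner {
    data: f64,
    grad: f64,
    op: Op,
}

// A Var is a cheap, clonable handle to a node in the computation graph.
#[derive(Clone)]
struct Var(Rc<RefCell<Inner>>);

impl Var {
    fn new(data: f64) -> Self {
        Var(Rc::new(RefCell::new(Inner { data, grad: 0.0, op: Op::Leaf })))
    }
    fn data(&self) -> f64 { self.0.borrow().data }
    fn grad(&self) -> f64 { self.0.borrow().grad }

    fn add(&self, rhs: &Var) -> Var {
        let out = Var::new(self.data() + rhs.data());
        out.0.borrow_mut().op = Op::Add(self.clone(), rhs.clone());
        out
    }

    fn mul(&self, rhs: &Var) -> Var {
        let out = Var::new(self.data() * rhs.data());
        out.0.borrow_mut().op = Op::Mul(self.clone(), rhs.clone());
        out
    }

    // Backward pass: DFS builds a topological order, then the order is
    // walked in reverse, applying the chain rule and accumulating grads.
    fn backward(&self) {
        fn build(v: &Var, seen: &mut Vec<*const RefCell<Inner>>, topo: &mut Vec<Var>) {
            let key = Rc::as_ptr(&v.0);
            if seen.contains(&key) { return; }
            seen.push(key);
            let op = v.0.borrow().op.clone();
            match op {
                Op::Leaf => {}
                Op::Add(a, b) | Op::Mul(a, b) => {
                    build(&a, seen, topo);
                    build(&b, seen, topo);
                }
            }
            topo.push(v.clone());
        }

        let mut topo = Vec::new();
        build(self, &mut Vec::new(), &mut topo);

        self.0.borrow_mut().grad = 1.0; // seed: d(out)/d(out) = 1
        for v in topo.iter().rev() {
            let (g, op) = { let n = v.0.borrow(); (n.grad, n.op.clone()) };
            match op {
                Op::Leaf => {}
                Op::Add(a, b) => {
                    a.0.borrow_mut().grad += g;
                    b.0.borrow_mut().grad += g;
                }
                Op::Mul(a, b) => {
                    let (ad, bd) = (a.data(), b.data());
                    a.0.borrow_mut().grad += g * bd;
                    b.0.borrow_mut().grad += g * ad;
                }
            }
        }
    }
}

fn main() {
    // Same computation as the example above: d = a*b + a
    let a = Var::new(2.0);
    let b = Var::new(3.0);
    let d = a.mul(&b).add(&a);
    d.backward();
    assert_eq!((a.grad(), b.grad()), (4.0, 2.0));
}
```

The key design point is that each `Var` is a cheap `Rc` handle, so an expression like `a.mul(&b).add(&a)` shares the underlying nodes, and gradients accumulate correctly when a value is used more than once: here `a` receives `b = 3` from the multiply and `1` from the add, giving `∂d/∂a = 4`.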
🧩 Roadmap
- Scalar autograd
- ReLU / LeakyReLU
- MLP training example
📄 License
Licensed under either of
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
🧑‍💻 Author
Developed by BriceLucifer. Inspired by Andrej Karpathy’s micrograd.