# minimum_ml

Experimental machine learning library in Rust.

`minimum_ml` is a lightweight, experimental machine learning library designed for educational use and minimal-dependency environments. It provides basic building blocks for neural networks, with a focus on simplicity and a small footprint.
## Features

- **Minimal dependencies**: The core library depends only on `getrandom` (for seeding) and a local derive macro. No heavyweight crates like `ndarray`, `tch`, or `rand` are required by default.
- **Custom RNG**: Uses a custom `XorShift64` implementation for random number generation.
- **Quantization support**: Includes 8-bit integer (`i8`) quantization layers and SIMD-optimized (AVX2) matrix multiplication.
- **Lightweight logging**: Built-in TensorBoard-compatible logger (scalars only) written in pure Rust.
- **Automatic differentiation**: Basic backward-propagation engine.
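For reference, `XorShift64` refers to Marsaglia's classic shift-register generator. The following is a minimal, self-contained sketch of that algorithm, not necessarily the crate's exact implementation:

```rust
/// Minimal xorshift64 sketch (Marsaglia's shifts 13/7/17);
/// the crate's own implementation may differ in details.
struct XorShift64 {
    state: u64,
}

impl XorShift64 {
    fn new(seed: u64) -> Self {
        // A zero state would lock the generator at zero, so substitute a constant.
        Self { state: if seed == 0 { 0x9E37_79B9_7F4A_7C15 } else { seed } }
    }

    fn next_u64(&mut self) -> u64 {
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }
}

fn main() {
    let mut rng = XorShift64::new(42);
    let a = rng.next_u64();
    let b = rng.next_u64();
    assert_ne!(a, b);
    println!("{} {}", a, b);
}
```

Xorshift generators are attractive here because they need only a single `u64` of state and three shift/xor operations per output, which fits the library's minimal-dependency goal.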
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
minimum_ml = { git = "https://github.com/aokyut/minimum_ml" }
```
## Cargo Features

- `default`: No extra features.
- `logging`: Enables TensorBoard logging support (std-based, no external dependencies).
- `full`: Enables all features.
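Assuming standard Cargo feature syntax, enabling one of these features looks like this (illustrative):

```toml
[dependencies]
minimum_ml = { git = "https://github.com/aokyut/minimum_ml", features = ["logging"] }
```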
## Usage Guide

### 1. Defining a Network with `sequential!`

You can define a neural network layer by layer using the `sequential!` macro, which wires the connections between layers automatically.
```rust
// NOTE: the exact import paths were lost from this README; the items come
// from the `minimum_ml` crate (e.g. the `sequential!` macro and the `MM`
// layer listed in the component table below). Adjust paths to the crate layout.
use minimum_ml::sequential;
use minimum_ml::ml::params::MM;
```
### 2. Training Loop (Forward & Backward)

Once the graph is set up, you can run the training loop:
```rust
// Inside the training loop...
// let input_tensor = ...;
// let target_tensor = ...;

// NOTE: argument lists were stripped from this snippet in the original
// README; the actual method signatures may take the tensors shown above.

// Forward pass
let loss_val = g.forward();
println!("loss: {:?}", loss_val);

// Backward pass (gradient calculation)
g.backward();

// Optimization (update weights)
g.optimize();

// Reset gradients/flows for the next iteration
g.reset();
```
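To make the pattern concrete, here is a self-contained toy version of the same forward/backward/optimize/reset cycle for a single scalar weight. It does not use `minimum_ml`'s actual graph API; `Graph` here is a hypothetical stand-in:

```rust
// Toy gradient-descent loop illustrating the forward/backward/optimize/reset
// pattern. Loss is (w * x - target)^2 for a single weight w.
struct Graph {
    w: f64,
    grad: f64,
}

impl Graph {
    // Forward: compute the loss for one sample.
    fn forward(&self, x: f64, target: f64) -> f64 {
        let diff = self.w * x - target;
        diff * diff
    }
    // Backward: accumulate d(loss)/dw = 2 * (w*x - target) * x.
    fn backward(&mut self, x: f64, target: f64) {
        self.grad += 2.0 * (self.w * x - target) * x;
    }
    // Optimize: plain gradient descent step.
    fn optimize(&mut self, lr: f64) {
        self.w -= lr * self.grad;
    }
    // Reset accumulated gradients for the next iteration.
    fn reset(&mut self) {
        self.grad = 0.0;
    }
}

fn main() {
    let mut g = Graph { w: 0.0, grad: 0.0 };
    for _ in 0..100 {
        let _loss = g.forward(1.0, 3.0);
        g.backward(1.0, 3.0);
        g.optimize(0.1);
        g.reset();
    }
    // With x = 1.0 and target = 3.0, w converges toward 3.0.
    assert!((g.w - 3.0).abs() < 1e-3);
    println!("learned w = {:.4}", g.w);
}
```

Calling `reset()` each iteration matters: `backward` accumulates into `grad`, so skipping the reset would mix gradients across steps.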
### 3. Using Datasets and Dataloader

Implement the `Dataset` trait for your data, then use `Dataloader` for batching and shuffling.
```rust
// NOTE: the import paths and exact constructor signatures were lost from
// this README; the shapes below are assumptions.

// 1. Define your data item structure
//    (a derive macro is provided to help with batching)

// 2. Define your Dataset struct

// 3. Implement the Dataset trait for it

// 4. Use Dataloader
let dataset = MyDataset { /* ... */ };
let loader = Dataloader::new(/* dataset, batch size, ... */);
for batch in loader.iter_batch() {
    // run a training step on `batch`
}
```
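Independently of the crate's API, the core batching logic a dataloader performs can be sketched in a few lines (`batches` is a hypothetical helper, not part of `minimum_ml`):

```rust
// Split the index range [0, len) into fixed-size batches; the final batch
// may be smaller. A real dataloader would also shuffle the indices first.
fn batches(len: usize, batch_size: usize) -> Vec<Vec<usize>> {
    (0..len)
        .collect::<Vec<_>>()
        .chunks(batch_size)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let b = batches(10, 4);
    assert_eq!(b.len(), 3);
    assert_eq!(b[0], vec![0, 1, 2, 3]);
    assert_eq!(b[2], vec![8, 9]); // trailing partial batch
    println!("{:?}", b);
}
```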
### 4. Saving and Loading Models

You can save and load the trained parameters.
```rust
// Save model: creates a directory "my_model" and saves parameters inside
g.save("my_model");

// Load model: loads parameters from the "my_model" directory
g.load("my_model");
```
### 5. Available Components

| Component | Description |
|---|---|
| `ml.params.MM` | Matrix multiplication (linear layer) |
| `ml.params.Bias` | Bias-addition layer |
| `ml.params.Linear` | Combined `MM` + `Bias` (helper) |
| `ml.funcs.ReLU` | ReLU activation |
| `ml.funcs.Sigmoid` | Sigmoid activation |
| `ml.funcs.Softmax` | Softmax activation |
| `ml.funcs.CrossEntropyLoss` | Cross-entropy loss function |
| `ml.funcs.MSELoss` | Mean squared error loss function |
| `quantize.funcs.Quantize` | Quantizes activations in inference mode |
| `quantize.funcs.Dequantize` | Dequantizes activations in inference mode |
| `quantize.funcs.QReLU` | Quantized ReLU |
| `quantize.params.QuantizedLinear` | Linear layer for int8 quantization |
| `quantize.params.QuantizedMM` | Matrix multiplication for int8 quantization |
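The `Quantize`/`Dequantize` components follow the standard int8 affine scheme (a scale and a zero point). Here is a self-contained sketch of that scheme, not the crate's exact code:

```rust
// Affine int8 quantization sketch: q = clamp(round(x / scale) + zero, -128, 127).
fn quantize(xs: &[f32], scale: f32, zero: i32) -> Vec<i8> {
    xs.iter()
        .map(|&x| ((x / scale).round() as i32 + zero).clamp(-128, 127) as i8)
        .collect()
}

// Inverse mapping: x ≈ (q - zero) * scale.
fn dequantize(qs: &[i8], scale: f32, zero: i32) -> Vec<f32> {
    qs.iter().map(|&q| (q as i32 - zero) as f32 * scale).collect()
}

fn main() {
    let xs = [0.0f32, 0.5, -0.5, 1.0];
    // A scale of 1/127 maps the range [-1, 1] onto roughly the full i8 range.
    let (scale, zero) = (1.0 / 127.0, 0);
    let qs = quantize(&xs, scale, zero);
    let ys = dequantize(&qs, scale, zero);
    // Round-tripping loses at most half a quantization step per value.
    for (x, y) in xs.iter().zip(ys.iter()) {
        assert!((x - y).abs() < 0.01);
    }
    println!("quantized: {:?}", qs);
}
```

The round-trip error is bounded by `scale / 2`, which is why choosing the scale from the actual activation range matters for quantized inference accuracy.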
## Running Tests

To run the test suite:
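Assuming a standard Cargo project layout:

```sh
cargo test
```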
## License

MIT