# nbml

A minimal machine learning library built on `ndarray` for low-level ML algorithm development in Rust.
Unlike high-level frameworks, nbml provides bare primitives and a lightweight optimizer API for building custom neural networks from scratch. If you want comfortable abstractions, see Burn. If you want to understand what's happening under the hood and have full control, nbml gives you the building blocks.
## Features
- Core primitives: Attention, LSTM, RNN, feedforward layers
- Activation functions: ReLU, Sigmoid, Tanh, Softmax
- Optimizers: AdamW, SGD
- Utilities: variable-sequence batching, gradient clipping, Gumbel-Softmax, plotting, and more
- Minimal abstractions: Direct ndarray integration for custom algorithms
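Because nbml exposes `ndarray` types directly, the code you write around it is ordinary array math. As a rough illustration (plain `ndarray` only, no nbml API), a manual affine transform looks like this:

```rust
use ndarray::{Array1, Array2};

// A manual affine transform, y = W*x + b, in plain ndarray:
// this is the level of abstraction nbml is designed to sit alongside.
let w: Array2<f64> = Array2::zeros((4, 3)); // weight matrix
let b: Array1<f64> = Array1::zeros(4);      // bias vector
let x: Array1<f64> = Array1::ones(3);       // input
let y: Array1<f64> = w.dot(&x) + &b;        // forward pass
```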
## Installation

```toml
[dependencies]
nbml = "0.2.2"
```
## Quick Start

```rust
use nbml::nn::FFN;
use nbml::f::Activation;
use nbml::optim::AdamW;
use nbml::optim::ToParams;

// Build a simple feedforward network
// (constructor arguments elided; see the crate docs)
let mut model = FFN::new(/* layer sizes, activation, ... */);

// Create optimizer
let mut optimizer = AdamW::default().with(&mut model);

// Training loop (simplified)
for batch in training_data {
    // forward pass, manual backward pass, optimizer step
}
```
## Architecture

### NN Layers (`nbml::nn`)

- `Layer`: Single nonlinear projection layer
- `FFN`: Feedforward network with configurable layers
- `LSTM`: Long Short-Term Memory with merged weight matrices
- `RNN`: Vanilla recurrent neural network
- `LayerNorm`: Layer normalization
- `Pooling`: Sequence mean-pooling
- `AttentionHead`: Multi-head self-attention mechanism
- `TransformerEncoder`: Pre-norm transformer encoder
- `TransformerDecoder`: Pre-norm transformer decoder
### Optimizers (`nbml::optim`)

Implement the `ToParams` trait for gradient-based optimization:

```rust
// impl ToParams for Affine { ... }
```

You can also bubble params up from child modules into a composite (a fuller sketch follows below):

```rust
// impl ToParams for AA { ... }  // AA: a composite holding two Affine layers
```

`ToParams` also lets you zero gradients:

```rust
let mut aa = AA::new(/* ... */);
aa.forward(/* ... */);  // <- implement this yourself
aa.backward(/* ... */); // <- implement this yourself
aa.zero_grads();
```
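To make the bubbling pattern concrete, here is a minimal, self-contained sketch. The trait below is hypothetical (nbml's actual `ToParams` methods may differ); the point is that a composite module simply concatenates its children's parameter lists:

```rust
// HYPOTHETICAL trait shape, for illustration only; nbml's real
// ToParams methods may differ.
trait Params {
    fn params(&mut self) -> Vec<&mut f64>;
}

struct Affine {
    w: f64,
    b: f64,
}

impl Params for Affine {
    fn params(&mut self) -> Vec<&mut f64> {
        vec![&mut self.w, &mut self.b]
    }
}

// A composite "bubbles up" by concatenating its children's params.
struct AA {
    first: Affine,
    second: Affine,
}

impl Params for AA {
    fn params(&mut self) -> Vec<&mut f64> {
        let mut p = self.first.params();
        p.extend(self.second.params());
        p
    }
}
```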
Available optimizers:

- `AdamW`: Adaptive moment estimation with bias correction
- `SGD`: Stochastic gradient descent with optional momentum
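For reference, this is the textbook update rule behind SGD with momentum (the general algorithm, not necessarily nbml's exact internals). Each parameter carries one velocity buffer:

```rust
// Textbook SGD-with-momentum step for a single scalar parameter:
//   v     <- mu * v + g
//   theta <- theta - lr * v
fn sgd_momentum_step(theta: &mut f64, v: &mut f64, g: f64, lr: f64, mu: f64) {
    *v = mu * *v + g;
    *theta -= lr * *v;
}
```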
Use `.with(&mut impl ToParams)` to prepare a stateful optimizer (like `AdamW`) for your network:

```rust
let mut model = Model::new(/* ... */);
let mut optim = AdamW::default().with(&mut model); // <- AdamW creates momentum and value state for all parameters in `model`
```
### Activation Functions (`nbml::f`)

```rust
use nbml::f;

// Sketch: argument details were elided here; see the crate docs
// for exact signatures.
let x = ndarray::Array1::from_vec(vec![-1.0, 0.0, 2.0]);
let activated = f::relu(&x);
let softmax = f::softmax(&x);
```

Includes derivatives for backpropagation: `d_relu`, `d_tanh`, `d_sigmoid`, etc.
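These derivatives exist because you write the backward pass yourself, so the chain-rule step through each activation is explicit. A sketch in plain `ndarray` (nbml's `relu`/`d_relu` play these roles; exact signatures may differ):

```rust
use ndarray::Array1;

let z = Array1::from_vec(vec![-1.0_f64, 0.5, 2.0]);          // pre-activation
let a = z.mapv(|v| v.max(0.0));                              // relu(z)
let grad_a = Array1::from_vec(vec![0.1, -0.2, 0.3]);         // upstream dL/da
let relu_grad = z.mapv(|v| if v > 0.0 { 1.0 } else { 0.0 }); // relu'(z)
let grad_z = &grad_a * &relu_grad;                           // chain rule: dL/dz = dL/da * relu'(z)
```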
## Design Philosophy
nbml is designed for:
- Learning: Understanding how neural networks work at a low level
- Experimentation: Rapid prototyping of novel architectures
- Research: Full control over forward and backward passes
- Transparency: No hidden magic, every operation is explicit
- Compute-constrained deployment: Lightweight, no C dependencies, and fast for small models
nbml is not designed for:
- Large-scale production deployment (use PyTorch, TensorFlow, or Burn)
- Automatic differentiation (you write the backward pass)
- GPU acceleration (CPU-only via ndarray)
- Plug-and-play models (you build everything yourself)
## Examples

### Custom LSTM Training
```rust
use nbml::nn::LSTM;
use nbml::optim::AdamW;

let mut lstm = LSTM::new(/* ... */);
let mut optimizer = AdamW::default().with(&mut lstm);

for sequence in data {
    // forward pass, manual backward pass, optimizer step
}
```
### Multi-Head Attention

```rust
use nbml::nn::AttentionHead;

let mut attention = AttentionHead::new(/* ... */);
let output = attention.forward(/* input sequence */);
```