# Wyrm

A low-overhead, reverse-mode, define-by-run autodifferentiation library.
## Features

Performs backpropagation through arbitrary, define-by-run computation graphs, with an emphasis on low-overhead fitting of small, sparse models on the CPU.

Highlights:

- Low overhead.
- Built-in support for sparse gradients.
- Define-by-run.
- Trivial Hogwild-style parallelisation, scaling linearly with the number of CPU cores available.
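To make "reverse-mode, define-by-run" concrete, here is a minimal, self-contained sketch of the idea — not Wyrm's actual implementation, just an illustration. Each operation is recorded on a tape as it executes (define-by-run); the backward pass then walks the tape in reverse, accumulating adjoints via the chain rule. The `Tape`/`Var` names are invented for this sketch:

```rust
// A toy reverse-mode autodiff tape. Every node stores its value and,
// for each parent, the local derivative of this node w.r.t. that parent.

#[derive(Clone, Copy)]
struct Var(usize);

struct Tape {
    values: Vec<f64>,
    // For each node: (parent index, local derivative) pairs.
    parents: Vec<Vec<(usize, f64)>>,
}

impl Tape {
    fn new() -> Self {
        Tape { values: Vec::new(), parents: Vec::new() }
    }
    fn leaf(&mut self, value: f64) -> Var {
        self.values.push(value);
        self.parents.push(Vec::new());
        Var(self.values.len() - 1)
    }
    fn add(&mut self, a: Var, b: Var) -> Var {
        self.values.push(self.values[a.0] + self.values[b.0]);
        self.parents.push(vec![(a.0, 1.0), (b.0, 1.0)]);
        Var(self.values.len() - 1)
    }
    fn mul(&mut self, a: Var, b: Var) -> Var {
        self.values.push(self.values[a.0] * self.values[b.0]);
        self.parents
            .push(vec![(a.0, self.values[b.0]), (b.0, self.values[a.0])]);
        Var(self.values.len() - 1)
    }
    /// Returns d(output)/d(node) for every node on the tape.
    fn backward(&self, output: Var) -> Vec<f64> {
        let mut grads = vec![0.0; self.values.len()];
        grads[output.0] = 1.0;
        // Nodes are appended in execution order, so reverse index
        // order is a valid reverse topological order.
        for i in (0..self.values.len()).rev() {
            for &(parent, local) in &self.parents[i] {
                grads[parent] += grads[i] * local;
            }
        }
        grads
    }
}

fn main() {
    // y = slope * x + intercept, at x = 2, slope = 3, intercept = 5.
    let mut tape = Tape::new();
    let slope = tape.leaf(3.0);
    let intercept = tape.leaf(5.0);
    let x = tape.leaf(2.0);
    let sx = tape.mul(slope, x);
    let y = tape.add(sx, intercept);

    let grads = tape.backward(y);
    assert_eq!(grads[slope.0], 2.0);     // dy/dslope = x
    assert_eq!(grads[x.0], 3.0);         // dy/dx = slope
    assert_eq!(grads[intercept.0], 1.0); // dy/dintercept = 1
}
```

A real library adds many more operations, shared subexpressions, and buffer reuse to keep overhead low, but the tape-walk structure is the same.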
## Quickstart

The following defines a univariate linear regression model, then backpropagates through it:

```rust
let slope = ParameterNode::new(random_matrix(1, 1));
let intercept = ParameterNode::new(random_matrix(1, 1));

let x = InputNode::new(random_matrix(1, 1));
let y = InputNode::new(random_matrix(1, 1));

let y_hat = slope.clone() * x.clone() + intercept.clone();
let mut loss = (y.clone() - y_hat).square();
```
To optimize the parameters, create an optimizer object and go through several epochs of learning:

```rust
let num_epochs = 10;
let mut optimizer = SGD::new(0.1, loss.parameters());

for _ in 0..num_epochs {
    // Run the forward and backward passes, then let the
    // optimizer update the parameters and reset the gradients.
    loss.forward();
    loss.backward(1.0);

    optimizer.step();
    loss.zero_gradient();
}
```
You can use `rayon` to fit your model in parallel, by first creating a set of shared parameters, then building a per-thread copy of the model:

```rust
let slope_param = Arc::new(HogwildParameter::new(random_matrix(1, 1)));
let intercept_param = Arc::new(HogwildParameter::new(random_matrix(1, 1)));
let num_epochs = 10;

(0..rayon::current_num_threads())
    .into_par_iter()
    .for_each(|_| {
        // Each thread builds its own copy of the graph
        // on top of the shared parameters...
        let slope = ParameterNode::shared(slope_param.clone());
        let intercept = ParameterNode::shared(intercept_param.clone());

        // ...then runs the same forward/backward/step loop
        // as in the single-threaded case.
    });
```
## BLAS support

You should enable BLAS support to get (much) better performance out of matrix-multiplication-heavy workloads. To do so, add the following to your `Cargo.toml`:

```toml
ndarray = { version = "0.11.0", features = ["blas", "serde-1"] }
blas-src = { version = "0.1.2", default-features = false, features = ["openblas"] }
openblas-src = { version = "0.5.6", default-features = false, features = ["cblas"] }
```
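With the `blas` feature enabled, `ndarray` delegates matrix multiplication to whichever BLAS implementation is linked in. Per `ndarray`'s own documentation for this setup, the backend crate must also be referenced once from your crate root so the linker actually pulls it in — a minimal sketch, assuming the dependencies above:

```rust
// In your crate root (lib.rs or main.rs): link the BLAS backend
// selected in Cargo.toml. Without this reference, the OpenBLAS
// symbols may never be linked and builds can fail at link time.
extern crate blas_src;
```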