# autograd

Differentiable operations and tensors backed by ndarray.
## Installation

```toml
[dependencies]
autograd = { version = "0.9.4", features = ["mkl"] }
```

The `mkl` feature is recommended to speed up gemm operations using Intel MKL.
## Features

### Lazy, zero-copy tensor evaluation

Computation graphs are created on the fly (a.k.a. define-by-run), but are not evaluated until `Tensor::eval` or `ag::eval` is called. This mechanism strikes a balance between performance and flexibility.
```rust
extern crate autograd as ag;

// The shapes and contraction axes below are illustrative.
let a: ag::Tensor<f32> = ag::ones(&[3, 4, 5]);
let b: ag::Tensor<f32> = ag::ones(&[24]);
let c: ag::Tensor<f32> = ag::reshape(&b, &[4, 3, 2]);
let d: ag::Tensor<f32> = ag::tensordot(&a, &c, &[1, 0], &[0, 1]);
d.eval(&[]); // Getting `ndarray::Array` here.
```
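`ag::eval` evaluates several nodes in one pass, sharing any common subgraphs between them. A minimal sketch, assuming `ag::eval` takes a slice of output nodes plus a (here empty) list of feeds; the shapes are illustrative:

```rust
extern crate autograd as ag;

let a: ag::Tensor<f32> = ag::ones(&[2, 3]);
let b = 2. * &a;  // scalar-tensor arithmetic also builds graph nodes
let c = &a + &b;
// Evaluate `b` and `c` together; yields one `Option<ndarray::Array>` per node.
let results = ag::eval(&[&b, &c], &[]);
```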
### Reverse-mode automatic differentiation

Many built-in operations support higher-order derivatives, and you can also define your own differentiable ops with ndarrays easily (see the sketch after the example below).

Here we compute partial derivatives of `z = 2x^2 + 3y + 1`.
```rust
extern crate autograd as ag;
extern crate ndarray;

let ref x = ag::placeholder(&[]);
let ref y = ag::placeholder(&[]);
let ref z = 2.*x*x + 3.*y + 1.;

// dz/dy
let gy = &ag::grad(&[z], &[y])[0];
println!("{:?}", gy.eval(&[])); // => Some(3.)

// dz/dx (requires filling the placeholder `x`)
let gx = &ag::grad(&[z], &[x])[0];
println!("{:?}", gx.eval(&[ag::Feed(x, ndarray::arr0(2.).into_dyn().view())])); // => Some(8.)

// ddz/dx (differentiates `z` again)
let ggx = &ag::grad(&[gx], &[x])[0];
println!("{:?}", ggx.eval(&[])); // => Some(4.)
```
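A custom op is a struct implementing the `Op` trait: the forward pass works on plain ndarrays, and the backward pass is expressed with symbolic tensors. The sketch below is an outline, not a drop-in definition: the exact method signatures (`compute`, `grad`) and the `Tensor::builder` API are assumed from the 0.9-era docs and may differ in other releases, and the `sigmoid` helper is our own.

```rust
extern crate autograd as ag;

// A user-defined sigmoid op (sketch; signatures assumed from the 0.9-era API).
struct Sigmoid;

impl<T: ag::Float> ag::op::Op<T> for Sigmoid {
    fn name(&self) -> &str {
        "Sigmoid"
    }

    // Forward pass: computed eagerly on plain ndarrays when `eval` runs.
    fn compute<'v>(
        &self,
        ctx: ag::runtime::OpComputeContext<'v, T>,
    ) -> ag::op::ComputeResults<'v, T> {
        let xs = ctx.grab_inputs();
        let half = T::from(0.5).unwrap();
        // sigmoid(x) = 0.5 * tanh(0.5 * x) + 0.5
        let y = xs[0].mapv(move |a| (a * half).tanh() * half + half);
        vec![Ok(ag::ArrRepr::Owned(y))]
    }

    // Backward pass: symbolic gradient w.r.t. each input.
    // d sigmoid(x)/dx = y * (1 - y), chained with the incoming gradient `gy`.
    fn grad(
        &self,
        gy: &ag::Tensor<T>,
        _xs: &[&ag::Tensor<T>],
        y: &ag::Tensor<T>,
    ) -> Vec<Option<ag::Tensor<T>>> {
        vec![Some(gy * (y - ag::square(y)))]
    }
}

// Expose the op as an ordinary graph-building function.
fn sigmoid<T: ag::Float>(x: &ag::Tensor<T>) -> ag::Tensor<T> {
    ag::Tensor::builder().set_inputs(vec![x]).build(Sigmoid)
}
```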
### Neural networks

This crate has various low-level features inspired by tensorflow/theano to train neural networks.
```rust
// This is a softmax regression for MNIST digits classification with Adam.
// This achieves 0.918 test accuracy after 3 epochs (0.11 sec/epoch on 2.7GHz Intel Core i5).
let ref w = ag::variable(ag::ndarray_ext::glorot_uniform::<f32>(&[28 * 28, 10]));
let ref b = ag::variable(ag::ndarray_ext::zeros::<f32>(&[1, 10]));
let ref x = ag::placeholder(&[-1, 28 * 28]);
let ref y = ag::placeholder(&[-1]);
let ref z = ag::matmul(x, w) + b;
let ref loss = ag::sparse_softmax_cross_entropy(z, y);
let ref params = [w, b];
let ref grads = ag::grad(&[loss], params);
let ref predictions = ag::argmax(z, -1, true);
let ref accuracy = ag::reduce_mean(&ag::equal(predictions, y), &[0], false);
let ref adam = ag::gradient_descent_ops::Adam::default();
let mut stateful_params = ag::gradient_descent_ops::Adam::vars_with_states(params);
let ref update_ops = adam.compute_updates(&stateful_params, grads);

// -- dataset --
// (`dataset::load` is a helper in the example code, not part of the crate.)
let ((x_train, y_train), (x_test, y_test)) = dataset::load();

// -- training loop --
for epoch in 0..max_epoch {
    // mini-batch updates; a sketch of the body follows below
}
```
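Continuing the snippet above, the elided loop body feeds mini-batches into the placeholders and evaluates `update_ops` for their side effects. This is a hypothetical sketch: `num_samples`, `batch_size`, and the slicing are our assumptions, not crate API.

```rust
// Hypothetical epoch body continuing the example above.
let num_batches = num_samples / batch_size;
for i in 0..num_batches {
    let beg = i * batch_size;
    let end = beg + batch_size;
    // Views over the current mini-batch.
    let x_batch = x_train.slice(ndarray::s![beg..end, ..]).into_dyn();
    let y_batch = y_train.slice(ndarray::s![beg..end]).into_dyn();
    // Feed the placeholders and run the Adam update ops.
    ag::eval(update_ops, &[ag::Feed(x, x_batch), ag::Feed(y, y_batch)]);
}
```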
ConvNet and LSTM examples can be found in the examples directory.
### Hooks

You can register hooks on `ag::Tensor` objects for debugging.
```rust
extern crate autograd as ag;

// `.p()` is a shorthand for `.with(ag::Hook::Print)`.
let a: ag::Tensor<f32> = ag::zeros(&[4, 2]).p();
let b: ag::Tensor<f32> = ag::ones(&[2, 3]);
let c = ag::matmul(a, b);

c.eval(&[]);
// Zeros:
// [[0.0, 0.0],
//  [0.0, 0.0],
//  [0.0, 0.0],
//  [0.0, 0.0]] shape=[4, 2], strides=[2, 1], layout=C (0x1)
```
## Why Rust?

- No need for bridges to fast languages. The entire logic, including hotspots (kernels etc.), is implemented in pure Rust without compromising performance.
- Memory safety. For example, Rust's lifetime checker makes it possible to implement zero-copy computation graphs without a GC.
For more, see the documentation or the examples.