# autograd

Tensors and differentiable operations backed by [ndarray](https://github.com/rust-ndarray/ndarray).
## Cargo.toml
If you use basic linalg operations, especially matrix multiplications, the `blas` feature is important to speed them up.

```toml
[dependencies]
autograd = { features = ["blas", "<blas-implementation-choice>"] }
```

`<blas-implementation-choice>` must be one of the following (see also [blas-src](https://github.com/blas-lapack-rs/blas-src)):
- `accelerate`: macOS only
- `intel-mkl`: Intel/AMD CPU only. Includes Vector Mathematics (VM) ops
- `openblas`
## Features
### Reverse-mode automatic differentiation using lazy tensors
Here we are just computing partial derivatives of `z = 2x^2 + 3y + 1`.

```rust
use autograd as ag;
use ag::tensor_ops as T;

ag::run(|ctx: &mut ag::Context<_>| {
    let x = ctx.placeholder("x", &[]);
    let y = ctx.placeholder("y", &[]);
    let z = 2. * x * x + 3. * y + 1.;

    // dz/dy
    let gy = &T::grad(&[z], &[y])[0];
    println!("{:?}", gy.eval(ctx)); // => Ok(3.)
});
```
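The gradient reported above can be sanity-checked numerically. Below is a minimal, dependency-free sketch (plain Rust, not part of the autograd API) that approximates the same partial derivatives of `z = 2x^2 + 3y + 1` with central differences:

```rust
// Central-difference approximation of the partials of z = 2x^2 + 3y + 1.
fn z(x: f64, y: f64) -> f64 {
    2.0 * x * x + 3.0 * y + 1.0
}

fn main() {
    let h = 1e-6;
    let (x, y) = (2.0, 1.0);
    // dz/dx = 4x, so 8.0 at x = 2
    let dzdx = (z(x + h, y) - z(x - h, y)) / (2.0 * h);
    // dz/dy = 3 everywhere
    let dzdy = (z(x, y + h) - z(x, y - h)) / (2.0 * h);
    println!("dz/dx ~ {:.3}, dz/dy ~ {:.3}", dzdx, dzdy);
}
```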
### Neural networks
This crate has various low-level features inspired by tensorflow/theano for training neural networks. Since computation graphs require only a bare minimum of heap allocations, the overhead is small even for complex networks.
```rust
// MNIST digits classification with a multi-layer perceptron
use autograd as ag;
use ag::optimizers::adam::Adam;
use ag::prelude::*;
use ag::tensor_ops::*;

let mut env = ag::VariableEnvironment::new();
let rng = ag::ndarray_ext::ArrayRng::<f32>::default();
// Register variables in this env.
env.name("w").set(rng.glorot_uniform(&[28 * 28, 10]));
env.name("b").set(ag::ndarray_ext::zeros(&[1, 10]));
let adam = Adam::default("adam", env.default_namespace().current_var_ids(), &mut env);

for epoch in 0..3 {
    env.run(|ctx| {
        let x = ctx.placeholder("x", &[-1, 28 * 28]);
        let y = ctx.placeholder("y", &[-1]);
        let w = ctx.variable("w");
        let b = ctx.variable("b");
        let z = matmul(x, w) + b;
        let loss = reduce_mean(sparse_softmax_cross_entropy(z, &y), &[0], false);
        let grads = &grad(&[loss], &[w, b]);
        // Feed minibatches with ag::Feeder and call adam.update here.
    });
}
```
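For intuition about the loss used in training loops like the one above: sparse softmax cross-entropy for a single example is `-log(softmax(logits)[label])`. Here is a minimal stdlib sketch (a hypothetical standalone helper, not the crate's implementation):

```rust
// -log(softmax(logits)[label]), computed in a numerically stable way.
fn sparse_softmax_cross_entropy(logits: &[f64], label: usize) -> f64 {
    // Subtract the max logit so exp() cannot overflow.
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let log_sum_exp = logits.iter().map(|l| (l - max).exp()).sum::<f64>().ln() + max;
    log_sum_exp - logits[label]
}

fn main() {
    // The more the correct class dominates the logits, the smaller the loss.
    println!("{:.4}", sparse_softmax_cross_entropy(&[2.0, 1.0, 0.1], 0)); // ~0.4170
    println!("{:.4}", sparse_softmax_cross_entropy(&[2.0, 1.0, 0.1], 2)); // ~2.3170
}
```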
For details, see the [documentation](https://docs.rs/autograd) or the [examples](https://github.com/raskr/rust-autograd/tree/master/examples).