autograd
Provides differentiable operations and tensors.
Features
- Lazy, side-effect-free tensors. An `autograd::Tensor<T>` basically doesn't hold a value itself; it builds immutable graphs that can be executed eagerly at any time, so it naturally supports both run-by-define and define-by-run styles in the context of neural networks (see the short sketch after this list).
- Reverse-mode automatic differentiation. There are many built-in operations that support higher-order derivatives, and you can easily define your own differentiable ops on ndarrays.
- Pure Rust. The graph execution engine is implemented in pure Rust, so it is compilable to WebAssembly.
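As a minimal sketch of the lazy behaviour described above (it reuses the placeholder/feed API from the examples below; the variable names are purely illustrative):

```rust
extern crate ndarray;
extern crate autograd as ag;

// Building the graph has no side effects and computes nothing yet.
let ref a = ag::placeholder(&[]);
let ref b = 3. * a + 2.;

// The value is materialized only when `eval` is called, feeding `a` at that point.
println!("{:?}", b.eval(&[(a, &ndarray::arr0(1.).into_dyn())]));   // => Some(5.)
```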
Installation
```toml
[dependencies]
autograd = "0.9.0"
```

The `mkl` feature is enabled by default to speed up GEMM operations.
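If MKL isn't available on your target (for example when building for WebAssembly), turning off the crate's default features should drop it; this is a sketch assuming `mkl` is only pulled in through the default feature set, as stated above:

```toml
[dependencies]
autograd = { version = "0.9.0", default-features = false }
```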
Examples
Here we are computing partial derivatives of `z = 2x^2 + 3y + 1`.
```rust
extern crate ndarray;
extern crate autograd as ag;

let ref x = ag::placeholder(&[]);
let ref y = ag::placeholder(&[]);
let ref z = 2.*x*x + 3.*y + 1.;

// dz/dy
let gy = &ag::grad(&[z], &[y])[0];
println!("{:?}", gy.eval(&[]));   // => Some(3.)

// dz/dx (requires filling the placeholder `x`)
let gx = &ag::grad(&[z], &[x])[0];
println!("{:?}", gx.eval(&[(x, &ndarray::arr0(2.).into_dyn())]));   // => Some(8.)

// ddz/dx (differentiates `z` again)
let ggx = &ag::grad(&[gx], &[x])[0];
println!("{:?}", ggx.eval(&[]));   // => Some(4.)
```
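For reference, these values agree with the hand-computed derivatives of the expression above:

```latex
\frac{\partial z}{\partial y} = 3, \qquad
\frac{\partial z}{\partial x} = 4x \;(= 8 \text{ at } x = 2), \qquad
\frac{\partial^2 z}{\partial x^2} = 4
```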
Another example: softmax regression for MNIST digits classification with Adam.
```rust
// This achieves 0.918 test accuracy after 3 epochs, 0.14 sec/epoch on 2.7GHz Intel Core i5
let ref w = ag::variable(ag::ndarray_ext::glorot_uniform::<f32>(&[28 * 28, 10]));
let ref b = ag::variable(ag::ndarray_ext::zeros::<f32>(&[1, 10]));
let ref x = ag::placeholder(&[-1, 28 * 28]);
let ref y = ag::placeholder(&[-1]);
let ref z = ag::matmul(x, w) + b;
let ref loss = ag::reduce_mean(&ag::sparse_softmax_cross_entropy(z, y), &[0], false);
let ref params = [w, b];
let ref grads = ag::grad(&[loss], params);
let ref predictions = ag::argmax(z, -1, true);
let ref accuracy = ag::reduce_mean(&ag::equal(predictions, y), &[0], false);
let ref adam = ag::gradient_descent_ops::Adam::default();
let mut stateful_params = ag::gradient_descent_ops::Adam::vars_with_states(params);
let ref update_ops = adam.compute_updates(&stateful_params, grads);

// -- dataset --
// `dataset::load` is a helper defined in the full example in this repository.
let ((x_train, y_train), (x_test, y_test)) = dataset::load();

// -- training loop --
for epoch in 0..max_epoch {
    // ... feed mini-batches and run `update_ops` (see the sketch below)
}
```
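A rough sketch of what the loop body could look like, assuming the tuple-feed `ag::eval` call style from the first example and a hypothetical `get_batch` helper that slices one mini-batch out of `x_train`/`y_train`; the actual loop lives in the repository's examples:

```rust
for epoch in 0..max_epoch {
    for i in 0..num_batches {
        // `get_batch` is a hypothetical helper returning owned ndarray arrays
        // for the i-th mini-batch.
        let (x_batch, y_batch) = get_batch(&x_train, &y_train, i, batch_size);
        // Evaluating `update_ops` applies one Adam step to `w` and `b`.
        ag::eval(update_ops, &[(x, &x_batch), (y, &y_batch)]);
    }
    println!("finished epoch {}", epoch);
}
```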
For more, see the documentation or the examples in this repository.