Crate drug

∂rug - Differentiable Rust Graph

This crate is a collection of utilities to build neural networks (differentiable programs). See the examples for implementations of canonical neural networks; you may need to download the datasets yourself to run them. Examples include:

  • MNIST with dense networks
  • MNIST with convolutional neural networks (though embarrassingly slowly)
  • Penn TreeBank character prediction with RNN and GRU

Planned Future Features

  • Higher level API
    • Building complexes of nodes (conv + bias + relu) / RNN cells, with parameter reuse
    • Subgraphs / updating subsets of graphs (e.g. for GAN) with separate optimizers
  • Parallel backpropagation through the multiple arguments of a single node
  • ndarray-parallel or OpenMPI for graph replication and parallelization
  • Link to some optimized OpenCL maths backend for GPU utilization

Reinforcement learning applications may also challenge the architecture, but I don’t understand the process well enough yet to consider adding it to the library.

Wish list

  • Operator overloading API + Taking advantage of the type system and const generics
    • May require a total overhaul, or may be possible with a “Graph Cursor” trait and more sophisticated handles beyond the current Idxs
  • Automatic differentiation of operations defined only from loops (proc macros?)
  • Taking advantage of just in time compilation and fusion of operations / kernels
  • Other kinds of derivatives, e.g. Jacobians

Re-exports

pub extern crate ndarray;
pub use nodes::Operation;

Modules

nodes: This module holds the different types of nodes that exist in a computation graph. Nodes that represent a differentiable computation are implemented by a struct with the “Operation” trait. Use Graph methods to create and register nodes inside a graph. See Node for the types of node available. This module may eventually be made private…
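The typical workflow this implies is to register nodes through Graph methods and then alternate forward and backward passes (described under Graph below). The sketch that follows is illustrative only: it assumes forward and backward take no arguments, and the node-registration calls shown in comments (input, param, matmul) are hypothetical placeholders rather than the crate's actual constructors.

```
use drug::Graph;

// Minimal training-loop skeleton. Only `forward` and `backward` are named in
// this documentation; everything in the comments is a hypothetical example of
// node registration, not the crate's actual API.
fn train_sketch(graph: &mut Graph, num_steps: usize) {
    // Hypothetical node registration returning Idx handles, e.g.:
    // let x = graph.input(...);        // placeholder for a data batch
    // let w = graph.param(&[784, 10]); // filled by the Xavier initializer
    // let y = graph.matmul(w, x);      // a Node whose struct implements Operation

    for _ in 0..num_steps {
        // Values are computed moving forward in insertion order...
        graph.forward();
        // ...then losses (gradients) are propagated backwards in reverse
        // insertion order and parameters are updated by the graph's
        // optimizer (vanilla stochastic gradient descent by default).
        graph.backward();
    }
}
```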

Structs

Graph: A differentiable computation graph. Use this struct to hold your differentiable program, which is a directed acyclic graph of Nodes with their associated values and losses (gradients). The graph computes values moving forward in insertion order (see the forward method) and propagates losses backwards in reverse insertion order (see the backward method). The default graph comes with a Xavier initializer and a vanilla stochastic gradient descent optimizer.
Idx: A placeholder used to index into a graph. These should not be interchanged between graphs.
Optimizer: Here is a good blog that explains various optimizers. Currently only SGD, RMSProp, Adam, and SGD-with-momentum are implemented. The Optimizer struct builds and holds OptimizerInstances, which hold runtime information about every parameter that’s being optimized. If beta_momentum or beta_magnitude is set to zero, then the optimizer does not keep momentum and magnitude correction information about parameters. epsilon is added to denominators to avoid divide-by-zero errors.
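For intuition, beta_momentum, beta_magnitude, and epsilon play their usual Adam-style roles: an exponential moving average of gradients (momentum) and of squared gradients (magnitude) scales each parameter update. The sketch below is a generic illustration of that update rule, not the crate's OptimizerInstance code, and it omits bias correction.

```
/// Generic Adam-style update for one parameter tensor (illustration only).
fn adam_style_step(
    param: &mut [f32],
    grad: &[f32],
    momentum: &mut [f32],  // running average of gradients
    magnitude: &mut [f32], // running average of squared gradients
    learning_rate: f32,
    beta_momentum: f32,  // e.g. 0.9; zero disables momentum tracking
    beta_magnitude: f32, // e.g. 0.999; zero disables magnitude tracking
    epsilon: f32,        // added to the denominator to avoid division by zero
) {
    for i in 0..param.len() {
        momentum[i] = beta_momentum * momentum[i] + (1.0 - beta_momentum) * grad[i];
        magnitude[i] = beta_magnitude * magnitude[i] + (1.0 - beta_magnitude) * grad[i] * grad[i];
        param[i] -= learning_rate * momentum[i] / (magnitude[i].sqrt() + epsilon);
    }
}
```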

Enums

Type of pooling operation (currently there is only average pooling; TODO: make this an enum with max, average, sum, and min pooling variants). Implements Operation. See the Node constructor for a full description.
Type of padding to use in a Conv node. With no padding, a non-strided convolution shrinks the output by one less than the kernel dimensions, since pixels at the edge cannot be the center of a convolution window. Same padding allows edge pixels to be convolved by assuming the values beyond the image are equal to the edge values. “Zero” padding and “Reflection” padding are not yet implemented.
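For concreteness, the output-size arithmetic for a stride-1 convolution under the two implemented padding modes can be sketched as follows (an illustrative helper, not part of the crate):

```
/// Output spatial size of a stride-1 convolution (illustration only).
fn conv_output_size(input: usize, kernel: usize, same_padding: bool) -> usize {
    if same_padding {
        input // edge pixels are convolved by extending edge values outward
    } else {
        input - kernel + 1 // e.g. a 28x28 input with a 3x3 kernel gives 26x26
    }
}
```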

Functions

Take the softmax of an array of shape (batch_size, num_classes); see the sketch after this list.
A loss function used for classification.
The default (and only provided) initializer. Only works with convolution kernels and matrices.
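As referenced above, a row-wise softmax over a (batch_size, num_classes) array can be sketched with the re-exported ndarray types. This illustrates the operation itself and is not necessarily the exact signature of the crate's function:

```
use ndarray::Array2;

/// Row-wise softmax over a (batch_size, num_classes) array (illustration only).
fn softmax_rows(logits: &Array2<f32>) -> Array2<f32> {
    let mut out = logits.clone();
    for mut row in out.outer_iter_mut() {
        // Subtract the row maximum for numerical stability before exponentiating.
        let max = row.fold(f32::NEG_INFINITY, |m, &x| m.max(x));
        row.mapv_inplace(|x| (x - max).exp());
        let sum = row.sum();
        row.mapv_inplace(|x| x / sum);
    }
    out
}
```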