Crate neuronika

The neuronika crate provides auto-differentiation and dynamic neural networks.

Neuronika is a machine learning framework written in pure Rust, built with a focus on ease of use, fast experimentation and performance.

Highlights

  • Define-by-run computational graphs.
  • Reverse-mode automatic differentiation.
  • Dynamic neural networks.

Variables

The main building blocks of neuronika are variables and differentiable variables. This means that when using this crate you will be handling and manipulating instances of Var and VarDiff.

Variables are lean and powerful abstractions over the computational graph’s nodes. Neuronika empowers you to imperatively build and differentiate such graphs with a minimal amount of code and effort.

Both differentiable and non-differentiable variables can be understood as tensors. You can perform all the basic arithmetic operations on them, such as: +, -, * and /. Refer to Var and VarDiff for a complete list of the available operations.
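
Basic arithmetic composes directly. Here is a minimal sketch, assuming variables of matching shape created with the functions shown in the quickstart below:

use neuronika;

let a = neuronika::full((2, 2), 3.); // constant-valued variable
let b = neuronika::ones((2, 2));     // variable filled with ones

// Element-wise operations between variables yield new variables.
let c = (a + b) * neuronika::full((2, 2), 2.);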

It is important to note that cloning variables is extremely memory efficient as only a shallow copy is returned. Cloning a variable is thus the way to go if you need to use it several times.
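
As a small sketch of this pattern, cloning a handle and then reusing the original:

use neuronika;

let x = neuronika::rand((2, 2));

// .clone() returns a cheap shallow copy: both handles refer to the same node.
let y = x.clone() * x;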

The provided API is linear in thought and minimal as it is carefully tailored around you, the user.

Quickstart

If you’re familiar with PyTorch or NumPy, you will easily follow these examples. If not, brace yourself and follow along.

First things first, you should import neuronika.

use neuronika;

Neuronika’s variables are designed to work with the f32 data type (although this may change in the future) and can be initialized in many ways. In the following, we will show some of the possible alternatives:

With random or constant values:

Here, shape determines the shape of the output variable.

let shape = [3, 4];

let rand_variable = neuronika::rand(shape);
let ones_variable = neuronika::ones(shape);
let constant_variable = neuronika::full(shape, 7.);

print!("Full variable:\n{}", constant_variable);

Out:

[[7, 7, 7, 7],
 [7, 7, 7, 7],
 [7, 7, 7, 7]]

From an ndarray array:

use ndarray::array;

let array = array![1., 2.];
let x_ndarray = neuronika::from_ndarray(array);

print!("From ndarray:\n{}", x_ndarray);

Out:

[1, 2]

Accessing the underlying data is possible by using .data():

let dim = (2, 2);

let x = neuronika::rand(dim);

assert_eq!(x.data().dim(), dim);

Leaf Variables

You can create leaf variables by using one of the many provided functions, such as zeros(), ones(), full() and rand(). Refer to the complete list for additional information.

Leaf variables are so called because they form the leaves of the computational graph, as they are not the result of any computation.

Every leaf variable is created as non-differentiable by default. To promote it to a differentiable leaf, i.e. a variable for which you can compute the gradient, use .requires_grad().

Differentiable leaf variables are leaves that have been promoted. You will encounter them very often in your journey through neuronika, as they are the main components of the neural networks’ building blocks. To learn more about them, check the nn module.

Differentiable leaves hold a gradient; you can access it with .grad().
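
For example, a sketch of promoting a leaf and inspecting its gradient, assuming the gradient, like the data, exposes its dimensions through the underlying array:

use neuronika;

let x = neuronika::zeros((2, 2)).requires_grad(); // x is now a differentiable leaf

// Assuming .grad() mirrors .data(): an array view with the same shape as the data.
assert_eq!(x.grad().dim(), (2, 2));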

Differentiability Arithmetic

As stated before, you can manipulate variables by performing operations on them; the results of those computations will also be variables, although not leaf ones.

The result of an operation between two differentiable variables will also be a differentiable variable and the converse holds for non-differentiable variables. However, things behave slightly differently when an operation is performed between a non-differentiable variable and a differentiable one, as the resulting variable will be differentiable.

You can think of differentiability as a sticky property. The table below summarizes how differentiability propagates through variables; a short example follows it.

Operands | Var     | VarDiff
---------|---------|--------
Var      | Var     | VarDiff
VarDiff  | VarDiff | VarDiff
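
A short example of this rule, mixing a non-differentiable and a differentiable variable:

use neuronika;

let x = neuronika::rand(3);                 // Var
let w = neuronika::rand(3).requires_grad(); // VarDiff

// Differentiability sticks: the sum of a Var and a VarDiff is a VarDiff.
let y = x + w;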

Differentiable Ancestors

The differentiable ancestors of a variable are the differentiable leaves of the graph involved in its computation. Obviously, only VarDiff can have a set of ancestors.

You can gain access, via mutable views, to all the ancestors of a variable by iterating through the vector of Param returned by .parameters(). To gain more insight about the role that such components fulfil in neuronika, feel free to check the optim module.
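
As a small sketch, assuming a single differentiable leaf among the ancestors:

use neuronika;

let w = neuronika::rand((3, 3)).requires_grad(); // differentiable leaf
let x = neuronika::rand((2, 3));                 // non-differentiable leaf

let y = x.mm(w);             // w is the only differentiable ancestor of y
let params = y.parameters(); // mutable views (Param) over y's differentiable ancestors
assert_eq!(params.len(), 1);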

Computational Graph

A computational graph is implicitly created as you write your program. You can differentiate it with respect to some of the differentiable leaves, thus populating their gradients, by using .backward().

It is important to note that the computational graph is lazily evaluated: neuronika decouples the construction of the graph from the actual computation of the nodes’ values. You must use .forward() in order to obtain the actual result of the computation.

use neuronika;

let x = neuronika::rand(5);      //----+
let q = neuronika::rand((5, 5)); //    |- Those lines build the graph.
                                 //    |
let y = x.clone().vm(q).vv(x);   //----+
                                 //
y.forward();                     // After .forward() is called y contains the result.

Freeing and keeping the graph

By default, computational graphs persist in the program’s memory. If you want or need to be more conservative about that, you can wrap any arbitrary subset of the computations in an inner scope. This allows the corresponding portion of the graph to be freed when execution reaches the end of the scope.

use neuronika;

let w = neuronika::rand((3, 3)).requires_grad(); // -----------------+
let b = neuronika::rand(3).requires_grad();      //                  |
let x = neuronika::rand((10, 3));                // -----------------+- Leaves are created
                                                 //                  
{                                                // ---+             
     let h = x.mm(w.t()) + b;                    //    | w's and b's
     h.forward();                                //    | grads are   
     h.backward(1.0);                            //    | accumulated
}                                                // ---+             |- Graph is freed and
                                                 // -----------------+  only leaves remain

Modules

data: Data loading and manipulation utilities.

nn: Basic building blocks for neural networks.

optim: Implementations of various optimization algorithms and penalty regularizations.

Structs

Param: Mutable views over a differentiable variable’s data and gradient.

Var: A non-differentiable variable.

VarDiff: A differentiable variable.

Traits

Back-propagation behavior.

Concatenation.

Convolution.

Grouped convolution.

Data representation.

Eval mode behavior.

Forward-propagation behavior.

Gradient representation.

Matrix-matrix multiplication.

Matrix-matrix multiplication with transposed right hand side operand.

Gradient accumulation’s mode.

Stacking.

Vector-matrix multiplication.

Vector-vector multiplication, a.k.a. dot product or inner product.

Functions

Concatenates the variables lhs and rhs along axis.

Creates a variable with an identity matrix of size n.

Creates a variable from a ndarray array that owns its data.

Creates a variable with data filled with a constant value.

Creates a one-dimensional variable with n geometrically spaced elements.

Creates a one-dimensional variable with n evenly spaced elements.

Creates a one-dimensional variable with n logarithmically spaced elements.

Creates a variable with data filled with ones.

Creates a variable with values sampled from a uniform distribution on the interval [0,1).

Creates a one-dimensional variable with elements from start to end spaced by step.

Stacks the variables lhs and rhs along axis.

Creates a variable with zeroed data.