Struct neuronika::Var

pub struct Var<T: Data + 'static> { /* fields omitted */ }

A non-differentiable variable.

This, together with its differentiable counterpart VarDiff, is the main building block of every computation.

Conceptually, it can be thought of as an ndarray::Array whose computations are automatically tracked.

Implementations

Promotes self to a differentiable variable. A subsequent call to .backward() will compute its gradient.

Examples

This is the preferred usage.

 use neuronika;

 let x = neuronika::ones(5).requires_grad();

This is also permitted; however, one should be aware of the difference between x_diff and x.

 use neuronika;

 let x = neuronika::ones(5);
 let y = x.clone() + neuronika::ones(1);

 let x_diff = x.requires_grad();

Assigns array to the variable’s data.

Arguments

array - new content.

Propagates the computations forwards and populates all the variables from the leaves of the graph to self.

This has an effect only on certain ancestor variables of self, namely those that behave differently between training and evaluation, such as dropout. It sets such variables to training mode.

See also .dropout().

Examples

Several calls can be placed at different locations inside the program; the last call takes precedence and switches all of the dropout variables to training mode.

This has an effect only on certain ancestor variables of self, namely those that behave differently between training and evaluation, such as dropout. It sets such variables to evaluation mode.

See also .dropout().

Performs a vector-matrix multiplication between the vector variable self and the matrix variable rhs.

If self is n and rhs is (n, m) the output will be m.

Vector-vector product, a.k.a. scalar product or inner product.

Performs the scalar product between the two vector variables self and rhs.

Performs a matrix multiplication between the matrix variables self and rhs. If self is (n, m) and rhs is (m, o) the output will be (n, o).
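
The (n, m) × (m, o) → (n, o) shape rule can be illustrated with a plain-Rust sketch; this is not neuronika code, and the helper name matmul is hypothetical:

```rust
// Naive matrix multiplication over row-major Vec-backed matrices,
// illustrating the (n, m) x (m, o) -> (n, o) shape rule.
fn matmul(a: &[f32], b: &[f32], n: usize, m: usize, o: usize) -> Vec<f32> {
    let mut out = vec![0.0; n * o];
    for i in 0..n {
        for k in 0..m {
            for j in 0..o {
                out[i * o + j] += a[i * m + k] * b[k * o + j];
            }
        }
    }
    out
}

fn main() {
    // (2, 3) x (3, 2) -> (2, 2)
    let a = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
    let b = vec![1.0, 0.0, 0.0, 1.0, 1.0, 1.0];
    let c = matmul(&a, &b, 2, 3, 2);
    assert_eq!(c, vec![4.0, 5.0, 10.0, 11.0]);
}
```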

Performs a matrix multiplication between the matrix variables self and rhs. This is a fused operation, as rhs is implicitly transposed. Fusing the two operations is marginally faster than computing the matrix multiplication and the transposition separately.

If self is (n, m) and rhs is (o, m) the output will be (n, o).

Performs a matrix-vector multiplication between the matrix variable self and the vector variable rhs.

If self is (n, m) and rhs is m the output will be n.

Returns an immutable reference to the data inside self.

At the variable’s creation the data is filled with zeros. You can populate it with a call to .forward().

Returns a mutable reference to the data inside self.

At the variable’s creation the data is filled with zeros. You can populate it with a call to .forward().

Returns the sum of all elements in self.

Returns the mean of all elements in self.

Raises each element in self to the power exp and returns a variable with the result.

Takes the square root element-wise and returns a variable with the result.

Applies the rectified linear unit element-wise and returns a variable with the result.

ReLU(x) = max(0, x)

Applies the leaky rectified linear unit element-wise and returns a variable with the result.

LeakyReLU(x) = max(0, x) + 0.01 * min(0, x)

Applies the softplus element-wise and returns a variable with the result.

Softplus(x) = log(1 + exp(x))
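
The three activation formulas above can be sketched as scalar functions in plain Rust (illustrative only, not neuronika's implementation):

```rust
// Scalar versions of the element-wise activations described above.
fn relu(x: f32) -> f32 {
    // ReLU(x) = max(0, x)
    x.max(0.0)
}

fn leaky_relu(x: f32) -> f32 {
    // LeakyReLU(x) = max(0, x) + 0.01 * min(0, x)
    x.max(0.0) + 0.01 * x.min(0.0)
}

fn softplus(x: f32) -> f32 {
    // Softplus(x) = log(1 + exp(x)), a smooth approximation of ReLU
    (1.0 + x.exp()).ln()
}

fn main() {
    assert_eq!(relu(-2.0), 0.0);
    assert!((leaky_relu(-2.0) + 0.02).abs() < 1e-6);
    // Softplus at zero equals ln(2)
    assert!((softplus(0.0) - (2.0f32).ln()).abs() < 1e-6);
}
```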

Applies the sigmoid element-wise and returns a variable with the result.

Applies the tanh element-wise and returns a variable with the result.

Applies the natural logarithm element-wise and returns a variable with the result.

Applies the exponential element-wise and returns a variable with the result.

Applies the softmax to self and returns a variable with the result.

The softmax is applied to all slices along axis, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1.0.
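
A plain-Rust sketch of the rescaling over one slice (not neuronika's implementation; the helper name softmax is hypothetical):

```rust
// Softmax over a slice: exponentiate, then normalize so the elements
// lie in [0, 1] and sum to 1.
fn softmax(xs: &[f32]) -> Vec<f32> {
    // Subtract the max before exponentiating for numerical stability.
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let p = softmax(&[1.0, 2.0, 3.0]);
    let total: f32 = p.iter().sum();
    assert!((total - 1.0).abs() < 1e-6);
    assert!(p.iter().all(|&x| (0.0..=1.0).contains(&x)));
}
```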

Applies the log-softmax to self and returns a variable with the result.

Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing the two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.

See also .softmax().
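
The stable formulation can be sketched in plain Rust: shifting by the maximum turns log(sum(exp(x))) into max + log(sum(exp(x - max))), so exp never overflows (illustrative only, not neuronika's implementation):

```rust
// Numerically stable log-softmax: shift by the max before exponentiating,
// so no intermediate exp() overflows even for large inputs.
fn log_softmax(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let log_sum: f32 = xs.iter().map(|x| (x - max).exp()).sum::<f32>().ln();
    xs.iter().map(|x| x - max - log_sum).collect()
}

fn main() {
    // The naive log(softmax(x)) would overflow in exp(1000.0).
    let out = log_softmax(&[1000.0, 1000.0]);
    assert!((out[0] - (0.5f32).ln()).abs() < 1e-6);
}
```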

Returns a variable equivalent to self with its dimensions reversed.

Applies dropout to self and returns a variable with the result.

It is strongly suggested to use nn::Dropout instead of this method when working with neural networks.

During training, randomly zeroes some of the elements of self with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.

This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.

Furthermore, the outputs are scaled by a factor of 1/(1 - p) during training. This means that during evaluation the resulting variable simply computes an identity function.
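
The scaling can be sketched with a precomputed mask (no RNG, so the example is deterministic); this is an illustration of the 1/(1 - p) rule, not neuronika's implementation:

```rust
// Inverted dropout with a precomputed Bernoulli mask: kept elements are
// scaled by 1 / (1 - p) so the expected value of the output is unchanged.
fn dropout_with_mask(xs: &[f32], mask: &[bool], p: f32) -> Vec<f32> {
    let scale = 1.0 / (1.0 - p);
    xs.iter()
        .zip(mask)
        .map(|(&x, &keep)| if keep { x * scale } else { 0.0 })
        .collect()
}

fn main() {
    // With p = 0.5, surviving elements are doubled.
    let out = dropout_with_mask(&[1.0, 1.0, 1.0, 1.0], &[true, false, true, false], 0.5);
    assert_eq!(out, vec![2.0, 0.0, 2.0, 0.0]);
}
```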

Splits self into chunks of size chunk_size along each dimension, skipping the remainder that doesn't fit evenly.
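
A one-dimensional analogue of this remainder-skipping behaviour is the standard library's chunks_exact (plain-Rust sketch, not neuronika code):

```rust
// One-dimensional analogue of chunking with the remainder skipped.
fn chunk(data: &[i32], chunk_size: usize) -> Vec<&[i32]> {
    // chunks_exact yields only full-size chunks and drops the remainder.
    data.chunks_exact(chunk_size).collect()
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6, 7];
    let chunks = chunk(&data, 3);
    // The trailing element 7 doesn't fit evenly and is skipped.
    assert_eq!(chunks.len(), 2);
    assert_eq!(chunks[0], &[1, 2, 3]);
    assert_eq!(chunks[1], &[4, 5, 6]);
}
```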

Returns a new variable with a dimension of size one inserted at the position specified by axis.
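
At the shape level this is just an insertion of a size-one axis, which can be sketched in plain Rust (the helper name unsqueeze_shape is hypothetical):

```rust
// Shape-level sketch of unsqueeze: insert a size-one axis at `axis`.
fn unsqueeze_shape(shape: &[usize], axis: usize) -> Vec<usize> {
    let mut s = shape.to_vec();
    s.insert(axis, 1);
    s
}

fn main() {
    // A (3, 4) variable becomes (1, 3, 4) or (3, 4, 1) depending on axis.
    assert_eq!(unsqueeze_shape(&[3, 4], 0), vec![1, 3, 4]);
    assert_eq!(unsqueeze_shape(&[3, 4], 2), vec![3, 4, 1]);
}
```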

Concatenates the given sequence of non-differentiable variables variables, including self, along the given axis, and returns a non-differentiable variable with the results.

Arguments
  • variables - sequence of non-differentiable variables.

  • axis - the axis to concatenate along.

Panics

If the variables have mismatching shapes apart from along axis, if variables is empty, if axis is out of bounds, or if the result is larger than is possible to represent.

Examples
use std::boxed::Box;
use neuronika;
use ndarray;


let a = neuronika::ones((3, 2));
let b = neuronika::full((3, 2), 4.);
let c = neuronika::full((3, 2), 3.);

let mut d = a.cat(&[Box::new(b), Box::new(c)], 1);
d.forward();

assert_eq!(*d.data(), ndarray::array![[1., 1., 4., 4., 3., 3.],
                                      [1., 1., 4., 4., 3., 3.],
                                      [1., 1., 4., 4., 3., 3.]]);

Stacks the given sequence of non-differentiable variables variables, including self, along the given axis, and returns a non-differentiable variable with the results.

All variables must have the same shape.

Arguments
  • variables - sequence of non-differentiable variables.

  • axis - the axis to stack along.

Panics

If the variables have mismatching shapes, if variables is empty, if axis is out of bounds, or if the result is larger than is possible to represent.

Examples
use std::boxed::Box;
use neuronika;
use ndarray;


let a = neuronika::ones((2, 2));
let b = neuronika::ones((2, 2));
let c = neuronika::ones((2, 2));

let mut d = a.stack(&[Box::new(b), Box::new(c)], 0);
d.forward();

assert_eq!(*d.data(), ndarray::array![[[1., 1.],
                                       [1., 1.]],
                                      [[1., 1.],
                                       [1., 1.]],
                                      [[1., 1.],
                                       [1., 1.]]]);

Trait Implementations

The resulting type after applying the + operator.

Performs the + operation. Read more

The type of the concatenation’s result. See the differentiability arithmetic for more details. Read more

Concatenates variables along the given axis.

Returns a copy of the value. Read more

Performs copy-assignment from source. Read more

The type of the convolution’s result. See the differentiability arithmetic for more details. Read more

Applies an n-dimensional convolution with the given parameters. n can be either 1, 2 or 3. Read more

The type of the grouped convolution’s result. See the differentiability arithmetic for more details. Read more

Applies an n-dimensional grouped convolution with the given parameters. n can be either 1, 2 or 3. Read more

Formats the value using the given formatter. Read more

The resulting type after applying the / operator.

Performs the / operation. Read more

The type of the matrix-matrix multiplication’s result. See the differentiability arithmetic for more details. Read more

Computes the matrix-matrix multiplication between self and other.

The type of the result of the matrix-matrix multiplication with a transposed right-hand side operand. See the differentiability arithmetic for more details. Read more

Computes the matrix-matrix multiplication between self and transposed other.

The resulting type after applying the * operator.

Performs the * operation. Read more

The resulting type after applying the - operator.

Performs the unary - operation. Read more

The type of the stacking’s result. See the differentiability arithmetic for more details. Read more

Stacks variables along the given axis.

The resulting type after applying the - operator.

Performs the - operation. Read more

The type of the vector-matrix multiplication’s result. See the differentiability arithmetic for more details. Read more

Computes the vector-matrix multiplication between self and other.

The type of the dot product’s result. See the differentiability arithmetic for more details. Read more

Computes the dot product between self and other.

Auto Trait Implementations

Blanket Implementations

Gets the TypeId of self. Read more

Immutably borrows from an owned value. Read more

Mutably borrows from an owned value. Read more

Performs the conversion.

The alignment of pointer.

The type for initializers.

Initializes a with the given initializer. Read more

Dereferences the given pointer. Read more

Mutably dereferences the given pointer. Read more

Drops the object pointed to by the given pointer. Read more

The resulting type after obtaining ownership.

Creates owned data from borrowed data, usually by cloning. Read more

🔬 This is a nightly-only experimental API. (toowned_clone_into)

Uses borrowed data to replace owned data, usually by cloning. Read more

Converts the given value to a String. Read more

The type returned in the event of a conversion error.

Performs the conversion.