Struct neuronika::Var
pub struct Var<T: Data + 'static> { /* fields omitted */ }
A non-differentiable variable.
This, together with its differentiable counterpart VarDiff, is the main building block of every computation.
Conceptually, it can be thought of as an ndarray::Array for which the computations are automatically kept track of.
Implementations
Promotes self to a differentiable variable. A subsequent call to .backward() will compute its gradient.
Examples
This is the preferred usage.
use neuronika;
let x = neuronika::ones(5).requires_grad();
This is also permitted; however, one should be aware of the difference between x_diff and x.
use neuronika;
let x = neuronika::ones(5);
let y = x.clone() + neuronika::ones(1);
let x_diff = x.requires_grad();
Propagates the computations forwards and populates all the variables from the leaves of the graph to self.
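For instance, using the creation functions that appear in the examples on this page (the 1-D shape argument to full is an assumption here):

use neuronika;
use ndarray;

// Build a small computation lazily; nothing is evaluated yet.
let x = neuronika::ones(3);
let mut y = x + neuronika::full(3, 2.);

// Forward propagation populates `y` starting from the leaves.
y.forward();
assert_eq!(*y.data(), ndarray::array![3., 3., 3.]);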
This has effect only on certain ancestor variables of self. It sets such variables in training mode.
See also .dropout().
Examples
The following snippet shows the effect of several calls placed at different locations inside the program. The last call switches all the dropout variables in training mode.
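A minimal sketch of such a program; the name .train() for this method is an assumption, as only .dropout() is named in this excerpt:

use neuronika;

let x = neuronika::ones(5);
let y = x.dropout(0.3);          // first dropout variable
let z = y.clone().dropout(0.6);  // second dropout variable, downstream of the first

y.train(); // switches only the first dropout variable in training mode
z.train(); // switches all the dropout ancestors of `z`, i.e. both variables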
This has effect only on certain ancestor variables of self. It sets such variables in evaluation mode.
See also .dropout().
Performs a vector-matrix multiplication between the vector variable self and the matrix variable rhs.
If self is n and rhs is (n, m), the output will be m.
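A sketch, assuming this method is named vm (the name is not shown in this excerpt):

use neuronika;
use ndarray;

let v = neuronika::ones(3);           // shape (3)
let m = neuronika::full((3, 2), 2.);  // shape (3, 2)
let mut r = v.vm(m);                  // shape (2)
r.forward();
assert_eq!(*r.data(), ndarray::array![6., 6.]);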
Performs a matrix multiplication between the matrix variables self and rhs.
If self is (n, m) and rhs is (m, o), the output will be (n, o).
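A sketch, assuming this method is named mm:

use neuronika;
use ndarray;

let a = neuronika::ones((2, 3));      // (n, m) = (2, 3)
let b = neuronika::full((3, 4), 3.);  // (m, o) = (3, 4)
let mut c = a.mm(b);                  // (n, o) = (2, 4)
c.forward();
assert_eq!(*c.data(), ndarray::array![[9., 9., 9., 9.],
                                      [9., 9., 9., 9.]]);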
pub fn mm_t<Rhs>(self, rhs: Rhs) -> <Self as MatMatMulT<Rhs>>::Output where
    Self: MatMatMulT<Rhs>,
Performs a matrix multiplication between the matrix variables self and rhs.
This is a fused operation, as rhs is implicitly transposed. Fusing the two operations is marginally faster than computing the matrix multiplication and the transposition separately.
If self is (n, m) and rhs is (o, m), the output will be (n, o).
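For example:

use neuronika;
use ndarray;

let a = neuronika::ones((2, 3));      // (n, m) = (2, 3)
let b = neuronika::full((4, 3), 2.);  // (o, m) = (4, 3)
let mut c = a.mm_t(b);                // (n, o) = (2, 4)
c.forward();
assert_eq!(*c.data(), ndarray::array![[6., 6., 6., 6.],
                                      [6., 6., 6., 6.]]);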
Returns an immutable reference to the data inside self.
At the variable’s creation the data is filled with zeros. You can populate it with a call to .forward().
Returns a mutable reference to the data inside self.
At the variable’s creation the data is filled with zeros. You can populate it with a call to .forward().
Takes the power of each element in self with exponent exp and returns a variable with the result.
Takes the square root element-wise and returns a variable with the result.
Applies the rectified linear unit element-wise and returns a variable with the result.
ReLU(x) = max(0, x)
Applies the leaky rectified linear unit element-wise and returns a variable with the result.
LeakyReLU(x) = max(0, x) + 0.01 * min(0, x)
Applies the softplus element-wise and returns a variable with the result.
Softplus(x) = log(1 + exp(x))
Applies the sigmoid element-wise and returns a variable with the result.
Applies the tanh element-wise and returns a variable with the result.
Applies the natural logarithm element-wise and returns a variable with the result.
Applies the exponential element-wise and returns a variable with the result.
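A combined sketch of a few of the element-wise operations above; the method names pow, sqrt and relu are assumptions, as this excerpt names only the operations:

use neuronika;
use ndarray;

let x = neuronika::full(3, 4.);
let mut squared = x.clone().pow(2);  // 4^2 = 16, element-wise
let mut rooted = x.clone().sqrt();   // sqrt(4) = 2, element-wise
let mut rectified = x.relu();        // identity here, since every input is positive

squared.forward();
rooted.forward();
rectified.forward();
assert_eq!(*squared.data(), ndarray::array![16., 16., 16.]);
assert_eq!(*rooted.data(), ndarray::array![2., 2., 2.]);
assert_eq!(*rectified.data(), ndarray::array![4., 4., 4.]);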
Applies the softmax to self and returns a variable with the result.
The softmax is applied to all slices along axis, and will re-scale them so that the elements lie in the range [0, 1] and sum to 1.0.
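A sketch, assuming this method is named softmax and takes the axis as a usize:

use neuronika;
use ndarray::Axis;

let x = neuronika::ones((2, 3));
let mut s = x.softmax(1);
s.forward();

// Every slice along axis 1 now sums to 1.
let sums = s.data().sum_axis(Axis(1));
assert!(sums.iter().all(|&total| (total - 1.).abs() < 1e-6));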
Applies the log-softmax to self and returns a variable with the result.
Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower, and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.
See also .softmax().
Returns a variable equivalent to self with its dimensions reversed.
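A sketch, assuming this method is named t, as in ndarray:

use neuronika;

let x = neuronika::ones((2, 3));
let mut xt = x.t();
xt.forward();
assert_eq!(xt.data().shape(), &[3, 2]);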
Applies dropout to self and returns a variable with the result.
It is strongly suggested to use nn::Dropout instead of this method when working with neural networks.
During training, randomly zeroes some of the elements of self with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.
Furthermore, the outputs are scaled by a factor of 1/(1 - p) during training. This means that during evaluation the resulting variable simply computes an identity function.
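A sketch pairing dropout with evaluation mode; the name .eval() for the evaluation-mode switch described above is an assumption:

use neuronika;
use ndarray;

let x = neuronika::ones(4);
let mut d = x.dropout(0.5);
d.eval();     // dropout now behaves as the identity
d.forward();
assert_eq!(*d.data(), ndarray::array![1., 1., 1., 1.]);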
Splits self into a certain number of chunks of size chunk_size, skipping the remainder along each dimension that doesn’t fit evenly.
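A sketch, assuming this method is named chunks and returns one variable per chunk:

use neuronika;

let x = neuronika::ones((4, 2));
let parts = x.chunks((2, 2));  // two (2, 2) chunks, no remainder
assert_eq!(parts.len(), 2);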
Concatenates the given sequence of non-differentiable variables variables, including self, along the given axis, and returns a non-differentiable variable with the results.
Arguments
- variables - sequence of non-differentiable variables.
- axis - axis to concatenate along.
Panics
If the variables have mismatching shapes, apart from along axis, if the variables are empty, if axis is out of bounds or if the result is larger than is possible to represent.
Examples
use std::boxed::Box;
use neuronika;
use ndarray;
let a = neuronika::ones((3, 2));
let b = neuronika::full((3, 2), 4.);
let c = neuronika::full((3, 2), 3.);
let mut d = a.cat(&[Box::new(b), Box::new(c)], 1);
d.forward();
assert_eq!(*d.data(), ndarray::array![[1., 1., 4., 4., 3., 3.],
[1., 1., 4., 4., 3., 3.],
[1., 1., 4., 4., 3., 3.]]);
Stacks the given sequence of non-differentiable variables variables, including self, along the given axis, and returns a non-differentiable variable with the results.
All variables must have the same shape.
Arguments
- variables - sequence of non-differentiable variables.
- axis - axis to stack along.
Panics
If the variables have mismatching shapes, if the variables are empty, if axis is out of bounds or if the result is larger than is possible to represent.
Examples
use std::boxed::Box;
use neuronika;
use ndarray;
let a = neuronika::ones((2, 2));
let b = neuronika::ones((2, 2));
let c = neuronika::ones((2, 2));
let mut d = a.stack(&[Box::new(b), Box::new(c)], 0);
d.forward();
assert_eq!(*d.data(), ndarray::array![[[1., 1.],
[1., 1.]],
[[1., 1.],
[1., 1.]],
[[1., 1.],
[1., 1.]]]);
Trait Implementations
The type of the convolution’s result. See the differentiability arithmetic for more details.
impl<F1, F2, B2, Pad> Convolve<Var<F1>, VarDiff<F2, B2>, Pad> for Var<F1> where
F1: NData + 'static,
F1::Dim: RemoveAxis,
<F1::Dim as Dimension>::Smaller: RemoveAxis,
<<F1::Dim as Dimension>::Smaller as Dimension>::Smaller: ReflPad + ReplPad,
F2: NData<Dim = F1::Dim> + 'static,
B2: Gradient<Dim = F2::Dim> + Overwrite + Display + Debug,
Pad: PaddingMode + 'static,
The type of the convolution’s result. See the differentiability arithmetic for more details.
impl<F1, F2, Pad> ConvolveWithGroups<Var<F1>, Var<F2>, Pad> for Var<F1> where
F1: NData + 'static,
F1::Dim: RemoveAxis,
<F1::Dim as Dimension>::Smaller: RemoveAxis,
<<F1::Dim as Dimension>::Smaller as Dimension>::Smaller: ReflPad + ReplPad,
F2: NData<Dim = F1::Dim> + 'static,
Pad: PaddingMode + 'static,
The type of the grouped convolution’s result. See the differentiability arithmetic for more details.
impl<F1, F2, B2, Pad> ConvolveWithGroups<Var<F1>, VarDiff<F2, B2>, Pad> for Var<F1> where
F1: NData + 'static,
F1::Dim: RemoveAxis,
<F1::Dim as Dimension>::Smaller: RemoveAxis,
<<F1::Dim as Dimension>::Smaller as Dimension>::Smaller: ReflPad + ReplPad,
F2: NData<Dim = F1::Dim> + 'static,
B2: Gradient<Dim = F2::Dim> + Overwrite,
Pad: PaddingMode + 'static,
The type of the grouped convolution’s result. See the differentiability arithmetic for more details.
The type of the matrix-matrix multiplication’s result. See the differentiability arithmetic for more details.
The type of the matrix-matrix multiplication with transposed right hand side operand’s result. See the differentiability arithmetic for more details.
The type of the vector-matrix multiplication’s result. See the differentiability arithmetic for more details.
Auto Trait Implementations
impl<T> !RefUnwindSafe for Var<T>
impl<T> !UnwindSafe for Var<T>
Blanket Implementations
Mutably borrows from an owned value.