Modules

Traits
- Activate
- ActivateGradient
- ApplyGradient - A trait declaring basic gradient-related routines for a neural network
- ApplyGradientExt - This trait extends the ApplyGradient trait by allowing for momentum-based optimization
- ArrayLike
- Backward - Backward-propagate a delta through the system
- BinaryAction
- Clip - A trait denoting objects capable of being clipped between some minimum and some maximum
- ClipMut - This trait enables in-place tensor clipping; it is implemented for ArrayBase
- CrossEntropy - A trait for computing the cross-entropy loss of a tensor or array
- DefaultLike
- DropOut - Dropout randomly zeroizes elements with a given probability (p)
- FillLike
- Forward - This trait denotes entities capable of performing a single forward step
- Init - A trait for creating custom initialization routines for models or other entities
- InitInplace - This trait enables models to implement custom, in-place initialization methods
- L1Norm - A trait for computing the L1 norm of a tensor or array
- L2Norm - A trait for computing the L2 norm of a tensor or array
- MeanAbsoluteError - A trait for computing the mean absolute error of a tensor or array
- MeanSquaredError - A trait for computing the mean squared error of a tensor or array
- NdLike
- Norm - The Norm trait serves as a unified interface for various normalization routines; at the moment, it provides L1 and L2 techniques
- Numerical - A trait for all numerical types; implements a number of core operations
- OnesLike
- Scalar - The Scalar trait extends the Numerical trait with additional mathematical operations, reducing the overall number of traits required for common machine-learning tasks
- ScalarComplex
- Tensor
- ZerosLike
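To give a feel for how a Forward/Backward trait pair like the one listed above can fit together, here is a minimal sketch. The trait names come from the index, but the exact signatures, the `Linear` struct, and the learning-rate parameter are assumptions for illustration, not the crate's actual definitions.

```rust
/// A single forward step: map an input to an output.
pub trait Forward<Rhs> {
    type Output;
    fn forward(&self, input: &Rhs) -> Self::Output;
}

/// Backward-propagate a delta through the system, updating parameters.
pub trait Backward<Rhs, Delta> {
    /// The delta handed on to the previous layer.
    type Output;
    fn backward(&mut self, input: &Rhs, delta: &Delta, lr: f64) -> Self::Output;
}

/// A minimal scalar linear unit: y = w * x + b.
struct Linear {
    weight: f64,
    bias: f64,
}

impl Forward<f64> for Linear {
    type Output = f64;
    fn forward(&self, input: &f64) -> f64 {
        self.weight * input + self.bias
    }
}

impl Backward<f64, f64> for Linear {
    type Output = f64;
    fn backward(&mut self, input: &f64, delta: &f64, lr: f64) -> f64 {
        // The upstream delta uses the weight *before* the update.
        let upstream = delta * self.weight;
        // d(loss)/d(weight) = delta * x; d(loss)/d(bias) = delta.
        self.weight -= lr * delta * input;
        self.bias -= lr * delta;
        upstream
    }
}

fn main() {
    let mut unit = Linear { weight: 2.0, bias: 1.0 };
    // Forward: 2.0 * 3.0 + 1.0 = 7.0
    assert_eq!(unit.forward(&3.0), 7.0);
    // Backward with delta = 0.5, lr = 0.1:
    // upstream = 0.5 * 2.0 = 1.0; weight -> 1.85; bias -> 0.95
    let upstream = unit.backward(&3.0, &0.5, 0.1);
    assert_eq!(upstream, 1.0);
    assert!((unit.weight - 1.85).abs() < 1e-12);
    assert!((unit.bias - 0.95).abs() < 1e-12);
    println!("forward/backward sketch ok");
}
```

Splitting the forward and backward passes into separate traits lets a type opt into inference-only use (implementing only Forward) while trainable layers implement both.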