§concision-core
This library provides the core abstractions and utilities for the concision (cnc) machine learning framework.
§Features
- `ParamsBase`: a structure for defining the parameters within a neural network.
- `Backward`: this trait establishes a common interface for backward propagation.
- `Forward`: this trait denotes a single forward pass through a layer of a neural network (see the sketch below).
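To make the relationship between `Forward` and `Backward` concrete, here is a minimal, self-contained sketch. The trait shapes below are illustrative stand-ins, not the crate's actual definitions (concision's traits are generic over tensor types and their signatures may differ), and the `Neuron` type is hypothetical:

```rust
/// Illustrative stand-in for a forward-pass trait: one pass through a layer.
trait Forward<Input> {
    type Output;
    fn forward(&self, input: &Input) -> Self::Output;
}

/// Illustrative stand-in for a backward-pass trait: propagate a delta.
trait Backward<Input, Delta> {
    type Output;
    fn backward(&mut self, input: &Input, delta: &Delta, lr: f64) -> Self::Output;
}

/// A single neuron with a scalar weight and bias, used as a toy "layer".
struct Neuron {
    weight: f64,
    bias: f64,
}

impl Forward<f64> for Neuron {
    type Output = f64;
    fn forward(&self, input: &f64) -> f64 {
        self.weight * input + self.bias
    }
}

impl Backward<f64, f64> for Neuron {
    type Output = f64;
    /// Apply a gradient step and return the delta for the previous layer.
    fn backward(&mut self, input: &f64, delta: &f64, lr: f64) -> f64 {
        let upstream = delta * self.weight; // delta w.r.t. this layer's input
        self.weight -= lr * delta * input;  // d(w*x + b)/dw = x
        self.bias -= lr * delta;            // d(w*x + b)/db = 1
        upstream
    }
}

fn main() {
    let mut n = Neuron { weight: 0.5, bias: 0.1 };
    let y = n.forward(&2.0);
    let _ = n.backward(&2.0, &(y - 1.0), 0.1);
}
```

The key property the sketch preserves: `forward` consumes an input and produces an output, while `backward` consumes the upstream delta, updates the parameters, and returns the delta for the layer below.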
§Re-exports
- pub use rand;
- pub use rand_distr;
- pub use super::error::ParamsError;
- pub use super::params::ParamsBase;
- pub use super::Params;
- pub use super::ParamsView;
- pub use super::ParamsViewMut;
- pub use concision_init as init;
- pub use concision_utils as utils;
§Modules
- activate
- This module implements various activation functions for neural networks.
- error
- This module implements the core `Error` type for the framework and provides a `Result` type alias for convenience.
- loss
- This module focuses on the loss functions used in training neural networks.
- ops
- This module provides the core operations for tensors, including filling, padding, reshaping, and tensor manipulation.
- params
- This module provides the `ParamsBase` struct and its associated types, which are used to define the parameters of a neural network model.
- traits
- This module provides the core traits for the library, such as `Backward` and `Forward`.
§Structs
- LecunNormal
- `LecunNormal` is a truncated normal distribution centered at 0 with a standard deviation calculated as $\sigma = \sqrt{1/n_{in}}$, where $n_{in}$ is the number of input units (a sketch of these initializers follows this list).
- PadActionIter 
- An iterator over the variants of `PadAction`.
- Padding
- TruncatedNormal 
- A truncated normal distribution is similar to a normal distribution; however, any generated value more than two standard deviations from the mean is discarded and regenerated.
- XavierNormal 
- Normal Xavier initializers leverage a normal distribution with a mean of 0 and a standard deviation ($\sigma$) computed by the formula $\sigma = \sqrt{2/(d_{in} + d_{out})}$.
- XavierUniform 
- Uniform Xavier initializers use a uniform distribution to initialize the weights of a neural network within a given range.
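A compact sketch of the math behind these initializers, written against the re-exported `rand` and `rand_distr` crates. The rejection loop mirrors the two-standard-deviation rule described for `TruncatedNormal`; the $\pm\sqrt{6/(d_{in} + d_{out})}$ bound used for the uniform Xavier variant is the conventional choice and an assumption here, not a quote of the crate's implementation:

```rust
use rand::Rng;
use rand_distr::{Distribution, Normal};

/// LeCun normal: sigma = sqrt(1 / n_in).
fn lecun_sigma(n_in: usize) -> f64 {
    (1.0 / n_in as f64).sqrt()
}

/// Xavier (Glorot) normal: sigma = sqrt(2 / (d_in + d_out)).
fn xavier_sigma(d_in: usize, d_out: usize) -> f64 {
    (2.0 / (d_in + d_out) as f64).sqrt()
}

/// Sample a normal distribution, discarding and regenerating any value
/// more than two standard deviations from the mean (truncated normal).
fn sample_truncated<R: Rng>(rng: &mut R, mean: f64, sigma: f64) -> f64 {
    let normal = Normal::new(mean, sigma).expect("sigma must be finite and positive");
    loop {
        let x = normal.sample(rng);
        if (x - mean).abs() <= 2.0 * sigma {
            return x;
        }
    }
}

fn main() {
    let mut rng = rand::thread_rng();
    let sigma = lecun_sigma(256);
    let w: Vec<f64> = (0..10).map(|_| sample_truncated(&mut rng, 0.0, sigma)).collect();
    println!("{w:?}");

    // Conventional uniform Xavier bound (assumed): limit = sqrt(6 / (d_in + d_out)).
    let limit = (6.0f64 / (256 + 128) as f64).sqrt();
    let u: f64 = rng.gen_range(-limit..limit);
    println!("{u} (xavier normal sigma would be {})", xavier_sigma(256, 128));
}
```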
§Enums
- Error
- The `Error` type enumerates various errors that can occur within the framework.
- PadAction
- PadError
- PadMode
§Traits
- Activate
- The `Activate` trait establishes a common interface for entities that can be activated according to some function.
- ActivateExt 
- This trait extends the `Activate` trait with a number of additional activation functions and their derivatives. Note: this trait is automatically implemented for any type that implements the `Activate` trait, eliminating the need to implement it manually.
- ActivateMut 
- A trait for establishing a common mechanism to activate entities in-place.
- Affine
- Apply an affine transformation to a tensor; the transformation is defined as `mul * self + add` (sketched, along with `DropOut` and `MaskFill`, after this list).
- ApplyGradient 
- A trait declaring basic gradient-related routines for a neural network
- ApplyGradientExt
- This trait extends the `ApplyGradient` trait by allowing for momentum-based optimization.
- ArrayLike 
- AsComplex
- Backward
- Backward propagate a delta through the system.
- Biased
- Clip
- A trait denoting objects capable of being clipped between some minimum and some maximum.
- ClipMut
- This trait enables tensor clipping; it is implemented for `ArrayBase`.
- Codex
- CrossEntropy 
- A trait for computing the cross-entropy loss of a tensor or array
- Decode
- Decode defines a standard interface for decoding data.
- DecrementAxis 
- The `DecrementAxis` trait defines a method enabling an axis to decrement itself, effectively removing an axis from the array.
- DefaultLike 
- DropOut
- `Dropout` randomly zeroizes elements with a given probability (`p`).
- Encode
- Encode defines a standard interface for encoding data.
- FillLike 
- FloorDiv 
- Forward
- This trait denotes entities capable of performing a single forward step
- Gradient
- The `Gradient` trait defines a common interface for all gradients.
- Heavyside
- IncrementAxis 
- The `IncrementAxis` trait defines a method enabling an axis to increment itself, effectively adding a new axis to the array.
- Init
- A trait for creating custom initialization routines for models or other entities.
- InitInplace 
- This trait enables models to implement custom, in-place initialization methods.
- Initialize
- This trait provides the base methods required for initializing tensors with random values.
The trait is similar to the `RandomExt` trait provided by the `ndarray_rand` crate; however, it is designed to be more generic, extensible, and optimized for neural network initialization routines. `Initialize` is implemented for `ArrayBase` as well as `ParamsBase`, allowing you to randomly initialize new tensors and parameters.
- IntoAxis 
- The `IntoAxis` trait is used to define a conversion routine that takes a type and wraps it in an `Axis` type.
- IntoComplex 
- Trait for converting a type into a complex number.
- Inverse
- The `Inverse` trait generically establishes an interface for computing the inverse of a type, regardless of whether it is a tensor, scalar, or some other compatible type.
- IsSquare
- `IsSquare` is a trait for checking whether the layout, or dimensionality, of a tensor is square.
- L1Norm
- A trait for computing the L1 norm of a tensor or array.
- L2Norm
- A trait for computing the L2 norm of a tensor or array.
- LinearActivation 
- Loss
- The `Loss` trait defines a common interface for any custom loss function implementation. This trait requires the implementor to define their algorithm for calculating the loss between two values, `lhs` and `rhs`, which can be of different types, `X` and `Y` respectively. These terms are used generically to allow flexibility in the accepted types, such as tensors, scalars, or other data structures, while clearly defining the "order" in which the operations are performed. It is most common for the `lhs` to be the predicted output and the `rhs` to be the actual output, but this is not a strict requirement. The trait also defines an associated type `Output`, which represents the type of the loss value returned by the `loss` method. This allows different loss functions to return different types of loss values, such as scalars or tensors, depending on the specific implementation (see the sketch after this list).
- MaskFill 
- This trait is used to fill an array with a value based on a mask. The mask is a boolean array of the same shape as the array.
- MatMul
- The `MatMul` trait defines an interface for matrix multiplication.
- MatPow
- The `MatPow` trait defines an interface for computing the exponentiation of a matrix.
- MeanAbsoluteError
- Compute the mean absolute error (MAE) of the object; more formally, the MAE is the average of the absolute differences between the predicted and actual values: $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_{i} - y_{i}\right|$
- MeanSquaredError
- The `MeanSquaredError` (MSE) is the average of the squared differences between the predicted values ($\hat{y}_{i}$) and the actual values ($y_{i}$): $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i} - y_{i}\right)^{2}$
- NdActivateMut 
- NdLike
- Norm
- The `Norm` trait serves as a unified interface for various normalization routines. At the moment, the trait provides L1 and L2 techniques.
- Numerical
- `Numerical` is a trait for all numerical types; it implements a number of core operations.
- OnesLike 
- Pad
- The `Pad` trait defines a padding operation for tensors.
- PercentDiff 
- Compute the percentage difference between two values; the percentage difference is conventionally defined as $\frac{\left|a - b\right|}{(a + b)/2}$, the absolute difference divided by the mean of the two values.
- RawTensor
- The `RawTensor` trait defines the base interface for all tensors.
- ReLU
- Rho
- The `Rho` trait enables the definition of new activation functions, often implemented as fieldless structs.
- RhoGradient
- Root
- The `Root` trait provides methods for computing the nth root of a number.
- RoundTo
- Scalar
- The `Scalar` trait extends the `Numerical` trait with additional mathematical operations, reducing the number of traits required for various machine-learning tasks.
- ScalarComplex 
- Sigmoid
- Softmax
- SoftmaxAxis 
- SummaryStatistics 
- This trait describes the fundamental methods of summary statistics. These include the mean, standard deviation, variance, and more.
- Tanh
- Tensor
- The `Tensor` trait extends the `RawTensor` trait to provide additional functionality for tensors, such as creating tensors from shapes, applying functions, and iterating over elements. It is generic over the element type `A` and the dimension type `D`.
- Transpose
- The `Transpose` trait generically establishes an interface for transposing a type.
- Unsqueeze
- The `Unsqueeze` trait establishes an interface for a routine that unsqueezes an array by inserting a new axis at a specified position. This is useful for reshaping arrays to meet specific dimensional requirements.
- Weighted
- ZerosLike 
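The `Loss` contract described above can be illustrated with a minimal stand-in. The trait below mirrors the described shape (generic `lhs`/`rhs` types plus an associated `Output`) but is a sketch, not the crate's definition, and the `Mse` type is hypothetical:

```rust
/// Illustrative stand-in for the described Loss contract: the implementor
/// chooses the lhs/rhs types and the Output type of the computed loss.
trait Loss<X, Y> {
    type Output;
    fn loss(&self, lhs: &X, rhs: &Y) -> Self::Output;
}

/// Mean squared error over vectors: the average squared difference between
/// predicted (lhs) and actual (rhs) values.
struct Mse;

impl Loss<Vec<f64>, Vec<f64>> for Mse {
    type Output = f64;
    fn loss(&self, lhs: &Vec<f64>, rhs: &Vec<f64>) -> f64 {
        assert_eq!(lhs.len(), rhs.len(), "shapes must match");
        let n = lhs.len() as f64;
        lhs.iter().zip(rhs).map(|(p, a)| (p - a).powi(2)).sum::<f64>() / n
    }
}

fn main() {
    let predicted = vec![0.9, 0.1, 0.4];
    let actual = vec![1.0, 0.0, 0.5];
    println!("mse = {}", Mse.loss(&predicted, &actual));
}
```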
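Likewise, `Affine`, `DropOut`, and `MaskFill` reduce to simple element-wise rules. The following sketch states those rules over plain slices (the crate implements them as traits over `ArrayBase`, so the real signatures differ), with the re-exported `rand` crate assumed for the dropout coin flips:

```rust
use rand::Rng;

/// Affine transform as described above: mul * x + add, element-wise.
fn affine(x: &[f64], mul: f64, add: f64) -> Vec<f64> {
    x.iter().map(|v| mul * v + add).collect()
}

/// Dropout: zeroize each element with probability p.
fn dropout<R: Rng>(rng: &mut R, x: &[f64], p: f64) -> Vec<f64> {
    x.iter()
        .map(|&v| if rng.gen_range(0.0..1.0) < p { 0.0 } else { v })
        .collect()
}

/// Mask fill: where the boolean mask is true, replace the element with `value`.
fn mask_fill(x: &[f64], mask: &[bool], value: f64) -> Vec<f64> {
    x.iter()
        .zip(mask)
        .map(|(&v, &m)| if m { value } else { v })
        .collect()
}

fn main() {
    let x = [1.0, -2.0, 3.0];
    println!("{:?}", affine(&x, 2.0, 1.0)); // [3.0, -3.0, 7.0]
    println!("{:?}", mask_fill(&x, &[false, true, false], 0.0));
    let mut rng = rand::thread_rng();
    println!("{:?}", dropout(&mut rng, &x, 0.5));
}
```

Note that inverted-dropout variants also scale the surviving elements by $1/(1-p)$; the description above only specifies the zeroizing step, so the sketch omits the scaling.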
§Functions
- calculate_pattern_similarity
- Calculate similarity between two patterns
- clip_gradient 
- Clip the gradient to a maximum value.
- clip_inf_nan
- concat_iter 
- Creates an n-dimensional array from an iterator of n-dimensional arrays.
- extract_patterns 
- Extract common patterns from historical sequences
- floor_div 
- Divide two values and round down to the nearest integer.
- genspace
- heavyside
- Heaviside activation function
- hstack
- Stack 1D arrays into a 2D array by stacking them horizontally.
- inverse
- is_similar_pattern
- Check if two patterns are similar enough to be considered duplicates
- layer_norm 
- layer_norm_axis
- linarr
- pad
- pad_to
- randc
- Generate a random array of complex numbers with real and imaginary parts in the range [0, 1)
- relu
- The ReLU activation function: $f(x) = \max(0, x)$ (sketched, together with the other activations, after this list)
- relu_derivative 
- round_to 
- Round the given value to the given number of decimal places.
- sigmoid
- The sigmoid activation function: $f(x) = \frac{1}{1 + e^{-x}}$
- sigmoid_derivative 
- The derivative of the sigmoid function
- softmax
- Softmax function: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
- softmax_axis 
- Softmax function along a specific axis: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
- stack_iter 
- Creates a larger array from an iterator of smaller arrays.
- stdnorm
- Given a shape, generate a random array using the StandardNormal distribution
- stdnorm_from_seed
- tanh
- The tanh activation function: $f(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$
- tanh_derivative 
- The derivative of the tanh function
- tril
- Returns the lower triangular portion of a matrix (see the sketch after this list).
- triu
- Returns the upper triangular portion of a matrix.
- uniform_from_seed
- Creates a random array from a uniform distribution using a given key
- vstack
- Stack 1D arrays into a 2D array by stacking them vertically.
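The activation formulas listed above translate directly into code. A self-contained sketch of scalar versions follows; the crate's functions operate generically over tensors, so these are illustrative only:

```rust
/// ReLU: f(x) = max(0, x).
fn relu(x: f64) -> f64 {
    x.max(0.0)
}

/// Sigmoid: f(x) = 1 / (1 + e^(-x)).
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

/// Derivative of the sigmoid: f'(x) = f(x) * (1 - f(x)).
fn sigmoid_derivative(x: f64) -> f64 {
    let s = sigmoid(x);
    s * (1.0 - s)
}

/// Derivative of tanh: f'(x) = 1 - tanh(x)^2.
fn tanh_derivative(x: f64) -> f64 {
    1.0 - x.tanh().powi(2)
}

/// Softmax: f(x_i) = e^(x_i) / sum_j e^(x_j).
/// Subtracting the max first is the usual trick for numerical stability.
fn softmax(x: &[f64]) -> Vec<f64> {
    let max = x.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = x.iter().map(|v| (v - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    println!("{}", relu(-1.5));   // 0
    println!("{}", sigmoid(0.0)); // 0.5
    println!("{:?}", softmax(&[1.0, 2.0, 3.0]));
    println!("{} {}", sigmoid_derivative(0.0), tanh_derivative(0.0));
}
```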
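Similarly, `tril` and `triu` keep the entries on or below (respectively, on or above) the main diagonal and zero the rest. A sketch over `ndarray`, assumed here as the array backend, consistent with the `ArrayBase`-based traits above:

```rust
use ndarray::Array2;

/// Lower triangular portion: zero out entries above the main diagonal.
fn tril(a: &Array2<f64>) -> Array2<f64> {
    let mut out = a.clone();
    for ((i, j), v) in out.indexed_iter_mut() {
        if j > i {
            *v = 0.0;
        }
    }
    out
}

/// Upper triangular portion: zero out entries below the main diagonal.
fn triu(a: &Array2<f64>) -> Array2<f64> {
    let mut out = a.clone();
    for ((i, j), v) in out.indexed_iter_mut() {
        if j < i {
            *v = 0.0;
        }
    }
    out
}

fn main() {
    let a = Array2::from_shape_fn((3, 3), |(i, j)| (3 * i + j) as f64);
    println!("{}\n{}", tril(&a), triu(&a));
}
```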