§concision-core
This library provides the core abstractions and utilities for the concision (cnc) machine learning framework.
§Features
- ParamsBase: A structure for defining the parameters within a neural network.
- Backward: This trait establishes a common interface for backward propagation.
- Forward: This trait denotes a single forward pass through a layer of a neural network. (A sketch of how Forward and Backward fit together follows below.)
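To make the relationship between these pieces concrete, here is a minimal sketch of how a Forward/Backward pair of traits is commonly shaped. The signatures are illustrative assumptions, not the crate's actual definitions, which are generic over ndarray types.

```rust
/// Hypothetical, simplified stand-ins for the `Forward` and `Backward`
/// traits described above; the real definitions are more generic.
pub trait Forward<Input> {
    type Output;

    /// compute a single forward pass through the layer
    fn forward(&self, input: &Input) -> Self::Output;
}

pub trait Backward<Input, Delta> {
    type Elem;

    /// backward-propagate a delta through the layer, applying updates
    /// scaled by the learning rate `lr`
    fn backward(&mut self, input: &Input, delta: &Delta, lr: Self::Elem) -> Self::Elem;
}
```

A layer type would then implement Forward for inference and Backward for training, so a network can be expressed as a chain of forward calls followed by a chain of backward calls in reverse order.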
Re-exports§
pub use rand;
pub use rand_distr;
pub use super::Activate;
pub use super::ActivateGradient;
pub use super::error::ParamsError;
pub use super::params::ParamsBase;
pub use super::Params;
pub use super::ParamsView;
pub use super::ParamsViewMut;
Modules§
- activate - This module implements various activation functions for neural networks.
- error - This module implements the core Error type for the framework and provides a Result type alias for convenience.
- init - This module establishes generic random initialization routines for models, params, and tensors.
- loss - This module provides various loss functions used in training neural networks.
- ops - This module provides the core operations for tensors, including filling, padding, reshaping, and tensor manipulation.
- params - This module implements the ParamsBase struct and its associated types, used to define the parameters of a neural network.
- prelude
- traits - This module provides the core traits for the library, such as Backward and Forward.
- utils - A suite of utilities tailored toward neural networks.
Structs§
- PadActionIter - An iterator over the variants of PadAction
- Padding
Enums§
- Error - The Error type enumerates various errors that can occur within the framework.
- PadAction
- PadError
- PadMode
- UtilityError
Traits§
- ActivateExt - This trait extends the [Activate] trait with a number of additional activation functions and their derivatives. Note: this trait is automatically implemented for any type that implements the [Activate] trait, eliminating the need to implement it manually.
- ActivateMut - A trait for establishing a common mechanism to activate entities in-place.
- Affine - Apply an affine transformation to a tensor; the transformation is defined as mul * self + add (see the sketch after this list).
- ApplyGradient - A trait declaring basic gradient-related routines for a neural network.
- ApplyGradientExt - This trait extends the ApplyGradient trait by allowing for momentum-based optimization.
- ArrayLike
- AsComplex
- Backward - Backward-propagate a delta through the system.
- Biased
- Clip - A trait denoting objects capable of being clipped between some minimum and some maximum.
- ClipMut - This trait enables tensor clipping; it is implemented for ArrayBase.
- Codex
- CrossEntropy - A trait for computing the cross-entropy loss of a tensor or array.
- Decode - Decode defines a standard interface for decoding data.
- DecrementAxis - This trait enables an array to remove an axis from itself.
- DefaultLike
- DropOut - [Dropout] randomly zeroizes elements with a given probability (p); a sketch follows this list.
- Encode - Encode defines a standard interface for encoding data.
- FillLike
- FloorDiv
- Forward - This trait denotes entities capable of performing a single forward step.
- Gradient - The Gradient trait defines a common interface for all gradients.
- Heavyside
- IncrementAxis
- Init - A trait for creating custom initialization routines for models or other entities.
- InitInplace - This trait enables models to implement custom, in-place initialization methods.
- IntoAxis
- IntoComplex - Trait for converting a type into a complex number.
- Inverse - This trait enables the inversion of a matrix.
- IsSquare
- L1Norm - A trait for computing the L1 norm of a tensor or array.
- L2Norm - A trait for computing the L2 norm of a tensor or array.
- LinearActivation
- MaskFill - This trait is used to fill an array with a value based on a mask; the mask is a boolean array of the same shape as the array.
- Matmul - A trait denoting objects capable of matrix multiplication.
- Matpow - A trait denoting objects capable of matrix exponentiation.
- MeanAbsoluteError - Compute the mean absolute error (MAE) of the object.
- MeanSquaredError - Compute the mean squared error (MSE) of the object.
- NdActivateMut
- NdLike
- Norm - The Norm trait serves as a unified interface for various normalization routines. At the moment, the trait provides L1 and L2 techniques.
- Numerical - Numerical is a trait for all numerical types; it implements a number of core operations.
- OnesLike
- Pad - The Pad trait defines a padding operation for tensors.
- PercentDiff - Compute the percentage difference between two values (see the note after this list).
- ReLU
- Root
- RoundTo
- Scalar - The Scalar trait extends the Numerical trait with additional mathematical operations, reducing the number of traits required for various machine-learning tasks.
- ScalarComplex
- Sigmoid
- Softmax
- SoftmaxAxis
- SummaryStatistics - This trait describes the fundamental methods of summary statistics, including the mean, standard deviation, variance, and more.
- Tanh
- Tensor
- Transpose - This trait denotes the ability to transpose a tensor.
- Unsqueeze
- Weighted
- ZerosLike
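The Affine trait's documented formula, mul * self + add, translates directly into a small example. The trait shown below is a hypothetical stand-in modeled on that formula (the crate's actual signature may differ), implemented here for ndarray's Array1<f64>:

```rust
use ndarray::Array1;

/// hypothetical stand-in for the `Affine` trait, modeled on the
/// documented formula `mul * self + add`
trait Affine<A> {
    type Output;

    fn affine(&self, mul: A, add: A) -> Self::Output;
}

impl Affine<f64> for Array1<f64> {
    type Output = Array1<f64>;

    // `mul * self + add`, exactly as documented
    fn affine(&self, mul: f64, add: f64) -> Self::Output {
        self * mul + add
    }
}

fn main() {
    let x = Array1::from(vec![1.0, 2.0, 3.0]);
    // scale by 2, shift by 1 -> [3.0, 5.0, 7.0]
    println!("{}", x.affine(2.0, 1.0));
}
```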
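DropOut's behaviour can likewise be sketched from scratch. Zeroing each element with probability p is documented above; the 1/(1 - p) rescaling of survivors (inverted dropout) is an assumption about the implementation. Note that the crate re-exports rand, which the sketch uses:

```rust
use ndarray::Array1;
use rand::Rng;

/// zero each element with probability `p`; survivors are rescaled by
/// 1/(1 - p) so the expected activation is unchanged (inverted dropout,
/// an assumed detail)
fn dropout(x: &Array1<f64>, p: f64) -> Array1<f64> {
    let mut rng = rand::thread_rng();
    x.mapv(|v| if rng.gen::<f64>() < p { 0.0 } else { v / (1.0 - p) })
}

fn main() {
    let x = Array1::from(vec![1.0, 2.0, 3.0, 4.0]);
    // on average, half of the entries are zeroed
    println!("{}", dropout(&x, 0.5));
}
```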
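The formula behind PercentDiff is not reproduced in this listing. A common convention, which this crate may or may not follow, normalizes the signed difference by the mean of the two values:

$\text{percent\_diff}(a, b) = \frac{a - b}{\frac{1}{2}(a + b)}$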
Functions§
- calculate_pattern_similarity - Calculate the similarity between two patterns.
- clip_gradient - Clip the gradient to a maximum value.
- clip_inf_nan
- concat_iter - Creates an n-dimensional array from an iterator of n-dimensional arrays.
- extract_patterns - Extract common patterns from historical sequences.
- floor_div - Divide two values and round down to the nearest integer.
- genspace
- heavyside - Heaviside activation function.
- hstack - Stack 1D arrays into a 2D array by stacking them horizontally.
- inverse
- is_similar_pattern - Check if two patterns are similar enough to be considered duplicates.
- layer_norm
- layer_norm_axis
- linarr
- pad
- pad_to
- relu - The ReLU activation function: $f(x) = \max(0, x)$ (reference sketches of the activation functions follow this list).
- relu_derivative
- round_to - Round the given value to the given number of decimal places.
- sigmoid - The sigmoid activation function: $f(x) = \frac{1}{1 + e^{-x}}$
- sigmoid_derivative - The derivative of the sigmoid function.
- softmax - Softmax function: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
- softmax_axis - Softmax function applied along a specific axis: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
- stack_iter - Creates a larger array from an iterator of smaller arrays.
- tanh - The tanh activation function: $f(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$
- tanh_derivative - The derivative of the tanh function.
- tril - Returns the lower triangular portion of a matrix (see the sketch after this list).
- triu - Returns the upper triangular portion of a matrix.
- vstack - Stack 1D arrays into a 2D array by stacking them vertically.
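The activation formulas listed above translate directly into code. These reference sketches follow the documented formulas for scalar inputs and a 1D ndarray; the max-subtraction in softmax is a standard numerical-stability detail assumed here, and the crate's own generic implementations may differ.

```rust
use ndarray::Array1;

/// f(x) = max(0, x)
fn relu(x: f64) -> f64 {
    x.max(0.0)
}

/// f(x) = 1 / (1 + e^(-x))
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

/// f(x_i) = e^(x_i) / sum_j e^(x_j); the maximum is subtracted first
/// for numerical stability (an assumed implementation detail)
fn softmax(x: &Array1<f64>) -> Array1<f64> {
    let max = x.fold(f64::NEG_INFINITY, |m, &v| m.max(v));
    let exp = x.mapv(|v| (v - max).exp());
    let sum = exp.sum();
    exp / sum
}

fn main() {
    println!("{}", relu(-1.5)); // 0
    println!("{}", sigmoid(0.0)); // 0.5
    println!("{}", softmax(&Array1::from(vec![1.0, 2.0, 3.0])));
}
```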
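Finally, a sketch of the documented tril behaviour (triu is the mirror image): keep the lower triangle and zero everything above the main diagonal. The concrete f64 element type is an assumption; the crate's version is generic.

```rust
use ndarray::{array, Array2};

/// keep the lower triangle (j <= i), zeroing entries above the diagonal
fn tril(a: &Array2<f64>) -> Array2<f64> {
    let mut out = a.clone();
    for ((i, j), v) in out.indexed_iter_mut() {
        if j > i {
            *v = 0.0;
        }
    }
    out
}

fn main() {
    let a = array![[1.0, 2.0], [3.0, 4.0]];
    // -> [[1, 0], [3, 4]]
    println!("{}", tril(&a));
}
```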