concision aims to be a complete machine-learning toolkit written in Rust. The framework is designed to be performant, extensible, and easy to use while offering a wide range of features for building and training machine learning models.
§Features
- ndarray: extensive support for multi-dimensional arrays, enabling efficient data manipulation (see the sketch below).
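As a quick illustration of the kind of multi-dimensional array manipulation this feature refers to, here is a minimal standalone sketch using only the ndarray crate itself; none of the calls below are concision APIs:

```rust
use ndarray::{array, Axis};

fn main() {
    // a 2x3 matrix of f64 values
    let a = array![[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]];

    // column sums: collapse the row axis (Axis(0))
    let col_sums = a.sum_axis(Axis(0));
    assert_eq!(col_sums, array![5.0, 7.0, 9.0]);

    // elementwise transformation of every entry
    let doubled = a.mapv(|x| x * 2.0);
    assert_eq!(doubled[[1, 2]], 12.0);
}
```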
§Long-term goals
- DSL: Create a pseudo-DSL for defining machine learning models and training processes.
- GPU: Support for GPU acceleration to speed up training and inference.
- Interoperability: Integrate with other libraries and frameworks (TensorFlow, PyTorch).
- Visualization: Utilities for visualizing model architectures and training progress.
- WASM: Native support for WebAssembly enabling models to be run in web browsers.
Modules§
- activate - This module implements various activation functions for neural networks.
- data - Datasets and data loaders for the Concision framework.
- error
- init - This module provides the crate with various initialization methods suitable for machine-learning models (see the initialization sketch after this list).
- loss - This module provides various loss functions used in machine learning.
- nn - The neural network abstractions used to create and train models.
- ops
- params - Parameters for constructing neural network models; implemented around the ParamsBase struct and its associated types.
- prelude
- traits
- utils - A suite of utilities tailored toward neural networks.
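To make the init module's purpose concrete, the sketch below implements one common scheme (Xavier/Glorot uniform) by hand. glorot_uniform is a hypothetical helper for illustration, not a concision API; it assumes ndarray-rand with the rand 0.8 family, where Uniform::new returns the distribution directly:

```rust
use ndarray::Array2;
use ndarray_rand::RandomExt;
use ndarray_rand::rand_distr::Uniform;

/// Hypothetical helper: Xavier/Glorot-style uniform initialization for a
/// dense layer's weights. The bound keeps the variance of activations
/// roughly constant across layers.
fn glorot_uniform(rows: usize, cols: usize) -> Array2<f64> {
    let bound = (6.0 / (rows + cols) as f64).sqrt();
    Array2::random((rows, cols), Uniform::new(-bound, bound))
}

fn main() {
    let w = glorot_uniform(64, 32);
    assert_eq!(w.dim(), (64, 32));
}
```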
Structs§
- PadActionIter - An iterator over the variants of PadAction.
- Padding
- ParamsBase - This structure extends the ArrayBase type to include a bias (see the sketch below).
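As a rough mental model of a bias-extended parameter container, the sketch below couples a weight matrix with a bias vector and runs a single forward step. DenseParams is hypothetical and not the crate's actual ParamsBase layout:

```rust
use ndarray::{array, Array1, Array2};

/// Hypothetical stand-in for a parameter container that pairs weights
/// with a bias, as the ParamsBase summary above describes.
struct DenseParams {
    weights: Array2<f64>, // shape: (inputs, outputs)
    bias: Array1<f64>,    // shape: (outputs,)
}

impl DenseParams {
    /// A single forward step: x · W + b.
    fn forward(&self, x: &Array1<f64>) -> Array1<f64> {
        x.dot(&self.weights) + &self.bias
    }
}

fn main() {
    let params = DenseParams {
        weights: array![[0.5, -0.5], [1.0, 2.0]],
        bias: array![0.1, -0.1],
    };
    let y = params.forward(&array![1.0, 2.0]);
    assert_eq!(y, array![2.6, 3.4]);
}
```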
Enums§
- PadAction
Traits§
- Activate - The Activate trait enables the definition of new activation functions, often implemented as fieldless structs.
- ActivateGradient
- Affine - Apply an affine transformation to a tensor; the affine transformation is defined as mul * self + add (see the affine sketch after this list).
- ApplyGradient - A trait declaring basic gradient-related routines for a neural network.
- ApplyGradientExt - This trait extends the ApplyGradient trait by allowing for momentum-based optimization.
- ArrayLike
- Backward - Backward-propagate a delta through the system.
- BinaryAction
- Clip - A trait denoting objects capable of being clipped between some minimum and some maximum (see the norm-and-clip sketch after this list).
- ClipMut - This trait enables tensor clipping; it is implemented for ArrayBase.
- Codex
- CrossEntropy - A trait for computing the cross-entropy loss of a tensor or array.
- Decode - Decode defines a standard interface for decoding data.
- DecrementAxis - This trait enables an array to remove an axis from itself.
- DefaultLike
- DropOut - Dropout randomly zeroizes elements with a given probability (p); see the dropout sketch after this list.
- Encode - Encode defines a standard interface for encoding data.
- FillLike
- FloorDiv
- Forward - This trait denotes entities capable of performing a single forward step.
- Heavyside
- IncrementAxis
- Init - A trait for creating custom initialization routines for models or other entities.
- InitInplace - This trait enables models to implement custom, in-place initialization methods.
- IntoAxis
- Inverse - This trait enables the inversion of a matrix.
- IsSquare
- L1Norm - A trait for computing the L1 norm of a tensor or array.
- L2Norm - A trait for computing the L2 norm of a tensor or array.
- LinearActivation
- MaskFill - This trait is used to fill an array with a value based on a mask; the mask is a boolean array of the same shape as the array.
- Matmul - A trait denoting objects capable of matrix multiplication.
- Matpow - A trait denoting objects capable of matrix exponentiation.
- MeanAbsoluteError - Compute the mean absolute error (MAE) of the object.
- MeanSquaredError - Compute the mean squared error (MSE) of the object (see the loss sketch after this list).
- NdActivate
- NdActivateMut
- NdLike
- Norm - The Norm trait serves as a unified interface for various normalization routines; at the moment, the trait provides L1 and L2 techniques.
- Numerical - Numerical is a trait for all numerical types; it implements a number of core operations.
- OnesLike
- Pad
- PercentDiff - Compute the percentage difference between two values.
- ReLU
- Root
- RoundTo
- Scalar - The Scalar trait extends the Numerical trait to include additional mathematical operations, reducing the number of traits required for common machine-learning tasks.
- Sigmoid
- Softmax
- SoftmaxAxis
- SummaryStatistics - This trait describes the fundamental methods of summary statistics, including the mean, standard deviation, and variance.
- Tanh
- Tensor
- Transpose - This trait denotes the ability to transpose a tensor.
- Unsqueeze
- ZerosLike
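A minimal sketch of the affine transformation described for the Affine trait, computed directly with ndarray rather than through the trait itself:

```rust
use ndarray::array;

fn main() {
    let x = array![[1.0, -2.0], [3.0, 4.0]];

    // affine transformation as described above: mul * self + add
    let (mul, add) = (2.0, 1.0);
    let y = &x * mul + add;
    assert_eq!(y, array![[3.0, -3.0], [7.0, 9.0]]);
}
```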
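Likewise, the quantities behind L1Norm, L2Norm, and Clip reduce to a few elementwise operations; this standalone sketch shows the math the traits name, not the traits themselves:

```rust
use ndarray::array;

fn main() {
    let x = array![3.0, -4.0];

    // L1 norm: sum of absolute values
    let l1 = x.mapv(f64::abs).sum();
    assert_eq!(l1, 7.0);

    // L2 norm: square root of the sum of squares
    let l2 = x.mapv(|v| v * v).sum().sqrt();
    assert_eq!(l2, 5.0);

    // clipping every element into [-3.5, 3.5]
    let clipped = x.mapv(|v| v.clamp(-3.5, 3.5));
    assert_eq!(clipped, array![3.0, -3.5]);
}
```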
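The MeanSquaredError and MeanAbsoluteError summaries can be made concrete the same way; the sketch below computes both metrics by hand:

```rust
use ndarray::array;

fn main() {
    let pred = array![1.0, 2.0, 3.0];
    let target = array![1.5, 2.0, 2.0];

    let diff = &pred - &target;

    // mean squared error: average of squared residuals
    let mse = diff.mapv(|d| d * d).mean().unwrap();
    // mean absolute error: average of absolute residuals
    let mae = diff.mapv(f64::abs).mean().unwrap();

    assert!((mse - (0.25 + 0.0 + 1.0) / 3.0).abs() < 1e-12);
    assert!((mae - (0.5 + 0.0 + 1.0) / 3.0).abs() < 1e-12);
}
```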
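Finally, a sketch of the behavior the DropOut summary describes: each element is zeroized with probability p. The 1/(1 - p) rescaling (inverted dropout) and the rand 0.8 API are assumptions here, not something the summary specifies:

```rust
use ndarray::array;
use rand::Rng;

fn main() {
    let x = array![[1.0, 2.0], [3.0, 4.0]];
    let p = 0.5; // probability of zeroizing an element

    let mut rng = rand::thread_rng();
    // inverted dropout: survivors are rescaled by 1/(1 - p) so the
    // expected value of the output matches the input
    let dropped = x.mapv(|v| if rng.gen::<f64>() < p { 0.0 } else { v / (1.0 - p) });

    assert_eq!(dropped.dim(), (2, 2));
}
```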
Functions§
- calculate_pattern_similarity - Calculate the similarity between two patterns.
- clip_gradient - Clip the gradient to a maximum value.
- clip_inf_nan
- concat_iter - Creates an n-dimensional array from an iterator of n-dimensional arrays.
- extract_patterns - Extract common patterns from historical sequences.
- floor_div
- genspace
- heavyside - Heaviside activation function.
- hstack
- inverse
- is_similar_pattern - Check if two patterns are similar enough to be considered duplicates.
- layer_norm
- layer_norm_axis
- linarr
- pad
- pad_to
- relu
- relu_derivative
- round_to - Round the given value to the given number of decimal places.
- sigmoid - The sigmoid activation function: $f(x) = \frac{1}{1 + e^{-x}}$ (see the activation sketch after this list).
- sigmoid_derivative - The derivative of the sigmoid function.
- softmax
- softmax_axis
- stack_iter - Creates a larger array from an iterator of smaller arrays.
- tanh
- tanh_derivative
- tril - Returns the lower triangular portion of a matrix (see the triangle sketch after this list).
- triu - Returns the upper triangular portion of a matrix.
- vstack
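The listed sigmoid and softmax functions follow the standard definitions. As a standalone illustration (not calls into concision), with the usual max-subtraction trick for a numerically stable softmax:

```rust
use ndarray::array;

fn main() {
    let x = array![-1.0_f64, 0.0, 1.0, 2.0];

    // sigmoid: f(x) = 1 / (1 + e^{-x}), applied elementwise
    let sig = x.mapv(|v| 1.0 / (1.0 + (-v).exp()));
    assert!((sig[1] - 0.5).abs() < 1e-12);

    // softmax: exponentiate after subtracting the max, then normalize
    let max = x.fold(f64::NEG_INFINITY, |m, &v| m.max(v));
    let exp = x.mapv(|v| (v - max).exp());
    let soft = &exp / exp.sum();
    assert!((soft.sum() - 1.0).abs() < 1e-12);
}
```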
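And a sketch of what tril and triu return, built by hand with from_shape_fn rather than the crate's functions:

```rust
use ndarray::{array, Array2};

fn main() {
    let a = array![[1.0, 2.0], [3.0, 4.0]];
    let (rows, cols) = a.dim();

    // lower triangle: keep entries on or below the main diagonal
    let tril = Array2::from_shape_fn((rows, cols), |(i, j)| if i >= j { a[[i, j]] } else { 0.0 });
    // upper triangle: keep entries on or above the main diagonal
    let triu = Array2::from_shape_fn((rows, cols), |(i, j)| if i <= j { a[[i, j]] } else { 0.0 });

    assert_eq!(tril, array![[1.0, 0.0], [3.0, 4.0]]);
    assert_eq!(triu, array![[1.0, 2.0], [0.0, 4.0]]);
}
```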
Type Aliases§
- PadResult
- Params - A type alias for owned parameters.
- ParamsView - A type alias for an immutable view of the parameters.
- ParamsViewMut - A type alias for a mutable view of the parameters.
- Result - A type alias for a Result whose error type is the crate's Error.
Derive Macros§
- Keyed - This macro generates a parameter struct and an enum of parameter keys.