This crate focuses on materializing the surface of the headspace. Each surface is a neural network that is dynamically configured: the vertices define the input layer while the tonic, or centroid, defines the network’s output. The hidden layers fill in the remaining space between the input and output layers, using barycentric coordinates as “goalposts” to guide the network’s learning process.
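To ground the geometric vocabulary, here is a minimal sketch of a 2-simplex, its centroid, and a barycentric point. It assumes nothing about this crate’s API; the `Vertex`, `centroid`, and `barycentric` names below are purely illustrative.

```rust
// Illustrative only: a 2-simplex (triad) in the plane, its centroid, and a
// point recovered from barycentric coordinates. None of these names are part
// of this crate's API; they only make the description above concrete.
type Vertex = [f64; 2];

/// The centroid of a 2-simplex: the average of its three vertices. Per the
/// description above, it plays the role of the network's output layer.
fn centroid(v: &[Vertex; 3]) -> Vertex {
    [
        (v[0][0] + v[1][0] + v[2][0]) / 3.0,
        (v[0][1] + v[1][1] + v[2][1]) / 3.0,
    ]
}

/// Recover a point from barycentric coordinates (w0, w1, w2) with
/// w0 + w1 + w2 = 1; these are the "goalposts" for the hidden layers.
fn barycentric(v: &[Vertex; 3], w: [f64; 3]) -> Vertex {
    [
        w[0] * v[0][0] + w[1] * v[1][0] + w[2] * v[2][0],
        w[0] * v[0][1] + w[1] * v[1][1] + w[2] * v[2][1],
    ]
}

fn main() {
    let tri = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]];
    // The centroid is exactly the barycentric point (1/3, 1/3, 1/3).
    assert_eq!(centroid(&tri), barycentric(&tri, [1.0 / 3.0; 3]));
}
```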
Re-exports§
Modules§
- activate
- This module implements various activation functions for neural networks.
- error
- model
- This module implements the model that enables the materialization of the surface of the headspace.
- network
- ops
- This module provides the core operations for tensors, including filling, padding, reshaping, and tensor manipulation.
- params
- This module provides the `ParamsBase` type for the library, which is used to define the parameters of a neural network.
- points
- Points for binding layers to particular locations along a given surface.
- prelude
- traits
- This module provides the core traits for the library, such as `Backward` and `Forward`
Structs§
- Dropout
- The Dropout layer randomly zeroizes inputs with a given probability (`p`). This regularization technique is often used to prevent overfitting.
- HyperparametersIter
- An iterator over the variants of Hyperparameters
- LayerBase
- LecunNormal
- LecunNormal is a truncated normal distribution centered at 0 with a standard deviation calculated as `σ = sqrt(1/n_in)`, where `n_in` is the number of input units. See the initializer sketch after this list.
- ModelFeatures
- The `ModelFeatures` type provides a common way of defining the layout of a model. It is used to define the number of input features, the number of hidden layers, the number of hidden features, and the number of output features.
- ModelParamsBase
- This object is an abstraction over the parameters of a deep neural network model, isolating the necessary parameters from the specific logic within a model. This allows us to easily create additional stores for tracking velocities, gradients, and other metrics we may need.
- PadActionIter
- An iterator over the variants of PadAction
- Padding
- ParamsBase
- The `ParamsBase` struct is a generic container for a set of weights and biases for a model. The implementation is designed around the `ArrayBase` type from the `ndarray` crate, which allows for flexible and efficient storage of multi-dimensional arrays.
- PointKindIter
- An iterator over the variants of PointKind
- StandardModelConfig
- SurfaceModel
- A multi-layer perceptron implementation
- SurfaceModelConfig
- Hyperparameters for the multi-layer perceptron model
- SurfaceNetwork
- A neural network capable of dynamic configuration. Essentially, each network is designed to materialize the surface of a 2-simplex (triad) using barycentric coordinates to define three critical points. These critical points define the minimum number of hidden layers within the network and serve as goalposts that guide the learning process. The remaining points continue this trend, simply mapping each extra hidden layer to another position within space. The vertices of the simplex inform the input layer and are used to find the centroid of the facet. The centroid defines the output layer of the facet, serving as the final piece in a pseudo source-sink dynamic.
- Trainer
- TruncatedNormal
- A truncated normal distribution is similar to a normal distribution; however, any generated value more than two standard deviations from the mean is discarded and re-generated.
- XavierNormal
- Normal Xavier initializers leverage a normal distribution with a mean of 0 and a standard deviation (σ) computed by the formula `σ = sqrt(2/(d_in + d_out))`
- XavierUniform
- Uniform Xavier initializers use a uniform distribution to initialize the weights of a neural network within a given range.
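The initializer entries above quote their standard deviations directly, so a short sketch of that arithmetic may help. The following is illustrative only, assuming the quoted formulas and the `rand`/`rand_distr` crates rather than this crate’s `LecunNormal`, `XavierNormal`, or `TruncatedNormal` types.

```rust
// A minimal sketch of the initializer math quoted above; it does not use
// this crate's initializer types, only the documented formulas.
use rand::prelude::*;
use rand_distr::Normal;

/// LeCun: sigma = sqrt(1 / n_in)
fn lecun_std(n_in: usize) -> f64 {
    (1.0 / n_in as f64).sqrt()
}

/// Xavier (normal): sigma = sqrt(2 / (d_in + d_out))
fn xavier_std(d_in: usize, d_out: usize) -> f64 {
    (2.0 / (d_in + d_out) as f64).sqrt()
}

/// Truncated normal: redraw any sample more than two standard deviations
/// from the mean, matching the `TruncatedNormal` description above.
fn sample_truncated(mean: f64, std: f64, rng: &mut impl Rng) -> f64 {
    let dist = Normal::new(mean, std).expect("std must be finite and positive");
    loop {
        let x = dist.sample(rng);
        if (x - mean).abs() <= 2.0 * std {
            return x;
        }
    }
}

fn main() {
    let mut rng = rand::thread_rng();
    let sigma = lecun_std(64);
    let w: Vec<f64> = (0..64).map(|_| sample_truncated(0.0, sigma, &mut rng)).collect();
    println!("xavier sigma for a 64 -> 32 layer: {}", xavier_std(64, 32));
    println!("first LeCun-initialized weight: {}", w[0]);
}
```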
Enums§
- Error
- The `Error` type enumerates various errors that can occur within the framework.
- Hyperparameters
- NeuralError
- PadAction
- PadError
- PadMode
- ParamsError
- PointKind
- Enumerates the different kinds of points considered by the system
- SurfaceError
- TrainingError
- UtilityError
Traits§
- Activate
- The Activate trait enables the definition of new activation functions, often implemented as fieldless structs.
- ActivateExt
- This trait extends the [`Activate`] trait with a number of additional activation functions and their derivatives. Note: this trait is automatically implemented for any type that implements the [`Activate`] trait, eliminating the need to implement it manually.
- ActivateGradient
- ActivateMut
- A trait for establishing a common mechanism to activate entities in-place.
- Affine
- Apply an affine transformation to a tensor; the affine transformation is defined as `mul * self + add`. See the sketch after this list.
- ApplyGradient
- A trait declaring basic gradient-related routines for a neural network
- ApplyGradientExt
- This trait extends the ApplyGradient trait by allowing for momentum-based optimization
- ArrayLike
- Backward
- Backward propagate a delta through the system
- Biased
- Clip
- A trait denoting objects capable of being clipped between some minimum and some maximum.
- ClipMut
- This trait enables tensor clipping; it is implemented for `ArrayBase`
- Codex
- CrossEntropy
- A trait for computing the cross-entropy loss of a tensor or array
- Decode
- Decode defines a standard interface for decoding data.
- DecrementAxis
- This trait enables an array to remove an axis from itself
- DefaultLike
- DropOut
- [`Dropout`] randomly zeroizes elements with a given probability (`p`).
- Encode
- Encode defines a standard interface for encoding data.
- FillLike
- FloorDiv
- Forward
- This trait denotes entities capable of performing a single forward step
- Gradient
- The `Gradient` trait defines a common interface for all gradients
- Heavyside
- IncrementAxis
- Init
- A trait for creating custom initialization routines for models or other entities.
- InitInplace
- This trait enables models to implement custom, in-place initialization methods.
- Initialize
- This trait provides the base methods required for initializing tensors with random values. The trait is similar to the `RandomExt` trait provided by the `ndarray_rand` crate; however, it is designed to be more generic, extensible, and optimized for neural network initialization routines. Initialize is implemented for `ArrayBase` as well as `ParamsBase`, allowing you to randomly initialize new tensors and parameters.
- IntoAxis
- Inverse
- This trait enables the inversion of a matrix
- IsSquare
- L1Norm
- A trait for computing the L1 norm of a tensor or array
- L2Norm
- A trait for computing the L2 norm of a tensor or array
- LinearActivation
- MaskFill
- This trait is used to fill an array with a value based on a mask. The mask is a boolean array of the same shape as the array.
- Matmul
- A trait denoting objects capable of matrix multiplication.
- Matpow
- A trait denoting objects capable of matrix exponentiation
- MeanAbsoluteError
- Compute the mean absolute error (MAE) of the object.
- MeanSquaredError
- Compute the mean squared error (MSE) of the object.
- Model
- The base interface for all models; each model provides access to a configuration object defined as the associated type `Config`. The configuration object is used to provide hyperparameters and other control-related parameters. In addition, the model’s layout is defined by the `features` method, which aptly returns a copy of its ModelFeatures object.
- ModelExt
- ModelLayout
- NdActivateMut
- NdLike
- NetworkConfig
- Norm
- The Norm trait serves as a unified interface for various normalization routines. At the moment, the trait provides L1 and L2 techniques.
- Numerical
- Numerical is a trait for all numerical types; implements a number of core operations
- OnesLike
- Pad
- The `Pad` trait defines a padding operation for tensors.
- PercentDiff
- Compute the percentage difference between two values.
- Predict
- Predict isn’t designed to be implemented directly, but rather as a blanket impl for any entity that implements the `Forward` trait. This is primarily used to define the base functionality of the `Model` trait.
- PredictWithConfidence
- This trait extends the `Predict` trait to include a confidence score for the prediction. The confidence score is calculated as the inverse of the variance of the output.
- ReLU
- Root
- RoundTo
- Scalar
- The Scalar trait extends the Numerical trait to include additional mathematical operations, for the purpose of reducing the number of overall traits required to complete various machine-learning tasks.
- Sigmoid
- Softmax
- SoftmaxAxis
- SummaryStatistics
- This trait describes the fundamental methods of summary statistics. These include the mean, standard deviation, variance, and more.
- Tanh
- Tensor
- Train
- This trait defines the training process for the network
- TrainingConfiguration
- Transpose
- The trait denotes the ability to transpose a tensor
- Unsqueeze
- Weighted
- ZerosLike
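As referenced in the `Affine` entry above, here is a minimal sketch of the quoted `mul * self + add` transformation over an `ndarray` array. The `affine` helper is hypothetical and does not use this crate’s trait.

```rust
// A minimal sketch of the `mul * self + add` affine transformation quoted
// for the `Affine` trait above; the trait itself is not used here, and the
// `affine` helper is purely illustrative.
use ndarray::{array, Array2};

fn affine(x: &Array2<f64>, mul: f64, add: f64) -> Array2<f64> {
    // Element-wise: y[i][j] = mul * x[i][j] + add
    x * mul + add
}

fn main() {
    let x: Array2<f64> = array![[1.0, 2.0], [3.0, 4.0]];
    // Scales every element by 2 and shifts it by 1: y = 2 * x + 1
    let y = affine(&x, 2.0, 1.0);
    assert_eq!(y, array![[3.0, 5.0], [7.0, 9.0]]);
}
```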
Functions§
- calculate_pattern_similarity
- Calculate the similarity between two patterns
- clip_gradient
- Clip the gradient to a maximum value.
- clip_inf_nan
- concat_iter
- Creates an n-dimensional array from an iterator of n-dimensional arrays.
- extract_patterns
- Extract common patterns from historical sequences
- floor_div
- Divide two values and round down to the nearest integer.
- genspace
- heavyside
- Heaviside activation function
- hstack
- Stack 1D arrays into a 2D array by stacking them horizontally.
- inverse
- is_similar_pattern
- Check if two patterns are similar enough to be considered duplicates
- layer_norm
- layer_norm_axis
- linarr
- pad
- pad_to
- randc
- Generate a random array of complex numbers with real and imaginary parts in the range [0, 1)
- relu
- The relu activation function: $f(x) = \max(0, x)$
- relu_derivative
- round_to
- Round the given value to the given number of decimal places.
- sigmoid
- The sigmoid activation function: $f(x) = \frac{1}{1 + e^{-x}}$
- sigmoid_derivative
- The derivative of the sigmoid function
- softmax
- Softmax function: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$. See the sketch after this list.
- softmax_axis
- Softmax function along a specific axis: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
- stack_iter
- Creates a larger array from an iterator of smaller arrays.
- stdnorm
- Given a shape, generate a random array using the StandardNormal distribution
- stdnorm_from_seed
- tanh
- The tanh activation function: $f(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$
- tanh_derivative
- The derivative of the tanh function
- tril
- Returns the lower triangular portion of a matrix.
- triu
- Returns the upper triangular portion of a matrix.
- uniform_from_seed
- Creates a random array from a uniform distribution using a given key
- vstack
- Stack 1D arrays into a 2D array by stacking them vertically.
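The activation entries above give their formulas explicitly. The sketch below implements those formulas directly against `ndarray` (with the usual max-subtraction trick for softmax stability) rather than calling this crate’s `relu`, `sigmoid`, or `softmax`.

```rust
// A minimal sketch of the activation formulas quoted above, written against
// `ndarray` directly; these helpers are illustrative, not this crate's API.
use ndarray::{array, Array1};

fn relu(x: &Array1<f64>) -> Array1<f64> {
    // f(x) = max(0, x), applied element-wise
    x.mapv(|v| v.max(0.0))
}

fn sigmoid(x: &Array1<f64>) -> Array1<f64> {
    // f(x) = 1 / (1 + e^{-x})
    x.mapv(|v| 1.0 / (1.0 + (-v).exp()))
}

fn softmax(x: &Array1<f64>) -> Array1<f64> {
    // Subtract the maximum before exponentiating for numerical stability;
    // this leaves f(x_i) = e^{x_i} / sum_j e^{x_j} unchanged.
    let max = x.fold(f64::NEG_INFINITY, |a, &b| a.max(b));
    let e = x.mapv(|v| (v - max).exp());
    let sum = e.sum();
    e / sum
}

fn main() {
    let x = array![-1.0, 0.0, 2.0];
    println!("relu:    {}", relu(&x));
    println!("sigmoid: {}", sigmoid(&x));
    let s = softmax(&x);
    println!("softmax: {} (sums to {})", s, s.sum());
}
```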
Type Aliases§
- LayerDyn
- ModelParams
- NeuralResult
- A type alias for a Result with a NeuralError
- PadResult
- Params
- A type alias for owned parameters
- ParamsView
- A type alias for an immutable view of the parameters
- ParamsViewMut
- A type alias for a mutable view of the parameters
- Result
- A type alias for a Result with an Error
- SurfaceResult
- A type alias for a Result with a SurfaceError
- UniformResult