Crate eryon_surface

This crate focuses on materializing the surface of the headspace. Each surface is a neural network that is dynamically configured: the vertices define the input layer while the tonic, or centroid, defines the network’s output. The hidden layers essentially fill in the remaining space between the input and output layers, using barycentric coordinates as “goalposts” to guide the network’s learning process. A minimal sketch of this geometry appears below.
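
To make the vertex/centroid idea concrete, here is a minimal sketch in plain Rust. The function and layout below are illustrative assumptions only and are not part of this crate’s API:

```rust
// Hypothetical illustration: the three vertices of a 2-simplex (triad)
// inform the input layer, while their centroid -- the point with
// barycentric coordinates (1/3, 1/3, 1/3) -- anchors the output layer.
fn centroid(vertices: &[[f64; 2]; 3]) -> [f64; 2] {
    let mut c = [0.0; 2];
    for v in vertices {
        c[0] += v[0] / 3.0;
        c[1] += v[1] / 3.0;
    }
    c
}

fn main() {
    let vertices = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]];
    // vertices -> input layer; centroid -> output layer; the hidden
    // layers interpolate between them, guided by barycentric coordinates.
    println!("centroid: {:?}", centroid(&vertices)); // [0.3333..., 0.3333...]
}
```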

Re-exports§

pub use cnc::nn;
pub use cnc::utils;
pub use super::point::*;

Modules§

activate
This module implements various activation functions for neural networks.
error
model
This module implements the model that enables the materialization of the surface of the headspace.
network
ops
This module provides the core operations for tensors, including filling, padding, reshaping, and tensor manipulation.
params
This module provides the ParamsBase type for the library, which is used to define the parameters of a neural network. Parameters for constructing neural network models are implemented via the ParamsBase struct and its associated types.
points
Points for binding layers to particular locations along a given surface.
prelude
traits
This module provides the core traits for the library, such as Backward and Forward

Structs§

Dropout
The Dropout layer randomly zeroizes inputs with a given probability (p). This regularization technique is often used to prevent overfitting.
HyperparametersIter
An iterator over the variants of Hyperparameters
LayerBase
LecunNormal
LecunNormal is a truncated normal distribution centered at 0 with a standard deviation calculated as σ = sqrt(1/n_in), where n_in is the number of input units (see the initializer sketch after this list).
ModelFeatures
The ModelFeatures provides a common way of defining the layout of a model. This is used to define the number of input features, the number of hidden layers, the number of hidden features, and the number of output features.
ModelParamsBase
This object is an abstraction over the parameters of a deep neural network model, isolating the necessary parameters from the specific logic within a model and allowing us to easily create additional stores for tracking velocities, gradients, and other metrics we may need.
PadActionIter
An iterator over the variants of PadAction
Padding
ParamsBase
The ParamsBase struct is a generic container for a set of weights and biases for a model. The implementation is designed around the ArrayBase type from the ndarray crate, which allows for flexible and efficient storage of multi-dimensional arrays.
PointKindIter
An iterator over the variants of PointKind
StandardModelConfig
SurfaceModel
A multi-layer perceptron implementation
SurfaceModelConfig
Hyperparameters for the multi-layer perceptron model
SurfaceNetwork
A neural network capable of dynamic configuration. Essentially, each network is designed to materialize the surface of a 2-simplex (triad) using barycentric coordinates to define three critical points. These critical points define the minimum number of hidden layers within the network and serve as goalposts that guide the learning process. The remaining points continue this trend, simply mapping each extra hidden layer to another position within space. The vertices of the simplex inform the input layer and are used to find the centroid of the facet. The centroid defines the output layer of the facet, serving as the final piece in a pseudo sink-source dynamic.
Trainer
TruncatedNormal
A truncated normal distribution is similar to a normal distribution, however, any generated value over two standard deviations from the mean is discarded and re-generated.
XavierNormal
Normal Xavier initializers leverage a normal distribution with a mean of 0 and a standard deviation (σ) computed by the formula: σ = sqrt(2/(d_in + d_out))
XavierUniform
Uniform Xavier initializers use a uniform distribution to initialize the weights of a neural network within a given range.
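
For reference, the sketch below evaluates the standard deviations used by the LecunNormal and XavierNormal initializers documented above. The helper names are hypothetical and not part of this crate’s API; TruncatedNormal additionally resamples any draw that falls more than two standard deviations from the mean:

```rust
// Hypothetical helpers restating the documented initializer formulas.
fn lecun_sigma(n_in: usize) -> f64 {
    // LecunNormal: sigma = sqrt(1 / n_in)
    (1.0 / n_in as f64).sqrt()
}

fn xavier_sigma(d_in: usize, d_out: usize) -> f64 {
    // XavierNormal: sigma = sqrt(2 / (d_in + d_out))
    (2.0 / (d_in + d_out) as f64).sqrt()
}

fn main() {
    // e.g. a layer with 256 inputs and 128 outputs:
    println!("lecun:  {:.5}", lecun_sigma(256));       // 0.06250
    println!("xavier: {:.5}", xavier_sigma(256, 128)); // 0.07217
}
```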

Enums§

Error
The Error type enumerates various errors that can occur within the framework.
Hyperparameters
NeuralError
PadAction
PadError
PadMode
ParamsError
PointKind
Enumerates the different kinds of points considered by the system
SurfaceError
TrainingError
UtilityError

Traits§

Activate
The Activate trait enables the definition of new activation functions, often implemented as fieldless structs (a minimal implementation sketch follows this list).
ActivateExt
This trait extends the [Activate] trait with a number of additional activation functions and their derivatives. Note: this trait is automatically implemented for any type that implements the [Activate] trait, eliminating the need to implement it manually.
ActivateGradient
ActivateMut
A trait for establishing a common mechanism to activate entities in-place.
Affine
apply an affine transformation to a tensor; an affine transformation is defined as mul * self + add
ApplyGradient
A trait declaring basic gradient-related routines for a neural network
ApplyGradientExt
This trait extends the ApplyGradient trait by allowing for momentum-based optimization
ArrayLike
Backward
Backward propagate a delta through the system;
Biased
Clip
A trait denoting objects capable of being clipped between some minimum and some maximum.
ClipMut
This trait enables tensor clipping; it is implemented for ArrayBase
Codex
CrossEntropy
A trait for computing the cross-entropy loss of a tensor or array
Decode
Decode defines a standard interface for decoding data.
DecrementAxis
This trait enables an array to remove an axis from itself
DefaultLike
DropOut
[Dropout] randomly zeroizes elements with a given probability (p).
Encode
Encode defines a standard interface for encoding data.
FillLike
FloorDiv
Forward
This trait denotes entities capable of performing a single forward step
Gradient
The Gradient trait defines a common interface for all gradients
Heavyside
IncrementAxis
Init
A trait for creating custom initialization routines for models or other entities.
InitInplace
This trait enables models to implement custom, in-place initialization methods.
Initialize
This trait provides the base methods required for initializing tensors with random values. The trait is similar to the RandomExt trait provided by the ndarray_rand crate, however, it is designed to be more generic, extensible, and optimized for neural network initialization routines. Initialize is implemented for ArrayBase as well as ParamsBase allowing you to randomly initialize new tensors and parameters.
IntoAxis
Inverse
this trait enables the inversion of a matrix
IsSquare
L1Norm
a trait for computing the L1 norm of a tensor or array
L2Norm
a trait for computing the L2 norm of a tensor or array
LinearActivation
MaskFill
This trait is used to fill an array with a value based on a mask. The mask is a boolean array of the same shape as the array.
Matmul
A trait denoting objects capable of matrix multiplication.
Matpow
a trait denoting objects capable of matrix exponentiation
MeanAbsoluteError
Compute the mean absolute error (MAE) of the object.
MeanSquaredError
Compute the mean squared error (MSE) of the object.
Model
The base interface for all models; each model provides access to a configuration object defined as the associated type Config. The configuration object is used to provide hyperparameters and other control related parameters. In addition, the model’s layout is defined by the features method which aptly returns a copy of its ModelFeatures object.
ModelExt
ModelLayout
NdActivateMut
NdLike
NetworkConfig
Norm
The Norm trait serves as a unified interface for various normalization routines. At the moment, the trait provides L1 and L2 techniques.
Numerical
Numerical is a trait for all numerical types; implements a number of core operations
OnesLike
Pad
The Pad trait defines a padding operation for tensors.
PercentDiff
Compute the percentage difference between two values.
Predict
Predict isn’t designed to be implemented directly; rather, it is provided as a blanket impl for any entity that implements the Forward trait. This is primarily used to define the base functionality of the Model trait.
PredictWithConfidence
This trait extends the Predict trait to include a confidence score for the prediction. The confidence score is calculated as the inverse of the variance of the output.
ReLU
Root
RoundTo
Scalar
The Scalar trait extends the Numerical trait to include additional mathematical operations for the purpose of reducing the number of overall traits required to complete various machine-learning tasks.
Sigmoid
Softmax
SoftmaxAxis
SummaryStatistics
This trait describes the fundamental methods of summary statistics. These include the mean, standard deviation, variance, and more.
Tanh
Tensor
Train
This trait defines the training process for the network
TrainingConfiguration
Transpose
the trait denotes the ability to transpose a tensor
Unsqueeze
Weighted
ZerosLike
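
As an illustration of the fieldless-struct pattern mentioned for Activate, here is a hypothetical sketch. The trait’s real signature is not shown on this page, so a single scalar activate method is assumed for demonstration:

```rust
// Assumed stand-in for the crate's Activate trait; the real signature
// may operate on tensors rather than scalars.
pub trait Activate {
    fn activate(&self, x: f64) -> f64;
}

/// A fieldless unit struct implementing the ReLU activation.
pub struct MyRelu;

impl Activate for MyRelu {
    fn activate(&self, x: f64) -> f64 {
        // f(x) = max(0, x)
        x.max(0.0)
    }
}

fn main() {
    let act = MyRelu;
    assert_eq!(act.activate(-1.5), 0.0);
    assert_eq!(act.activate(2.0), 2.0);
}
```

Because ActivateExt is described as a blanket extension, a type like MyRelu would pick up the extended methods automatically once it implements the real Activate trait.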

Functions§

calculate_pattern_similarity
Calculate similarity between two patterns
clip_gradient
Clip the gradient to a maximum value.
clip_inf_nan
concat_iter
Creates an n-dimensional array from an iterator of n dimensional arrays.
extract_patterns
Extract common patterns from historical sequences
floor_div
divide two values and round down to the nearest integer.
genspace
heavyside
Heaviside activation function
hstack
stack 1D arrays into a 2D array by joining them horizontally.
inverse
is_similar_pattern
Check if two patterns are similar enough to be considered duplicates
layer_norm
layer_norm_axis
linarr
pad
pad_to
randc
Generate a random array of complex numbers with real and imaginary parts in the range [0, 1)
relu
the relu activation function: $f(x) = \max(0, x)$ (reference implementations of these activation formulas appear after this list)
relu_derivative
round_to
Round the given value to the given number of decimal places.
sigmoid
the sigmoid activation function: $f(x) = \frac{1}{1 + e^{-x}}$
sigmoid_derivative
the derivative of the sigmoid function
softmax
Softmax function: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
softmax_axis
Softmax function along a specific axis: $f(x_i) = \frac{e^{x_i}}{\sum_j e^{x_j}}$
stack_iter
Creates a larger array from an iterator of smaller arrays.
stdnorm
Given a shape, generate a random array using the StandardNormal distribution
stdnorm_from_seed
tanh
the tanh activation function: $f(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$
tanh_derivative
the derivative of the tanh function
tril
Returns the lower triangular portion of a matrix.
triu
Returns the upper triangular portion of a matrix.
uniform_from_seed
Creates a random array from a uniform distribution using a given key
vstack
stack 1D arrays into a 2D array by joining them vertically.
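
The activation functions above document their formulas directly, so the following reference implementations restate them over plain f64 values rather than this crate’s tensor types. The max-shift in softmax is a standard numerical-stability trick and may differ from the crate’s implementation:

```rust
fn sigmoid(x: f64) -> f64 {
    // f(x) = 1 / (1 + e^{-x})
    1.0 / (1.0 + (-x).exp())
}

fn sigmoid_derivative(x: f64) -> f64 {
    // standard identity: f'(x) = f(x) * (1 - f(x))
    let s = sigmoid(x);
    s * (1.0 - s)
}

fn softmax(xs: &[f64]) -> Vec<f64> {
    // f(x_i) = e^{x_i} / sum_j e^{x_j}, shifted by max(x) for stability
    let max = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    println!("{:.4}", sigmoid(0.0));            // 0.5000
    println!("{:.4}", sigmoid_derivative(0.0)); // 0.2500
    println!("{:?}", softmax(&[1.0, 2.0, 3.0]));
}
```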

Type Aliases§

LayerDyn
ModelParams
NeuralResult
a type alias for a Result with a NeuralError
PadResult
Params
a type alias for owned parameters
ParamsView
a type alias for an immutable view of the parameters
ParamsViewMut
a type alias for a mutable view of the parameters
Result
a type alias for a Result with an Error (a sketch of this alias pattern follows the list)
SurfaceResult
a type alias for a Result with a SurfaceError
UniformResult
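
These aliases presumably follow the conventional Rust shorthand of fixing a Result’s error type. A self-contained sketch of the pattern, with hypothetical error variants for illustration:

```rust
// Hypothetical error type; the real variants live in this crate's
// error module.
#[allow(dead_code)]
pub enum Error {
    Surface(String),
    Neural(String),
}

/// a type alias for a Result with an Error
pub type Result<T> = core::result::Result<T, Error>;

fn materialize() -> Result<()> {
    // a real routine would return Err(Error::Surface(...)) on failure
    Ok(())
}

fn main() {
    assert!(materialize().is_ok());
}
```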