Core traits defining fundamental abstractions and operations useful for neural networks.
Modules

- `math` - Mathematically oriented operators and functions useful in machine learning contexts.
- `ops` - Composable operators for tensor manipulations and transformations, neural networks, and more.
- `tensor`
Traits

- `Abs`
- `Affine` - Apply an affine transformation to a tensor; the affine transformation is defined as `mul * self + add`.
- `Apply` - `Apply` is a composable binary operator generally used to apply some object or function onto the caller to produce some output.
- `ApplyGradient` - A trait declaring basic gradient-related routines for a neural network.
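As a hedged illustration of the `Affine` operation described above (`mul * self + add`), the following sketch defines a stand-in trait and implements it for `Vec<f64>`; the trait shape and implementation here are assumptions, not the crate's actual definitions:

```rust
/// Hypothetical sketch of an affine operator: `mul * self + add`.
pub trait Affine<T> {
    type Output;

    fn affine(&self, mul: T, add: T) -> Self::Output;
}

// Illustrative implementation for a plain vector of f64 values.
impl Affine<f64> for Vec<f64> {
    type Output = Vec<f64>;

    fn affine(&self, mul: f64, add: f64) -> Self::Output {
        self.iter().map(|x| mul * x + add).collect()
    }
}

fn main() {
    let x = vec![1.0, 2.0, 3.0];
    // 2 * x + 1 applied elementwise
    println!("{:?}", x.affine(2.0, 1.0)); // [3.0, 5.0, 7.0]
}
```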
- `ApplyGradientExt` - This trait extends the `ApplyGradient` trait by allowing for momentum-based optimization.
- `ApplyMut` - `ApplyMut` provides an interface for mutable containers that can apply a function onto their elements, modifying them in place.
- `ApplyOnce` - The `ApplyOnce` trait consumes the container and applies the given function to every element before returning a new container with the results.
- `ArrayLike`
- `AsBiasDim` - The `AsBiasDim` trait is used to define a type that can be used to get the bias dimension of the parameters.
- `AsComplex` - `AsComplex` defines an interface for converting a reference to some numerical type into a complex number.
- `Backward` - The `Backward` trait establishes a common interface for completing a single backward step in a neural network or machine learning model.
- `BackwardStep`
- `Clip` - A trait denoting objects capable of being clipped between some minimum and some maximum.
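The by-reference, in-place, and consuming apply variants above can be sketched as three sibling traits; the names follow the listing, but the signatures and `Vec<T>` implementations are assumptions for illustration only:

```rust
/// By-reference apply: borrows the container, returns a new one.
pub trait Apply<T> {
    type Output;
    fn apply<F>(&self, f: F) -> Self::Output
    where
        F: Fn(&T) -> T;
}

/// In-place apply: mutates the container's elements directly.
pub trait ApplyMut<T> {
    fn apply_mut<F>(&mut self, f: F)
    where
        F: FnMut(&mut T);
}

/// Consuming apply: takes ownership and returns the transformed container.
pub trait ApplyOnce<T> {
    type Output;
    fn apply_once<F>(self, f: F) -> Self::Output
    where
        F: FnMut(T) -> T;
}

impl<T: Clone> Apply<T> for Vec<T> {
    type Output = Vec<T>;
    fn apply<F>(&self, f: F) -> Self::Output
    where
        F: Fn(&T) -> T,
    {
        self.iter().map(f).collect()
    }
}

impl<T> ApplyMut<T> for Vec<T> {
    fn apply_mut<F>(&mut self, mut f: F)
    where
        F: FnMut(&mut T),
    {
        self.iter_mut().for_each(|x| f(x));
    }
}

impl<T> ApplyOnce<T> for Vec<T> {
    type Output = Vec<T>;
    fn apply_once<F>(self, f: F) -> Self::Output
    where
        F: FnMut(T) -> T,
    {
        self.into_iter().map(f).collect()
    }
}

fn main() {
    println!("{:?}", vec![1, 2, 3].apply(|x| x * 2)); // [2, 4, 6]
    let mut m = vec![1, 2, 3];
    m.apply_mut(|x| *x += 1);
    println!("{:?}", m); // [2, 3, 4]
    println!("{:?}", vec![1, 2, 3].apply_once(|x| x * x)); // [1, 4, 9]
}
```

The split mirrors Rust's `Fn`/`FnMut`/`FnOnce` convention: how the container is taken (by reference, mutably, or by value) determines which variant applies.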
- `ClipMut` - This trait enables tensor clipping; it is implemented for `ArrayBase`.
- `Codex`
- `Conjugate`
- `Cos`
- `Cosh`
- `CrossEntropy` - A trait for computing the cross-entropy loss of a tensor or array.
- `Cubed`
- `Decode` - Decode defines a standard interface for decoding data.
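A cross-entropy computation like the one `CrossEntropy` describes can be sketched as follows; the trait shape is an assumption, and the formula used is the standard `-sum(t * ln(p))` between a target distribution and predicted probabilities:

```rust
/// Hypothetical sketch of a cross-entropy operator.
pub trait CrossEntropy {
    type Output;
    /// Cross-entropy between target distribution `self` and predictions
    /// `pred`: `-sum(t * ln(p))`.
    fn cross_entropy(&self, pred: &Self) -> Self::Output;
}

impl CrossEntropy for Vec<f64> {
    type Output = f64;
    fn cross_entropy(&self, pred: &Self) -> f64 {
        self.iter()
            .zip(pred.iter())
            .map(|(t, p)| -t * p.ln())
            .sum()
    }
}

fn main() {
    // One-hot target; the model assigns probability 0.8 to the true class,
    // so the loss reduces to -ln(0.8).
    let target = vec![0.0, 1.0, 0.0];
    let pred = vec![0.1, 0.8, 0.1];
    println!("{:.4}", target.cross_entropy(&pred)); // 0.2231
}
```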
- `Decrement` - `Decrement` is a chainable trait that defines a decrement method, effectively removing a single unit from the original object to create another.
- `DecrementAxis` - The `DecrementAxis` trait is used as a unary operator for removing a single axis from a multidimensional array or tensor-like structure.
- `DecrementMut` - The `DecrementMut` trait defines a decrement method that operates in place, modifying the original object.
- `DefaultLike`
- `Dim` - The `Dim` trait is used to define a type that can be used as a raw dimension. This trait is primarily used to provide abstracted, generic interpretations of the dimensions of the `ndarray` crate to ensure long-term compatibility.
- `DimConst`
- `Encode` - Encode defines a standard interface for encoding data.
- `Exp`
- `FillLike`
- `FloorDiv`
- `Forward` - The `Forward` trait describes a common interface for objects designated to perform a single forward step in a neural network or machine learning model.
- `ForwardMut`
- `ForwardOnce` - A consuming implementation of forward propagation.
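The forward-step interface described above can be sketched as a generic trait plus a toy layer; the `Linear` type and the exact `forward` signature are illustrative assumptions, not the crate's actual API:

```rust
/// Hypothetical sketch of a single forward step over an input.
pub trait Forward<Rhs> {
    type Output;
    fn forward(&self, input: &Rhs) -> Self::Output;
}

/// A toy scalar "layer": y = weight * x + bias.
struct Linear {
    weight: f64,
    bias: f64,
}

impl Forward<f64> for Linear {
    type Output = f64;
    fn forward(&self, input: &f64) -> f64 {
        self.weight * input + self.bias
    }
}

fn main() {
    let layer = Linear { weight: 2.0, bias: 0.5 };
    println!("{}", layer.forward(&3.0)); // 6.5
}
```

Under this shape, `ForwardMut` would take `&mut self` (e.g. to update internal state) and `ForwardOnce` would take `self`, consuming the model.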
- `Gradient` - The `Gradient` trait defines the gradient of a function: a function that takes an input and returns a delta, the change in the output with respect to the input.
- `Increment`
- `IncrementAxis` - The `IncrementAxis` trait defines a method enabling an axis to increment itself, effectively adding a new axis to the array.
- `IncrementMut`
- `InitWith`
- `Initialize` - `Initialize` provides a mechanism for initializing some object using a value of type `T` to produce another object.
- `IntoAxis` - The `IntoAxis` trait is used to define a conversion routine that takes a type and wraps it in an `Axis` type.
- `IntoComplex` - Trait for converting a type into a complex number.
- `Inverse` - The `Inverse` trait generically establishes an interface for computing the inverse of a type, regardless of whether it is a tensor, scalar, or some other compatible type.
- `IsSquare` - `IsSquare` is a trait for checking if the layout, or dimensionality, of a tensor is square.
- `L1Norm` - A trait for computing the L1 norm of a tensor or array.
- `L2Norm` - A trait for computing the L2 norm of a tensor or array.
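The two norm traits above compute the sum of absolute values and the Euclidean norm, respectively. A minimal sketch, with assumed method names and a `Vec<f64>` implementation in place of the crate's tensor types:

```rust
/// Sketch of an L1 norm: the sum of absolute values.
pub trait L1Norm {
    type Output;
    fn l1_norm(&self) -> Self::Output;
}

/// Sketch of an L2 norm: the square root of the sum of squares.
pub trait L2Norm {
    type Output;
    fn l2_norm(&self) -> Self::Output;
}

impl L1Norm for Vec<f64> {
    type Output = f64;
    fn l1_norm(&self) -> f64 {
        self.iter().map(|x| x.abs()).sum()
    }
}

impl L2Norm for Vec<f64> {
    type Output = f64;
    fn l2_norm(&self) -> f64 {
        self.iter().map(|x| x * x).sum::<f64>().sqrt()
    }
}

fn main() {
    let v = vec![3.0, -4.0];
    println!("{}", v.l1_norm()); // 7
    println!("{}", v.l2_norm()); // 5
}
```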
- `Loss` - The `Loss` trait defines a common interface for any custom loss function implementation. This trait requires the implementor to define their algorithm for calculating the loss between two values, `lhs` and `rhs`, which can be of different types, `X` and `Y` respectively. These terms are used generically to allow for flexibility in the accepted types, such as tensors, scalars, or other data structures, while clearly defining the "order" in which the operations are performed. It is most common to expect the `lhs` to be the predicted output and the `rhs` to be the actual output, but this is not a strict requirement. The trait also defines an associated type `Output`, which represents the type of the loss value returned by the `loss` method; this allows different loss functions to return different types of loss values, such as scalars or tensors, depending on the specific implementation.
- `MapInto` - `MapInto` defines an interface for containers that can consume themselves to apply a given function onto each of their elements.
- `MapTo` - `MapTo` establishes an interface for containers capable of applying a given function onto each of their elements, by reference.
- `MaskFill` - This trait is used to fill an array with a value based on a mask; the mask is a boolean array of the same shape as the array.
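The `Loss` description above (generic `lhs`/`rhs` types plus an associated `Output`) can be sketched as follows, with mean squared error as one possible implementor; the concrete signature and the `Mse` type are assumptions for illustration:

```rust
/// Sketch of a loss interface: lhs and rhs may differ in type,
/// and the result type is left to the implementor.
pub trait Loss<X, Y> {
    type Output;
    /// Compute the loss between a predicted value `lhs` and a target `rhs`.
    fn loss(&self, lhs: &X, rhs: &Y) -> Self::Output;
}

/// Mean squared error as one possible implementor.
struct Mse;

impl Loss<Vec<f64>, Vec<f64>> for Mse {
    type Output = f64;
    fn loss(&self, lhs: &Vec<f64>, rhs: &Vec<f64>) -> f64 {
        let n = lhs.len() as f64;
        lhs.iter()
            .zip(rhs.iter())
            .map(|(p, t)| (p - t).powi(2))
            .sum::<f64>()
            / n
    }
}

fn main() {
    let pred = vec![1.0, 2.0, 3.0];
    let actual = vec![1.0, 2.0, 5.0];
    // Squared errors: 0, 0, 4 -> mean = 4/3
    println!("{:.4}", Mse.loss(&pred, &actual)); // 1.3333
}
```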
- `MatMul` - The `MatMul` trait defines an interface for matrix multiplication.
- `MatPow` - The `MatPow` trait defines an interface for computing the power of some matrix.
- `MeanAbsoluteError` - A trait for computing the mean absolute error of a tensor or array.
- `MeanSquaredError` - A trait for computing the mean squared error of a tensor or array.
- `NdGradient`
- `NdLike`
- `NdTensor`
- `Norm` - The `Norm` trait serves as a unified interface for various normalization routines. At the moment, the trait provides L1 and L2 techniques.
- `OnesLike`
- `PercentChange` - The `PercentChange` trait establishes a binary operator for computing the percent change between two values, where the caller is considered the original value.
- `PercentDiff` - Compute the percentage difference between two values.
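A sketch of the percent-change operator described above, with the caller as the original value; the formula `(other - self) / self` is the common convention and is assumed here, as is the method signature:

```rust
/// Sketch of a percent-change operator; the caller is the original value.
pub trait PercentChange {
    type Output;
    fn percent_change(&self, other: &Self) -> Self::Output;
}

impl PercentChange for f64 {
    type Output = f64;
    fn percent_change(&self, other: &Self) -> f64 {
        (other - self) / self
    }
}

fn main() {
    // Going from 50 to 60 is a 20% increase.
    println!("{}", 50.0f64.percent_change(&60.0)); // 0.2
}
```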
- `Predict` - The `Predict` trait is designed as a model-specific interface for making predictions. In the future, we may consider opening the trait up to allow for alternative implementations, but for now it is simply implemented for all implementors of the `Forward` trait.
- `PredictWithConfidence` - The `PredictWithConfidence` trait is an extension of the `Predict` trait, providing an additional method to obtain predictions along with a confidence score.
- `RawStore` - The `RawStore` trait is used to define an interface for key-value stores like hash maps, dictionaries, and similar data structures.
- `RawStoreMut` - `RawStoreMut` extends the `RawStore` trait by introducing various mutable operations and accessors for elements within the store.
- `RawTensor`
- `RawTensorData`
- `Root` - The `Root` trait provides methods for computing the nth root of a number.
- `RoundTo`
- `ScalarTensorData` - A marker trait used to denote tensors that represent scalar values; more specifically, any type implementing `RawTensorData` whose `Elem` associated type is the implementor itself is considered a scalar value.
- `Sine`
- `Sinh`
- `SquareRoot`
- `Squared`
- `Store` - The `Store` trait is a more robust interface for key-value stores, building upon both the `RawStore` and `RawStoreMut` traits by introducing an `entry` method for in-place manipulation of key-value pairs.
- `StoreEntry` - The `StoreEntry` trait establishes a common interface for all entries within a key-value store. These types enable in-place manipulation of key-value pairs by allowing keys to point to empty or vacant slots within the store.
- `SummaryStatistics` - This trait describes the fundamental methods of summary statistics, including the mean, standard deviation, variance, and more.
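The summary-statistics methods mentioned above can be sketched for a `Vec<f64>`; method names and the use of the population variance (dividing by `n`) are assumptions here, not the crate's documented choices:

```rust
/// Sketch of fundamental summary statistics over a collection of f64 values.
pub trait SummaryStatistics {
    fn mean(&self) -> f64;
    fn variance(&self) -> f64;
    /// Standard deviation: the square root of the variance.
    fn std_dev(&self) -> f64 {
        self.variance().sqrt()
    }
}

impl SummaryStatistics for Vec<f64> {
    fn mean(&self) -> f64 {
        self.iter().sum::<f64>() / self.len() as f64
    }
    // Population variance: mean of squared deviations from the mean.
    fn variance(&self) -> f64 {
        let mu = self.mean();
        self.iter().map(|x| (x - mu).powi(2)).sum::<f64>() / self.len() as f64
    }
}

fn main() {
    let data = vec![2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0];
    println!("{}", data.mean());    // 5
    println!("{}", data.std_dev()); // 2
}
```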
- `Tan`
- `Tanh`
- `TensorBase`
- `Train` - This trait defines the training process for the network.
- `Transpose` - The `Transpose` trait generically establishes an interface for transposing a type.
- `Unsqueeze` - The `Unsqueeze` trait establishes an interface for a routine that unsqueezes an array by inserting a new axis at a specified position. This is useful for reshaping arrays to meet specific dimensional requirements.
- `ZerosLike`
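To illustrate the `Unsqueeze` routine listed above, the sketch below inserts a new axis of size 1 into a plain shape vector; the real trait operates on array types, so this trait shape and the `Vec<usize>` stand-in are purely illustrative:

```rust
/// Sketch of an unsqueeze routine: insert a new axis at `axis`.
pub trait Unsqueeze {
    type Output;
    fn unsqueeze(self, axis: usize) -> Self::Output;
}

// A shape vector stands in for an array's dimensions here.
impl Unsqueeze for Vec<usize> {
    type Output = Vec<usize>;
    fn unsqueeze(mut self, axis: usize) -> Self::Output {
        self.insert(axis, 1);
        self
    }
}

fn main() {
    // A (2, 3) shape becomes (2, 1, 3) after unsqueezing at axis 1.
    println!("{:?}", vec![2usize, 3].unsqueeze(1)); // [2, 1, 3]
}
```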