Module easy_ml::linear_algebra


Linear algebra algorithms on numbers and matrices

Note that many of these functions are also exposed as corresponding methods on the Matrix and Tensor types, but in-depth documentation is only presented here.

It is recommended to favor the corresponding methods on the Matrix and Tensor types, as the Rust compiler can get confused with the generics on these functions if you use them without turbofish syntax.

Nearly all of these functions are generic over Numeric types. Unfortunately, when using these functions the compiler may get confused about what type T should be and you will get the error:

overflow evaluating the requirement &'a _: easy_ml::numeric::NumericByValue<_, _>

In this case you need to manually specify the type of T by using the turbofish syntax, like linear_algebra::inverse::<f32>(&matrix).

You might instead be working with a generic type T, in which case specify that type: linear_algebra::inverse::<T>(&matrix).
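
For example, a minimal sketch of the turbofish in use, assuming a small f64 Matrix constructed with Matrix::from (the values are only illustrative):

    use easy_ml::matrices::Matrix;
    use easy_ml::linear_algebra;

    let matrix = Matrix::from(vec![
        vec![4.0, 7.0],
        vec![2.0, 6.0],
    ]);
    // Without the turbofish the compiler may fail to infer T and report the
    // overflow error above; spelling out the element type resolves it.
    let inverted = linear_algebra::inverse::<f64>(&matrix);
    assert!(inverted.is_some());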

Generics

For the tensor variants of these functions, the generics allow very flexible input types.

A function like

pub fn inverse_tensor<T, S, I>(tensor: I) -> Option<Tensor<T, 2>> where
   T: Numeric,
   for<'a> &'a T: NumericRef<T>,
   I: Into<TensorView<T, S, 2>>,
   S: TensorRef<T, 2>,

means it takes any type that can be converted into a TensorView, which includes Tensor, &Tensor and &mut Tensor, as well as references to a TensorView.
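
As a sketch of this flexibility, assuming a 2 dimensional Tensor with illustrative dimension names, both a reference to the Tensor and the Tensor by value can be passed:

    use easy_ml::tensors::Tensor;
    use easy_ml::linear_algebra;

    let tensor = Tensor::from([("rows", 2), ("columns", 2)], vec![
        4.0, 7.0,
        2.0, 6.0,
    ]);
    // A &Tensor converts into a TensorView, so this only borrows the tensor,
    let inverse = linear_algebra::inverse_tensor::<f64, _, _>(&tensor);
    assert!(inverse.is_some());
    // while passing the Tensor by value also converts, consuming it here.
    let inverse = linear_algebra::inverse_tensor::<f64, _, _>(tensor);
    assert!(inverse.is_some());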

Structs

The result of an LDL^T Decomposition of some matrix A such that LDL^T = A.
The result of an LDL^T Decomposition of some Tensor matrix A such that LDL^T = A.
The result of a QR Decomposition of some matrix A such that QR = A.
The result of a QR Decomposition of some Tensor matrix A such that QR = A.

Functions

Computes the Cholesky decomposition of a matrix (see the usage sketch at the end of this section). This yields a matrix L such that for the provided matrix A, L * L^T = A. L will always be lower triangular, i.e. all entries above the diagonal will be 0. Hence Cholesky decomposition can be interpreted as a generalised square root function.
Computes the Cholesky decomposition of a Tensor matrix. This yields a matrix L such that for the provided matrix A, L * L^T = A. L will always be lower triangular, i.e. all entries above the diagonal will be 0. Hence Cholesky decomposition can be interpreted as a generalised square root function.
Computes the covariance matrix for a 2 dimensional Tensor feature matrix.
Computes the covariance matrix for an NxM feature matrix, in which each of the N rows has M features to find the covariance and variance of.
Computes the covariance matrix for an NxM feature matrix, in which each of the M columns has N features to find the covariance and variance of.
Computes the determinant of a square matrix. For a 2 x 2 matrix with rows (a, b) and (c, d) this is given by ad - bc.
Computes the determinant of a square Tensor matrix. For a 2 x 2 matrix with rows (a, b) and (c, d) this is given by ad - bc.
Computes the F-1 score from the Precision and Recall.
Computes the inverse of a matrix provided that it exists. To have an inverse a matrix must be square (same number of rows and columns) and it must also have a non-zero determinant.
Computes the inverse of a Tensor matrix provided that it exists. To have an inverse a matrix must be square (same number of rows and columns) and it must also have a non-zero determinant.
Computes the LDL^T decomposition of a matrix. This yields a matrix L and a matrix D such that for the provided matrix A, L * D * L^T = A. L will always be unit lower triangular, i.e. all entries above the diagonal will be 0 and all entries along the diagonal will be 1. D will always contain zeros except along the diagonal. This decomposition is closely related to the Cholesky decomposition, with the notable difference that it avoids taking square roots.
Computes the LDL^T decomposition of a Tensor matrix. This yields a matrix L and a matrix D such that for the provided matrix A, L * D * L^T = A. L will always be unit lower triangular, i.e. all entries above the diagonal will be 0 and all entries along the diagonal will be 1. D will always contain zeros except along the diagonal. This decomposition is closely related to the Cholesky decomposition, with the notable difference that it avoids taking square roots.
Computes the mean of the values in an iterator, consuming the iterator.
Computes a QR decomposition of an MxN matrix where M >= N.
Computes a QR decomposition of an MxN Tensor matrix where M >= N.
Computes the softmax of the values in an iterator, consuming the iterator.
Computes the variance of the values in an iterator, consuming the iterator.
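
As a rough usage sketch of the decomposition functions, assuming the Matrix variant of the Cholesky decomposition listed above is named cholesky_decomposition and returns an Option (None when the input is not positive definite), which is not spelled out in the summaries:

    use easy_ml::matrices::Matrix;
    use easy_ml::linear_algebra;

    // A positive definite matrix, so the decomposition exists.
    let a = Matrix::from(vec![
        vec![4.0, 2.0],
        vec![2.0, 3.0],
    ]);
    // Assumed name and return type: cholesky_decomposition -> Option<Matrix<f64>>
    let l = linear_algebra::cholesky_decomposition::<f64>(&a).unwrap();
    // L is lower triangular, and L * L^T reconstructs A.
    let l_transpose = l.transpose();
    let reconstructed = &l * &l_transpose;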