Russell Lab - Matrix-vector laboratory including linear algebra tools
This crate is part of Russell - Rust Scientific Library
Contents
- Introduction
- Installation
- Setting Cargo.toml
- Complex numbers
- Examples
- About the column major representation
- Benchmarks
- For developers
Introduction
This crate implements specialized mathematical functions (e.g., Bessel, Erf, Gamma) and functions to perform linear algebra computations (e.g., Matrix, Vector, Matrix-Vector, Eigen-decomposition, SVD). This crate also implements a set of helpful functions for comparing floating-point numbers, measuring computer time, reading table-formatted data, and more.
The code is implemented in native Rust as much as possible. However, thin interfaces ("wrappers") are implemented for some of the best tools available in numerical mathematics, including OpenBLAS and Intel MKL.
The code is organized in modules:
- check -- implements functions to assist in unit and integration testing
- base -- implements "base" functionality to help the other modules
- math -- implements specialized mathematical functions and constants
- vector -- implements the [NumVector] struct and associated functions
- matrix -- implements the [NumMatrix] struct and associated functions
- matvec -- implements functions operating on matrices and vectors
- fftw -- implements a thin wrapper to a few FFTW routines. Warning: these routines are thread-unsafe
- algo -- implements algorithms that depend on the other modules (e.g., Lagrange interpolation)
For linear algebra, the main structures are NumVector and NumMatrix, which are generic Vector and Matrix structures. The Matrix data is stored in column-major order. Vector and Matrix are the f64 aliases of NumVector and NumMatrix, respectively, and corresponding Complex64 aliases are also provided.
The linear algebra functions currently handle only (f64, i32) pairs, i.e., accessing the (double, int) C functions. We also consider (Complex64, i32) pairs.
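As a quick illustration, the sketch below allocates a Vector and a Matrix and reads one entry back. This is a minimal sketch using only basic constructors and accessors; consult the API reference for the full set of methods.

```rust
use russell_lab::{Matrix, Vector};

fn main() {
    // allocate a vector with three components
    let u = Vector::from(&[1.0, 2.0, 3.0]);

    // allocate a 2 x 3 matrix (stored internally in column-major order)
    let a = Matrix::from(&[
        [1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0],
    ]);

    // read one entry and print both structures
    println!("a(1,2) = {}", a.get(1, 2));
    println!("u =\n{}", u);
    println!("a =\n{}", a);
}
```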
There are many functions for linear algebra, such as (for Real and Complex types):
- Vector addition, copy, inner and outer products, norms, and more
- Matrix addition, multiplication, copy, singular-value decomposition, eigenvalues, pseudo-inverse, inverse, norms, and more
- Matrix-vector multiplication, and more
- Solution of dense linear systems with symmetric or non-symmetric coefficient matrices, and more
- Reading and writing files, linspace, grid generators, Stopwatch, linear fitting, and more
- Checking results, comparing floating-point numbers, and verifying the correctness of derivatives; see russell_lab::check
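For illustration, here is a hedged sketch of a couple of the vector operations listed above. The function names follow the crate's naming scheme, but the exact signatures (argument order, in particular) are assumptions and should be confirmed in the API documentation.

```rust
use russell_lab::{vec_add, vec_norm, Norm, StrError, Vector};

fn main() -> Result<(), StrError> {
    let u = Vector::from(&[10.0, 20.0, 30.0]);
    let v = Vector::from(&[2.0, 1.5, 1.0]);

    // ASSUMPTION: vec_add computes w := alpha*u + beta*v
    let mut w = Vector::new(3);
    vec_add(&mut w, 1.0, &u, -2.0, &v)?;

    // ASSUMPTION: vec_norm(&w, Norm::Euc) returns the Euclidean norm as f64
    println!("w =\n{}", w);
    println!("Euclidean norm of w = {}", vec_norm(&w, Norm::Euc));
    Ok(())
}
```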
See the documentation for further information:
- russell_lab documentation - Contains the API reference and examples
Installation on Debian/Ubuntu/Linux
This crate depends on an efficient BLAS library such as OpenBLAS and Intel MKL.
The root README file presents the steps to install the required dependencies.
Setting Cargo.toml
Check the crate version and update your Cargo.toml accordingly:

```toml
[dependencies]
russell_lab = "*"
```
Or, considering the optional features (see more about these here):
```toml
[dependencies]
russell_lab = { version = "*", features = ["intel_mkl"] }
```
Complex numbers
Note: For the functions dealing with complex numbers, the following line must be added to all derived code:
```rust
use num_complex::Complex64;
```
This line brings Complex64 into scope. For convenience, the russell_lab macro cpx! may be used to allocate complex numbers, as illustrated below.
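A minimal sketch of both points follows. It assumes the cpx! macro is exported at the crate root and expands to Complex64::new, and that num_complex is available as a dependency:

```rust
use num_complex::Complex64;
use russell_lab::cpx;

fn main() {
    // ASSUMPTION: cpx!(real, imag) expands to Complex64::new(real, imag)
    let z = cpx!(1.0, -2.0);
    let w = z * cpx!(0.0, 1.0); // multiply by the imaginary unit

    println!("z = {}", z);
    println!("w = {}", w);
}
```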
Examples
See also the examples directory in the repository.
Compute the pseudo-inverse matrix
```rust
use russell_lab::*;
```
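Below is a hedged sketch of a complete program. It assumes the crate's mat_pseudo_inverse function with a mat_pseudo_inverse(&mut ai, &mut a) signature that stores the pseudo-inverse of a in ai; the exact call should be checked against the API reference.

```rust
use russell_lab::*;

fn main() -> Result<(), StrError> {
    // a 3 x 2 matrix with full column rank
    let mut a = Matrix::from(&[
        [1.0, 0.0],
        [0.0, 1.0],
        [0.0, 1.0],
    ]);

    // ASSUMPTION: mat_pseudo_inverse(&mut ai, &mut a) fills ai (2 x 3)
    // with the pseudo-inverse of a
    let mut ai = Matrix::new(2, 3);
    mat_pseudo_inverse(&mut ai, &mut a)?;

    // print the pseudo-inverse
    println!("ai =\n{}", ai);
    Ok(())
}
```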
Compute eigenvalues
```rust
use russell_lab::*;
```
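A hedged sketch follows, using mat_eigen_sym (mentioned in the Benchmarks section) for a symmetric matrix. The assumed signature is mat_eigen_sym(&mut l, &mut a), with the eigenvalues returned in l; please confirm the argument order in the API reference.

```rust
use russell_lab::*;

fn main() -> Result<(), StrError> {
    // a symmetric 3 x 3 matrix
    let mut a = Matrix::from(&[
        [2.0, 0.0, 0.0],
        [0.0, 3.0, 4.0],
        [0.0, 4.0, 9.0],
    ]);

    // ASSUMPTION: mat_eigen_sym(&mut l, &mut a) computes the eigenvalues of
    // the symmetric matrix a and stores them in l
    let mut l = Vector::new(3);
    mat_eigen_sym(&mut l, &mut a)?;

    // print the eigenvalues
    println!("eigenvalues =\n{}", l);
    Ok(())
}
```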
Cholesky factorization
```rust
use russell_lab::*;
```
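russell_lab provides a Cholesky routine (mat_cholesky); since its exact signature is not shown here, the sketch below instead hand-rolls the textbook algorithm on top of the Matrix struct, just to illustrate the factorization A = L·Lᵀ.

```rust
use russell_lab::Matrix;

// textbook Cholesky factorization: returns the lower-triangular L with A = L·Lᵀ
// (no pivoting; assumes a symmetric positive-definite input)
fn cholesky_lower(a: &Matrix) -> Matrix {
    let (m, _) = a.dims();
    let mut l = Matrix::new(m, m);
    for j in 0..m {
        for i in j..m {
            let mut sum = a.get(i, j);
            for k in 0..j {
                sum -= l.get(i, k) * l.get(j, k);
            }
            if i == j {
                l.set(i, j, sum.sqrt());
            } else {
                l.set(i, j, sum / l.get(j, j));
            }
        }
    }
    l
}

fn main() {
    // a symmetric positive-definite matrix
    let a = Matrix::from(&[
        [  4.0,  12.0, -16.0],
        [ 12.0,  37.0, -43.0],
        [-16.0, -43.0,  98.0],
    ]);
    let l = cholesky_lower(&a);
    println!("L =\n{}", l);
}
```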
About the column major representation
Only the COL-MAJOR representation is considered here.
```text
       ┌     ┐   row_major = {0, 3,
       │ 0 3 │                 1, 4,
   A = │ 1 4 │                 2, 5};
       │ 2 5 │
       └     ┘   col_major = {0, 1, 2,
       (m × n)                 3, 4, 5}

   Aᵢⱼ = col_major[i + j·m] = row_major[i·n + j]
          ↑
   COL-MAJOR IS ADOPTED HERE
```
The main reason to use the col-major representation is to make the code work better with BLAS/LAPACK written in Fortran. Although those libraries have functions to handle row-major data, they usually add an overhead due to temporary memory allocation and copies, including transposing matrices. Moreover, the row-major versions of some BLAS/LAPACK libraries produce incorrect results (notably the DSYEV).
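The index mapping above can be verified with plain Rust arrays; the sketch below simply checks the formula for the 3 × 2 matrix in the figure and uses no library calls.

```rust
fn main() {
    // the 3 x 2 matrix from the figure: A = [[0, 3], [1, 4], [2, 5]]
    let (m, n) = (3, 2);
    let row_major = [0.0, 3.0, 1.0, 4.0, 2.0, 5.0];
    let col_major = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0];

    // A(i,j) = col_major[i + j*m] = row_major[i*n + j]
    for i in 0..m {
        for j in 0..n {
            assert_eq!(col_major[i + j * m], row_major[i * n + j]);
        }
    }
    println!("column-major and row-major layouts agree");
}
```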
Benchmarks
Need to install:
Run the benchmarks with:
Jacobi Rotation versus LAPACK DSYEV
Comparison of the performance of mat_eigen_sym_jacobi (Jacobi rotation) versus mat_eigen_sym (which calls LAPACK DSYEV).
For developers
Notes for developers:
- The c_code directory contains a thin wrapper to the BLAS libraries (OpenBLAS or Intel MKL)
- The c_code directory also contains a wrapper to the C math functions
- The build.rs file uses the cc crate to build the C-wrappers (see the sketch below)
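As an illustration only, a build.rs using the cc crate typically looks like the sketch below. The file name is hypothetical; the actual build.rs in the repository selects sources and flags according to the chosen BLAS backend.

```rust
// build.rs -- illustrative sketch; the file name below is hypothetical
fn main() {
    // compile a C wrapper placed in the c_code directory
    cc::Build::new()
        .file("c_code/interface_blas.c")
        .compile("c_code_interface_blas");

    // rebuild when the C source changes
    println!("cargo:rerun-if-changed=c_code/interface_blas.c");
}
```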