Russell Lab - Scientific laboratory for linear algebra and numerical mathematics
This crate is part of Russell - Rust Scientific Library
Contents
- Introduction
- Installation
- Complex numbers
- Examples
- Running an example with Intel MKL
- Sorting small tuples
- Check first and second derivatives
- Bessel functions
- Linear fitting
- Lagrange interpolation
- Solution of a 1D PDE using spectral collocation
- Numerical integration: perimeter of ellipse
- Finding a local minimum and a root
- Computing the pseudo-inverse matrix
- Matrix visualization
- Computing eigenvalues and eigenvectors
- Cholesky factorization
- About the column major representation
- Benchmarks
- Notes for developers
Introduction
This library implements specialized mathematical functions (e.g., Bessel, Erf, Gamma) and functions to perform linear algebra computations (e.g., Matrix, Vector, Matrix-Vector, Eigen-decomposition, SVD). It also implements a set of helpful functions for comparing floating-point numbers, measuring computer time, reading table-formatted data, and more.
The code shall be implemented in native Rust as much as possible. However, light interfaces ("wrappers") are implemented for some of the best tools available in numerical mathematics, including OpenBLAS and Intel MKL.
The code is organized in modules:
- algo — algorithms that depend on the other modules (e.g., Lagrange interpolation)
- base — "base" functionality to help other modules
- check — functions to assist in unit and integration testing
- math — mathematical (specialized) functions and constants
- matrix — NumMatrix struct and associated functions
- matvec — functions operating on matrices and vectors
- vector — NumVector struct and associated functions
For linear algebra, the main structures are NumVector and NumMatrix, which are generic vector and matrix structures. The matrix data is stored in column-major order. Vector and Matrix are the f64 aliases of NumVector and NumMatrix, respectively; Complex64 counterparts are available as well.
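The two containers can be created as in the small sketch below (Matrix::from and Vector::from are the constructors used throughout the russell examples):

```rust
use russell_lab::{Matrix, Vector};

fn main() {
    // 2 x 3 matrix, written row-by-row but stored internally in col-major order
    let a = Matrix::from(&[
        [1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0],
    ]);
    let u = Vector::from(&[1.0, 2.0, 3.0]);
    println!("a =\n{}", a);
    println!("u =\n{}", u);
}
```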
The linear algebra functions currently handle (f64, i32) pairs, i.e., accessing the (double, int) C functions, as well as (Complex64, i32) pairs.
There are many functions for linear algebra, such as (for Real and Complex types):
- Vector addition, copy, inner and outer products, norms, and more
- Matrix addition, multiplication, copy, singular-value decomposition, eigenvalues, pseudo-inverse, inverse, norms, and more
- Matrix-vector multiplication, and more
- Solution of dense linear systems with symmetric or non-symmetric coefficient matrices, and more (see the sketch after this list)
- Reading and writing files, linspace, grid generators, Stopwatch, linear fitting, and more
- Checking results, comparing floating-point numbers, and verifying the correctness of derivatives; see russell_lab::check
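For instance, a dense linear system can be solved as in the sketch below (the argument order of solve_lin_sys is assumed; b is overwritten with the solution):

```rust
use russell_lab::{solve_lin_sys, Matrix, StrError, Vector};

fn main() -> Result<(), StrError> {
    // coefficient matrix and right-hand side of a * x = b
    let mut a = Matrix::from(&[
        [2.0, 1.0],
        [1.0, 3.0],
    ]);
    let mut b = Vector::from(&[3.0, 4.0]);
    // NOTE: argument order assumed; a is factorized in place and b becomes x
    solve_lin_sys(&mut b, &mut a)?;
    println!("x =\n{}", b); // expect x = (1, 1)
    Ok(())
}
```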
Documentation: the API reference is available on docs.rs.
Installation
At this moment, Russell works on Linux (Debian/Ubuntu; and maybe Arch). It has some limited functionality on macOS too. In the future, we plan to enable Russell on Windows; however, this will take time because some essential libraries are not easily available on Windows.
TL;DR (Debian/Ubuntu/Linux)
First:
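Install the required system packages (an indicative Debian/Ubuntu list; the root README has the authoritative steps):

```bash
sudo apt-get install -y liblapacke-dev libopenblas-dev
```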
Then:
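With the dependencies in place, add the crate to your project:

```bash
cargo add russell_lab
```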
Details
This crate depends on an efficient BLAS library such as OpenBLAS or Intel MKL.
The root README file presents the steps to install the required dependencies.
Setting up Cargo.toml
👆 Check the crate version and update your Cargo.toml accordingly:
```toml
[dependencies]
russell_lab = "*"
```
Or, considering the optional features (see more about these here):
```toml
[dependencies]
russell_lab = { version = "*", features = ["intel_mkl"] }
```
Complex numbers
Note: For the functions dealing with complex numbers, the following line must be added to all derived code:
```rust
use num_complex::Complex64;
```
This line brings Complex64 into scope. For convenience, the (russell_lab) macro cpx! may be used to allocate complex numbers.
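A small sketch (assuming cpx!(real, imag) expands to Complex64::new(real, imag)):

```rust
use num_complex::Complex64;
use russell_lab::cpx;

fn main() {
    let z = cpx!(1.0, -2.0); // assumed equivalent to Complex64::new(1.0, -2.0)
    println!("z = {}", z);
}
```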
Examples
See also the examples directory in the code repository.
Running an example with Intel MKL
Consider the following code:
```rust
use russell_lab::*;

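fn main() {
    // NOTE: the auxiliary functions below (using_intel_mkl, get_num_threads,
    // set_num_threads) are reconstructed to match the output shown next
    println!("Using Intel MKL  = {}", using_intel_mkl());
    println!("BLAS num threads = {}", get_num_threads());
    set_num_threads(2);
    println!("BLAS num threads = {}", get_num_threads());
}
```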
First, run the example without Intel MKL (default):
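Assuming the code above is saved under examples/ (the file name below is hypothetical):

```bash
cargo run --example mkl_num_threads
```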
The output looks like this:
```text
Using Intel MKL  = false
BLAS num threads = 24
BLAS num threads = 2
```
Second, run the code with the intel_mkl feature:
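Again with a hypothetical example name:

```bash
cargo run --example mkl_num_threads --features intel_mkl
```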
Then, the output looks like this:
```text
Using Intel MKL  = true
BLAS num threads = 24
BLAS num threads = 2
```
Sorting small tuples
```rust
use russell_lab::{sort2, sort3};
use russell_lab::StrError;

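fn main() -> Result<(), StrError> {
    // NOTE: sort2/sort3 are assumed to be re-exported at the crate root
    // and to sort mutable tuples in ascending order
    let mut pair = (2.0, 1.0);
    sort2(&mut pair);
    println!("{:?}", pair); // expect (1.0, 2.0)

    let mut triple = (3.0, 1.0, 2.0);
    sort3(&mut triple);
    println!("{:?}", triple); // expect (1.0, 2.0, 3.0)
    Ok(())
}
```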
Check first and second derivatives
Check the implementation of the first and second derivatives of f(x) (illustrated below).
```rust
use russell_lab::algo::NoArgs;
use russell_lab::check::{deriv1_approx_eq, deriv2_approx_eq};
use russell_lab::StrError;

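fn main() -> Result<(), StrError> {
    // f(x) = -1 / (1 + 16 x²); the analytical derivatives below reproduce
    // the table shown in the output
    // NOTE: the signatures deriv1_approx_eq(dfdx, at_x, args, tol, f) and
    // deriv2_approx_eq(d2fdx2, at_x, args, tol, f) are assumed here
    let f = |x: f64, _: &mut NoArgs| Ok(-1.0 / (1.0 + 16.0 * x * x));
    let args = &mut 0;
    println!("{:>6}{:>22}{:>22}", "x", "df/dx", "d²f/dx²");
    for i in 0..9 {
        let x = -2.0 + 0.5 * (i as f64);
        let d = 1.0 + 16.0 * x * x;
        let dfdx = 32.0 * x / (d * d);
        let d2fdx2 = (32.0 - 1536.0 * x * x) / (d * d * d);
        println!("{:>6}{:>22}{:>22}", x, dfdx, d2fdx2);
        deriv1_approx_eq(dfdx, x, args, 1e-8, f);
        deriv2_approx_eq(d2fdx2, x, args, 1e-6, f);
    }
    Ok(())
}
```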
Output:
```text
   x                 df/dx                d²f/dx²
  -2    -0.01514792899408284  -0.022255803368229403
  -1.5  -0.03506208911614317  -0.06759718081851025
  -1    -0.11072664359861592  -0.30612660289029103
  -0.5  -0.64                 -2.816
   0     0                    32
   0.5   0.64                 -2.816
   1     0.11072664359861592  -0.30612660289029103
   1.5   0.03506208911614317  -0.06759718081851025
   2     0.01514792899408284  -0.022255803368229403
```
Bessel functions
Plotting the Bessel J0, J1, and J2 functions:
```rust
use russell_lab::math::{bessel_j0, bessel_j1, bessel_jn};
use russell_lab::{StrError, Vector};

const OUT_DIR: &str = "/tmp/russell_lab/";

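fn main() -> Result<(), StrError> {
    // NOTE: the original example renders a plot; this sketch tabulates
    // J0, J1, and J2 into a text file instead (no plotting crate assumed);
    // Vector::linspace is assumed to return Result
    let xx = Vector::linspace(0.0, 15.0, 151)?;
    let mut buf = String::from("x J0 J1 J2\n");
    for &x in xx.as_data() {
        buf.push_str(&format!("{} {} {} {}\n", x, bessel_j0(x), bessel_j1(x), bessel_jn(2, x)));
    }
    std::fs::create_dir_all(OUT_DIR).map_err(|_| "cannot create OUT_DIR")?;
    std::fs::write(format!("{}bessel_j.txt", OUT_DIR), buf).map_err(|_| "cannot write file")?;
    Ok(())
}
```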
Output: a plot of the Bessel functions J0, J1, and J2.
Linear fitting
Fit a line through a set of points. The line has slope m and intercepts the y axis at y(x=0) = c.
```rust
use russell_lab::algo::linear_fitting;
use russell_lab::{StrError, Vector};

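fn main() -> Result<(), StrError> {
    // data points (illustrative values)
    let x = Vector::from(&[0.0, 1.0, 3.0, 5.0]);
    let y = Vector::from(&[1.0, 0.0, 2.0, 4.0]);
    // fit y = c + m·x
    // NOTE: the argument list and the (c, m) return order are assumed here
    let (c, m) = linear_fitting(&x, &y, false)?;
    println!("c = {}, m = {}", c, m);
    Ok(())
}
```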
Results: a plot showing the data points and the fitted line.
Lagrange interpolation
This example illustrates the use of InterpLagrange with a Chebyshev-Gauss-Lobatto grid to interpolate Runge's function.
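A self-contained sketch of the underlying idea, in plain Rust rather than the InterpLagrange API, using a Runge-type function f(x) = 1/(1 + 16x²):

```rust
use std::f64::consts::PI;

fn main() {
    let nn: usize = 16; // polynomial degree
    // Chebyshev-Gauss-Lobatto points on [-1, 1]: x_k = -cos(pi k / N)
    let x: Vec<f64> = (0..=nn).map(|k| -(PI * k as f64 / nn as f64).cos()).collect();
    let f = |t: f64| 1.0 / (1.0 + 16.0 * t * t);
    let y: Vec<f64> = x.iter().map(|&xi| f(xi)).collect();
    // evaluate the Lagrange interpolant at an arbitrary point
    let eval = |t: f64| -> f64 {
        let mut sum = 0.0;
        for i in 0..=nn {
            let mut li = 1.0; // cardinal polynomial ℓᵢ(t)
            for j in 0..=nn {
                if j != i {
                    li *= (t - x[j]) / (x[i] - x[j]);
                }
            }
            sum += y[i] * li;
        }
        sum
    };
    let t = 0.35;
    println!("f({}) = {:.6}, interp = {:.6}", t, f(t), eval(t));
}
```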
Results: a plot comparing the interpolant with Runge's function.
Solution of a 1D PDE using spectral collocation
This example illustrates the solution of a 1D PDE using the spectral collocation method. It employs the InterpLagrange struct.
```text
d²u     du          x
——— - 4 —— + 4 u = e  + C
dx²     dx

     -4 e
C = ——————
    1 + e²

x ∈ [-1, 1]
```

Boundary conditions:

u(-1) = 0 and u(1) = 0

Reference solution:

```text
        x   sinh(1)  2x   C
u(x) = e  - ——————— e   + —
            sinh(2)       4
```
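As a sanity check, the reference solution can be verified numerically with plain Rust (central differences; no russell_lab API assumed):

```rust
fn main() {
    let e = std::f64::consts::E;
    let c = -4.0 * e / (1.0 + e * e);
    let u = |x: f64| x.exp() - (1.0_f64.sinh() / 2.0_f64.sinh()) * (2.0 * x).exp() + c / 4.0;
    // boundary conditions: both values should be ~0
    println!("u(-1) = {:.3e}, u(1) = {:.3e}", u(-1.0), u(1.0));
    // ODE residual u'' - 4 u' + 4 u - (e^x + C) via central differences
    let h = 1e-5;
    for x in [-0.5, 0.0, 0.5] {
        let d1 = (u(x + h) - u(x - h)) / (2.0 * h);
        let d2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h);
        println!("residual({:+.1}) = {:.3e}", x, d2 - 4.0 * d1 + 4.0 * u(x) - (x.exp() + c));
    }
}
```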
Results: a plot comparing the numerical solution with the reference solution.
Numerical integration: perimeter of ellipse
```rust
use russell_lab::algo::Quadrature;
use russell_lab::math::PI;
use russell_lab::StrError;
```
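The original example (whose imports survive above) uses the adaptive Quadrature solver. The underlying computation can be sketched with plain Rust and a composite Simpson rule, since the perimeter equals 4 ∫₀^{π/2} √(a² sin²θ + b² cos²θ) dθ:

```rust
fn main() {
    // semi-axes of the ellipse
    let (a, b) = (2.0_f64, 1.0_f64);
    // integrand of the arc-length integral on [0, pi/2]
    let f = |t: f64| ((a * t.sin()).powi(2) + (b * t.cos()).powi(2)).sqrt();
    let n = 1000; // even number of sub-intervals
    let h = std::f64::consts::FRAC_PI_2 / n as f64;
    let mut sum = f(0.0) + f(std::f64::consts::FRAC_PI_2);
    for i in 1..n {
        sum += if i % 2 == 1 { 4.0 } else { 2.0 } * f(i as f64 * h);
    }
    println!("perimeter ≈ {:.6}", 4.0 * sum * h / 3.0); // ≈ 9.688448 for a=2, b=1
}
```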
Finding a local minimum and a root
This example finds the local minimum between 0.1 and 0.3 and the root between 0.3 and 0.4 for the function illustrated below
The output looks like:
```text
x_optimal = 0.20000000003467466
Number of function evaluations = 18
Number of Jacobian evaluations = 0
Number of iterations = 18
Error estimate = unavailable
Total computation time = 6.11µs

x_root = 0.3397874957748173
Number of function evaluations = 10
Number of Jacobian evaluations = 0
Number of iterations = 9
Error estimate = unavailable
Total computation time = 907ns
```
Computing the pseudo-inverse matrix
```rust
use russell_lab::{mat_pseudo_inverse, Matrix, StrError};

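fn main() -> Result<(), StrError> {
    // a 4 x 2 rectangular matrix
    let mut a = Matrix::from(&[
        [1.0, 0.0],
        [0.0, 1.0],
        [0.0, 1.0],
        [1.0, 0.0],
    ]);
    // the pseudo-inverse is 2 x 4
    // NOTE: the signature mat_pseudo_inverse(ai, a) is assumed here;
    // a may be modified in place by the underlying SVD
    let mut ai = Matrix::new(2, 4);
    mat_pseudo_inverse(&mut ai, &mut a)?;
    println!("a⁺ =\n{}", ai);
    Ok(())
}
```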
Matrix visualization
We can use the fantastic tool named vismatrix to visualize the pattern of non-zero values of a matrix. With vismatrix, we can click on each circle and investigate the numeric values as well.
The function mat_write_vismatrix writes the input data file for vismatrix.
After generating the ".smat" file, run the following command:
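For instance (the file path below is hypothetical):

```bash
vismatrix /tmp/russell_lab/matrix.smat
```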
Output: a vismatrix screenshot showing the non-zero pattern of the matrix.
Computing eigenvalues and eigenvectors
```rust
use russell_lab::*;

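fn main() -> Result<(), StrError> {
    // a general (non-symmetric) matrix; its eigenvalues are the cubic roots of 1
    let mut a = Matrix::from(&[
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0],
        [1.0, 0.0, 0.0],
    ]);
    // NOTE: the signature mat_eigen(l_real, l_imag, v_real, v_imag, a)
    // is assumed here; a is modified in place
    let m = a.nrow();
    let mut l_real = Vector::new(m);
    let mut l_imag = Vector::new(m);
    let mut v_real = Matrix::new(m, m);
    let mut v_imag = Matrix::new(m, m);
    mat_eigen(&mut l_real, &mut l_imag, &mut v_real, &mut v_imag, &mut a)?;
    println!("eigenvalues (real) =\n{}", l_real);
    println!("eigenvalues (imag) =\n{}", l_imag);
    Ok(())
}
```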
Cholesky factorization
```rust
use russell_lab::*;

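fn main() {
    // symmetric positive-definite matrix
    let a = Matrix::from(&[
        [  4.0,  12.0, -16.0],
        [ 12.0,  37.0, -43.0],
        [-16.0, -43.0,  98.0],
    ]);
    // a plain-Rust Cholesky factorization (A = L·Lᵀ) is written out here,
    // so no particular russell_lab factorization signature is assumed
    let n = a.nrow();
    let mut l = Matrix::new(n, n);
    for j in 0..n {
        for i in j..n {
            let mut sum = a.get(i, j);
            for k in 0..j {
                sum -= l.get(i, k) * l.get(j, k);
            }
            l.set(i, j, if i == j { sum.sqrt() } else { sum / l.get(j, j) });
        }
    }
    println!("L =\n{}", l); // expect rows (2,0,0), (6,1,0), (-8,5,3)
}
```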
About the column major representation
Only the COL-MAJOR representation is considered here.
```text
    ┌     ┐
    │ 0 3 │      row_major = {0, 3,
A = │ 1 4 │                   1, 4,
    │ 2 5 │                   2, 5};
    └     ┘
    (m × n)      col_major = {0, 1, 2,
                              3, 4, 5};

Aᵢⱼ = col_major[i + j·m] = row_major[i·n + j]
        ↑
   COL-MAJOR IS ADOPTED HERE
```
The main reason to use the col-major representation is to make the code work better with BLAS/LAPACK written in Fortran. Although those libraries have functions to handle row-major data, they usually add an overhead due to temporary memory allocation and copies, including transposing matrices. Moreover, the row-major versions of some BLAS/LAPACK libraries produce incorrect results (notably DSYEV).
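The indexing formula can be demonstrated with a few lines of plain Rust:

```rust
fn main() {
    let (m, n) = (3, 2);
    // col-major storage of the 3 x 2 matrix A shown above
    let col_major = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0];
    for i in 0..m {
        for j in 0..n {
            // A[i][j] = col_major[i + j*m]
            print!("{} ", col_major[i + j * m]);
        }
        println!();
    }
}
```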
Benchmarks
Need to install:
Run the benchmarks with:
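Assuming standard Cargo bench targets (an assumption; the repository may provide dedicated scripts):

```bash
cargo bench
```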
Jacobi Rotation versus LAPACK DSYEV
Comparison of the performance of mat_eigen_sym_jacobi (Jacobi rotation) versus mat_eigen_sym (which calls LAPACK DSYEV).
Notes for developers
- The c_code directory contains a thin wrapper to the BLAS libraries (OpenBLAS or Intel MKL)
- The c_code directory also contains a wrapper to the C math functions
- The build.rs file uses the cc crate to build the C-wrappers