Russell Lab - Scientific laboratory for linear algebra and numerical mathematics
This crate is part of Russell - Rust Scientific Library
Contents
- Introduction
- Installation
- Examples
- Running an example with Intel MKL
- Sorting small tuples
- Check first and second derivatives
- Bessel functions
- Linear fitting
- Chebyshev adaptive interpolation (given function)
- Chebyshev adaptive interpolation (given data)
- Chebyshev adaptive interpolation (given noisy data)
- Lagrange interpolation
- Solution of a 1D PDE using spectral collocation
- Numerical integration: perimeter of ellipse
- Finding a local minimum and a root
- Finding all roots in an interval
- Computing the pseudo-inverse matrix
- Matrix visualization
- Computing eigenvalues and eigenvectors
- Cholesky factorization
- Read a table-formatted data file
- About the column major representation
- Benchmarks
- Notes for developers
Introduction
This library implements specialized mathematical functions (e.g., Bessel, Erf, Gamma) and functions to perform linear algebra computations (e.g., Matrix, Vector, Matrix-Vector, Eigen-decomposition, SVD). This library also implements a set of helpful functions for comparing floating-point numbers, measuring computer time, reading table-formatted data, and more.
The code is implemented in native Rust as much as possible. However, light interfaces ("wrappers") are implemented for some of the best tools available in numerical mathematics, including OpenBLAS and Intel MKL.
The code is organized in modules:
- algo: algorithms that depend on the other modules (e.g., Lagrange interpolation)
- base: "base" functionality to help other modules
- check: functions to assist in unit and integration testing
- math: mathematical (specialized) functions and constants
- matrix: NumMatrix struct and associated functions
- matvec: functions operating on matrices and vectors
- vector: NumVector struct and associated functions
For linear algebra, the main structures are NumVector and NumMatrix, which are generic vector and matrix structures. The matrix data is stored in column-major order. Vector and Matrix are the f64 aliases of NumVector and NumMatrix, respectively; Complex64 aliases are provided as well.
The linear algebra functions currently handle only (f64, i32) pairs, i.e., accessing the (double, int) C functions. We also consider (Complex64, i32) pairs.
There are many functions for linear algebra, such as (for Real and Complex types):
- Vector addition, copy, inner and outer products, norms, and more
- Matrix addition, multiplication, copy, singular-value decomposition, eigenvalues, pseudo-inverse, inverse, norms, and more
- Matrix-vector multiplication, and more
- Solution of dense linear systems with symmetric or non-symmetric coefficient matrices, and more
- Reading and writing files, linspace, grid generators, Stopwatch, linear fitting, and more
- Checking results, comparing floating-point numbers, and verifying the correctness of derivatives; see russell_lab::check
Documentation
External crates
A few other third-party crates related to numerical mathematics and associated disciplines are listed below:
- lambert_w: computes the Lambert W function
Installation
This crate depends on some non-Rust high-performance libraries. See the main README file for the steps to install these dependencies.
Setting Cargo.toml up
Check the crate version and update your Cargo.toml accordingly:
```toml
[dependencies]
russell_lab = "*"
```
Optional features
The following (Rust) features are available:
- intel_mkl: use Intel MKL instead of OpenBLAS
Note that the main README file presents the steps to compile the required libraries according to each feature.
Examples
This section illustrates how to use russell_lab.
Running an example with Intel MKL
Consider the following code:
```rust
use russell_lab::*;

fn main() {
    println!("Using Intel MKL = {}", using_intel_mkl());
    println!("BLAS num threads = {}", get_num_threads());
    set_num_threads(2);
    println!("BLAS num threads = {}", get_num_threads());
}
```
First, run the example without Intel MKL (default):
The output looks like this:
```text
Using Intel MKL = false
BLAS num threads = 24
BLAS num threads = 2
```
Second, run the code with the intel_mkl feature:
Then, the output looks like this:
```text
Using Intel MKL = true
BLAS num threads = 24
BLAS num threads = 2
```
Sorting small tuples
```rust
use russell_lab::{sort2, sort3, sort4};
use russell_lab::StrError;
```
Check first and second derivatives
Check the implementation of the first and second derivatives of f(x) (illustrated below).
```rust
use russell_lab::*; // brings NoArgs, StrError, and the check functions into scope
```
Output:
```text
 x     df/dx                 d²f/dx²
-2    -0.01514792899408284  -0.022255803368229403
-1.5  -0.03506208911614317  -0.06759718081851025
-1    -0.11072664359861592  -0.30612660289029103
-0.5  -0.64                 -2.816
 0     0                    32
 0.5   0.64                 -2.816
 1     0.11072664359861592  -0.30612660289029103
 1.5   0.03506208911614317  -0.06759718081851025
 2     0.01514792899408284  -0.022255803368229403
```
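A check of this kind can be sketched in plain Rust with central differences (a self-contained sketch; russell_lab::check provides ready-made assertion functions). The function f(x) = -1/(1 + 16x²) reproduces the table above:

```rust
// Central-difference check of first and second derivatives
// (plain-Rust sketch, not russell_lab::check's API).

// f(x) = -1 / (1 + 16 x²), the function tabulated above
fn f(x: f64) -> f64 {
    -1.0 / (1.0 + 16.0 * x * x)
}

// analytical first derivative: df/dx = 32 x / (1 + 16 x²)²
fn df(x: f64) -> f64 {
    32.0 * x / (1.0 + 16.0 * x * x).powi(2)
}

// central-difference approximation of the first derivative
fn deriv1_central(x: f64, h: f64) -> f64 {
    (f(x + h) - f(x - h)) / (2.0 * h)
}

// central-difference approximation of the second derivative
fn deriv2_central(x: f64, h: f64) -> f64 {
    (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
}

fn main() {
    let h = 1e-4;
    for &x in &[-2.0, -1.0, 0.0, 1.0, 2.0] {
        let err = (deriv1_central(x, h) - df(x)).abs();
        println!("x = {:5.1}  |Δ(df/dx)| = {:.2e}", x, err);
        assert!(err < 1e-6);
    }
    // second derivative at x = 0 (the table above lists 32)
    assert!((deriv2_central(0.0, h) - 32.0).abs() < 1e-4);
}
```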
Bessel functions
Plotting the Bessel J0, J1, and J2 functions:
```rust
use plotpy::{Curve, Plot};
use russell_lab::math::{bessel_j0, bessel_j1, bessel_jn};
use russell_lab::{StrError, Vector};

const OUT_DIR: &str = "/tmp/russell_lab/";
```
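Independently of any library, Jn can be sketched from the integral representation Jₙ(x) = (1/π) ∫₀^π cos(nθ − x sin θ) dθ with the trapezoidal rule (a plain-Rust sketch, not russell_lab's implementation):

```rust
use std::f64::consts::PI;

// Bessel function of the first kind (integer order) via the integral
// representation Jn(x) = (1/π) ∫₀^π cos(nθ - x·sinθ) dθ, trapezoidal rule.
fn bessel_j(n: u32, x: f64) -> f64 {
    let m = 2000; // number of sub-intervals
    let h = PI / (m as f64);
    let g = |theta: f64| (n as f64 * theta - x * theta.sin()).cos();
    let mut sum = 0.5 * (g(0.0) + g(PI));
    for k in 1..m {
        sum += g(k as f64 * h);
    }
    sum * h / PI
}

fn main() {
    // sample J0, J1, and J2 on [0, 10] (the values that would be plotted)
    for i in 0..=10 {
        let x = i as f64;
        println!(
            "J0({:2}) = {:+.6}  J1({:2}) = {:+.6}  J2({:2}) = {:+.6}",
            x, bessel_j(0, x), x, bessel_j(1, x), x, bessel_j(2, x)
        );
    }
}
```

The trapezoidal rule converges very fast here because the integrand has zero derivative at both endpoints.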
Output:
Linear fitting
Fit a line through a set of points. The line has slope m and y-intercept c, i.e., y(0) = c.
```rust
use russell_lab::algo::linear_fitting;
use russell_lab::StrError;
```
Results:
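The ordinary least-squares formulas behind such a fit, m = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)² and c = ȳ − m·x̄, can be sketched in plain Rust (self-contained, not russell_lab's linear_fitting):

```rust
// Ordinary least-squares fit of the line y = m·x + c (plain-Rust sketch).
fn fit_line(x: &[f64], y: &[f64]) -> (f64, f64) {
    assert_eq!(x.len(), y.len());
    let n = x.len() as f64;
    let x_bar = x.iter().sum::<f64>() / n;
    let y_bar = y.iter().sum::<f64>() / n;
    let mut num = 0.0; // Σ (xi - x̄)(yi - ȳ)
    let mut den = 0.0; // Σ (xi - x̄)²
    for i in 0..x.len() {
        num += (x[i] - x_bar) * (y[i] - y_bar);
        den += (x[i] - x_bar) * (x[i] - x_bar);
    }
    let m = num / den;
    let c = y_bar - m * x_bar;
    (m, c)
}

fn main() {
    // points on the exact line y = 2x + 1 must be recovered exactly
    let x = [0.0, 1.0, 2.0, 3.0];
    let y = [1.0, 3.0, 5.0, 7.0];
    let (m, c) = fit_line(&x, &y);
    println!("m = {}, c = {}", m, c);
    assert!((m - 2.0).abs() < 1e-14 && (c - 1.0).abs() < 1e-14);
}
```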
Chebyshev adaptive interpolation (given function)
This example illustrates the use of InterpChebyshev to interpolate data given a function.
Results:
Chebyshev adaptive interpolation (given data)
This example illustrates the use of InterpChebyshev to interpolate discrete data.
Results:
Chebyshev adaptive interpolation (given noisy data)
This example illustrates the use of InterpChebyshev to interpolate noisy data.
Results:
Lagrange interpolation
This example illustrates the use of InterpLagrange with a Chebyshev-Gauss-Lobatto grid to interpolate Runge's equation.
Results:
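The underlying technique can be sketched in plain Rust: barycentric Lagrange interpolation on the Chebyshev-Gauss-Lobatto nodes xⱼ = cos(jπ/N), whose barycentric weights are wⱼ = (−1)ʲ·δⱼ with δ = 1/2 at the endpoints and 1 elsewhere (a self-contained sketch, not the InterpLagrange API):

```rust
use std::f64::consts::PI;

// Runge's function
fn runge(x: f64) -> f64 {
    1.0 / (1.0 + 25.0 * x * x)
}

// barycentric Lagrange interpolation on Chebyshev-Gauss-Lobatto points
fn interp_cgl(nn: usize, f: impl Fn(f64) -> f64, x: f64) -> f64 {
    let mut num = 0.0;
    let mut den = 0.0;
    for j in 0..=nn {
        let xj = (j as f64 * PI / nn as f64).cos(); // node
        if (x - xj).abs() < 1e-14 {
            return f(xj); // x coincides with a grid point
        }
        // weight w_j = (-1)^j · δ_j, with δ = 1/2 at the endpoints
        let mut w = if j % 2 == 0 { 1.0 } else { -1.0 };
        if j == 0 || j == nn {
            w *= 0.5;
        }
        let t = w / (x - xj);
        num += t * f(xj);
        den += t;
    }
    num / den
}

fn main() {
    let nn = 60; // polynomial degree
    for &x in &[-0.7, -0.333, 0.0, 0.25, 0.9] {
        let err = (interp_cgl(nn, runge, x) - runge(x)).abs();
        println!("x = {:6.3}  |error| = {:.2e}", x, err);
        assert!(err < 1e-3);
    }
}
```

Unlike equally-spaced nodes, the Chebyshev-Gauss-Lobatto grid avoids the Runge phenomenon, so the error decays geometrically with N.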
Solution of a 1D PDE using spectral collocation
This example illustrates the solution of a 1D PDE using the spectral collocation method. It employs the InterpLagrange struct.
```text
d²u     du          x
─── - 4 ── + 4 u = e  + C
dx²     dx

     -4 e
C = ──────
    1 + e²

x ∈ [-1, 1]
```

Boundary conditions:

```text
u(-1) = 0 and u(1) = 0
```

Reference solution:

```text
        x   sinh(1)  2x   C
u(x) = e  - ─────── e   + ─
            sinh(2)       4
```
Results:
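The reference solution can be verified numerically: check the boundary conditions and the ODE residual u″ − 4u′ + 4u − eˣ − C with central differences (a plain-Rust sketch, independent of the collocation code):

```rust
use std::f64::consts::E;

// constant C = -4e / (1 + e²)
fn c_const() -> f64 {
    -4.0 * E / (1.0 + E * E)
}

// reference solution u(x) = e^x - (sinh(1)/sinh(2)) e^{2x} + C/4
fn u(x: f64) -> f64 {
    x.exp() - (1.0_f64.sinh() / 2.0_f64.sinh()) * (2.0 * x).exp() + c_const() / 4.0
}

// ODE residual u'' - 4 u' + 4 u - e^x - C, with derivatives by central differences
fn residual(x: f64) -> f64 {
    let h = 1e-5;
    let d1 = (u(x + h) - u(x - h)) / (2.0 * h);
    let d2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h);
    d2 - 4.0 * d1 + 4.0 * u(x) - x.exp() - c_const()
}

fn main() {
    // boundary conditions u(-1) = u(1) = 0
    assert!(u(-1.0).abs() < 1e-13);
    assert!(u(1.0).abs() < 1e-13);
    // the residual must vanish (up to finite-difference error) inside the domain
    for &x in &[-0.75, -0.25, 0.0, 0.5, 0.9] {
        println!("x = {:5.2}  residual = {:+.2e}", x, residual(x));
        assert!(residual(x).abs() < 1e-4);
    }
}
```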
Numerical integration: perimeter of ellipse
```rust
use russell_lab::algo::Quadrature;
use russell_lab::math::PI;
use russell_lab::StrError;
```
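The integral behind this example, P = 4a ∫₀^{π/2} √(1 − k sin²θ) dθ with k = 1 − (b/a)², can be sketched with the trapezoidal rule in plain Rust (a self-contained sketch, not russell_lab's Quadrature):

```rust
use std::f64::consts::PI;

// Perimeter of the ellipse (x/a)² + (y/b)² = 1 with a ≥ b, via the
// complete elliptic integral P = 4a ∫₀^{π/2} √(1 - k·sin²θ) dθ, k = 1 - (b/a)²,
// evaluated with the trapezoidal rule.
fn ellipse_perimeter(a: f64, b: f64) -> f64 {
    let k = 1.0 - (b / a) * (b / a);
    let m = 10_000; // number of sub-intervals
    let h = PI / 2.0 / (m as f64);
    let g = |theta: f64| (1.0 - k * theta.sin().powi(2)).sqrt();
    let mut sum = 0.5 * (g(0.0) + g(PI / 2.0));
    for i in 1..m {
        sum += g(i as f64 * h);
    }
    4.0 * a * sum * h
}

fn main() {
    // sanity check: a circle (a = b = 1) must give 2π
    println!("circle: {}", ellipse_perimeter(1.0, 1.0));
    assert!((ellipse_perimeter(1.0, 1.0) - 2.0 * PI).abs() < 1e-12);
    // ellipse with a = 2 and b = 1
    println!("ellipse(2, 1): {}", ellipse_perimeter(2.0, 1.0));
}
```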
Finding a local minimum and a root
This example finds the local minimum between 0.1 and 0.3 and the root between 0.3 and 0.4 for the function illustrated below.
The output looks like:
```text
x_optimal = 0.20000000003467466
Number of function evaluations = 18
Number of Jacobian evaluations = 0
Number of iterations = 18
Error estimate = unavailable
Total computation time = 6.11µs
x_root = 0.3397874957748173
Number of function evaluations = 10
Number of Jacobian evaluations = 0
Number of iterations = 9
Error estimate = unavailable
Total computation time = 907ns
```
Finding all roots in an interval
This example employs a Chebyshev interpolant to find all roots of a function in an interval. The method uses adaptive interpolation followed by calculating the eigenvalues of the companion matrix. These eigenvalues equal the roots of the polynomial. After that, a simple Newton refining (polishing) algorithm is applied.
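The Newton refining step can be sketched in plain Rust; the function below (x² − 2) is a hypothetical stand-in for the interpolated function, and the derivative is approximated by central differences:

```rust
// Newton "polishing" of an approximate root: x ← x - f(x)/f'(x)
// (plain-Rust sketch with a hypothetical f; derivative by central differences).
fn newton_polish(f: impl Fn(f64) -> f64, mut x: f64) -> f64 {
    let h = 1e-7;
    for _ in 0..20 {
        let fx = f(x);
        if fx.abs() < 1e-15 {
            break; // already at (numerical) machine precision
        }
        let dfx = (f(x + h) - f(x - h)) / (2.0 * h);
        x -= fx / dfx;
    }
    x
}

fn main() {
    // hypothetical example: refine a rough approximation of √2 as a root of x² - 2
    let f = |x: f64| x * x - 2.0;
    let root = newton_polish(f, 1.41);
    println!("refined root = {}", root);
    assert!((root - 2.0_f64.sqrt()).abs() < 1e-12);
}
```

This mirrors the effect visible in the output below: the companion-matrix eigenvalues locate the roots to about 1e-8, and the Newton polishing drives f at the roots down to machine precision.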
The output looks like:
```text
N = 184
roots =
┌                     ┐
│ 0.04109147217011577 │
│  0.1530172326889439 │
│ 0.25340124027487965 │
│ 0.33978749525956276 │
│ 0.47590538542276967 │
│  0.5162732673126048 │
└                     ┘
f @ roots =
┌           ┐
│  1.84E-08 │
│ -1.51E-08 │
│ -2.40E-08 │
│  9.53E-09 │
│ -1.16E-08 │
│ -5.80E-09 │
└           ┘
refined roots =
┌                     ┐
│ 0.04109147155278252 │
│ 0.15301723213859994 │
│ 0.25340124149692184 │
│   0.339787495774806 │
│ 0.47590538689192813 │
│  0.5162732665558162 │
└                     ┘
f @ refined roots =
┌           ┐
│  6.66E-16 │
│ -2.22E-16 │
│ -2.22E-16 │
│  1.33E-15 │
│  4.44E-16 │
│ -2.22E-16 │
└           ┘
```
The function and the roots are illustrated in the figure below.
References
- Boyd JP (2002) Computing zeros on a real interval through Chebyshev expansion and polynomial rootfinding, SIAM Journal on Numerical Analysis, 40(5):1666-1682
- Boyd JP (2013) Finding the zeros of a univariate equation: proxy rootfinders, Chebyshev interpolation, and the companion matrix, SIAM Review, 55(2):375-396
- Boyd JP (2014) Solving Transcendental Equations: The Chebyshev Polynomial Proxy and Other Numerical Rootfinders, Perturbation Series, and Oracles, SIAM, 460p
Computing the pseudo-inverse matrix
```rust
use russell_lab::{mat_pseudo_inverse, Matrix, StrError};
```
Matrix visualization
We can use the fantastic tool named vismatrix to visualize the pattern of non-zero values of a matrix. With vismatrix, we can click on each circle and investigate the numeric values as well.
The function mat_write_vismatrix writes the input data file for vismatrix.
After generating the "dot-smat" file, run the following command:
Output:

Computing eigenvalues and eigenvectors
```rust
use russell_lab::*;
```
Cholesky factorization
```rust
use russell_lab::*;
```
Read a table-formatted data file
The goal is to read the following file (clay-data.txt):
```text
# Fujinomori clay test results
sr ea er # header
1.00000 -6.00000 0.10000
2.00000 7.00000 0.20000
3.00000 8.00000 0.20000 # << look at this line
# comments plus new lines are OK
4.00000 9.00000 0.40000
5.00000 10.00000 0.50000
# bye
```
The code below illustrates how to do it.
Each column (sr, ea, er) is accessible via the get method of the HashMap.
```rust
use russell_lab::read_table;
use std::collections::HashMap;
use std::env;
use std::path::PathBuf;
```
About the column major representation
Only the COL-MAJOR representation is considered here.
```text
    ┌     ┐   row_major = {0, 3,
    │ 0 3 │                1, 4,
A = │ 1 4 │                2, 5};
    │ 2 5 │
    └     ┘   col_major = {0, 1, 2,
    (m × n)                3, 4, 5}

Aᵢⱼ = col_major[i + j·m] = row_major[i·n + j]
          ↑
COL-MAJOR IS ADOPTED HERE
```
The main reason to use the col-major representation is to make the code work better with BLAS/LAPACK, which are written in Fortran. Although those libraries have functions to handle row-major data, they usually add overhead due to temporary memory allocation and copies, including transposing matrices. Moreover, the row-major versions of some BLAS/LAPACK routines produce incorrect results (notably DSYEV).
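The indexing formula above can be exercised in a few lines of plain Rust:

```rust
// Column-major versus row-major storage of the 3×2 matrix A shown above:
// Aᵢⱼ = col_major[i + j·m] = row_major[i·n + j], with m rows and n columns.

fn get_col_major(data: &[f64], m: usize, i: usize, j: usize) -> f64 {
    data[i + j * m]
}

fn get_row_major(data: &[f64], n: usize, i: usize, j: usize) -> f64 {
    data[i * n + j]
}

fn main() {
    let (m, n) = (3, 2); // rows, columns
    let col_major = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0];
    let row_major = [0.0, 3.0, 1.0, 4.0, 2.0, 5.0];
    for i in 0..m {
        for j in 0..n {
            assert_eq!(
                get_col_major(&col_major, m, i, j),
                get_row_major(&row_major, n, i, j)
            );
        }
    }
    println!("the two layouts represent the same matrix");
}
```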
Benchmarks
Need to install:
Run the benchmarks with:
Chebyshev polynomial evaluation
Comparison of the performance of InterpChebyshev::eval (implementing the Clenshaw algorithm) and InterpChebyshev::eval_using_trig (using trigonometric functions).
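Both evaluation strategies can be sketched in plain Rust (not the InterpChebyshev API): Clenshaw's recurrence bₖ = aₖ + 2x·bₖ₊₁ − bₖ₊₂ with result a₀ + x·b₁ − b₂, versus the identity Tₖ(x) = cos(k·acos x):

```rust
// Two ways to evaluate a Chebyshev series Σ aₖ·Tₖ(x) on [-1, 1].

// Clenshaw's recurrence (no trigonometric calls)
fn clenshaw(a: &[f64], x: f64) -> f64 {
    let (mut b1, mut b2) = (0.0, 0.0);
    for k in (1..a.len()).rev() {
        let b0 = a[k] + 2.0 * x * b1 - b2;
        b2 = b1;
        b1 = b0;
    }
    a[0] + x * b1 - b2
}

// direct evaluation via Tₖ(x) = cos(k·acos x)
fn eval_trig(a: &[f64], x: f64) -> f64 {
    let t = x.acos();
    a.iter().enumerate().map(|(k, &ak)| ak * (k as f64 * t).cos()).sum()
}

fn main() {
    let a = [0.5, -1.0, 0.25, 2.0]; // arbitrary coefficients
    for &x in &[-0.9, -0.3, 0.0, 0.5, 1.0] {
        let (c, t) = (clenshaw(&a, x), eval_trig(&a, x));
        println!("x = {:4.1}  clenshaw = {:+.12}  trig = {:+.12}", x, c, t);
        assert!((c - t).abs() < 1e-12);
    }
}
```

Both give the same values; the benchmark compares their speed.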
Jacobi Rotation versus LAPACK DSYEV
Comparison of the performance of mat_eigen_sym_jacobi (Jacobi rotation) versus mat_eigen_sym (calling LAPACK DSYEV).
Notes for developers
- The c_code directory contains a thin wrapper to the BLAS libraries (OpenBLAS or Intel MKL)
- The c_code directory also contains a wrapper to the C math functions
- The build.rs file uses the crate cc to build the C-wrappers