Crate finitediff
This crate contains a wide range of methods for the calculation of gradients, Jacobians and
Hessians using forward and central differences.
The methods have been implemented for input vectors of the types Vec<f64> and ndarray::Array1<f64>.
Central differences are more accurate but require more evaluations of the cost function and are
therefore computationally more expensive.
References
Jorge Nocedal and Stephen J. Wright (2006). Numerical Optimization. Springer. ISBN 0-387-30303-0.
Usage
Add this to your dependencies section of Cargo.toml:

[dependencies]
finitediff = "0.1.4"
To use the FiniteDiff trait implementations on the ndarray types, please activate the ndarray feature:

[dependencies]
finitediff = { version = "0.1.4", features = ["ndarray"] }
Examples
Calculation of the gradient
This section illustrates the computation of the gradient of a function f at a point x of type Vec<f64>. Using forward differences requires n+1 evaluations of the function f, while central differences require 2*n evaluations.
For Vec<f64>
use finitediff::FiniteDiff;

// Define cost function `f(x)`
let f = |x: &Vec<f64>| -> f64 {
    // ...
};

// Point at which gradient should be calculated
let x = vec![1.0f64, 1.0];

// Calculate gradient of `f` at `x` using forward differences
let grad_forward = x.forward_diff(&f);

// Calculate gradient of `f` at `x` using central differences
let grad_central = x.central_diff(&f);
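For a complete, runnable sketch, assume (purely for illustration) the sum-of-squares cost function f(x) = x_1^2 + x_2^2, whose exact gradient at (1, 1) is (2, 2):

use finitediff::FiniteDiff;

// Hypothetical example cost function: f(x) = x_1^2 + x_2^2
let f = |x: &Vec<f64>| -> f64 { x.iter().map(|xi| xi.powi(2)).sum() };

let x = vec![1.0f64, 1.0];

let grad_forward = x.forward_diff(&f);
let grad_central = x.central_diff(&f);

// The exact gradient at (1, 1) is (2, 2); both approximations should be close to it.
assert!((grad_forward[0] - 2.0).abs() < 1e-5);
assert!((grad_central[1] - 2.0).abs() < 1e-5);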
For ndarray::Array1<f64>
use ndarray::{array, Array1};
use finitediff::FiniteDiff;

// Define cost function `f(x)`
let f = |x: &Array1<f64>| -> f64 {
    // ...
};

// Point at which gradient should be calculated
let x = array![1.0f64, 1.0];

// Calculate gradient of `f` at `x` using forward differences
let grad_forward = x.forward_diff(&f);

// Calculate gradient of `f` at `x` using central differences
let grad_central = x.central_diff(&f);
Calculation of the Jacobian
Note that the same interface is also implemented for ndarray::Array1<f64> (not shown).
Full Jacobian
The calculation of the full Jacobian requires n+1 evaluations of the function f.
use finitediff::FiniteDiff;

let f = |x: &Vec<f64>| -> Vec<f64> {
    // ...
};

let x = vec![1.0f64, 1.0, 1.0, 1.0, 1.0, 1.0];

// Using forward differences
let jacobian_forward = x.forward_jacobian(&f);

// Using central differences
let jacobian_central = x.central_jacobian(&f);
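As a concrete sketch, assume (for illustration only) the component-wise square function f_i(x) = x_i^2, whose Jacobian is the diagonal matrix diag(2*x_i), and assume that for Vec<f64> input the Jacobian is returned as a nested Vec<Vec<f64>>:

use finitediff::FiniteDiff;

// Hypothetical example: f_i(x) = x_i^2, so the exact Jacobian is diag(2 * x_i)
let f = |x: &Vec<f64>| -> Vec<f64> { x.iter().map(|xi| xi.powi(2)).collect() };

let x = vec![1.0f64, 2.0, 3.0];

let jacobian = x.central_jacobian(&f);

// The exact Jacobian is diag(2, 4, 6); off-diagonal entries are zero.
assert!((jacobian[0][0] - 2.0).abs() < 1e-4);
assert!((jacobian[1][1] - 4.0).abs() < 1e-4);
assert!((jacobian[2][2] - 6.0).abs() < 1e-4);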
Product of the Jacobian J(x) with a vector p
Directly computing J(x)*p can be much more efficient than computing J(x) first and then multiplying it with p. While computing the full Jacobian J(x) requires n+1 evaluations of f, J(x)*p only requires 2.
use finitediff::FiniteDiff;

let f = |x: &Vec<f64>| -> Vec<f64> {
    // ...
};

let x = vec![1.0f64, 1.0, 1.0, 1.0, 1.0, 1.0];
let p = vec![1.0f64, 2.0, 3.0, 4.0, 5.0, 6.0];

// using forward differences
let jacobian_forward = x.forward_jacobian_vec_prod(&f, &p);

// using central differences
let jacobian_central = x.central_jacobian_vec_prod(&f, &p);
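Continuing with the illustrative f_i(x) = x_i^2 from above (an assumption made only for demonstration), the exact product J(x)*p has entries 2*x_i*p_i, which makes the result easy to check:

use finitediff::FiniteDiff;

// Hypothetical example: f_i(x) = x_i^2, so J(x)*p has entries 2 * x_i * p_i
let f = |x: &Vec<f64>| -> Vec<f64> { x.iter().map(|xi| xi.powi(2)).collect() };

let x = vec![1.0f64, 2.0, 3.0];
let p = vec![1.0f64, 1.0, 1.0];

let jp = x.central_jacobian_vec_prod(&f, &p);

// The exact product is (2, 4, 6).
assert!((jp[0] - 2.0).abs() < 1e-4);
assert!((jp[1] - 4.0).abs() < 1e-4);
assert!((jp[2] - 6.0).abs() < 1e-4);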
Sparse Jacobian
If the Jacobian is sparse, its structure can be exploited using perturbation vectors. See Nocedal & Wright for details.
use finitediff::{FiniteDiff, PerturbationVector};

let f = |x: &Vec<f64>| -> Vec<f64> {
    // ...
};

let x = vec![1.0f64, 1.0, 1.0, 1.0, 1.0, 1.0];

let pert = vec![
    PerturbationVector::new()
        .add(0, vec![0, 1])
        .add(3, vec![2, 3, 4]),
    PerturbationVector::new()
        .add(1, vec![0, 1, 2])
        .add(4, vec![3, 4, 5]),
    PerturbationVector::new()
        .add(2, vec![1, 2, 3])
        .add(5, vec![4, 5]),
];

// using forward differences
let jacobian_forward = x.forward_jacobian_pert(&f, &pert);

// using central differences
let jacobian_central = x.central_jacobian_pert(&f, &pert);
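The perturbation vectors above encode a tridiagonal sparsity pattern, assuming that each add(i, vec![...]) pairs the index i of a perturbed parameter with the indices of the function components affected by it, and that parameters grouped in one PerturbationVector are perturbed simultaneously. Under that reading, a sketch with an illustrative function (not part of the crate) that has exactly this tridiagonal Jacobian could look as follows:

use finitediff::{FiniteDiff, PerturbationVector};

// Hypothetical example with a tridiagonal Jacobian:
// f_i(x) = x_{i-1} + x_i^2 + x_{i+1} (missing neighbours treated as 0)
let f = |x: &Vec<f64>| -> Vec<f64> {
    let n = x.len();
    (0..n)
        .map(|i| {
            let left = if i > 0 { x[i - 1] } else { 0.0 };
            let right = if i + 1 < n { x[i + 1] } else { 0.0 };
            left + x[i].powi(2) + right
        })
        .collect()
};

let x = vec![1.0f64, 1.0, 1.0, 1.0, 1.0, 1.0];

// Columns 0..6 of this Jacobian affect rows {0,1}, {0,1,2}, {1,2,3}, {2,3,4},
// {3,4,5} and {4,5}, matching the perturbation vectors above.
let pert = vec![
    PerturbationVector::new().add(0, vec![0, 1]).add(3, vec![2, 3, 4]),
    PerturbationVector::new().add(1, vec![0, 1, 2]).add(4, vec![3, 4, 5]),
    PerturbationVector::new().add(2, vec![1, 2, 3]).add(5, vec![4, 5]),
];

// At x = (1, ..., 1) the exact Jacobian has 2.0 on the diagonal and 1.0 on the
// sub- and superdiagonals; all other entries are 0.0.
let jacobian = x.forward_jacobian_pert(&f, &pert);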
Calculation of the Hessian
Note that the same interface is also implemented for ndarray::Array1<f64> (not shown).
Full Hessian
use finitediff::FiniteDiff;

let g = |x: &Vec<f64>| -> Vec<f64> {
    // ...
};

let x = vec![1.0f64, 1.0, 1.0, 1.0];

// using forward differences
let hessian_forward = x.forward_hessian(&g);

// using central differences
let hessian_central = x.central_hessian(&g);
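The closure g passed to these methods is the gradient of the cost function; the Hessian is obtained by applying finite differences to g. A minimal sketch, assuming the illustrative cost function f(x) = x_1^2 + ... + x_n^2 with gradient g(x) = 2*x (exact Hessian 2*I) and assuming a nested Vec<Vec<f64>> return type for Vec<f64> input:

use finitediff::FiniteDiff;

// Hypothetical gradient of f(x) = x_1^2 + ... + x_n^2, i.e. g(x) = 2 * x
let g = |x: &Vec<f64>| -> Vec<f64> { x.iter().map(|xi| 2.0 * xi).collect() };

let x = vec![1.0f64, 1.0, 1.0, 1.0];

let hessian = x.forward_hessian(&g);

// The exact Hessian is 2 * I.
assert!((hessian[0][0] - 2.0).abs() < 1e-5);
assert!(hessian[0][1].abs() < 1e-5);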
Product of the Hessian H(x) with a vector p
use finitediff::FiniteDiff;

let g = |x: &Vec<f64>| -> Vec<f64> {
    // ...
};

let x = vec![1.0f64, 1.0, 1.0, 1.0];
let p = vec![2.0, 3.0, 4.0, 5.0];

// using forward differences
let hessian_forward = x.forward_hessian_vec_prod(&g, &p);

// using central differences
let hessian_central = x.central_hessian_vec_prod(&g, &p);
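With the same illustrative gradient g(x) = 2*x as above, the exact product H(x)*p is simply 2*p, which serves as a sanity check:

use finitediff::FiniteDiff;

// Hypothetical gradient g(x) = 2 * x, so H(x) = 2 * I and H(x)*p = 2 * p
let g = |x: &Vec<f64>| -> Vec<f64> { x.iter().map(|xi| 2.0 * xi).collect() };

let x = vec![1.0f64, 1.0, 1.0, 1.0];
let p = vec![2.0, 3.0, 4.0, 5.0];

let hp = x.forward_hessian_vec_prod(&g, &p);

// The exact product is (4, 6, 8, 10).
assert!((hp[0] - 4.0).abs() < 1e-4);
assert!((hp[3] - 10.0).abs() < 1e-4);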
Calculation of the Hessian without knowledge of the gradient
use finitediff::FiniteDiff;

let f = |x: &Vec<f64>| -> f64 {
    // ...
};

let x = vec![1.0f64, 1.0, 1.0, 1.0];

let hessian = x.forward_hessian_nograd(&f);
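A sketch with the illustrative sum-of-squares cost function (an assumption for demonstration only); note that approximating second derivatives from function values alone is generally less accurate than the gradient-based variants above:

use finitediff::FiniteDiff;

// Hypothetical cost function f(x) = x_1^2 + ... + x_n^2 with exact Hessian 2 * I
let f = |x: &Vec<f64>| -> f64 { x.iter().map(|xi| xi.powi(2)).sum() };

let x = vec![1.0f64, 1.0, 1.0, 1.0];

let hessian = x.forward_hessian_nograd(&f);

// The exact Hessian is 2 * I; the approximation carries some finite-difference error.
println!("{:?}", hessian);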
Calculation of the sparse Hessian without knowledge of the gradient
use finitediff::FiniteDiff;

let f = |x: &Vec<f64>| -> f64 {
    // ...
};

let x = vec![1.0f64, 1.0, 1.0, 1.0];

// Indices at which the Hessian should be evaluated. All other
// elements of the Hessian will be zero.
let indices = vec![[1, 1], [2, 3], [3, 3]];

let hessian = x.forward_hessian_nograd_sparse(&f, indices);
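A sketch, again with the illustrative sum-of-squares cost function (exact Hessian 2*I), assumed here only for demonstration:

use finitediff::FiniteDiff;

// Hypothetical cost function f(x) = x_1^2 + ... + x_n^2 with exact Hessian 2 * I
let f = |x: &Vec<f64>| -> f64 { x.iter().map(|xi| xi.powi(2)).sum() };

let x = vec![1.0f64, 1.0, 1.0, 1.0];

// Only the entries (1, 1), (2, 3) and (3, 3) are computed; all others are zero.
let indices = vec![[1, 1], [2, 3], [3, 3]];

let hessian = x.forward_hessian_nograd_sparse(&f, indices);

// For this f, the exact values are H[1][1] = H[3][3] = 2 and H[2][3] = 0.
println!("{:?}", hessian);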
Structs
PerturbationVector: Perturbation Vector for the accelerated computation of the Jacobian.
Traits
FiniteDiff
Type Definitions
PerturbationVectors: A collection of PerturbationVectors.