Function levenberg_marquardt::differentiate_numerically

pub fn differentiate_numerically<F, N, M, O>(
    problem: &mut O
) -> Option<Matrix<F, M, N, O::JacobianStorage>> where
    F: RealField + Float,
    N: Dim,
    M: Dim,
    O: LeastSquaresProblem<F, M, N>,
    O::JacobianStorage: Clone,
    DefaultAllocator: Allocator<F, M, N, Buffer = O::JacobianStorage>, 

Compute a numerical approximation of the Jacobian.

The residuals function is called approximately $30\cdot nm$ times, which can make this slow in debug builds and for larger problems.
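To see where the cost comes from, consider the simplest scheme: a central difference needs at least two extra residual evaluations per parameter, one column of the Jacobian at a time. The following sketch is illustrative only and assumes a residuals function given as a plain closure; the actual implementation works on a LeastSquaresProblem and uses a more careful (and more expensive) scheme.

// Illustrative sketch, not the crate's implementation: central-difference
// Jacobian of a residuals function r: R^n -> R^m given as a closure.
fn central_difference_jacobian(
    residuals: impl Fn(&[f64]) -> Vec<f64>,
    x: &[f64],
    m: usize,
) -> Vec<Vec<f64>> {
    let n = x.len();
    let h = f64::EPSILON.sqrt(); // common step-size choice
    let mut jacobian = vec![vec![0.0; n]; m];
    for j in 0..n {
        let (mut x_plus, mut x_minus) = (x.to_vec(), x.to_vec());
        x_plus[j] += h;
        x_minus[j] -= h;
        // Two residual evaluations per parameter; more careful adaptive
        // schemes evaluate far more often, hence the call count above.
        let (r_plus, r_minus) = (residuals(&x_plus), residuals(&x_minus));
        for i in 0..m {
            jacobian[i][j] = (r_plus[i] - r_minus[i]) / (2.0 * h);
        }
    }
    jacobian
}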

The function is intended for debugging or testing. You can use it to check the derivative implementation of a LeastSquaresProblem.

Computing derivatives numerically is unstable: you can construct functions for which the computed result is far off. If you observe large differences between the derivative computed by this function and your own implementation, the reason might be such an instability.
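As a concrete illustration of such an instability (using a plain central difference here, not this crate's exact scheme): for $f(x) = \sin(1/x)$ near zero the derivative oscillates faster than any fixed step can resolve, and the numerical result disagrees with the exact one by orders of magnitude.

// Illustrative: central differences fail for f(x) = sin(1/x) near zero,
// because the derivative -cos(1/x)/x^2 oscillates faster than the step.
fn main() {
    let f = |x: f64| (1.0 / x).sin();
    let x = 1.0e-5;
    let h = f64::EPSILON.sqrt(); // ~1.5e-8, so 1/x sweeps ~300 radians over [x-h, x+h]
    let numeric = (f(x + h) - f(x - h)) / (2.0 * h);
    let exact = -(1.0 / x).cos() / (x * x);
    // `numeric` is bounded by ~1/h (about 7e7), while `exact` is of
    // order 1e10: the two disagree by orders of magnitude.
    println!("numeric = {numeric:e}, exact = {exact:e}");
}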

In general, the precision achieved by this function is below full floating point precision: expect errors larger than $10^{-15}$ for f64 and larger than $10^{-7}$ for f32. See the example below for what that means in your tests. If possible, use f64 for testing.
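The loss of precision comes from cancellation in the difference quotient. For a plain central difference with step size $h$ (a simpler scheme than the one used here), the standard error model is

$$\left|\frac{f(x+h) - f(x-h)}{2h} - f'(x)\right| \approx \frac{h^2}{6}\,|f'''(x)| + \frac{\varepsilon\,|f(x)|}{h},$$

where $\varepsilon$ is the machine epsilon. The bound is minimized at $h \sim \varepsilon^{1/3}$, giving a best-case error of order $\varepsilon^{2/3}$; no choice of $h$ reaches full floating point precision.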

A much more precise alternative is provided by differentiate_holomorphic_numerically, but it requires your residuals to be holomorphic and LeastSquaresProblem to be implemented for complex numbers.
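The reason for the extra precision is the complex-step idea behind holomorphic differentiation (stated here in general, independently of this crate's API): for a holomorphic $f$ that is real-valued on the real axis,

$$f'(x) \approx \frac{\operatorname{Im}\,f(x + ih)}{h},$$

which contains no subtraction of nearly equal values, so $h$ can be made very small without cancellation and the result is accurate to nearly machine precision.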

Example

You can use this function to check your derivative implementation in a unit test. For example:

use approx::assert_relative_eq;
use levenberg_marquardt::differentiate_numerically;

// Let `problem` be an instance of `LeastSquaresProblem`
let jacobian_numerical = differentiate_numerically(&mut problem).unwrap();
let jacobian_trait = problem.jacobian().unwrap();
assert_relative_eq!(jacobian_numerical, jacobian_trait, epsilon = 1e-13);

The assert_relative_eq! macro is from the approx crate.