Sparse non-linear least squares optimization
This crate provides a general sparse non-linear least squares optimization library, similar to the C++ libraries ceres-solver, g2o, and gtsam. It supports automatic differentiation through the sophus-autodiff crate, as well as optimization on manifolds and Lie groups through the [sophus_autodiff::manifold::IsVariable] trait and the sophus-lie crate.
Example
use sophus_autodiff::dual::DualVector;
use sophus_autodiff::linalg::{MatF64, VecF64};
use sophus_lie::Isometry2F64;
use sophus_opt::nlls::{costs::Isometry2PriorCostTerm, optimize_nlls, CostFn, CostTerms, OptParams};
use sophus_opt::prelude::*;
use sophus_opt::robust_kernel;
use sophus_opt::variables::{VarBuilder, VarFamily, VarKind};

// We want to fit the isometry `T ∈ SE(2)` to a prior distribution
// `N(E(T), W⁻¹)`, where `E(T)` is the prior mean and `W⁻¹` is the prior
// covariance matrix.
//
// (1) First we need a residual cost term. Here we use the
//     [crate::nlls::costs::Isometry2PriorCostTerm] struct shipped with this
//     crate, whose residual function is `g(T) = log[T * E(T)⁻¹]`.
// (2) The residual function maps the difference between the current estimate
//     and the prior mean into the tangent space of the Lie group.
// (3) The `HasResidualFn` trait is implemented for the cost term so that it
//     can be evaluated inside the solver.
let prior_world_from_robot = Isometry2F64::from_translation(VecF64::<2>::new(1.0, 2.0));

// (4) Define the cost terms by providing the prior distribution
//     `N(E(T), W⁻¹)` for the pose variable.
const POSE: &str = "poses";
let obs_pose_a_from_pose_b_poses = CostTerms::new(
    [POSE],
    vec![Isometry2PriorCostTerm {
        isometry_prior_mean: prior_world_from_robot,
        isometry_prior_precision: MatF64::<3, 3>::identity(),
        entity_indices: [0],
    }],
);

// (5) Define the decision variables. In this case, we only have one variable,
//     and we initialize it with the identity transformation.
let est_world_from_robot = Isometry2F64::identity();
let variables = VarBuilder::new()
    .add_family(POSE, VarFamily::new(VarKind::Free, vec![est_world_from_robot]))
    .build();

// (6) Perform the non-linear least squares optimization.
let solution = optimize_nlls(
    variables,
    vec![CostFn::new_boxed((), obs_pose_a_from_pose_b_poses)],
    OptParams::default(),
)
.unwrap();

// (7) Retrieve the refined transformation and compare it with the prior one.
let refined_world_from_robot =
    solution.variables.get_members::<Isometry2F64>(POSE)[0];
assert_abs_diff_eq!(
    prior_world_from_robot.translation(),
    refined_world_from_robot.translation(),
    epsilon = 1e-6,
);
In the code above, we use the [crate::nlls::costs::Isometry2PriorCostTerm]
struct, which imposes a Gaussian prior on an [sophus_lie::Isometry2F64]
pose by specifying a prior mean E(T) and a precision matrix W. The key
operation is the residual function g(T) = log[T * E(T)⁻¹], which maps the
difference between the current estimate T and the prior mean E(T) into
the tangent space of the Lie group.
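To make the tangent-space residual concrete, here is a minimal self-contained sketch in plain Rust, with no sophus types; `so2_log` and `residual` are hypothetical helper names chosen for this illustration. It computes the analogous residual g(T) = log[T * E(T)⁻¹] for SO(2) rotations, where group composition is angle addition, inversion is negation, and the log map wraps the angle to (-π, π]:

```rust
use std::f64::consts::PI;

/// Log map of SO(2): the rotation angle wrapped to (-PI, PI].
fn so2_log(angle: f64) -> f64 {
    let mut a = angle % (2.0 * PI);
    if a > PI {
        a -= 2.0 * PI;
    } else if a <= -PI {
        a += 2.0 * PI;
    }
    a
}

/// Tangent-space residual g(T) = log(T * E(T)^-1) for SO(2),
/// where composition is angle addition and inversion is negation.
fn residual(estimate: f64, prior_mean: f64) -> f64 {
    so2_log(estimate - prior_mean)
}

fn main() {
    // Estimate at 350 deg, prior mean at 10 deg: the tangent-space residual
    // is the short way around (-20 deg), not the raw difference (+340 deg).
    let r = residual(350.0_f64.to_radians(), 10.0_f64.to_radians());
    assert!((r - (-20.0_f64.to_radians())).abs() < 1e-9);
    println!("residual = {:.6} rad", r);
}
```

The wrap to (-π, π] is why working in the tangent space matters: a naive subtraction of parameters would report a +340° error where the geometrically meaningful error is -20°.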
The cost term implements the [nlls::HasResidualFn] trait so that it can be
evaluated inside the solver. In its `eval` method, automatic
differentiation with [sophus_autodiff::dual::DualVector] computes the Jacobian
of the residual. The returned [nlls::EvaluatedCostTerm] includes
both the residual vector and its associated covariance.
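The role forward-mode automatic differentiation plays here can be illustrated with a scalar dual number; the `Dual` type below is a hypothetical stand-in for this sketch, not the [sophus_autodiff::dual::DualVector] API. Evaluating a residual on a dual input yields the residual value and its derivative in a single pass:

```rust
/// A minimal forward-mode dual number: `re` carries the value,
/// `eps` carries the derivative.
#[derive(Clone, Copy, Debug)]
struct Dual {
    re: f64,
    eps: f64,
}

impl Dual {
    /// Seed an input variable: the derivative of x w.r.t. itself is 1.
    fn var(x: f64) -> Self {
        Dual { re: x, eps: 1.0 }
    }

    /// Product rule: (a + a'ε)(b + b'ε) = ab + (a'b + ab')ε.
    fn mul(self, other: Dual) -> Dual {
        Dual {
            re: self.re * other.re,
            eps: self.eps * other.re + self.re * other.eps,
        }
    }

    /// Subtracting a constant leaves the derivative part unchanged.
    fn sub_const(self, c: f64) -> Dual {
        Dual { re: self.re - c, eps: self.eps }
    }
}

/// Residual r(x) = x^2 - 2, written generically over dual numbers.
fn residual(x: Dual) -> Dual {
    x.mul(x).sub_const(2.0)
}

fn main() {
    let r = residual(Dual::var(3.0));
    assert_eq!(r.re, 7.0);  // residual value: 3^2 - 2
    assert_eq!(r.eps, 6.0); // Jacobian dr/dx = 2x at x = 3
    println!("r = {}, dr/dx = {}", r.re, r.eps);
}
```

A vector-valued dual generalizes this by carrying one epsilon slot per input dimension, so evaluating the residual once produces the full Jacobian row.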
Next, we group our cost terms with [crate::nlls::CostTerms] and associate
them with one or more variable “families” via the [crate::variables::VarBuilder].
Here, we create a single family [crate::variables::VarFamily] named
"poses" for the free variable, initialized to the identity transform. Finally,
we call the sparse non-linear least squares solver [crate::nlls::optimize_nlls]
with these variables and the wrapped cost function [crate::nlls::CostFn],
passing in tuning parameters [crate::nlls::OptParams]; we use the
defaults here. Upon convergence, the solver returns updated variables that we
retrieve from the solution using [crate::variables::VarFamilies::get_members].
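Setting the solver internals aside, the update it repeats can be sketched on a one-variable problem. The `gauss_newton` function below is an illustrative stand-in, not the crate's implementation: it applies the scalar form of the Gauss-Newton normal equations (JᵀWJ) dx = -JᵀWr to the residual r(x) = x² - 2:

```rust
/// One-variable Gauss-Newton: minimizes 0.5 * w * r(x)^2 for r(x) = x^2 - 2.
fn gauss_newton(mut x: f64, w: f64, iters: usize) -> f64 {
    for _ in 0..iters {
        let r = x * x - 2.0; // residual
        let j = 2.0 * x;     // Jacobian dr/dx
        // Scalar normal equations: (J^T W J) dx = -J^T W r.
        let dx = -(j * w * r) / (j * w * j);
        x += dx;
    }
    x
}

fn main() {
    // Starting from x = 1, the iterates converge to sqrt(2),
    // the minimizer of the cost.
    let x = gauss_newton(1.0, 1.0, 8);
    assert!((x - 2.0_f64.sqrt()).abs() < 1e-12);
    println!("x = {x}");
}
```

The real solver does the same thing at scale: it assembles a sparse JᵀWJ system over all variable families, solves for the update, and applies it on the manifold via the group operation rather than plain addition.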
Integration with sophus-rs
This crate is part of the sophus umbrella crate. It re-exports the relevant prelude types under [prelude], so you can seamlessly interoperate with the rest of the sophus-rs types.
Also check out alternative non-linear least squares crates: