pub struct Problem {
    pub total_residual_dimension: usize,
    pub fixed_variable_indexes: HashMap<String, HashSet<usize>>,
    pub variable_bounds: HashMap<String, HashMap<usize, (f64, f64)>>,
    /* private fields */
}
The optimization problem definition for factor graph optimization.
Manages residual blocks (factors/constraints), variables, and the sparse Jacobian structure. Supports mixed manifold types (SE2, SE3, SO2, SO3, Rn) in a single problem and provides efficient parallel residual/Jacobian computation.
§Architecture
The Problem acts as a container and coordinator:
- Stores all residual blocks (factors with optional loss functions)
- Tracks the global structure (which variables connect to which factors)
- Builds and maintains the sparse Jacobian pattern
- Provides parallel residual/Jacobian evaluation using rayon
- Manages variable constraints (fixed indices, bounds)
§Workflow
- Construction: Create a new Problem with Problem::new()
- Add Factors: Use add_residual_block() to add constraints
- Initialize Variables: Use initialize_variables() with initial values
- Build Sparsity: Call build_symbolic_structure() once before optimization
- Linearize: Call compute_residual_and_jacobian_sparse() each iteration
- Extract Covariance: Use compute_and_set_covariances() after convergence
§Example
use apex_solver::core::problem::Problem;
use apex_solver::factors::BetweenFactor;
use apex_solver::manifold::ManifoldType;
use apex_solver::manifold::se2::SE2;
use nalgebra::dvector;
use std::collections::HashMap;
let mut problem = Problem::new();
// Add a between factor
let factor = Box::new(BetweenFactor::new(SE2::from_xy_angle(1.0, 0.0, 0.1)));
problem.add_residual_block(&["x0", "x1"], factor, None);
// Initialize variables
let mut initial = HashMap::new();
initial.insert("x0".to_string(), (ManifoldType::SE2, dvector![0.0, 0.0, 0.0]));
initial.insert("x1".to_string(), (ManifoldType::SE2, dvector![1.0, 0.0, 0.1]));
let variables = problem.initialize_variables(&initial);
assert_eq!(variables.len(), 2);
Fields§
§total_residual_dimension: usize
Total dimension of the stacked residual vector (sum of all residual block dimensions).
§fixed_variable_indexes: HashMap<String, HashSet<usize>>
Variables with fixed indices (e.g., fix the first pose's x, y coordinates). Maps variable name -> set of indices to fix.
§variable_bounds: HashMap<String, HashMap<usize, (f64, f64)>>
Variable bounds (box constraints on individual DOFs). Maps variable name -> (index -> (lower_bound, upper_bound)).
Implementations§
impl Problem
pub fn add_residual_block(
    &mut self,
    variable_key_size_list: &[&str],
    factor: Box<dyn Factor + Send>,
    loss_func: Option<Box<dyn LossFunction + Send>>,
) -> usize
Add a residual block (factor with optional loss function) to the problem.
This is the primary method for building the factor graph. Each call adds one constraint connecting one or more variables.
§Arguments
- variable_key_size_list: Names of the variables this factor connects (order matters)
- factor: The factor implementation that computes residuals and Jacobians
- loss_func: Optional robust loss function for outlier rejection
§Returns
The unique ID assigned to this residual block
§Example
use apex_solver::core::problem::Problem;
use apex_solver::factors::{BetweenFactor, PriorFactor};
use apex_solver::core::loss_functions::HuberLoss;
use nalgebra::dvector;
use apex_solver::manifold::se2::SE2;
let mut problem = Problem::new();
// Add prior factor (unary constraint)
let prior = Box::new(PriorFactor { data: dvector![0.0, 0.0, 0.0] });
let id1 = problem.add_residual_block(&["x0"], prior, None);
// Add between factor with robust loss (binary constraint)
let between = Box::new(BetweenFactor::new(SE2::from_xy_angle(1.0, 0.0, 0.1)));
let loss: Option<Box<dyn apex_solver::core::loss_functions::LossFunction + Send>> =
Some(Box::new(HuberLoss::new(1.0)?));
let id2 = problem.add_residual_block(&["x0", "x1"], between, loss);
assert_eq!(id1, 0);
assert_eq!(id2, 1);
assert_eq!(problem.num_residual_blocks(), 2);
pub fn remove_residual_block(&mut self, block_id: usize) -> Option<ResidualBlock>
pub fn fix_variable(&mut self, var_to_fix: &str, idx: usize)
pub fn unfix_variable(&mut self, var_to_unfix: &str)
pub fn set_variable_bounds( &mut self, var_to_bound: &str, idx: usize, lower_bound: f64, upper_bound: f64, )
pub fn remove_variable_bounds(&mut self, var_to_unbound: &str)
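The constraint methods above record which DOFs are held fixed and which are box-bounded. A minimal standalone sketch of that semantics (plain std, not the library's internal update code; apply_update is a hypothetical helper): fixed indices ignore the update entirely, bounded indices are clamped into [lower, upper].

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical helper illustrating how fix_variable() / set_variable_bounds()
// constrain a per-variable update step.
fn apply_update(
    value: &mut Vec<f64>,
    delta: &[f64],
    fixed: &HashSet<usize>,
    bounds: &HashMap<usize, (f64, f64)>,
) {
    for (i, v) in value.iter_mut().enumerate() {
        if fixed.contains(&i) {
            continue; // fixed index: this DOF never moves
        }
        *v += delta[i];
        if let Some(&(lo, hi)) = bounds.get(&i) {
            *v = v.clamp(lo, hi); // box constraint on this DOF
        }
    }
}

fn main() {
    let mut pose = vec![0.0, 0.0, 0.0]; // [x, y, theta]
    let fixed: HashSet<usize> = [0].into_iter().collect(); // fix x
    let mut bounds = HashMap::new();
    bounds.insert(2, (-0.5, 0.5)); // bound theta
    apply_update(&mut pose, &[1.0, 1.0, 2.0], &fixed, &bounds);
    assert_eq!(pose, vec![0.0, 1.0, 0.5]); // x untouched, theta clamped
    println!("{:?}", pose);
}
```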
pub fn initialize_variables(
    &self,
    initial_values: &HashMap<String, (ManifoldType, DVector<f64>)>,
) -> HashMap<String, VariableEnum>
Initialize variables from initial values with manifold types.
Converts raw initial values into typed Variable<M> instances wrapped in VariableEnum.
This method also applies any fixed indices or bounds that were set via fix_variable()
or set_variable_bounds().
§Arguments
- initial_values: Map from variable name to (manifold type, initial value vector)
§Returns
Map from variable name to VariableEnum (typed variables ready for optimization)
§Manifold Formats
- SE2: [x, y, theta] (3 elements)
- SE3: [tx, ty, tz, qw, qx, qy, qz] (7 elements)
- SO2: [theta] (1 element)
- SO3: [qw, qx, qy, qz] (4 elements)
- Rn: [x1, x2, ..., xn] (n elements)
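A quick standalone check of those layouts using the identity element of each manifold; note the qw-first quaternion ordering for SE3/SO3 (the arrays below are illustrative raw vectors, not library types).

```rust
// Norm of a quaternion stored as [qw, qx, qy, qz].
fn quat_norm(q: &[f64]) -> f64 {
    q.iter().map(|c| c * c).sum::<f64>().sqrt()
}

fn main() {
    let se2 = [0.0_f64, 0.0, 0.0];                     // [x, y, theta]
    let se3 = [0.0_f64, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]; // [tx, ty, tz, qw, qx, qy, qz]
    let so3 = [1.0_f64, 0.0, 0.0, 0.0];                // [qw, qx, qy, qz]
    // Rotation parts must be unit quaternions.
    assert!((quat_norm(&se3[3..]) - 1.0).abs() < 1e-12);
    assert!((quat_norm(&so3) - 1.0).abs() < 1e-12);
    assert_eq!(se2.len(), 3);
    println!("layouts ok");
}
```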
§Example
use apex_solver::core::problem::Problem;
use apex_solver::manifold::ManifoldType;
use nalgebra::dvector;
use std::collections::HashMap;
let problem = Problem::new();
let mut initial = HashMap::new();
initial.insert("pose0".to_string(), (ManifoldType::SE2, dvector![0.0, 0.0, 0.0]));
initial.insert("pose1".to_string(), (ManifoldType::SE2, dvector![1.0, 0.0, 0.1]));
initial.insert("landmark".to_string(), (ManifoldType::RN, dvector![5.0, 3.0]));
let variables = problem.initialize_variables(&initial);
assert_eq!(variables.len(), 3);
pub fn num_residual_blocks(&self) -> usize
Get the number of residual blocks
pub fn build_symbolic_structure(
    &self,
    variables: &HashMap<String, VariableEnum>,
    variable_index_sparce_matrix: &HashMap<String, usize>,
    total_dof: usize,
) -> ApexSolverResult<SymbolicStructure>
Build symbolic structure for sparse Jacobian computation
This method constructs the sparsity pattern of the Jacobian matrix before numerical computation. It determines which entries in the Jacobian will be non-zero based on the structure of the optimization problem (which residual blocks connect which variables).
§Purpose
- Pre-allocates memory for sparse matrix operations
- Enables efficient sparse linear algebra (avoiding dense operations)
- Computed once at the beginning, used throughout optimization
§Arguments
- variables: Map of variable names to their values and properties (SE2, SE3, etc.)
- variable_index_sparce_matrix: Map from variable name to starting column index in the Jacobian
- total_dof: Total degrees of freedom (number of columns in the Jacobian)
§Returns
A SymbolicStructure containing:
- pattern: The symbolic sparse column matrix structure (row/col indices of non-zeros)
- order: An ordering/permutation for efficient numerical computation
§Algorithm
For each residual block:
- Identify which variables it depends on
- For each (residual_dimension × variable_dof) block, mark entries as non-zero
- Convert to optimized sparse matrix representation
§Example Structure
For a simple problem with 3 SE2 poses (9 DOF total):
- Between(x0, x1): Creates 3×6 block at rows 0-2, cols 0-5
- Between(x1, x2): Creates 3×6 block at rows 3-5, cols 3-8
- Prior(x0): Creates 3×3 block at rows 6-8, cols 0-2
Result: 9×9 sparse Jacobian with 45 non-zero entries
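The algorithm and the 3-pose example above can be sketched in plain std Rust (this is an illustration, not the library's internal representation): each residual block marks a dense (residual_dim × variable_dof) sub-block per connected variable, reproducing the 45 non-zeros counted above.

```rust
// Mark every entry of one (rdim x dof) Jacobian sub-block as structurally non-zero.
fn mark_block(nnz: &mut Vec<(usize, usize)>, row0: usize, rdim: usize, col0: usize, dof: usize) {
    for r in 0..rdim {
        for c in 0..dof {
            nnz.push((row0 + r, col0 + c));
        }
    }
}

// Sparsity pattern for the 3-pose SE2 example (9x9 Jacobian, 3 DOF per pose).
fn example_pattern() -> Vec<(usize, usize)> {
    let mut nnz = Vec::new();
    mark_block(&mut nnz, 0, 3, 0, 3); // Between(x0, x1): x0 columns
    mark_block(&mut nnz, 0, 3, 3, 3); // Between(x0, x1): x1 columns
    mark_block(&mut nnz, 3, 3, 3, 3); // Between(x1, x2): x1 columns
    mark_block(&mut nnz, 3, 3, 6, 3); // Between(x1, x2): x2 columns
    mark_block(&mut nnz, 6, 3, 0, 3); // Prior(x0): x0 columns
    nnz
}

fn main() {
    let nnz = example_pattern();
    assert_eq!(nnz.len(), 45); // matches the count stated above
    println!("non-zeros: {}", nnz.len());
}
```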
pub fn compute_residual_sparse(
    &self,
    variables: &HashMap<String, VariableEnum>,
) -> ApexSolverResult<Mat<f64>>
Compute only the residual vector for the current variable values.
This is an optimized version that skips Jacobian computation when only the cost function value is needed (e.g., during initialization or step evaluation).
§Arguments
- variables: Current variable values (from initialize_variables() or updated)
§Returns
Residual vector as N×1 column matrix (N = total residual dimension)
§Performance
Approximately 2x faster than compute_residual_and_jacobian_sparse() since it:
- Skips Jacobian computation for each residual block
- Avoids Jacobian matrix assembly and storage
- Only parallelizes residual evaluation
§When to Use
- Initial cost computation: When setting up optimization state
- Step evaluation: When computing new cost after applying parameter updates
- Cost-only queries: When you don’t need gradients
Use compute_residual_and_jacobian_sparse() when you need both residual and Jacobian
(e.g., in the main optimization iteration loop for linearization).
§Example
// Initial cost evaluation (no Jacobian needed)
let residual = problem.compute_residual_sparse(&variables)?;
let initial_cost = residual.norm_l2() * residual.norm_l2();
pub fn compute_residual_and_jacobian_sparse(
    &self,
    variables: &HashMap<String, VariableEnum>,
    variable_index_sparce_matrix: &HashMap<String, usize>,
    symbolic_structure: &SymbolicStructure,
) -> ApexSolverResult<(Mat<f64>, SparseColMat<usize, f64>)>
Compute residual vector and sparse Jacobian matrix for the current variable values.
This is the core linearization method called during each optimization iteration. It:
- Evaluates all residual blocks in parallel using rayon
- Assembles the full residual vector
- Constructs the sparse Jacobian matrix using the precomputed symbolic structure
§Arguments
- variables: Current variable values (from initialize_variables() or updated)
- variable_index_sparce_matrix: Map from variable name to starting column in the Jacobian
- symbolic_structure: Precomputed sparsity pattern (from build_symbolic_structure())
§Returns
Tuple (residual, jacobian) where:
- residual: N×1 column matrix (total residual dimension)
- jacobian: N×M sparse matrix (N = residual dim, M = total DOF)
§Performance
This method is highly optimized:
- Parallel evaluation: Each residual block is evaluated independently using rayon
- Sparse storage: Only non-zero Jacobian entries are stored and computed
- Memory efficient: Preallocated sparse structure avoids dynamic allocations
Typically accounts for 40-60% of total optimization time (including sparse matrix ops).
§When to Use
Use this method in the main optimization loop when you need both residual and Jacobian
for linearization. For cost-only evaluation, use compute_residual_sparse() instead.
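In the optimizer loop, the returned pair typically feeds the Gauss-Newton normal equations J^T J dx = -J^T r. A standalone illustration on a tiny dense 2×2 system (plain std; the real solver works on the sparse Jacobian instead):

```rust
// Form H = J^T J and g = -J^T r, then solve H dx = g by Cramer's rule
// (adequate at 2x2; real problems use a sparse linear solver).
fn gauss_newton_step(j: [[f64; 2]; 2], r: [f64; 2]) -> [f64; 2] {
    let mut h = [[0.0; 2]; 2];
    let mut g = [0.0; 2];
    for k in 0..2 {
        for a in 0..2 {
            g[a] -= j[k][a] * r[k];
            for b in 0..2 {
                h[a][b] += j[k][a] * j[k][b];
            }
        }
    }
    let det = h[0][0] * h[1][1] - h[0][1] * h[1][0];
    [
        (g[0] * h[1][1] - h[0][1] * g[1]) / det,
        (h[0][0] * g[1] - g[0] * h[1][0]) / det,
    ]
}

fn main() {
    // Diagonal J makes the answer easy to verify by hand: dx = [-2, 3].
    let dx = gauss_newton_step([[2.0, 0.0], [0.0, 1.0]], [4.0, -3.0]);
    assert!((dx[0] + 2.0).abs() < 1e-12 && (dx[1] - 3.0).abs() < 1e-12);
    println!("dx = {:?}", dx);
}
```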
§Example
// Inside optimizer loop, compute both residual and Jacobian for linearization
// let (residual, jacobian) = problem.compute_residual_and_jacobian_sparse(
// &variables,
// &variable_index_map,
// &symbolic_structure,
// )?;
//
// Use for linear system: J^T J dx = -J^T r
pub fn log_residual_to_file(
    &self,
    residual: &DVector<f64>,
    filename: &str,
) -> Result<(), Error>
Log residual vector to a text file
pub fn log_sparse_jacobian_to_file(
    &self,
    jacobian: &SparseColMat<usize, f64>,
    filename: &str,
) -> Result<(), Error>
Log sparse Jacobian matrix to a text file
pub fn log_variables_to_file(
    &self,
    variables: &HashMap<String, VariableEnum>,
    filename: &str,
) -> Result<(), Error>
Log variables to a text file
pub fn compute_and_set_covariances(
    &self,
    linear_solver: &mut Box<dyn SparseLinearSolver>,
    variables: &mut HashMap<String, VariableEnum>,
    variable_index_map: &HashMap<String, usize>,
) -> Option<HashMap<String, Mat<f64>>>
Compute per-variable covariances and set them in Variable objects
This method computes the full covariance matrix by inverting the Hessian from the linear solver, then extracts per-variable covariance blocks and stores them in the corresponding Variable objects.
§Arguments
- linear_solver: Mutable reference to the linear solver containing the cached Hessian
- variables: Mutable map of variables where covariances will be stored
- variable_index_map: Map from variable names to their starting column indices
§Returns
Some(HashMap) containing per-variable covariance matrices if successful, None otherwise
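A standalone sketch of that recovery (plain std, not the library's code): invert the Hessian, then slice out the diagonal block starting at a variable's column index from variable_index_map. Here with two 1-DOF variables so the block is a scalar.

```rust
// Invert a 2x2 symmetric positive-definite Hessian.
fn invert_2x2(h: [[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let det = h[0][0] * h[1][1] - h[0][1] * h[1][0];
    [
        [h[1][1] / det, -h[0][1] / det],
        [-h[1][0] / det, h[0][0] / det],
    ]
}

fn main() {
    // H = diag(4, 0.25): variable 0 is well constrained, variable 1 is not.
    let cov = invert_2x2([[4.0, 0.0], [0.0, 0.25]]);
    // The per-variable block for the variable starting at column 1 is cov[1][1].
    assert!((cov[1][1] - 4.0).abs() < 1e-12); // large covariance = high uncertainty
    println!("cov(x1) = {}", cov[1][1]);
}
```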
Trait Implementations§
Auto Trait Implementations§
impl Freeze for Problem
impl !RefUnwindSafe for Problem
impl Send for Problem
impl Sync for Problem
impl Unpin for Problem
impl !UnwindSafe for Problem
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> DistributionExt for T where T: ?Sized
impl<T> Instrument for T
impl<T> IntoEither for T
impl<T> Pointable for T
impl<SS, SP> SupersetOf<SS> for SP where SS: SubsetOf<SP>