Struct Problem 

pub struct Problem {
    pub total_residual_dimension: usize,
    pub fixed_variable_indexes: HashMap<String, HashSet<usize>>,
    pub variable_bounds: HashMap<String, HashMap<usize, (f64, f64)>>,
    /* private fields */
}

The optimization problem definition for factor graph optimization.

Manages residual blocks (factors/constraints), variables, and the sparse Jacobian structure. Supports mixed manifold types (SE2, SE3, SO2, SO3, Rn) in a single problem and provides efficient parallel residual/Jacobian computation.

§Architecture

The Problem acts as a container and coordinator:

  • Stores all residual blocks (factors with optional loss functions)
  • Tracks the global structure (which variables connect to which factors)
  • Builds and maintains the sparse Jacobian pattern
  • Provides parallel residual/Jacobian evaluation using rayon
  • Manages variable constraints (fixed indices, bounds)

§Workflow

  1. Construction: Create a new Problem with Problem::new()
  2. Add Factors: Use add_residual_block() to add constraints
  3. Initialize Variables: Use initialize_variables() with initial values
  4. Build Sparsity: Use build_symbolic_structure() once before optimization
  5. Linearize: Call compute_residual_and_jacobian_sparse() each iteration
  6. Extract Covariance: Use compute_and_set_covariances() after convergence

§Example

use apex_solver::core::problem::Problem;
use apex_solver::factors::BetweenFactor;
use apex_solver::manifold::ManifoldType;
use apex_solver::manifold::se2::SE2;
use nalgebra::dvector;
use std::collections::HashMap;

let mut problem = Problem::new();

// Add a between factor
let factor = Box::new(BetweenFactor::new(SE2::from_xy_angle(1.0, 0.0, 0.1)));
problem.add_residual_block(&["x0", "x1"], factor, None);

// Initialize variables
let mut initial = HashMap::new();
initial.insert("x0".to_string(), (ManifoldType::SE2, dvector![0.0, 0.0, 0.0]));
initial.insert("x1".to_string(), (ManifoldType::SE2, dvector![1.0, 0.0, 0.1]));

let variables = problem.initialize_variables(&initial);
assert_eq!(variables.len(), 2);

Fields§

§total_residual_dimension: usize

Total dimension of the stacked residual vector (sum of all residual block dimensions)

§fixed_variable_indexes: HashMap<String, HashSet<usize>>

Variables with fixed indices (e.g., fix the first pose's x, y coordinates). Maps variable name -> set of indices to fix

§variable_bounds: HashMap<String, HashMap<usize, (f64, f64)>>

Variable bounds (box constraints on individual DOF). Maps variable name -> (index -> (lower_bound, upper_bound))

Implementations§

impl Problem

pub fn new() -> Self

Create a new empty optimization problem.

§Returns

A new Problem with no residual blocks or variables

§Example
use apex_solver::core::problem::Problem;

let problem = Problem::new();
assert_eq!(problem.num_residual_blocks(), 0);
assert_eq!(problem.total_residual_dimension, 0);

pub fn add_residual_block( &mut self, variable_key_size_list: &[&str], factor: Box<dyn Factor + Send>, loss_func: Option<Box<dyn LossFunction + Send>>, ) -> usize

Add a residual block (factor with optional loss function) to the problem.

This is the primary method for building the factor graph. Each call adds one constraint connecting one or more variables.

§Arguments
  • variable_key_size_list - Names of the variables this factor connects (order matters)
  • factor - The factor implementation that computes residuals and Jacobians
  • loss_func - Optional robust loss function for outlier rejection
§Returns

The unique ID assigned to this residual block

§Example
use apex_solver::core::problem::Problem;
use apex_solver::factors::{BetweenFactor, PriorFactor};
use apex_solver::core::loss_functions::HuberLoss;
use nalgebra::dvector;
use apex_solver::manifold::se2::SE2;

let mut problem = Problem::new();

// Add prior factor (unary constraint)
let prior = Box::new(PriorFactor { data: dvector![0.0, 0.0, 0.0] });
let id1 = problem.add_residual_block(&["x0"], prior, None);

// Add between factor with robust loss (binary constraint)
let between = Box::new(BetweenFactor::new(SE2::from_xy_angle(1.0, 0.0, 0.1)));
let loss: Option<Box<dyn apex_solver::core::loss_functions::LossFunction + Send>> =
    Some(Box::new(HuberLoss::new(1.0)?));
let id2 = problem.add_residual_block(&["x0", "x1"], between, loss);

assert_eq!(id1, 0);
assert_eq!(id2, 1);
assert_eq!(problem.num_residual_blocks(), 2);

pub fn remove_residual_block( &mut self, block_id: usize, ) -> Option<ResidualBlock>

pub fn fix_variable(&mut self, var_to_fix: &str, idx: usize)

pub fn unfix_variable(&mut self, var_to_unfix: &str)

pub fn set_variable_bounds( &mut self, var_to_bound: &str, idx: usize, lower_bound: f64, upper_bound: f64, )

pub fn remove_variable_bounds(&mut self, var_to_unbound: &str)

pub fn initialize_variables( &self, initial_values: &HashMap<String, (ManifoldType, DVector<f64>)>, ) -> HashMap<String, VariableEnum>

Initialize variables from initial values with manifold types.

Converts raw initial values into typed Variable<M> instances wrapped in VariableEnum. This method also applies any fixed indices or bounds that were set via fix_variable() or set_variable_bounds().

§Arguments
  • initial_values - Map from variable name to (manifold type, initial value vector)
§Returns

Map from variable name to VariableEnum (typed variables ready for optimization)

§Manifold Formats
  • SE2: [x, y, theta] (3 elements)
  • SE3: [tx, ty, tz, qw, qx, qy, qz] (7 elements)
  • SO2: [theta] (1 element)
  • SO3: [qw, qx, qy, qz] (4 elements)
  • Rn: [x1, x2, ..., xn] (n elements)
§Example
use apex_solver::core::problem::Problem;
use apex_solver::manifold::ManifoldType;
use nalgebra::dvector;
use std::collections::HashMap;

let problem = Problem::new();

let mut initial = HashMap::new();
initial.insert("pose0".to_string(), (ManifoldType::SE2, dvector![0.0, 0.0, 0.0]));
initial.insert("pose1".to_string(), (ManifoldType::SE2, dvector![1.0, 0.0, 0.1]));
initial.insert("landmark".to_string(), (ManifoldType::RN, dvector![5.0, 3.0]));

let variables = problem.initialize_variables(&initial);
assert_eq!(variables.len(), 3);

pub fn num_residual_blocks(&self) -> usize

Get the number of residual blocks

pub fn build_symbolic_structure( &self, variables: &HashMap<String, VariableEnum>, variable_index_sparce_matrix: &HashMap<String, usize>, total_dof: usize, ) -> ApexSolverResult<SymbolicStructure>

Build symbolic structure for sparse Jacobian computation

This method constructs the sparsity pattern of the Jacobian matrix before numerical computation. It determines which entries in the Jacobian will be non-zero based on the structure of the optimization problem (which residual blocks connect which variables).

§Purpose
  • Pre-allocates memory for sparse matrix operations
  • Enables efficient sparse linear algebra (avoiding dense operations)
  • Computed once at the beginning, used throughout optimization
§Arguments
  • variables - Map of variable names to their values and properties (SE2, SE3, etc.)
  • variable_index_sparce_matrix - Map from variable name to starting column index in Jacobian
  • total_dof - Total degrees of freedom (number of columns in Jacobian)
§Returns

A SymbolicStructure containing:

  • pattern: The symbolic sparse column matrix structure (row/col indices of non-zeros)
  • order: An ordering/permutation for efficient numerical computation
§Algorithm

For each residual block:

  1. Identify which variables it depends on
  2. For each (residual_dimension × variable_dof) block, mark entries as non-zero
  3. Convert to optimized sparse matrix representation
§Example Structure

For a simple problem with 3 SE2 poses (9 DOF total):

  • Between(x0, x1): Creates 3×6 block at rows 0-2, cols 0-5
  • Between(x1, x2): Creates 3×6 block at rows 3-5, cols 3-8
  • Prior(x0): Creates 3×3 block at rows 6-8, cols 0-2

Result: 9×9 sparse Jacobian with 45 non-zero entries

pub fn compute_residual_sparse( &self, variables: &HashMap<String, VariableEnum>, ) -> ApexSolverResult<Mat<f64>>

Compute only the residual vector for the current variable values.

This is an optimized version that skips Jacobian computation when only the cost function value is needed (e.g., during initialization or step evaluation).

§Arguments
  • variables - Current variable values (from initialize_variables() or updated)
§Returns

Residual vector as N×1 column matrix (N = total residual dimension)

§Performance

Approximately 2x faster than compute_residual_and_jacobian_sparse() since it:

  • Skips Jacobian computation for each residual block
  • Avoids Jacobian matrix assembly and storage
  • Only parallelizes residual evaluation
§When to Use
  • Initial cost computation: When setting up optimization state
  • Step evaluation: When computing new cost after applying parameter updates
  • Cost-only queries: When you don’t need gradients

Use compute_residual_and_jacobian_sparse() when you need both residual and Jacobian (e.g., in the main optimization iteration loop for linearization).

§Example
// Initial cost evaluation (no Jacobian needed)
let residual = problem.compute_residual_sparse(&variables)?;
let initial_cost = residual.norm_l2() * residual.norm_l2();

pub fn compute_residual_and_jacobian_sparse( &self, variables: &HashMap<String, VariableEnum>, variable_index_sparce_matrix: &HashMap<String, usize>, symbolic_structure: &SymbolicStructure, ) -> ApexSolverResult<(Mat<f64>, SparseColMat<usize, f64>)>

Compute residual vector and sparse Jacobian matrix for the current variable values.

This is the core linearization method called during each optimization iteration. It:

  1. Evaluates all residual blocks in parallel using rayon
  2. Assembles the full residual vector
  3. Constructs the sparse Jacobian matrix using the precomputed symbolic structure
§Arguments
  • variables - Current variable values (from initialize_variables() or updated)
  • variable_index_sparce_matrix - Map from variable name to starting column in Jacobian
  • symbolic_structure - Precomputed sparsity pattern (from build_symbolic_structure())
§Returns

Tuple (residual, jacobian) where:

  • residual: N×1 column matrix (total residual dimension)
  • jacobian: N×M sparse matrix (N = residual dim, M = total DOF)
§Performance

This method is highly optimized:

  • Parallel evaluation: Each residual block is evaluated independently using rayon
  • Sparse storage: Only non-zero Jacobian entries are stored and computed
  • Memory efficient: Preallocated sparse structure avoids dynamic allocations

Typically accounts for 40-60% of total optimization time (including sparse matrix ops).

§When to Use

Use this method in the main optimization loop when you need both residual and Jacobian for linearization. For cost-only evaluation, use compute_residual_sparse() instead.

§Example
// Inside optimizer loop, compute both residual and Jacobian for linearization
// let (residual, jacobian) = problem.compute_residual_and_jacobian_sparse(
//     &variables,
//     &variable_index_map,
//     &symbolic_structure,
// )?;
//
// Use for linear system: J^T J dx = -J^T r

pub fn log_residual_to_file( &self, residual: &DVector<f64>, filename: &str, ) -> Result<(), Error>

Log residual vector to a text file

pub fn log_sparse_jacobian_to_file( &self, jacobian: &SparseColMat<usize, f64>, filename: &str, ) -> Result<(), Error>

Log sparse Jacobian matrix to a text file

pub fn log_variables_to_file( &self, variables: &HashMap<String, VariableEnum>, filename: &str, ) -> Result<(), Error>

Log variables to a text file

pub fn compute_and_set_covariances( &self, linear_solver: &mut Box<dyn SparseLinearSolver>, variables: &mut HashMap<String, VariableEnum>, variable_index_map: &HashMap<String, usize>, ) -> Option<HashMap<String, Mat<f64>>>

Compute per-variable covariances and set them in Variable objects

This method computes the full covariance matrix by inverting the Hessian from the linear solver, then extracts per-variable covariance blocks and stores them in the corresponding Variable objects.

§Arguments
  • linear_solver - Mutable reference to the linear solver containing the cached Hessian
  • variables - Mutable map of variables where covariances will be stored
  • variable_index_map - Map from variable names to their starting column indices
§Returns

Some(HashMap) containing per-variable covariance matrices if successful, None otherwise

Trait Implementations§

impl Default for Problem

fn default() -> Self

Returns the “default value” for a type. Read more

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> ByRef<T> for T

fn by_ref(&self) -> &T

impl<T> DistributionExt for T
where T: ?Sized,

fn rand<T>(&self, rng: &mut (impl Rng + ?Sized)) -> T
where Self: Distribution<T>,

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

impl<T> Pointable for T

const ALIGN: usize

The alignment of the pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer. Read more

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more

impl<T> Same for T

type Output = T

Should always be Self

impl<SS, SP> SupersetOf<SS> for SP
where SS: SubsetOf<SP>,

fn to_subset(&self) -> Option<SS>

The inverse inclusion map: attempts to construct self from the equivalent element of its superset. Read more

fn is_in_subset(&self) -> bool

Checks if self is actually part of its subset T (and can be converted to it).

fn to_subset_unchecked(&self) -> SS

Use with care! Same as self.to_subset but without any property checks. Always succeeds.

fn from_subset(element: &SS) -> SP

The inclusion map: converts self to the equivalent element of its superset.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more