Crate opt

Dense nonlinear optimization solvers in Rust.

This crate provides:

  • Problem + optimize: the default, hard-to-misuse entry point with automatic solver selection.
  • SecondOrderProblem + optimize: automatic solver selection for Hessian-aware objectives.
  • Bfgs: dense quasi-Newton optimization with robust hybrid line search.
  • NewtonTrustRegion: Hessian-based trust-region optimization.
  • Arc: Adaptive Regularization with Cubics (ARC).

All solvers support optional simple box constraints and are built around practical robustness for noisy/non-ideal objectives.
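Box handling in solvers of this kind typically reduces to two operations: coordinate clamping onto the box, and a projected gradient for the stopping test. A minimal self-contained sketch of both (illustrative only, not this crate's internal code or types):

```rust
/// Clamp each coordinate of `x` into `[lo, hi]` (coordinate clamping).
fn project(x: &[f64], lo: &[f64], hi: &[f64]) -> Vec<f64> {
    x.iter()
        .zip(lo.iter().zip(hi.iter()))
        .map(|(&xi, (&l, &h))| xi.clamp(l, h))
        .collect()
}

/// Projected gradient: zero components whose descent direction is blocked
/// by an active bound, so the stopping test sees a stationary coordinate.
fn projected_grad(x: &[f64], g: &[f64], lo: &[f64], hi: &[f64]) -> Vec<f64> {
    x.iter()
        .zip(g.iter())
        .zip(lo.iter().zip(hi.iter()))
        .map(|((&xi, &gi), (&l, &h))| {
            if (xi <= l && gi > 0.0) || (xi >= h && gi < 0.0) {
                0.0 // stepping along -g would leave the box
            } else {
                gi
            }
        })
        .collect()
}

fn main() {
    let x = project(&[-2.0, 0.5, 3.0], &[0.0, 0.0, 0.0], &[1.0, 1.0, 1.0]);
    println!("{:?}", x); // [0.0, 0.5, 1.0]
}
```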

§Features

  • Bfgs hybrid line search: Strong Wolfe with nonmonotone (GLL) Armijo, approximate-Wolfe, and gradient-reduction acceptors, plus a best-seen salvage path and a small probing grid.
  • Bfgs trust-region (dogleg) fallback with CG-based solves on the inverse Hessian, diagonal regularization, and scaled-identity resets under severe noise.
  • NewtonTrustRegion: projected Steihaug-Toint trust-region iterations using objective Hessians.
  • Arc: cubic-regularized model steps with adaptive regularization updates (rho, sigma).
  • Profile-based heuristic policy selection for rough, piecewise-flat objectives.
  • Adaptive strategy switching (Wolfe <-> Backtracking) based on success streaks (no timed flips).
  • Optional box constraints with projected gradients and coordinate clamping.
  • Optional flat-bracket midpoint acceptance inside zoom.
  • Stochastic jiggling of step sizes on persistent flats.
  • Multi-direction (coordinate) rescue when progress is flat.
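The nonmonotone (GLL) Armijo acceptor mentioned above compares a trial value against the maximum of the last few accepted values rather than only the current one, which tolerates small uphill moves on noisy objectives. A self-contained sketch of the acceptance test (illustrative, not the crate's implementation):

```rust
/// GLL nonmonotone Armijo: accept step `alpha` along a descent direction if
/// f(x + alpha d) <= max(recent accepted f) + c1 * alpha * (g . d).
fn gll_accept(f_trial: f64, recent_f: &[f64], c1: f64, alpha: f64, gtd: f64) -> bool {
    let f_ref = recent_f.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    f_trial <= f_ref + c1 * alpha * gtd
}

fn main() {
    // Recent accepted values; gtd < 0 because d is a descent direction.
    let history = [1.0, 0.8, 0.95];
    // A trial slightly above the latest value (0.95 -> 0.97) is still
    // accepted: it lies below the nonmonotone reference max(history) = 1.0.
    assert!(gll_accept(0.97, &history, 1e-4, 1.0, -0.5));
    // A plain monotone Armijo test against 0.95 alone would reject it.
    assert!(!gll_accept(0.97, &history[2..], 1e-4, 1.0, -0.5));
    println!("ok");
}
```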

§Defaults (key settings)

  • Line search: Strong Wolfe primary; GLL nonmonotone Armijo; approximate‑Wolfe and gradient‑drop acceptors; probing grid; keep‑best salvage.
  • Trust region: dogleg fallback enabled; Δ₀ = min(1, 10/||g₀||); adaptive by ρ; SPD enforcement and scaled‑identity resets when needed.
  • Line-search parameters: c1 = 1e-4, c2 = 0.9; remaining heuristics selected by Profile.
  • Zoom midpoint: flat‑bracket midpoint acceptance under profile control.
  • Stochastic jiggling: default ON with scale 1e‑3 (only after repeated flats in backtracking).
  • Coordinate rescue: default ON (only after two consecutive flat accepts).
  • Strategy switching: switch between Wolfe and Backtracking only on success/failure streaks (no timed flips).
  • A clear, configurable builder API and robust termination with informative errors.
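Two of the defaults above are easy to make concrete: the initial radius Δ₀ = min(1, 10/||g₀||), and a ρ-driven radius update. A self-contained sketch (the update thresholds here are conventional illustrative values, not necessarily the crate's exact ones):

```rust
/// Initial trust-region radius Δ0 = min(1, 10 / ||g0||), as in the defaults above.
fn initial_radius(g0_norm: f64) -> f64 {
    (10.0 / g0_norm).min(1.0)
}

/// A typical ρ-driven update: shrink when the model agrees poorly with the
/// objective, expand when agreement is very good and the step hit the boundary.
fn update_radius(delta: f64, rho: f64, step_norm: f64) -> f64 {
    if rho < 0.25 {
        0.25 * delta
    } else if rho > 0.75 && (step_norm - delta).abs() < 1e-12 {
        (2.0 * delta).min(1e6)
    } else {
        delta
    }
}

fn main() {
    assert_eq!(initial_radius(100.0), 0.1); // large gradient => small initial radius
    assert_eq!(initial_radius(5.0), 1.0);   // capped at 1
    assert_eq!(update_radius(1.0, 0.1, 1.0), 0.25); // poor agreement: shrink
    println!("ok");
}
```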

§Example

Minimize the Rosenbrock function, a classic test case for optimization algorithms.

use opt::{
    optimize, FirstOrderObjective, FirstOrderSample, MaxIterations, Problem, Profile, Solution,
    Tolerance,
};
use ndarray::{array, Array1};

struct Rosenbrock;

impl opt::ZerothOrderObjective for Rosenbrock {
    fn eval_cost(&mut self, x: &Array1<f64>) -> Result<f64, opt::ObjectiveEvalError> {
        let a = 1.0;
        let b = 100.0;
        Ok((a - x[0]).powi(2) + b * (x[1] - x[0].powi(2)).powi(2))
    }
}

impl FirstOrderObjective for Rosenbrock {
    fn eval_grad(&mut self, x: &Array1<f64>) -> Result<FirstOrderSample, opt::ObjectiveEvalError> {
        let a = 1.0;
        let b = 100.0;
        let f = (a - x[0]).powi(2) + b * (x[1] - x[0].powi(2)).powi(2);
        let gradient = array![
            -2.0 * (a - x[0]) - 4.0 * b * (x[1] - x[0].powi(2)) * x[0],
            2.0 * b * (x[1] - x[0].powi(2)),
        ];
        Ok(FirstOrderSample { value: f, gradient })
    }
}

// Set the initial guess.
let x0 = array![-1.2, 1.0];

// Run the solver.
let Solution {
    final_point: x_min,
    final_value,
    iterations,
    ..
} = optimize(Problem::new(x0, Rosenbrock))
    .with_tolerance(Tolerance::new(1e-6).unwrap())
    .with_max_iterations(MaxIterations::new(100).unwrap())
    .with_profile(Profile::Robust)
    .run()
    .expect("BFGS failed to solve");

println!(
    "Found minimum f([{:.3}, {:.3}]) = {:.4} in {} iterations.",
    x_min[0], x_min[1], final_value, iterations
);

// The known minimum is at [1.0, 1.0].
assert!((x_min[0] - 1.0).abs() < 1e-5);
assert!((x_min[1] - 1.0).abs() < 1e-5);

Structs§

Arc
A configurable Adaptive Regularization with Cubics (ARC) solver.
Bfgs
A configurable BFGS solver.
Bounds
FiniteDiffGradient
FirstOrderSample
FirstOrderWorkspace
Reusable scratch buffers for first-order evaluation. Allows a caller to amortize per-call allocations of gradient and value across many evaluations.
FixedPoint
FixedPointSample
GradientTolerance
Stopping criterion for the projected gradient norm. Replaces the scalar Tolerance for callers that need a scale-aware stop.
IterationInfo
MatrixFreeTrustRegion
Matrix-free Newton trust-region solver. Uses Steihaug-Toint truncated CG with Hessian-vector products supplied by the objective’s HessianOperator.
MaxIterations
NewtonTrustRegion
OperatorSample
A sample carrying value, gradient, and a HessianValue.
OptimizationDiagnostics
Counters and final-state values that the bare Solution does not expose. Useful for retry decisions: final_trust_radius warm-starts a follow-up Newton/ARC call; accepted_steps distinguishes “no progress at all” from “progress but ran out of budget”; fallback_used flags a silent BFGS demotion.
OptimizationReport
Structured solver outcome returned by run_report(). Pairs the final Solution with a status and diagnostics so callers can make retry decisions without inspecting solver-specific error variants.
Problem
SecondOrderProblem
SecondOrderSample
SecondOrderWorkspace
Reusable scratch buffers for second-order evaluation. Same idea as FirstOrderWorkspace but also includes a hessian array.
Solution
A summary of a successful solver run.
StepInfo
SymmetricHessianMut
Tolerance
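For reference, the dogleg step used as the Bfgs trust-region fallback interpolates between the steepest-descent (Cauchy) point and the full quasi-Newton step when the latter leaves the trust region. A minimal self-contained sketch with a diagonal Hessian (illustrative only, not this crate's CG-based implementation):

```rust
fn norm(v: &[f64]) -> f64 { v.iter().map(|x| x * x).sum::<f64>().sqrt() }

/// Dogleg step for the model m(p) = g.p + 0.5 p.H p with diagonal H and radius `delta`.
fn dogleg(g: &[f64], h_diag: &[f64], delta: f64) -> Vec<f64> {
    // Full (quasi-)Newton step: p_n = -H^{-1} g.
    let p_n: Vec<f64> = g.iter().zip(h_diag).map(|(&gi, &hi)| -gi / hi).collect();
    if norm(&p_n) <= delta {
        return p_n; // Newton step lies inside the region: take it.
    }
    // Cauchy point: minimizer of the model along -g.
    let gg: f64 = g.iter().map(|x| x * x).sum();
    let ghg: f64 = g.iter().zip(h_diag).map(|(&gi, &hi)| gi * gi * hi).sum();
    let p_c: Vec<f64> = g.iter().map(|&gi| -(gg / ghg) * gi).collect();
    let pc_norm = norm(&p_c);
    if pc_norm >= delta {
        // Even the Cauchy point is outside: scale steepest descent to the boundary.
        return g.iter().map(|&gi| -delta * gi / norm(g)).collect();
    }
    // Walk from p_c toward p_n; stop where ||p_c + t (p_n - p_c)|| = delta.
    let d: Vec<f64> = p_n.iter().zip(&p_c).map(|(&a, &b)| a - b).collect();
    let a = d.iter().map(|x| x * x).sum::<f64>();
    let b = 2.0 * p_c.iter().zip(&d).map(|(&x, &y)| x * y).sum::<f64>();
    let c = pc_norm * pc_norm - delta * delta;
    let t = (-b + (b * b - 4.0 * a * c).sqrt()) / (2.0 * a);
    p_c.iter().zip(&d).map(|(&x, &y)| x + t * y).collect()
}

fn main() {
    // With a generous radius the full Newton step is returned unchanged.
    let p = dogleg(&[2.0, -4.0], &[1.0, 2.0], 10.0);
    println!("{:?}", p); // [-2.0, 2.0]
}
```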

Enums§

ArcError
AutoSecondOrderError
AutoSecondOrderSolver
BfgsError
The error type returned by Bfgs, designed for clear diagnostics.
BoundsError
ConfigError
FallbackPolicy
Policy controlling whether NewtonTrustRegion / Arc may demote to a first-order BFGS fallback when the second-order step fails to make progress (line-search failure, persistent trust-region rejection).
FixedPointError
FixedPointStatus
HessianFallbackPolicy
What NewtonTrustRegion / Arc should do when an objective returns SecondOrderSample { hessian: None } (i.e. no analytic Hessian was supplied for this evaluation).
HessianMaterialization
How (and whether) a HessianOperator can produce a dense materialized Hessian. Reported by HessianOperator::materialization() so a trust-region or ARC solver can decide between calling materialize_dense once and falling back to repeated Hessian-vector products through apply_into.
HessianValue
Explicit Hessian payload exposed alongside SecondOrderSample.hessian for callers that want to declare exact-Hessian intent without going through Option<Array2<f64>>.
InitialMetric
How to seed the BFGS inverse-Hessian approximation H_0^{-1}. Replaces (and supersedes) the previous with_initial_inverse_hessian approach by giving callers a clean choice between scaled-identity resets and full dense seeds.
LineSearchFailureReason
MatrixError
MatrixFreeTrustRegionError
NewtonTrustRegionError
ObjectiveEvalError
OptimizationStatus
Outcome category for an optimizer run, distinct from the underlying Result<Solution, _> so callers can dispatch on convergence vs. budget exhaustion vs. numerical failure without pattern-matching solver-specific error variants.
Profile
StationarityKind
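The rationale given for OptimizationStatus is that callers can dispatch on the outcome category instead of solver-specific error variants. A self-contained sketch of that pattern, with illustrative names (not this crate's actual variants):

```rust
/// Outcome categories in the spirit of OptimizationStatus (names illustrative).
#[derive(Debug, PartialEq)]
enum Status {
    Converged,
    BudgetExhausted,
    NumericalFailure,
}

/// A retry policy driven by the category alone: no matching on
/// solver-specific error enums is needed.
fn should_retry(status: &Status, retries_left: u32) -> bool {
    match status {
        Status::Converged => false,
        Status::BudgetExhausted => retries_left > 0, // extend budget and rerun
        Status::NumericalFailure => false,           // rerunning will not help
    }
}

fn main() {
    assert!(should_retry(&Status::BudgetExhausted, 1));
    assert!(!should_retry(&Status::Converged, 1));
    println!("ok");
}
```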

Traits§

BatchZerothOrderObjective
An objective that can evaluate the cost at a batch of candidate points in one call. Solvers that perform parallelizable speculative trials (line-search probing, multi-start exploration) can use this to amortize fixed setup cost (one P-IRLS prep, one Cholesky factor, etc.) across multiple candidates.
FirstOrderObjective
FirstOrderObjectiveInto
First-order objective trait that writes into a caller-supplied workspace instead of returning a fresh FirstOrderSample. Useful when many evaluations happen and per-call allocation dominates.
FixedPointObjective
HessianOperator
An exact analytic Hessian-vector product (and optional materialization).
OperatorObjective
An objective that exposes its Hessian as a HessianValue rather than a dense Option<Array2<f64>>. MatrixFreeTrustRegion uses this shape so it can drive Hv-only Hessians without materializing them. Callers that already have a dense Hessian should still implement this trait by returning HessianValue::Dense(_) — the dense path is handled internally as a degenerate operator.
OptimizerObserver
Accepted-vs-trial / start-of-iter signals from a running solver. Parallel to typical optimizer observer hooks; intentionally minimal so individual solvers’ wiring is local. Each solver fires whichever hooks make sense for its algorithm; default (no-op) implementations keep solvers free to add hooks without breaking existing observers.
SecondOrderObjective
SecondOrderObjectiveInto
Second-order objective trait that writes into a caller-supplied workspace. Default impl wraps SecondOrderObjective::eval_hessian.
ZerothOrderObjective
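The write-into pattern described for FirstOrderObjectiveInto and the workspaces amortizes per-call allocation by reusing caller-owned buffers across many evaluations. A generic self-contained sketch of the idea, using stand-in types rather than this crate's traits:

```rust
/// A caller-owned scratch buffer reused across evaluations (cf. FirstOrderWorkspace).
struct Workspace {
    value: f64,
    gradient: Vec<f64>,
}

/// Write-into evaluation: no per-call allocation (cf. FirstOrderObjectiveInto).
trait EvalInto {
    fn eval_into(&mut self, x: &[f64], ws: &mut Workspace);
}

struct Quadratic; // f(x) = 0.5 ||x||^2, gradient = x

impl EvalInto for Quadratic {
    fn eval_into(&mut self, x: &[f64], ws: &mut Workspace) {
        ws.value = 0.5 * x.iter().map(|v| v * v).sum::<f64>();
        ws.gradient.clear();
        ws.gradient.extend_from_slice(x); // reuses the buffer's existing capacity
    }
}

fn main() {
    let mut obj = Quadratic;
    let mut ws = Workspace { value: 0.0, gradient: Vec::with_capacity(2) };
    for step in 0..3 {
        let x = [step as f64, 1.0];
        obj.eval_into(&x, &mut ws); // same buffers on every call
    }
    println!("{} {:?}", ws.value, ws.gradient); // 2.5 [2.0, 1.0]
}
```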

Functions§

optimize