Dense nonlinear optimization solvers in Rust.
This crate provides:
- Problem + optimize: the default API for solver selection that is hard to misuse.
- SecondOrderProblem + optimize: automatic selection for Hessian-aware objectives.
- Bfgs: dense quasi-Newton optimization with a robust hybrid line search.
- NewtonTrustRegion: Hessian-based trust-region optimization.
- Arc: Adaptive Regularization with Cubics (ARC).
All solvers support optional simple box constraints and are built around practical robustness for noisy/non-ideal objectives.
§Features
- Bfgs hybrid line search: Strong Wolfe with nonmonotone (GLL) Armijo, approximate-Wolfe, and gradient-reduction acceptors, plus a best-seen salvage path and a small probing grid.
- Bfgs trust-region (dogleg) fallback with CG-based solves on the inverse Hessian, diagonal regularization, and scaled-identity resets under severe noise.
- NewtonTrustRegion: projected Steihaug-Toint trust-region iterations using objective Hessians.
- Arc: cubic-regularized model steps with adaptive regularization updates (rho, sigma).
- Profile-based heuristic policy selection for rough, piecewise-flat objectives.
- Adaptive strategy switching (Wolfe <-> Backtracking) based on success streaks (no timed flips).
- Optional box constraints with projected gradients and coordinate clamping.
- Optional flat-bracket midpoint acceptance inside zoom.
- Stochastic jiggling of step sizes on persistent flats.
- Multi-direction (coordinate) rescue when progress is flat.
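The box-constraint handling above rests on two standard building blocks: coordinate clamping into the box, and a projected gradient that ignores components pushing against an active bound. A minimal sketch of both (plain Rust on slices, not the crate's internal representation; the function names here are illustrative, not part of the API):

```rust
/// Project a point onto the box [lo, hi], coordinate-wise.
fn clamp_into_box(x: &[f64], lo: &[f64], hi: &[f64]) -> Vec<f64> {
    x.iter()
        .zip(lo.iter().zip(hi))
        .map(|(&xi, (&l, &h))| xi.max(l).min(h))
        .collect()
}

/// Projected gradient: zero the components that point out of the feasible
/// box at an active bound, so stopping tests and descent directions only
/// see feasible movement.
fn projected_gradient(x: &[f64], g: &[f64], lo: &[f64], hi: &[f64]) -> Vec<f64> {
    x.iter()
        .zip(g)
        .zip(lo.iter().zip(hi))
        .map(|((&xi, &gi), (&l, &h))| {
            // At the lower bound, a positive gradient would step below it;
            // at the upper bound, a negative gradient would step above it.
            if (xi <= l && gi > 0.0) || (xi >= h && gi < 0.0) {
                0.0
            } else {
                gi
            }
        })
        .collect()
}
```

A point whose projected gradient is zero is stationary for the box-constrained problem even when the unconstrained gradient is not, which is why a gradient-norm stop must use the projected quantity.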
§Defaults (key settings)
- Line search: Strong Wolfe primary; GLL nonmonotone Armijo; approximate-Wolfe and gradient-drop acceptors; probing grid; keep-best salvage.
- Trust region: dogleg fallback enabled; Δ₀ = min(1, 10/||g₀||); adaptive by ρ; SPD enforcement and scaled-identity resets when needed.
- Tolerances: c1 = 1e-4, c2 = 0.9; heuristics selected by Profile.
- Zoom midpoint: flat-bracket midpoint acceptance under profile control.
- Stochastic jiggling: default ON with scale 1e‑3 (only after repeated flats in backtracking).
- Coordinate rescue: default ON (only after two consecutive flat accepts).
- Strategy switching: switch Wolfe<->Backtracking only on success/failure streaks (no timed flips).
- A clear, configurable builder API and robust termination with informative errors.
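Two of these defaults are simple enough to write down directly: the initial trust radius Δ₀ = min(1, 10/||g₀||), and the Strong Wolfe acceptance test with c1 = 1e-4, c2 = 0.9. A self-contained sketch (plain Rust; these helpers are illustrative, not the crate's internals):

```rust
/// Initial trust-region radius seed: Δ0 = min(1, 10 / ||g0||).
fn initial_radius(g0: &[f64]) -> f64 {
    let norm = g0.iter().map(|v| v * v).sum::<f64>().sqrt();
    if norm == 0.0 {
        1.0 // already stationary; any radius works
    } else {
        1.0_f64.min(10.0 / norm)
    }
}

/// Strong Wolfe acceptance for a step of length `t` along direction `d`:
/// sufficient decrease (Armijo with c1) plus the strong curvature condition
/// (c2) on the directional derivatives g·d at the start and trial points.
fn strong_wolfe(f0: f64, g0_dot_d: f64, ft: f64, gt_dot_d: f64, t: f64, c1: f64, c2: f64) -> bool {
    let sufficient_decrease = ft <= f0 + c1 * t * g0_dot_d;
    let curvature = gt_dot_d.abs() <= c2 * g0_dot_d.abs();
    sufficient_decrease && curvature
}
```

The small c1 makes the decrease test easy to pass, while c2 = 0.9 is the conventional loose curvature bound for quasi-Newton methods, which keeps accepted steps from being needlessly short.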
§Example
Minimize the Rosenbrock function, a classic test case for optimization algorithms.
use opt::{
    optimize, FirstOrderObjective, FirstOrderSample, MaxIterations, Problem, Profile, Solution,
    Tolerance,
};
use ndarray::{array, Array1};

struct Rosenbrock;

impl opt::ZerothOrderObjective for Rosenbrock {
    fn eval_cost(&mut self, x: &Array1<f64>) -> Result<f64, opt::ObjectiveEvalError> {
        let a = 1.0;
        let b = 100.0;
        Ok((a - x[0]).powi(2) + b * (x[1] - x[0].powi(2)).powi(2))
    }
}

impl FirstOrderObjective for Rosenbrock {
    fn eval_grad(&mut self, x: &Array1<f64>) -> Result<FirstOrderSample, opt::ObjectiveEvalError> {
        let a = 1.0;
        let b = 100.0;
        let f = (a - x[0]).powi(2) + b * (x[1] - x[0].powi(2)).powi(2);
        let gradient = array![
            -2.0 * (a - x[0]) - 4.0 * b * (x[1] - x[0].powi(2)) * x[0],
            2.0 * b * (x[1] - x[0].powi(2)),
        ];
        Ok(FirstOrderSample { value: f, gradient })
    }
}

// Set the initial guess.
let x0 = array![-1.2, 1.0];

// Run the solver.
let Solution {
    final_point: x_min,
    final_value,
    iterations,
    ..
} = optimize(Problem::new(x0, Rosenbrock))
    .with_tolerance(Tolerance::new(1e-6).unwrap())
    .with_max_iterations(MaxIterations::new(100).unwrap())
    .with_profile(Profile::Robust)
    .run()
    .expect("BFGS failed to solve");

println!(
    "Found minimum f([{:.3}, {:.3}]) = {:.4} in {} iterations.",
    x_min[0], x_min[1], final_value, iterations
);

// The known minimum is at [1.0, 1.0].
assert!((x_min[0] - 1.0).abs() < 1e-5);
assert!((x_min[1] - 1.0).abs() < 1e-5);

§Structs
- Arc: A configurable Adaptive Regularization with Cubics (ARC) solver.
- Bfgs: A configurable BFGS solver.
- Bounds
- FiniteDiffGradient
- FirstOrderSample
- FirstOrderWorkspace: Reusable scratch buffers for first-order evaluation. Allows a caller to amortize per-call allocations of gradient and value across many evaluations.
- FixedPoint
- FixedPointSample
- GradientTolerance: Stopping criterion for the projected gradient norm. Replaces the scalar Tolerance for callers that need a scale-aware stop.
- IterationInfo
- MatrixFreeTrustRegion: Matrix-free Newton trust-region solver. Uses Steihaug-Toint truncated CG with Hessian-vector products supplied by the objective's HessianOperator.
- MaxIterations
- NewtonTrustRegion
- OperatorSample: A sample carrying value, gradient, and a HessianValue.
- OptimizationDiagnostics: Counters and final-state values that the bare Solution does not expose. Useful for retry decisions: final_trust_radius warm-starts a follow-up Newton/ARC call, accepted_steps distinguishes "no progress at all" from "progress but ran out of budget", fallback_used flags a silent BFGS demotion.
- OptimizationReport: Structured solver outcome returned by run_report(). Pairs the final Solution with a status and diagnostics so callers can make retry decisions without inspecting solver-specific error variants.
- Problem
- SecondOrderProblem
- SecondOrderSample
- SecondOrderWorkspace: Reusable scratch buffers for second-order evaluation. Same idea as FirstOrderWorkspace but also includes a hessian array.
- Solution: A summary of a successful solver run.
- StepInfo
- SymmetricHessianMut
- Tolerance
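The workspace structs above exist to avoid allocating a fresh sample on every evaluation. The idea can be sketched in a few lines of plain Rust (the names here are hypothetical, not the crate's actual FirstOrderWorkspace API):

```rust
/// Hypothetical scratch buffers in the spirit of FirstOrderWorkspace:
/// allocated once by the caller, written in place on every evaluation.
struct Scratch {
    value: f64,
    gradient: Vec<f64>,
}

/// Evaluate the sphere function f(x) = Σ x_i² with gradient 2x,
/// writing into the workspace instead of returning a fresh sample.
fn eval_sphere_into(x: &[f64], ws: &mut Scratch) {
    ws.value = x.iter().map(|v| v * v).sum();
    ws.gradient.clear();
    ws.gradient.extend(x.iter().map(|v| 2.0 * v)); // reuses existing capacity
}
```

After the first call, `clear` plus `extend` reuses the vector's capacity, so a tight solver loop performs no per-evaluation heap allocation.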
§Enums
- ArcError
- AutoSecondOrderError
- AutoSecondOrderSolver
- BfgsError: An error type for clear diagnostics.
- BoundsError
- ConfigError
- FallbackPolicy: Policy controlling whether NewtonTrustRegion/Arc may demote to a first-order BFGS fallback when the second-order step fails to make progress (line-search failure, persistent trust-region rejection).
- FixedPointError
- FixedPointStatus
- HessianFallbackPolicy: What NewtonTrustRegion/Arc should do when an objective returns SecondOrderSample { hessian: None } (i.e. no analytic Hessian was supplied for this evaluation).
- HessianMaterialization: How (and whether) a HessianOperator can produce a dense materialized Hessian. Reported by HessianOperator::materialization() so a trust-region or ARC solver can decide between calling materialize_dense once and falling back to repeated Hessian-vector products through apply_into.
- HessianValue: Explicit Hessian payload exposed alongside SecondOrderSample.hessian for callers that want to declare exact-Hessian intent without going through Option<Array2<f64>>.
- InitialMetric: How to seed the BFGS inverse-Hessian approximation H_0^{-1}. Replaces (and supersedes) the previous with_initial_inverse_hessian thinking by giving callers a clean choice between scaled-identity resets and full dense seeds.
- LineSearchFailureReason
- MatrixError
- MatrixFreeTrustRegionError
- NewtonTrustRegionError
- ObjectiveEvalError
- OptimizationStatus: Outcome category for an optimizer run, distinct from the underlying Result<Solution, _> so callers can dispatch on convergence vs. budget exhaustion vs. numerical failure without pattern-matching solver-specific error variants.
- Profile
- StationarityKind
§Traits
- BatchZerothOrderObjective: An objective that can evaluate the cost at a batch of candidate points in one call. Solvers that perform parallelizable speculative trials (line-search probing, multi-start exploration) can use this to amortize fixed setup cost (one P-IRLS prep, one Cholesky factor, etc.) across multiple candidates.
- FirstOrderObjective
- FirstOrderObjectiveInto: First-order objective trait that writes into a caller-supplied workspace instead of returning a fresh FirstOrderSample. Useful when many evaluations happen and per-call allocation dominates.
- FixedPointObjective
- HessianOperator: An exact analytic Hessian-vector product (and optional materialization).
- OperatorObjective: An objective that exposes its Hessian as a HessianValue rather than a dense Option<Array2<f64>>. MatrixFreeTrustRegion uses this shape so it can drive Hv-only Hessians without materializing them. Callers that already have a dense Hessian should still implement this trait by returning HessianValue::Dense(_); the dense path is handled internally as a degenerate operator.
- OptimizerObserver: Accepted-vs-trial / start-of-iteration signals from a running solver. Parallel to typical optimizer observer hooks; intentionally minimal so each solver's wiring stays local. Each solver fires whichever hooks make sense for its algorithm; default (no-op) implementations keep solvers free to add hooks without breaking existing observers.
- SecondOrderObjective
- SecondOrderObjectiveInto: Second-order objective trait that writes into a caller-supplied workspace. Default impl wraps SecondOrderObjective::eval_hessian.
- ZerothOrderObjective
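The batch-evaluation idea behind BatchZerothOrderObjective can be sketched with a trait of the same shape (the trait and method names below are illustrative, not the crate's signatures):

```rust
/// Hypothetical batch-evaluation shape: one call evaluates several candidate
/// points, so any fixed setup cost is paid once per batch rather than once
/// per point.
trait BatchCost {
    fn eval_batch(&mut self, points: &[Vec<f64>]) -> Vec<f64>;
}

struct Sphere;

impl BatchCost for Sphere {
    fn eval_batch(&mut self, points: &[Vec<f64>]) -> Vec<f64> {
        // Any shared preparation (a factorization, a prepared model) would
        // happen here once, then be reused for every candidate below.
        points
            .iter()
            .map(|p| p.iter().map(|v| v * v).sum())
            .collect()
    }
}
```

A line search probing several trial steps, or a multi-start loop, can then hand the solver a whole batch and let the objective exploit the shared work internally.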