Optimization module for SciRS
This module provides implementations of various optimization algorithms, modeled after SciPy's optimize module.
§Submodules
- unconstrained: Unconstrained optimization algorithms
- constrained: Constrained optimization algorithms
- least_squares: Least squares minimization (including robust methods)
- roots: Root finding algorithms
- scalar: Scalar (univariate) optimization algorithms
- global: Global optimization algorithms
§Optimization Methods
The following optimization methods are currently implemented:
§Unconstrained:
- Nelder-Mead: A derivative-free method using a simplex-based approach
- Powell: Derivative-free method using conjugate directions
- BFGS: Quasi-Newton method with BFGS update (see the sketch after this list)
- CG: Nonlinear conjugate gradient method
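The BFGS entry refers to the standard quasi-Newton update of an inverse-Hessian approximation. The following is a minimal, self-contained sketch of that update using ndarray; it is a textbook illustration, not the crate's internal implementation, and the name bfgs_update is invented for this example.
// Sketch: one BFGS update of the inverse-Hessian approximation H, given the step
// s = x_{k+1} - x_k and the gradient change y = g_{k+1} - g_k:
//   H' = (I - rho s y^T) H (I - rho y s^T) + rho s s^T,  with rho = 1 / (y^T s)
use ndarray::{Array1, Array2};

fn bfgs_update(h_inv: &Array2<f64>, s: &Array1<f64>, y: &Array1<f64>) -> Array2<f64> {
    let n = s.len();
    let rho = 1.0 / y.dot(s);
    let eye = Array2::<f64>::eye(n);
    // Outer products s y^T, y s^T and s s^T
    let sy = Array2::from_shape_fn((n, n), |(i, j)| s[i] * y[j]);
    let ys = Array2::from_shape_fn((n, n), |(i, j)| y[i] * s[j]);
    let ss = Array2::from_shape_fn((n, n), |(i, j)| s[i] * s[j]);
    (&eye - &(&sy * rho)).dot(h_inv).dot(&(&eye - &(&ys * rho))) + ss * rho
}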
§Constrained:
- SLSQP: Sequential Least SQuares Programming
- TrustConstr: Trust-region constrained optimizer
§Scalar (Univariate) Optimization:
- Brent: Combines parabolic interpolation with golden section search
- Bounded: Brent’s method with bounds constraints
- Golden: Golden section search (see the sketch after this list)
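As a reference for the Golden entry, here is a generic, self-contained golden-section search over a bracketing interval. It only illustrates the technique; it does not show the crate's scalar::minimize_scalar API, and the name golden_section is invented for this sketch.
// Golden-section search for the minimum of a unimodal function on [a, b].
fn golden_section(f: impl Fn(f64) -> f64, mut a: f64, mut b: f64, tol: f64) -> f64 {
    let inv_phi = (5.0_f64.sqrt() - 1.0) / 2.0; // 1/phi ~= 0.618
    let mut c = b - inv_phi * (b - a);
    let mut d = a + inv_phi * (b - a);
    while (b - a).abs() > tol {
        if f(c) < f(d) {
            b = d; // the minimum lies in [a, d]
        } else {
            a = c; // the minimum lies in [c, b]
        }
        c = b - inv_phi * (b - a);
        d = a + inv_phi * (b - a);
    }
    0.5 * (a + b)
}

// For example, golden_section(|x| (x - 2.0).powi(2), 0.0, 5.0, 1e-8) returns a value close to 2.0.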
§Global:
- Differential Evolution: Stochastic global optimization method
- Basin-hopping: Random perturbations with local minimization
- Dual Annealing: Simulated annealing with a fast annealing schedule
- Particle Swarm: Population-based optimization inspired by swarm behavior
- Simulated Annealing: Probabilistic optimization with a cooling schedule (see the sketch after this list)
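To illustrate the cooling-schedule idea behind simulated annealing, the sketch below applies the Metropolis acceptance rule with a geometric cooling schedule to a one-dimensional function. It is a dependency-free illustration of the principle, not the crate's simulated_annealing implementation; the names anneal and xorshift are invented here.
// Tiny xorshift PRNG producing values in [0, 1) so the sketch needs no external crates.
fn xorshift(state: &mut u64) -> f64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    (*state >> 11) as f64 / (1u64 << 53) as f64
}

// Minimize a 1-D function by random perturbation plus Metropolis acceptance.
fn anneal(f: impl Fn(f64) -> f64, mut x: f64, mut temp: f64) -> f64 {
    let mut rng_state = 0x9E3779B97F4A7C15u64;
    let cooling = 0.95; // geometric cooling schedule: temp_{k+1} = 0.95 * temp_k
    for _ in 0..1_000 {
        let candidate = x + (xorshift(&mut rng_state) - 0.5) * temp;
        let delta = f(candidate) - f(x);
        // Always accept improvements; accept uphill moves with probability exp(-delta / temp).
        if delta < 0.0 || xorshift(&mut rng_state) < (-delta / temp).exp() {
            x = candidate;
        }
        temp *= cooling;
    }
    x
}

// For example, anneal(|x| (x - 2.0).powi(2), 10.0, 5.0) should end up near 2.0.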
§Least Squares:
- Levenberg-Marquardt: Trust-region algorithm for nonlinear least squares
- Trust Region Reflective: Bounds-constrained least squares
- Robust Least Squares: M-estimators for outlier-resistant regression
  - Huber loss: Reduces influence of moderate outliers (see the sketch after this list)
  - Bisquare loss: Completely rejects extreme outliers
  - Cauchy loss: Provides very strong outlier resistance
- Weighted Least Squares: Handles heteroscedastic data (varying variance)
- Bounded Least Squares: Box constraints on parameters
- Separable Least Squares: Variable projection for partially linear models
- Total Least Squares: Errors-in-variables regression
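For the Huber loss listed above, the sketch below shows the standard M-estimator definition: residuals within a threshold delta are penalized quadratically, larger ones only linearly, which is what limits the influence of moderate outliers. This is the textbook form and may differ in detail from the crate's HuberLoss; the helper names are invented for this illustration.
// Huber loss rho(r): quadratic for |r| <= delta, linear beyond.
fn huber_rho(r: f64, delta: f64) -> f64 {
    if r.abs() <= delta {
        0.5 * r * r
    } else {
        delta * (r.abs() - 0.5 * delta)
    }
}

// Corresponding weight w(r) = psi(r) / r used in iteratively reweighted least squares.
fn huber_weight(r: f64, delta: f64) -> f64 {
    if r.abs() <= delta { 1.0 } else { delta / r.abs() }
}

// With delta = 1.0, a residual of 0.5 keeps full weight 1.0,
// while an outlier residual of 10.0 is down-weighted to 0.1.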
§Bounds Support
The unconstrained
module now supports bounds constraints for variables.
You can specify lower and upper bounds for each variable, and the optimizer
will ensure that all iterates remain within these bounds.
The following methods support bounds constraints:
- Powell
- Nelder-Mead
- BFGS
- CG (Conjugate Gradient)
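One common way to honor box bounds is to project each trial point back onto the feasible box, coordinate by coordinate. The sketch below shows such a projection; it is a generic illustration only, and whether the bounded variants of the methods above clamp, reflect, or use another strategy internally is not shown here. The name project_into_bounds is invented for this example.
use ndarray::Array1;

// Clamp each coordinate of x into [lower[i], upper[i]]; None means unbounded on that side.
fn project_into_bounds(
    x: &Array1<f64>,
    lower: &[Option<f64>],
    upper: &[Option<f64>],
) -> Array1<f64> {
    x.iter()
        .enumerate()
        .map(|(i, &xi)| {
            let lo = lower[i].unwrap_or(f64::NEG_INFINITY);
            let hi = upper[i].unwrap_or(f64::INFINITY);
            xi.clamp(lo, hi)
        })
        .collect()
}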
§Examples
§Basic Optimization
// Example of minimizing a function using BFGS
use ndarray::ArrayView1;
use scirs2_optimize::unconstrained::{minimize, Method};

fn rosenbrock(x: &ArrayView1<f64>) -> f64 {
    let a = 1.0;
    let b = 100.0;
    let x0 = x[0];
    let x1 = x[1];
    (a - x0).powi(2) + b * (x1 - x0.powi(2)).powi(2)
}

let initial_guess = [0.0, 0.0];
let result = minimize(rosenbrock, &initial_guess, Method::BFGS, None)?;

println!("Solution: {:?}", result.x);
println!("Function value at solution: {}", result.fun);
println!("Number of iterations: {}", result.iterations);
println!("Success: {}", result.success);
§Optimization with Bounds
// Example of minimizing a function with bounds constraints
use ndarray::ArrayView1;
use scirs2_optimize::{Bounds, unconstrained::{minimize, Method, Options}};

// A function with minimum at (-1, -1)
fn func(x: &ArrayView1<f64>) -> f64 {
    (x[0] + 1.0).powi(2) + (x[1] + 1.0).powi(2)
}

// Create bounds: x >= 0, y >= 0
// This will constrain the optimization to the positive quadrant
let bounds = Bounds::new(&[(Some(0.0), None), (Some(0.0), None)]);

let initial_guess = [0.5, 0.5];
let mut options = Options::default();
options.bounds = Some(bounds);

// Use Powell's method, which supports bounds
let result = minimize(func, &initial_guess, Method::Powell, Some(options))?;

// The constrained minimum should be at [0, 0] with value 2.0
println!("Solution: {:?}", result.x);
println!("Function value at solution: {}", result.fun);
§Bounds Creation Options
use scirs2_optimize::Bounds;

// Create bounds from pairs
// Format: [(min_x1, max_x1), (min_x2, max_x2), ...] where None = unbounded
let bounds1 = Bounds::new(&[
    (Some(0.0), Some(1.0)),  // 0 <= x[0] <= 1
    (Some(-1.0), None),      // x[1] >= -1, no upper bound
    (None, Some(10.0)),      // x[2] <= 10, no lower bound
    (None, None),            // x[3] is completely unbounded
]);

// Alternative: create from separate lower and upper bound vectors
let lb = vec![Some(0.0), Some(-1.0), None, None];
let ub = vec![Some(1.0), None, Some(10.0), None];
let bounds2 = Bounds::from_vecs(lb, ub).unwrap();
§Robust Least Squares Example
use ndarray::{array, Array1, Array2};
use scirs2_optimize::least_squares::{robust_least_squares, HuberLoss};

// Define residual function for linear regression
fn residual(params: &[f64], data: &[f64]) -> Array1<f64> {
    let n = data.len() / 2;
    let x_vals = &data[0..n];
    let y_vals = &data[n..];
    let mut res = Array1::zeros(n);
    for i in 0..n {
        res[i] = y_vals[i] - (params[0] + params[1] * x_vals[i]);
    }
    res
}

// Data with outliers
let data = array![0., 1., 2., 3., 4., 0.1, 0.9, 2.1, 2.9, 10.0];
let x0 = array![0.0, 0.0];

// Use Huber loss for robustness
let huber_loss = HuberLoss::new(1.0);
let result = robust_least_squares(
    residual,
    &x0,
    huber_loss,
    None::<fn(&[f64], &[f64]) -> Array2<f64>>,
    &data,
    None,
)?;

println!("Robust solution: intercept={:.3}, slope={:.3}",
         result.x[0], result.x[1]);
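In this data layout the first five values are the x coordinates and the last five the y values, so the point (4.0, 10.0) is the outlier. The remaining points lie close to the line y = x, so the robust fit should return an intercept near 0 and a slope near 1, whereas an ordinary least-squares fit would be pulled noticeably upward by the outlier.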
§Re-exports
pub use error::OptimizeError;
pub use error::OptimizeResult;
pub use result::OptimizeResults;
pub use automatic_differentiation::autodiff;
pub use automatic_differentiation::create_ad_gradient;
pub use automatic_differentiation::create_ad_hessian;
pub use automatic_differentiation::optimize_ad_mode;
pub use automatic_differentiation::ADMode;
pub use automatic_differentiation::ADResult;
pub use automatic_differentiation::AutoDiffFunction;
pub use automatic_differentiation::AutoDiffOptions;
pub use constrained::minimize_constrained;
pub use global::basinhopping;
pub use global::bayesian_optimization;
pub use global::differential_evolution;
pub use global::dual_annealing;
pub use global::generate_diverse_start_points;
pub use global::multi_start;
pub use global::multi_start_with_clustering;
pub use global::particle_swarm;
pub use global::simulated_annealing;
pub use jit_optimization::optimize_function;
pub use jit_optimization::FunctionPattern;
pub use jit_optimization::JitCompiler;
pub use jit_optimization::JitOptions;
pub use jit_optimization::JitStats;
pub use least_squares::bounded_least_squares;
pub use least_squares::least_squares;
pub use least_squares::robust_least_squares;
pub use least_squares::separable_least_squares;
pub use least_squares::total_least_squares;
pub use least_squares::weighted_least_squares;
pub use least_squares::BisquareLoss;
pub use least_squares::CauchyLoss;
pub use least_squares::HuberLoss;
pub use ml_optimizers::ml_problems;
pub use ml_optimizers::ADMMOptimizer;
pub use ml_optimizers::CoordinateDescentOptimizer;
pub use ml_optimizers::ElasticNetOptimizer;
pub use ml_optimizers::GroupLassoOptimizer;
pub use ml_optimizers::LassoOptimizer;
pub use multi_objective::scalarization;
pub use multi_objective::MultiObjectiveConfig;
pub use multi_objective::MultiObjectiveResult;
pub use multi_objective::MultiObjectiveSolution;
pub use multi_objective::NSGAII;
pub use multi_objective::NSGAIII;
pub use neural_integration::optimizers;
pub use neural_integration::NeuralOptimizer;
pub use neural_integration::NeuralParameters;
pub use neural_integration::NeuralTrainer;
pub use roots::root;
pub use scalar::minimize_scalar;
pub use sparse_numdiff::sparse_hessian;
pub use sparse_numdiff::sparse_jacobian;
pub use sparse_numdiff::SparseFiniteDiffOptions;
pub use stochastic::minimize_adam;
pub use stochastic::minimize_adamw;
pub use stochastic::minimize_rmsprop;
pub use stochastic::minimize_sgd;
pub use stochastic::minimize_sgd_momentum;
pub use stochastic::minimize_stochastic;
pub use stochastic::AdamOptions;
pub use stochastic::AdamWOptions;
pub use stochastic::DataProvider;
pub use stochastic::InMemoryDataProvider;
pub use stochastic::LearningRateSchedule;
pub use stochastic::MomentumOptions;
pub use stochastic::RMSPropOptions;
pub use stochastic::SGDOptions;
pub use stochastic::StochasticGradientFunction;
pub use stochastic::StochasticMethod;
pub use stochastic::StochasticOptions;
pub use unconstrained::minimize;
pub use unconstrained::Bounds;
§Modules
- automatic_differentiation - Automatic differentiation for exact gradient and Hessian computation
- constrained - Constrained optimization algorithms
- error - Error types for the SciRS2 optimization module
- global - Global optimization algorithms
- jit_optimization - Just-in-time compilation and auto-vectorization for optimization
- least_squares - Least squares submodule containing specialized algorithms and loss functions
- ml_optimizers - Specialized optimizers for machine learning applications
- multi_objective - Multi-objective optimization algorithms
- neural_integration - Integration with scirs2-neural for machine learning optimization
- parallel - Parallel computation utilities for optimization algorithms
- prelude
- result - Optimization result structures
- roots - Root finding algorithms
- roots_anderson
- roots_krylov
- scalar - Scalar optimization algorithms
- simd_ops - SIMD-accelerated implementations using scirs2-core unified system
- sparse_numdiff - Sparse numerical differentiation for large-scale optimization
- stochastic - Stochastic optimization methods for machine learning and large-scale problems
- unconstrained - Unconstrained optimization algorithms