§SciRS2 Optimize - Mathematical Optimization for Rust
scirs2-optimize provides comprehensive optimization algorithms modeled after SciPy’s optimize module, offering everything from simple function minimization to complex constrained optimization and global search.
§🎯 Key Features
- Unconstrained Optimization: BFGS, CG, Nelder-Mead, Powell
- Constrained Optimization: SLSQP, Trust-region methods
- Global Optimization: Differential Evolution, Basin-hopping, Simulated Annealing
- Least Squares: Levenberg-Marquardt, robust fitting, bounded problems
- Root Finding: Newton, Brent, Bisection methods
- Scalar Optimization: Brent, Golden section search
- Bounds Support: Box constraints for all major algorithms
§📦 Module Overview
| Module | Description | SciPy Equivalent |
|---|---|---|
| unconstrained | Unconstrained minimization (BFGS, CG, Powell) | scipy.optimize.minimize |
| constrained | Constrained optimization (SLSQP, Trust-region) | scipy.optimize.minimize with constraints |
| global | Global optimization (DE, Basin-hopping) | scipy.optimize.differential_evolution |
| least_squares | Nonlinear least squares (LM, robust methods) | scipy.optimize.least_squares |
| roots | Root finding algorithms | scipy.optimize.root |
| scalar | 1-D minimization | scipy.optimize.minimize_scalar |
§🚀 Quick Start
§Installation
[dependencies]
scirs2-optimize = "0.1.0-rc.1"
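The examples below also import ndarray types through scirs2-core, so you will typically add it alongside scirs2-optimize (the matching version here is an assumption; use the release paired with your scirs2-optimize version):
[dependencies]
scirs2-optimize = "0.1.0-rc.1"
scirs2-core = "0.1.0-rc.1"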
§Unconstrained Minimization (Rosenbrock Function)
use scirs2_optimize::unconstrained::{minimize, Method};
use scirs2_core::ndarray::ArrayView1;
// Rosenbrock function: (1-x)² + 100(y-x²)²
fn rosenbrock(x: &ArrayView1<f64>) -> f64 {
    let x0 = x[0];
    let x1 = x[1];
    (1.0 - x0).powi(2) + 100.0 * (x1 - x0.powi(2)).powi(2)
}
let initial_guess = [0.0, 0.0];
let result = minimize(rosenbrock, &initial_guess, Method::BFGS, None)?;
println!("Minimum at: {:?}", result.x);
println!("Function value: {}", result.fun);
println!("Converged: {}", result.success);
§Optimization with Bounds
Constrain variables to specific ranges:
use scirs2_optimize::{Bounds, unconstrained::{minimize, Method, Options}};
use scirs2_core::ndarray::ArrayView1;
fn objective(x: &ArrayView1<f64>) -> f64 {
    (x[0] + 1.0).powi(2) + (x[1] + 1.0).powi(2)
}
// Constrain to positive quadrant: x >= 0, y >= 0
let bounds = Bounds::new(&[
    (Some(0.0), None), // x >= 0
    (Some(0.0), None), // y >= 0
]);
let mut options = Options::default();
options.bounds = Some(bounds);
let result = minimize(objective, &[0.5, 0.5], Method::Powell, Some(options))?;
println!("Constrained minimum: {:?}", result.x); // [0.0, 0.0]
§Robust Least Squares
Fit data with outliers using robust loss functions:
use scirs2_optimize::least_squares::{robust_least_squares, HuberLoss};
use scirs2_core::ndarray::{array, Array1};
// Linear model residual: y - (a + b*x)
fn residual(params: &[f64], data: &[f64]) -> Array1<f64> {
    let n = data.len() / 2;
    let x = &data[0..n];
    let y = &data[n..];
    let mut res = Array1::zeros(n);
    for i in 0..n {
        res[i] = y[i] - (params[0] + params[1] * x[i]);
    }
    res
}
// Data: x = [0,1,2,3,4], y = [0.1,0.9,2.1,2.9,10.0] (last point is outlier)
let data = array![0.,1.,2.,3.,4., 0.1,0.9,2.1,2.9,10.0];
let huber = HuberLoss::new(1.0); // Robust to outliers
let x0 = array![0.0, 0.0];
let result = robust_least_squares(
    residual, &x0, huber, None::<fn(&[f64], &[f64]) -> scirs2_core::ndarray::Array2<f64>>, &data, None,
)?;
println!("Robust fit: y = {:.3} + {:.3}x", result.x[0], result.x[1]);
§Global Optimization
Find global minimum of multi-modal functions:
use scirs2_optimize::global::{differential_evolution, DifferentialEvolutionOptions};
use scirs2_core::ndarray::ArrayView1;
// Rastrigin function (multiple local minima)
fn rastrigin(x: &ArrayView1<f64>) -> f64 {
    let n = x.len() as f64;
    10.0 * n + x.iter().map(|xi| xi.powi(2) - 10.0 * (2.0 * std::f64::consts::PI * xi).cos()).sum::<f64>()
}
let bounds = vec![(-5.12, 5.12); 5]; // 5-dimensional search space
let options = Some(DifferentialEvolutionOptions::default());
let result = differential_evolution(rastrigin, bounds, options, None)?;
println!("Global minimum: {:?}", result.x);
§Root Finding
Solve equations f(x) = 0:
use scirs2_optimize::roots::{root, Method};
use scirs2_core::ndarray::{array, Array1};
// Find root of x² - 2 = 0 (i.e., √2)
fn f(x: &[f64]) -> Array1<f64> {
    array![x[0] * x[0] - 2.0]
}
let x0 = array![1.5]; // Initial guess
let result = root(f, &x0, Method::Hybr, None::<fn(&[f64]) -> scirs2_core::ndarray::Array2<f64>>, None)?;
println!("√2 ≈ {:.10}", result.x[0]); // 1.4142135624
§Submodules
- unconstrained: Unconstrained optimization algorithms
- constrained: Constrained optimization algorithms
- least_squares: Least squares minimization (including robust methods)
- roots: Root finding algorithms
- scalar: Scalar (univariate) optimization algorithms
- global: Global optimization algorithms
§Optimization Methods
The following optimization methods are currently implemented:
§Unconstrained:
- Nelder-Mead: Derivative-free method using a simplex-based search
- Powell: Derivative-free method using conjugate directions
- BFGS: Quasi-Newton method with BFGS update
- CG: Nonlinear conjugate gradient method
§Constrained:
- SLSQP: Sequential Least SQuares Programming
- TrustConstr: Trust-region constrained optimizer
§Scalar (Univariate) Optimization:
- Brent: Combines parabolic interpolation with golden section search
- Bounded: Brent’s method with bounds constraints
- Golden: Golden section search
§Global:
- Differential Evolution: Stochastic global optimization method
- Basin-hopping: Random perturbations with local minimization
- Dual Annealing: Generalized simulated annealing combining fast and classical annealing schedules
- Particle Swarm: Population-based optimization inspired by swarm behavior
- Simulated Annealing: Probabilistic optimization with cooling schedule
§Least Squares:
- Levenberg-Marquardt: Trust-region algorithm for nonlinear least squares
- Trust Region Reflective: Bounds-constrained least squares
- Robust Least Squares: M-estimators for outlier-resistant regression (see the loss-swap sketch after this list)
- Huber loss: Reduces influence of moderate outliers
- Bisquare loss: Completely rejects extreme outliers
- Cauchy loss: Provides very strong outlier resistance
- Weighted Least Squares: Handles heteroscedastic data (varying variance)
- Bounded Least Squares: Box constraints on parameters
- Separable Least Squares: Variable projection for partially linear models
- Total Least Squares: Errors-in-variables regression
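When outliers are heavier-tailed than a Huber loss handles well, the loss can be swapped without changing the rest of the setup. A hedged sketch, assuming CauchyLoss exposes the same new(scale) constructor as HuberLoss (check the least_squares module docs), and reusing the residual function and data array from the Robust Least Squares examples on this page:
use scirs2_core::ndarray::{array, Array2};
use scirs2_optimize::least_squares::{robust_least_squares, CauchyLoss};

// Assumption: CauchyLoss::new(scale) mirrors HuberLoss::new(scale).
let cauchy = CauchyLoss::new(1.0);
let x0 = array![0.0, 0.0];
let result = robust_least_squares(
    residual,  // same linear-model residual as in the examples
    &x0,
    cauchy,
    None::<fn(&[f64], &[f64]) -> Array2<f64>>, // no analytic Jacobian
    &data,     // same data, including the outlier
    None,
)?;
println!("Cauchy fit: y = {:.3} + {:.3}x", result.x[0], result.x[1]);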
§Bounds Support
The unconstrained module now supports bounds constraints for variables. You can specify lower and upper bounds for each variable, and the optimizer will ensure that all iterates remain within these bounds.
The following methods support bounds constraints (a short BFGS sketch follows the list):
- Powell
- Nelder-Mead
- BFGS
- CG (Conjugate Gradient)
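For example, box bounds combine with the gradient-based BFGS method just as with Powell. A minimal sketch using the same Bounds/Options API shown in the examples on this page:
use scirs2_core::ndarray::ArrayView1;
use scirs2_optimize::{Bounds, unconstrained::{minimize, Method, Options}};

// Unconstrained minimum is at (-1, -1); the bounds push it to the origin.
fn objective(x: &ArrayView1<f64>) -> f64 {
    (x[0] + 1.0).powi(2) + (x[1] + 1.0).powi(2)
}

// Restrict the search to 0 <= x[i] <= 2 for both variables
let bounds = Bounds::new(&[(Some(0.0), Some(2.0)), (Some(0.0), Some(2.0))]);
let mut options = Options::default();
options.bounds = Some(bounds);

let result = minimize(objective, &[0.5, 0.5], Method::BFGS, Some(options))?;
println!("Bounded BFGS minimum: {:?}", result.x); // expected near [0.0, 0.0]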
§Examples
§Basic Optimization
// Example of minimizing a function using BFGS
use scirs2_core::ndarray::{array, ArrayView1};
use scirs2_optimize::unconstrained::{minimize, Method};
fn rosenbrock(x: &ArrayView1<f64>) -> f64 {
    let a = 1.0;
    let b = 100.0;
    let x0 = x[0];
    let x1 = x[1];
    (a - x0).powi(2) + b * (x1 - x0.powi(2)).powi(2)
}
let initial_guess = [0.0, 0.0];
let result = minimize(rosenbrock, &initial_guess, Method::BFGS, None)?;
println!("Solution: {:?}", result.x);
println!("Function value at solution: {}", result.fun);
println!("Number of nit: {}", result.nit);
println!("Success: {}", result.success);
§Optimization with Bounds
// Example of minimizing a function with bounds constraints
use scirs2_core::ndarray::{array, ArrayView1};
use scirs2_optimize::{Bounds, unconstrained::{minimize, Method, Options}};
// A function with minimum at (-1, -1)
fn func(x: &ArrayView1<f64>) -> f64 {
    (x[0] + 1.0).powi(2) + (x[1] + 1.0).powi(2)
}
// Create bounds: x >= 0, y >= 0
// This will constrain the optimization to the positive quadrant
let bounds = Bounds::new(&[(Some(0.0), None), (Some(0.0), None)]);
let initial_guess = [0.5, 0.5];
let mut options = Options::default();
options.bounds = Some(bounds);
// Use Powell's method which supports bounds
let result = minimize(func, &initial_guess, Method::Powell, Some(options))?;
// The constrained minimum should be at [0, 0] with value 2.0
println!("Solution: {:?}", result.x);
println!("Function value at solution: {}", result.fun);
§Bounds Creation Options
use scirs2_optimize::Bounds;
// Create bounds from pairs
// Format: [(min_x1, max_x1), (min_x2, max_x2), ...] where None = unbounded
let bounds1 = Bounds::new(&[
    (Some(0.0), Some(1.0)),  // 0 <= x[0] <= 1
    (Some(-1.0), None),      // x[1] >= -1, no upper bound
    (None, Some(10.0)),      // x[2] <= 10, no lower bound
    (None, None),            // x[3] is completely unbounded
]);
// Alternative: create from separate lower and upper bound vectors
let lb = vec![Some(0.0), Some(-1.0), None, None];
let ub = vec![Some(1.0), None, Some(10.0), None];
let bounds2 = Bounds::from_vecs(lb, ub).unwrap();
§Robust Least Squares Example
use scirs2_core::ndarray::{array, Array1, Array2};
use scirs2_optimize::least_squares::{robust_least_squares, HuberLoss};
// Define residual function for linear regression
fn residual(params: &[f64], data: &[f64]) -> Array1<f64> {
    let n = data.len() / 2;
    let x_vals = &data[0..n];
    let y_vals = &data[n..];
    let mut res = Array1::zeros(n);
    for i in 0..n {
        res[i] = y_vals[i] - (params[0] + params[1] * x_vals[i]);
    }
    res
}
// Data with outliers
let data = array![0., 1., 2., 3., 4., 0.1, 0.9, 2.1, 2.9, 10.0];
let x0 = array![0.0, 0.0];
// Use Huber loss for robustness
let huber_loss = HuberLoss::new(1.0);
let result = robust_least_squares(
    residual,
    &x0,
    huber_loss,
    None::<fn(&[f64], &[f64]) -> Array2<f64>>,
    &data,
    None,
)?;
println!("Robust solution: intercept={:.3}, slope={:.3}",
    result.x[0], result.x[1]);
§Re-exports
pub use error::OptimizeError;
pub use error::OptimizeResult;
pub use result::OptimizeResults;
pub use advanced_coordinator::advanced_optimize;
pub use advanced_coordinator::AdvancedConfig;
pub use advanced_coordinator::AdvancedCoordinator;
pub use advanced_coordinator::AdvancedStats;
pub use advanced_coordinator::AdvancedStrategy;
pub use advanced_coordinator::StrategyPerformance;
pub use automatic_differentiation::autodiff;
pub use automatic_differentiation::create_ad_gradient;
pub use automatic_differentiation::create_ad_hessian;
pub use automatic_differentiation::optimize_ad_mode;
pub use automatic_differentiation::ADMode;
pub use automatic_differentiation::ADResult;
pub use automatic_differentiation::AutoDiffFunction;
pub use automatic_differentiation::AutoDiffOptions;
pub use benchmarking::benchmark_suites;
pub use benchmarking::test_functions;
pub use benchmarking::AlgorithmRanking;
pub use benchmarking::BenchmarkConfig;
pub use benchmarking::BenchmarkResults;
pub use benchmarking::BenchmarkRun;
pub use benchmarking::BenchmarkSummary;
pub use benchmarking::BenchmarkSystem;
pub use benchmarking::ProblemCharacteristics;
pub use benchmarking::RuntimeStats;
pub use benchmarking::TestProblem;
pub use constrained::minimize_constrained;
pub use distributed::algorithms::DistributedDifferentialEvolution;
pub use distributed::algorithms::DistributedParticleSwarm;
pub use distributed::DistributedConfig;
pub use distributed::DistributedOptimizationContext;
pub use distributed::DistributedStats;
pub use distributed::DistributionStrategy;
pub use distributed::MPIInterface;
pub use distributed::WorkAssignment;
pub use distributed_gpu::DistributedGpuConfig;
pub use distributed_gpu::DistributedGpuOptimizer;
pub use distributed_gpu::DistributedGpuResults;
pub use distributed_gpu::DistributedGpuStats;
pub use distributed_gpu::GpuCommunicationStrategy;
pub use distributed_gpu::IterationStats;
pub use global::basinhopping;
pub use global::bayesian_optimization;
pub use global::differential_evolution;
pub use global::dual_annealing;
pub use global::generate_diverse_start_points;
pub use global::multi_start;
pub use global::multi_start_with_clustering;
pub use global::particle_swarm;
pub use global::simulated_annealing;
pub use gpu::acceleration::AccelerationConfig;
pub use gpu::acceleration::AccelerationManager;
pub use gpu::acceleration::AccelerationStrategy;
pub use gpu::acceleration::PerformanceStats;
pub use gpu::algorithms::GpuDifferentialEvolution;
pub use gpu::algorithms::GpuParticleSwarm;
pub use gpu::GpuFunction;
pub use gpu::GpuOptimizationConfig;
pub use gpu::GpuOptimizationContext;
pub use gpu::GpuPrecision;
pub use jit_optimization::optimize_function;
pub use jit_optimization::FunctionPattern;
pub use jit_optimization::JitCompiler;
pub use jit_optimization::JitOptions;
pub use jit_optimization::JitStats;
pub use learned_optimizers::learned_optimize;
pub use learned_optimizers::ActivationType;
pub use learned_optimizers::AdaptationStatistics;
pub use learned_optimizers::AdaptiveNASSystem;
pub use learned_optimizers::AdaptiveTransformerOptimizer;
pub use learned_optimizers::FewShotLearningOptimizer;
pub use learned_optimizers::LearnedHyperparameterTuner;
pub use learned_optimizers::LearnedOptimizationConfig;
pub use learned_optimizers::LearnedOptimizer;
pub use learned_optimizers::MetaOptimizerState;
pub use learned_optimizers::NeuralAdaptiveOptimizer;
pub use learned_optimizers::OptimizationNetwork;
pub use learned_optimizers::OptimizationProblem;
pub use learned_optimizers::ParameterDistribution;
pub use learned_optimizers::ProblemEncoder;
pub use learned_optimizers::TrainingTask;
pub use least_squares::bounded_least_squares;
pub use least_squares::least_squares;
pub use least_squares::robust_least_squares;
pub use least_squares::separable_least_squares;
pub use least_squares::total_least_squares;
pub use least_squares::weighted_least_squares;
pub use least_squares::BisquareLoss;
pub use least_squares::CauchyLoss;
pub use least_squares::HuberLoss;
pub use ml_optimizers::ml_problems;
pub use ml_optimizers::ADMMOptimizer;
pub use ml_optimizers::CoordinateDescentOptimizer;
pub use ml_optimizers::ElasticNetOptimizer;
pub use ml_optimizers::GroupLassoOptimizer;
pub use ml_optimizers::LassoOptimizer;
pub use multi_objective::MultiObjectiveConfig;
pub use multi_objective::MultiObjectiveResult;
pub use multi_objective::MultiObjectiveSolution;
pub use multi_objective::NSGAII;
pub use multi_objective::NSGAIII;
pub use neural_integration::optimizers;
pub use neural_integration::NeuralOptimizer;
pub use neural_integration::NeuralParameters;
pub use neural_integration::NeuralTrainer;
pub use neuromorphic::neuromorphic_optimize;
pub use neuromorphic::BasicNeuromorphicOptimizer;
pub use neuromorphic::NeuromorphicConfig;
pub use neuromorphic::NeuromorphicNetwork;
pub use neuromorphic::NeuromorphicOptimizer;
pub use neuromorphic::NeuronState;
pub use neuromorphic::SpikeEvent;
pub use quantum_inspired::quantum_optimize;
pub use quantum_inspired::quantum_particle_swarm_optimize;
pub use quantum_inspired::Complex;
pub use quantum_inspired::CoolingSchedule;
pub use quantum_inspired::QuantumAnnealingSchedule;
pub use quantum_inspired::QuantumInspiredOptimizer;
pub use quantum_inspired::QuantumOptimizationStats;
pub use quantum_inspired::QuantumState;
pub use reinforcement_learning::actor_critic_optimize;
pub use reinforcement_learning::bandit_optimize;
pub use reinforcement_learning::evolutionary_optimize;
pub use reinforcement_learning::meta_learning_optimize;
pub use reinforcement_learning::policy_gradient_optimize;
pub use reinforcement_learning::BanditOptimizer;
pub use reinforcement_learning::EvolutionaryStrategy;
pub use reinforcement_learning::Experience;
pub use reinforcement_learning::MetaLearningOptimizer;
pub use reinforcement_learning::OptimizationAction;
pub use reinforcement_learning::OptimizationState;
pub use reinforcement_learning::QLearningOptimizer;
pub use reinforcement_learning::RLOptimizationConfig;
pub use reinforcement_learning::RLOptimizer;
pub use roots::root;
pub use scalar::minimize_scalar;
pub use self_tuning::presets;
pub use self_tuning::AdaptationResult;
pub use self_tuning::AdaptationStrategy;
pub use self_tuning::ParameterChange;
pub use self_tuning::ParameterValue;
pub use self_tuning::PerformanceMetrics;
pub use self_tuning::SelfTuningConfig;
pub use self_tuning::SelfTuningOptimizer;
pub use self_tuning::TunableParameter;
pub use sparse_numdiff::sparse_hessian;
pub use sparse_numdiff::sparse_jacobian;
pub use sparse_numdiff::SparseFiniteDiffOptions;
pub use stochastic::minimize_adam;
pub use stochastic::minimize_adamw;
pub use stochastic::minimize_rmsprop;
pub use stochastic::minimize_sgd;
pub use stochastic::minimize_sgd_momentum;
pub use stochastic::minimize_stochastic;
pub use stochastic::AdamOptions;
pub use stochastic::AdamWOptions;
pub use stochastic::DataProvider;
pub use stochastic::InMemoryDataProvider;
pub use stochastic::LearningRateSchedule;
pub use stochastic::MomentumOptions;
pub use stochastic::RMSPropOptions;
pub use stochastic::SGDOptions;
pub use stochastic::StochasticGradientFunction;
pub use stochastic::StochasticMethod;
pub use stochastic::StochasticOptions;
pub use streaming::exponentially_weighted_rls;
pub use streaming::incremental_bfgs;
pub use streaming::incremental_lbfgs;
pub use streaming::incremental_lbfgs_linear_regression;
pub use streaming::kalman_filter_estimator;
pub use streaming::online_gradient_descent;
pub use streaming::online_linear_regression;
pub use streaming::online_logistic_regression;
pub use streaming::real_time_linear_regression;
pub use streaming::recursive_least_squares;
pub use streaming::rolling_window_gradient_descent;
pub use streaming::rolling_window_least_squares;
pub use streaming::rolling_window_linear_regression;
pub use streaming::rolling_window_weighted_least_squares;
pub use streaming::streaming_trust_region_linear_regression;
pub use streaming::streaming_trust_region_logistic_regression;
pub use streaming::IncrementalNewton;
pub use streaming::IncrementalNewtonMethod;
pub use streaming::LinearRegressionObjective;
pub use streaming::LogisticRegressionObjective;
pub use streaming::RealTimeEstimator;
pub use streaming::RealTimeMethod;
pub use streaming::RollingWindowOptimizer;
pub use streaming::StreamingConfig;
pub use streaming::StreamingDataPoint;
pub use streaming::StreamingObjective;
pub use streaming::StreamingOptimizer;
pub use streaming::StreamingStats;
pub use streaming::StreamingTrustRegion;
pub use unconstrained::minimize;
pub use unconstrained::Bounds;
pub use unified_pipeline::presets as unified_presets;
pub use unified_pipeline::UnifiedOptimizationConfig;
pub use unified_pipeline::UnifiedOptimizationResults;
pub use unified_pipeline::UnifiedOptimizer;
pub use visualization::tracking::TrajectoryTracker;
pub use visualization::ColorScheme;
pub use visualization::OptimizationTrajectory;
pub use visualization::OptimizationVisualizer;
pub use visualization::OutputFormat;
pub use visualization::VisualizationConfig;
§Modules
- advanced_coordinator - Advanced Mode Coordinator
- automatic_differentiation - Automatic differentiation for exact gradient and Hessian computation
- benchmarking - Comprehensive benchmarking system for optimization algorithms
- constrained - Constrained optimization algorithms
- distributed - Distributed optimization using MPI for large-scale parallel computation
- distributed_gpu - Distributed GPU optimization combining MPI and GPU acceleration
- error - Error types for the SciRS2 optimization module
- global - Global optimization algorithms
- gpu - GPU acceleration for optimization algorithms
- jit_optimization - Just-in-time compilation and auto-vectorization for optimization
- learned_optimizers - Learned Optimizers Module
- least_squares - Least squares submodule containing specialized algorithms and loss functions
- ml_optimizers - Specialized optimizers for machine learning applications
- multi_objective - Multi-objective optimization algorithms and utilities
- neural_integration - Integration with scirs2-neural for machine learning optimization
- neuromorphic - Neuromorphic Optimization Module
- parallel - Parallel computation utilities for optimization algorithms
- prelude
- quantum_inspired - Quantum-Inspired Optimization Methods
- reinforcement_learning - Reinforcement Learning Optimization Module
- result - Optimization result structures
- roots - Root finding algorithms
- roots_anderson
- roots_krylov
- scalar - Scalar optimization algorithms
- self_tuning - Self-tuning parameter selection for optimization algorithms
- simd_ops - SIMD-accelerated implementations using scirs2-core unified system
- sparse_numdiff - Sparse numerical differentiation for large-scale optimization
- stochastic - Stochastic optimization methods for machine learning and large-scale problems
- streaming - Streaming Optimization Module
- unconstrained - Unconstrained optimization algorithms
- unified_pipeline - Unified optimization pipeline combining all advanced features
- visualization - Visualization tools for optimization trajectories and analysis