Stochastic optimization methods for machine learning and large-scale problems
This module provides stochastic optimization algorithms that are particularly well-suited for machine learning, neural networks, and large-scale problems where exact gradients are expensive or noisy.
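As a rough, self-contained illustration of the kind of algorithm this module provides, the sketch below runs mini-batch SGD on a toy least-squares problem. It deliberately uses no items from this module; the data, learning rate, and batch size are made up for the example.

```rust
// Standalone mini-batch SGD on a toy least-squares fit (not this crate's API).
fn main() {
    // Synthetic data: y = 3.0 * x; fit a single weight w.
    let xs: Vec<f64> = (0..100).map(|i| i as f64 / 100.0).collect();
    let ys: Vec<f64> = xs.iter().map(|x| 3.0 * x).collect();

    let mut w = 0.0_f64;
    let lr = 0.5;        // fixed learning rate
    let batch_size = 10; // mini-batch size

    for _epoch in 0..200 {
        for (xb, yb) in xs.chunks(batch_size).zip(ys.chunks(batch_size)) {
            // Gradient of 0.5 * (w*x - y)^2, averaged over the mini-batch.
            let grad: f64 = xb.iter().zip(yb)
                .map(|(x, y)| (w * x - y) * x)
                .sum::<f64>() / xb.len() as f64;
            w -= lr * grad;
        }
    }
    println!("fitted w = {w:.4} (target 3.0)");
}
```

The optimizers re-exported below (SGD, momentum, RMSProp, Adam, AdamW) follow the same mini-batch loop but differ in how they scale and accumulate the step per parameter and per iteration.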
Re-exports§
pub use adam::minimize_adam;
pub use adam::AdamOptions;
pub use adamw::minimize_adamw;
pub use adamw::AdamWOptions;
pub use momentum::minimize_sgd_momentum;
pub use momentum::MomentumOptions;
pub use rmsprop::minimize_rmsprop;
pub use rmsprop::RMSPropOptions;
pub use sgd::minimize_sgd;
pub use sgd::SGDOptions;
Modules§
- adam
- ADAM (Adaptive Moment Estimation) optimizer
- adamw
- AdamW (Adam with decoupled Weight Decay) optimizer
- momentum
- SGD with Momentum optimizer
- rmsprop
- RMSProp (Root Mean Square Propagation) optimizer
- sgd
- Stochastic Gradient Descent (SGD) optimization
Structs§
- BatchGradientWrapper
- Wrapper for regular gradient functions
- InMemoryDataProvider
- Simple in-memory data provider
- StochasticOptions
- Common options for stochastic optimization
Enums§
- LearningRateSchedule
- Learning rate schedules (sketched below)
- StochasticMethod
- Stochastic optimization method selection
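The LearningRateSchedule enum controls how the step size changes over the course of training. A minimal sketch of the idea follows; the variants and fields are assumptions for illustration, not this crate's actual definition.

```rust
// Illustrative learning-rate schedules; the variants and fields are assumed,
// not this crate's `LearningRateSchedule`.
enum Schedule {
    Constant,
    Exponential { decay: f64 },         // base_lr * decay^epoch
    Step { factor: f64, every: usize }, // multiply by `factor` every `every` epochs
}

fn learning_rate(base_lr: f64, schedule: &Schedule, epoch: usize) -> f64 {
    match schedule {
        Schedule::Constant => base_lr,
        Schedule::Exponential { decay } => base_lr * decay.powi(epoch as i32),
        Schedule::Step { factor, every } => base_lr * factor.powi((epoch / *every) as i32),
    }
}

fn main() {
    let sched = Schedule::Step { factor: 0.5, every: 10 };
    for epoch in [0, 10, 20, 30] {
        println!("epoch {epoch}: lr = {:.4}", learning_rate(0.1, &sched, epoch));
    }
}
```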
Traits§
- DataProvider
- Data provider trait for stochastic optimization
- StochasticGradientFunction
- Stochastic gradient function trait
Functions§
- clip_gradients
- Clip gradients to prevent exploding gradients (see the sketch after this list)
- create_stochastic_options_for_problem
- Create stochastic options optimized for specific problem types
- generate_batch_indices
- Generate random batch indices
- minimize_stochastic
- Main stochastic optimization function
- update_learning_rate
- Update learning rate according to schedule
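Gradient clipping, as performed by clip_gradients, is typically done by rescaling the gradient whenever its norm exceeds a threshold. Below is a minimal sketch of clipping by global L2 norm; the function name and signature are illustrative, not this module's actual API.

```rust
// Clip a gradient vector so its L2 norm does not exceed `max_norm`.
// Illustrative only; not this module's `clip_gradients` signature.
fn clip_by_global_norm(grad: &mut [f64], max_norm: f64) {
    let norm = grad.iter().map(|g| g * g).sum::<f64>().sqrt();
    if norm > max_norm {
        let scale = max_norm / norm;
        for g in grad.iter_mut() {
            *g *= scale; // rescaled gradient now has norm exactly `max_norm`
        }
    }
}

fn main() {
    let mut grad = vec![3.0, 4.0]; // L2 norm = 5.0
    clip_by_global_norm(&mut grad, 1.0);
    println!("{grad:?}"); // [0.6, 0.8]
}
```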