Module stochastic

Stochastic optimization methods for machine learning and large-scale problems

This module provides stochastic optimization algorithms that are particularly well-suited for machine learning, neural networks, and large-scale problems where exact gradients are expensive or noisy.
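
The `minimize_*` entry points re-exported below all revolve around the same idea: repeatedly apply a parameter update driven by a (possibly noisy) gradient estimate. As a free-standing illustration of that idea, here is a minimal sketch of the Adam update rule in plain Rust; it uses textbook default hyperparameters and does not call this crate's API, whose exact signatures are documented on the individual item pages.

```rust
// Plain-Rust sketch of one Adam update step (the algorithm behind
// `minimize_adam`). It does not use this crate's API; the hyperparameters
// below are the usual textbook defaults, not values read from `AdamOptions`.
fn adam_step(x: &mut [f64], grad: &[f64], m: &mut [f64], v: &mut [f64], t: u32, lr: f64) {
    let (b1, b2, eps) = (0.9, 0.999, 1e-8);
    for i in 0..x.len() {
        // Exponential moving averages of the gradient and its square.
        m[i] = b1 * m[i] + (1.0 - b1) * grad[i];
        v[i] = b2 * v[i] + (1.0 - b2) * grad[i] * grad[i];
        // Bias-corrected moment estimates.
        let m_hat = m[i] / (1.0 - b1.powi(t as i32));
        let v_hat = v[i] / (1.0 - b2.powi(t as i32));
        x[i] -= lr * m_hat / (v_hat.sqrt() + eps);
    }
}

fn main() {
    // Minimize f(x) = ||x||^2 / 2, whose gradient is simply x.
    let mut x = vec![1.0, -2.0, 0.5];
    let (mut m, mut v) = (vec![0.0; 3], vec![0.0; 3]);
    for t in 1..=200 {
        let g = x.clone();
        adam_step(&mut x, &g, &mut m, &mut v, t, 0.05);
    }
    println!("approximate minimizer: {x:?}"); // near the origin
}
```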

Re-exports§

pub use adam::minimize_adam;
pub use adam::AdamOptions;
pub use adamw::minimize_adamw;
pub use adamw::AdamWOptions;
pub use momentum::minimize_sgd_momentum;
pub use momentum::MomentumOptions;
pub use rmsprop::minimize_rmsprop;
pub use rmsprop::RMSPropOptions;
pub use sgd::minimize_sgd;
pub use sgd::SGDOptions;

Modules§

adam
Adam (Adaptive Moment Estimation) optimizer
adamw
AdamW (Adam with decoupled weight decay) optimizer
momentum
SGD with Momentum optimizer (update rule sketched after this list)
rmsprop
RMSProp (Root Mean Square Propagation) optimizer
sgd
Stochastic Gradient Descent (SGD) optimization
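
Each module above corresponds to one update rule. For concreteness, the sketch below spells out the SGD-with-momentum step implemented by the `momentum` module; it is a free-standing illustration with textbook hyperparameters, not this crate's `MomentumOptions` defaults or API.

```rust
// Free-standing sketch of the SGD-with-momentum update; it does not use
// this crate's `momentum` module, and the hyperparameters are illustrative.
fn momentum_step(x: &mut [f64], grad: &[f64], velocity: &mut [f64], lr: f64, beta: f64) {
    for i in 0..x.len() {
        // v <- beta * v + g, then x <- x - lr * v
        velocity[i] = beta * velocity[i] + grad[i];
        x[i] -= lr * velocity[i];
    }
}

fn main() {
    // Minimize f(x) = ||x||^2 / 2, whose gradient is x itself.
    let mut x = vec![5.0, -3.0];
    let mut v = vec![0.0; 2];
    for _ in 0..300 {
        let g = x.clone();
        momentum_step(&mut x, &g, &mut v, 0.1, 0.9);
    }
    println!("approximate minimizer: {x:?}"); // converges toward [0, 0]
}
```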

Structs§

BatchGradientWrapper
Wrapper for regular gradient functions
InMemoryDataProvider
Simple in-memory data provider
StochasticOptions
Common options for stochastic optimization

Enums§

LearningRateSchedule
Learning rate schedules (common decay rules are sketched after this list)
StochasticMethod
Stochastic optimization method selection
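
The `LearningRateSchedule` enum selects how the step size evolves during training. As a rough illustration of the two most common decay rules; the variant names below are hypothetical and are not this crate's definition:

```rust
// Hypothetical schedule enum for illustration only; it is not this crate's
// `LearningRateSchedule` definition.
enum Schedule {
    ExponentialDecay { rate: f64 },        // lr = lr0 * rate^epoch
    StepDecay { factor: f64, every: u32 }, // lr = lr0 * factor^(epoch / every)
}

fn learning_rate(lr0: f64, epoch: u32, schedule: &Schedule) -> f64 {
    match schedule {
        Schedule::ExponentialDecay { rate } => lr0 * rate.powi(epoch as i32),
        Schedule::StepDecay { factor, every } => lr0 * factor.powi((epoch / every) as i32),
    }
}

fn main() {
    let schedules = [
        Schedule::ExponentialDecay { rate: 0.95 },
        Schedule::StepDecay { factor: 0.5, every: 10 },
    ];
    for sched in &schedules {
        for epoch in [0u32, 10, 20] {
            println!("epoch {epoch}: lr = {:.4}", learning_rate(0.1, epoch, sched));
        }
    }
}
```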

Traits§

DataProvider
Data provider trait for stochastic optimization
StochasticGradientFunction
Stochastic gradient function trait (both traits are sketched after this list)
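
To give a feel for how these two abstractions fit together, the sketch below pairs a toy in-memory batch source with a mini-batch gradient computation. The trait and method names are hypothetical stand-ins; they are not the actual `DataProvider` or `StochasticGradientFunction` definitions, which are documented on their own pages.

```rust
// Hypothetical stand-ins for the two traits above; names and signatures are
// illustrative only, not this crate's `DataProvider` / `StochasticGradientFunction`.
trait BatchSource {
    fn num_samples(&self) -> usize;
    fn batch(&self, indices: &[usize]) -> Vec<(f64, f64)>; // (input, target) pairs
}

struct InMemory {
    data: Vec<(f64, f64)>,
}

impl BatchSource for InMemory {
    fn num_samples(&self) -> usize {
        self.data.len()
    }
    fn batch(&self, indices: &[usize]) -> Vec<(f64, f64)> {
        indices.iter().map(|&i| self.data[i]).collect()
    }
}

fn main() {
    // Fit y = w * x by least squares; the mini-batch gradient in w is the
    // mean over the batch of 2 * x * (w * x - y).
    let provider = InMemory { data: vec![(1.0, 2.0), (2.0, 4.0), (3.0, 6.1)] };
    println!("{} samples available", provider.num_samples());

    let batch = provider.batch(&[0, 2]); // indices picked at random in practice
    let w = 1.5;
    let grad: f64 = batch.iter().map(|&(x, y)| 2.0 * x * (w * x - y)).sum::<f64>()
        / batch.len() as f64;
    println!("mini-batch gradient at w = {w}: {grad:.3}");
}
```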

Functions§

clip_gradients
Clip gradients to prevent them from exploding (the global-norm rule is sketched after this list)
create_stochastic_options_for_problem
Create stochastic options optimized for specific problem types
generate_batch_indices
Generate random batch indices
minimize_stochastic
Main stochastic optimization function
update_learning_rate
Update learning rate according to schedule
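
Gradient clipping here refers to the standard global-norm rule: if the gradient's Euclidean norm exceeds a threshold, rescale it so its norm equals that threshold. A free-standing sketch of that rule follows; the actual `clip_gradients` signature may differ.

```rust
// Free-standing global-norm gradient clipping; the actual `clip_gradients`
// signature in this crate may differ (this is illustrative only).
fn clip_by_global_norm(grad: &mut [f64], max_norm: f64) {
    let norm = grad.iter().map(|g| g * g).sum::<f64>().sqrt();
    if norm > max_norm {
        let scale = max_norm / norm;
        for g in grad.iter_mut() {
            *g *= scale; // rescaled gradient now has norm exactly `max_norm`
        }
    }
}

fn main() {
    let mut grad = vec![3.0, 4.0]; // Euclidean norm 5
    clip_by_global_norm(&mut grad, 1.0);
    println!("{grad:?}"); // [0.6, 0.8], norm 1
}
```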