Optimizers such as Sgd, Adam, and RMSprop that can optimize neural networks.
Initializing
All the optimizers provide Default implementations, and also provide a way to specify all the relevant hyperparameters explicitly:
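For example, starting from the defaults and then overriding fields (a minimal sketch: the lr and momentum field names and the Momentum::Classic variant are assumptions for illustration, so check each optimizer's own documentation for its actual fields):

// Use all default hyperparameters:
let mut opt: Sgd = Default::default();

// Or specify them explicitly. `lr`, `momentum`, and `Momentum::Classic`
// are assumed names for illustration:
let mut opt: Sgd = Sgd {
    lr: 1e-2,
    momentum: Some(Momentum::Classic(0.9)),
    ..Default::default()
};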
Updating network parameters
This is done via Optimizer::update(), where you pass in a mutable crate::nn::Module and the crate::gradients::Gradients:
let mut model: MyModel = Default::default();
let mut opt: Sgd = Default::default();

// -- snip loss computation --

// backward() produces a gradient for every parameter that influenced the loss:
let gradients: Gradients = loss.backward();

// Apply the gradients to the model's parameters in place:
opt.update(&mut model, gradients);
Structs
Adam: An implementation of the Adam optimizer from "Adam: A Method for Stochastic Optimization".
RMSprop: As described in Hinton, 2012.
RMSpropConfig: Configuration options for RMSprop (see the sketch after this list).
Sgd: Implementation of Stochastic Gradient Descent. Based on pytorch's implementation.
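As a concrete illustration of the config-struct pattern, here is a hedged sketch of constructing RMSprop from its config (the new constructor and the lr field are assumptions; RMSpropConfig's own documentation lists the real fields):

// Hypothetical: `new` and `lr` are assumed names for illustration.
let mut opt: RMSprop = RMSprop::new(RMSpropConfig {
    lr: 1e-3,
    ..Default::default()
});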
Enums
Traits
Optimizer: All optimizers must implement the update function, which takes an object that implements CanUpdateWithGradients and calls CanUpdateWithGradients::update.
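Roughly, the contract looks like the sketch below (the exact generics and signature are assumptions; the trait's own documentation is authoritative):

// A sketch of the shape the trait description implies, not the crate's
// exact API: the optimizer mutates the module using the given gradients.
trait OptimizerSketch {
    fn update<M: CanUpdateWithGradients>(&mut self, module: &mut M, gradients: Gradients);
}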