Module dfdx::optim


Optimizers such as Sgd, Adam, and RMSprop that can optimize neural networks.

Initializing

All the optimizers provide Default implementations, and also provide a way to specify all the relevant hyperparameters through their corresponding config struct:
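For example, a minimal sketch (the Sgd::new constructor and the lr/momentum fields on SgdConfig are assumptions about the exact API; MyModel is the same placeholder model type as in the update example below):

use dfdx::optim::{Momentum, Sgd, SgdConfig};

// Default hyperparameters:
let mut opt: Sgd<MyModel> = Default::default();

// Explicit hyperparameters via the config struct
// (field names here are assumptions, not verified against the exact release):
let mut opt: Sgd<MyModel> = Sgd::new(SgdConfig {
    lr: 1e-1,
    momentum: Some(Momentum::Nesterov(0.9)),
});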

Updating network parameters

This is done via Optimizer::update(), where you pass in a mutable reference to your crate::nn::Module along with the crate::gradients::Gradients:

let mut model: MyModel = Default::default();
let mut opt: Sgd<MyModel> = Default::default();
// -- snip loss computation --

let gradients: Gradients = loss.backward();
opt.update(&mut model, gradients);
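Note that, as the snippet shows, update takes the Gradients by value while the model is borrowed mutably: each call consumes the gradients from the most recent backward pass, so a fresh backward pass is needed before the next update.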

Structs

Adam
    An implementation of the Adam optimizer from Adam: A Method for Stochastic Optimization.

AdamConfig
    Configuration of hyperparameters for Adam.

RMSprop
    An implementation of the RMSprop optimizer, as described in Hinton, 2012.

RMSpropConfig
    Configuration of hyperparameters for RMSprop.

Sgd
    An implementation of Stochastic Gradient Descent, based on PyTorch's implementation.

SgdConfig
    Configuration of hyperparameters for Sgd.
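Each config struct is passed to the corresponding constructor. A minimal sketch for Adam (the Adam::new constructor and the lr, betas, and eps fields on AdamConfig are assumptions about the exact API; MyModel is the placeholder model type from the examples above):

use dfdx::optim::{Adam, AdamConfig};

// Adam with explicit hyperparameters; field names are assumed, not verified.
let mut opt: Adam<MyModel> = Adam::new(AdamConfig {
    lr: 1e-3,
    betas: [0.9, 0.999],
    eps: 1e-8,
});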

Enums

Momentum
    Momentum used for Sgd.

Traits

Optimizer
    All optimizers must implement the update function, which takes an object implementing CanUpdateWithGradients and calls CanUpdateWithGradients::update.