Module dfdx::optim


Optimizers such as Sgd, Adam, and RMSprop that can optimize neural networks.

Initializing

All the optimizers provide Default implementations, and also let you specify all the relevant parameters through the corresponding config object:
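For example, a minimal sketch of configuring Sgd by hand rather than using the defaults (this assumes the SgdConfig fields lr, momentum, and weight_decay and the Momentum enum; check the Structs and Enums listings below for the exact config types in your version):

let model = MyModel::build_on_device(&dev);

// Explicit configuration instead of Default::default()
let mut opt = Sgd::new(
    &model,
    SgdConfig {
        lr: 1e-2,                                // learning rate
        momentum: Some(Momentum::Nesterov(0.9)), // optional momentum variant
        weight_decay: None,                      // no weight decay
    },
);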

Updating network parameters

This is done via Optimizer::update(), where you pass in a mutable reference to the crate::nn::Module and the crate::tensor::Gradients:

let mut model = MyModel::build_on_device(&dev);
let mut grads = model.alloc_grads();
let mut opt = Sgd::new(&model, Default::default());
// -- snip loss computation --

grads = loss.backward();        // run backprop and get the accumulated gradients
opt.update(&mut model, &grads); // apply one optimizer step to the model's parameters
model.zero_grads(&mut grads);   // reset gradients before the next iteration
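Note that Optimizer::update reports an error when a parameter was not present in the Gradients (see the error type listed under Enums below), so real training loops typically handle or unwrap its result. A minimal sketch, assuming update returns a Result:

opt.update(&mut model, &grads)
    .expect("some parameters were missing from the gradients");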

Re-exports

Modules

Structs

Enums

  • An error indicating that a parameter was not used in gradient computation, and was therefore not present in Gradients during an update.

Traits

  • All optimizers must implement the update function, which takes an M and updates all of its parameters.