
Module optim

Optimizers.

This module provides small optimizers that allocate no memory per step; each one updates an Mlp given a set of Gradients.

Design notes:

  • Optimizer state (momentum/Adam moments) lives outside the model.
  • The training loop owns the optimizer state and reuses it across steps.

Structs

Sgd
Stochastic gradient descent with a fixed learning rate.

Enums

Optimizer
Optimizer choice for training.
OptimizerState
Owned optimizer state.