Crate candle_optimisers
Optimisers for use with the candle framework for lightweight machine learning.
Apart from LBFGS, these all implement the candle_nn::optim::Optimizer trait from candle-nn.
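As a minimal usage sketch of that trait-based workflow: the `adam::Adam` optimiser and `ParamsAdam` config names below follow the naming pattern of the module list but are assumptions, so check the module docs for the exact items.

```rust
use candle_core::{Device, Result, Tensor, Var};
use candle_nn::optim::Optimizer;
// Assumed names, following the naming pattern of this crate's modules.
use candle_optimisers::adam::{Adam, ParamsAdam};

fn main() -> Result<()> {
    let device = Device::Cpu;
    // A single trainable parameter, w, initialised to 3.0.
    let w = Var::from_tensor(&Tensor::new(&[3.0f32], &device)?)?;
    let target = Tensor::new(&[2.0f32], &device)?;

    // Construct the optimiser over the trainable Vars via the Optimizer trait.
    let mut opt = Adam::new(vec![w.clone()], ParamsAdam::default())?;

    for _ in 0..200 {
        // Loss: (w - 2)^2, so the optimiser should drive w towards 2.
        let loss = w.as_tensor().sub(&target)?.sqr()?.sum_all()?;
        // backward_step (from the Optimizer trait) computes gradients
        // and applies one parameter update.
        opt.backward_step(&loss)?;
    }
    println!("{:?}", w.as_tensor().to_vec1::<f32>()?);
    Ok(())
}
```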
Modules
- Adadelta optimiser
- Adagrad optimiser
- Adam optimiser (including AdamW)
- Adamax optimiser
- Stochastic Gradient Descent
- Limited memory Broyden–Fletcher–Goldfarb–Shanno algorithm
- NAdam optimiser: Adam with Nesterov momentum
- RAdam optimiser
- RMSProp algorithm
Enums
- Method of weight decay to use
- Outcomes of an optimiser step for methods such as LBFGS
- Type of momentum to use
Traits
- Trait for optimisers such as LBFGS that need to evaluate the loss and its gradient
- Trait for models: needed by optimisers, such as LBFGS, that must recalculate the loss (see the sketch after this list)
- Trait for optimisers to expose their parameters
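Because LBFGS must re-evaluate the loss during its line search, the model itself is handed to the optimiser through the model trait above. A minimal sketch of implementing it follows; the `Model` import path and its `loss` signature are assumptions based on the trait descriptions above, so consult the trait pages for the exact definition.

```rust
use candle_core::{Result, Tensor, Var};
// Assumed import path and method signature for the model trait described above.
use candle_optimisers::Model;

struct Quadratic {
    w: Var,
}

impl Model for Quadratic {
    // LBFGS calls this to recompute the loss (and, through candle's autodiff,
    // its gradient) as often as the line search requires.
    fn loss(&self) -> Result<Tensor> {
        self.w.as_tensor().sqr()?.sum_all()
    }
}
```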