pub struct Adam { /* private fields */ }
Adam optimiser
This includes AdamW via the use of decoupled weight decay
Described in Adam: A Method for Stochastic Optimization and Decoupled Weight Decay Regularization
The AMSGrad variant is also implemented, described in On the Convergence of Adam and Beyond
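A minimal usage sketch follows. The module paths (candle_optimisers::adam), the ParamsAdam field names, and the availability of a Default implementation are assumptions not confirmed by this page; the trait methods used (new, backward_step) are those listed below.

use candle_core::{DType, Device, Result, Var};
use candle_optimisers::adam::{Adam, ParamsAdam};
use candle_optimisers::Optimizer;

fn main() -> Result<()> {
    // One trainable variable on the CPU.
    let w = Var::zeros((2, 2), DType::F32, &Device::Cpu)?;

    // Hypothetical configuration: `lr` is assumed to be a field of ParamsAdam;
    // the remaining hyper-parameters (betas, eps, weight decay, AMSGrad) are
    // left at their assumed defaults.
    let params = ParamsAdam { lr: 1e-3, ..Default::default() };

    let mut optimiser = Adam::new(vec![w.clone()], params)?;

    // Toy scalar loss: sum of squared weights.
    let loss = w.as_tensor().sqr()?.sum_all()?;

    // Compute gradients and apply one Adam update.
    optimiser.backward_step(&loss)?;
    Ok(())
}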
Implementations
Trait Implementations
impl OptimParams for Adam

fn set_params(&mut self, config: Self::Config)
Set the parameters for the optimiser
Warning
As the AMSGrad variant requires tracking an additional tensor, this setting cannot be changed once set initially on creation of the optimiser.
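As a hedged illustration, the sketch below swaps in a lower learning rate on an existing optimiser; the ParamsAdam field names and Default implementation are assumptions, and the AMSGrad flag is left untouched since, per the warning above, it cannot be changed after construction.

use candle_optimisers::adam::{Adam, ParamsAdam};
use candle_optimisers::OptimParams;

// Hypothetical helper: replace the hyper-parameters of a live optimiser.
// Field names on ParamsAdam are assumptions; the AMSGrad setting is left at
// its default because it cannot be changed here.
fn lower_learning_rate(optimiser: &mut Adam) {
    let config = ParamsAdam { lr: 1e-4, ..Default::default() };
    optimiser.set_params(config);
}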
impl Optimizer for Adam
type Config = ParamsAdam
fn new(vars: Vec<Var>, params: ParamsAdam) -> Result<Self>
fn learning_rate(&self) -> f64
fn step(&mut self, grads: &GradStore) -> Result<()>
fn set_learning_rate(&mut self, lr: f64)
fn empty(config: Self::Config) -> Result<Self, Error>
fn backward_step(&mut self, loss: &Tensor) -> Result<(), Error>
fn from_slice(vars: &[&Var], config: Self::Config) -> Result<Self, Error>
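A hedged training-loop sketch using the methods above: gradients are computed explicitly with Tensor::backward (which yields a GradStore in candle), passed to step, and the learning rate is decayed part-way through via set_learning_rate. Crate paths, the toy loss, and the ParamsAdam Default implementation are assumptions.

use candle_core::{DType, Device, Result, Var};
use candle_optimisers::adam::{Adam, ParamsAdam};
use candle_optimisers::Optimizer;

fn train() -> Result<()> {
    let w = Var::ones((2, 2), DType::F32, &Device::Cpu)?;
    let mut optimiser = Adam::new(vec![w.clone()], ParamsAdam::default())?;

    for epoch in 0..10 {
        // Toy scalar loss: sum of squared weights.
        let loss = w.as_tensor().sqr()?.sum_all()?;
        let grads = loss.backward()?;   // GradStore over the tracked Vars
        optimiser.step(&grads)?;        // one Adam update

        // Decay the learning rate half-way through training.
        if epoch == 4 {
            let lr = optimiser.learning_rate();
            optimiser.set_learning_rate(lr * 0.1);
        }
    }
    Ok(())
}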
Auto Trait Implementations
impl Freeze for Adam
impl !RefUnwindSafe for Adam
impl Send for Adam
impl Sync for Adam
impl Unpin for Adam
impl !UnwindSafe for Adam
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.