An implementation of the Adam optimizer from the paper Adam: A Method for Stochastic Optimization.
Example Usage:

```rust
// Assumes the dfdx prelude brings Tensor0D, Adam, etc. into scope.
use dfdx::prelude::*;

let mut t = Tensor0D::ones();
let mut opt: Adam = Default::default();
let gradients = t.trace().backward();
opt.update(&mut t, gradients);
```
Changing default parameters:

```rust
let adam = Adam::new(1e-2, [0.5, 0.25], 1e-6);
```
Fields
lr: f32
Learning rate
betas: [f32; 2]
Betas (β1, β2) from the Adam paper
eps: f32
Epsilon for numerical stability
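As a rough illustration of how these three fields enter the update rule from the paper, here is a minimal sketch of one Adam step over flat slices. This is illustrative only, not this crate's internal implementation; the parameter/gradient slices, moment buffers, and step counter `t` are assumptions:

```rust
/// Minimal sketch of one Adam step (illustrative only).
/// `m`/`v` are the running first/second moment estimates,
/// `t` is the 1-based step count.
fn adam_step(
    param: &mut [f32],
    grad: &[f32],
    m: &mut [f32],
    v: &mut [f32],
    t: i32,
    lr: f32,         // corresponds to the `lr` field
    betas: [f32; 2], // corresponds to the `betas` field
    eps: f32,        // corresponds to the `eps` field
) {
    let [b1, b2] = betas;
    for i in 0..param.len() {
        // Exponentially decayed first/second moment estimates.
        m[i] = b1 * m[i] + (1.0 - b1) * grad[i];
        v[i] = b2 * v[i] + (1.0 - b2) * grad[i] * grad[i];
        // Bias correction for the zero-initialized moments.
        let m_hat = m[i] / (1.0 - b1.powi(t));
        let v_hat = v[i] / (1.0 - b2.powi(t));
        // `eps` guards the division when `v_hat` is near zero.
        param[i] -= lr * m_hat / (v_hat.sqrt() + eps);
    }
}
```

The bias-correction terms `1 - beta^t` are why the step counter matters: they compensate for the zero-initialized moments during the first few steps.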
Implementations

impl Adam

fn new(lr: f32, betas: [f32; 2], eps: f32) -> Self

Constructs Adam with the given learning rate, betas, and epsilon (as used in the example above).
Trait Implementations
impl Default for Adam
Uses the default parameters suggested in the paper: lr=1e-3, beta1=0.9, beta2=0.999, and epsilon=1e-8.
fn default() -> Self
- Self::lr: 1e-3
- Self::betas: [0.9, 0.999]
- Self::eps: 1e-8
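In other words, assuming the Adam::new signature used in the example above, the default is equivalent to spelling the paper's values out:

```rust
let a: Adam = Default::default();
let b = Adam::new(1e-3, [0.9, 0.999], 1e-8);
// `a` and `b` configure identical lr, betas, and eps.
```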
impl GradientProvider for Adam
fn gradient<P>(&mut self, p: &P) -> Box<P::Array>
where
    P: HasUniqueId + HasArrayType<Dtype = f32> + HasDevice,
Retrieves the data associated with p, if there is any. This can modify self, for instance if velocities are calculated based on the associated data!
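To make the "this can modify self" behavior concrete, here is a self-contained sketch of the pattern in plain Rust. The Param type, its id field, and MomentumProvider are stand-ins invented for illustration, not dfdx types: per-parameter state is keyed by a unique id and updated on every gradient query, just as the note about velocities describes.

```rust
use std::collections::HashMap;

/// Stand-in for a parameter with a stable unique id and a raw gradient.
struct Param {
    id: usize,
    grad: Vec<f32>,
}

/// Simplified analogue of a gradient provider whose `gradient` call
/// mutates `self`: here a per-parameter velocity, as in momentum SGD.
struct MomentumProvider {
    momentum: f32,
    velocities: HashMap<usize, Vec<f32>>,
}

impl MomentumProvider {
    fn gradient(&mut self, p: &Param) -> Vec<f32> {
        let mu = self.momentum;
        // Lazily create state for parameters seen for the first time.
        let v = self
            .velocities
            .entry(p.id)
            .or_insert_with(|| vec![0.0; p.grad.len()]);
        // Update the stored velocity from the data associated with `p`,
        // then hand back the transformed gradient.
        for (vi, &gi) in v.iter_mut().zip(&p.grad) {
            *vi = mu * *vi + gi;
        }
        v.clone()
    }
}
```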
Auto Trait Implementations
impl !RefUnwindSafe for Adam
impl !Send for Adam
impl !Sync for Adam
impl Unpin for Adam
impl !UnwindSafe for Adam
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.