An implementation of the Adam optimizer from the paper Adam: A Method for Stochastic Optimization (Kingma & Ba, 2014).
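For reference, the update rule from the paper, where g_t is the gradient at step t and lr, the betas, and eps correspond to the fields below:

m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\hat{m}_t = m_t / (1 - \beta_1^t), \quad \hat{v}_t = v_t / (1 - \beta_2^t)
\theta_t = \theta_{t-1} - \mathrm{lr} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \mathrm{eps})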
Example Usage:
use dfdx::prelude::*;
// A 0d (scalar) tensor to optimize.
let mut t = Tensor0D::ones();
// Adam with the default hyperparameters (see below).
let mut opt: Adam = Default::default();
// Trace the computation and run backprop to collect gradients.
let gradients = t.trace().backward();
// Apply one Adam update to the parameter using those gradients.
opt.update(&mut t, gradients);
Changing the default parameters:
use dfdx::optim::Adam;
// Arguments are (lr, betas, eps).
let adam = Adam::new(1e-2, [0.5, 0.25], 1e-6);
Fields
lr: f32
The learning rate.
betas: [f32; 2]
The [beta1, beta2] coefficients controlling the decay of the first and second moment estimates.
eps: f32
The epsilon term added to the denominator for numerical stability.
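All three fields are public, so hyperparameters can also be adjusted on an existing optimizer. A minimal sketch, assuming direct field assignment compiles against this version of dfdx:

use dfdx::optim::Adam;
// Start from the paper defaults, then lower the learning rate.
let mut opt: Adam = Default::default();
opt.lr = 1e-4;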
Implementations
impl Adam
fn new(lr: f32, betas: [f32; 2], eps: f32) -> Self
Constructs Adam with the given learning rate, betas, and epsilon (signature inferred from the example above).
Trait Implementations
impl Default for Adam
Uses the default parameters suggested in the paper: lr=1e-3, beta1=0.9, beta2=0.999, and epsilon=1e-8.
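A minimal sketch of those defaults, assuming the public fields listed above:

use dfdx::optim::Adam;
let opt: Adam = Default::default();
// The hyperparameters suggested in the paper.
assert_eq!(opt.lr, 1e-3);
assert_eq!(opt.betas, [0.9, 0.999]);
assert_eq!(opt.eps, 1e-8);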
impl GradientProvider for Adam
Auto Trait Implementations
impl !RefUnwindSafe for Adam
impl !Send for Adam
impl !Sync for Adam
impl Unpin for Adam
impl !UnwindSafe for Adam
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.