Implementation of Stochastic Gradient Descent, based on PyTorch's implementation.
Nesterov momentum is implemented as described in On the importance of initialization and momentum in deep learning (Sutskever et al., 2013).
Example usage:

```rust
use dfdx::prelude::*;

let mut t = Tensor0D::ones();
let mut opt: Sgd = Default::default();
let gradients = t.trace().backward();
opt.update(&mut t, gradients);
```
Changing the default parameters:

```rust
use dfdx::optim::{Sgd, Momentum};

let sgd_no_momentum = Sgd::new(1e-1, None);
let sgd_classic_momentum = Sgd::new(1e-2, Some(Momentum::Classic(0.5)));
let sgd_nesterov_momentum = Sgd::new(1e-3, Some(Momentum::Nesterov(0.25)));
Fields

lr: f32 (the learning rate)
momentum: Option<Momentum> (optional momentum configuration; None disables momentum)
Implementations
Trait Implementations
Auto Trait Implementations
impl !RefUnwindSafe for Sgd
impl !Send for Sgd
impl !Sync for Sgd
impl Unpin for Sgd
impl !UnwindSafe for Sgd
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.