Struct dfdx::optim::sgd::Sgd

pub struct Sgd {
    pub lr: f32,
    pub momentum: Option<Momentum>,
    /* private fields */
}

Implementation of Stochastic Gradient Descent. Based on PyTorch’s implementation.

Nesterov Momentum is implemented as described in On the importance of initialization and momentum in deep learning.

Example Usage:

use dfdx::prelude::*;

let mut t = Tensor0D::ones();
let mut opt: Sgd = Default::default();

// Trace operations onto a gradient tape, run backprop,
// then apply the SGD update to the parameter tensor.
let gradients = t.trace().backward();
opt.update(&mut t, gradients);

Changing default parameters:

use dfdx::optim::{Sgd, Momentum};

let sgd_no_momentum = Sgd::new(1e-1, None);
let sgd_classic_momentum = Sgd::new(1e-2, Some(Momentum::Classic(0.5)));
let sgd_nesterov_momentum = Sgd::new(1e-3, Some(Momentum::Nesterov(0.25)));

Fields

lr: f32
momentum: Option<Momentum>

Implementations

Trait Implementations

Debug: Formats the value using the given formatter.

Default: Returns the “default value” for a type.

GradientProvider: Retrieves the data associated with p if there is any. This can modify self, for instance if velocities are calculated based on the associated data!

Auto Trait Implementations

Blanket Implementations

Any: Gets the TypeId of self.

Borrow&lt;T&gt;: Immutably borrows from an owned value.

BorrowMut&lt;T&gt;: Mutably borrows from an owned value.

From&lt;T&gt;: Returns the argument unchanged.

Into&lt;U&gt;: Calls U::from(self). That is, this conversion is whatever the implementation of From&lt;T&gt; for U chooses to do.

TryFrom&lt;U&gt;: Performs the conversion; the associated Error type is returned in the event of a conversion error.

TryInto&lt;U&gt;: Performs the conversion; the associated Error type is returned in the event of a conversion error.