pub struct Sgd<M> {
pub cfg: SgdConfig,
/* private fields */
}
Implementation of Stochastic Gradient Descent, based on PyTorch's implementation.
Nesterov momentum is implemented as described in On the importance of initialization and momentum in deep learning.
Example Usage
Constructing using default:
let mut opt: Sgd<Model> = Default::default();
Constructing using new:
let mut opt: Sgd<Model> = Sgd::new(SgdConfig {
    lr: 1e-3,
    momentum: Some(Momentum::Classic(0.5)),
});
See the module-level documentation at crate::optim for examples of how to use an optimizer.
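To make the Classic(0.5) configuration above concrete, here is a hedged, self-contained sketch of the classic-momentum update rule (v = beta * v + g; w -= lr * v). The function and names are hypothetical for illustration, not dfdx's actual internals:

```rust
// Hypothetical sketch of one classic-momentum SGD step:
//   v = beta * v + g
//   w = w - lr * v
fn sgd_step(weights: &mut [f32], grads: &[f32], velocities: &mut [f32], lr: f32, beta: f32) {
    for ((w, &g), v) in weights.iter_mut().zip(grads).zip(velocities.iter_mut()) {
        *v = beta * *v + g;
        *w -= lr * *v;
    }
}

fn main() {
    let mut w = [1.0f32];
    let mut v = [0.0f32];
    let g = [0.5f32];
    // Two steps with lr = 1e-1 and beta = 0.5 (the momentum from the example config).
    sgd_step(&mut w, &g, &mut v, 0.1, 0.5);
    sgd_step(&mut w, &g, &mut v, 0.1, 0.5);
    println!("{:?}", w);
}
```

With momentum: None the second term would reduce to plain w -= lr * g; Nesterov momentum applies the velocity lookahead differently, per the paper cited above.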
Fields
cfg: SgdConfig
Hyperparameter configuration
Trait Implementations
impl<M> GradientProvider for Sgd<M>
fn gradient<P>(&mut self, p: &P) -> Box<P::Array>
where
    P: HasUniqueId + HasArrayType<Dtype = f32> + HasDevice,
Retrieves the data associated with p, if there is any. This can modify self, for instance if velocities are calculated based on the associated data!
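One way to picture why gradient takes &mut self: a momentum optimizer must cache a velocity per parameter across calls. The sketch below is hypothetical (the VelocityCache type and u64 ids are assumptions for illustration, not dfdx's real representation, which keys state by HasUniqueId):

```rust
use std::collections::HashMap;

// Hypothetical per-parameter velocity store, keyed by a unique parameter id.
struct VelocityCache {
    velocities: HashMap<u64, Vec<f32>>,
}

impl VelocityCache {
    fn new() -> Self {
        Self { velocities: HashMap::new() }
    }

    // Lazily creates the velocity for parameter `id`, applies v = beta * v + g,
    // and returns a copy of the updated velocity. Mutates self, which is why
    // a GradientProvider-style API needs `&mut self`.
    fn momentum_update(&mut self, id: u64, grad: &[f32], beta: f32) -> Vec<f32> {
        let v = self
            .velocities
            .entry(id)
            .or_insert_with(|| vec![0.0; grad.len()]);
        for (vi, &gi) in v.iter_mut().zip(grad) {
            *vi = beta * *vi + gi;
        }
        v.clone()
    }
}

fn main() {
    let mut cache = VelocityCache::new();
    let g = [1.0f32, 2.0];
    let first = cache.momentum_update(42, &g, 0.9);
    let second = cache.momentum_update(42, &g, 0.9);
    println!("{:?} {:?}", first, second);
}
```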
Auto Trait Implementations
impl<M> !RefUnwindSafe for Sgd<M>
impl<M> !Send for Sgd<M>
impl<M> !Sync for Sgd<M>
impl<M> Unpin for Sgd<M>
impl<M> !UnwindSafe for Sgd<M>
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.