pub struct Sgd<M> {
pub cfg: SgdConfig,
/* private fields */
}
Implementation of Stochastic Gradient Descent, based on PyTorch’s implementation.
Nesterov momentum is implemented as described in On the importance of initialization and momentum in deep learning.
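The classic-momentum variant of the update rule referenced above can be sketched in plain Rust. This is an illustrative example of the math only, not dfdx’s actual implementation; the function name `sgd_step` and the flat-slice representation of parameters are assumptions made for the sketch:

```rust
// Illustrative SGD step with classic momentum (PyTorch-style formulation):
//   v <- mu * v + g
//   p <- p - lr * v
// NOTE: this is a sketch for exposition, not the crate's real update code.
fn sgd_step(params: &mut [f32], grads: &[f32], velocity: &mut [f32], lr: f32, mu: f32) {
    for ((p, &g), v) in params.iter_mut().zip(grads).zip(velocity.iter_mut()) {
        *v = mu * *v + g; // accumulate the velocity from the gradient
        *p -= lr * *v;    // move the parameter along the velocity
    }
}

fn main() {
    let mut params = [1.0f32];
    let mut velocity = [0.0f32];
    let grads = [0.5f32];
    // Two steps with lr = 0.1 and momentum mu = 0.5.
    sgd_step(&mut params, &grads, &mut velocity, 0.1, 0.5);
    sgd_step(&mut params, &grads, &mut velocity, 0.1, 0.5);
    println!("params = {:?}, velocity = {:?}", params, velocity);
}
```

With momentum `Some(Momentum::Classic(0.5))` as in the example below, the second step moves farther than the first because the velocity accumulates across steps.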
Example Usage
Constructing using default:
let mut opt: Sgd<Model> = Default::default();
Constructing using new:
let mut opt: Sgd<Model> = Sgd::new(SgdConfig {
lr: 1e-3,
momentum: Some(Momentum::Classic(0.5)),
});
See the module-level documentation at crate::optim for examples of how to use an optimizer.
Fields
cfg: SgdConfig
Hyperparameter configuration
Implementations
Trait Implementations
Auto Trait Implementations
impl<M> !RefUnwindSafe for Sgd<M>
impl<M> !Send for Sgd<M>
impl<M> !Sync for Sgd<M>
impl<M> Unpin for Sgd<M>
impl<M> !UnwindSafe for Sgd<M>
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
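The blanket impl means any `T` can be mutably borrowed as itself, so generic code bounded on `BorrowMut<T>` accepts both a plain value and owning wrappers such as `Box<T>`. A minimal standalone illustration (the `bump` helper is a hypothetical function for the example, not part of this crate):

```rust
use std::borrow::BorrowMut;

// Accepts anything that can hand out an &mut i32: a bare i32 (via the
// blanket `impl<T> BorrowMut<T> for T`) or a Box<i32> (via Box's impl).
fn bump(mut x: impl BorrowMut<i32>) -> i32 {
    *x.borrow_mut() += 1;
    *x.borrow_mut()
}

fn main() {
    assert_eq!(bump(41), 42);           // plain i32 through the blanket impl
    assert_eq!(bump(Box::new(41)), 42); // Box<i32> through Box's BorrowMut impl
    println!("ok");
}
```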