Crate yscv_optim

Optimizers and training helpers for yscv models.

Structs§

Adagrad
Adagrad optimizer with optional L2 weight decay.
Adam
Adam optimizer with optional L2 weight decay.
AdamW
AdamW optimizer with decoupled weight decay.
CosineAnnealingLr
Cosine annealing learning-rate scheduler.
CosineAnnealingWarmRestarts
Cosine annealing with warm restarts learning-rate scheduler.
CyclicLr
Cyclic learning-rate scheduler with triangular policy.
ExponentialLr
Exponential learning-rate scheduler.
Lamb
Layer-wise Adaptive Moments optimizer for Batch training (LAMB).
LambdaLr
Lambda learning-rate scheduler.
Lars
Layer-wise Adaptive Rate Scaling (LARS) optimizer.
LinearWarmupLr
Linear warmup learning-rate scheduler.
Lookahead
Lookahead optimizer wrapper.
MultiStepLr
Multi-step learning-rate scheduler.
OneCycleLr
One-cycle learning-rate scheduler with linear warmup and linear cooldown.
PolynomialDecayLr
Polynomial decay learning-rate scheduler.
RAdam
RAdam (Rectified Adam) optimizer with variance rectification.
ReduceLrOnPlateau
Reduce learning rate when a metric has stopped improving.
RmsProp
RMSProp optimizer with optional momentum, weight decay, and centered variance.
Sgd
Stochastic gradient descent optimizer with optional momentum and weight decay.
StepLr
Piecewise constant learning-rate scheduler.
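As an illustration of the kind of update these optimizers perform, here is a minimal, self-contained sketch of one SGD step with momentum and L2 weight decay. The free function over flat slices is hypothetical; the crate's `Sgd` type and `StepOptimizer` trait presumably expose their own interface.

```rust
/// Illustrative sketch (not this crate's API): one SGD step with momentum
/// and L2 weight decay, i.e. v = mu * v + (g + wd * w); w = w - lr * v.
fn sgd_step(
    params: &mut [f32],
    grads: &[f32],
    velocity: &mut [f32],
    lr: f32,
    momentum: f32,
    weight_decay: f32,
) {
    for ((w, &g), v) in params.iter_mut().zip(grads).zip(velocity.iter_mut()) {
        let g = g + weight_decay * *w; // L2 decay folded into the gradient
        *v = momentum * *v + g;        // momentum buffer
        *w -= lr * *v;                 // parameter update
    }
}
```

Adam, AdamW, and the other adaptive optimizers follow the same per-parameter shape but additionally track running first and second moments of the gradient.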

Enums§

OptimError
Errors returned by optimizer configuration and update steps.

Constants§

CRATE_ID

Traits§

LearningRate
Shared interface for getting and setting an optimizer's learning rate.
LrScheduler
Scheduler abstraction for stateful learning-rate policies.
StepOptimizer
Trait for optimizers that support a per-parameter step update.
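To make the scheduler abstraction concrete, here is a sketch of the cosine annealing policy behind `CosineAnnealingLr`, written as a plain function rather than the crate's `LrScheduler` trait (whose actual shape is not shown on this page):

```rust
/// Illustrative sketch (not this crate's API): cosine annealing,
/// lr(t) = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / t_max)) / 2.
fn cosine_annealing_lr(step: usize, t_max: usize, eta_max: f64, eta_min: f64) -> f64 {
    let t = step.min(t_max) as f64; // clamp past the end of the schedule
    eta_min + 0.5 * (eta_max - eta_min) * (1.0 + (std::f64::consts::PI * t / t_max as f64).cos())
}
```

The rate starts at `eta_max`, decays along a half cosine, and reaches `eta_min` at `t_max`; the warm-restarts variant resets `step` to zero at each restart.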

Functions§

clip_grad_norm_
Clips the total norm of gradients for the given nodes in-place.
clip_grad_value_
Clamps every gradient element to the range [-max_val, max_val] in-place.
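The global-norm clipping that `clip_grad_norm_` describes can be sketched as follows. The function below operates on flat `Vec<f32>` gradients and is hypothetical; the crate's version works on its own node type and may differ in signature:

```rust
/// Illustrative sketch (not this crate's signature): if the total L2 norm of
/// all gradients exceeds `max_norm`, scale every gradient by
/// max_norm / (total_norm + eps). Returns the pre-clip total norm.
fn clip_grad_norm(grads: &mut [Vec<f32>], max_norm: f32) -> f32 {
    let total_norm = grads
        .iter()
        .flat_map(|g| g.iter())
        .map(|x| x * x)
        .sum::<f32>()
        .sqrt();
    if total_norm > max_norm {
        let scale = max_norm / (total_norm + 1e-6); // eps guards against division by zero
        for g in grads.iter_mut() {
            for x in g.iter_mut() {
                *x *= scale;
            }
        }
    }
    total_norm
}
```

By contrast, `clip_grad_value_` clamps each element independently to `[-max_val, max_val]`, which changes the gradient's direction, while norm clipping only rescales its magnitude.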