pub struct Muon<B: Backend> { /* private fields */ }
Muon optimizer.
Muon internally runs standard SGD-momentum, and then performs an orthogonalization post-processing step, in which each 2D parameter’s update is replaced with the nearest orthogonal matrix. For efficient orthogonalization we use a Newton-Schulz iteration, which has the advantage that it can be stably run in bfloat16 on the GPU.
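To make the orthogonalization step concrete, the sketch below runs a Newton-Schulz iteration on a toy Vec<Vec<f32>> matrix. This is only an illustration: the quintic coefficients and the five-step count follow the publicly available Muon reference implementation, while this crate's actual kernel operates on backend tensors (typically in bfloat16) and may use different constants, iteration counts, or a transposed formulation for tall matrices.

```rust
/// Minimal dense-matrix helpers for the sketch (row-major).
fn matmul(a: &[Vec<f32>], b: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (n, k, m) = (a.len(), b.len(), b[0].len());
    let mut out = vec![vec![0.0; m]; n];
    for i in 0..n {
        for j in 0..m {
            for p in 0..k {
                out[i][j] += a[i][p] * b[p][j];
            }
        }
    }
    out
}

fn transpose(a: &[Vec<f32>]) -> Vec<Vec<f32>> {
    let (n, m) = (a.len(), a[0].len());
    let mut out = vec![vec![0.0; n]; m];
    for i in 0..n {
        for j in 0..m {
            out[j][i] = a[i][j];
        }
    }
    out
}

/// dst += alpha * src (element-wise).
fn scale_add(dst: &mut [Vec<f32>], src: &[Vec<f32>], alpha: f32) {
    for (dr, sr) in dst.iter_mut().zip(src) {
        for (d, s) in dr.iter_mut().zip(sr) {
            *d += alpha * s;
        }
    }
}

/// Newton-Schulz orthogonalization sketch: repeatedly applies a quintic
/// polynomial in X X^T to push the singular values of X toward 1, so the
/// result approximates the nearest orthogonal (semi-orthogonal) matrix.
/// Coefficients follow the public Muon reference implementation (assumed;
/// this crate's constants may differ).
fn newton_schulz(g: &[Vec<f32>], steps: usize) -> Vec<Vec<f32>> {
    const A: f32 = 3.4445;
    const B: f32 = -4.7750;
    const C: f32 = 2.0315;

    // Normalize so the spectral norm is <= 1 (the Frobenius norm bounds it).
    let fro: f32 = g.iter().flatten().map(|v| v * v).sum::<f32>().sqrt();
    let mut x: Vec<Vec<f32>> = g
        .iter()
        .map(|row| row.iter().map(|v| v / (fro + 1e-7)).collect())
        .collect();

    for _ in 0..steps {
        let gram = matmul(&x, &transpose(&x)); // X X^T
        let gram2 = matmul(&gram, &gram);      // (X X^T)^2
        // X <- A*X + B*(X X^T) X + C*(X X^T)^2 X
        let mut next: Vec<Vec<f32>> =
            x.iter().map(|r| r.iter().map(|v| A * v).collect()).collect();
        scale_add(&mut next, &matmul(&gram, &x), B);
        scale_add(&mut next, &matmul(&gram2, &x), C);
        x = next;
    }
    x
}
```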
§Important Notes
- Only for 2D+ parameters: Muon is designed for weight matrices. Use AdamW or SGD for biases, embeddings, and layer norms.
- Learning rate adjustment: Muon automatically adjusts the learning rate based on parameter shape. See AdjustLrFn for details, and the sketch after this list.
- Weight decay timing: Unlike typical optimizers, Muon applies weight decay AFTER orthogonalization but uses the original (unadjusted) learning rate for it.
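As a hedged illustration of the shape-based adjustment mentioned above (the actual rule is whatever AdjustLrFn this optimizer is configured with), public Muon implementations commonly scale the learning rate by sqrt(max(1, rows / cols)):

```rust
// Hypothetical illustration only; not necessarily this crate's default rule.
fn adjust_lr(lr: f64, rows: usize, cols: usize) -> f64 {
    lr * (rows as f64 / cols as f64).max(1.0).sqrt()
}

// Under this rule a 1024 x 256 weight matrix has its learning rate doubled,
// while a square matrix keeps the original value.
```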
Trait Implementations§
impl<B: Backend> SimpleOptimizer<B> for Muon<B>
fn step<const D: usize>(
    &self,
    lr: LearningRate,
    tensor: Tensor<B, D>,
    grad: Tensor<B, D>,
    state: Option<Self::State<D>>,
) -> (Tensor<B, D>, Option<Self::State<D>>)
Perform a single Muon optimization step.
§Algorithm
- Apply momentum to gradient
- Orthogonalize update via Newton-Schulz
- Adjust learning rate based on parameter shape
- Apply weight decay (using original lr)
- Update parameter (using adjusted lr)
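A minimal sketch of this sequence is shown below, reusing the illustrative newton_schulz and adjust_lr helpers from the earlier sketches. The names, state layout, momentum formula, and decay rule here are assumptions for illustration and need not match this crate's implementation; only the ordering and the use of two different learning rates mirror the steps above.

```rust
// Hypothetical end-to-end sketch of one Muon step on a 2D weight matrix.
// `newton_schulz` and `adjust_lr` are the toy helpers from the sketches above.
fn muon_step(
    lr: f64,
    weight_decay: f64,
    momentum: f64,
    param: &mut [Vec<f32>],
    grad: &[Vec<f32>],
    buf: &mut [Vec<f32>], // momentum buffer (the optimizer state)
) {
    // 1. Apply momentum to the gradient: buf = momentum * buf + grad.
    for (br, gr) in buf.iter_mut().zip(grad) {
        for (b, g) in br.iter_mut().zip(gr) {
            *b = momentum as f32 * *b + g;
        }
    }

    // 2. Orthogonalize the update via Newton-Schulz.
    let update = newton_schulz(buf, 5);

    // 3. Adjust the learning rate based on the parameter shape.
    let adjusted_lr = adjust_lr(lr, param.len(), param[0].len());

    // 4. Decoupled weight decay, using the ORIGINAL learning rate.
    // 5. Parameter update, using the ADJUSTED learning rate.
    for (pr, ur) in param.iter_mut().zip(&update) {
        for (p, u) in pr.iter_mut().zip(ur) {
            *p -= (lr * weight_decay) as f32 * *p + adjusted_lr as f32 * u;
        }
    }
}
```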
§Notes
Unlike typical optimizers, the weight decay and parameter update use different learning rates:
- Weight decay uses the original lr
- Parameter update uses the shape-adjusted lr
§Panics
This function will panic if the input tensors are not 2D.