//! Optimizer Kernels
//!
//! GPU kernels for fused optimizer weight updates, eliminating CPU↔GPU synchronization.
//!
//! ## Available Kernels
//!
//! - [`AdamWStepKernel`]: Fused AdamW with weight decay
//! - [`AdamStepKernel`]: Vanilla Adam without weight decay
//! - [`GradientClipKernel`]: L2 gradient norm clipping
//! - [`ClipScaleReduceKernel`]: GPU-side fused norm reduction + clip scale computation (ALB-078)
//! - [`GradientClipGpuScaleKernel`]: Gradient clipping with scale read from GPU memory (ALB-078)
//! - [`SquaredSumKernel`]: GPU-side sum-of-squares reduction for L2 norm (KAIZEN-049)
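//!
//! As a rough sketch of what the clipping kernels compute (the function and
//! parameter names below are illustrative, not this crate's API): the global
//! L2 norm is reduced over all gradients, and if it exceeds the threshold,
//! every gradient is scaled by `max_norm / norm`.
//!
//! ```
//! /// Scalar reference for L2 gradient-norm clipping; the GPU kernels fuse
//! /// the norm reduction and the element-wise scaling.
//! fn clip_by_global_norm(grads: &mut [f32], max_norm: f32) {
//!     let norm = grads.iter().map(|g| g * g).sum::<f32>().sqrt();
//!     if norm > max_norm {
//!         let scale = max_norm / norm;
//!         for g in grads.iter_mut() {
//!             *g *= scale;
//!         }
//!     }
//! }
//! ```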
//!
//! ## Performance Benefits
//!
//! Traditional training loop:
//! 1. Forward pass (GPU)
//! 2. Backward pass (GPU)
//! 3. Copy gradients GPU → CPU
//! 4. Optimizer step (CPU)
//! 5. Copy weights CPU → GPU
//!
//! With fused kernels:
//! 1. Forward pass (GPU)
//! 2. Backward pass (GPU)
//! 3. Optimizer step (GPU) ← All on GPU!
//!
//! Because gradients and weights stay resident in GPU memory, the per-step
//! PCIe transfers (steps 3 and 5 above) are eliminated from the training loop.
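//!
//! For reference, a scalar sketch of the per-element update the fused AdamW
//! kernel applies in parallel across the parameter tensor (names and
//! signature here are illustrative, not this crate's API):
//!
//! ```
//! /// One AdamW step for a single weight, with decoupled weight decay.
//! /// `m` and `v` are the first/second moment estimates; `t` is the step count.
//! fn adamw_step(
//!     w: &mut f32, g: f32, m: &mut f32, v: &mut f32, t: i32,
//!     lr: f32, beta1: f32, beta2: f32, eps: f32, weight_decay: f32,
//! ) {
//!     *m = beta1 * *m + (1.0 - beta1) * g;       // first-moment EMA
//!     *v = beta2 * *v + (1.0 - beta2) * g * g;   // second-moment EMA
//!     let m_hat = *m / (1.0 - beta1.powi(t));    // bias correction
//!     let v_hat = *v / (1.0 - beta2.powi(t));
//!     // Decoupled weight decay: applied directly to the weight, not the gradient.
//!     *w -= lr * (m_hat / (v_hat.sqrt() + eps) + weight_decay * *w);
//! }
//! ```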
//!
//! # Issue #89: Fused optimizer kernels
// Re-export the kernels documented above. The submodule paths (`adamw`,
// `grad_clip`, `squared_sum`) are assumed placeholders; substitute the
// actual module names defined in this crate.
pub use adamw::{AdamStepKernel, AdamWStepKernel};
pub use grad_clip::{ClipScaleReduceKernel, GradientClipGpuScaleKernel, GradientClipKernel};
pub use squared_sum::SquaredSumKernel;