//! # Quantization Schemes
//!
//! This module exposes a suite of quantization schemes, most of them post-training quantization (PTQ) strategies:
//!
//! | Module | Scheme | Primary use |
//! |---------------|---------------------------------------------|-------------|
//! | `minmax` | Min-Max calibration (INT4/INT8) | General PTQ |
//! | `nf4` | NormalFloat4 (QLoRA) | 4-bit weights |
//! | `fp8` | FP8 E4M3 / E5M2 (Hopper / Blackwell) | Training & inference |
//! | `gptq` | GPTQ Hessian-guided quantization | LLM weights |
//! | `smooth_quant`| SmoothQuant activation–weight migration | LLM activations |
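//!
//! For orientation, here is a minimal sketch of affine min-max INT8
//! quantization, the simplest of the schemes above. It is illustrative math
//! only; `quantize_minmax` is a hypothetical name, not this module's API:
//!
//! ```
//! /// Maps the observed range `[min, max]` onto the INT8 range `[-128, 127]`.
//! fn quantize_minmax(xs: &[f32]) -> (Vec<i8>, f32, i32) {
//!     let min = xs.iter().copied().fold(f32::INFINITY, f32::min);
//!     let max = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
//!     // One integer step covers (max - min) / 255 of the float range.
//!     let scale = (max - min) / 255.0;
//!     // Shift so that `min` maps to -128; the real value 0.0 maps to `zero_point`.
//!     let zero_point = (-128.0 - min / scale).round() as i32;
//!     let q = xs
//!         .iter()
//!         .map(|&x| ((x / scale).round() as i32 + zero_point).clamp(-128, 127) as i8)
//!         .collect();
//!     (q, scale, zero_point)
//! }
//!
//! let xs = [-1.0_f32, 0.0, 1.0];
//! let (q, scale, zp) = quantize_minmax(&xs);
//! // Dequantized values land within one quantization step of the input.
//! for (&qi, &x) in q.iter().zip(&xs) {
//!     assert!((scale * (qi as i32 - zp) as f32 - x).abs() <= scale);
//! }
//! ```
//!
//! The other schemes refine this picture: `nf4` replaces the uniform grid with
//! quantiles of a normal distribution, `gptq` chooses rounding directions to
//! minimize layer output error, and `smooth_quant` rescales channels before
//! quantizing.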
pub mod minmax;
pub mod nf4;
pub mod fp8;
pub mod gptq;
pub mod smooth_quant;