//! TurboQuant KV cache compression algorithm
//!
//! Implements Google's TurboQuant (ICLR 2026) for memory-efficient
//! KV caches in transformer inference. Two variants:
//!
//! - **TurboQuant_MSE** (biased): Randomized Hadamard rotation + Lloyd-Max
//! scalar quantization at 2-3 bits. Simpler, lower overhead.
//!
//! - **TurboQuant_prod** (unbiased): Same as MSE plus a QJL 1-bit
//! correction on the quantization residual. Higher accuracy for
//! attention computation at the cost of ~1 extra bit per dimension.
// NOTE: submodule names below are assumed from the re-exported type names;
// adjust to the crate's actual layout. A bare `pub use Codebook;` does not
// compile, and the original empty `pub use ;` has been dropped because the
// intended item cannot be recovered from context.
mod codebook;
mod hadamard;
mod qjl;

pub use codebook::Codebook;
pub use hadamard::HadamardRotation;
pub use qjl::QjlProjector;
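The TurboQuant_MSE pipeline described in the doc comment can be sketched as follows. This is a minimal illustration, not this crate's API: `fwht` and `rotate_and_quantize` are hypothetical helpers, uniform scalar quantization stands in for the Lloyd-Max quantizer, and the random sign vector is fixed for reproducibility. Dimensions are assumed to be powers of two.

```rust
/// In-place fast Walsh-Hadamard transform, scaled by 1/sqrt(n) so the
/// rotation is orthonormal (and therefore its own inverse).
fn fwht(x: &mut [f32]) {
    let n = x.len();
    assert!(n.is_power_of_two(), "dimension must be a power of two");
    let mut h = 1;
    while h < n {
        for i in (0..n).step_by(2 * h) {
            for j in i..i + h {
                let (a, b) = (x[j], x[j + h]);
                x[j] = a + b;
                x[j + h] = a - b;
            }
        }
        h *= 2;
    }
    let s = 1.0 / (n as f32).sqrt();
    for v in x.iter_mut() {
        *v *= s;
    }
}

/// Randomized Hadamard rotation (random signs + FWHT) followed by
/// symmetric uniform scalar quantization to `bits` bits per coordinate.
/// Returns the integer codes and the grid scale needed to dequantize.
fn rotate_and_quantize(x: &[f32], signs: &[f32], bits: u32) -> (Vec<i32>, f32) {
    let mut y: Vec<f32> = x.iter().zip(signs).map(|(v, s)| v * s).collect();
    fwht(&mut y);
    let max = y.iter().fold(0.0_f32, |m, v| m.max(v.abs()));
    let levels = (1 << (bits - 1)) - 1; // e.g. 3 positive levels at 3 bits
    let scale = if max > 0.0 { max / levels as f32 } else { 1.0 };
    let codes = y.iter().map(|v| (v / scale).round() as i32).collect();
    (codes, scale)
}

fn main() {
    let x = [1.0_f32, -2.0, 3.0, -4.0];
    let signs = [1.0_f32, -1.0, -1.0, 1.0]; // fixed here; random in practice
    let (codes, scale) = rotate_and_quantize(&x, &signs, 3);

    // Dequantize, then invert the rotation: apply FWHT again and undo signs.
    let mut recon: Vec<f32> = codes.iter().map(|&c| c as f32 * scale).collect();
    fwht(&mut recon);
    for (r, s) in recon.iter_mut().zip(signs.iter()) {
        *r *= s;
    }
    println!("codes = {:?}, recon = {:?}", codes, recon);
}
```

The unbiased TurboQuant_prod variant would additionally project the residual `y - dequant(codes)` through a QJL-style 1-bit sketch; that correction step is omitted here.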