
Module quantization


Vector quantization algorithms for memory-efficient storage.

Quantization reduces vector precision in exchange for memory savings:

| Method | Compression | Accuracy | Speed   | Use Case                  |
|--------|-------------|----------|---------|---------------------------|
| Scalar | 4x          | ~97%     | Fast    | Default for most datasets |
| Binary | 32x         | ~80%     | Fastest | Very large datasets       |

§Scalar Quantization

Converts f32 values to u8 by learning min/max ranges per dimension:

use grafeo_core::index::vector::quantization::ScalarQuantizer;

// Training vectors to learn min/max ranges
let vectors = vec![
    vec![0.0f32, 0.3, 0.7],
    vec![0.2, 0.5, 1.0],
    vec![0.1, 0.6, 0.9],
];
let refs: Vec<&[f32]> = vectors.iter().map(|v| v.as_slice()).collect();
let quantizer = ScalarQuantizer::train(&refs);

// Quantize: f32 -> u8 (4x compression)
let original = vec![0.1f32, 0.5, 0.9];
let quantized = quantizer.quantize(&original);

// Compute distance in quantized space (approximate)
let other_quantized = quantizer.quantize(&[0.15, 0.45, 0.85]);
let dist = quantizer.distance_u8(&quantized, &other_quantized);
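To make the min/max mechanics concrete, here is a minimal standalone sketch of per-dimension scalar quantization. The `train` and `quantize` helpers below are illustrative assumptions, not `ScalarQuantizer`'s actual implementation:

```rust
// Sketch: learn per-dimension min/max from training vectors, then map
// each f32 into the 0..=255 range of a u8 (4x smaller than f32).
fn train(vectors: &[&[f32]]) -> (Vec<f32>, Vec<f32>) {
    let dim = vectors[0].len();
    let mut min = vec![f32::INFINITY; dim];
    let mut max = vec![f32::NEG_INFINITY; dim];
    for v in vectors {
        for (d, &x) in v.iter().enumerate() {
            min[d] = min[d].min(x);
            max[d] = max[d].max(x);
        }
    }
    (min, max)
}

fn quantize(v: &[f32], min: &[f32], max: &[f32]) -> Vec<u8> {
    v.iter()
        .zip(min.iter().zip(max))
        .map(|(&x, (&lo, &hi))| {
            let range = (hi - lo).max(f32::EPSILON);
            // Clamp to the learned range, then scale into 0..=255.
            (((x - lo) / range).clamp(0.0, 1.0) * 255.0).round() as u8
        })
        .collect()
}

fn main() {
    let data = vec![vec![0.0f32, 0.3, 0.7], vec![0.2, 0.5, 1.0]];
    let refs: Vec<&[f32]> = data.iter().map(|v| v.as_slice()).collect();
    let (min, max) = train(&refs);
    // The learned minimum maps to 0 and the learned maximum to 255.
    assert_eq!(quantize(&[0.0, 0.3, 0.7], &min, &max), vec![0, 0, 0]);
    assert_eq!(quantize(&[0.2, 0.5, 1.0], &min, &max), vec![255, 255, 255]);
}
```

Distances computed between such codes are approximate: each dimension is rounded to one of 256 levels, which is where the ~97% accuracy figure above comes from.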

§Binary Quantization

Converts each f32 value to a single bit (its sign only), enabling fast Hamming distance:

use grafeo_core::index::vector::quantization::BinaryQuantizer;

let v1 = vec![0.1f32, -0.5, 0.0, 0.9];
let v2 = vec![0.2f32, -0.3, 0.1, 0.8];
let bits1 = BinaryQuantizer::quantize(&v1);
let bits2 = BinaryQuantizer::quantize(&v2);

// Hamming distance (count differing bits)
let dist = BinaryQuantizer::hamming_distance(&bits1, &bits2);
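The bit-packing behind this can be sketched as follows; this is an illustrative implementation, not `BinaryQuantizer`'s actual code (`quantize_bits` and `hamming` are hypothetical helpers):

```rust
// Sketch: pack one sign bit per dimension into u64 words (32x smaller
// than f32), then compute Hamming distance with XOR + popcount.
fn quantize_bits(v: &[f32]) -> Vec<u64> {
    let mut words = vec![0u64; (v.len() + 63) / 64];
    for (i, &x) in v.iter().enumerate() {
        if x > 0.0 {
            words[i / 64] |= 1 << (i % 64); // set the bit for positive values
        }
    }
    words
}

fn hamming(a: &[u64], b: &[u64]) -> u32 {
    // count_ones() compiles down to a popcount instruction on most targets.
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    let b1 = quantize_bits(&[0.1, -0.5, 0.0, 0.9]); // bits 0 and 3 set
    let b2 = quantize_bits(&[0.2, -0.3, 0.1, 0.8]); // bits 0, 2, and 3 set
    // The codes differ only at bit 2.
    assert_eq!(hamming(&b1, &b2), 1);
}
```

Because the distance is a handful of XOR and popcount operations per 64 dimensions, binary quantization is the fastest method in the table above, at the cost of keeping only sign information.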

Structs§

BinaryQuantizer
Binary quantizer: f32 -> 1 bit (sign only).
ProductQuantizer
Product quantizer: splits vectors into M subvectors, quantizes each to K centroids.
ScalarQuantizer
Scalar quantizer: f32 -> u8 with per-dimension min/max scaling.
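The `ProductQuantizer` listed above splits vectors into M subvectors and quantizes each to one of K centroids. The encoding step can be sketched like this; the `nearest` and `encode` helpers and the hardcoded codebooks are illustrative assumptions (a real product quantizer learns its centroids, typically with k-means):

```rust
// Sketch: encode a vector as M centroid indices, one per subvector.
// With K <= 256 centroids, each subvector compresses to a single byte.
fn nearest(sub: &[f32], centroids: &[Vec<f32>]) -> u8 {
    centroids
        .iter()
        .enumerate()
        .min_by(|(_, a), (_, b)| {
            // Compare squared Euclidean distances to each centroid.
            let da: f32 = sub.iter().zip(a.iter()).map(|(x, y)| (x - y).powi(2)).sum();
            let db: f32 = sub.iter().zip(b.iter()).map(|(x, y)| (x - y).powi(2)).sum();
            da.partial_cmp(&db).unwrap()
        })
        .map(|(i, _)| i as u8)
        .unwrap()
}

fn encode(v: &[f32], codebooks: &[Vec<Vec<f32>>]) -> Vec<u8> {
    let sub_dim = v.len() / codebooks.len();
    v.chunks(sub_dim)
        .zip(codebooks)
        .map(|(sub, book)| nearest(sub, book))
        .collect()
}

fn main() {
    // M = 2 subvectors of 2 dims each, K = 2 centroids per codebook.
    let codebooks = vec![
        vec![vec![0.0, 0.0], vec![1.0, 1.0]],
        vec![vec![0.0, 1.0], vec![1.0, 0.0]],
    ];
    let code = encode(&[0.9, 1.1, 0.1, 0.9], &codebooks);
    // First half is nearest (1.0, 1.0); second half is nearest (0.0, 1.0).
    assert_eq!(code, vec![1, 0]);
}
```

This gives compression between the scalar and binary extremes: a D-dimensional f32 vector shrinks to M bytes, and distances are approximated from per-subvector lookup tables.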

Enums§

QuantizationType
Quantization strategy for vector storage.

Functions§

hamming_distance_simd
Hamming distance between bit-packed vectors, using SIMD where available (with a scalar fallback).