SIMD-accelerated vector similarity primitives.
Fast building blocks for embedding similarity with automatic hardware dispatch.
§Which Function Should I Use?
| Task | Function | Notes |
|---|---|---|
| Similarity (normalized) | cosine | Safe default; most embeddings are normalized |
| Similarity (raw) | dot | When you already know the norms (e.g. unit-norm inputs) |
| Distance (L2) | l2_distance | For k-NN, clustering |
| Token-level matching | maxsim | ColBERT-style (feature maxsim) |
| Sparse vectors | sparse_dot | BM25 scores (feature sparse) |
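For unit-norm inputs the first two rows coincide, and L2 distance is a monotone function of the dot product. A quick sketch of both identities, assuming `l2_distance` takes the same `&[f32]` arguments that `dot` and `cosine` take in the example below:

```rust
use innr::{cosine, dot, l2_distance};

let q = [0.6_f32, 0.8, 0.0]; // unit-norm
let d = [0.8_f32, 0.6, 0.0]; // unit-norm

// For unit vectors, cosine and dot agree.
assert!((dot(&q, &d) - cosine(&q, &d)).abs() < 1e-6);

// And squared L2 distance is determined by the dot product:
// ||q - d||^2 = 2 - 2 * <q, d>.
let dist = l2_distance(&q, &d);
assert!((dist * dist - (2.0 - 2.0 * dot(&q, &d))).abs() < 1e-5);
```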
§SIMD Dispatch
All functions automatically dispatch to the fastest available instruction set:
| Architecture | Instructions | Detection |
|---|---|---|
| x86_64 | AVX2 + FMA | Runtime |
| aarch64 | NEON | Always available |
| Other | Portable | LLVM auto-vectorizes |
Vectors shorter than 16 dimensions use portable code (SIMD overhead is not worthwhile; see MIN_DIM_SIMD).
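Since `dot_portable` is re-exported alongside the dispatched `dot`, the two can be compared to sanity-check dispatch on a given host. A minimal sketch, assuming `dot_portable` shares `dot`'s slice signature:

```rust
use innr::{dot, dot_portable};

// The dispatched kernel (AVX2+FMA, NEON, or scalar) and the portable
// fallback should agree up to floating-point reassociation error.
let a: Vec<f32> = (0..256).map(|i| (i as f32).sin()).collect();
let b: Vec<f32> = (0..256).map(|i| (i as f32).cos()).collect();

assert!((dot(&a, &b) - dot_portable(&a, &b)).abs() < 1e-3);
```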
§Historical Context
The inner product (dot product) dates to Grassmann’s 1844 “Ausdehnungslehre” and Hamilton’s quaternions, and was formalized in Gibbs and Heaviside’s vector calculus (~1880s). Modern embedding similarity (Word2Vec 2013, BERT 2018) relies on inner products in high-dimensional spaces where SIMD acceleration is essential.
ColBERT’s MaxSim (Khattab & Zaharia, 2020) extends this to token-level late interaction, requiring O(|Q| * |D|) inner products per query-document pair.
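As a reference for what that cost means, MaxSim can be written directly against `dot`; this is a sketch of the formula, not the crate's feature-gated `maxsim` kernel:

```rust
use innr::dot;

// MaxSim(Q, D) = sum over query tokens q of (max over doc tokens d of <q, d>),
// i.e. O(|Q| * |D|) inner products per pair, as noted above.
// Assumes a non-empty document.
fn maxsim_reference(query: &[Vec<f32>], doc: &[Vec<f32>]) -> f32 {
    query
        .iter()
        .map(|q| {
            doc.iter()
                .map(|d| dot(q, d))
                .fold(f32::NEG_INFINITY, f32::max)
        })
        .sum()
}
```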
§Example
```rust
use innr::{dot, cosine, norm};

let a = [1.0_f32, 0.0, 0.0];
let b = [0.707, 0.707, 0.0];

// Dot product
let d = dot(&a, &b);
assert!((d - 0.707).abs() < 0.01);

// Cosine similarity (normalized dot product)
let c = cosine(&a, &b);
assert!((c - 0.707).abs() < 0.01);

// L2 norm
let n = norm(&a);
assert!((n - 1.0).abs() < 1e-6);
```

§References
- Gibbs, J. W. (1881). “Elements of Vector Analysis”
- Mikolov, T. et al. (2013). “Efficient Estimation of Word Representations in Vector Space” (Word2Vec)
- Khattab, O. & Zaharia, M. (2020). “ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT”
Re-exports§
pub use dense::{angular_distance, cosine, dot, dot_portable, l1_distance, l2_distance, l2_distance_squared, matryoshka_cosine, matryoshka_dot, norm, pool_mean};
pub use binary::{binary_dot, binary_hamming, binary_jaccard, encode_binary, PackedBinary};
pub use metric::{Quasimetric, SymmetricMetric};
pub use fast_math::{fast_cosine, fast_cosine_dispatch, fast_rsqrt, fast_rsqrt_precise};
Modules§
- batch - Batch vector operations with columnar (PDX-style) layout.
- binary - SIMD-accelerated binary (1-bit) vector operations.
- clifford - Clifford Algebra (Geometric Algebra) for steerable embeddings.
- dense - Dense vector operations with SIMD acceleration.
- fast_math - Fast math operations using hardware-aware approximations (rsqrt with Newton-Raphson refinement; see the sketch after this list).
- metric - Metric and quasimetric trait surfaces.
- ternary - Ternary quantization (1.58-bit) for ultra-compressed embeddings, with SIMD-accelerated operations.
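The rsqrt-plus-refinement pairing that fast_math names follows a classic pattern. A self-contained sketch of one Newton-Raphson step (illustrative only; the crate's fast_rsqrt/fast_rsqrt_precise are hardware-aware and may use ISA estimate instructions instead):

```rust
// One Newton-Raphson step for y ≈ 1/sqrt(x): y' = y * (1.5 - 0.5 * x * y * y).
// The seed below is the classic bit-trick "fast inverse square root".
fn rsqrt_one_nr_step(x: f32) -> f32 {
    let seed = f32::from_bits(0x5f37_59df_u32.wrapping_sub(x.to_bits() >> 1));
    seed * (1.5 - 0.5 * x * seed * seed)
}

// One step lands within roughly 0.2% relative error for normal positive x.
let y = rsqrt_one_nr_step(4.0);
assert!((y - 0.5).abs() < 0.5 * 0.005);
```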
Constants§
- L1_ALIGNMENT_EPSILON - Cross-lingual alignment constant for L1-stable center mapping.
- MIN_DIM_SIMD - Minimum vector dimension for SIMD to be worthwhile.
- NORM_EPSILON - Threshold for treating a norm as “effectively zero”.
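NORM_EPSILON's intended role can be illustrated with a guarded cosine built from the crate's own primitives. A sketch assuming the constant is an f32; whether `cosine` applies this guard internally is not specified here:

```rust
use innr::{dot, norm, NORM_EPSILON};

// Return 0.0 similarity when either input is effectively the zero vector,
// the degenerate case NORM_EPSILON exists to flag (cosine is undefined there).
fn cosine_guarded(a: &[f32], b: &[f32]) -> f32 {
    let (na, nb) = (norm(a), norm(b));
    if na < NORM_EPSILON || nb < NORM_EPSILON {
        0.0
    } else {
        dot(a, b) / (na * nb)
    }
}

assert_eq!(cosine_guarded(&[0.0; 3], &[1.0, 0.0, 0.0]), 0.0);
```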