SimSIMD 📏
Computing dot-products, similarity measures, and distances between low- and high-dimensional vectors is ubiquitous in Machine Learning, Scientific Computing, Geo-Spatial Analysis, and Information Retrieval.
These algorithms generally have linear complexity in time, constant complexity in space, and are data-parallel.
In other words, they are easily parallelizable and vectorizable, and are often available in packages like BLAS and LAPACK, as well as in higher-level `numpy` and `scipy` Python libraries.
Ironically, even with decades of evolution in compilers and numerical computing, most libraries can be 3-200x slower than hardware potential even on the most popular hardware, like 64-bit x86 and Arm CPUs.
SimSIMD attempts to fill that gap.
1️⃣ SimSIMD functions are practically as fast as memcpy.
2️⃣ SimSIMD compiles to more platforms than NumPy (105 vs 35) and has more backends than most BLAS implementations.
Features
SimSIMD provides over 100 SIMD-optimized kernels for various distance and similarity measures, accelerating search in USearch and several DBMS products. Implemented distance functions include:
- Euclidean (L2) and Cosine (Angular) spatial distances for Vector Search.
- Dot-Products for real & complex vectors for DSP & Quantum computing.
- Hamming (~ Manhattan) and Jaccard (~ Tanimoto) bit-level distances.
- Kullback-Leibler and Jensen–Shannon divergences for probability distributions.
- Haversine and Vincenty's formulae for Geospatial Analysis.
- For Levenshtein, Needleman–Wunsch and other text metrics, check StringZilla.
Moreover, SimSIMD...
- handles `f64`, `f32`, and `f16` real & complex vectors.
- handles `i8` integral and `b8` binary vectors.
- is a zero-dependency header-only C 99 library.
- has bindings for Python, Rust and JavaScript.
- has Arm backends for NEON and Scalable Vector Extensions (SVE).
- has x86 backends for Haswell, Skylake, Ice Lake, and Sapphire Rapids.
Due to the high level of fragmentation of SIMD support across different x86 CPUs, SimSIMD uses the names of select Intel CPU generations for its backends. They, however, also work on AMD CPUs: Intel Haswell kernels are compatible with AMD Zen 1/2/3, while AMD Genoa (Zen 4) covers the AVX-512 instructions added in Intel Skylake and Ice Lake. You can learn more about the technical implementation details in the following blog posts:
- Uses Horner's method for polynomial approximations, beating GCC 12 by 119x.
- Uses Arm SVE and x86 AVX-512's masked loads to eliminate tail `for`-loops.
- Uses AVX-512 FP16 for half-precision operations, which few compilers vectorize.
- Substitutes LibC's `sqrt` calls with bit-hacks using Jan Kadlec's constant.
- For Python, avoids slow PyBind11, SWIG, and even `PyArg_ParseTuple` for speed.
- For JavaScript, uses typed arrays and NAPI for zero-copy calls.
Benchmarks
Against NumPy and SciPy
Given 1000 embeddings from OpenAI Ada API with 1536 dimensions, running on the Apple M2 Pro Arm CPU with NEON support, here's how SimSIMD performs against conventional methods:
| Kind | `f32` improvement | `f16` improvement | `i8` improvement | Conventional method | SimSIMD |
|---|---|---|---|---|---|
| Inner Product | 2 x | 9 x | 18 x | `numpy.inner` | `inner` |
| Cosine Distance | 32 x | 79 x | 133 x | `scipy.spatial.distance.cosine` | `cosine` |
| Euclidean Distance ² | 5 x | 26 x | 17 x | `scipy.spatial.distance.sqeuclidean` | `sqeuclidean` |
| Jensen-Shannon Divergence | 31 x | 53 x | | `scipy.spatial.distance.jensenshannon` | `jensenshannon` |
Against GCC Auto-Vectorization
On the Intel Sapphire Rapids platform, SimSIMD was benchmarked against auto-vectorized code using GCC 12.
GCC handles single-precision `float` well, but might not be the best choice for `int8` and `_Float16` arrays, which have been part of the C language since 2011.
| Kind | GCC 12 `f32` | GCC 12 `f16` | SimSIMD `f16` | `f16` improvement |
|---|---|---|---|---|
| Inner Product | 3,810 K/s | 192 K/s | 5,990 K/s | 31 x |
| Cosine Distance | 3,280 K/s | 336 K/s | 6,880 K/s | 20 x |
| Euclidean Distance ² | 4,620 K/s | 147 K/s | 5,320 K/s | 36 x |
| Jensen-Shannon Divergence | 1,180 K/s | 18 K/s | 2,140 K/s | 118 x |
Broader benchmarking results are available in the SimSIMD repository.
Using SimSIMD in Python
The package is intended to replace the usage of numpy.inner, numpy.dot, and scipy.spatial.distance.
Aside from drastic performance improvements, SimSIMD significantly improves accuracy in mixed precision setups.
NumPy and SciPy, processing i8 or f16 vectors, will use the same types for accumulators, while SimSIMD can combine i8 enumeration, i16 multiplication, and i32 accumulation to avoid overflows entirely.
The same applies to processing f16 values with f32 precision.
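To see why the accumulator type matters, here is a pure-NumPy illustration of the overflow problem (this is a sketch of the arithmetic, not SimSIMD's actual kernel):

```python
import numpy as np

a = np.full(1536, 100, dtype=np.int8)
b = np.full(1536, 100, dtype=np.int8)

# Multiplying and accumulating in `int8` wraps around: 100 * 100 = 10,000
# does not fit the [-128, 127] range, so the result is meaningless
overflowing = int((a * b).sum(dtype=np.int8))

# Widening first - `int16`-scale multiplication with `int32` accumulation - stays exact
exact = int(a.astype(np.int32) @ b.astype(np.int32))
```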
Installation
Use the following snippet to install SimSIMD and list the hardware acceleration options available on your machine:
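A sketch of that snippet, assuming a `pip`-based install and the `simsimd.get_capabilities` helper (verify the helper's name against your installed version):

```shell
pip install simsimd
python -c "import simsimd; print(simsimd.get_capabilities())"
```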
One-to-One Distance
Supported functions include cosine, inner, sqeuclidean, hamming, and jaccard.
Dot products are supported for both real and complex numbers:
Unlike SciPy, SimSIMD allows explicitly stating the precision of the input vectors, which is especially useful for mixed-precision setups.
It also allows using SimSIMD for half-precision complex numbers, which NumPy does not support.
For that, view the data as continuous even-length `np.float16` vectors and override type-resolution with the `complex32` string.
One-to-Many Distances
Every distance function can be used not only for one-to-one but also one-to-many and many-to-many distance calculations. For one-to-many:
Many-to-Many Distances
All distance functions in SimSIMD can be used to compute many-to-many distances. For two batches of 100 vectors, producing 100 row-wise distances, one would call it like this:
Input matrices must have identical shapes. This functionality isn't natively present in NumPy or SciPy, and generally requires creating intermediate arrays, which is inefficient and memory-consuming.
Many-to-Many All-Pairs Distances
One can use SimSIMD to compute distances between all possible pairs of rows across two matrices (akin to scipy.spatial.distance.cdist).
The resulting object will have a type DistancesTensor, zero-copy compatible with NumPy and other libraries.
For two arrays of 10 and 1,000 entries, the resulting tensor will have 10,000 cells:
Multithreading
By default, computations use a single CPU core.
To optimize and utilize all CPU cores on Linux systems, add the threads=0 argument.
Alternatively, specify a custom number of threads:
Using Python API with USearch
Want to use it in Python with USearch?
You can wrap the raw C function pointers SimSIMD backends into a CompiledMetric and pass it to USearch, similar to how it handles Numba's JIT-compiled code.
Using SimSIMD in Rust
To install, add the following to your Cargo.toml:
[dependencies]
simsimd = "..."
Before using the SimSIMD library, ensure you have imported the necessary traits and types into your Rust source file.
The library provides several traits for different distance/similarity kinds - SpatialSimilarity, BinarySimilarity, and ProbabilitySimilarity.
Spatial Similarity: Cosine and Euclidean Distances
use simsimd::SpatialSimilarity;
Spatial similarity functions are available for f64, f32, f16, and i8 types.
Dot-Products: Inner and Complex Inner Products
use simsimd::SpatialSimilarity;
use simsimd::ComplexProducts;
Complex inner products are available for f64, f32, and f16 types.
Probability Distributions: Jensen-Shannon and Kullback-Leibler Divergences
use simsimd::SpatialSimilarity;
Probability similarity functions are available for f64, f32, and f16 types.
Binary Similarity: Hamming and Jaccard Distances
Similar to spatial distances, one can compute bit-level distance functions between slices of unsigned integers:
use simsimd::BinarySimilarity;
Binary similarity functions are available only for u8 types.
Half-Precision Floating-Point Numbers
Rust has no native support for half-precision floating-point numbers, but SimSIMD provides a `f16` type.
It has no functionality - it is a transparent wrapper around `u16` and can be used with the `half` crate or any other half-precision library.
use simsimd::SpatialSimilarity;
use simsimd::f16 as SimF16;
use half::f16 as HalfF16;
Dynamic Dispatch
SimSIMD provides a dynamic dispatch mechanism to select the most advanced micro-kernel for the current CPU.
You can query supported backends and use the SimSIMD::capabilities function to select the best one.
println!("uses neon: {}", capabilities::uses_neon());
println!("uses sve: {}", capabilities::uses_sve());
println!("uses haswell: {}", capabilities::uses_haswell());
println!("uses skylake: {}", capabilities::uses_skylake());
println!("uses ice: {}", capabilities::uses_ice());
println!("uses sapphire: {}", capabilities::uses_sapphire());
Using SimSIMD in JavaScript
To install, choose one of the following options depending on your environment:
npm install --save simsimd
yarn add simsimd
pnpm add simsimd
bun install simsimd
The package is distributed with prebuilt binaries for Node.js v10 and above for Linux (x86_64, arm64), macOS (x86_64, arm64), and Windows (i386, x86_64).
If your platform is not supported, you can build the package from the source via npm run build.
This will automatically happen unless you install the package with the --ignore-scripts flag or use Bun.
After you install it, you will be able to call the SimSIMD functions on various TypedArray variants:
const simsimd = require('simsimd');

const vectorA = new Float32Array([1.0, 2.0, 3.0]);
const vectorB = new Float32Array([4.0, 5.0, 6.0]);

const distance = simsimd.cosine(vectorA, vectorB);
console.log('Cosine distance:', distance);
Other numeric types and precision levels are supported as well:
const vectorA = new Int8Array([1, 2, 3]);
const vectorB = new Int8Array([4, 5, 6]);

const distance = simsimd.cosine(vectorA, vectorB);
console.log('Cosine distance:', distance);
Using SimSIMD in C
For integration within a CMake-based project, add the following segment to your CMakeLists.txt:
FetchContent_Declare(
simsimd
GIT_REPOSITORY https://github.com/ashvardanian/simsimd.git
GIT_SHALLOW TRUE
)
FetchContent_MakeAvailable(simsimd)
After that, you can use the SimSIMD library in your C code in several ways. Simplest of all, you can include the headers, and the compiler will automatically select the most recent CPU extensions that SimSIMD will use.
#include <simsimd/simsimd.h>

int main() {
    simsimd_f32_t vector_a[1536];
    simsimd_f32_t vector_b[1536];
    simsimd_distance_t distance;
    simsimd_cos_f32(vector_a, vector_b, 1536, &distance);
    return 0;
}
Dynamic Dispatch
To avoid hard-coding the backend, you can rely on c/lib.c to prepackage all possible backends in one binary, and select the most recent CPU features at runtime.
That feature of the C library is called dynamic dispatch and is extensively used in the Python, JavaScript, and Rust bindings.
To test which CPU features are available on the machine at runtime, use the following APIs:
int uses_neon = simsimd_uses_neon();
int uses_sve = simsimd_uses_sve();
int uses_haswell = simsimd_uses_haswell();
int uses_skylake = simsimd_uses_skylake();
int uses_ice = simsimd_uses_ice();
int uses_sapphire = simsimd_uses_sapphire();
simsimd_capability_t capabilities = simsimd_capabilities();
To differentiate between runtime and compile-time dispatch, define the following macro:

#define SIMSIMD_DYNAMIC_DISPATCH 1 // or 0 for compile-time dispatch
Spatial Distances: Cosine and Euclidean Distances
simsimd_f32_t vector_a[1536], vector_b[1536];
simsimd_distance_t distance;
simsimd_cos_f32(vector_a, vector_b, 1536, &distance);
simsimd_l2sq_f32(vector_a, vector_b, 1536, &distance);
Dot-Products: Inner and Complex Inner Products
simsimd_f32_t vector_a[1536], vector_b[1536];
simsimd_distance_t product;
simsimd_dot_f32(vector_a, vector_b, 1536, &product);
// complex variants, like `simsimd_dot_f32c` and `simsimd_vdot_f32c`,
// write the real and imaginary parts into a two-element result
Binary Distances: Hamming and Jaccard Distances
simsimd_b8_t vector_a[1536 / 8], vector_b[1536 / 8];
simsimd_distance_t distance;
simsimd_hamming_b8(vector_a, vector_b, 1536 / 8, &distance);
simsimd_jaccard_b8(vector_a, vector_b, 1536 / 8, &distance);
Probability Distributions: Jensen-Shannon and Kullback-Leibler Divergences
simsimd_f32_t p[100], q[100];
simsimd_distance_t divergence;
simsimd_kl_f32(p, q, 100, &divergence);
simsimd_js_f32(p, q, 100, &divergence);
Half-Precision Floating-Point Numbers
If you aim to utilize the _Float16 functionality with SimSIMD, ensure your development environment is compatible with C 11.
For other SimSIMD functionalities, C 99 compatibility will suffice.
To explicitly disable half-precision support, define the following macro before including the headers:

#define SIMSIMD_NATIVE_F16 0
Target Specific Backends
SimSIMD exposes all kernels for all backends, and you can select the most advanced one for the current CPU without relying on built-in dispatch mechanisms.
All of the function names follow the same pattern: simsimd_{function}_{type}_{backend}.
- The backend can be `serial`, `haswell`, `skylake`, `ice`, `sapphire`, `neon`, or `sve`.
- The type can be `f64`, `f32`, `f16`, `f64c`, `f32c`, `f16c`, `i8`, or `b8`.
- The function can be `dot`, `vdot`, `cos`, `l2sq`, `hamming`, `jaccard`, `kl`, or `js`.
To avoid hard-coding the backend, you can use the simsimd_metric_punned_t to pun the function pointer and the simsimd_capabilities function to get the available backends at runtime.
simsimd_dot_f64_sve
simsimd_l2sq_f64_sve
simsimd_cos_f64_skylake
simsimd_dot_f64_serial
simsimd_l2sq_f64_serial
simsimd_kl_f64_serial
simsimd_cos_f32_sve
simsimd_dot_f32_neon
simsimd_l2sq_f32_neon
simsimd_kl_f32_neon
simsimd_cos_f32_skylake
simsimd_js_f32_skylake
simsimd_dot_f32_serial
simsimd_l2sq_f32_serial
simsimd_kl_f32_serial
simsimd_cos_f16_sve
simsimd_dot_f16_neon
simsimd_l2sq_f16_neon
simsimd_kl_f16_neon
simsimd_cos_f16_sapphire
simsimd_js_f16_sapphire
simsimd_dot_f16_haswell
simsimd_l2sq_f16_haswell
simsimd_kl_f16_haswell
simsimd_cos_f16_serial
simsimd_js_f16_serial
simsimd_cos_i8_neon
simsimd_l2sq_i8_neon
simsimd_cos_i8_ice
simsimd_cos_i8_haswell
simsimd_l2sq_i8_haswell
simsimd_cos_i8_serial
simsimd_hamming_b8_sve
simsimd_hamming_b8_neon
simsimd_hamming_b8_ice
simsimd_hamming_b8_haswell
simsimd_hamming_b8_serial
simsimd_dot_f32c_sve
simsimd_dot_f32c_neon
simsimd_dot_f32c_haswell
simsimd_dot_f32c_skylake
simsimd_dot_f32c_serial
simsimd_dot_f64c_sve
simsimd_dot_f64c_skylake
simsimd_dot_f64c_serial
simsimd_dot_f16c_sve
simsimd_dot_f16c_neon
simsimd_dot_f16c_haswell
simsimd_dot_f16c_sapphire
simsimd_dot_f16c_serial