SimSIMD 📏
Computing dot-products, similarity measures, and distances between low- and high-dimensional vectors is ubiquitous in Machine Learning, Scientific Computing, Geo-Spatial Analysis, and Information Retrieval.
These algorithms generally have linear complexity in time, constant complexity in space, and are data-parallel.
In other words, they are easy to parallelize and vectorize, and implementations are readily available in packages like BLAS and LAPACK, as well as in the higher-level NumPy and SciPy Python libraries.
Ironically, even after decades of evolution in compilers and numerical computing, most libraries can be 3-200x slower than the hardware potential, even on the most popular hardware, like 64-bit x86 and Arm CPUs.
SimSIMD attempts to fill that gap.
1️⃣ SimSIMD functions are practically as fast as memcpy.
2️⃣ SimSIMD compiles to more platforms than NumPy and has more backends than most BLAS implementations.
It is currently powering search in USearch and several DBMS products.
Implemented distance functions include:
- Euclidean (L2) and Cosine (Angular) spatial distances for Vector Search.
- Dot-Products for real & complex vectors for DSP & Quantum computing.
- Hamming (~ Manhattan) and Jaccard (~ Tanimoto) bit-level distances.
- Kullback-Leibler and Jensen–Shannon divergences for probability distributions.
- Haversine and Vincenty's formulae for Geospatial Analysis.
- For Levenshtein, Needleman–Wunsch and other text metrics, check StringZilla.
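As a quick reference for the bit-level metrics above, the sketch below shows how Hamming and Jaccard (Tanimoto) distances are defined on unpacked bit-vectors. It uses plain NumPy for illustration only, not SimSIMD's packed `b8` representation:

```python
import numpy as np

# Two 8-bit binary vectors, one bit per array element for clarity
a = np.array([1, 1, 0, 1, 0, 0, 1, 0], dtype=np.uint8)
b = np.array([1, 0, 0, 1, 1, 0, 1, 1], dtype=np.uint8)

# Hamming distance: number of differing bits (a Manhattan distance on bits)
hamming = int(np.sum(a != b))  # -> 3

# Jaccard (Tanimoto) distance: 1 - |intersection| / |union| of set bits
intersection = int(np.sum(a & b))  # 3
union = int(np.sum(a | b))         # 6
jaccard = 1.0 - intersection / union  # -> 0.5
```

SimSIMD operates on the same definitions, but over bit-packed `b8` words with SIMD population counts.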
Moreover, SimSIMD...
- handles `f64`, `f32`, and `f16` real & complex vectors.
- handles `i8` integral and `b8` binary vectors.
- is a zero-dependency header-only C99 library.
- has bindings for Python, Rust and JavaScript.
- has Arm backends for NEON and Scalable Vector Extensions (SVE).
- has x86 backends for Haswell, Skylake, Ice Lake, and Sapphire Rapids.
We enumerate subsets of AVX-512 instructions in Intel CPU generations, but they also work on AMD.
Technical Insights and related articles:
- Uses Horner's method for polynomial approximations, beating GCC 12 by 119x.
- Uses Arm SVE and x86 AVX-512's masked loads to eliminate tail `for`-loops.
- Uses AVX-512 FP16 for half-precision operations, which few compilers vectorize.
- Substitutes LibC's `sqrt` calls with bit-hacks using Jan Kadlec's constant.
- For Python, avoids slow PyBind11, SWIG, and even `PyArg_ParseTuple` for speed.
- For JavaScript, uses typed arrays and N-API for zero-copy calls.
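Horner's method, mentioned above, restructures a polynomial so it can be evaluated with one fused multiply-add per coefficient. A minimal Python sketch of the idea (not SimSIMD's actual SIMD implementation):

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + ... using one multiply-add per term."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# 1 + 2*x + 3*x^2 at x = 2 -> 1 + 4 + 12 = 17
print(horner([1.0, 2.0, 3.0], 2.0))  # -> 17.0
```

The same recurrence maps naturally onto SIMD lanes, which is what makes it so much faster than calling transcendental functions from LibC.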
Benchmarks
Against NumPy and SciPy
Given 1000 embeddings from OpenAI Ada API with 1536 dimensions, running on the Apple M2 Pro Arm CPU with NEON support, here's how SimSIMD performs against conventional methods:
| Kind | `f32` improvement | `f16` improvement | `i8` improvement | Conventional method | SimSIMD |
| :--- | :---: | :---: | :---: | :--- | :--- |
| Inner Product | 2 x | 9 x | 18 x | `numpy.inner` | `inner` |
| Cosine Distance | 32 x | 79 x | 133 x | `scipy.spatial.distance.cosine` | `cosine` |
| Euclidean Distance ² | 5 x | 26 x | 17 x | `scipy.spatial.distance.sqeuclidean` | `sqeuclidean` |
| Jensen-Shannon Divergence | 31 x | 53 x | | `scipy.spatial.distance.jensenshannon` | `jensenshannon` |
Against GCC Auto-Vectorization
On the Intel Sapphire Rapids platform, SimSIMD was benchmarked against auto-vectorized code using GCC 12.
GCC handles single-precision `float` well, but may not be the best choice for `int8` and `_Float16` arrays; fixed-width integer types date back to C99's `stdint.h`, while `_Float16` comes from the ISO/IEC TS 18661-3 extension and compiler support for it remains uneven.
| Kind | GCC 12 `f32` | GCC 12 `f16` | SimSIMD `f16` | `f16` improvement |
| :--- | ---: | ---: | ---: | :---: |
| Inner Product | 3,810 K/s | 192 K/s | 5,990 K/s | 31 x |
| Cosine Distance | 3,280 K/s | 336 K/s | 6,880 K/s | 20 x |
| Euclidean Distance ² | 4,620 K/s | 147 K/s | 5,320 K/s | 36 x |
| Jensen-Shannon Divergence | 1,180 K/s | 18 K/s | 2,140 K/s | 118 x |
Broader Benchmarking Results:
Using SimSIMD in Python
The package is intended to replace the usage of numpy.inner, numpy.dot, and scipy.spatial.distance.
Aside from drastic performance improvements, SimSIMD significantly improves accuracy in mixed precision setups.
NumPy and SciPy, processing i8 or f16 vectors, will use the same types for accumulators, while SimSIMD can combine i8 enumeration, i16 multiplication, and i32 accumulation to avoid overflows entirely.
The same applies to processing f16 values with f32 precision.
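The overflow issue is easy to demonstrate with NumPy alone (SimSIMD is not needed to see the effect):

```python
import numpy as np

a = np.array([100, 100], dtype=np.int8)
b = np.array([100, 100], dtype=np.int8)

# NumPy accumulates in the input dtype, so 100 * 100 wraps around in int8
overflowing = np.inner(a, b)

# Upcasting first yields the exact answer that SimSIMD's widened
# i16-multiply / i32-accumulate path would produce
exact = np.inner(a.astype(np.int32), b.astype(np.int32))

print(int(overflowing), int(exact))  # the first is wrapped, the second is 20000
```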
Installation
Use the following snippet to install SimSIMD and list the hardware acceleration options available on your machine:
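A minimal sketch, assuming a pip-based environment; the capability-listing one-liner relies on `simsimd.get_capabilities`, whose exact name may vary between package versions:

```shell
pip install simsimd
python -c "import simsimd; print(simsimd.get_capabilities())"
```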
One-to-One Distance
```py
import simsimd
import numpy as np

vec1 = np.random.randn(1536).astype(np.float32)
vec2 = np.random.randn(1536).astype(np.float32)
dist = simsimd.cosine(vec1, vec2)
```
Supported functions include cosine, inner, sqeuclidean, hamming, and jaccard.
Dot products are supported for both real and complex numbers:
```py
vec1 = np.random.randn(768).astype(np.float64) + 1j * np.random.randn(768).astype(np.float64)
vec2 = np.random.randn(768).astype(np.float64) + 1j * np.random.randn(768).astype(np.float64)

dist = simsimd.dot(vec1.astype(np.complex128), vec2.astype(np.complex128))
dist = simsimd.dot(vec1.astype(np.complex64), vec2.astype(np.complex64))
dist = simsimd.vdot(vec1.astype(np.complex64), vec2.astype(np.complex64))  # conjugate, same as `np.vdot`
```
Unlike SciPy, SimSIMD allows explicitly stating the precision of the input vectors, which is especially useful for mixed-precision setups.
```py
dist = simsimd.cosine(vec1, vec2, "i8")
dist = simsimd.cosine(vec1, vec2, "f16")
dist = simsimd.cosine(vec1, vec2, "f32")
dist = simsimd.cosine(vec1, vec2, "f64")
```
It also allows using SimSIMD for half-precision complex numbers, which NumPy does not support.
To do that, view the data as a contiguous even-length `np.float16` vector and override type resolution with the `complex32` string argument.
```py
dist = simsimd.dot(vec1.astype(np.float16), vec2.astype(np.float16), "complex32")
dist = simsimd.vdot(vec1.astype(np.float16), vec2.astype(np.float16), "complex32")
```
One-to-Many Distances
Every distance function can be used not only for one-to-one but also one-to-many and many-to-many distance calculations. For one-to-many:
```py
vec1 = np.random.randn(1536).astype(np.float32)  # rank 1 tensor
batch1 = np.random.randn(1, 1536).astype(np.float32)  # rank 2 tensor
batch2 = np.random.randn(100, 1536).astype(np.float32)

dist_rank1 = simsimd.cosine(vec1, batch2)
dist_rank2 = simsimd.cosine(batch1, batch2)
```
Many-to-Many Distances
All distance functions in SimSIMD can be used to compute many-to-many distances. For two batches of 100 vectors to compute 100 distances, one would call it like this:
```py
batch1 = np.random.randn(100, 1536).astype(np.float32)
batch2 = np.random.randn(100, 1536).astype(np.float32)
dist = simsimd.cosine(batch1, batch2)
```
Input matrices must have identical shapes.
Many-to-Many All-Pairs Distances
One can use SimSIMD to compute distances between all possible pairs of rows across two matrices (akin to scipy.spatial.distance.cdist).
The resulting object will have a type DistancesTensor, zero-copy compatible with NumPy and other libraries.
For two arrays of 10 and 1,000 entries, the resulting tensor will have 10,000 cells:
```py
from simsimd import cdist, DistancesTensor
import numpy as np

matrix1 = np.random.randn(1000, 1536).astype(np.float32)
matrix2 = np.random.randn(10, 1536).astype(np.float32)
distances: DistancesTensor = cdist(matrix1, matrix2, metric="cosine")  # zero-copy
distances_array: np.ndarray = np.array(distances, copy=True)  # now managed by NumPy
```
Multithreading
By default, computations use a single CPU core.
To optimize and utilize all CPU cores on Linux systems, add the threads=0 argument.
Alternatively, specify a custom number of threads:
```py
# Reusing `matrix1` and `matrix2` from the previous example
distances = simsimd.cdist(matrix1, matrix2, metric="cosine", threads=0)
```
Using Python API with USearch
Want to use it in Python with USearch?
You can wrap the raw C function pointers from SimSIMD backends into a `CompiledMetric` and pass it to USearch, similar to how it handles Numba's JIT-compiled code.
```py
from usearch.index import Index, CompiledMetric, MetricKind, MetricSignature
from simsimd import pointer_to_cosine

metric = CompiledMetric(
    pointer=pointer_to_cosine("f16"),
    kind=MetricKind.Cos,
    signature=MetricSignature.ArrayArraySize,
)
index = Index(256, metric=metric)
```
Using SimSIMD in Rust
To install, add the following to your Cargo.toml:
```toml
[dependencies]
simsimd = "..."
```
Before using the SimSIMD library, ensure you have imported the necessary traits and types into your Rust source file.
The library provides several traits for different distance/similarity kinds - SpatialSimilarity, BinarySimilarity, and ProbabilitySimilarity.
```rust
use simsimd::SpatialSimilarity;

fn main() {
    let vector_a: Vec<f32> = vec![1.0, 2.0, 3.0];
    let vector_b: Vec<f32> = vec![4.0, 5.0, 6.0];

    let distance = f32::cosine(&vector_a, &vector_b)
        .expect("Vectors must be of the same length");

    println!("Cosine Distance: {}", distance);
}
```
Similarly, one can compute bit-level distance functions between slices of unsigned integers:
```rust
use simsimd::BinarySimilarity;

fn main() {
    let vector_a: &[u8] = &[0b1111_0000, 0b0000_1111, 0b1010_1010];
    let vector_b: &[u8] = &[0b1111_0000, 0b0000_1111, 0b0101_0101];

    // Hamming distance over the packed bits of the two slices
    let distance = u8::hamming(vector_a, vector_b)
        .expect("Vectors must be of the same length");

    println!("Hamming Distance: {}", distance);
}
```
Rust has no native support for half-precision floating-point numbers, but SimSIMD provides a `f16` type.
It has no functionality of its own - it is a transparent wrapper around `u16` and can be used with `half` or any other half-precision library.
```rust
use simsimd::SpatialSimilarity;
use simsimd::f16 as SimF16;
use half::f16 as HalfF16;

fn main() {
    let vector_a: Vec<HalfF16> = vec![HalfF16::from_f32(1.0), HalfF16::from_f32(2.0)];
    let vector_b: Vec<HalfF16> = vec![HalfF16::from_f32(4.0), HalfF16::from_f32(5.0)];

    // Reinterpret `half` values as SimSIMD's transparent `u16` wrapper
    let buffer_a: &[SimF16] =
        unsafe { std::slice::from_raw_parts(vector_a.as_ptr() as *const SimF16, vector_a.len()) };
    let buffer_b: &[SimF16] =
        unsafe { std::slice::from_raw_parts(vector_b.as_ptr() as *const SimF16, vector_b.len()) };

    let distance = SimF16::cosine(buffer_a, buffer_b)
        .expect("Vectors must be of the same length");

    println!("Cosine Distance: {}", distance);
}
```
Using SimSIMD in JavaScript
To install, choose one of the following options depending on your environment:
```sh
npm install --save simsimd
yarn add simsimd
pnpm add simsimd
bun install simsimd
```
The package is distributed with prebuilt binaries for Node.js v10 and above for Linux (x86_64, arm64), macOS (x86_64, arm64), and Windows (i386, x86_64).
If your platform is not supported, you can build the package from the source via npm run build.
This will automatically happen unless you install the package with the --ignore-scripts flag or use Bun.
After you install it, you will be able to call the SimSIMD functions on various TypedArray variants:
```js
const simsimd = require('simsimd');

const vectorA = new Float32Array([1.0, 2.0, 3.0]);
const vectorB = new Float32Array([4.0, 5.0, 6.0]);

const distance = simsimd.cosine(vectorA, vectorB);
console.log('Cosine Distance:', distance);
```
Using SimSIMD in C
For integration within a CMake-based project, add the following segment to your CMakeLists.txt:
```cmake
include(FetchContent)
FetchContent_Declare(
    simsimd
    GIT_REPOSITORY https://github.com/ashvardanian/simsimd.git
    GIT_SHALLOW TRUE
)
FetchContent_MakeAvailable(simsimd)
```
If you aim to utilize the `_Float16` functionality in SimSIMD, ensure your development environment is compatible with C11 and provides `_Float16` support.
For other SimSIMD functionality, C99 compatibility will suffice.
A minimal usage example would be:
The backend suffix below is an example - substitute the one matching your hardware:

```c
#include <simsimd/simsimd.h>

int main() {
    simsimd_f32_t vector_a[1536];
    simsimd_f32_t vector_b[1536];
    simsimd_f32_t distance = simsimd_cos_f32_neon(vector_a, vector_b, 1536);
    (void)distance;
    return 0;
}
```
All of the function names follow the same pattern: simsimd_{metric}_{type}_{backend}.
- The backend can be `avx512`, `avx2`, `neon`, or `sve`.
- The type can be `f64`, `f32`, `f16`, `i8`, or `b8`.
- The metric can be `cos`, `ip`, `l2sq`, `hamming`, `jaccard`, `kl`, or `js`.
To avoid hard-coding the backend, you can use the simsimd_metric_punned_t to pun the function pointer and the simsimd_capabilities function to get the available backends at runtime.