
Computing dot-products, similarity measures, and distances between low- and high-dimensional vectors is ubiquitous in Machine Learning, Scientific Computing, Geo-Spatial Analysis, and Information Retrieval.
These algorithms generally have linear complexity in time, constant or linear complexity in space, and are data-parallel.
In other words, such kernels are easily parallelizable and vectorizable, and are often available in packages like BLAS (level 1) and LAPACK, as well as in higher-level Python libraries like NumPy and SciPy.
Ironically, even with decades of evolution in compilers and numerical computing, most libraries can be 3-200x slower than hardware potential even on the most popular hardware, like 64-bit x86 and Arm CPUs.
Moreover, most lack mixed-precision support, which is crucial for modern AI!
The rare few that offer even minimal mixed-precision support run on only one platform and are vendor-locked by companies like Intel and Nvidia.
SimSIMD provides an alternative.
1️⃣ SimSIMD functions are practically as fast as memcpy.
2️⃣ Unlike BLAS, most kernels are designed for mixed-precision and bit-level operations.
3️⃣ SimSIMD compiles to more platforms than NumPy (105 vs 35) and has more backends than most BLAS implementations, and more high-level interfaces than most libraries.
Features
SimSIMD (Arabic: "سيمسيم دي") is a mixed-precision math library of over 200 SIMD-optimized kernels extensively used in AI, Search, and DBMS workloads. Named after the iconic "Open Sesame" command that opened doors to treasure in Ali Baba and the Forty Thieves, SimSIMD can help you 10x the cost-efficiency of your computational pipelines. Implemented distance functions include:
- Euclidean (L2) and Cosine (Angular) spatial distances for Vector Search. docs
- Dot-Products for real & complex vectors for DSP & Quantum computing. docs
- Hamming (~ Manhattan) and Jaccard (~ Tanimoto) bit-level distances. docs
- Set Intersections for Sparse Vectors and Text Analysis. docs
- Mahalanobis distance and Quadratic forms for Scientific Computing. docs
- Kullback-Leibler and Jensen–Shannon divergences for probability distributions. docs
- For Levenshtein, Needleman–Wunsch, and Smith-Waterman, check StringZilla.
- 🔜 Haversine and Vincenty's formulae for Geospatial Analysis.
Moreover, SimSIMD...
- handles f64, f32, f16, and bf16 real & complex vectors.
- handles i8 integral, i4 sub-byte, and b8 binary vectors.
- handles sparse u32 and u16 sets, and weighted sparse vectors.
- is a zero-dependency header-only C 99 library.
- has Python, Rust, JS, and Swift bindings.
- has Arm backends for NEON, Scalable Vector Extensions (SVE), and SVE2.
- has x86 backends for Haswell, Skylake, Ice Lake, Genoa, and Sapphire Rapids.
- with both compile-time and runtime CPU feature detection easily integrates anywhere!
Due to the high level of fragmentation of SIMD support across x86 CPUs, SimSIMD generally names its backends after select Intel CPU generations. They, however, also work on AMD CPUs: Intel Haswell kernels are compatible with AMD Zen 1/2/3, while AMD Genoa (Zen 4) kernels cover the AVX-512 instructions added in Intel Skylake and Ice Lake. You can learn more about the technical implementation details in the following blog posts:
- Uses Horner's method for polynomial approximations, beating GCC 12 by 119x.
- Uses Arm SVE and x86 AVX-512's masked loads to eliminate tail for-loops.
- Substitutes LibC's sqrt with Newton-Raphson iterations.
- Uses Galloping and SVE2 histograms to intersect sparse vectors.
- For Python: avoids slow PyBind11, SWIG, and PyArg_ParseTuple, using a faster calling convention.
- For JavaScript: uses typed arrays and NAPI for zero-copy calls.
Benchmarks
For reference, we use 1536-dimensional vectors, like the embeddings produced by the OpenAI Ada API. Comparing the serial code throughput produced by GCC 12 to hand-optimized kernels in SimSIMD, we see the following single-core improvements:
| Type | Apple M2 Pro | Intel Sapphire Rapids | AWS Graviton 4 |
|---|---|---|---|
| f64 | 18.5 → 28.8 GB/s (+56%) | 21.9 → 41.4 GB/s (+89%) | 20.7 → 41.3 GB/s (+99%) |
| f32 | 9.2 → 29.6 GB/s (+221%) | 10.9 → 95.8 GB/s (+779%) | 4.9 → 41.9 GB/s (+755%) |
| f16 | 4.6 → 14.6 GB/s (+217%) | 3.1 → 108.4 GB/s (+3,397%) | 5.4 → 39.3 GB/s (+627%) |
| bf16 | 4.6 → 26.3 GB/s (+472%) | 0.8 → 59.5 GB/s (+7,437%) | 2.5 → 29.9 GB/s (+1,096%) |
| i8 | 25.8 → 47.1 GB/s (+83%) | 33.1 → 65.3 GB/s (+97%) | 35.2 → 43.5 GB/s (+24%) |
Similar speedups are often observed even when compared to the BLAS and LAPACK implementations underlying most numerical computing libraries, including NumPy and SciPy in Python. Broader benchmarking results are available in the project's benchmark docs.
Using SimSIMD in Python
The package is intended to replace the usage of numpy.inner, numpy.dot, and scipy.spatial.distance.
Aside from drastic performance improvements, SimSIMD significantly improves accuracy in mixed precision setups.
NumPy and SciPy, processing i8 or f16 vectors, will use the same types for accumulators, while SimSIMD can combine i8 enumeration, i16 multiplication, and i32 accumulation to avoid overflows entirely.
The same applies to processing f16 and bf16 values with f32 precision.
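A quick illustration of the overflow problem using plain NumPy, with values chosen to overflow i8 (no SimSIMD-specific APIs involved):

```py
import numpy as np

a = np.full(1536, 100, dtype=np.int8)
b = np.full(1536, 100, dtype=np.int8)

# NumPy accumulates in the input type: each 100 * 100 = 10,000
# product already overflows the [-128, 127] range of int8.
print(np.inner(a, b))                   # wrapped-around garbage
print(np.inner(a.astype(np.int32), b))  # 15_360_000, the exact value
```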
Installation
Use the following snippet to install SimSIMD and list available hardware acceleration options available on your machine:
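For example, with pip, using the get_capabilities helper from the Python bindings (also shown in the Helper Functions section below):

```sh
pip install simsimd
python -c "import simsimd; print(simsimd.get_capabilities())"
```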
With precompiled binaries, SimSIMD ships .pyi interface files for type hinting and static analysis.
You can check all the available functions in python/annotations/__init__.pyi.
One-to-One Distance
```py
import simsimd
import numpy as np

vec1 = np.random.randn(1536)
vec2 = np.random.randn(1536)
dist = simsimd.cosine(vec1, vec2)
```
Supported functions include cosine, inner, sqeuclidean, hamming, jaccard, kullbackleibler, jensenshannon, and intersect.
Dot products are supported for both real and complex numbers:
```py
vec1 = np.random.randn(768).astype(np.float64) + 1j * np.random.randn(768).astype(np.float64)
vec2 = np.random.randn(768).astype(np.float64) + 1j * np.random.randn(768).astype(np.float64)

dist = simsimd.dot(vec1.astype(np.complex128), vec2.astype(np.complex128))
dist = simsimd.dot(vec1.astype(np.complex64), vec2.astype(np.complex64))
dist = simsimd.vdot(vec1.astype(np.complex64), vec2.astype(np.complex64))  # conjugate, same as `np.vdot`
```
Unlike SciPy, SimSIMD allows explicitly stating the precision of the input vectors, which is especially useful for mixed-precision setups.
```py
vec1 = np.random.randn(1536).astype(np.int8)
vec2 = np.random.randn(1536).astype(np.int8)
dist = simsimd.cosine(vec1, vec2, "i8")

vec1 = np.random.randn(1536).astype(np.float16)
vec2 = np.random.randn(1536).astype(np.float16)
dist = simsimd.cosine(vec1, vec2, "f16")
```
It also allows using SimSIMD for half-precision complex numbers, which NumPy does not support.
For that, view the data as contiguous even-length np.float16 vectors and override the type resolution with the complex32 string.
```py
vec1 = np.random.randn(1536).astype(np.float16)
vec2 = np.random.randn(1536).astype(np.float16)

simsimd.dot(vec1, vec2, "complex32")
simsimd.vdot(vec1, vec2, "complex32")
```
When dealing with sparse representations and integer sets, you can apply the intersect function to two 1-dimensional arrays of uint16 or uint32 integers:
```py
from random import randint
import numpy as np
import simsimd

length1, length2 = randint(1, 100), randint(1, 100)
vec1 = np.sort(np.random.choice(1000, length1, replace=False)).astype(np.uint16)
vec2 = np.sort(np.random.choice(1000, length2, replace=False)).astype(np.uint16)

slow_result = np.intersect1d(vec1, vec2).size
fast_result = simsimd.intersect(vec1, vec2)
assert slow_result == int(fast_result)
```
One-to-Many Distances
Every distance function can be used not only for one-to-one but also one-to-many and many-to-many distance calculations. For one-to-many:
```py
vec1 = np.random.randn(1536).astype(np.float32)       # rank 1 tensor
batch1 = np.random.randn(1, 1536).astype(np.float32)  # rank 2 tensor
batch2 = np.random.randn(100, 1536).astype(np.float32)

dist_rank1 = simsimd.cosine(vec1, batch2)   # broadcast one vector against many
dist_rank2 = simsimd.cosine(batch1, batch2)
```
Many-to-Many Distances
All distance functions in SimSIMD can be used to compute many-to-many distances. For two batches of 100 vectors to compute 100 distances, one would call it like this:
```py
batch1 = np.random.randn(100, 1536).astype(np.float32)
batch2 = np.random.randn(100, 1536).astype(np.float32)

dist = simsimd.cosine(batch1, batch2)  # 100 pairwise distances
```
Input matrices must have identical shapes. This functionality isn't natively present in NumPy or SciPy, and generally requires creating intermediate arrays, which is inefficient and memory-consuming.
Many-to-Many All-Pairs Distances
One can use SimSIMD to compute distances between all possible pairs of rows across two matrices (akin to scipy.spatial.distance.cdist).
The resulting object will be of type DistancesTensor, zero-copy compatible with NumPy and other libraries.
For two arrays of 10 and 1,000 entries, the resulting tensor will have 10,000 cells:
```py
matrix1 = np.random.randn(1000, 1536).astype(np.float32)
matrix2 = np.random.randn(10, 1536).astype(np.float32)

distances: simsimd.DistancesTensor = simsimd.cdist(matrix1, matrix2, metric="cosine")  # zero-copy, managed by SimSIMD
distances_array: np.ndarray = np.array(distances, copy=True)                           # now managed by NumPy
```
Multithreading and Memory Usage
By default, computations use a single CPU core.
To override this behavior, use the threads argument.
Set it to 0 to use all available CPU cores.
Here is an example of dealing with large sets of binary vectors:
```py
ndim = 1536 # OpenAI Ada embeddings
matrix1 = np.packbits(np.random.randint(2, size=(10_000, ndim)).astype(np.uint8), axis=1)
matrix2 = np.packbits(np.random.randint(2, size=(1_000, ndim)).astype(np.uint8), axis=1)

distances = simsimd.cdist(matrix1, matrix2,
    metric="hamming",  # Unlike SciPy, SimSIMD doesn't divide by the number of dimensions,
    out_dtype="u8",    # so we can use `u8` instead of `f64` to save memory
    threads=0,         # Use all CPU cores with OpenMP
    dtype="b8",        # Override the input argument type to `b8` eight-bit words
)
```
By default, the output distances will be stored in double-precision f64 floating-point numbers.
That behavior may not be space-efficient, especially if you are computing the Hamming distance between short binary vectors, which will generally fit into 8x smaller u8 or u16 types.
To override this behavior, use the out_dtype argument.
Helper Functions
You can turn specific backends on or off depending on the exact environment. A common case may be avoiding AVX-512 on older AMD CPUs and Intel Ice Lake CPUs to ensure the CPU doesn't change the frequency license and throttle performance.
```sh
$ python -c "import simsimd; print(simsimd.get_capabilities())"
> {'serial': True, 'neon': False, 'sve': False, 'haswell': True, 'skylake': True, 'ice': True, 'genoa': True, 'sapphire': True, ...}
$ python -c "import simsimd; simsimd.enable_capability('skylake')"
$ python -c "import simsimd; simsimd.disable_capability('skylake')"
```
Using Python API with USearch
Want to use it in Python with USearch?
You can wrap the raw C function pointers SimSIMD backends into a CompiledMetric and pass it to USearch, similar to how it handles Numba's JIT-compiled code.
```py
from usearch.index import Index, CompiledMetric, MetricKind, MetricSignature
from simsimd import pointer_to_cosine

metric = CompiledMetric(
    pointer=pointer_to_cosine("f16"),
    kind=MetricKind.Cos,
    signature=MetricSignature.ArrayArraySize,
)

index = Index(256, metric=metric)
```
Using SimSIMD in Rust
To install, add the following to your Cargo.toml:
```toml
[dependencies]
simsimd = "..."
```
Before using the SimSIMD library, ensure you have imported the necessary traits and types into your Rust source file.
The library provides several traits for different distance/similarity kinds - SpatialSimilarity, BinarySimilarity, and ProbabilitySimilarity.
Spatial Similarity: Cosine and Euclidean Distances
```rust
use simsimd::SpatialSimilarity;

fn main() {
    let vector_a: Vec<f32> = vec![1.0, 2.0, 3.0];
    let vector_b: Vec<f32> = vec![4.0, 5.0, 6.0];

    // Compute the cosine similarity; `None` is returned on a length mismatch
    let cosine_similarity = f32::cosine(&vector_a, &vector_b)
        .expect("Vectors must be of the same length");
    println!("Cosine Similarity: {}", cosine_similarity);

    // Compute the squared Euclidean distance
    let sq_euclidean_distance = f32::sqeuclidean(&vector_a, &vector_b)
        .expect("Vectors must be of the same length");
    println!("Squared Euclidean Distance: {}", sq_euclidean_distance);
}
```
Spatial similarity functions are available for f64, f32, f16, and i8 types.
Dot-Products: Inner and Complex Inner Products
```rust
use simsimd::SpatialSimilarity;
use simsimd::ComplexProducts;

fn main() {
    let vector_a: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0];
    let vector_b: Vec<f32> = vec![5.0, 6.0, 7.0, 8.0];

    // Real inner product
    let inner_product = SpatialSimilarity::dot(&vector_a, &vector_b)
        .expect("Vectors must be of the same length");

    // Complex inner products, treating each consecutive pair as (real, imaginary)
    let complex_inner_product = ComplexProducts::dot(&vector_a, &vector_b)
        .expect("Vectors must be of the same length");
    let complex_conjugate_inner_product = ComplexProducts::vdot(&vector_a, &vector_b)
        .expect("Vectors must be of the same length");
}
```
Complex inner products are available for f64, f32, and f16 types.
Probability Distributions: Jensen-Shannon and Kullback-Leibler Divergences
```rust
use simsimd::ProbabilitySimilarity;

fn main() {
    let p: Vec<f32> = vec![0.1, 0.2, 0.3, 0.4];
    let q: Vec<f32> = vec![0.4, 0.3, 0.2, 0.1];

    // Method names follow the function list above
    let js = f32::jensenshannon(&p, &q).expect("Vectors must be of the same length");
    let kl = f32::kullbackleibler(&p, &q).expect("Vectors must be of the same length");
    println!("JS: {}, KL: {}", js, kl);
}
```
Probability similarity functions are available for f64, f32, and f16 types.
Binary Similarity: Hamming and Jaccard Distances
Similar to spatial distances, one can compute bit-level distance functions between slices of unsigned integers:
```rust
use simsimd::BinarySimilarity;

fn main() {
    let vector_a = &[0b1111_0000u8, 0b0000_1111, 0b1010_1010];
    let vector_b = &[0b1111_0000u8, 0b0000_1111, 0b0101_0101];

    // Hamming distance between the bits of two byte-slices
    let hamming_distance = u8::hamming(vector_a, vector_b)
        .expect("Vectors must be of the same length");

    // Jaccard (Tanimoto) distance between the bit-sets
    let jaccard_distance = u8::jaccard(vector_a, vector_b)
        .expect("Vectors must be of the same length");
}
```
Binary similarity functions are available only for u8 types.
Half-Precision Floating-Point Numbers
Rust has no native support for half-precision floating-point numbers, but SimSIMD provides an f16 type.
It has no functionality of its own: it is a transparent wrapper around u16 that can be used with half or any other half-precision library.
```rust
use simsimd::SpatialSimilarity;
use simsimd::f16 as SimF16;
use half::f16 as HalfF16;

fn main() {
    let vector_a: Vec<HalfF16> = vec![HalfF16::from_f32(1.0), HalfF16::from_f32(2.0)];
    let vector_b: Vec<HalfF16> = vec![HalfF16::from_f32(3.0), HalfF16::from_f32(4.0)];

    // The memory layouts match, so `half::f16` slices can be
    // re-interpreted as `simsimd::f16` slices
    let buffer_a: &[SimF16] =
        unsafe { std::slice::from_raw_parts(vector_a.as_ptr() as *const SimF16, vector_a.len()) };
    let buffer_b: &[SimF16] =
        unsafe { std::slice::from_raw_parts(vector_b.as_ptr() as *const SimF16, vector_b.len()) };

    let cosine_similarity = SimF16::cosine(buffer_a, buffer_b)
        .expect("Vectors must be of the same length");
}
```
Half-Precision Brain-Float Numbers
The "brain-float-16" is a popular machine learning format.
It's broadly supported in hardware and is very machine-friendly, but software support is still lagging behind.
Unlike NumPy, you can already use the bf16 datatype in SimSIMD.
Luckily, to downcast f32 to bf16 you only have to drop the last 16 bits:
```py
import numpy as np
import simsimd

a = np.random.randn(1536).astype(np.float32)
b = np.random.randn(1536).astype(np.float32)

# NumPy doesn't natively support brain-float, so we need a trick!
# Luckily, it's very easy to reduce the representation accuracy
# by simply masking the low 16-bits of our 32-bit single-precision
# numbers. We can also add `0x8000` to round the numbers.
a_f32rounded = ((a.view(np.uint32) + 0x8000) & 0xFFFF0000).view(np.float32)
b_f32rounded = ((b.view(np.uint32) + 0x8000) & 0xFFFF0000).view(np.float32)

# To represent them as brain-floats, we need to drop the second half
a_bf16 = np.right_shift(a_f32rounded.view(np.uint32), 16).astype(np.uint16)
b_bf16 = np.right_shift(b_f32rounded.view(np.uint32), 16).astype(np.uint16)

# Now we can compare the results
expected = np.inner(a_f32rounded, b_f32rounded)
result = simsimd.inner(a_bf16, b_bf16, "bf16")
```
Dynamic Dispatch in Rust
SimSIMD provides a dynamic dispatch mechanism to select the most advanced micro-kernel for the current CPU.
You can query supported backends and use the SimSIMD::capabilities function to select the best one.
```rust
use simsimd::capabilities;

fn main() {
    println!("uses neon: {}", capabilities::uses_neon());
    println!("uses sve: {}", capabilities::uses_sve());
    println!("uses haswell: {}", capabilities::uses_haswell());
    println!("uses skylake: {}", capabilities::uses_skylake());
    println!("uses ice: {}", capabilities::uses_ice());
    println!("uses genoa: {}", capabilities::uses_genoa());
    println!("uses sapphire: {}", capabilities::uses_sapphire());
    println!("uses turin: {}", capabilities::uses_turin());
}
```
Using SimSIMD in JavaScript
To install, choose one of the following options depending on your environment:
```sh
npm install --save simsimd
yarn add simsimd
pnpm add simsimd
bun install simsimd
```
The package is distributed with prebuilt binaries, but if your platform is not supported, you can build the package from the source via npm run build.
This will automatically happen unless you install the package with the --ignore-scripts flag or use Bun.
After you install it, you will be able to call the SimSIMD functions on various TypedArray variants:
```js
const { sqeuclidean, cosine, inner } = require('simsimd');

const vectorA = new Float32Array([1.0, 2.0, 3.0]);
const vectorB = new Float32Array([4.0, 5.0, 6.0]);

const distance = cosine(vectorA, vectorB);
console.log('Cosine distance:', distance);
```
Other numeric types and precision levels are supported as well.
For double-precision floating-point numbers, use Float64Array:
```js
const vectorA = new Float64Array([1.0, 2.0, 3.0]);
const vectorB = new Float64Array([4.0, 5.0, 6.0]);
const distance = cosine(vectorA, vectorB);
```
When doing machine learning and vector search with high-dimensional vectors you may want to quantize them to 8-bit integers.
You may want to project values from the $[-1, 1]$ range to the $[-127, 127]$ range and then cast them to Int8Array:
```js
const quantizedVectorA = new Int8Array(vectorA.map(v => v * 127));
const quantizedVectorB = new Int8Array(vectorB.map(v => v * 127));
const distance = cosine(quantizedVectorA, quantizedVectorB);
```
A more extreme quantization case would be to use binary vectors.
You can map all positive values to 1 and all negative values and zero to 0, packing eight values into a single byte.
After that, Hamming and Jaccard distances can be computed.
```js
const { hamming, jaccard } = require('simsimd');

const binaryVectorA = new Uint8Array([0b11110000, 0b00001111, 0b10101010]);
const binaryVectorB = new Uint8Array([0b11110000, 0b00001111, 0b01010101]);
const distance = hamming(binaryVectorA, binaryVectorB);
```
Using SimSIMD in Swift
To install, simply add the following dependency to your Package.swift:

```swift
dependencies: [
    .package(url: "https://github.com/ashvardanian/simsimd")
]
```
The package provides the most common spatial metrics for Int8, Float16, Float32, and Float64 vectors.
```swift
import SimSIMD

let vectorA: [Int8] = [1, 2, 3]
let vectorB: [Int8] = [4, 5, 6]

let cosineSimilarity = vectorA.cosine(vectorB)  // Computes the cosine similarity
let dotProduct = vectorA.dot(vectorB)           // Computes the dot product
let sqEuclidean = vectorA.sqeuclidean(vectorB)  // Computes the squared Euclidean distance
```
Using SimSIMD in C
For integration within a CMake-based project, add the following segment to your CMakeLists.txt:
```cmake
include(FetchContent)
FetchContent_Declare(
    simsimd
    GIT_REPOSITORY https://github.com/ashvardanian/simsimd.git
    GIT_SHALLOW TRUE
)
FetchContent_MakeAvailable(simsimd)
```
After that, you can use the SimSIMD library in your C code in several ways. Simplest of all, you can include the headers, and the compiler will automatically select the most recent CPU extensions that SimSIMD will use.
```c
#include <simsimd/simsimd.h>

int main() {
    simsimd_f32_t vector_a[1536];
    simsimd_f32_t vector_b[1536];
    simsimd_distance_t distance;
    simsimd_cos_f32(vector_a, vector_b, 1536, &distance);
    return 0;
}
```
Dynamic Dispatch in C
To avoid hard-coding the backend, you can rely on c/lib.c to prepackage all possible backends in one binary, and select the most recent CPU features at runtime.
That feature of the C library is called dynamic dispatch and is extensively used in the Python, JavaScript, and Rust bindings.
To test which CPU features are available on the machine at runtime, use the following APIs:
```c
int uses_dynamic_dispatch = simsimd_uses_dynamic_dispatch(); // Check if dynamic dispatch was enabled
simsimd_capability_t capabilities = simsimd_capabilities();  // Returns a bitmask

int uses_neon = simsimd_uses_neon();
int uses_sve = simsimd_uses_sve();
int uses_haswell = simsimd_uses_haswell();
int uses_skylake = simsimd_uses_skylake();
int uses_ice = simsimd_uses_ice();
int uses_genoa = simsimd_uses_genoa();
int uses_sapphire = simsimd_uses_sapphire();
```
To override compilation settings and switch between runtime and compile-time dispatch, define the following macro:
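That macro is the SIMSIMD_DYNAMIC_DISPATCH flag described under Compilation Settings below; a minimal sketch:

```c
#define SIMSIMD_DYNAMIC_DISPATCH 1 // 1 for runtime dispatch, 0 for compile-time dispatch
#include <simsimd/simsimd.h>
```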
Spatial Distances: Cosine and Euclidean Distances
```c
#include <simsimd/simsimd.h>

int main() {
    simsimd_f64_t f64s[1536];
    simsimd_f32_t f32s[1536];
    simsimd_f16_t f16s[1536];
    simsimd_i8_t i8s[1536];
    simsimd_distance_t distance;

    // Cosine distance between two vectors
    simsimd_cos_f64(f64s, f64s, 1536, &distance);
    simsimd_cos_f32(f32s, f32s, 1536, &distance);
    simsimd_cos_f16(f16s, f16s, 1536, &distance);
    simsimd_cos_i8(i8s, i8s, 1536, &distance);

    // Squared Euclidean distance between two vectors
    simsimd_l2sq_f64(f64s, f64s, 1536, &distance);
    simsimd_l2sq_f32(f32s, f32s, 1536, &distance);
    simsimd_l2sq_f16(f16s, f16s, 1536, &distance);
    simsimd_l2sq_i8(i8s, i8s, 1536, &distance);

    return 0;
}
```
Dot-Products: Inner and Complex Inner Products
```c
#include <simsimd/simsimd.h>

int main() {
    simsimd_f32_t f32s[1536];
    simsimd_f32_t f32c[1536]; // interleaved real and imaginary parts
    simsimd_distance_t distance;
    simsimd_distance_t distances[2]; // complex results have real and imaginary parts

    // Inner product between two real vectors
    simsimd_dot_f32(f32s, f32s, 1536, &distance);

    // Complex inner products: regular and conjugated
    simsimd_dot_f32c(f32c, f32c, 1536, distances);
    simsimd_vdot_f32c(f32c, f32c, 1536, distances);

    return 0;
}
```
Binary Distances: Hamming and Jaccard Distances
```c
#include <simsimd/simsimd.h>

int main() {
    simsimd_b8_t b8s[1536 / 8]; // 8 bits per word
    simsimd_distance_t distance;

    // Hamming and Jaccard distances between two bit-vectors
    simsimd_hamming_b8(b8s, b8s, 1536 / 8, &distance);
    simsimd_jaccard_b8(b8s, b8s, 1536 / 8, &distance);

    return 0;
}
```
Probability Distributions: Jensen-Shannon and Kullback-Leibler Divergences
```c
#include <simsimd/simsimd.h>

int main() {
    simsimd_f32_t p[1536], q[1536]; // probability distributions
    simsimd_distance_t distance;

    // Jensen-Shannon and Kullback-Leibler divergences
    simsimd_js_f32(p, q, 1536, &distance);
    simsimd_kl_f32(p, q, 1536, &distance);

    return 0;
}
```
Half-Precision Floating-Point Numbers
If you aim to utilize the _Float16 functionality with SimSIMD, ensure your development environment is compatible with C 11.
For other SimSIMD functionalities, C 99 compatibility will suffice.
To explicitly disable half-precision support, define the following macro before imports:
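A sketch, assuming SIMSIMD_NATIVE_F16 is the flag controlling native half-precision usage (check the headers for the exact name):

```c
#define SIMSIMD_NATIVE_F16 0 // assumed flag name: fall back to `u16`-backed emulation
#include <simsimd/simsimd.h>
```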
Compilation Settings and Debugging
SIMSIMD_DYNAMIC_DISPATCH:
By default, SimSIMD is a header-only library. But if you are running on different generations of devices, it makes sense to pre-compile the library for all supported generations at once and dispatch at runtime. This flag does just that and is used to produce the simsimd.so shared library, as well as the Python and other bindings.
SIMSIMD_TARGET_ARM (SIMSIMD_TARGET_NEON, SIMSIMD_TARGET_SVE, SIMSIMD_TARGET_SVE2, SIMSIMD_TARGET_NEON_F16, SIMSIMD_TARGET_SVE_F16, SIMSIMD_TARGET_NEON_BF16, SIMSIMD_TARGET_SVE_BF16), SIMSIMD_TARGET_X86 (SIMSIMD_TARGET_HASWELL, SIMSIMD_TARGET_SKYLAKE, SIMSIMD_TARGET_ICE, SIMSIMD_TARGET_GENOA, SIMSIMD_TARGET_SAPPHIRE, SIMSIMD_TARGET_TURIN, SIMSIMD_TARGET_SIERRA):
By default, SimSIMD automatically infers the target architecture and pre-compiles as many kernels as possible. In some cases, you may want to explicitly disable some of the kernels. Most often it's due to compiler support issues, like the lack of some recent intrinsics or low-precision numeric types. In other cases, you may want to disable some kernels to speed up the compilation process and trim the binary size.
SIMSIMD_SQRT, SIMSIMD_RSQRT, SIMSIMD_LOG:
By default, for non-SIMD backends, SimSIMD may use libc functions like sqrt and log. Those are generally very accurate, but slow, and introduce a dependency on the C standard library. To avoid that, you can override those definitions with your custom implementations, like: #define SIMSIMD_RSQRT(x) (1 / sqrt(x)).
Algorithms & Design Decisions 📚
In general there are a few principles that SimSIMD follows:
- Avoid loop unrolling.
- Never allocate memory.
- Never throw exceptions or set errno.
- Detect overflows and report the distance with a "signaling" NaN.
- Keep all function arguments the size of the pointer.
- Avoid returning from public interfaces, use out-arguments instead.
- Don't over-optimize for old CPUs and single- and double-precision floating-point numbers.
- Prioritize mixed-precision and integer operations, and new ISA extensions.
Last but not least: don't build unless there is demand for it. So if you have a specific use case, please open an issue or a pull request, and ideally, bring in more users with similar needs.
Cosine Similarity, Reciprocal Square Root, and Newton-Raphson Iteration
The cosine similarity is the most common and straightforward metric used in machine learning and information retrieval. Interestingly, there are multiple ways to shoot yourself in the foot when computing it. The cosine similarity is the cosine of the angle between two vectors, and the cosine distance is its complement:
\text{CosineSimilarity}(a, b) = \frac{a \cdot b}{\|a\| \cdot \|b\|}
\text{CosineDistance}(a, b) = 1 - \frac{a \cdot b}{\|a\| \cdot \|b\|}
In NumPy terms, SimSIMD implementation is similar to:
```py
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    ab, a2, b2 = np.dot(a, b), np.dot(a, a), np.dot(b, b) # Fused in SimSIMD
    if a2 == 0 and b2 == 0: result = 0                    # Same in SciPy
    elif ab == 0: result = 1                              # Division by zero error in SciPy
    else: result = 1 - ab / (np.sqrt(a2) * np.sqrt(b2))   # Bigger rounding error in SciPy
    return result
```
In SciPy, however, the cosine distance is computed as 1 - ab / np.sqrt(a2 * b2).
It handles the edge case of a zero and non-zero argument pair differently, resulting in a division by zero error.
It's not only less efficient, but also less accurate, given how the reciprocal square roots are computed.
The C standard library provides the sqrt function, which is generally very accurate, but slow.
The rsqrt in-hardware implementations are faster, but have different accuracy characteristics.
- SSE rsqrtps and AVX vrsqrtps: $1.5 \times 2^{-12}$ maximal relative error.
- AVX-512 vrsqrt14pd instruction: $2^{-14}$ maximal relative error.
- NEON frsqrte instruction has no documented error bounds, but can be $2^{-3}$.
To overcome the limitations of the rsqrt instruction, SimSIMD uses the Newton-Raphson iteration to refine the initial estimate for high-precision floating-point numbers.
For an estimate $x_n \approx \frac{1}{\sqrt{a}}$, one step can be defined as:
x_{n+1} = x_n \cdot (3 - a \cdot x_n \cdot x_n) / 2
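A small Python sketch of that refinement step, with an illustrative crude starting estimate (plain arithmetic, not SimSIMD code):

```py
def refine_rsqrt(a: float, x: float) -> float:
    # One Newton-Raphson step for the reciprocal square root of `a`
    return x * (3.0 - a * x * x) / 2.0

a = 2.0
x = 0.7                 # crude estimate of 1/sqrt(2) ≈ 0.707107
x = refine_rsqrt(a, x)  # ≈ 0.70707, error shrinks by roughly two orders of magnitude
```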
On 1536-dimensional inputs on an Intel Sapphire Rapids CPU, a single such iteration can reduce the relative error by 2-3 orders of magnitude:
| Datatype | NumPy Error | SimSIMD w/out Iteration | SimSIMD |
|---|---|---|---|
| bfloat16 | 1.89e-08 ± 1.59e-08 | 3.07e-07 ± 3.09e-07 | 3.53e-09 ± 2.70e-09 |
| float16 | 1.67e-02 ± 1.44e-02 | 2.68e-05 ± 1.95e-05 | 2.02e-05 ± 1.39e-05 |
| float32 | 2.21e-08 ± 1.65e-08 | 3.47e-07 ± 3.49e-07 | 3.77e-09 ± 2.84e-09 |
| float64 | 0.00e+00 ± 0.00e+00 | 3.80e-07 ± 4.50e-07 | 1.35e-11 ± 1.85e-11 |
Curved Spaces, Mahalanobis Distance, and Bilinear Quadratic Forms
The Mahalanobis distance is a generalization of the Euclidean distance, which takes into account the covariance of the data. It's very similar in its form to the bilinear form, which is a generalization of the dot product.
\text{BilinearForm}(a, b, M) = a^T M b
\text{Mahalanobis}(a, b, M) = \sqrt{(a - b)^T M^{-1} (a - b)}
Bilinear Forms can be seen as one of the most important linear algebraic operations, surprisingly missing in BLAS and LAPACK. They are versatile and appear in various domains:
- In Quantum Mechanics, the expectation value of an observable $A$ in a state $\psi$ is given by $\langle \psi | A | \psi \rangle$, which is a bilinear form.
- In Machine Learning, in Support Vector Machines (SVMs), bilinear forms define kernel functions that measure similarity between data points.
- In Differential Geometry, the metric tensor, which defines distances and angles on a manifold, is a bilinear form on the tangent space.
- In Economics, payoff functions in certain Game Theoretic problems can be modeled as bilinear forms of players' strategies.
- In Physics, interactions between electric and magnetic fields can be expressed using bilinear forms.
Broad applications aside, the lack of a specialized primitive for bilinear forms in BLAS and LAPACK means significant performance overhead. A $vector * matrix * vector$ product is a scalar, whereas its constituent parts ($vector * matrix$ and $matrix * vector$) are vectors:
- They need memory to be stored in: $O(n)$ allocation.
- The data will be written to memory and read back, wasting CPU cycles.
SimSIMD doesn't produce intermediate vector results, like a @ M @ b, but computes the bilinear form directly.
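A hedged sketch of how this looks from the Python bindings, assuming the bilinear entry point matches this signature:

```py
import numpy as np
import simsimd

a = np.random.randn(128).astype(np.float64)
b = np.random.randn(128).astype(np.float64)
M = np.random.randn(128, 128).astype(np.float64)

form = simsimd.bilinear(a, b, M)  # a @ M @ b without intermediate vectors
assert np.allclose(form, a @ M @ b)
```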
Set Intersection, Galloping, and Binary Search
The set intersection operation is generally defined as the number of elements that are common between two sets, represented as sorted arrays of integers. The most common way to compute it is a linear scan:
```c
#include <stddef.h>

// Linear-scan intersection of two sorted integer arrays
size_t intersection_size(int const *a, int const *b, size_t n, size_t m) {
    size_t i = 0, j = 0, count = 0;
    while (i < n && j < m) {
        if (a[i] < b[j]) i++;
        else if (a[i] > b[j]) j++;
        else i++, j++, count++;
    }
    return count;
}
```
Alternatively, one can use binary search to find the elements of the first array in the second one. On every step the checked region of the second array is halved, a scheme known as galloping search. It's faster, but only when large arrays of very different sizes are intersected. A third approach is to use SIMD instructions to compare multiple elements at once:
- Using string-intersection instructions on x86, like pcmpestrm.
- Using integer-intersection instructions in AVX-512, like vp2intersectd.
- Using vanilla equality checks present in all SIMD instruction sets.
After benchmarking, the last approach was chosen, as it's the most flexible and often the fastest.
Complex Dot Products, Conjugate Dot Products, and Complex Numbers
Complex dot products are a generalization of the dot product to complex numbers.
They are supported by most BLAS packages, but almost never in mixed precision.
SimSIMD defines dot and vdot kernels as:
\text{dot}(a, b) = \sum_{i=0}^{n-1} a_i \cdot b_i
\text{vdot}(a, b) = \sum_{i=0}^{n-1} a_i \cdot \bar{b_i}
Where $\bar{b_i}$ is the complex conjugate of $b_i$. Putting that into Python code for scalar arrays:
```py
def dot(a, b):
    a_real, a_imaginary = a[0::2], a[1::2]
    b_real, b_imaginary = b[0::2], b[1::2]
    ab_real, ab_imaginary = 0, 0
    for ar, ai, br, bi in zip(a_real, a_imaginary, b_real, b_imaginary):
        ab_real += ar * br - ai * bi
        ab_imaginary += ar * bi + ai * br
    return ab_real, ab_imaginary

def vdot(a, b):
    a_real, a_imaginary = a[0::2], a[1::2]
    b_real, b_imaginary = b[0::2], b[1::2]
    ab_real, ab_imaginary = 0, 0
    for ar, ai, br, bi in zip(a_real, a_imaginary, b_real, b_imaginary):
        ab_real += ar * br + ai * bi
        ab_imaginary += ai * br - ar * bi
    return ab_real, ab_imaginary
```
Logarithms in Kullback-Leibler & Jensen–Shannon Divergences
The Kullback-Leibler divergence is a measure of how one probability distribution diverges from a second, expected probability distribution. Jensen-Shannon divergence is a symmetrized and smoothed version of the Kullback-Leibler divergence, which can be used as a distance metric between probability distributions.
\text{KL}(P || Q) = \sum_{i} P(i) \log \frac{P(i)}{Q(i)}
\text{JS}(P, Q) = \frac{1}{2} \text{KL}(P || M) + \frac{1}{2} \text{KL}(Q || M), M = \frac{P + Q}{2}
Both functions are defined for non-negative numbers, and the logarithm is a key part of their computation.
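A short Python example using the kullbackleibler and jensenshannon functions listed earlier, with inputs normalized into valid probability distributions:

```py
import numpy as np
import simsimd

p = np.abs(np.random.randn(1536)).astype(np.float32)
q = np.abs(np.random.randn(1536)).astype(np.float32)
p /= p.sum()  # normalize to sum to 1
q /= q.sum()

kl = simsimd.kullbackleibler(p, q)
js = simsimd.jensenshannon(p, q)
```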
Auto-Vectorization & Loop Unrolling
On the Intel Sapphire Rapids platform, SimSIMD was benchmarked against auto-vectorized code using GCC 12.
GCC handles single-precision float but might not be the best choice for int8 and _Float16 arrays, which have been part of the C language since 2011.
| Kind | GCC 12 f32 | GCC 12 f16 | SimSIMD f16 | f16 improvement |
|---|---|---|---|---|
| Inner Product | 3,810 K/s | 192 K/s | 5,990 K/s | 31 x |
| Cosine Distance | 3,280 K/s | 336 K/s | 6,880 K/s | 20 x |
| Euclidean Distance ² | 4,620 K/s | 147 K/s | 5,320 K/s | 36 x |
| Jensen-Shannon Divergence | 1,180 K/s | 18 K/s | 2,140 K/s | 118 x |
Dynamic Dispatch
Most popular software is precompiled and distributed with fairly conservative CPU optimizations to ensure compatibility with older hardware. Database management platforms like ClickHouse and web browsers like Google Chrome need to run on billions of devices, and they can't afford to be picky about CPU features. For such users, SimSIMD provides a dynamic dispatch mechanism, which selects the most advanced micro-kernel for the current CPU at runtime.
You can compile SimSIMD on an old CPU, like Intel Haswell, and run it on a new one, like AMD Genoa, and it will automatically use the most advanced instructions available.
The reverse is also true: you can compile on a new CPU and run on an old one, and it will automatically fall back to the most basic instructions.
Moreover, the very first time you probe for CPU capabilities with simsimd_capabilities(), it initializes the dynamic dispatch mechanism, and all subsequent calls will be faster and won't face race conditions in multi-threaded environments.
Target Specific Backends
SimSIMD exposes all kernels for all backends, and you can select the most advanced one for the current CPU without relying on built-in dispatch mechanisms.
That's handy for testing and benchmarking, but also in case you want to dispatch a very specific kernel for a very specific CPU, bypassing SimSIMD assignment logic.
All of the function names follow the same pattern: simsimd_{function}_{type}_{backend}.
- The backend can be serial, haswell, skylake, ice, genoa, sapphire, turin, neon, or sve.
- The type can be f64, f32, f16, bf16, f64c, f32c, f16c, bf16c, i8, or b8.
- The function can be dot, vdot, cos, l2sq, hamming, jaccard, kl, js, or intersect.
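Putting the pattern together, a sketch of calling one specific backend directly, assuming the skylake backend was compiled in (otherwise pick another suffix from the list above):

```c
#include <simsimd/simsimd.h>

int main() {
    simsimd_f32_t vector_a[1536], vector_b[1536];
    simsimd_distance_t distance;
    // Explicitly invoke the AVX-512 Skylake kernel for `f32` cosine distance,
    // bypassing SimSIMD's own dispatch logic
    simsimd_cos_f32_skylake(vector_a, vector_b, 1536, &distance);
    return 0;
}
```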
To avoid hard-coding the backend, you can use the simsimd_metric_punned_t to pun the function pointer and the simsimd_capabilities function to get the available backends at runtime.
To match all the function names, consider a RegEx:
SIMSIMD_PUBLIC void simsimd_\w+_\w+_\w+\(
On Linux, you can use the following command to list all unique functions:
```sh
$ grep -roP 'SIMSIMD_PUBLIC void simsimd_\w+_\w+_\w+\(' include/simsimd/ | sort | uniq
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_hamming_b8_haswell(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_hamming_b8_ice(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_hamming_b8_neon(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_hamming_b8_serial(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_hamming_b8_sve(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_jaccard_b8_haswell(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_jaccard_b8_ice(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_jaccard_b8_neon(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_jaccard_b8_serial(
> include/simsimd/binary.h:SIMSIMD_PUBLIC void simsimd_jaccard_b8_sve(
...
```