DiskANN: On-disk graph-based approximate nearest neighbor search 🦀
A Rust implementation of DiskANN (Disk-based Approximate Nearest Neighbor search) using the Vamana graph algorithm. This project provides an efficient and scalable solution for large-scale vector similarity search with minimal memory footprint, as an alternative to the widely used in-memory HNSW algorithm.
Key algorithm
This implementation follows the DiskANN paper's approach:
- Using the Vamana graph algorithm for index construction, pruning and refinement (in parallel)
- Memory-mapping the index file for efficient disk-based access (via memmap2)
- Implementing beam search with medoid entry points (in parallel); a minimal sketch of the search loop is shown after this list
- Supporting Euclidean, Cosine, Hamming and other distance metrics via a generic distance trait
- Maintaining minimal memory footprint during search operations
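To make the beam-search step concrete, here is a minimal, hedged sketch of the greedy search loop over the graph. The helper closures `neighbors` and `dist` are stand-ins for reading the mapped adjacency lists and computing query distances; this is an illustration of the technique, not this crate's actual implementation.

```rust
use std::collections::HashSet;

// Hedged sketch of greedy beam search from a medoid entry point (illustration only).
// `neighbors(v)` stands for reading node v's adjacency list from the memory-mapped
// file; `dist(v)` is the query-to-v distance under the chosen metric.
fn beam_search(
    medoid: u32,
    beam_width: usize,
    k: usize,
    neighbors: impl Fn(u32) -> Vec<u32>,
    dist: impl Fn(u32) -> f32,
) -> Vec<u32> {
    let mut beam: Vec<(f32, u32)> = vec![(dist(medoid), medoid)]; // kept sorted by distance
    let mut expanded: HashSet<u32> = HashSet::new();
    loop {
        // closest candidate that has not been expanded yet
        let Some(pos) = beam.iter().position(|(_, v)| !expanded.contains(v)) else {
            break; // every candidate is expanded: the beam has converged
        };
        let v = beam[pos].1;
        expanded.insert(v);
        for n in neighbors(v) {
            if !beam.iter().any(|&(_, u)| u == n) {
                beam.push((dist(n), n));
            }
        }
        beam.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());
        beam.truncate(beam_width); // keep only the beam_width closest candidates
    }
    // the k closest nodes in the converged beam are the result
    beam.into_iter().take(k).map(|(_, id)| id).collect()
}
```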
Features
- Single-file storage: All index data is stored in one memory-mapped file laid out as `[ metadata_len:u64 ][ metadata (bincode) ][ padding up to vectors_offset ][ vectors (num * dim * f32) ][ adjacency (num * max_degree * u32) ]`. `vectors_offset` is a fixed 1 MiB gap by default (see the offset sketch after this list).
- Vamana graph construction: Builds an approximate nearest-neighbor graph with robust α-pruning and multi-pass refinement. The default build uses at least two passes: a first diversification pass at α = 1.0 and a second refinement pass at the user-specified α (default 1.2).
- Parallel batched graph refinement: Uses rayon to parallelize candidate generation and batched symmetrization/re-pruning during construction for high build throughput.
- Build-optimized data layout: Uses flat contiguous storage instead of nested `Vec<Vec<...>>` during construction to improve cache locality and reduce allocation overhead.
- Memory-mapped on-disk index: Stores vectors and fixed-degree adjacency lists in a single file and memory-maps it for low-overhead loading and search.
- Beam-search query algorithm: Uses a medoid entry point and beam search over the graph, typically visiting only a small fraction of the indexed vectors.
- Generic over vector element type and distance: Works with a generic element type `T` and any `anndists::Distance`, supporting use cases beyond standard floating-point ANN.
- Distance metrics: Support for Euclidean, Cosine, Hamming and other metrics via `anndists`; the generic distance trait can be extended to further distances.
- Medoid-based entry points: Uses an approximate medoid as the default search entry point.
- Parallel query processing: Supports concurrent queries with rayon; depending on access patterns, this can increase page activity in the memory-mapped index.
- Minimal memory footprint: Keeps RAM usage well below the full index size by relying on mmap rather than fully loading the index into memory.
- Extensive benchmarks: Speed, accuracy and memory-consumption benchmarks against HNSW (both in-memory and on-disk).
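For concreteness, here is a minimal sketch of how byte offsets into the single index file follow from the layout described above. The constant and function names are illustrative only (they assume f32 vectors and u32 adjacency entries, as in the layout), not this crate's actual reader code.

```rust
// Illustration of the single-file layout described above (not the crate's reader).
const VECTORS_OFFSET: u64 = 1 << 20; // default: vectors start after a 1 MiB gap

/// Byte offset of vector `i` (dim f32 values, 4 bytes each).
fn vector_offset(i: u64, dim: u64) -> u64 {
    VECTORS_OFFSET + i * dim * 4
}

/// Byte offset of the fixed-degree adjacency list of node `i`
/// (max_degree u32 IDs, 4 bytes each), placed right after all vectors.
fn adjacency_offset(i: u64, num: u64, dim: u64, max_degree: u64) -> u64 {
    VECTORS_OFFSET + num * dim * 4 + i * max_degree * 4
}
```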
Visualization of Vamana graph build and search
The Vamana graph build is visualized in 2D with L2 distance; see the diskann-vamana-viz crate for details.
For search, the final graph was used: the path from the entry node (red) to the nearest node (green) for the query (pink) is highlighted in orange.
Usage in Rust 🦀
Building a New Index
use anndists::dist::DistL2; // or your own Distance types
use diskann_rs::{build_index_default, build_index_with_params, DiskAnnParams}; // crate path assumed
// Your vectors to index (all rows must share the same dimension)
let vectors: Vec<Vec<f32>> = vec![vec![0.1, 0.2, 0.3], vec![0.4, 0.5, 0.6]];
// Easiest: build with defaults (M=64, L_build=128, alpha=1.2);
// the (path, vectors) argument order is assumed for illustration
let index = build_index_default("index.db", &vectors)?;
// Or: custom construction parameters (field names follow the parameter-tuning section below)
let params = DiskAnnParams { max_degree: 32, build_beam_width: 128, alpha: 1.2 };
let index2 = build_index_with_params("index2.db", &vectors, params)?;
Opening an Existing Index
use anndists::dist::DistL2;
use diskann_rs::{open_index_default_metric, open_index_with, DiskANN}; // crate path assumed
// If you built with DistL2 and defaults (type parameters assumed for illustration):
let index: DiskANN<f32, DistL2> = open_index_default_metric("index.db")?;
// Or, explicitly provide the distance you built with:
let index2: DiskANN<f32, DistL2> = open_index_with("index.db", DistL2)?;
Searching the Index
use anndists::dist::DistL2;
use diskann_rs::{open_index_default_metric, DiskANN}; // crate path assumed
let index: DiskANN<f32, DistL2> = open_index_default_metric("index.db")?;
let query: Vec<f32> = vec![0.0; 128]; // length must match the indexed dim
let k = 10;
let beam = 256; // search beam width
// (id, distance) pairs; the (query, k, beam) argument order is assumed for illustration
let hits: Vec<(u32, f32)> = index.search_with_dists(&query, k, beam);
// `neighbors` are the IDs of the k nearest vectors
let neighbors: Vec<u32> = index.search(&query, k, beam);
Parallel Search
use anndists::dist::DistL2;
use diskann_rs::{open_index_default_metric, DiskANN}; // crate path assumed
use rayon::prelude::*;
let index: DiskANN<f32, DistL2> = open_index_default_metric("index.db")?;
// Suppose you have a batch of queries
let query_batch: Vec<Vec<f32>> = /* ... */;
let (k, beam) = (10, 256);
let results: Vec<Vec<u32>> = query_batch
    .par_iter()
    .map(|q| index.search(q, k, beam))
    .collect();
Space and time complexity analysis
- Index Build Time: O(n * max_degree * beam_width)
- Disk Space: n * (dimension * 4 + max_degree * 4) bytes
- Search Time: O(beam_width * log n) - typically visits < 1% of dataset
- Memory Usage: O(beam_width) during search
- Query Throughput: Scales linearly with CPU cores
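For example, applying the disk-space formula above to the SIFT 1M dataset (n = 1,000,000, dimension = 128) with max_degree = 64 gives roughly 1,000,000 × (128 × 4 + 64 × 4) bytes ≈ 768 MB (about 732 MiB) on disk, plus the small metadata header and the fixed padding up to `vectors_offset`.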
Parameter Tuning
Index Building Parameters
- `max_degree`: 32-64 for most datasets
- `build_beam_width`: 128-256 for good graph quality
- `alpha`: 1.2-2.0 (higher = more diverse neighbors)
Index Search Parameters
- `beam_width`: 128 or larger (trade-off between speed and recall); higher beam_width = better recall but slower search (see the sketch below)
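As a rough illustration of how these ranges plug into the API shown above (the `DiskAnnParams` field names mirror the lists above; exact fields and signatures in this crate may differ):

```rust
// Illustrative only: field names mirror the tuning lists above.
let params = DiskAnnParams { max_degree: 48, build_beam_width: 192, alpha: 1.4 };
let index = build_index_with_params("tuned.db", &vectors, params)?;

// At query time, raise beam_width until recall stops improving for your workload.
for beam in [128, 256, 512] {
    let hits = index.search_with_dists(&query, 10, beam);
    // compare `hits` against ground truth to estimate recall at this beam width
}
```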
Index memory-mapping
When host RAM is not large enough to map the entire database file, it is possible to build the database in several smaller pieces (random split). Users can then search the query against each piece and collect the results from every piece before merging them (ranked by distance). This is equivalent to the single big-database approach (as long as K' >= K) but requires much less RAM for memory-mapping; a minimal merge sketch is shown below. In practice, the Microsoft Azure Cosmos DB team found that this database-shard idea can improve recall. Intuitively, with fewer data points per piece, we can use a larger M and build beam width to further improve accuracy. See their paper here.
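A minimal sketch of this shard-and-merge pattern, assuming each shard is its own DiskANN index file searched with `search_with_dists` as in the usage examples. The helper function, the choice of K' = k candidates per shard, and the ID-offset scheme are illustrative assumptions, not part of this crate's API.

```rust
use rayon::prelude::*;

// Hypothetical helper: search every shard and merge the candidates by distance.
// Assumes `search_with_dists(&query, k, beam)` returns (id, distance) pairs as in
// the search example above; shard-local IDs are offset so they stay globally unique.
fn search_sharded(
    shards: &[DiskANN<f32, DistL2>],
    id_offsets: &[u64],
    query: &[f32],
    k: usize,
    beam: usize,
) -> Vec<(u64, f32)> {
    let mut merged: Vec<(u64, f32)> = shards
        .par_iter()
        .zip(id_offsets.par_iter())
        .flat_map(|(shard, offset)| {
            shard
                .search_with_dists(query, k, beam) // ask each shard for K' = k candidates
                .into_iter()
                .map(|(id, d)| (*offset + id as u64, d))
                .collect::<Vec<_>>()
        })
        .collect();
    // Rank the union of candidates by distance and keep the global top-k.
    merged.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    merged.truncate(k);
    merged
}
```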
Building and Testing
# Build the library
cargo build --release
# Run tests
cargo test --release
# Run demo
cargo run --release --example demo
# Run performance test
cargo run --release --example perf_test
# test MNIST fashion dataset
cargo run --release --example diskann_mnist
# test SIFT dataset
cargo run --release --example diskann_sift
Examples
See the examples/ directory for:
- `demo.rs`: Demo with 100k vectors
- `perf_test.rs`: Performance benchmarking with 1M vectors
- `diskann_mnist.rs`: Performance benchmarking with the MNIST fashion dataset (60K)
- `diskann_sift.rs`: Performance benchmarking with the SIFT 1M dataset
- `bigann.rs`: Performance benchmarking with the SIFT 10M dataset
- `hnsw_sift.rs`: Comparison with in-memory HNSW
Benchmark against in-memory HNSW (hnsw_rs crate) on the SIFT 1 million dataset
Results:
## DiskANN, sift1m, M4 Max
### sift1m, hnsw_rs, M4 Max
License
This project is licensed under the MIT License - see the LICENSE file for details.
References
Jayaram Subramanya, S., Devvrit, F., Simhadri, H.V., Krishnaswamy, R. and Kadekodi, R., 2019. DiskANN: Fast accurate billion-point nearest neighbor search on a single node. Advances in Neural Information Processing Systems, 32.
Acknowledgments
This implementation is based on the DiskANN paper and the official Microsoft implementation. It was also largely inspired by the implementation here.