# DiskANN Implementation in Rust

A Rust implementation of DiskANN (Disk-based Approximate Nearest Neighbor search) using the Vamana graph algorithm. This project provides an efficient and scalable solution for large-scale vector similarity search with minimal memory footprint.

## When to Use diskann-rs

| Use diskann-rs when... | Use in-memory indexes (hnsw_rs) when... |
|---|---|
| Index is larger than available RAM | Index fits comfortably in RAM |
| You need incremental updates without rebuilding | Build time is critical (one-time cost) |
| Memory-constrained environments (containers, edge) | Maximum recall needed (98%+) |
| Multiple large indexes on same machine | Single index, dedicated resources |
| Cost-sensitive deployments | Latency-critical applications |

TL;DR: diskann-rs trades ~60% slower build time for 6-10x lower memory usage and 15x faster incremental updates.

## Key Features

| Feature | Description |
|---|---|
| Incremental Updates | Add/delete vectors without rebuilding the entire index |
| Filtered Search | Query with metadata predicates (e.g., category filters) |
| SIMD Acceleration | Optimized distance calculations (AVX2, SSE4.1, NEON) |
| Product Quantization | Compress vectors up to 64x with PQ encoding |
| Memory-Mapped I/O | Single-file storage with minimal RAM footprint |
| Parallel Processing | Concurrent index building and batch queries |

## Quick Start

### Basic Index Operations

```rust
use anndists::dist::DistL2;
use diskann_rs::{DiskANN, DiskAnnParams};

// Build an on-disk index with default build parameters.
let vectors: Vec<Vec<f32>> = vec![vec![0.1, 0.2, 0.3], vec![0.4, 0.5, 0.6]];
let index = DiskANN::<DistL2>::build_index_default(&vectors, DistL2 {}, "index.db")?;

// Search: k = 10 neighbors, beam width = 256.
let query = vec![0.1, 0.2, 0.4];
let neighbors: Vec<u32> = index.search(&query, 10, 256);
```

### Incremental Updates (No Rebuild Required)

```rust
use anndists::dist::DistL2;
use diskann_rs::IncrementalDiskANN;

// Build the base index once...
let vectors = vec![vec![0.0; 128]; 1000];
let index = IncrementalDiskANN::<DistL2>::build_default(&vectors, "index.db")?;

// ...then add and delete without rebuilding.
let new_vectors = vec![vec![1.0; 128]; 100];
index.add_vectors(&new_vectors)?;
index.delete_vectors(&[0, 1, 2])?; // deletions are tombstoned, not rewritten

// Periodically fold the delta layer and tombstones back into a fresh file.
if index.should_compact() {
    index.compact("index_v2.db")?;
}
```

### Filtered Search (Metadata Predicates)

```rust
use anndists::dist::DistL2;
use diskann_rs::{FilteredDiskANN, Filter};

// Attach per-vector metadata labels (u64 values) at build time.
let vectors = vec![vec![0.0; 128]; 1000];
let labels: Vec<Vec<u64>> = (0..1000).map(|i| vec![i % 10]).collect();
let index = FilteredDiskANN::<DistL2>::build(&vectors, &labels, "filtered.db")?;

// Restrict results with a label predicate.
let query = vec![0.0; 128];
let filter = Filter::label_eq(0, 5);
let results = index.search_filtered(&query, 10, 128, &filter);

// Predicates compose with Filter::and.
let filter = Filter::and(vec![
    Filter::label_eq(0, 5),
    Filter::label_range(1, 10, 100),
]);
```

### Product Quantization (64x Compression)

```rust
use diskann_rs::pq::{ProductQuantizer, PQConfig};

// 8 subspaces x 256 centroids: each vector compresses to 8 one-byte codes.
// `vectors` and `query` are the same as in the examples above.
let config = PQConfig {
    num_subspaces: 8,
    num_centroids: 256,
    ..Default::default()
};
let pq = ProductQuantizer::train(&vectors, config)?;
let codes: Vec<Vec<u8>> = pq.encode_batch(&vectors);

// Precompute a per-query distance table, then score compressed codes cheaply.
let table = pq.create_distance_table(&query);
let dist = pq.distance_with_table(&table, &codes[0]);
```

### SIMD-Accelerated Distance

```rust
use diskann_rs::{DiskANN, SimdL2, simd_info};

// Report the SIMD instruction set in use (AVX2, SSE4.1, or NEON).
println!("{}", simd_info());
let index = DiskANN::<SimdL2>::build_index_default(&vectors, SimdL2, "index.db")?;

// The raw distance kernels are also exposed directly.
use diskann_rs::simd::{l2_squared, dot_product, cosine_distance};
let dist = l2_squared(&vec_a, &vec_b);
```

## Performance

### Why diskann-rs? Memory-Mapped I/O

Unlike in-memory indexes that require loading the entire graph into RAM, diskann-rs uses memory-mapped files. The OS loads only the pages you access, making it ideal for large-scale deployments:

| Workload | diskann-rs | hnsw_rs | Savings |
|---|---|---|---|
| Light (10 queries) | 90 MB | 896 MB | 10x less RAM |
| Medium (100 queries) | 136 MB | 896 MB | 6.6x less RAM |
| Heavy (1K queries) | 147 MB | 896 MB | 6x less RAM |
| Stress (5K queries) | 139 MB | 896 MB | 6.4x less RAM |

Tested with 200K vectors, 128 dimensions. hnsw_rs must hold the full index in RAM; diskann-rs loads pages on demand.

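A minimal sketch of the on-demand paging idea, using the memmap2 crate directly rather than any diskann-rs API (the 1 MiB data offset follows the file layout described under Architecture below):

```rust
use std::fs::File;
use memmap2::Mmap;

fn main() -> std::io::Result<()> {
    let file = File::open("index.db")?;
    // Mapping is cheap: nothing is read from disk yet.
    let mmap = unsafe { Mmap::map(&file)? };

    // Touching a slice faults in only the pages that back it, so resident
    // memory tracks the query working set rather than the file size.
    let dim = 128;
    let first_vector = &mmap[1 << 20..(1 << 20) + dim * 4]; // vector region starts at 1 MiB
    println!("paged in {} bytes for one vector", first_vector.len());
    Ok(())
}
```
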
### Benchmark Comparisons (vs hnsw_rs)

| Metric | diskann-rs | hnsw_rs | Winner |
|---|---|---|---|
| QPS at 93% recall | 586 | 170 | diskann-rs (3.4x) |
| Add vectors | 31,000 vec/s | 2,000 vec/s | diskann-rs (15x) |
| Delete vectors | Instant (tombstone) | Full rebuild | diskann-rs |
| Build time | 33s / 100K | 21s / 100K | hnsw_rs (1.6x) |
| Max recall | 97.5% | 99.2% | hnsw_rs |

### Dataset Benchmarks

Benchmarks on Apple M4 Max:

| Dataset | Vectors | Build Time | QPS | Recall@10 |
|---|---|---|---|---|
| SIFT-1M | 1,000,000 | 295s | 8,590 | 99.6% |
| Fashion-MNIST | 60,000 | 111s | 18,000 | 98.8% |
| Random-50K | 50,000 | 38s | 2,200 | 85.6% |

### Memory Efficiency

- ~330 MB RAM for a 2 GB index (16% of file size)
- Product Quantization: 64x compression (512 bytes → 8 bytes per vector); the arithmetic is sketched below

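The 64x figure follows directly from the PQConfig shown earlier (8 subspaces, 256 centroids) applied to f32 vectors; the 128-dimension choice here is only an assumption for illustration:

```rust
fn main() {
    let dim = 128;                                    // assumed vector dimension
    let raw_bytes = dim * std::mem::size_of::<f32>(); // 128 * 4 = 512 bytes per vector
    let pq_bytes = 8;                                 // 8 subspaces, 256 centroids -> 1 byte (u8) each
    assert_eq!(raw_bytes / pq_bytes, 64);             // 512 / 8 = 64x compression
}
```
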
## Architecture

### File Layout

```text
[ metadata_len:u64 ][ metadata (bincode) ][ padding to 1 MiB ]
[ vectors (n × dim × f32) ][ adjacency (n × max_degree × u32) ]
```

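Given that layout, record offsets are plain arithmetic. A minimal sketch, assuming the vector region begins at the 1 MiB boundary shown above (function names are illustrative, not the crate's API):

```rust
// Offset arithmetic implied by the layout above (illustrative only).
const DATA_START: u64 = 1 << 20; // vector region starts after the padded 1 MiB header

fn vector_offset(i: u64, dim: u64) -> u64 {
    DATA_START + i * dim * 4 // each component is an f32 (4 bytes)
}

fn adjacency_offset(i: u64, n: u64, dim: u64, max_degree: u64) -> u64 {
    // the adjacency region starts right after all n vectors
    DATA_START + n * dim * 4 + i * max_degree * 4 // each neighbor id is a u32 (4 bytes)
}
```
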
### Incremental Updates (Delta Layer)

```text
┌─────────────────────────────────────────────────────────┐
│                   IncrementalDiskANN                    │
├─────────────────────────────────────────────────────────┤
│ ┌────────────────┐  ┌────────────────┐  ┌────────────┐  │
│ │   Base Index   │  │  Delta Layer   │  │ Tombstones │  │
│ │  (mmap file)   │  │  (in-memory)   │  │  (BitSet)  │  │
│ │                │  │  + mini-graph  │  │            │  │
│ └────────────────┘  └────────────────┘  └────────────┘  │
└─────────────────────────────────────────────────────────┘
```

Search: query base → merge delta → filter tombstones → return

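A rough sketch of that search path with illustrative types (the real merge lives inside IncrementalDiskANN and uses a bitset for tombstones; a HashSet keeps this sketch dependency-free):

```rust
use std::collections::HashSet;

// "query base → merge delta → filter tombstones → return", as plain Rust.
fn merge_results(
    base_hits: Vec<(u32, f32)>,  // (id, distance) pairs from the mmap'd base graph
    delta_hits: Vec<(u32, f32)>, // results from the in-memory delta mini-graph
    tombstones: &HashSet<u32>,   // ids deleted since the last compaction
    k: usize,
) -> Vec<(u32, f32)> {
    let mut candidates = base_hits;
    candidates.extend(delta_hits);                          // merge delta
    candidates.retain(|(id, _)| !tombstones.contains(id));  // filter tombstones
    candidates.sort_by(|a, b| a.1.total_cmp(&b.1));         // nearest first
    candidates.truncate(k);                                 // return top-k
    candidates
}
```
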
## Parameters

### Build Parameters

| Parameter | Default | Description |
|---|---|---|
| `max_degree` | 64 | Maximum neighbors per node (32-64) |
| `build_beam_width` | 128 | Construction beam width (128-256) |
| `alpha` | 1.2 | Pruning diversity factor (1.2-2.0) |

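For intuition about `alpha`, here is a condensed sketch of Vamana-style robust pruning (as described in the DiskANN/Vamana literature, not the crate's exact code): a chosen neighbor "covers" nearby candidates, and alpha controls how aggressively covered candidates are discarded.

```rust
// Sketch of Vamana-style alpha pruning (simplified, illustrative only).
// `dist(a, b)` returns the distance between vector ids a and b.
fn robust_prune(
    p: u32,
    mut candidates: Vec<u32>,
    max_degree: usize,
    alpha: f32,
    dist: impl Fn(u32, u32) -> f32,
) -> Vec<u32> {
    let mut neighbors = Vec::new();
    while !candidates.is_empty() && neighbors.len() < max_degree {
        // Take the remaining candidate closest to p as the next neighbor.
        let (i, &chosen) = candidates
            .iter()
            .enumerate()
            .min_by(|a, b| dist(p, *a.1).total_cmp(&dist(p, *b.1)))
            .unwrap();
        candidates.swap_remove(i);
        neighbors.push(chosen);
        // Prune candidates "covered" by the new edge: v is dropped when
        // alpha * d(chosen, v) <= d(p, v). Larger alpha prunes less, keeping
        // more diverse, longer-range edges (hence "pruning diversity factor").
        candidates.retain(|&v| alpha * dist(chosen, v) > dist(p, v));
    }
    neighbors
}
```
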
### Search Parameters

| Parameter | Typical | Trade-off |
|---|---|---|
| `beam_width` | 128-512 | Higher = better recall, slower |
| `k` | 10-100 | Number of neighbors to return |

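In code the trade-off is simply the third argument to `search`, reusing the index and query from Quick Start (the values here are illustrative):

```rust
// Wider beams visit more graph nodes per query: higher recall, higher latency.
let fast: Vec<u32> = index.search(&query, 10, 128);     // latency-oriented
let accurate: Vec<u32> = index.search(&query, 10, 512); // recall-oriented
```
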
## Building and Testing

```bash
cargo build --release
cargo test --lib
cargo bench --bench benchmark

# Opt in to the larger benchmark suite via an environment variable
DISKANN_BENCH_LARGE=1 cargo bench --bench benchmark
```

## Comparison with Other Libraries

### vs rust-diskann

| Feature | diskann-rs | rust-diskann |
|---|---|---|
| Incremental updates | Yes | No |
| Filtered search | Yes | No |
| Product Quantization | Yes | No |
| SIMD acceleration | Yes | Uses anndists |
| Memory-mapped I/O | Yes | Yes |
| Generic vector types | f32 | f32, u64, etc. |

### vs hnsw_rs

| Feature | diskann-rs | hnsw_rs |
|---|---|---|
| Memory usage | 6-10x lower | Full index in RAM |
| Incremental add | 15x faster | Slower |
| Incremental delete | Instant (tombstone) | Full rebuild |
| Build time | 1.6x slower | Faster |
| Max recall | ~97% | ~99% |
| Disk-based | Yes (mmap) | No (in-memory only) |

### vs Faiss (C++/Python)

| Feature | diskann-rs | Faiss |
|---|---|---|
| Language | Pure Rust | C++/Python |
| Deployment | Single binary | Complex deps |
| Memory-mapped | Native | Limited |
| GPU support | No | Yes |
| Index types | Vamana graph | IVF, HNSW, PQ, etc. |
| Maturity | Growing | Production-proven |

When to choose diskann-rs: pure Rust deployment, memory-constrained environments, need for incremental updates.

When to choose Faiss: GPU acceleration, specialized index types, Python ecosystem.

## License

MIT License - see LICENSE for details.

## References