Tree-based indices for efficient nearest neighbor search
EXPERIMENTAL: These tree implementations are currently experimental and have known limitations on large datasets and in certain configurations. For production use, prefer the HNSW, IVF, or LSH indices instead.
§Known Limitations
- Tree construction uses recursion with conservative depth limits (20 levels)
- Best suited for moderate-sized datasets (< 100K vectors)
- May encounter stack overflow on systems with very small stack sizes
- Performance degrades in high-dimensional spaces (> 128 dimensions)
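A common workaround for the stack-size limitation is to run the recursive construction on a thread with an explicitly sized stack. A minimal sketch using only `std` (the `recurse` helper here is a hypothetical stand-in for the recursive build, not part of this module's API):

```rust
use std::thread;

// Hypothetical stand-in for deeply recursive tree construction;
// returns the recursion depth reached.
fn recurse(n: u64) -> u64 {
    if n == 0 { 0 } else { 1 + recurse(n - 1) }
}

// Run the recursive work on a thread with an explicit 32 MiB stack,
// sidestepping overflow on platforms with small default stack sizes.
fn build_on_big_stack(levels: u64) -> u64 {
    thread::Builder::new()
        .stack_size(32 * 1024 * 1024)
        .spawn(move || recurse(levels))
        .expect("failed to spawn build thread")
        .join()
        .expect("recursive build panicked")
}

fn main() {
    let depth = build_on_big_stack(100_000);
    println!("recursed {depth} levels without overflow");
}
```

The same pattern applies to any of the tree builders below when the dataset pushes recursion near the depth limit.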
§Recommended Alternatives
- For most use cases: Use HnswIndex (Hierarchical Navigable Small World)
- For very large datasets: Use IVFIndex (Inverted File Index)
- For approximate search: Use LSHIndex (Locality Sensitive Hashing)
This module implements various tree data structures:
- Ball Tree: Efficient for arbitrary metrics
- KD-Tree: Classic space partitioning tree
- VP-Tree: Vantage point tree for metric spaces
- Cover Tree: Navigating nets with provable bounds
- Random Projection Trees: Randomized space partitioning
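To illustrate the space-partitioning idea shared by these structures, here is a minimal 2-D KD-tree with pruned nearest-neighbor search. This is a self-contained sketch only, not the API of this module's `KdTree` (which supports higher dimensions and configurable metrics via `TreeIndexConfig` and `DistanceMetric`):

```rust
// A KD-tree node: one point plus subtrees on either side of the
// splitting plane along the current axis.
struct Node {
    point: [f64; 2],
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

// Build by splitting on the median along the current axis,
// alternating axes at each level.
fn build(mut pts: Vec<[f64; 2]>, axis: usize) -> Option<Box<Node>> {
    if pts.is_empty() {
        return None;
    }
    pts.sort_by(|a, b| a[axis].partial_cmp(&b[axis]).unwrap());
    let mid = pts.len() / 2;
    let right = pts.split_off(mid + 1);
    let point = pts.pop().unwrap(); // median point
    Some(Box::new(Node {
        point,
        left: build(pts, 1 - axis),
        right: build(right, 1 - axis),
    }))
}

fn sq_dist(a: [f64; 2], b: [f64; 2]) -> f64 {
    (a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2)
}

// Nearest-neighbor search: descend the near side first, then visit the
// far side only if the splitting plane is closer than the current best.
fn nearest(node: &Option<Box<Node>>, query: [f64; 2], axis: usize, best: &mut ([f64; 2], f64)) {
    if let Some(n) = node {
        let d = sq_dist(n.point, query);
        if d < best.1 {
            *best = (n.point, d);
        }
        let diff = query[axis] - n.point[axis];
        let (near, far) = if diff < 0.0 { (&n.left, &n.right) } else { (&n.right, &n.left) };
        nearest(near, query, 1 - axis, best);
        if diff * diff < best.1 {
            nearest(far, query, 1 - axis, best);
        }
    }
}

fn main() {
    let pts = vec![[2.0, 3.0], [5.0, 4.0], [9.0, 6.0], [4.0, 7.0], [8.0, 1.0], [7.0, 2.0]];
    let tree = build(pts, 0);
    let mut best = ([f64::NAN, f64::NAN], f64::INFINITY);
    nearest(&tree, [9.0, 2.0], 0, &mut best);
    println!("nearest to (9, 2): {:?}", best.0);
}
```

The pruning step is what gives KD-trees their advantage in low dimensions; as dimensionality grows, the plane test almost never excludes a subtree, which is why performance degrades beyond roughly 128 dimensions.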
Structs§
- BallTree - Ball Tree implementation
- CoverTree - Cover Tree implementation
- KdTree - KD-Tree implementation
- RandomProjectionTree - Random Projection Tree implementation
- TreeIndex - Unified tree index interface
- TreeIndexConfig - Configuration for tree-based indices
- VpTree - VP-Tree (Vantage Point Tree) implementation
Enums§
- DistanceMetric - Distance metrics
- TreeType - Available tree types