# OmenDB
Embedded vector database for Python and Node.js. No server, no setup, just install.
## Quick Start

The basic workflow:
- Create a persistent database (backed by a `./mydb.omen` file)
- Add vectors with metadata
- Run a vector search
- Run a filtered search (vector search restricted by metadata)
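A minimal sketch of this workflow, assuming a Python package named `omendb` whose `open`, `add`, and `search` calls match the comments in this README (the exact names and signatures are not confirmed here):

```python
import omendb

# Create database (persistent): creates a ./mydb.omen file on first open
db = omendb.open("./mydb.omen", dim=4)

# Add vectors with metadata
db.add("doc1", [0.1, 0.2, 0.3, 0.4], metadata={"genre": "sci-fi", "year": 2021})
db.add("doc2", [0.4, 0.3, 0.2, 0.1], metadata={"genre": "fantasy", "year": 2019})

# Search: k nearest neighbours of the query vector
results = db.search([0.1, 0.2, 0.3, 0.4], k=5)

# Filtered search: restrict matches by metadata
results = db.search([0.1, 0.2, 0.3, 0.4], k=5, filter={"genre": "sci-fi"})
```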
## Features
- Embedded - Runs in-process, no server needed
- Persistent - Data survives restarts automatically
- Filtered search - Query by metadata with JSON-style filters
- Hybrid search - Combine vector similarity with BM25 text search
- Quantization - 4-8x smaller indexes with minimal recall loss
## Platforms
| Platform | Status |
|---|---|
| Linux (x86_64, ARM64) | Supported |
| macOS (Intel, Apple Silicon) | Supported |
| Windows (x86_64) | Experimental |
## API

### Database
- Open or create a database file
- Create an in-memory (ephemeral) database
### CRUD
- Insert/update vectors
- Get by ID
- Batch get by IDs
- Delete by IDs
- Delete by metadata filter
- Update metadata only
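A sketch of the CRUD surface. The method names (`add`, `get`, `get_batch`, `delete`, `delete_where`, `update_metadata`) are inferred from the comments above and are assumptions, not verified API:

```python
import omendb

db = omendb.open("./mydb.omen", dim=4)

# Insert/update vectors (upsert semantics assumed)
db.add("doc1", [0.1, 0.2, 0.3, 0.4], metadata={"year": 2021})

item = db.get("doc1")                       # get by ID
items = db.get_batch(["doc1", "doc2"])      # batch get by IDs

db.delete(["doc2"])                         # delete by IDs
db.delete_where({"year": {"$lt": 2000}})    # delete by metadata filter

db.update_metadata("doc1", {"year": 2022})  # update metadata only
```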
### Iteration
- Number of vectors (`len(db)` works the same)
- Count of vectors matching a filter
- Iterate all IDs (lazy)
- Get all items as a list
- Check if an ID exists (the `in` operator works the same)
- Iterate all items (lazy)
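A sketch of the iteration helpers, with method names (`count`, `ids`, `items`, `exists`) inferred from the comments above rather than taken from actual OmenDB documentation:

```python
import omendb

db = omendb.open("./mydb.omen", dim=4)

total = db.count()                      # number of vectors
total = len(db)                         # same as count()
sci_fi = db.count({"genre": "sci-fi"})  # count matching a filter

for vec_id in db.ids():                 # iterate all IDs (lazy)
    print(vec_id)

everything = db.items()                 # get all items as a list

if db.exists("doc1"):                   # check if an ID exists
    pass
if "doc1" in db:                        # same as exists()
    pass

for item in db:                         # iterate all items (lazy)
    print(item)
```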
### Search
- Vector search
- Filtered search
- Distance-capped search (e.g. only results with distance <= 0.5)
- Batch search (parallel)
- Hybrid search over a text field (e.g. 70% vector, 30% text), optionally returning separate scores
- Text-only BM25 search
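A sketch of the search calls. The method and parameter names (`search_batch`, `search_hybrid`, `search_text`, `return_scores`) are guesses based on the comments above, not confirmed API:

```python
import omendb

db = omendb.open("./mydb.omen", dim=4)
q = [0.1, 0.2, 0.3, 0.4]

hits = db.search(q, k=10)                              # vector search
hits = db.search(q, k=10, filter={"genre": "sci-fi"})  # filtered search
hits = db.search(q, k=10, max_distance=0.5)            # only distance <= 0.5

all_hits = db.search_batch([q, q, q], k=10)            # parallel batch search

# Hybrid search blends vector similarity with BM25 over a text field
hits = db.search_hybrid("space opera", q, k=10, alpha=0.7)           # 70% vector, 30% text
hits = db.search_hybrid("space opera", q, k=10, return_scores=True)  # separate scores
hits = db.search_text("space opera", k=10)             # text-only BM25
```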
### Persistence
- Flush to disk
## Distance Filtering

Use `max_distance` to filter out low-relevance results (this prevents "context rot" in RAG):
- Return only results with distance <= 0.5
- Combine `max_distance` with a metadata filter
This ensures your RAG pipeline only receives highly relevant context, avoiding distractors that can hurt LLM performance.
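A sketch of both cases, assuming `search` accepts a `max_distance` keyword (the parameter name comes from this README; the rest of the signature is a guess):

```python
import omendb

db = omendb.open("./mydb.omen", dim=768)
query_embedding = [0.0] * 768  # stand-in for a real embedding

# Only return results with distance <= 0.5
context = db.search(query_embedding, k=8, max_distance=0.5)

# Combine the distance cap with a metadata filter
context = db.search(query_embedding, k=8, max_distance=0.5, filter={"source": "handbook"})
```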
## Filters

- Equality: shorthand or explicit
- Comparison: not equal, greater than, greater or equal, less than, less or equal
- Membership: in list, string contains
- Logical: AND, OR
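The README calls these JSON-style filters; a plausible concrete syntax uses MongoDB-style `$` operators. The operator spellings below (`$eq`, `$ne`, `$gt`, `$gte`, `$lt`, `$lte`, `$in`, `$contains`, `$and`, `$or`) map one-to-one onto the operator list above but are assumptions, not confirmed spellings:

```python
# Equality
f_short    = {"genre": "sci-fi"}           # shorthand
f_explicit = {"genre": {"$eq": "sci-fi"}}  # explicit

# Comparison
f_ne  = {"year": {"$ne": 2020}}   # not equal
f_gt  = {"year": {"$gt": 2015}}   # greater than
f_gte = {"year": {"$gte": 2015}}  # greater or equal
f_lt  = {"year": {"$lt": 2020}}   # less than
f_lte = {"year": {"$lte": 2020}}  # less or equal

# Membership
f_in       = {"genre": {"$in": ["sci-fi", "fantasy"]}}  # in list
f_contains = {"title": {"$contains": "galaxy"}}         # string contains

# Logical
f_and = {"$and": [{"genre": "sci-fi"}, {"year": {"$gte": 2015}}]}  # AND
f_or  = {"$or": [{"genre": "sci-fi"}, {"genre": "fantasy"}]}       # OR
```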
## Configuration

Quantization options:
- `True` or `"sq8"`: SQ8, ~4x smaller, ~99% recall (recommended)
- `"rabitq"`: RaBitQ, ~8x smaller, ~98% recall
- `None` / `False`: full precision (default)

Distance metric options:
- `"l2"` or `"euclidean"`: Euclidean distance (default)
- `"cosine"`: cosine distance (1 - cosine similarity)
- `"dot"` or `"ip"`: inner product (for MIPS)
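A sketch of passing these options at open time. The keyword names `quantization` and `metric` mirror the option lists above but are assumptions about the actual signature:

```python
import omendb

db = omendb.open(
    "./mydb.omen",
    dim=768,
    quantization="sq8",  # ~4x smaller index, ~99% recall
    metric="cosine",     # 1 - cosine similarity
)
```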
A database can also be used as a context manager, which auto-flushes to disk on exit.

Hybrid search takes an `alpha` weight (0 = pure text, 1 = pure vector; default 0.5). For debugging and tuning, hybrid results can include separate keyword and semantic scores, e.g. `{"id": "...", "score": 0.85, "keyword_score": 0.92, "semantic_score": 0.78}`.
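The default `alpha` of 0.5 is consistent with the example result above, since 0.5 * 0.78 + 0.5 * 0.92 = 0.85. A tiny sketch of that blend; the linear formula is an inference from the README's numbers, not a documented internal:

```python
def hybrid_score(keyword_score: float, semantic_score: float, alpha: float = 0.5) -> float:
    """Blend BM25 keyword and vector (semantic) scores.

    alpha = 0 -> text only, alpha = 1 -> vector only (per the README).
    The linear blend itself is an assumption inferred from the example
    scores, not a confirmed implementation detail.
    """
    return alpha * semantic_score + (1 - alpha) * keyword_score

print(hybrid_score(keyword_score=0.92, semantic_score=0.78))  # ~0.85, matching the example above
```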
## Performance
10K vectors, Apple M3 Max (m=16, ef=100, k=10):
| Dimension | Single QPS | Batch QPS | Speedup |
|---|---|---|---|
| 128D | 12,000+ | 87,000+ | 7.2x |
| 768D | 3,800+ | 20,500+ | 5.4x |
| 1536D | 1,600+ | 6,200+ | 3.8x |
SIFT-1M (1M vectors, 128D, m=16, ef=100, k=10):
| Machine | QPS | Recall |
|---|---|---|
| i9-13900KF | 4,591 | 98.6% |
| Apple M3 Max | 3,216 | 98.4% |
Quantization reduces memory with minimal recall loss:
| Mode | Compression | Use Case |
|---|---|---|
| f32 | 1x | Default, highest recall |
| sq8 | 4x | Recommended for most users |
| rabitq | 8x | Large datasets, cost-sensitive |
SQ8 is enabled via the quantization option when opening the database.
- Parameters: m=16, ef_construction=100, ef_search=100
- Batch: Uses Rayon for parallel search across all cores
- Recall: Validated against brute-force ground truth on SIFT/GloVe
- Reproduce:
  - Quick (10K): `uv run python benchmarks/run.py`
  - SIFT-1M: `uv run python benchmarks/ann_dataset_test.py --dataset sift-128-euclidean`
## Examples
See `python/examples/` for complete working examples:
- `quickstart.py` - Minimal working example
- `basic.py` - CRUD operations and persistence
- `filters.py` - All filter operators
- `rag.py` - RAG workflow with mock embeddings
## Integrations

### LangChain
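A hypothetical sketch of what the integration might look like, assuming a `langchain_omendb` package exposing an `OmenDBVectorStore` that follows LangChain's standard `VectorStore` interface (`from_texts`, `similarity_search`). The package name, class name, and `persist_path` parameter are all guesses, not confirmed by this README:

```python
# Hypothetical package and class names, not confirmed by this README.
from langchain_omendb import OmenDBVectorStore
from langchain_openai import OpenAIEmbeddings

store = OmenDBVectorStore.from_texts(
    texts=["OmenDB runs in-process", "No server needed"],
    embedding=OpenAIEmbeddings(),
    persist_path="./mydb.omen",
)
docs = store.similarity_search("embedded vector database", k=2)
```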
### LlamaIndex
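A hypothetical sketch along LlamaIndex's standard vector-store pattern (`StorageContext` plus `VectorStoreIndex.from_documents`). The `llama_index_omendb` package, the `OmenDBVectorStore` class, and its `path` parameter are guesses, not confirmed by this README:

```python
# Hypothetical package and class names, not confirmed by this README.
from llama_index.core import Document, StorageContext, VectorStoreIndex
from llama_index_omendb import OmenDBVectorStore

vector_store = OmenDBVectorStore(path="./mydb.omen")
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    [Document(text="OmenDB is an embedded vector database.")],
    storage_context=storage_context,
)
response = index.as_query_engine().query("What is OmenDB?")
```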
## License
Elastic License 2.0 - Free to use, modify, and embed. The only restriction: you can't offer OmenDB as a managed service to third parties.