# hyperspace-sdk (Rust)

Official Rust client for the HyperspaceDB gRPC data plane.
This crate provides:

- an authenticated gRPC client
- collection management
- insert/search APIs
- high-throughput `search_batch`
- `f32` helper methods for Euclidean workloads (`insert_f32`, `search_f32`, `search_batch_f32`)
## Installation

```toml
[dependencies]
hyperspace-sdk = "3.0.0-alpha.2"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
## Quick Start

```rust
use hyperspace_sdk::Client;
use std::collections::HashMap;

// Argument lists below are placeholders; see the crate docs for exact signatures.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = Client::connect(/* endpoint, credentials */).await?;
    client.create_collection(/* name, metric, dimension */).await?;

    let metadata: HashMap<String, String> = HashMap::new();
    client.insert(/* collection, id, vector, metadata */).await?;

    let results = client.search(/* collection, query, k */).await?;
    Ok(())
}
```
## Batch Search

Use `search_batch` to reduce RPC overhead:

```rust
// Arguments elided; pass the collection and the full set of query vectors.
let responses = client.search_batch(/* collection, queries, k */).await?;
```

Each entry in `responses` corresponds to one query vector.
## f32 Helpers

When your app keeps Euclidean vectors in `f32`, use the conversion helpers:

- `insert_f32`
- `search_f32`
- `search_batch_f32`

The crate converts to the protocol's `f64` once per call.
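The conversion these helpers perform is a one-time widening of each component. A minimal standalone sketch of that step (`widen_to_f64` is a hypothetical name, not part of the crate):

```rust
/// Widen an f32 vector to the f64 wire format once, up front.
/// Sketch of what the *_f32 helpers do internally.
fn widen_to_f64(v: &[f32]) -> Vec<f64> {
    v.iter().map(|&x| x as f64).collect()
}

fn main() {
    let query32: Vec<f32> = vec![0.25, -1.5, 3.0];
    let query64 = widen_to_f64(&query32);
    // Each component is exactly representable after widening.
    assert_eq!(query64, vec![0.25f64, -1.5, 3.0]);
}
```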
## API Surface (Core)

- `Client::connect`
- `create_collection`, `delete_collection`, `list_collections`
- `insert`, `insert_f32`
- `insert_text` (server-side vectorization and storage)
- `vectorize` (convert text to a vector on the server)
- `search`, `search_f32`, `search_advanced`
- `search_text` (search with text input, vectorized on the server)
- `search_batch`, `search_batch_f32`, `search_wasserstein`, `search_multi_collection`
- `delete`
- `configure`
- `get_collection_stats`, `get_digest`
- `trigger_vacuum`, `trigger_reconsolidation`, `rebuild_index`, `rebuild_index_with_filter`
- `get_neighbors_with_weights` (graph edges with distances)
- `subscribe_to_events` (CDC stream)
## Rebuild with Pruning

```rust
// Filter arguments elided; see the crate docs.
client
    .rebuild_index_with_filter(/* collection, filter */)
    .await?;
```
## Hyperbolic Math Utilities

The crate also ships client-side hyperbolic math utilities; see the crate docs for the available items.
## Graph Diagnostics (Gromov Delta)

Analyze your dataset's structure directly on the client to select the correct metric:

```rust
use hyperspace_sdk::analyze_delta_hyperbolicity; // import path illustrative

// Returns the delta value and the recommended metric (lorentz, poincare, cosine, l2)
let (delta, metric) = analyze_delta_hyperbolicity(/* &vectors */);
```
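A minimal standalone sketch of the four-point condition that delta-hyperbolicity analysis is built on (function names here are illustrative, not the SDK's; the SDK aggregates this over many sampled quadruples):

```rust
/// Euclidean distance between two points.
fn dist(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| (x - y).powi(2)).sum::<f64>().sqrt()
}

/// Gromov's four-point delta for one quadruple of points.
fn four_point_delta(p: [&[f64]; 4]) -> f64 {
    let [x, y, z, w] = p;
    // The three sums of "opposite-side" distances.
    let mut sums = [
        dist(x, y) + dist(z, w),
        dist(x, z) + dist(y, w),
        dist(x, w) + dist(y, z),
    ];
    sums.sort_by(|a, b| b.partial_cmp(a).unwrap()); // descending
    // Delta is half the gap between the two largest sums;
    // small delta => tree-like (hyperbolic) structure.
    (sums[0] - sums[1]) / 2.0
}

fn main() {
    // Points on a line form a degenerate tree, so delta is 0.
    let (a, b, c, d) = ([0.0], [1.0], [2.0], [3.0]);
    assert!(four_point_delta([&a, &b, &c, &d]).abs() < 1e-12);
}
```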
## AI Sleep Mode / Memory Reconsolidation

Trigger the database to run Riemannian SGD via Flow Matching natively:

```rust
client.trigger_reconsolidation(/* collection */).await?;
```
## Cognitive Math SDK (Spatial AI Engine)

Provides advanced tools for agentic AI, running entirely on the client side:

```rust
// Import path and argument lists are illustrative; see the crate docs.
use hyperspace_sdk::{local_entropy, lyapunov_convergence, koopman_extrapolate, context_resonance};

// 1. Detect hallucinations (entropy approaches 1.0)
let entropy = local_entropy(/* &trajectory */)?;

// 2. Proof of convergence (negative derivative = convergence)
let stability = lyapunov_convergence(/* &trajectory */)?;

// 3. Extrapolate the next thought (Koopman linearization)
let next_thought = koopman_extrapolate(/* &trajectory */)?;

// 4. Phase-locked loop for topic tracking
let synced_thought = context_resonance(/* &trajectory, &context */)?;
```
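The "entropy approaches 1.0" signal can be illustrated with a normalized Shannon entropy over a discrete distribution. This is a standalone sketch of the idea, not the SDK's implementation (`normalized_entropy` is a hypothetical name):

```rust
/// Normalized Shannon entropy of a discrete distribution, in [0, 1].
/// Values near 1.0 mean the distribution is near-uniform
/// (maximum uncertainty, a hallucination signal).
fn normalized_entropy(probs: &[f64]) -> f64 {
    let n = probs.len() as f64;
    let h: f64 = probs
        .iter()
        .filter(|&&p| p > 0.0)
        .map(|&p| -p * p.ln())
        .sum();
    h / n.ln() // divide by the maximum entropy ln(n)
}

fn main() {
    // Uniform distribution: entropy approaches 1.0.
    let uniform = [0.25; 4];
    assert!((normalized_entropy(&uniform) - 1.0).abs() < 1e-12);

    // Peaked distribution: entropy stays low.
    let peaked = [0.97, 0.01, 0.01, 0.01];
    assert!(normalized_entropy(&peaked) < 0.2);
}
```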
## Embedding Pipeline (Optional)

HyperspaceDB supports per-geometry embeddings: each of the four distance types (`l2`, `cosine`, `poincare`, `lorentz`) can have its own embedding backend configured independently.
### Available Backends

| Backend | Feature Flag | Description |
|---|---|---|
| Local ONNX | `local-onnx` | Load `model.onnx` + `tokenizer.json` from disk |
| HuggingFace Hub | `huggingface` | Auto-download `model.onnx` + `tokenizer.json` from the Hub |
| OpenAI / OpenRouter | `embedders` | Cloud API with OpenAI-compatible protocol |
| Cohere | `embedders` | Cohere `/v1/embed` endpoint |
| Voyage AI | `embedders` | Voyage `/v1/embeddings` endpoint |
| Google Gemini | `embedders` | Gemini `embedContent` endpoint |
### Usage

```toml
# Cargo.toml — pick one of the following:
[dependencies]

# API providers only
hyperspace-sdk = { version = "3.0.0", features = ["embedders"] }

# Local ONNX files (no network required at inference time)
hyperspace-sdk = { version = "3.0.0", features = ["local-onnx"] }

# Download from HuggingFace Hub (includes local-onnx)
hyperspace-sdk = { version = "3.0.0", features = ["huggingface"] }
```
### EmbedGeometry

Every embedder requires a target geometry, which controls post-processing:

```rust
use hyperspace_sdk::EmbedGeometry; // import path illustrative

// Cosine / dot-product: vectors are unit-normalized
let geom = EmbedGeometry::Cosine;

// L2 / Euclidean: vectors are unit-normalized
let geom = EmbedGeometry::L2;

// Poincaré ball: vectors are clamped inside the unit ball (||x|| < 1)
let geom = EmbedGeometry::Poincare;

// Lorentz hyperboloid: no post-processing (the model head handles the constraint)
let geom = EmbedGeometry::Lorentz;

// Parse from a collection metric string
let geom = EmbedGeometry::from_str(/* metric string */)?;
```
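The two post-processing steps named above can be sketched standalone: unit normalization for `Cosine`/`L2`, and clamping inside the unit ball for `Poincare`. Function names are hypothetical, not the crate's API:

```rust
/// Scale a vector to unit length (Cosine / L2 post-processing).
fn unit_normalize(v: &mut [f64]) {
    let norm = v.iter().map(|x| x * x).sum::<f64>().sqrt();
    if norm > 0.0 {
        v.iter_mut().for_each(|x| *x /= norm);
    }
}

/// Rescale so that ||x|| <= 1 - eps, keeping direction
/// (Poincaré-ball post-processing).
fn clamp_to_poincare_ball(v: &mut [f64], eps: f64) {
    let norm = v.iter().map(|x| x * x).sum::<f64>().sqrt();
    let max_norm = 1.0 - eps;
    if norm > max_norm {
        let scale = max_norm / norm;
        v.iter_mut().for_each(|x| *x *= scale);
    }
}

fn main() {
    let mut v = vec![3.0, 4.0];
    unit_normalize(&mut v);
    assert!((v[0] - 0.6).abs() < 1e-12 && (v[1] - 0.8).abs() < 1e-12);

    let mut p = vec![3.0, 4.0];
    clamp_to_poincare_ball(&mut p, 1e-5);
    let norm = p.iter().map(|x| x * x).sum::<f64>().sqrt();
    assert!(norm < 1.0); // strictly inside the unit ball
}
```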
### Local ONNX Embedder

```rust
// Type name and constructor arguments are illustrative; see the crate docs.
let embedder = LocalOnnxEmbedder::new(/* model path, tokenizer path, geometry */)?;
let vector = embedder.encode(/* text */).await?;
```
### HuggingFace Hub Embedder

Downloads `model.onnx` and `tokenizer.json` automatically from the Hub on first use. Files are cached locally (`~/.cache/huggingface/hub`).

```rust
// Type name and constructor arguments are illustrative; see the crate docs.

// Public model: no token needed
let embedder = HuggingFaceEmbedder::new(/* model id, geometry */)?;

// Private or gated model: provide HF_TOKEN
let embedder = HuggingFaceEmbedder::new(/* model id, geometry, token */)?;

let vector = embedder.encode(/* text */).await?;
```
### OpenAI / Remote API Embedder

```rust
// Type name and constructor arguments are illustrative; see the crate docs.
let embedder = OpenAIEmbedder::new(/* api key, model, geometry */);
let vector = embedder.encode(/* text */).await?;
```
### Server-Side Embedding (InsertText / SearchText)

The server can embed text automatically. Configure it in `.env` (see the server docs):

```bash
HYPERSPACE_EMBED=true

# Cosine geometry via HuggingFace
HS_EMBED_COSINE_PROVIDER=huggingface
HS_EMBED_COSINE_HF_MODEL_ID=BAAI/bge-small-en-v1.5
HS_EMBED_COSINE_DIM=384
HF_TOKEN=hf_your_token_here  # Optional: for gated models

# Lorentz geometry via local ONNX
HS_EMBED_LORENTZ_PROVIDER=local
HS_EMBED_LORENTZ_MODEL_PATH=./models/lorentz_128d.onnx
HS_EMBED_LORENTZ_TOKENIZER_PATH=./models/lorentz_128d_tokenizer.json
HS_EMBED_LORENTZ_DIM=129
```
## Production Notes

- Reuse long-lived clients instead of reconnecting per request.
- Prefer `search_batch` on concurrency-heavy paths.
- Keep the collection metric/dimension consistent with your vector source.
- For the `huggingface` provider, models are cached; first startup incurs download time.
- For `lorentz` geometry, the dimension is typically `spatial_dim + 1` (the time component).
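The `spatial_dim + 1` rule comes from lifting a spatial point onto the unit Lorentz hyperboloid, where the extra time component is x0 = sqrt(1 + ||x_spatial||²). A standalone sketch (`lorentz_lift` is a hypothetical name; in practice the model or server handles this):

```rust
/// Lift a spatial embedding onto the unit Lorentz hyperboloid by
/// prepending the time component x0 = sqrt(1 + ||x||^2). A 128-d
/// spatial embedding therefore becomes a 129-d stored vector.
fn lorentz_lift(spatial: &[f64]) -> Vec<f64> {
    let sq_norm: f64 = spatial.iter().map(|x| x * x).sum();
    let x0 = (1.0 + sq_norm).sqrt();
    let mut point = Vec::with_capacity(spatial.len() + 1);
    point.push(x0);
    point.extend_from_slice(spatial);
    point
}

fn main() {
    let spatial = vec![0.0; 128];
    let point = lorentz_lift(&spatial);
    assert_eq!(point.len(), 129);

    // On the hyperboloid the Minkowski norm is -1:
    // -x0^2 + ||x_spatial||^2 == -1
    let minkowski = -point[0] * point[0]
        + point[1..].iter().map(|x| x * x).sum::<f64>();
    assert!((minkowski + 1.0).abs() < 1e-12);
}
```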