nexus-memory-vectors 1.2.2

Semantic search over storage-backed embeddings for Nexus Memory System

This crate provides semantic search capabilities for vector embeddings, used by the cognition engine (nexus-agent) to retrieve relevant memories.

Runtime Retrieval Path

The live cognition path uses SemanticSearch over VectorEntry slices fetched from nexus-storage. SemanticSearch ranks entries by cosine similarity to the query embedding, with optional graph-tree-based score boosting.
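The core of this ranking step can be sketched as plain cosine similarity over float slices. This is an illustrative standalone sketch, not the crate's actual implementation; the function name and the candidate layout here are assumptions.

```rust
// Cosine similarity between two embedding slices: dot(a, b) / (|a| * |b|).
// Returns 0.0 for a zero-length vector to avoid division by zero.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

fn main() {
    // Toy 3-dim vectors for illustration; the crate uses 384-dim embeddings.
    let query = [1.0f32, 0.0, 0.0];
    let candidates = [[1.0f32, 0.0, 0.0], [0.0, 1.0, 0.0]];

    // Score every candidate, then rank by descending similarity.
    let mut scored: Vec<(usize, f32)> = candidates
        .iter()
        .enumerate()
        .map(|(i, v)| (i, cosine_similarity(&query, v)))
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());

    println!("ranked: {:?}", scored);
}
```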

Internal/Test Abstractions

VectorDatabase is an in-memory vector store backed by its own HashMap. It is deprecated: it exists only for testing and development, is not used by the shipped retrieval path, and should not be used for runtime retrieval.
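For context, a HashMap-backed in-memory store of this shape can be sketched as follows. This is a minimal stand-in for illustration only; the struct and method names are assumptions and do not match VectorDatabase's actual API.

```rust
use std::collections::HashMap;

// Minimal in-memory vector store keyed by memory id, analogous in spirit
// to the deprecated VectorDatabase test helper (names are hypothetical).
struct InMemoryVectorStore {
    vectors: HashMap<String, Vec<f32>>,
}

impl InMemoryVectorStore {
    fn new() -> Self {
        Self { vectors: HashMap::new() }
    }

    // Insert or replace the embedding stored under `id`.
    fn insert(&mut self, id: &str, embedding: Vec<f32>) {
        self.vectors.insert(id.to_string(), embedding);
    }

    // Look up an embedding by id.
    fn get(&self, id: &str) -> Option<&Vec<f32>> {
        self.vectors.get(id)
    }

    fn len(&self) -> usize {
        self.vectors.len()
    }
}

fn main() {
    let mut store = InMemoryVectorStore::new();
    // 384-dim embedding, matching the all-MiniLM-L6-v2 dimensionality.
    store.insert("memory-1", vec![0.1; 384]);
    println!("stored {} vectors", store.len());
}
```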

Features

  • 384-dimensional embeddings: Compatible with all-MiniLM-L6-v2
  • Cosine similarity search: Ranks candidates by cosine similarity against the query embedding
  • Graph tree organization: Hierarchical memory management with relevance boosting
  • Priority-based boosting: High-priority memories get boosted scores
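The priority boost in the last bullet can be illustrated with a simple weighting scheme. The formula below is an assumption for demonstration, not the crate's actual boosting rule; the function name and the 1 + factor * priority blend are hypothetical.

```rust
// Illustrative priority boost: scale the base similarity by a priority
// weight, clamped to 1.0 so boosted scores stay in the similarity range.
// This weighting scheme is an assumed example, not the crate's formula.
fn boost_score(similarity: f32, priority: f32, boost_factor: f32) -> f32 {
    (similarity * (1.0 + boost_factor * priority)).min(1.0)
}

fn main() {
    let base = 0.70f32;
    // A high-priority memory (priority = 1.0) outranks the unboosted score.
    let boosted = boost_score(base, 1.0, 0.2);
    println!("base {:.2} -> boosted {:.2}", base, boosted);
}
```

Under a scheme like this, two memories with equal cosine similarity are tie-broken by priority, which is the behavior the feature list describes.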

Performance Targets

  • Search latency: <10ms for 1k vectors
  • Memory efficiency: In-memory storage with indexing

Usage (Runtime Path)

use nexus_memory_vectors::{SemanticSearch, SearchOptions, VectorEntry};

// Vectors come from storage, not an in-memory DB
let vectors: Vec<VectorEntry> = /* fetched from nexus-storage */;
let query_embedding: Vec<f32> = /* 384-dimensional query embedding */;

let search = SemanticSearch::new();
let options = SearchOptions::with_limit(10).with_threshold(0.5);
let (results, latency) = search.search(&query_embedding, &vectors, &options).unwrap();