Vector memory and semantic search capabilities.
This module provides vector embeddings and semantic search functionality for agent memories. It enables similarity-based memory retrieval using vector embeddings rather than simple keyword matching.
§Features
- Embedding Generation: Convert text to vector embeddings using various providers
- Semantic Search: Find similar memories based on meaning, not just keywords
- Hybrid Search: Combine semantic similarity with traditional keyword search (the scoring idea is sketched after this list)
- Multiple Backends: Support for local and cloud vector databases
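In a hybrid scheme the two signals are typically blended into a single ranking score. The sketch below is purely illustrative and is not this crate's API; `hybrid_score`, `keyword_overlap`, and the `alpha` weight are assumed names.

```rust
/// Illustrative only, not this crate's API: blend a semantic similarity score
/// with a keyword-overlap score. `alpha` (an assumed parameter) controls how
/// much weight the semantic side receives.
fn hybrid_score(semantic_score: f32, keyword_score: f32, alpha: f32) -> f32 {
    alpha * semantic_score + (1.0 - alpha) * keyword_score
}

/// Hypothetical helper: fraction of query words that occur in the stored text.
/// A real implementation would also normalize case and tokenization.
fn keyword_overlap(query: &str, text: &str) -> f32 {
    let mut total = 0u32;
    let mut hits = 0u32;
    for word in query.split_whitespace() {
        total += 1;
        if text.contains(word) {
            hits += 1;
        }
    }
    if total == 0 { 0.0 } else { hits as f32 / total as f32 }
}
```

Setting `alpha` close to 1.0 favors pure semantic ranking, while lower values let exact keyword matches pull results up.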
§Architecture
The vector memory system is built around three core traits:
- EmbeddingProvider - Generates vector embeddings from text
- VectorStore - Stores and retrieves vectors with metadata
- VectorMemory - High-level interface combining embeddings and storage
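A rough sketch of how these traits might fit together is shown below. The method names, signatures, and error type are assumptions made for illustration (hence the `Sketch` suffix); consult the pages listed under Traits for the real definitions.

```rust
// Rough sketch only: assumed method names, signatures, and error type,
// not the crate's real API.
type SketchResult<T> = Result<T, String>;

/// Hypothetical shape of an embedding provider: text in, fixed-size vector out.
trait EmbeddingProviderSketch {
    async fn embed(&self, text: &str) -> SketchResult<Vec<f32>>;
}

/// Hypothetical shape of a vector store: vectors plus metadata in,
/// ids and similarity scores of the closest matches out.
trait VectorStoreSketch {
    async fn insert(&mut self, id: String, vector: Vec<f32>, text: String) -> SketchResult<()>;
    async fn search(&self, query: &[f32], top_k: usize) -> SketchResult<Vec<(String, f32)>>;
}

/// Hypothetical high-level interface: owns an embedder and a store and routes
/// text through the embedder before storing or searching.
trait VectorMemorySketch {
    async fn remember(&mut self, text: &str) -> SketchResult<()>;
    async fn recall(&self, query: &str, top_k: usize) -> SketchResult<Vec<(String, f32)>>;
}
```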
§Available Backends
- LocalVectorStore - In-memory vector store with cosine similarity search
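A brute-force search of this kind scores the query against every stored vector and keeps the best matches. The helper below illustrates that scan generically; `top_k_by_cosine` is not part of the crate, and it assumes `cosine_similarity` from this module takes two `&[f32]` slices and returns an `f32`, which may not match the real signature.

```rust
// Generic illustration of a brute-force top-k scan, not the LocalVectorStore
// implementation itself.
use ceylon_next::memory::vector::cosine_similarity; // assumed signature: (&[f32], &[f32]) -> f32

fn top_k_by_cosine(query: &[f32], entries: &[(String, Vec<f32>)], k: usize) -> Vec<(String, f32)> {
    // Score every stored vector against the query.
    let mut scored: Vec<(String, f32)> = entries
        .iter()
        .map(|(id, v)| (id.clone(), cosine_similarity(query, v.as_slice())))
        .collect();
    // Highest similarity first; NaN scores are treated as equal.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
    scored.truncate(k);
    scored
}
```

This is O(n) per query, which is fine in memory for modest collections; dedicated vector databases use approximate indexes to scale further.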
§Example
```rust
use ceylon_next::memory::vector::{EmbeddingProvider, VectorMemory, LocalVectorStore};
use std::sync::Arc;

#[tokio::main]
async fn main() {
    // Create an embedding provider (e.g., OpenAI, Ollama, etc.)
    // let embedder = OpenAIEmbeddings::new("api-key");

    // Create a vector store
    let store = LocalVectorStore::new(384); // 384-dimensional embeddings

    // Use for semantic search
    // let results = store.search(&query_vector, 5).await.unwrap();
}
```
Structs§
- CachedEmbeddings - A caching wrapper for embedding providers.
- LocalVectorStore - In-memory vector store using brute-force cosine similarity search.
- SearchResult - A search result with similarity score.
- VectorEntry - A vector embedding with associated metadata.
Traits§
- EmbeddingProvider - Trait for generating vector embeddings from text.
- VectorMemory - High-level interface for semantic memory operations.
- VectorStore - Trait for storing and retrieving vector embeddings.
Functions§
- cosine_similarity - Computes the cosine similarity between two vectors (the standard formula is sketched after this list).
- normalize_vector - Normalizes a vector to unit length (L2 normalization); also sketched below.
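Both helpers implement standard formulas: cosine similarity is the dot product divided by the product of the Euclidean norms, and L2 normalization divides each component by the vector's Euclidean norm. The sketch below shows those conventional definitions; the crate's actual signatures and edge-case handling (for example, zero-length vectors) may differ.

```rust
// Standard definitions for illustration; the crate's actual signatures and
// edge-case handling may differ.
fn cosine_similarity_sketch(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0 // assumed convention for zero vectors
    } else {
        dot / (norm_a * norm_b)
    }
}

fn normalize_vector_sketch(v: &[f32]) -> Vec<f32> {
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm == 0.0 {
        v.to_vec() // assumed convention: leave the zero vector unchanged
    } else {
        v.iter().map(|x| x / norm).collect()
    }
}
```

Normalizing vectors up front makes cosine similarity between them reduce to a plain dot product at query time.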