pub struct MemoryEngine { /* private fields */ }
Main memory engine interface
This is the primary entry point for all MnemeFusion operations. It coordinates storage, indexing, and retrieval across all dimensions.
Implementations§
impl MemoryEngine
pub fn open<P: AsRef<Path>>(path: P, config: Config) -> Result<Self>
Open or create a memory database
§Arguments
- path - Path to the .mfdb file
- config - Configuration options
§Returns
A new MemoryEngine instance
§Errors
Returns an error if:
- The database file cannot be created or opened
- The file format is invalid
- The configuration is invalid
§Example
use mnemefusion_core::{MemoryEngine, Config};
let engine = MemoryEngine::open("./brain.mfdb", Config::default()).unwrap();
pub fn flush_extraction_queue(&self) -> Result<usize>
Process all deferred LLM extractions queued by add() in async mode.
When async_extraction_threshold > 0 (set via config or
with_async_extraction_threshold()), add() stores large memories
immediately and defers LLM extraction here. Call this periodically
(e.g., every N messages, or before querying) to build entity profiles.
Returns the number of memories whose extraction was processed.
Safe to call when the queue is empty (returns Ok(0)).
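A possible usage pattern (the flush cadence of 20 is illustrative, and `messages` is a hypothetical source of conversation text; the `None::<Vec<f32>>` turbofish assumes the engine auto-embeds when no embedding is supplied):

```rust
// Sketch: ingest messages, flushing deferred extractions periodically.
for (i, message) in messages.iter().enumerate() {
    engine.add(message.clone(), None::<Vec<f32>>, None, None, None, None)?;
    if (i + 1) % 20 == 0 {
        engine.flush_extraction_queue()?;
    }
}
// Flush again before querying so entity profiles are current.
let processed = engine.flush_extraction_queue()?;
println!("Processed {} deferred extractions", processed);
```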
pub fn pending_extraction_count(&self) -> usize
Returns the number of memories with deferred LLM extractions pending.
Non-zero only when async_extraction_threshold > 0 and large add() calls
have been made since the last flush_extraction_queue().
pub fn with_user(self, user: impl Into<String>) -> Self
Set a default namespace (user identity) for all add/query operations.
When set, any call to add() or query() that does not supply an explicit
namespace argument will use this value automatically. Equivalent to always
passing namespace = Some(user) — enables “Memory is per-user” semantics
without changing every call site.
§Example
let engine = MemoryEngine::open("./brain.mfdb", Config::default()).unwrap()
.with_user("alice");
// All subsequent add/query calls default to namespace="alice"
pub fn set_user_entity(&mut self, name: impl Into<String>)
Set the user entity name for first-person pronoun resolution.
When set, queries containing “I”, “me”, “my”, etc. automatically include this entity in the profile injection step (Step 2.1), ensuring the user’s own memories get the entity score boost.
Unlike with_user(), this does NOT enable namespace filtering — it only
affects entity detection at query time. Use this when memories are stored
without namespace but you want pronoun resolution.
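A minimal sketch of the difference from with_user():

```rust
let mut engine = MemoryEngine::open("./brain.mfdb", Config::default()).unwrap();
engine.set_user_entity("Alice");
// A query like "What do I do for work?" now treats "I" as the Alice entity
// and boosts her memories, without filtering results by namespace.
```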
pub fn set_embedding_fn(&mut self, f: EmbeddingFn)
Set the embedding function for computing fact embeddings at ingestion time.
When set, the pipeline will compute and store embeddings for each extracted entity fact during ingestion. These embeddings enable semantic matching in ProfileSearch (cosine similarity vs word-overlap).
The function should return an embedding vector for the given text input.
Typically this wraps the same embedding model used for memory embeddings
(e.g., SentenceTransformer.encode()).
§Arguments
- f - Embedding function: Fn(&str) -> Vec<f32>
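A sketch of wiring a model in. This assumes EmbeddingFn is a boxed closure (check the crate's type alias) and `my_model` is a hypothetical stand-in for your embedding model:

```rust
// Assumption: EmbeddingFn = Box<dyn Fn(&str) -> Vec<f32>>.
engine.set_embedding_fn(Box::new(move |text: &str| my_model.encode(text)));
```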
pub fn precompute_fact_embeddings(&self) -> Result<usize>
Precompute missing fact embeddings for all entity profiles.
Iterates all stored profiles, checks each fact for a stored embedding, and computes + stores any missing ones using the registered EmbeddingFn. This is a one-time backfill operation — “pay the cost once.”
Returns the number of fact embeddings computed.
pub fn rebuild_speaker_embeddings(&self) -> Result<usize>
Rebuild embeddings for memories with first-person content using speaker-aware pronoun substitution.
For each memory that has a "speaker" in its metadata and first-person content
(e.g., "I joined a gym"), recomputes the embedding on the third-person form
("Alice joined a gym") to improve semantic similarity with entity-centric queries.
This is a one-time backfill for databases ingested before this feature was added. Safe to call multiple times — only updates memories where pronoun substitution changes the text (i.e., skips memories without first-person pronouns).
Uses the registered EmbeddingFn (set via set_embedding_fn()) when available,
falling back to the internal auto_embed() engine otherwise.
Returns the number of memory embeddings updated.
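A possible one-time upgrade sequence for an older database (`embed_fn` is a hypothetical EmbeddingFn you have constructed):

```rust
engine.set_embedding_fn(embed_fn); // optional: prefer your own model over auto_embed()
let updated = engine.rebuild_speaker_embeddings().unwrap();
println!("Re-embedded {} first-person memories", updated);
```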
pub fn summarize_profiles(&self) -> Result<usize>
Generate summaries for all entity profiles.
For each profile with facts, generates a dense summary paragraph that condenses the profile’s facts into one text block. When present, query() injects summaries as single context items instead of N individual facts, addressing RANK failures where evidence is present but buried.
Returns the number of profiles summarized.
pub fn consolidate_profiles(&self) -> Result<(usize, usize)>
Consolidate entity profiles by removing noise and deduplicating facts.
Performs the following cleanup operations:
- Remove null-indicator values (“none”, “N/A”, etc.)
- Remove overly verbose values (>100 chars)
- Semantic dedup within same fact_type using embedding similarity (threshold: 0.85) — keeps fact with higher confidence, or first encountered on tie
- Delete garbage entity profiles (non-person entities with ≤2 facts)
Returns (facts_removed, profiles_deleted).
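For example:

```rust
let (facts_removed, profiles_deleted) = engine.consolidate_profiles().unwrap();
println!(
    "Removed {} noisy facts, deleted {} garbage profiles",
    facts_removed, profiles_deleted
);
```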
pub fn repair_profiles_from_metadata(&self) -> Result<(usize, usize)>
Repair entity profiles by re-processing llm_extraction metadata stored in memories.
This is a recovery function for databases where entity profiles are missing or incomplete due to extraction failures, consolidation over-pruning, or ingestion bugs.
For every memory in the DB:
- Parse the llm_extraction JSON from metadata (if present)
- For each entity_fact: create/update the entity profile with the fact and add the memory as a source_memory
- For the speaker metadata field: ensure the speaker entity’s profile includes this memory as a source_memory (handles first-person statements where the speaker name isn’t in the content text)
Respects the pipeline’s profile_entity_types filter and type allowlist.
Skips entities whose names appear to be pronouns or generic placeholders.
Returns (profiles_created, source_memories_added).
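A recovery sketch; running consolidate_profiles() afterwards to re-deduplicate is a suggestion, not a requirement:

```rust
let (profiles_created, sources_added) = engine.repair_profiles_from_metadata().unwrap();
println!("Created {} profiles, linked {} source memories", profiles_created, sources_added);
engine.consolidate_profiles().unwrap(); // optional cleanup pass
```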
pub fn add(
&self,
content: String,
embedding: impl Into<Option<Vec<f32>>>,
metadata: Option<HashMap<String, String>>,
timestamp: Option<Timestamp>,
source: Option<Source>,
namespace: Option<&str>,
) -> Result<MemoryId>
Add a new memory to the database
This will automatically index the memory across all dimensions:
- Semantic (vector similarity)
- Temporal (time-based)
- Entity (if auto-extraction enabled)
§Arguments
- content - The text content to store
- embedding - Vector embedding (must match configured dimension)
- metadata - Optional key-value metadata
- timestamp - Optional custom timestamp (defaults to now)
- source - Optional provenance/source tracking information
- namespace - Optional namespace to store the memory under
§Returns
The ID of the created memory
§Errors
Returns an error if:
- Embedding dimension doesn’t match configuration
- Storage operation fails
- Source serialization fails
§Example
let embedding = vec![0.1; 384];
// Add memory with source tracking
let source = Source::new(SourceType::Conversation)
.with_id("conv_123")
.with_confidence(0.95);
let id = engine.add(
"Meeting scheduled for next week".to_string(),
embedding,
None,
None,
Some(source),
None,
).unwrap();
pub fn delete(&self, id: &MemoryId, namespace: Option<&str>) -> Result<bool>
Delete a memory by ID
This will remove the memory from all indexes.
§Arguments
- id - The memory ID to delete
- namespace - Optional namespace. If provided, verifies the memory is in this namespace before deleting
§Returns
true if the memory was deleted, false if it didn’t exist
§Errors
Returns Error::NamespaceMismatch if namespace is provided and doesn’t match
§Example
let deleted = engine.delete(&id, None).unwrap();
assert!(deleted);
pub fn add_batch(
&self,
inputs: Vec<MemoryInput>,
namespace: Option<&str>,
) -> Result<BatchResult>
Add multiple memories in a batch operation
This is significantly faster than calling add() multiple times (10x+ improvement)
because it uses:
- Single transaction for all storage operations
- Vector index locked once for all additions
- Batched entity extraction with deduplication
§Arguments
- inputs - Vector of MemoryInput to add
- namespace - Optional namespace for all memories in the batch
§Returns
BatchResult containing IDs of created memories and any errors
§Performance
Target: 1,000 memories in <500ms
§Example
use mnemefusion_core::{MemoryEngine, Config};
use mnemefusion_core::types::MemoryInput;
let inputs = vec![
MemoryInput::new("content 1".to_string(), vec![0.1; 384]),
MemoryInput::new("content 2".to_string(), vec![0.2; 384]),
];
let result = engine.add_batch(inputs, None).unwrap();
println!("Created {} memories", result.created_count);
if result.has_errors() {
println!("Encountered {} errors", result.errors.len());
}
pub fn add_batch_with_progress(
&self,
inputs: Vec<MemoryInput>,
namespace: Option<&str>,
progress_callback: Option<Box<dyn Fn(usize, usize)>>,
) -> Result<BatchResult>
Add multiple memories in a single batch operation with progress reporting.
Like add_batch(), but calls progress_callback(current, total) after each
memory is processed. Useful for long ingestion runs.
§Example
let inputs: Vec<MemoryInput> = vec![]; // ...
let result = engine.add_batch_with_progress(
inputs,
None,
Some(Box::new(|current, total| {
println!("Progress: {}/{}", current, total);
})),
).unwrap();
pub fn delete_batch(
&self,
ids: Vec<MemoryId>,
namespace: Option<&str>,
) -> Result<usize>
Delete multiple memories in a batch operation
This is faster than calling delete() multiple times because it uses:
- Single transaction for all storage operations
- Batched entity cleanup
§Arguments
- ids - Vector of MemoryIds to delete
- namespace - Optional namespace. If provided, only deletes memories in this namespace
§Returns
Number of memories actually deleted (may be less than input if some don’t exist or are in wrong namespace)
§Example
use mnemefusion_core::{MemoryEngine, Config};
let ids = vec![id1, id2];
let deleted_count = engine.delete_batch(ids, None).unwrap();
println!("Deleted {} memories", deleted_count);
pub fn add_with_dedup(
&self,
content: String,
embedding: Vec<f32>,
metadata: Option<HashMap<String, String>>,
timestamp: Option<Timestamp>,
source: Option<Source>,
namespace: Option<&str>,
) -> Result<AddResult>
Add a memory with automatic deduplication
Uses content hash to detect duplicates. If identical content already exists, returns the existing memory ID without creating a duplicate.
§Arguments
- content - Text content
- embedding - Vector embedding
- metadata - Optional metadata
- timestamp - Optional custom timestamp
- source - Optional source/provenance
- namespace - Optional namespace
§Returns
AddResult with created flag and ID (either new or existing)
§Example
use mnemefusion_core::{MemoryEngine, Config};
let embedding = vec![0.1; 384];
// First add
let result1 = engine.add_with_dedup(
"Meeting notes".to_string(),
embedding.clone(),
None,
None,
None,
None,
).unwrap();
assert!(result1.created);
// Second add with same content
let result2 = engine.add_with_dedup(
"Meeting notes".to_string(),
embedding.clone(),
None,
None,
None,
None,
).unwrap();
assert!(!result2.created); // Duplicate detected
assert_eq!(result1.id, result2.id); // Same ID returned
pub fn upsert(
&self,
key: &str,
content: String,
embedding: Vec<f32>,
metadata: Option<HashMap<String, String>>,
timestamp: Option<Timestamp>,
source: Option<Source>,
namespace: Option<&str>,
) -> Result<UpsertResult>
Upsert a memory by logical key
If the key exists, this replaces the content, embedding, and metadata. If the key doesn’t exist, it creates a new memory and associates it with the key.
This is useful for updating facts that may change over time.
§Arguments
- key - Logical key (e.g., “user_profile:123”, “doc:readme”)
- content - Text content
- embedding - Vector embedding
- metadata - Optional metadata
- timestamp - Optional custom timestamp
- source - Optional source/provenance
- namespace - Optional namespace
§Returns
UpsertResult indicating whether memory was created or updated
§Example
use mnemefusion_core::{MemoryEngine, Config};
let embedding = vec![0.1; 384];
// First upsert - creates new
let result1 = engine.upsert(
"user:profile",
"Alice likes hiking".to_string(),
embedding.clone(),
None,
None,
None,
None,
).unwrap();
assert!(result1.created);
// Second upsert - updates existing
let result2 = engine.upsert(
"user:profile",
"Alice likes hiking and photography".to_string(),
vec![0.2; 384],
None,
None,
None,
None,
).unwrap();
assert!(result2.updated);
assert_eq!(result2.previous_content, Some("Alice likes hiking".to_string()));
pub fn count(&self) -> Result<usize>
Get the number of memories in the database
§Example
let count = engine.count().unwrap();
println!("Total memories: {}", count);
pub fn list_ids(&self) -> Result<Vec<MemoryId>>
List all memory IDs (for debugging/testing)
§Warning
This loads all memory IDs into memory. Use with caution on large databases.
pub fn update_embedding(
&self,
id: &MemoryId,
new_embedding: Vec<f32>,
) -> Result<()>
Update the embedding vector for an existing memory.
This updates both the stored memory record (used by MMR diversity) and the HNSW vector index (used by semantic search). The memory content, metadata, and all other fields are preserved.
§Arguments
- id - The memory ID to update
- new_embedding - The new embedding vector (must match configured dimension)
§Errors
Returns error if the memory doesn’t exist or the embedding dimension is wrong.
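A sketch of re-embedding one memory after switching embedding models (`new_model` is a hypothetical stand-in for your model):

```rust
let new_embedding = new_model.encode("Meeting scheduled for next week");
engine.update_embedding(&id, new_embedding).unwrap();
```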
pub fn reserve_capacity(&self, capacity: usize) -> Result<()>
Reserve capacity in the vector index for future insertions
This is useful when you know you’ll be adding many memories and want to avoid repeated reallocations, improving performance.
§Arguments
- capacity - Number of vectors to reserve space for
§Example
// Reserve space for 10,000 memories before bulk insertion
engine.reserve_capacity(10_000).unwrap();
pub fn search(
&self,
query_embedding: &[f32],
top_k: usize,
namespace: Option<&str>,
filters: Option<&[MetadataFilter]>,
) -> Result<Vec<(Memory, f32)>>
Search for memories by semantic similarity
§Arguments
- query_embedding - The query vector to search for
- top_k - Maximum number of results to return
- namespace - Optional namespace filter. If provided, only returns memories in this namespace
- filters - Optional metadata filters to apply
§Returns
A vector of (Memory, similarity_score) tuples, sorted by similarity (highest first)
§Example
let results = engine.search(&query_embedding, 10, None, None).unwrap();
for (memory, score) in results {
println!("Similarity: {:.3} - {}", score, memory.content);
}
pub fn query(
&self,
query_text: &str,
query_embedding: impl Into<Option<Vec<f32>>>,
limit: usize,
namespace: Option<&str>,
filters: Option<&[MetadataFilter]>,
) -> Result<(IntentClassification, Vec<(Memory, FusedResult)>, Vec<String>)>
Intelligent multi-dimensional query with intent classification
This method performs intent-aware retrieval across all dimensions:
- Classifies the query intent (temporal, causal, entity, factual)
- Retrieves results from relevant dimensions
- Fuses results with adaptive weights based on intent
§Arguments
- query_text - Natural language query text
- query_embedding - Vector embedding of the query
- limit - Maximum number of results to return
- namespace - Optional namespace filter. If provided, only returns memories in this namespace
- filters - Optional metadata filters to apply
§Returns
Tuple of (intent classification, fused results with full memory records, profile context strings)
§Example
let (intent, results, profile_context) = engine.query(
"Why was the meeting cancelled?",
query_embedding,
10,
None,
None
).unwrap();
println!("Query intent: {:?}", intent.intent);
println!("Profile context: {} entries", profile_context.len());
for result in results {
println!("Score: {:.3} - {}", result.1.fused_score, result.0.content);
}
pub fn last_query_trace(&self) -> Option<Trace>
Returns the trace from the most recent query() call, if tracing is enabled.
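A sketch (assumes tracing is enabled in the config and that Trace implements Debug):

```rust
let _ = engine.query("Why was the meeting cancelled?", None::<Vec<f32>>, 10, None, None).unwrap();
if let Some(trace) = engine.last_query_trace() {
    println!("{:?}", trace); // inspect per-dimension retrieval details
}
```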
pub fn get_range(
&self,
start: Timestamp,
end: Timestamp,
limit: usize,
namespace: Option<&str>,
) -> Result<Vec<(Memory, Timestamp)>>
Query memories within a time range
Returns memories whose timestamps fall within the specified range, sorted by timestamp (newest first).
§Arguments
- start - Start of the time range (inclusive)
- end - End of the time range (inclusive)
- limit - Maximum number of results to return
- namespace - Optional namespace filter. If provided, only returns memories in this namespace
§Returns
A vector of (Memory, Timestamp) tuples, sorted newest first
§Example
let now = Timestamp::now();
let week_ago = now.subtract_days(7);
let results = engine.get_range(week_ago, now, 100, None).unwrap();
for (memory, timestamp) in results {
println!("{}: {}", timestamp.as_unix_secs(), memory.content);
}
pub fn get_recent(
&self,
n: usize,
namespace: Option<&str>,
) -> Result<Vec<(Memory, Timestamp)>>
Get the N most recent memories
Returns the most recent memories, sorted by timestamp (newest first).
§Arguments
- n - Number of recent memories to retrieve
- namespace - Optional namespace filter. If provided, only returns memories in this namespace
§Returns
A vector of (Memory, Timestamp) tuples, sorted newest first
§Example
let recent = engine.get_recent(10, None).unwrap();
println!("10 most recent memories:");
for (memory, timestamp) in recent {
println!(" {} - {}", timestamp.as_unix_secs(), memory.content);
}
pub fn add_causal_link(
&self,
cause: &MemoryId,
effect: &MemoryId,
confidence: f32,
evidence: String,
) -> Result<()>
Add a causal link between two memories
Links a cause memory to an effect memory with a confidence score.
§Arguments
- cause - The MemoryId of the cause
- effect - The MemoryId of the effect
- confidence - Confidence score (0.0 to 1.0)
- evidence - Evidence text explaining the causal relationship
§Errors
Returns error if confidence is not in range [0.0, 1.0]
§Example
engine.add_causal_link(&id1, &id2, 0.9, "id1 caused id2".to_string()).unwrap();
pub fn get_causes(
&self,
memory_id: &MemoryId,
max_hops: usize,
) -> Result<CausalTraversalResult>
Get causes of a memory (backward traversal)
Finds all memories that causally precede the given memory, up to max_hops.
§Arguments
- memory_id - The memory to find causes for
- max_hops - Maximum traversal depth
§Returns
CausalTraversalResult with all paths found
§Example
let causes = engine.get_causes(&id, 3).unwrap();
for path in causes.paths {
println!("Found causal path with {} steps (confidence: {})",
path.memories.len(), path.confidence);
}
pub fn get_effects(
&self,
memory_id: &MemoryId,
max_hops: usize,
) -> Result<CausalTraversalResult>
Get effects of a memory (forward traversal)
Finds all memories that causally follow the given memory, up to max_hops.
§Arguments
- memory_id - The memory to find effects for
- max_hops - Maximum traversal depth
§Returns
CausalTraversalResult with all paths found
§Example
let effects = engine.get_effects(&id, 3).unwrap();
for path in effects.paths {
println!("Found effect chain with {} steps (confidence: {})",
path.memories.len(), path.confidence);
}
pub fn list_namespaces(&self) -> Result<Vec<String>>
List all namespaces in the database
Returns a sorted list of all unique namespace strings, excluding the default namespace (“”).
§Performance
O(n) where n = total memories. This scans all memories to extract namespaces.
§Example
let namespaces = engine.list_namespaces().unwrap();
for ns in namespaces {
println!("Namespace: {}", ns);
}
pub fn count_namespace(&self, namespace: &str) -> Result<usize>
Count the memories in a specific namespace.
pub fn delete_namespace(&self, namespace: &str) -> Result<usize>
Delete all memories in a namespace
This is a convenience method that lists all memory IDs in the namespace and deletes them via the ingestion pipeline (ensuring proper cleanup of indexes).
§Arguments
- namespace - The namespace to delete (empty string “” for default namespace)
§Returns
Number of memories deleted
§Warning
This operation cannot be undone. Use with caution.
§Example
let deleted = engine.delete_namespace("old_user").unwrap();
println!("Deleted {} memories from namespace", deleted);
pub fn get_entity_memories(&self, entity_name: &str) -> Result<Vec<Memory>>
Get all memories that mention a specific entity
§Arguments
- entity_name - The name of the entity to query (case-insensitive)
§Returns
A vector of Memory objects that mention this entity
§Example
let memories = engine.get_entity_memories("Project Alpha").unwrap();
for memory in memories {
println!("{}", memory.content);
}
pub fn list_entities(&self) -> Result<Vec<Entity>>
List all entities tracked in the database.
pub fn get_entity_profile(&self, name: &str) -> Result<Option<EntityProfile>>
Get the profile for an entity by name
Entity profiles aggregate facts about entities across all memories. They are automatically built during ingestion when SLM metadata extraction is enabled.
§Arguments
- name - The entity name (case-insensitive)
§Returns
The EntityProfile if found, or None
§Example
if let Some(profile) = engine.get_entity_profile("Alice").unwrap() {
println!("Entity: {} ({})", profile.name, profile.entity_type);
// Get facts about Alice's occupation
for fact in profile.get_facts("occupation") {
println!(" Occupation: {} (confidence: {})", fact.value, fact.confidence);
}
// Get facts about Alice's research
for fact in profile.get_facts("research_topic") {
println!(" Research: {} (confidence: {})", fact.value, fact.confidence);
}
}
pub fn list_entity_profiles(&self) -> Result<Vec<EntityProfile>>
List all entity profiles in the database
§Returns
A vector of all EntityProfile objects
§Example
let profiles = engine.list_entity_profiles().unwrap();
for profile in profiles {
println!("{} ({}) - {} facts from {} memories",
profile.name,
profile.entity_type,
profile.total_facts(),
profile.source_memories.len()
);
}
pub fn count_entity_profiles(&self) -> Result<usize>
Count the entity profiles in the database.
pub fn scope<S: Into<String>>(&self, namespace: S) -> ScopedMemory<'_>
Create a scoped view for namespace-specific operations
Returns a ScopedMemory that automatically applies the namespace to all operations. This provides a more ergonomic API when working with a single namespace.
§Arguments
- namespace - The namespace to scope to (empty string “” for default namespace)
§Returns
A ScopedMemory view bound to this namespace
§Example
// Create scoped view for a user
let user_memory = engine.scope("user_123");
// All operations automatically use the namespace
let id = user_memory.add("User note".to_string(), vec![0.1; 384], None, None, None).unwrap();
let results = user_memory.search(&vec![0.1; 384], 10, None).unwrap();
let count = user_memory.count().unwrap();
user_memory.delete_all().unwrap();
pub fn close(self) -> Result<()>
Close the database
This saves all indexes and ensures all data is flushed to disk. While not strictly necessary (redb handles persistence automatically), it’s good practice to call this explicitly when you’re done.
§Example
let engine = MemoryEngine::open("./test.mfdb", Config::default()).unwrap();
// ... use engine ...
engine.close().unwrap();