
Struct MemoryEngine

pub struct MemoryEngine { /* private fields */ }

Main memory engine interface

This is the primary entry point for all MnemeFusion operations. It coordinates storage, indexing, and retrieval across all dimensions.

Implementations§


impl MemoryEngine


pub fn open<P: AsRef<Path>>(path: P, config: Config) -> Result<Self>

Open or create a memory database

§Arguments
  • path - Path to the .mfdb file
  • config - Configuration options
§Returns

A new MemoryEngine instance

§Errors

Returns an error if:

  • The database file cannot be created or opened
  • The file format is invalid
  • The configuration is invalid
§Example
use mnemefusion_core::{MemoryEngine, Config};

let engine = MemoryEngine::open("./brain.mfdb", Config::default()).unwrap();

pub fn flush_extraction_queue(&self) -> Result<usize>

Process all deferred LLM extractions queued by add() in async mode.

When async_extraction_threshold > 0 (set via config or with_async_extraction_threshold()), add() stores large memories immediately and defers their LLM extraction until this method is called. Call it periodically (e.g., every N messages, or before querying) to build entity profiles.

Returns the number of memories whose extraction was processed. Safe to call when the queue is empty (returns Ok(0)).
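§Example

A minimal usage sketch based on the behavior described above (assumes an engine opened with a non-zero async_extraction_threshold):

```rust
// Drain any deferred extractions before querying.
if engine.pending_extraction_count() > 0 {
    let processed = engine.flush_extraction_queue().unwrap();
    println!("Processed {} deferred extractions", processed);
}
```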


pub fn pending_extraction_count(&self) -> usize

Returns the number of memories with deferred LLM extractions pending.

Non-zero only when async_extraction_threshold > 0 and large add() calls have been made since the last flush_extraction_queue().


pub fn with_user(self, user: impl Into<String>) -> Self

Set a default namespace (user identity) for all add/query operations.

When set, any call to add() or query() that does not supply an explicit namespace argument will use this value automatically. Equivalent to always passing namespace = Some(user) — enables “Memory is per-user” semantics without changing every call site.

§Example
let engine = MemoryEngine::open("./brain.mfdb", Config::default()).unwrap()
    .with_user("alice");
// All subsequent add/query calls default to namespace="alice"

pub fn set_user_entity(&mut self, name: impl Into<String>)

Set the user entity name for first-person pronoun resolution.

When set, queries containing “I”, “me”, “my”, etc. automatically include this entity in the profile injection step (Step 2.1), ensuring the user’s own memories get the entity score boost.

Unlike with_user(), this does NOT enable namespace filtering — it only affects entity detection at query time. Use this when memories are stored without namespace but you want pronoun resolution.
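§Example

A minimal sketch of the pronoun-resolution setup described above:

```rust
let mut engine = MemoryEngine::open("./brain.mfdb", Config::default()).unwrap();
engine.set_user_entity("Alice");
// Queries containing "I", "me", or "my" now include Alice in
// profile injection, boosting her memories' entity scores.
```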


pub fn set_embedding_fn(&mut self, f: EmbeddingFn)

Set the embedding function for computing fact embeddings at ingestion time.

When set, the pipeline will compute and store embeddings for each extracted entity fact during ingestion. These embeddings enable semantic matching in ProfileSearch (cosine similarity vs word-overlap).

The function should return an embedding vector for the given text input. Typically this wraps the same embedding model used for memory embeddings (e.g., SentenceTransformer.encode()).

§Arguments
  • f - Embedding function: Fn(&str) -> Vec<f32>
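§Example

A sketch of registering an embedding function, assuming EmbeddingFn is a boxed Fn(&str) -> Vec<f32> closure; the zero vector stands in for a real model call:

```rust
engine.set_embedding_fn(Box::new(|_text: &str| {
    // Call your embedding model here (e.g., wrap SentenceTransformer.encode()).
    vec![0.0; 384]
}));
```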

pub fn precompute_fact_embeddings(&self) -> Result<usize>

Precompute missing fact embeddings for all entity profiles.

Iterates all stored profiles, checks each fact for a stored embedding, and computes + stores any missing ones using the registered EmbeddingFn. This is a one-time backfill operation — “pay the cost once.”

Returns the number of fact embeddings computed.


pub fn rebuild_speaker_embeddings(&self) -> Result<usize>

Rebuild embeddings for memories with first-person content using speaker-aware pronoun substitution.

For each memory that has a "speaker" in its metadata and first-person content (e.g., "I joined a gym"), recomputes the embedding on the third-person form ("Alice joined a gym") to improve semantic similarity with entity-centric queries.

This is a one-time backfill for databases ingested before this feature was added. Safe to call multiple times — only updates memories where pronoun substitution changes the text (i.e., skips memories without first-person pronouns).

Uses the registered EmbeddingFn (set via set_embedding_fn()) when available, falling back to the internal auto_embed() engine otherwise.

Returns the number of memory embeddings updated.
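§Example

A sketch of the one-time backfill described above:

```rust
// Safe to re-run; memories without first-person pronouns are skipped.
let updated = engine.rebuild_speaker_embeddings().unwrap();
println!("Re-embedded {} first-person memories", updated);
```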


pub fn summarize_profiles(&self) -> Result<usize>

Generate summaries for all entity profiles.

For each profile with facts, generates a dense summary paragraph that condenses the profile’s facts into one text block. When present, query() injects summaries as single context items instead of N individual facts, addressing RANK failures where evidence is present but buried.

Returns the number of profiles summarized.


pub fn consolidate_profiles(&self) -> Result<(usize, usize)>

Consolidate entity profiles by removing noise and deduplicating facts.

Performs the following cleanup operations:

  1. Remove null-indicator values (“none”, “N/A”, etc.)
  2. Remove overly verbose values (>100 chars)
  3. Semantic dedup within same fact_type using embedding similarity (threshold: 0.85) — keeps fact with higher confidence, or first encountered on tie
  4. Delete garbage entity profiles (non-person entities with ≤2 facts)

Returns (facts_removed, profiles_deleted).
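§Example

A sketch of a periodic cleanup pass using the return values described above:

```rust
let (facts_removed, profiles_deleted) = engine.consolidate_profiles().unwrap();
println!("Removed {} noisy facts, deleted {} garbage profiles",
         facts_removed, profiles_deleted);
```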


pub fn repair_profiles_from_metadata(&self) -> Result<(usize, usize)>

Repair entity profiles by re-processing llm_extraction metadata stored in memories.

This is a recovery function for databases where entity profiles are missing or incomplete due to extraction failures, consolidation over-pruning, or ingestion bugs.

For every memory in the DB:

  1. Parse the llm_extraction JSON from metadata (if present)
  2. For each entity_fact: create/update the entity profile with the fact and add the memory as a source_memory
  3. For the speaker metadata field: ensure the speaker entity’s profile includes this memory as a source_memory (handles first-person statements where the speaker name isn’t in the content text)

Respects the pipeline’s profile_entity_types filter and type allowlist. Skips entities whose names appear to be pronouns or generic placeholders.

Returns (profiles_created, source_memories_added).
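§Example

A recovery sketch using the return values described above:

```rust
let (profiles_created, sources_added) = engine.repair_profiles_from_metadata().unwrap();
println!("Created {} profiles, added {} source-memory links",
         profiles_created, sources_added);
```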


pub fn add( &self, content: String, embedding: impl Into<Option<Vec<f32>>>, metadata: Option<HashMap<String, String>>, timestamp: Option<Timestamp>, source: Option<Source>, namespace: Option<&str>, ) -> Result<MemoryId>

Add a new memory to the database

This will automatically index the memory across all dimensions:

  • Semantic (vector similarity)
  • Temporal (time-based)
  • Entity (if auto-extraction enabled)
§Arguments
  • content - The text content to store
  • embedding - Vector embedding (must match configured dimension)
  • metadata - Optional key-value metadata
  • timestamp - Optional custom timestamp (defaults to now)
  • source - Optional provenance/source tracking information
  • namespace - Optional namespace to store the memory under
§Returns

The ID of the created memory

§Errors

Returns an error if:

  • Embedding dimension doesn’t match configuration
  • Storage operation fails
  • Source serialization fails
§Example
let embedding = vec![0.1; 384];

// Add memory with source tracking
let source = Source::new(SourceType::Conversation)
    .with_id("conv_123")
    .with_confidence(0.95);

let id = engine.add(
    "Meeting scheduled for next week".to_string(),
    embedding,
    None,
    None,
    Some(source),
    None,
).unwrap();

pub fn get(&self, id: &MemoryId) -> Result<Option<Memory>>

Retrieve a memory by ID

§Arguments
  • id - The memory ID to retrieve
§Returns

The memory record if found, or None

§Example
let memory = engine.get(&id).unwrap();
if let Some(mem) = memory {
    println!("Content: {}", mem.content);
}

pub fn delete(&self, id: &MemoryId, namespace: Option<&str>) -> Result<bool>

Delete a memory by ID

This will remove the memory from all indexes.

§Arguments
  • id - The memory ID to delete
  • namespace - Optional namespace. If provided, verifies the memory is in this namespace before deleting
§Returns

true if the memory was deleted, false if it didn’t exist

§Errors

Returns Error::NamespaceMismatch if namespace is provided and doesn’t match

§Example
let deleted = engine.delete(&id, None).unwrap();
assert!(deleted);

pub fn add_batch( &self, inputs: Vec<MemoryInput>, namespace: Option<&str>, ) -> Result<BatchResult>

Add multiple memories in a batch operation

This is significantly faster than calling add() multiple times (10x+ improvement) because it uses:

  • Single transaction for all storage operations
  • Vector index locked once for all additions
  • Batched entity extraction with deduplication
§Arguments
  • inputs - Vector of MemoryInput to add
§Returns

BatchResult containing IDs of created memories and any errors

§Performance

Target: 1,000 memories in <500ms

§Example
use mnemefusion_core::{MemoryEngine, Config};
use mnemefusion_core::types::MemoryInput;

let inputs = vec![
    MemoryInput::new("content 1".to_string(), vec![0.1; 384]),
    MemoryInput::new("content 2".to_string(), vec![0.2; 384]),
];

let result = engine.add_batch(inputs, None).unwrap();
println!("Created {} memories", result.created_count);
if result.has_errors() {
    println!("Encountered {} errors", result.errors.len());
}

pub fn add_batch_with_progress( &self, inputs: Vec<MemoryInput>, namespace: Option<&str>, progress_callback: Option<Box<dyn Fn(usize, usize)>>, ) -> Result<BatchResult>

Add multiple memories in a single batch operation with progress reporting.

Like add_batch(), but calls progress_callback(current, total) after each memory is processed. Useful for long ingestion runs.

§Example
let inputs: Vec<MemoryInput> = vec![]; // ...
let result = engine.add_batch_with_progress(
    inputs,
    None,
    Some(Box::new(|current, total| {
        println!("Progress: {}/{}", current, total);
    })),
).unwrap();

pub fn delete_batch( &self, ids: Vec<MemoryId>, namespace: Option<&str>, ) -> Result<usize>

Delete multiple memories in a batch operation

This is faster than calling delete() multiple times because it uses:

  • Single transaction for all storage operations
  • Batched entity cleanup
§Arguments
  • ids - Vector of MemoryIds to delete
  • namespace - Optional namespace. If provided, only deletes memories in this namespace
§Returns

Number of memories actually deleted (may be less than input if some don’t exist or are in wrong namespace)

§Example
use mnemefusion_core::{MemoryEngine, Config};

let ids = vec![id1, id2];
let deleted_count = engine.delete_batch(ids, None).unwrap();
println!("Deleted {} memories", deleted_count);

pub fn add_with_dedup( &self, content: String, embedding: Vec<f32>, metadata: Option<HashMap<String, String>>, timestamp: Option<Timestamp>, source: Option<Source>, namespace: Option<&str>, ) -> Result<AddResult>

Add a memory with automatic deduplication

Uses content hash to detect duplicates. If identical content already exists, returns the existing memory ID without creating a duplicate.

§Arguments
  • content - Text content
  • embedding - Vector embedding
  • metadata - Optional metadata
  • timestamp - Optional custom timestamp
  • source - Optional source/provenance
  • namespace - Optional namespace to store the memory under
§Returns

AddResult with created flag and ID (either new or existing)

§Example
use mnemefusion_core::{MemoryEngine, Config};

let embedding = vec![0.1; 384];

// First add
let result1 = engine.add_with_dedup(
    "Meeting notes".to_string(),
    embedding.clone(),
    None,
    None,
    None,
    None,
).unwrap();
assert!(result1.created);

// Second add with same content
let result2 = engine.add_with_dedup(
    "Meeting notes".to_string(),
    embedding.clone(),
    None,
    None,
    None,
    None,
).unwrap();
assert!(!result2.created); // Duplicate detected
assert_eq!(result1.id, result2.id); // Same ID returned

pub fn upsert( &self, key: &str, content: String, embedding: Vec<f32>, metadata: Option<HashMap<String, String>>, timestamp: Option<Timestamp>, source: Option<Source>, namespace: Option<&str>, ) -> Result<UpsertResult>

Upsert a memory by logical key

If key exists: replaces content, embedding, and metadata. If key doesn’t exist: creates a new memory and associates it with the key.

This is useful for updating facts that may change over time.

§Arguments
  • key - Logical key (e.g., “user_profile:123”, “doc:readme”)
  • content - Text content
  • embedding - Vector embedding
  • metadata - Optional metadata
  • timestamp - Optional custom timestamp
  • source - Optional source/provenance
  • namespace - Optional namespace to store the memory under
§Returns

UpsertResult indicating whether memory was created or updated

§Example
use mnemefusion_core::{MemoryEngine, Config};

let embedding = vec![0.1; 384];

// First upsert - creates new
let result1 = engine.upsert(
    "user:profile",
    "Alice likes hiking".to_string(),
    embedding.clone(),
    None,
    None,
    None,
    None,
).unwrap();
assert!(result1.created);

// Second upsert - updates existing
let result2 = engine.upsert(
    "user:profile",
    "Alice likes hiking and photography".to_string(),
    vec![0.2; 384],
    None,
    None,
    None,
    None,
).unwrap();
assert!(result2.updated);
assert_eq!(result2.previous_content, Some("Alice likes hiking".to_string()));

pub fn count(&self) -> Result<usize>

Get the number of memories in the database

§Example
let count = engine.count().unwrap();
println!("Total memories: {}", count);

pub fn list_ids(&self) -> Result<Vec<MemoryId>>

List all memory IDs (for debugging/testing)

§Warning

This loads all memory IDs into memory. Use with caution on large databases.


pub fn update_embedding( &self, id: &MemoryId, new_embedding: Vec<f32>, ) -> Result<()>

Update the embedding vector for an existing memory.

This updates both the stored memory record (used by MMR diversity) and the HNSW vector index (used by semantic search). The memory content, metadata, and all other fields are preserved.

§Arguments
  • id - The memory ID to update
  • new_embedding - The new embedding vector (must match configured dimension)
§Errors

Returns error if the memory doesn’t exist or the embedding dimension is wrong.
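§Example

A sketch of re-embedding an existing memory, e.g., after switching embedding models (assumes the 384-dimension configuration used in the other examples):

```rust
let new_embedding = vec![0.3; 384]; // must match the configured dimension
engine.update_embedding(&id, new_embedding).unwrap();
```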


pub fn config(&self) -> &Config

Get the configuration


pub fn reserve_capacity(&self, capacity: usize) -> Result<()>

Reserve capacity in the vector index for future insertions

This is useful when you know you’ll be adding many memories and want to avoid repeated reallocations, improving performance.

§Arguments
  • capacity - Number of vectors to reserve space for
§Example
// Reserve space for 10,000 memories before bulk insertion
engine.reserve_capacity(10_000).unwrap();

pub fn search( &self, query_embedding: &[f32], top_k: usize, namespace: Option<&str>, filters: Option<&[MetadataFilter]>, ) -> Result<Vec<(Memory, f32)>>

Search for memories by semantic similarity

§Arguments
  • query_embedding - The query vector to search for
  • top_k - Maximum number of results to return
  • namespace - Optional namespace filter. If provided, only returns memories in this namespace
  • filters - Optional metadata filters applied to the results
§Returns

A vector of (Memory, similarity_score) tuples, sorted by similarity (highest first)

§Example
let results = engine.search(&query_embedding, 10, None, None).unwrap();
for (memory, score) in results {
    println!("Similarity: {:.3} - {}", score, memory.content);
}

pub fn query( &self, query_text: &str, query_embedding: impl Into<Option<Vec<f32>>>, limit: usize, namespace: Option<&str>, filters: Option<&[MetadataFilter]>, ) -> Result<(IntentClassification, Vec<(Memory, FusedResult)>, Vec<String>)>

Intelligent multi-dimensional query with intent classification

This method performs intent-aware retrieval across all dimensions:

  • Classifies the query intent (temporal, causal, entity, factual)
  • Retrieves results from relevant dimensions
  • Fuses results with adaptive weights based on intent
§Arguments
  • query_text - Natural language query text
  • query_embedding - Optional vector embedding of the query
  • limit - Maximum number of results to return
  • namespace - Optional namespace filter. If provided, only returns memories in this namespace
  • filters - Optional metadata filters applied to the results
§Returns

Tuple of (intent classification, fused results with full memory records, profile context strings)

§Example
let (intent, results, profile_context) = engine.query(
    "Why was the meeting cancelled?",
    query_embedding,
    10,
    None,
    None
).unwrap();

println!("Query intent: {:?}", intent.intent);
println!("Profile context: {} entries", profile_context.len());
for result in results {
    println!("Score: {:.3} - {}", result.1.fused_score, result.0.content);
}

pub fn last_query_trace(&self) -> Option<Trace>

Returns the trace from the most recent query() call, if tracing is enabled.
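§Example

A sketch of inspecting the trace after a query (that tracing is enabled via Config and that Trace implements Debug are both assumptions):

```rust
let (_intent, _results, _context) = engine.query("What happened?", None, 10, None, None).unwrap();
if let Some(trace) = engine.last_query_trace() {
    println!("{:?}", trace);
}
```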


pub fn get_range( &self, start: Timestamp, end: Timestamp, limit: usize, namespace: Option<&str>, ) -> Result<Vec<(Memory, Timestamp)>>

Query memories within a time range

Returns memories whose timestamps fall within the specified range, sorted by timestamp (newest first).

§Arguments
  • start - Start of the time range (inclusive)
  • end - End of the time range (inclusive)
  • limit - Maximum number of results to return
  • namespace - Optional namespace filter. If provided, only returns memories in this namespace
§Returns

A vector of (Memory, Timestamp) tuples, sorted newest first

§Example
let now = Timestamp::now();
let week_ago = now.subtract_days(7);

let results = engine.get_range(week_ago, now, 100, None).unwrap();
for (memory, timestamp) in results {
    println!("{}: {}", timestamp.as_unix_secs(), memory.content);
}

pub fn get_recent( &self, n: usize, namespace: Option<&str>, ) -> Result<Vec<(Memory, Timestamp)>>

Get the N most recent memories

Returns the most recent memories, sorted by timestamp (newest first).

§Arguments
  • n - Number of recent memories to retrieve
  • namespace - Optional namespace filter. If provided, only returns memories in this namespace
§Returns

A vector of (Memory, Timestamp) tuples, sorted newest first

§Example
let recent = engine.get_recent(10, None).unwrap();
println!("10 most recent memories:");
for (memory, timestamp) in recent {
    println!("  {} - {}", timestamp.as_unix_secs(), memory.content);
}

pub fn add_causal_link( &self, cause: &MemoryId, effect: &MemoryId, confidence: f32, evidence: String, ) -> Result<()>

Add a causal link between two memories

Links a cause memory to an effect memory with a confidence score.

§Arguments
  • cause - The MemoryId of the cause
  • effect - The MemoryId of the effect
  • confidence - Confidence score (0.0 to 1.0)
  • evidence - Evidence text explaining the causal relationship
§Errors

Returns error if confidence is not in range [0.0, 1.0]

§Example
engine.add_causal_link(&id1, &id2, 0.9, "id1 caused id2".to_string()).unwrap();

pub fn get_causes( &self, memory_id: &MemoryId, max_hops: usize, ) -> Result<CausalTraversalResult>

Get causes of a memory (backward traversal)

Finds all memories that causally precede the given memory, up to max_hops.

§Arguments
  • memory_id - The memory to find causes for
  • max_hops - Maximum traversal depth
§Returns

CausalTraversalResult with all paths found

§Example
let causes = engine.get_causes(&id, 3).unwrap();
for path in causes.paths {
    println!("Found causal path with {} steps (confidence: {})",
             path.memories.len(), path.confidence);
}

pub fn get_effects( &self, memory_id: &MemoryId, max_hops: usize, ) -> Result<CausalTraversalResult>

Get effects of a memory (forward traversal)

Finds all memories that causally follow the given memory, up to max_hops.

§Arguments
  • memory_id - The memory to find effects for
  • max_hops - Maximum traversal depth
§Returns

CausalTraversalResult with all paths found

§Example
let effects = engine.get_effects(&id, 3).unwrap();
for path in effects.paths {
    println!("Found effect chain with {} steps (confidence: {})",
             path.memories.len(), path.confidence);
}

pub fn list_namespaces(&self) -> Result<Vec<String>>

List all namespaces in the database

Returns a sorted list of all unique namespace strings, excluding the default namespace (“”).

§Performance

O(n) where n = total memories. This scans all memories to extract namespaces.

§Example
let namespaces = engine.list_namespaces().unwrap();
for ns in namespaces {
    println!("Namespace: {}", ns);
}

pub fn count_namespace(&self, namespace: &str) -> Result<usize>

Count memories in a specific namespace

§Arguments
  • namespace - The namespace to count (empty string “” for default namespace)
§Returns

Number of memories in the namespace

§Example
let count = engine.count_namespace("user_123").unwrap();
println!("User has {} memories", count);

pub fn delete_namespace(&self, namespace: &str) -> Result<usize>

Delete all memories in a namespace

This is a convenience method that lists all memory IDs in the namespace and deletes them via the ingestion pipeline (ensuring proper cleanup of indexes).

§Arguments
  • namespace - The namespace to delete (empty string “” for default namespace)
§Returns

Number of memories deleted

§Warning

This operation cannot be undone. Use with caution.

§Example
let deleted = engine.delete_namespace("old_user").unwrap();
println!("Deleted {} memories from namespace", deleted);

pub fn get_entity_memories(&self, entity_name: &str) -> Result<Vec<Memory>>

Get all memories that mention a specific entity

§Arguments
  • entity_name - The name of the entity to query (case-insensitive)
§Returns

A vector of Memory objects that mention this entity

§Example
let memories = engine.get_entity_memories("Project Alpha").unwrap();
for memory in memories {
    println!("{}", memory.content);
}

pub fn get_memory_entities(&self, memory_id: &MemoryId) -> Result<Vec<Entity>>

Get all entities mentioned in a specific memory

§Arguments
  • memory_id - The memory to query
§Returns

A vector of Entity objects mentioned in this memory

§Example
let entities = engine.get_memory_entities(&id).unwrap();
for entity in entities {
    println!("Entity: {}", entity.name);
}

pub fn list_entities(&self) -> Result<Vec<Entity>>

List all entities in the database

§Returns

A vector of all Entity objects

§Example
let all_entities = engine.list_entities().unwrap();
for entity in all_entities {
    println!("{}: {} mentions", entity.name, entity.mention_count);
}

pub fn get_entity_profile(&self, name: &str) -> Result<Option<EntityProfile>>

Get the profile for an entity by name

Entity profiles aggregate facts about entities across all memories. They are automatically built during ingestion when SLM metadata extraction is enabled.

§Arguments
  • name - The entity name (case-insensitive)
§Returns

The EntityProfile if found, or None

§Example
if let Some(profile) = engine.get_entity_profile("Alice").unwrap() {
    println!("Entity: {} ({})", profile.name, profile.entity_type);

    // Get facts about Alice's occupation
    for fact in profile.get_facts("occupation") {
        println!("  Occupation: {} (confidence: {})", fact.value, fact.confidence);
    }

    // Get facts about Alice's research
    for fact in profile.get_facts("research_topic") {
        println!("  Research: {} (confidence: {})", fact.value, fact.confidence);
    }
}

pub fn list_entity_profiles(&self) -> Result<Vec<EntityProfile>>

List all entity profiles in the database

§Returns

A vector of all EntityProfile objects

§Example
let profiles = engine.list_entity_profiles().unwrap();
for profile in profiles {
    println!("{} ({}) - {} facts from {} memories",
        profile.name,
        profile.entity_type,
        profile.total_facts(),
        profile.source_memories.len()
    );
}

pub fn count_entity_profiles(&self) -> Result<usize>

Count entity profiles in the database

§Returns

The number of entity profiles

§Example
let count = engine.count_entity_profiles().unwrap();
println!("Total entity profiles: {}", count);

pub fn scope<S: Into<String>>(&self, namespace: S) -> ScopedMemory<'_>

Create a scoped view for namespace-specific operations

Returns a ScopedMemory that automatically applies the namespace to all operations. This provides a more ergonomic API when working with a single namespace.

§Arguments
  • namespace - The namespace to scope to (empty string “” for default namespace)
§Returns

A ScopedMemory view bound to this namespace

§Example
// Create scoped view for a user
let user_memory = engine.scope("user_123");

// All operations automatically use the namespace
let id = user_memory.add("User note".to_string(), vec![0.1; 384], None, None, None).unwrap();
let results = user_memory.search(&vec![0.1; 384], 10, None).unwrap();
let count = user_memory.count().unwrap();
user_memory.delete_all().unwrap();

pub fn close(self) -> Result<()>

Close the database

This saves all indexes and ensures all data is flushed to disk. While not strictly necessary (redb handles persistence automatically), it’s good practice to call this explicitly when you’re done.

§Example
let engine = MemoryEngine::open("./test.mfdb", Config::default()).unwrap();
// ... use engine ...
engine.close().unwrap();
