Struct ShardedSieveCache

pub struct ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,
{ /* private fields */ }

A thread-safe implementation of SieveCache that uses multiple shards to reduce contention.

This provides better concurrency than SyncSieveCache by splitting the cache into multiple independent shards, each protected by its own mutex. Operations on different shards can proceed in parallel, which can significantly improve throughput in multi-threaded environments.

§How Sharding Works

The cache is partitioned into multiple independent segments (shards) based on the hash of the key. Each shard has its own mutex, allowing operations on different shards to proceed concurrently. This reduces lock contention compared to a single-mutex approach, especially under high concurrency with access patterns distributed across different keys.
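
For illustration, the key-to-shard mapping can be pictured with a sketch like the following. This is a minimal sketch, not the crate's internal code; the hasher shown and the modulo reduction are stand-ins for whatever the implementation actually uses:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the key, then reduce the hash to a shard index.
fn shard_index<K: Hash>(key: &K, num_shards: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() as usize) % num_shards
}

Two keys with different shard_index values can then be locked and updated concurrently.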

§Thread Safety Characteristics

§Key-Based Locking

  • Operations on the same key will always map to the same shard and are serialized
  • Operations on different keys that hash to different shards can execute concurrently
  • Hash-based sharding ensures good distribution of keys across shards by default

§Deadlock Prevention

This implementation prevents deadlocks by:

  • Using the same deadlock prevention techniques as SyncSieveCache for each shard
  • Only acquiring one shard lock at a time for single-key operations
  • Using a consistent lock acquisition order when multiple shards must be accessed
  • Releasing locks before executing callbacks to prevent nested lock acquisition

§Concurrent Operations

  • Single-key operations only lock one shard, allowing high concurrency
  • Multi-key operations (like clear(), keys(), values()) access shards sequentially
  • No operation holds locks on multiple shards simultaneously, preventing deadlocks

§Consistency Model

  • Per-Key Consistency: Operations on individual keys are atomic and isolated
  • Cross-Shard Consistency: There are no guarantees of a globally consistent view across shards
  • Iteration Methods: Methods like keys(), values(), and entries() create point-in-time snapshots that may not reflect concurrent modifications (see the example after this list)
  • Bulk Operations: Methods like retain(), for_each_value(), and for_each_entry() operate on each shard independently
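
For example, the snapshot behavior of keys() can be observed directly (assuming the crate is imported as sieve_cache):

use sieve_cache::ShardedSieveCache;

let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("a".to_string(), "1".to_string());

// The snapshot is fixed at the moment of the call...
let snapshot = cache.keys();

// ...so a later (or concurrent) insert is not reflected in it.
cache.insert("b".to_string(), "2".to_string());
assert_eq!(snapshot.len(), 1);
assert_eq!(cache.len(), 2);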

§Callback Handling

  • get_mut: Executes callbacks while holding only the lock for the relevant shard
  • with_key_lock: Provides exclusive access to a specific shard for atomic multi-step operations
  • for_each_value, for_each_entry: Process one shard at a time, with lock released between shards

§Performance Considerations

  • For workloads with high concurrency across different keys, ShardedSieveCache typically offers better performance than SyncSieveCache
  • The benefits increase with the number of concurrent threads and the distribution of keys
  • More shards reduce contention but increase memory overhead
  • If most operations target the same few keys (which map to the same shards), the benefits of sharding may be limited
  • Default of 16 shards provides a good balance for most workloads, but can be customized

§Examples

use sieve_cache::ShardedSieveCache;

// Create a cache with the default number of shards (16)
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(1000).unwrap();

// Or specify a custom number of shards
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 32).unwrap();

cache.insert("key1".to_string(), "value1".to_string());
assert_eq!(cache.get(&"key1".to_string()), Some("value1".to_string()));

§Multi-Threaded Example

use std::sync::Arc;
use std::thread;

use sieve_cache::ShardedSieveCache;

// Create a sharded cache with 8 shards
let cache = Arc::new(ShardedSieveCache::with_shards(1000, 8).unwrap());

// Spawn 4 threads that each insert 100 items
let mut handles = vec![];
for t in 0..4 {
    let cache_clone = Arc::clone(&cache);
    let handle = thread::spawn(move || {
        for i in 0..100 {
            let key = format!("thread{}key{}", t, i);
            let value = format!("value{}_{}", t, i);
            // Different threads can insert concurrently
            cache_clone.insert(key, value);
        }
    });
    handles.push(handle);
}

// Wait for all threads to complete
for handle in handles {
    handle.join().unwrap();
}

assert_eq!(cache.len(), 400); // All 400 items were inserted

Implementations§

impl<K, V> ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,

pub fn new(capacity: usize) -> Result<Self, &'static str>

Creates a new sharded cache with the specified capacity, using the default number of shards.

The capacity will be divided evenly among the shards. The default shard count (16) provides a good balance between concurrency and memory overhead for most workloads.

§Errors

Returns an error if the capacity is zero.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(1000).unwrap();
assert_eq!(cache.num_shards(), 16); // Default shard count

pub fn with_shards(capacity: usize, num_shards: usize) -> Result<Self, &'static str>

Creates a new sharded cache with the specified capacity and number of shards.

The capacity will be divided among the shards, distributing any remainder to ensure the total capacity is at least the requested amount.
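
One strategy that satisfies this, sketched below, is ceiling division; this is an assumption about the arithmetic, not the crate's actual code:

// Give each shard ceil(capacity / num_shards) slots, so the shard
// capacities sum to at least the requested total.
fn per_shard_capacity(capacity: usize, num_shards: usize) -> usize {
    (capacity + num_shards - 1) / num_shards
}

// 1000 entries over 16 shards -> 63 per shard, 1008 slots in total.
assert_eq!(per_shard_capacity(1000, 16), 63);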

§Arguments
  • capacity - The total capacity of the cache
  • num_shards - The number of shards to divide the cache into
§Errors

Returns an error if either the capacity or number of shards is zero.

§Performance Impact
  • More shards can reduce contention in highly concurrent environments
  • However, each shard has memory overhead, so very high shard counts may increase memory usage without providing additional performance benefits
  • For most workloads, a value between 8 and 32 shards is optimal
§Examples
// Create a cache with 8 shards
let cache: ShardedSieveCache<String, u32> = ShardedSieveCache::with_shards(1000, 8).unwrap();
assert_eq!(cache.num_shards(), 8);
assert!(cache.capacity() >= 1000);

pub fn capacity(&self) -> usize

Returns the total capacity of the cache (sum of all shard capacities).

§Examples
let cache: ShardedSieveCache<String, u32> = ShardedSieveCache::new(1000).unwrap();
assert!(cache.capacity() >= 1000);

pub fn len(&self) -> usize

Returns the total number of entries in the cache (sum of all shard lengths).

Note that this operation requires acquiring a lock on each shard, so it may cause temporary contention if called frequently in a high-concurrency environment.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();

cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

assert_eq!(cache.len(), 2);

pub fn is_empty(&self) -> bool

Returns true when no values are currently cached in any shard.

Note that this operation requires acquiring a lock on each shard, so it may cause temporary contention if called frequently in a high-concurrency environment.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
assert!(cache.is_empty());

cache.insert("key".to_string(), "value".to_string());
assert!(!cache.is_empty());

pub fn contains_key<Q>(&self, key: &Q) -> bool
where Q: Hash + Eq + ?Sized, K: Borrow<Q>,

Returns true if there is a value in the cache mapped to by key.

This operation only locks the specific shard containing the key.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

assert!(cache.contains_key(&"key".to_string()));
assert!(!cache.contains_key(&"missing".to_string()));

pub fn get<Q>(&self, key: &Q) -> Option<V>
where Q: Hash + Eq + ?Sized, K: Borrow<Q>, V: Clone,

Gets a clone of the value in the cache mapped to by key.

If no value exists for key, this returns None. This operation only locks the specific shard containing the key.

§Note

This method returns a clone of the value rather than a reference, since the mutex guard would be dropped after this method returns. This means that V must implement Clone.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

assert_eq!(cache.get(&"key".to_string()), Some("value".to_string()));
assert_eq!(cache.get(&"missing".to_string()), None);

pub fn get_mut<Q, F>(&self, key: &Q, f: F) -> bool
where Q: Hash + Eq + ?Sized, K: Borrow<Q>, F: FnOnce(&mut V), V: Clone,

Gets a mutable reference to the value in the cache mapped to by key via a callback function.

If no value exists for key, the callback will not be invoked and this returns false. Otherwise, the callback is invoked with a mutable reference to the value and this returns true. This operation only locks the specific shard containing the key.

This operation marks the entry as “visited” in the SIEVE algorithm, which affects eviction decisions.

§Thread Safety

This method operates safely with recursive calls by:

  1. Cloning the value with a short-lived lock on only the relevant shard
  2. Releasing the lock during callback execution
  3. Re-acquiring the lock to update the original value

This approach means:

  • The callback can safely make other calls to the same cache instance
  • The value can be modified by other threads during the callback execution
  • Changes are not visible to other threads until the callback completes
  • Last writer wins if multiple threads modify the same key concurrently
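
The pattern described above can be sketched as follows; this is a conceptual model, not the crate's implementation, and the Mutex<Option<V>> slot merely stands in for the internal shard storage:

use std::sync::Mutex;

// Sketch: the user callback runs while no lock is held.
fn get_mut_sketch<V: Clone>(slot: &Mutex<Option<V>>, f: impl FnOnce(&mut V)) -> bool {
    // 1. Clone the value under a short-lived lock.
    let Some(mut local) = slot.lock().unwrap().clone() else {
        return false;
    };
    // 2. Run the callback with the lock released.
    f(&mut local);
    // 3. Re-acquire the lock and write the result back (last writer wins).
    *slot.lock().unwrap() = Some(local);
    true
}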

Compared to SyncSieveCache::get_mut():

  • Only locks a single shard rather than the entire cache
  • Reduces contention when operating on different keys in different shards
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

// Modify the value in-place
cache.get_mut(&"key".to_string(), |value| {
    *value = "new_value".to_string();
});

assert_eq!(cache.get(&"key".to_string()), Some("new_value".to_string()));

pub fn insert(&self, key: K, value: V) -> bool

Maps key to value in the cache, possibly evicting old entries from the appropriate shard.

This method returns true when this is a new entry, and false if an existing entry was updated. This operation only locks the specific shard containing the key.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();

// Insert a new key
assert!(cache.insert("key1".to_string(), "value1".to_string()));

// Update an existing key
assert!(!cache.insert("key1".to_string(), "updated_value".to_string()));

pub fn remove<Q>(&self, key: &Q) -> Option<V>
where K: Borrow<Q>, Q: Eq + Hash + ?Sized,

Removes the cache entry mapped to by key.

This method returns the value removed from the cache. If key did not map to any value, then this returns None. This operation only locks the specific shard containing the key.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

// Remove an existing key
assert_eq!(cache.remove(&"key".to_string()), Some("value".to_string()));

// Attempt to remove a missing key
assert_eq!(cache.remove(&"key".to_string()), None);

pub fn evict(&self) -> Option<V>

Removes and returns a value from the cache that was not recently accessed.

This method tries to evict from each shard in turn until it finds a value to evict. If no suitable value exists in any shard, this returns None.

§Note

This implementation differs from the non-sharded version in that it checks each shard in sequence until it finds a suitable entry to evict. This may not provide the globally optimal eviction decision across all shards, but it avoids the need to lock all shards simultaneously.
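
The scan can be sketched like this; evict_one stands in for the per-shard eviction step and is an assumption for illustration, not the crate's API:

use std::sync::Mutex;

// Try each shard in order and return the first evicted value.
fn evict_first<S, V>(
    shards: &[Mutex<S>],
    mut evict_one: impl FnMut(&mut S) -> Option<V>,
) -> Option<V> {
    for shard in shards {
        if let Some(victim) = evict_one(&mut *shard.lock().unwrap()) {
            return Some(victim);
        }
    }
    None
}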

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(10, 2).unwrap();

// Fill the cache
for i in 0..15 {
    cache.insert(format!("key{}", i), format!("value{}", i));
}

// Should be able to evict something
assert!(cache.evict().is_some());

pub fn clear(&self)

Removes all entries from the cache.

This operation clears all stored values across all shards and resets the cache to an empty state, while maintaining the original capacity.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

assert_eq!(cache.len(), 2);

cache.clear();
assert_eq!(cache.len(), 0);
assert!(cache.is_empty());

pub fn keys(&self) -> Vec<K>

Returns all keys in the cache.

The order of keys is not specified and should not be relied upon.

§Examples
use std::collections::HashSet;

use sieve_cache::ShardedSieveCache;

let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

let keys: HashSet<_> = cache.keys().into_iter().collect();
assert_eq!(keys.len(), 2);
assert!(keys.contains(&"key1".to_string()));
assert!(keys.contains(&"key2".to_string()));

pub fn values(&self) -> Vec<V>
where V: Clone,

Returns all values in the cache.

The order of values is not specified and should not be relied upon.

§Examples
use std::collections::HashSet;

use sieve_cache::ShardedSieveCache;

let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

let values: HashSet<_> = cache.values().into_iter().collect();
assert_eq!(values.len(), 2);
assert!(values.contains(&"value1".to_string()));
assert!(values.contains(&"value2".to_string()));

pub fn entries(&self) -> Vec<(K, V)>
where V: Clone,

Returns all key-value pairs in the cache.

The order of pairs is not specified and should not be relied upon.

§Examples
use std::collections::HashMap;

use sieve_cache::ShardedSieveCache;

let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

let entries: HashMap<_, _> = cache.entries().into_iter().collect();
assert_eq!(entries.len(), 2);
assert_eq!(entries.get(&"key1".to_string()), Some(&"value1".to_string()));
assert_eq!(entries.get(&"key2".to_string()), Some(&"value2".to_string()));

pub fn for_each_value<F>(&self, f: F)
where F: FnMut(&mut V), V: Clone,

Applies a function to all values in the cache across all shards.

This method marks all entries as visited. It safely processes each shard one at a time, applying the function to values without holding the lock.

§Thread Safety

This method operates in phases for each shard:

  1. Lock a shard and collect all key-value pairs
  2. Release the lock and process the values
  3. Re-acquire the lock to update the values
  4. Repeat for each shard

This approach ensures that:

  • Only one shard is locked at a time
  • The callback never executes while holding a lock
  • Each shard is processed atomically
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

// Update all values by appending text
cache.for_each_value(|value| {
    *value = format!("{}_updated", value);
});

assert_eq!(cache.get(&"key1".to_string()), Some("value1_updated".to_string()));
assert_eq!(cache.get(&"key2".to_string()), Some("value2_updated".to_string()));

pub fn for_each_entry<F>(&self, f: F)
where F: FnMut((&K, &mut V)), V: Clone,

Applies a function to all key-value pairs in the cache across all shards.

This method marks all entries as visited. It safely processes each shard one at a time, applying the function to key-value pairs without holding the lock.

§Thread Safety

This method uses the same safety pattern as for_each_value:

  1. Lock a shard and collect all key-value pairs
  2. Release the lock and process the pairs
  3. Re-acquire the lock to update the values
  4. Repeat for each shard
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

// Update all values associated with keys containing '1'
cache.for_each_entry(|(key, value)| {
    if key.contains('1') {
        *value = format!("{}_special", value);
    }
});

assert_eq!(cache.get(&"key1".to_string()), Some("value1_special".to_string()));
assert_eq!(cache.get(&"key2".to_string()), Some("value2".to_string()));

pub fn with_key_lock<Q, F, T>(&self, key: &Q, f: F) -> T
where Q: Hash + ?Sized, F: FnOnce(&mut SieveCache<K, V>) -> T,

Gets exclusive access to a specific shard based on the key.

This can be useful for performing multiple operations atomically on entries that share the same shard. Note that only keys that hash to the same shard can be manipulated within a single transaction.

§Thread Safety

This method provides a way to perform atomic operations on a subset of the cache:

  • Acquires a lock on a single shard determined by the key’s hash
  • Provides exclusive access to that shard for the duration of the callback
  • Allows multiple operations to be performed atomically within the shard
  • Operations on different shards remain concurrent (unlike SyncSieveCache::with_lock())

Important thread safety considerations:

  • Only keys that hash to the same shard can be accessed atomically in a single call
  • Operations affect only one shard, providing partial atomicity (limited to that shard)
  • The callback should not attempt to acquire other locks to avoid deadlocks
  • Long-running callbacks will block other threads from accessing the same shard

This method provides a good balance between atomicity and concurrency: it allows atomic multi-step operations while still permitting operations on other shards to proceed concurrently.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();

// Perform multiple operations atomically
cache.with_key_lock(&"foo", |shard| {
    // All operations within this closure have exclusive access to the shard
    shard.insert("key1".to_string(), "value1".to_string());
    shard.insert("key2".to_string(), "value2".to_string());
     
    // We can check state mid-transaction
    assert!(shard.contains_key(&"key1".to_string()));
});

pub fn num_shards(&self) -> usize

Returns the number of shards in this cache.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 32).unwrap();
assert_eq!(cache.num_shards(), 32);

pub fn get_shard_by_index(&self, index: usize) -> Option<&Arc<Mutex<SieveCache<K, V>>>>

Gets a specific shard by index.

This is mainly useful for advanced use cases and maintenance operations. Returns None if the index is out of bounds.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 8).unwrap();

// Access a valid shard
assert!(cache.get_shard_by_index(0).is_some());

// Out of bounds index
assert!(cache.get_shard_by_index(100).is_none());

pub fn retain<F>(&self, f: F)
where F: FnMut(&K, &V) -> bool, V: Clone,

Retains only the elements specified by the predicate.

Removes all entries for which the provided function returns false. The elements are visited in arbitrary, unspecified order, across all shards. This operation processes each shard individually, acquiring and releasing locks as it goes.

§Thread Safety

This method processes each shard independently:

  1. Locks a shard and collects all its entries
  2. Releases the lock and evaluates the predicate on each entry
  3. Re-acquires the lock and removes entries that didn’t pass
  4. Repeats for each shard

This ensures that:

  • Only one shard is locked at a time
  • Predicates never execute while holding locks
  • No deadlocks can occur from lock ordering
§Examples
let cache: ShardedSieveCache<String, i32> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), 100);
cache.insert("key2".to_string(), 200);
cache.insert("key3".to_string(), 300);

// Keep only entries with values greater than 150
cache.retain(|_, value| *value > 150);

assert_eq!(cache.len(), 2);
assert!(!cache.contains_key(&"key1".to_string()));
assert!(cache.contains_key(&"key2".to_string()));
assert!(cache.contains_key(&"key3".to_string()));

pub fn recommended_capacity(&self, min_factor: f64, max_factor: f64, low_threshold: f64, high_threshold: f64) -> usize

Returns a recommended cache capacity based on current utilization across all shards.

This method analyzes the utilization of all shards and recommends a new total capacity. It works by:

  1. Computing recommended capacity for each shard
  2. Summing these to get an aggregate recommendation (sketched below)

The recommendation balances resource usage with hit rate:

  • If many entries are frequently accessed (high utilization), it suggests increasing capacity
  • If few entries are accessed frequently (low utilization), it suggests decreasing capacity
  • Within a normal utilization range, it keeps the capacity stable
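
The aggregation can be sketched as follows; recommend_one stands in for the per-shard computation and is an assumption for illustration, not the crate's API:

use std::sync::Mutex;

// Sum the per-shard recommendations to get the total recommendation.
fn recommended_total<S>(
    shards: &[Mutex<S>],
    mut recommend_one: impl FnMut(&S) -> usize,
) -> usize {
    shards.iter().map(|s| recommend_one(&s.lock().unwrap())).sum()
}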
§Arguments
  • min_factor - Minimum scaling factor (e.g., 0.5 means never recommend less than 50% of current capacity)
  • max_factor - Maximum scaling factor (e.g., 2.0 means never recommend more than 200% of current capacity)
  • low_threshold - Utilization threshold below which capacity is reduced (e.g., 0.3 means 30% utilization)
  • high_threshold - Utilization threshold above which capacity is increased (e.g., 0.7 means 70% utilization)
§Examples
use sieve_cache::ShardedSieveCache;

let cache: ShardedSieveCache<String, u32> = ShardedSieveCache::new(1000).unwrap();
cache.insert("key".to_string(), 42);

// Get a recommended capacity using the starting-point parameters below
let recommended = cache.recommended_capacity(0.5, 2.0, 0.3, 0.7);
println!("Recommended capacity: {}", recommended);
§Default Recommendation Parameters

If you’re unsure what parameters to use, these values provide a reasonable starting point:

  • min_factor: 0.5 (never recommend less than half current capacity)
  • max_factor: 2.0 (never recommend more than twice current capacity)
  • low_threshold: 0.3 (reduce capacity if utilization below 30%)
  • high_threshold: 0.7 (increase capacity if utilization above 70%)

Trait Implementations§

impl<K, V> Clone for ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync + Clone,

fn clone(&self) -> ShardedSieveCache<K, V>

Returns a duplicate of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl<K, V> Debug for ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync + Debug, V: Send + Sync + Debug,

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<K, V> Default for ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,

fn default() -> Self

Creates a new sharded cache with a default capacity of 100 entries and default number of shards.

§Panics

Panics if the underlying ShardedSieveCache::new() returns an error, which should never happen for a non-zero capacity.

§Examples
let cache: ShardedSieveCache<String, u32> = Default::default();
assert!(cache.capacity() >= 100); // Due to shard distribution, might be slightly larger
assert_eq!(cache.num_shards(), 16); // Default shard count

impl<K, V> From<SyncSieveCache<K, V>> for ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Clone + Send + Sync,

fn from(sync_cache: SyncSieveCache<K, V>) -> Self

Creates a new sharded cache from an existing SyncSieveCache.

This allows for upgrading a standard thread-safe cache to a more scalable sharded version.

§Examples
use sieve_cache::{ShardedSieveCache, SyncSieveCache};

let sync_cache = SyncSieveCache::new(100).unwrap();
sync_cache.insert("key".to_string(), "value".to_string());

// Convert to sharded version with default sharding
let sharded_cache = ShardedSieveCache::from(sync_cache);
assert_eq!(sharded_cache.get(&"key".to_string()), Some("value".to_string()));

impl<K, V> IntoIterator for ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Clone + Send + Sync,

fn into_iter(self) -> Self::IntoIter

Converts the cache into an iterator over its key-value pairs.

This collects all entries into a Vec and returns an iterator over that Vec.

§Examples
use std::collections::HashMap;

use sieve_cache::ShardedSieveCache;

let cache = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

// Collect into a HashMap
let map: HashMap<_, _> = cache.into_iter().collect();
assert_eq!(map.len(), 2);
assert_eq!(map.get("key1"), Some(&"value1".to_string()));

type Item = (K, V)

The type of the elements being iterated over.

type IntoIter = IntoIter<(K, V)>

Which kind of iterator are we turning this into?

Auto Trait Implementations§

impl<K, V> Freeze for ShardedSieveCache<K, V>

impl<K, V> RefUnwindSafe for ShardedSieveCache<K, V>

impl<K, V> Send for ShardedSieveCache<K, V>

impl<K, V> Sync for ShardedSieveCache<K, V>

impl<K, V> Unpin for ShardedSieveCache<K, V>

impl<K, V> UnwindSafe for ShardedSieveCache<K, V>

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.