Struct ShardedSieveCache

pub struct ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,
{ /* private fields */ }

A thread-safe implementation of SieveCache that uses multiple shards to reduce contention.

This provides better concurrency than SyncSieveCache by splitting the cache into multiple independent shards, each protected by its own mutex. Operations on different shards can proceed in parallel, which can significantly improve throughput in multi-threaded environments.

§How Sharding Works

The cache is partitioned into multiple independent segments (shards) based on the hash of the key. Each shard has its own mutex, allowing operations on different shards to proceed concurrently. This reduces lock contention compared to a single-mutex approach, especially under high concurrency with access patterns distributed across different keys.

§Performance Considerations

  • For workloads with high concurrency across different keys, ShardedSieveCache typically offers better performance than SyncSieveCache
  • The benefits grow with the number of concurrent threads and with how evenly accesses are spread across keys
  • More shards reduce contention but increase memory overhead
  • If most operations target the same few keys (and therefore the same few shards), sharding provides little benefit
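Conceptually, routing a key to a shard amounts to hashing the key and reducing the hash modulo the shard count. The following is an illustrative, std-only sketch; the crate's actual routing logic is private and may differ:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical shard-selection function for illustration only.
fn shard_index<K: Hash>(key: &K, num_shards: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    // Reduce the 64-bit hash to a shard index.
    (hasher.finish() as usize) % num_shards
}

fn main() {
    let num_shards = 16;
    let idx = shard_index(&"key1", num_shards);
    // The index is always in range, and the same key always
    // routes to the same shard.
    assert!(idx < num_shards);
    assert_eq!(idx, shard_index(&"key1", num_shards));
}
```

Because the mapping is deterministic, all operations on a given key contend only for that one shard's mutex, which is what makes operations on different keys likely to proceed in parallel.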

§Examples

// Create a cache with default number of shards (16)
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(1000).unwrap();

// Or specify a custom number of shards
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 32).unwrap();

cache.insert("key1".to_string(), "value1".to_string());
assert_eq!(cache.get(&"key1".to_string()), Some("value1".to_string()));

Implementations§


impl<K, V> ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,


pub fn new(capacity: usize) -> Result<Self, &'static str>

Creates a new sharded cache with the specified capacity, using the default number of shards.

The capacity will be divided evenly among the shards. The default shard count (16) provides a good balance between concurrency and memory overhead for most workloads.

§Errors

Returns an error if the capacity is zero.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(1000).unwrap();
assert_eq!(cache.num_shards(), 16); // Default shard count

pub fn with_shards(capacity: usize, num_shards: usize) -> Result<Self, &'static str>

Creates a new sharded cache with the specified capacity and number of shards.

The capacity will be divided among the shards, distributing any remainder to ensure the total capacity is at least the requested amount.
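The split described above can be sketched as follows. This is an assumed formula for illustration (the crate's internal computation is private): each shard gets capacity / num_shards entries, and the remainder is spread over the first few shards so the total is at least the requested capacity.

```rust
// Hypothetical per-shard capacity split, for illustration only.
fn shard_capacities(capacity: usize, num_shards: usize) -> Vec<usize> {
    let base = capacity / num_shards;
    let remainder = capacity % num_shards;
    (0..num_shards)
        // The first `remainder` shards each absorb one extra slot.
        .map(|i| base + if i < remainder { 1 } else { 0 })
        .collect()
}

fn main() {
    let caps = shard_capacities(1000, 16);
    assert_eq!(caps.len(), 16);
    // The shard capacities sum to at least the requested total.
    assert!(caps.iter().sum::<usize>() >= 1000);
}
```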

§Arguments
  • capacity - The total capacity of the cache
  • num_shards - The number of shards to divide the cache into
§Errors

Returns an error if either the capacity or number of shards is zero.

§Performance Impact
  • More shards can reduce contention in highly concurrent environments
  • However, each shard has memory overhead, so very high shard counts may increase memory usage without providing additional performance benefits
  • For most workloads, a value between 8 and 32 shards is optimal
§Examples
// Create a cache with 8 shards
let cache: ShardedSieveCache<String, u32> = ShardedSieveCache::with_shards(1000, 8).unwrap();
assert_eq!(cache.num_shards(), 8);
assert!(cache.capacity() >= 1000);

pub fn capacity(&self) -> usize

Returns the total capacity of the cache (sum of all shard capacities).

§Examples
let cache: ShardedSieveCache<String, u32> = ShardedSieveCache::new(1000).unwrap();
assert!(cache.capacity() >= 1000);

pub fn len(&self) -> usize

Returns the total number of entries in the cache (sum of all shard lengths).

Note that this operation requires acquiring a lock on each shard, so it may cause temporary contention if called frequently in a high-concurrency environment.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();

cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

assert_eq!(cache.len(), 2);

pub fn is_empty(&self) -> bool

Returns true when no values are currently cached in any shard.

Note that this operation requires acquiring a lock on each shard, so it may cause temporary contention if called frequently in a high-concurrency environment.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
assert!(cache.is_empty());

cache.insert("key".to_string(), "value".to_string());
assert!(!cache.is_empty());

pub fn contains_key<Q>(&self, key: &Q) -> bool
where Q: Hash + Eq + ?Sized, K: Borrow<Q>,

Returns true if there is a value in the cache mapped to by key.

This operation only locks the specific shard containing the key.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

assert!(cache.contains_key(&"key".to_string()));
assert!(!cache.contains_key(&"missing".to_string()));

pub fn get<Q>(&self, key: &Q) -> Option<V>
where Q: Hash + Eq + ?Sized, K: Borrow<Q>, V: Clone,

Gets a clone of the value in the cache mapped to by key.

If no value exists for key, this returns None. This operation only locks the specific shard containing the key.

§Note

This method returns a clone of the value rather than a reference, since the mutex guard would be dropped after this method returns. This means that V must implement Clone.
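When cloning V is expensive, a common pattern (general Rust practice, not specific to this crate) is to store Arc<V> in the cache, so the clone returned by get is just a reference-count increment rather than a deep copy. A minimal std-only sketch of why that clone is cheap:

```rust
use std::sync::Arc;

fn main() {
    // Stand-in for an expensive-to-clone cached value.
    let big = Arc::new(vec![0u8; 1_000_000]);

    // Cloning the Arc bumps a reference count; no data is copied.
    let handle = Arc::clone(&big);
    assert_eq!(Arc::strong_count(&big), 2);
    assert_eq!(handle.len(), 1_000_000);
}
```

With V = Arc<BigValue>, get still returns an owned Option<Arc<BigValue>>, but the underlying data is shared rather than duplicated on every lookup.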

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

assert_eq!(cache.get(&"key".to_string()), Some("value".to_string()));
assert_eq!(cache.get(&"missing".to_string()), None);

pub fn insert(&self, key: K, value: V) -> bool

Maps key to value in the cache, possibly evicting old entries from the appropriate shard.

This method returns true when this is a new entry, and false if an existing entry was updated. This operation only locks the specific shard containing the key.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();

// Insert a new key
assert!(cache.insert("key1".to_string(), "value1".to_string()));

// Update an existing key
assert!(!cache.insert("key1".to_string(), "updated_value".to_string()));

pub fn remove<Q>(&self, key: &Q) -> Option<V>
where K: Borrow<Q>, Q: Eq + Hash + ?Sized,

Removes the cache entry mapped to by key.

This method returns the value removed from the cache. If key did not map to any value, then this returns None. This operation only locks the specific shard containing the key.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

// Remove an existing key
assert_eq!(cache.remove(&"key".to_string()), Some("value".to_string()));

// Attempt to remove a missing key
assert_eq!(cache.remove(&"key".to_string()), None);

pub fn evict(&self) -> Option<V>

Removes and returns a value from the cache that was not recently accessed.

This method tries to evict from each shard in turn until it finds a value to evict. If no suitable value exists in any shard, this returns None.

§Note

This implementation differs from the non-sharded version in that it checks each shard in sequence until it finds a suitable entry to evict. This may not provide the globally optimal eviction decision across all shards, but it avoids the need to lock all shards simultaneously.
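The per-shard scan described above can be modeled with stand-in shards (plain Vecs behind mutexes rather than real SieveCache shards): try each shard in turn, lock it, and stop at the first one that yields a victim.

```rust
use std::sync::Mutex;

// Simplified model of sequential per-shard eviction. The victim
// choice here (pop the last element) stands in for SIEVE's actual
// "not recently accessed" selection within a shard.
fn evict_first<V>(shards: &[Mutex<Vec<V>>]) -> Option<V> {
    for shard in shards {
        // Only one shard is locked at a time.
        let mut guard = shard.lock().unwrap();
        if let Some(victim) = guard.pop() {
            return Some(victim);
        }
    }
    // Every shard was empty.
    None
}

fn main() {
    let shards = vec![Mutex::new(Vec::new()), Mutex::new(vec![1, 2])];
    assert_eq!(evict_first(&shards), Some(2));
}
```

Locking one shard at a time is the trade-off the note describes: the scan may miss the globally best victim, but it never holds more than one shard mutex and so cannot stall the whole cache.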

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(10, 2).unwrap();

// Fill the cache beyond its capacity
for i in 0..15 {
    cache.insert(format!("key{}", i), format!("value{}", i));
}

// Should be able to evict something
assert!(cache.evict().is_some());

pub fn with_key_lock<Q, F, T>(&self, key: &Q, f: F) -> T
where Q: Hash + ?Sized, F: FnOnce(&mut SieveCache<K, V>) -> T,

Gets exclusive access to a specific shard based on the key.

This can be useful for performing multiple operations atomically on entries that share the same shard. Note that only keys that hash to the same shard can be manipulated within a single transaction.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();

// Perform multiple operations atomically
cache.with_key_lock(&"foo", |shard| {
    // All operations within this closure have exclusive access to the shard
    shard.insert("key1".to_string(), "value1".to_string());
    shard.insert("key2".to_string(), "value2".to_string());
     
    // We can check state mid-transaction
    assert!(shard.contains_key(&"key1".to_string()));
});

pub fn num_shards(&self) -> usize

Returns the number of shards in this cache.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 32).unwrap();
assert_eq!(cache.num_shards(), 32);

pub fn get_shard_by_index(&self, index: usize) -> Option<&Arc<Mutex<SieveCache<K, V>>>>

Gets a specific shard by index.

This is mainly useful for advanced use cases and maintenance operations. Returns None if the index is out of bounds.

§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 8).unwrap();

// Access a valid shard
assert!(cache.get_shard_by_index(0).is_some());

// Out of bounds index
assert!(cache.get_shard_by_index(100).is_none());

Trait Implementations§

impl<K, V> Clone for ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync + Clone,

fn clone(&self) -> ShardedSieveCache<K, V>

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

Auto Trait Implementations§

impl<K, V> Freeze for ShardedSieveCache<K, V>

impl<K, V> RefUnwindSafe for ShardedSieveCache<K, V>

impl<K, V> Send for ShardedSieveCache<K, V>

impl<K, V> Sync for ShardedSieveCache<K, V>

impl<K, V> Unpin for ShardedSieveCache<K, V>

impl<K, V> UnwindSafe for ShardedSieveCache<K, V>
