pub struct ShardedSieveCache<K, V> { /* private fields */ }
A thread-safe implementation of SieveCache that uses multiple shards to reduce contention.
This provides better concurrency than SyncSieveCache by splitting the cache into multiple
independent shards, each protected by its own mutex. Operations on different shards can
proceed in parallel, which can significantly improve throughput in multi-threaded environments.
§How Sharding Works
The cache is partitioned into multiple independent segments (shards) based on the hash of the key. Each shard has its own mutex, allowing operations on different shards to proceed concurrently. This reduces lock contention compared to a single-mutex approach, especially under high concurrency with access patterns distributed across different keys.
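The shard-selection step described above can be sketched as follows. This is a hypothetical illustration using the standard library's `DefaultHasher`; the crate's actual hasher and reduction scheme may differ:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: hash the key, then reduce the hash modulo the shard
// count. The same key always lands in the same shard, so per-key operations
// only ever need that one shard's mutex.
fn shard_index<K: Hash>(key: &K, num_shards: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() as usize) % num_shards
}

fn main() {
    let num_shards = 16;
    let idx = shard_index(&"key1", num_shards);
    // Deterministic: repeated lookups of one key hit one shard.
    assert_eq!(idx, shard_index(&"key1", num_shards));
    assert!(idx < num_shards);
}
```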
§Performance Considerations
- For workloads with high concurrency across different keys, ShardedSieveCache typically offers better performance than SyncSieveCache
- The benefits increase with the number of concurrent threads and the distribution of keys
- More shards reduce contention but increase memory overhead
- If most operations target the same few keys (which map to the same shards), the benefits of sharding may be limited
§Examples
// Create a cache with default number of shards (16)
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(1000).unwrap();
// Or specify a custom number of shards
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 32).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
assert_eq!(cache.get(&"key1".to_string()), Some("value1".to_string()));
Implementations§
impl<K, V> ShardedSieveCache<K, V>
pub fn new(capacity: usize) -> Result<Self, &'static str>
Creates a new sharded cache with the specified capacity, using the default number of shards.
The capacity will be divided evenly among the shards. The default shard count (16) provides a good balance between concurrency and memory overhead for most workloads.
§Errors
Returns an error if the capacity is zero.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(1000).unwrap();
assert_eq!(cache.num_shards(), 16); // Default shard count
pub fn with_shards(capacity: usize, num_shards: usize) -> Result<Self, &'static str>
Creates a new sharded cache with the specified capacity and number of shards.
The capacity will be divided among the shards, distributing any remainder to ensure the total capacity is at least the requested amount.
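One way the split could work, sketched below as an assumption rather than the crate's exact scheme: give every shard the integer quotient of `capacity / num_shards`, then hand the remainder out one slot at a time so the total is never less than requested.

```rust
// Hypothetical sketch of dividing a total capacity among shards. The first
// `capacity % num_shards` shards each get one extra slot, so the per-shard
// capacities sum to exactly the requested total.
fn shard_capacities(capacity: usize, num_shards: usize) -> Vec<usize> {
    let base = capacity / num_shards;
    let remainder = capacity % num_shards;
    (0..num_shards)
        .map(|i| base + if i < remainder { 1 } else { 0 })
        .collect()
}

fn main() {
    let caps = shard_capacities(1000, 16);
    // 1000 / 16 = 62 remainder 8: eight shards get 63 slots, the rest get 62.
    assert_eq!(caps.iter().sum::<usize>(), 1000);
    assert_eq!(caps[0], 63);
    assert_eq!(caps[15], 62);
}
```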
§Arguments
- capacity - The total capacity of the cache
- num_shards - The number of shards to divide the cache into
§Errors
Returns an error if either the capacity or number of shards is zero.
§Performance Impact
- More shards can reduce contention in highly concurrent environments
- However, each shard has memory overhead, so very high shard counts may increase memory usage without providing additional performance benefits
- For most workloads, a value between 8 and 32 shards is optimal
§Examples
// Create a cache with 8 shards
let cache: ShardedSieveCache<String, u32> = ShardedSieveCache::with_shards(1000, 8).unwrap();
assert_eq!(cache.num_shards(), 8);
assert!(cache.capacity() >= 1000);
pub fn capacity(&self) -> usize
Returns the total capacity of the cache (sum of all shard capacities).
§Examples
let cache: ShardedSieveCache<String, u32> = ShardedSieveCache::new(1000).unwrap();
assert!(cache.capacity() >= 1000);
pub fn len(&self) -> usize
Returns the total number of entries in the cache (sum of all shard lengths).
Note that this operation requires acquiring a lock on each shard, so it may cause temporary contention if called frequently in a high-concurrency environment.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());
assert_eq!(cache.len(), 2);
pub fn is_empty(&self) -> bool
Returns true when no values are currently cached in any shard.
Note that this operation requires acquiring a lock on each shard, so it may cause temporary contention if called frequently in a high-concurrency environment.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
assert!(cache.is_empty());
cache.insert("key".to_string(), "value".to_string());
assert!(!cache.is_empty());
pub fn contains_key<Q>(&self, key: &Q) -> bool
Returns true if there is a value in the cache mapped to by key.
This operation only locks the specific shard containing the key.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());
assert!(cache.contains_key(&"key".to_string()));
assert!(!cache.contains_key(&"missing".to_string()));
pub fn get<Q>(&self, key: &Q) -> Option<V>
Gets a clone of the value in the cache mapped to by key.
If no value exists for key, this returns None. This operation only locks
the specific shard containing the key.
§Note
This method returns a clone of the value rather than a reference, since the
mutex guard would be dropped after this method returns. This means that
V must implement Clone.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());
assert_eq!(cache.get(&"key".to_string()), Some("value".to_string()));
assert_eq!(cache.get(&"missing".to_string()), None);
pub fn insert(&self, key: K, value: V) -> bool
Maps key to value in the cache, possibly evicting old entries from the appropriate shard.
This method returns true when this is a new entry, and false if an existing entry was
updated. This operation only locks the specific shard containing the key.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
// Insert a new key
assert!(cache.insert("key1".to_string(), "value1".to_string()));
// Update an existing key
assert!(!cache.insert("key1".to_string(), "updated_value".to_string()));
pub fn remove<Q>(&self, key: &Q) -> Option<V>
Removes the cache entry mapped to by key.
This method returns the value removed from the cache. If key did not map to any value,
then this returns None. This operation only locks the specific shard containing the key.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());
// Remove an existing key
assert_eq!(cache.remove(&"key".to_string()), Some("value".to_string()));
// Attempt to remove a missing key
assert_eq!(cache.remove(&"key".to_string()), None);
pub fn evict(&self) -> Option<V>
Removes and returns a value from the cache that was not recently accessed.
This method tries to evict from each shard in turn until it finds a value to evict.
If no suitable value exists in any shard, this returns None.
§Note
This implementation differs from the non-sharded version in that it checks each shard in sequence until it finds a suitable entry to evict. This may not provide the globally optimal eviction decision across all shards, but it avoids the need to lock all shards simultaneously.
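The shard-by-shard scan described in this note can be sketched like so. The shards here are modeled as simple Mutex-protected queues rather than real SIEVE shards, purely to show the locking pattern (one shard locked at a time, stopping at the first victim found):

```rust
use std::sync::Mutex;

// Hypothetical sketch of sequential eviction across shards: lock each shard
// in turn and return the first value a shard yields. Only one shard's mutex
// is ever held at a time; shards are never locked simultaneously.
fn evict_from_shards(shards: &[Mutex<Vec<u32>>]) -> Option<u32> {
    for shard in shards {
        let mut guard = shard.lock().unwrap();
        if let Some(v) = guard.pop() {
            return Some(v);
        }
    }
    None
}

fn main() {
    let shards = vec![Mutex::new(vec![]), Mutex::new(vec![7u32])];
    // The empty first shard is skipped; the second shard yields its entry.
    assert_eq!(evict_from_shards(&shards), Some(7));
    // Once every shard is empty, nothing can be evicted.
    assert_eq!(evict_from_shards(&shards), None);
}
```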
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(10, 2).unwrap();
// Fill the cache
for i in 0..15 {
    cache.insert(format!("key{}", i), format!("value{}", i));
}
// Should be able to evict something
assert!(cache.evict().is_some());
pub fn with_key_lock<Q, F, T>(&self, key: &Q, f: F) -> T
Gets exclusive access to a specific shard based on the key.
This can be useful for performing multiple operations atomically on entries that share the same shard. Note that only keys that hash to the same shard can be manipulated within a single transaction.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::new(100).unwrap();
// Perform multiple operations atomically
cache.with_key_lock(&"foo", |shard| {
    // All operations within this closure have exclusive access to the shard
    shard.insert("key1".to_string(), "value1".to_string());
    shard.insert("key2".to_string(), "value2".to_string());
    // We can check state mid-transaction
    assert!(shard.contains_key(&"key1".to_string()));
});
pub fn num_shards(&self) -> usize
Returns the number of shards in this cache.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 32).unwrap();
assert_eq!(cache.num_shards(), 32);
pub fn get_shard_by_index(&self, index: usize) -> Option<&Arc<Mutex<SieveCache<K, V>>>>
Gets a specific shard by index.
This is mainly useful for advanced use cases and maintenance operations.
Returns None if the index is out of bounds.
§Examples
let cache: ShardedSieveCache<String, String> = ShardedSieveCache::with_shards(1000, 8).unwrap();
// Access a valid shard
assert!(cache.get_shard_by_index(0).is_some());
// Out of bounds index
assert!(cache.get_shard_by_index(100).is_none());
Trait Implementations§
impl<K, V> Clone for ShardedSieveCache<K, V>
fn clone(&self) -> ShardedSieveCache<K, V>
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.