Struct SyncSieveCache

pub struct SyncSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,
{ /* private fields */ }

A thread-safe wrapper around SieveCache.

This provides a thread-safe implementation of the SIEVE cache algorithm by wrapping the standard SieveCache in an `Arc<Mutex<SieveCache<K, V>>>`. It offers the same functionality as the underlying cache, but with thread safety guarantees.

§Thread Safety

All operations acquire a lock on the entire cache, which provides strong consistency but may become a bottleneck under high contention. For workloads with high concurrency, consider using ShardedSieveCache instead, which partitions the cache into multiple independently-locked segments.

§Lock Behavior

The thread safety mechanism works as follows:

  • Simple query operations (e.g., get, contains_key) hold the lock only long enough to read and clone the value
  • Modification operations (e.g., insert, remove) hold the lock for the duration of the change
  • Operations that accept callbacks have specific lock behavior:
    • get_mut acquires and releases the lock repeatedly to avoid deadlocks, using a clone-modify-update pattern
    • for_each_value, for_each_entry, and retain collect data under the lock, then release it before processing to avoid blocking other threads

§Deadlock Prevention

This implementation prevents deadlocks by:

  • Never allowing callbacks to execute while holding the cache lock
  • Using a clone-modify-update pattern for all callbacks that need to modify values
  • Ensuring lock acquisition is always done in a consistent order
  • Providing explicit transaction methods that make locking transparent to the user

§Consistency Guarantees

  • Operations on individual keys are atomic and isolated
  • Snapshot-based operations (e.g., iteration, bulk operations) may not reflect concurrent modifications
  • When using callback functions, be aware they execute outside the lock which means the cache state may change between lock acquisitions

§Examples

let cache = SyncSieveCache::new(100).unwrap();

// Multiple threads can safely access the cache
cache.insert("key1".to_string(), "value1".to_string());
assert_eq!(cache.get(&"key1".to_string()), Some("value1".to_string()));

Example with callbacks:

use sieve_cache::SyncSieveCache;
use std::thread;

let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), 1);
cache.insert("key2".to_string(), 2);

// Create a clone to move into another thread
let cache_clone = cache.clone();

// Spawn a thread that modifies values
let handle = thread::spawn(move || {
    // The cache is safely accessible from multiple threads
    cache_clone.for_each_value(|value| {
        *value += 10;  // Add 10 to each value
    });
});

// Wait for the background thread to complete
handle.join().unwrap();

// Values have been updated
assert_eq!(cache.get(&"key1".to_string()), Some(11));
assert_eq!(cache.get(&"key2".to_string()), Some(12));

Implementations§


impl<K, V> SyncSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,


pub fn new(capacity: usize) -> Result<Self, &'static str>

Creates a new thread-safe cache with the given capacity.

§Errors

Returns an error if the capacity is zero.

§Examples
let cache = SyncSieveCache::<String, String>::new(100).unwrap();

pub fn capacity(&self) -> usize

Returns the capacity of the cache.

§Examples
let cache = SyncSieveCache::<String, u32>::new(100).unwrap();
assert_eq!(cache.capacity(), 100);

pub fn len(&self) -> usize

Returns the number of cached values.

§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());
assert_eq!(cache.len(), 1);

pub fn is_empty(&self) -> bool

Returns true when no values are currently cached.

§Examples
let cache = SyncSieveCache::<String, String>::new(100).unwrap();
assert!(cache.is_empty());

cache.insert("key".to_string(), "value".to_string());
assert!(!cache.is_empty());

pub fn contains_key<Q>(&self, key: &Q) -> bool
where Q: Hash + Eq + ?Sized, K: Borrow<Q>,

Returns true if there is a value in the cache mapped to by key.

§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

assert!(cache.contains_key(&"key".to_string()));
assert!(!cache.contains_key(&"missing".to_string()));

pub fn get<Q>(&self, key: &Q) -> Option<V>
where Q: Hash + Eq + ?Sized, K: Borrow<Q>, V: Clone,

Gets a clone of the value in the cache mapped to by key.

If no value exists for key, this returns None.

§Note

Unlike the unwrapped SieveCache, this returns a clone of the value rather than a reference, because a reference could not outlive the mutex guard, which is released when this method returns. This means that V must implement Clone.

§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

assert_eq!(cache.get(&"key".to_string()), Some("value".to_string()));
assert_eq!(cache.get(&"missing".to_string()), None);

pub fn get_mut<Q, F>(&self, key: &Q, f: F) -> bool
where Q: Hash + Eq + ?Sized, K: Borrow<Q>, F: FnOnce(&mut V), V: Clone,

Gets a mutable reference to the value in the cache mapped to by key via a callback function.

If no value exists for key, the callback will not be invoked and this returns false. Otherwise, the callback is invoked with a mutable reference to the value and this returns true.

This operation marks the entry as “visited” in the SIEVE algorithm, which affects eviction decisions.

§Thread Safety

This method operates safely with recursive calls by:

  1. Cloning the value with a short-lived lock
  2. Releasing the lock during callback execution
  3. Re-acquiring the lock to update the original value

This approach means:

  • The callback can safely make other calls to the same cache instance
  • The value can be modified by other threads during the callback execution
  • Changes are not visible to other threads until the callback completes
  • Last writer wins if multiple threads modify the same key concurrently
§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

// Modify the value in-place
cache.get_mut(&"key".to_string(), |value| {
    *value = "new_value".to_string();
});

assert_eq!(cache.get(&"key".to_string()), Some("new_value".to_string()));

pub fn insert(&self, key: K, value: V) -> bool

Maps key to value in the cache, possibly evicting old entries.

This method returns true when this is a new entry, and false if an existing entry was updated.

§Examples
let cache = SyncSieveCache::new(100).unwrap();

// Insert a new key
assert!(cache.insert("key1".to_string(), "value1".to_string()));

// Update an existing key
assert!(!cache.insert("key1".to_string(), "updated_value".to_string()));

pub fn remove<Q>(&self, key: &Q) -> Option<V>
where K: Borrow<Q>, Q: Eq + Hash + ?Sized,

Removes the cache entry mapped to by key.

This method returns the value removed from the cache. If key did not map to any value, then this returns None.

§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key".to_string(), "value".to_string());

// Remove an existing key
assert_eq!(cache.remove(&"key".to_string()), Some("value".to_string()));

// Attempt to remove a missing key
assert_eq!(cache.remove(&"key".to_string()), None);

pub fn evict(&self) -> Option<V>

Removes and returns a value from the cache that was not recently accessed.

This implements the SIEVE eviction algorithm to select an entry for removal. If no suitable value exists, this returns None.

§Examples
let cache = SyncSieveCache::new(2).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

// Accessing key1 marks it as visited in the SIEVE algorithm
assert_eq!(cache.get(&"key1".to_string()), Some("value1".to_string()));

// key2 was not visited, so it is selected for eviction
assert_eq!(cache.evict(), Some("value2".to_string()));

// key2 is no longer present
assert_eq!(cache.get(&"key2".to_string()), None);

pub fn clear(&self)

Removes all entries from the cache.

This operation clears all stored values and resets the cache to an empty state, while maintaining the original capacity.

§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

assert_eq!(cache.len(), 2);

cache.clear();
assert_eq!(cache.len(), 0);
assert!(cache.is_empty());

pub fn keys(&self) -> Vec<K>

Returns an iterator over all keys in the cache.

The order of keys is not specified and should not be relied upon.

§Examples
use sieve_cache::SyncSieveCache;
use std::collections::HashSet;

let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

let keys: HashSet<_> = cache.keys().into_iter().collect();
assert_eq!(keys.len(), 2);
assert!(keys.contains(&"key1".to_string()));
assert!(keys.contains(&"key2".to_string()));

pub fn values(&self) -> Vec<V>
where V: Clone,

Returns all values in the cache.

The order of values is not specified and should not be relied upon.

§Examples
use sieve_cache::SyncSieveCache;
use std::collections::HashSet;

let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

let values: HashSet<_> = cache.values().into_iter().collect();
assert_eq!(values.len(), 2);
assert!(values.contains(&"value1".to_string()));
assert!(values.contains(&"value2".to_string()));

pub fn entries(&self) -> Vec<(K, V)>
where V: Clone,

Returns all key-value pairs in the cache.

The order of pairs is not specified and should not be relied upon.

§Examples
use sieve_cache::SyncSieveCache;
use std::collections::HashMap;

let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

let entries: HashMap<_, _> = cache.entries().into_iter().collect();
assert_eq!(entries.len(), 2);
assert_eq!(entries.get(&"key1".to_string()), Some(&"value1".to_string()));
assert_eq!(entries.get(&"key2".to_string()), Some(&"value2".to_string()));

pub fn for_each_value<F>(&self, f: F)
where F: FnMut(&mut V), V: Clone,

Applies a function to all values in the cache.

This method safely processes values by collecting them with the lock held, then releasing the lock before applying the function to each value individually. If the function modifies the values, the changes are saved back to the cache.

§Thread Safety

This method operates in three phases:

  1. It acquires the lock and creates a complete snapshot of the cache
  2. It releases the lock and applies the callback to each value
  3. It acquires the lock again individually for each value when writing changes back

This approach means:

  • The lock is not held during callback execution, preventing lock contention
  • If other threads modify the cache between steps 1 and 3, those changes might be overwritten
  • The callback sees a point-in-time snapshot that might not reflect the latest state
  • For long-running operations, consider using individual key operations instead
§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

// Update all values by appending text
cache.for_each_value(|value| {
    *value = format!("{}_updated", value);
});

assert_eq!(cache.get(&"key1".to_string()), Some("value1_updated".to_string()));
assert_eq!(cache.get(&"key2".to_string()), Some("value2_updated".to_string()));

pub fn for_each_entry<F>(&self, f: F)
where F: FnMut((&K, &mut V)), V: Clone,

Applies a function to all key-value pairs in the cache.

This method safely processes key-value pairs by collecting them with the lock held, then releasing the lock before applying the function to each pair individually. If the function modifies the values, the changes are saved back to the cache.

§Thread Safety

This method operates in three phases:

  1. It acquires the lock and creates a complete snapshot of the cache
  2. It releases the lock and applies the callback to each key-value pair
  3. It acquires the lock again individually for each entry when writing changes back

This approach means:

  • The lock is not held during callback execution, preventing lock contention
  • If other threads modify the cache between steps 1 and 3, those changes might be overwritten
  • The callback sees a point-in-time snapshot that might not reflect the latest state
  • For long-running operations, consider using individual key operations instead
§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

// Update all values associated with keys containing '1'
cache.for_each_entry(|(key, value)| {
    if key.contains('1') {
        *value = format!("{}_special", value);
    }
});

assert_eq!(cache.get(&"key1".to_string()), Some("value1_special".to_string()));
assert_eq!(cache.get(&"key2".to_string()), Some("value2".to_string()));

pub fn with_lock<F, T>(&self, f: F) -> T
where F: FnOnce(&mut SieveCache<K, V>) -> T,

Gets exclusive access to the underlying cache to perform multiple operations atomically.

This is useful when you need to perform a series of operations that depend on each other and you want to ensure that no other thread can access the cache between operations.

§Thread Safety

This method provides the strongest thread safety guarantees by:

  • Acquiring the lock once for the entire duration of the callback
  • Allowing multiple operations to be performed atomically within a single lock scope
  • Ensuring all operations within the callback are fully isolated from other threads

However, this also means:

  • The entire cache is locked during the callback execution
  • Other threads will be blocked from accessing the cache until the callback completes
  • Long-running operations within the callback will cause high contention
  • Callbacks must never try to access the same cache instance recursively (deadlock risk)

Use this method when you need atomic multi-step operations, but use it sparingly in high-concurrency environments.

§Examples
let cache = SyncSieveCache::new(100).unwrap();

cache.with_lock(|inner_cache| {
    // Perform multiple operations atomically
    inner_cache.insert("key1".to_string(), "value1".to_string());
    inner_cache.insert("key2".to_string(), "value2".to_string());

    // We can check internal state mid-transaction
    assert_eq!(inner_cache.len(), 2);
});

pub fn retain<F>(&self, f: F)
where F: FnMut(&K, &V) -> bool, V: Clone,

Retains only the elements specified by the predicate.

Removes all entries for which the provided function returns false. The elements are visited in arbitrary, unspecified order.

Two variants of this operation are available:

  • retain (this method): evaluates the predicate outside the lock, then removes each failing entry with a separate lock acquisition
  • retain_batch: evaluates the predicate outside the lock, then performs all removals under a single lock acquisition
§Thread Safety

This method operates in three phases:

  1. It acquires the lock and creates a complete snapshot of the cache
  2. It releases the lock and applies the predicate to each entry
  3. It either:
    • Acquires the lock again individually for each entry to be removed (default), or
    • Acquires the lock once and removes all entries in a batch (batch mode)

This approach means:

  • The lock is not held during predicate execution, preventing lock contention
  • If other threads modify the cache between steps 1 and 3, the method may:
    • Remove entries that were modified and would now pass the predicate
    • Miss removing entries added after the snapshot was taken
  • The predicate sees a point-in-time snapshot that might not reflect the latest state
  • For strict consistency requirements, use with_lock instead
§Examples

Basic usage:

let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), 100);
cache.insert("key2".to_string(), 200);
cache.insert("key3".to_string(), 300);

// Keep only entries with values greater than 150
cache.retain(|_, value| *value > 150);

assert_eq!(cache.len(), 2);
assert!(!cache.contains_key(&"key1".to_string()));
assert!(cache.contains_key(&"key2".to_string()));
assert!(cache.contains_key(&"key3".to_string()));

Using batch mode (more efficient):

let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), 100);
cache.insert("key2".to_string(), 200);
cache.insert("key3".to_string(), 300);

// Keep only entries with values greater than 150 (batch mode)
cache.retain_batch(|_, value| *value > 150);

assert_eq!(cache.len(), 2);
assert!(!cache.contains_key(&"key1".to_string()));
assert!(cache.contains_key(&"key2".to_string()));
assert!(cache.contains_key(&"key3".to_string()));

pub fn retain_batch<F>(&self, f: F)
where F: FnMut(&K, &V) -> bool, V: Clone,

Retains only the elements specified by the predicate, using batch processing.

This is an optimized version of retain() that collects all keys to remove first, then removes them in a single batch operation with a single lock acquisition. This is more efficient when removing many entries, especially in high-contention scenarios.

§Thread Safety

This method has improved performance characteristics:

  • Only acquires the lock twice (once for collection, once for removal)
  • Performs all removals in a single atomic operation
  • Reduces lock contention compared to standard retain()

However, it still has the same consistency characteristics as retain():

  • The snapshot might not reflect concurrent modifications
  • There’s a window between collecting entries and removing them where other threads might modify the cache
§Examples
let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), 100);
cache.insert("key2".to_string(), 200);
cache.insert("key3".to_string(), 300);

// Keep only entries with values greater than 150
cache.retain_batch(|_, value| *value > 150);

assert_eq!(cache.len(), 2);
assert!(!cache.contains_key(&"key1".to_string()));
assert!(cache.contains_key(&"key2".to_string()));
assert!(cache.contains_key(&"key3".to_string()));

pub fn recommended_capacity(
    &self,
    min_factor: f64,
    max_factor: f64,
    low_threshold: f64,
    high_threshold: f64,
) -> usize

Returns a recommended cache capacity based on current utilization.

This method analyzes the current cache utilization and recommends a new capacity based on:

  • The number of entries with the ‘visited’ flag set
  • The current capacity and fill ratio
  • A target utilization range

The recommendation aims to keep the cache size optimal:

  • If many entries are frequently accessed (high utilization), it suggests increasing capacity
  • If few entries are accessed frequently (low utilization), it suggests decreasing capacity
  • Within a normal utilization range, it keeps the capacity stable
§Arguments
  • min_factor - Minimum scaling factor (e.g., 0.5 means never recommend less than 50% of current capacity)
  • max_factor - Maximum scaling factor (e.g., 2.0 means never recommend more than 200% of current capacity)
  • low_threshold - Utilization threshold below which capacity is reduced (e.g., 0.3 means 30% utilization)
  • high_threshold - Utilization threshold above which capacity is increased (e.g., 0.7 means 70% utilization)
§Examples
let cache = SyncSieveCache::<String, u32>::new(100).unwrap();

// Get a recommended capacity with the suggested default parameters
let recommended = cache.recommended_capacity(0.5, 2.0, 0.3, 0.7);
println!("Recommended capacity: {}", recommended);
§Default Recommendation Parameters

If you’re unsure what parameters to use, these values provide a reasonable starting point:

  • min_factor: 0.5 (never recommend less than half current capacity)
  • max_factor: 2.0 (never recommend more than twice current capacity)
  • low_threshold: 0.3 (reduce capacity if utilization below 30%)
  • high_threshold: 0.7 (increase capacity if utilization above 70%)

Trait Implementations§


impl<K, V> Clone for SyncSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync + Clone,


fn clone(&self) -> SyncSieveCache<K, V>

Returns a duplicate of the value. Read more

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more

impl<K, V> Debug for SyncSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync + Debug, V: Send + Sync + Debug,


fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

impl<K, V> Default for SyncSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,


fn default() -> Self

Creates a new cache with a default capacity of 100 entries.

§Panics

Panics if the underlying SieveCache::new() returns an error, which should never happen for a non-zero capacity.

§Examples
let cache: SyncSieveCache<String, u32> = Default::default();
assert_eq!(cache.capacity(), 100);

impl<K, V> From<SieveCache<K, V>> for SyncSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Send + Sync,


fn from(cache: SieveCache<K, V>) -> Self

Creates a new thread-safe cache from an existing SieveCache.

This allows for easily converting a single-threaded cache to a thread-safe version.

§Examples
let mut single_threaded = SieveCache::new(100).unwrap();
single_threaded.insert("key".to_string(), "value".to_string());

// Convert to thread-safe version
let thread_safe = SyncSieveCache::from(single_threaded);
assert_eq!(thread_safe.get(&"key".to_string()), Some("value".to_string()));

impl<K, V> From<SyncSieveCache<K, V>> for ShardedSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Clone + Send + Sync,


fn from(sync_cache: SyncSieveCache<K, V>) -> Self

Creates a new sharded cache from an existing SyncSieveCache.

This allows for upgrading a standard thread-safe cache to a more scalable sharded version.

§Examples
let sync_cache = SyncSieveCache::new(100).unwrap();
sync_cache.insert("key".to_string(), "value".to_string());

// Convert to sharded version with default sharding
let sharded_cache = ShardedSieveCache::from(sync_cache);
assert_eq!(sharded_cache.get(&"key".to_string()), Some("value".to_string()));

impl<K, V> IntoIterator for SyncSieveCache<K, V>
where K: Eq + Hash + Clone + Send + Sync, V: Clone + Send + Sync,


fn into_iter(self) -> Self::IntoIter

Converts the cache into an iterator over its key-value pairs.

This collects all entries into a Vec and returns an iterator over that Vec.

§Examples
use sieve_cache::SyncSieveCache;
use std::collections::HashMap;

let cache = SyncSieveCache::new(100).unwrap();
cache.insert("key1".to_string(), "value1".to_string());
cache.insert("key2".to_string(), "value2".to_string());

// Collect into a HashMap
let map: HashMap<_, _> = cache.into_iter().collect();
assert_eq!(map.len(), 2);
assert_eq!(map.get("key1"), Some(&"value1".to_string()));

type Item = (K, V)

The type of the elements being iterated over.

type IntoIter = IntoIter<(K, V)>

Which kind of iterator are we turning this into?

Auto Trait Implementations§


impl<K, V> Freeze for SyncSieveCache<K, V>


impl<K, V> RefUnwindSafe for SyncSieveCache<K, V>


impl<K, V> Send for SyncSieveCache<K, V>


impl<K, V> Sync for SyncSieveCache<K, V>


impl<K, V> Unpin for SyncSieveCache<K, V>


impl<K, V> UnwindSafe for SyncSieveCache<K, V>

Blanket Implementations§


impl<T> Any for T
where T: 'static + ?Sized,


fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,


fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,


fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> CloneToUninit for T
where T: Clone,


unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more

impl<T> From<T> for T


fn from(t: T) -> T

Returns the argument unchanged.


impl<T, U> Into<U> for T
where U: From<T>,


fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.


impl<T> ToOwned for T
where T: Clone,


type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more

impl<T, U> TryFrom<U> for T
where U: Into<T>,


type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,


type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.