pub struct ConcurrentLruCache<K, V, S = DefaultHashBuilder> { /* private fields */ }
A thread-safe LRU cache with segmented storage for high concurrency.
Keys are partitioned across multiple segments using hash-based sharding. Each segment has its own lock, allowing concurrent access to different segments without blocking.
§Type Parameters
K: Key type. Must implement Hash + Eq + Clone + Send.
V: Value type. Must implement Clone + Send.
S: Hash builder type. Defaults to DefaultHashBuilder.
§Note on LRU Semantics
LRU ordering is maintained per-segment, not globally. This means an item in segment A might be evicted while segment B has items that were accessed less recently in wall-clock time. For most workloads with good key distribution, this approximation works well.
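The hash-based sharding described above can be sketched as follows. This is an illustrative stand-in, not the cache_rs internals: the function name `segment_for` and the power-of-two mask are assumptions for the sketch.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: map a key to a segment index, as a segmented
// cache might do internally. With a power-of-two segment count, the
// modulo reduces to a cheap bit-mask.
fn segment_for<K: Hash>(key: &K, segments: usize) -> usize {
    debug_assert!(segments.is_power_of_two());
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() as usize) & (segments - 1)
}

fn main() {
    let segments = 16;
    let idx = segment_for(&"some-key", segments);
    assert!(idx < segments);
    println!("segment index: {idx}");
}
```

Because each key deterministically lands in one segment, two threads touching keys in different segments never contend on the same lock.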
§Example
use cache_rs::concurrent::ConcurrentLruCache;
use std::sync::Arc;
let cache = Arc::new(ConcurrentLruCache::new(1000));
// Safe to use from multiple threads
cache.put("key".to_string(), 42);
assert_eq!(cache.get(&"key".to_string()), Some(42));
§Implementations
impl<K, V> ConcurrentLruCache<K, V, DefaultHashBuilder>
pub fn init(config: ConcurrentLruCacheConfig, hasher: Option<DefaultHashBuilder>) -> Self
Creates a new concurrent LRU cache from a configuration with an optional hasher.
This is the recommended way to create a concurrent LRU cache.
§Arguments
config - Configuration specifying capacity, segments, and an optional size limit
hasher - Optional custom hash builder. If None, uses DefaultHashBuilder
§Example
use cache_rs::concurrent::ConcurrentLruCache;
use cache_rs::config::{ConcurrentLruCacheConfig, ConcurrentCacheConfig, LruCacheConfig};
use core::num::NonZeroUsize;
// Simple capacity-only cache with default segments
let config: ConcurrentLruCacheConfig = ConcurrentCacheConfig {
base: LruCacheConfig {
capacity: NonZeroUsize::new(10000).unwrap(),
max_size: u64::MAX,
},
segments: 16,
};
let cache: ConcurrentLruCache<String, i32> = ConcurrentLruCache::init(config, None);
// With custom segments and size limit
let config: ConcurrentLruCacheConfig = ConcurrentCacheConfig {
base: LruCacheConfig {
capacity: NonZeroUsize::new(10000).unwrap(),
max_size: 100 * 1024 * 1024, // 100MB
},
segments: 32,
};
let cache: ConcurrentLruCache<String, Vec<u8>> = ConcurrentLruCache::init(config, None);
impl<K, V, S> ConcurrentLruCache<K, V, S>
pub fn segment_count(&self) -> usize
Returns the number of segments in the cache.
pub fn len(&self) -> usize
Returns the total number of entries across all segments.
Note: This acquires a lock on each segment sequentially, so the total is not an atomic snapshot; under concurrent modification it may be stale by the time it is returned.
pub fn get<Q>(&self, key: &Q) -> Option<V>
Retrieves a value from the cache.
Returns a clone of the value to avoid holding the lock. For operations
that don’t need ownership, use get_with() instead.
If the key exists, it is moved to the MRU position within its segment.
§Example
let value = cache.get(&"key".to_string());
pub fn get_with<Q, F, R>(&self, key: &Q, f: F) -> Option<R>
Retrieves a value and applies a function to it while holding the lock.
More efficient than get() when you only need to read from the value,
as it avoids cloning. The lock is released after f returns.
§Type Parameters
F: Function that takes &V and returns R
R: Return type of the function
§Example
// Get length without cloning the whole string
let len = cache.get_with(&key, |value| value.len());
pub fn get_mut_with<Q, F, R>(&self, key: &Q, f: F) -> Option<R>
Retrieves a value and applies a function to a mutable reference to it while holding the lock. The entry is modified in place; as with get_with(), the lock is released after f returns.
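The lock-scoped mutation pattern behind get_mut_with can be sketched with a minimal stand-in cache (this sketch uses a plain Mutex-wrapped HashMap, not cache_rs itself; `TinyCache` is a hypothetical name for illustration):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Illustrative stand-in: apply a closure to a mutable reference while
// the lock is held, returning the closure's result instead of a clone.
struct TinyCache {
    inner: Mutex<HashMap<String, Vec<u8>>>,
}

impl TinyCache {
    fn get_mut_with<F, R>(&self, key: &str, f: F) -> Option<R>
    where
        F: FnOnce(&mut Vec<u8>) -> R,
    {
        let mut guard = self.inner.lock().unwrap(); // lock acquired here
        guard.get_mut(key).map(f) // released when guard drops, after f runs
    }
}

fn main() {
    let cache = TinyCache { inner: Mutex::new(HashMap::new()) };
    cache.inner.lock().unwrap().insert("k".into(), vec![1, 2]);
    // Mutate in place and return the new length, without cloning the value.
    let len = cache.get_mut_with("k", |v| { v.push(3); v.len() });
    assert_eq!(len, Some(3));
}
```

Keep the closure short: it runs with the segment lock held, so long-running work inside it blocks every other thread hashing into the same segment.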
pub fn put(&self, key: K, value: V) -> Option<(K, V)>
Inserts a key-value pair into the cache.
If the key exists, the value is updated and moved to MRU position. If at capacity, the LRU entry in the target segment is evicted.
§Returns
Some((old_key, old_value)) if the key already existed or an entry was evicted
None if the entry was inserted with capacity available
§Example
cache.put("key".to_string(), 42);
pub fn put_with_size(&self, key: K, value: V, size: u64) -> Option<(K, V)>
Inserts a key-value pair with explicit size tracking.
Use this for size-aware caching. The size is used for max_size tracking
and eviction decisions.
§Arguments
key - The key to insert
value - The value to cache
size - Size of this entry (in your chosen unit)
§Example
let data = vec![0u8; 1024];
cache.put_with_size("file".to_string(), data, 1024);
pub fn contains_key<Q>(&self, key: &Q) -> bool
Checks if the cache contains a key.
Note: This updates the entry's recency (moves it to the MRU position within its segment), so it is not a side-effect-free existence check and will affect eviction order.
pub fn clear(&self)
Removes all entries from all segments.
Acquires locks on each segment sequentially.
pub fn current_size(&self) -> u64
Returns the current total size across all segments.
This is the sum of all size values from put_with_size() calls.
pub fn record_miss(&self, object_size: u64)
Records a cache miss for metrics tracking.
Call this after a failed get() when you fetch from the origin.
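The read-through call pattern described above can be sketched with a self-contained stand-in (again not cache_rs itself: `TinyCache`, `get_or_fetch`, and the hard-coded origin fetch are illustrative assumptions):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Minimal stand-in cache with a miss counter, illustrating the
// miss -> fetch -> put -> record_miss sequence.
struct TinyCache {
    map: Mutex<HashMap<String, Vec<u8>>>,
    misses: Mutex<u64>,
}

impl TinyCache {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.map.lock().unwrap().get(key).cloned()
    }
    fn put(&self, key: String, value: Vec<u8>) {
        self.map.lock().unwrap().insert(key, value);
    }
    fn record_miss(&self, _object_size: u64) {
        *self.misses.lock().unwrap() += 1;
    }
}

// On a miss: fetch from the origin, insert, then record the miss
// with the fetched object's size.
fn get_or_fetch(cache: &TinyCache, key: &str) -> Vec<u8> {
    if let Some(v) = cache.get(key) {
        return v; // hit: no miss accounting
    }
    let v = vec![0u8; 1024]; // stand-in for a real origin fetch
    cache.put(key.to_string(), v.clone());
    cache.record_miss(v.len() as u64);
    v
}

fn main() {
    let cache = TinyCache { map: Mutex::new(HashMap::new()), misses: Mutex::new(0) };
    let first = get_or_fetch(&cache, "obj");  // miss: fetched and recorded
    let second = get_or_fetch(&cache, "obj"); // hit: served from cache
    assert_eq!(first, second);
    assert_eq!(*cache.misses.lock().unwrap(), 1);
}
```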