Struct ConcurrentLruCache

pub struct ConcurrentLruCache<K, V, S = DefaultHashBuilder> { /* private fields */ }

A thread-safe LRU cache with segmented storage for high concurrency.

Keys are partitioned across multiple segments using hash-based sharding. Each segment has its own lock, allowing concurrent access to different segments without blocking.

§Type Parameters

  • K: Key type. Must implement Hash + Eq + Clone + Send.
  • V: Value type. Must implement Clone + Send.
  • S: Hash builder type. Defaults to DefaultHashBuilder.

§Note on LRU Semantics

LRU ordering is maintained per-segment, not globally. This means an item in segment A might be evicted while segment B has items that were accessed less recently in wall-clock time. For most workloads with good key distribution, this approximation works well.

§Example

use cache_rs::concurrent::ConcurrentLruCache;
use std::sync::Arc;

let cache = Arc::new(ConcurrentLruCache::new(1000));

// Safe to use from multiple threads
cache.put("key".to_string(), 42);
assert_eq!(cache.get(&"key".to_string()), Some(42));
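
The snippet above is single-threaded; as a rough sketch of cross-thread use (it assumes the same new(1000) constructor shown above), clone the Arc into each worker:

use cache_rs::concurrent::ConcurrentLruCache;
use std::sync::Arc;
use std::thread;

let cache = Arc::new(ConcurrentLruCache::new(1000));

let handles: Vec<_> = (0..4)
    .map(|i| {
        let cache = Arc::clone(&cache);
        thread::spawn(move || {
            // Different keys usually hash to different segments,
            // so these writers rarely contend on the same lock.
            cache.put(format!("key-{i}"), i);
            assert_eq!(cache.get(&format!("key-{i}")), Some(i));
        })
    })
    .collect();

for handle in handles {
    handle.join().unwrap();
}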

Implementations§

impl<K, V> ConcurrentLruCache<K, V, DefaultHashBuilder>
where K: Hash + Eq + Clone + Send, V: Clone + Send,

pub fn init(config: ConcurrentLruCacheConfig, hasher: Option<DefaultHashBuilder>) -> Self

Creates a new concurrent LRU cache from a configuration with an optional hasher.

This is the recommended way to create a concurrent LRU cache.

§Arguments
  • config - Configuration specifying capacity, segments, and optional size limit
  • hasher - Optional custom hash builder. If None, uses DefaultHashBuilder
§Example
use cache_rs::concurrent::ConcurrentLruCache;
use cache_rs::config::{ConcurrentLruCacheConfig, ConcurrentCacheConfig, LruCacheConfig};
use core::num::NonZeroUsize;

// Simple capacity-only cache with default segments
let config: ConcurrentLruCacheConfig = ConcurrentCacheConfig {
    base: LruCacheConfig {
        capacity: NonZeroUsize::new(10000).unwrap(),
        max_size: u64::MAX,
    },
    segments: 16,
};
let cache: ConcurrentLruCache<String, i32> = ConcurrentLruCache::init(config, None);

// With custom segments and size limit
let config: ConcurrentLruCacheConfig = ConcurrentCacheConfig {
    base: LruCacheConfig {
        capacity: NonZeroUsize::new(10000).unwrap(),
        max_size: 100 * 1024 * 1024,  // 100MB
    },
    segments: 32,
};
let cache: ConcurrentLruCache<String, Vec<u8>> = ConcurrentLruCache::init(config, None);

impl<K, V, S> ConcurrentLruCache<K, V, S>
where K: Hash + Eq + Clone + Send, V: Clone + Send, S: BuildHasher + Clone + Send,

pub fn capacity(&self) -> usize

Returns the total capacity across all segments.

pub fn segment_count(&self) -> usize

Returns the number of segments in the cache.

pub fn len(&self) -> usize

Returns the total number of entries across all segments.

Note: This acquires a lock on each segment sequentially, so the returned value may be slightly stale in high-concurrency scenarios.

pub fn is_empty(&self) -> bool

Returns true if the cache contains no entries.
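
A small usage sketch of these inspection methods (it assumes the String/i32 cache from the struct-level example above):

// Capacity and segment count are fixed at construction; len() locks
// each segment in turn, so treat it as approximate under heavy writes.
println!("capacity: {}", cache.capacity());
println!("segments: {}", cache.segment_count());
println!("entries:  {}", cache.len());
assert_eq!(cache.is_empty(), cache.len() == 0);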

pub fn get<Q>(&self, key: &Q) -> Option<V>
where K: Borrow<Q>, Q: ?Sized + Hash + Eq,

Retrieves a value from the cache.

Returns a clone of the value to avoid holding the lock. For operations that don’t need ownership, use get_with() instead.

If the key exists, it is moved to the MRU position within its segment.

§Example
let value = cache.get(&"key".to_string());

pub fn get_with<Q, F, R>(&self, key: &Q, f: F) -> Option<R>
where K: Borrow<Q>, Q: ?Sized + Hash + Eq, F: FnOnce(&V) -> R,

Retrieves a value and applies a function to it while holding the lock.

More efficient than get() when you only need to read from the value, as it avoids cloning. The lock is released after f returns.

§Type Parameters
  • F: Function that takes &V and returns R
  • R: Return type of the function
§Example
// Get length without cloning the whole string
let len = cache.get_with(&key, |value| value.len());

pub fn get_mut_with<Q, F, R>(&self, key: &Q, f: F) -> Option<R>
where K: Borrow<Q>, Q: ?Sized + Hash + Eq, F: FnOnce(&mut V) -> R,

Retrieves a mutable reference and applies a function to it.

Allows in-place modification of cached values without removing them.

§Example
// Increment a counter in-place
cache.get_mut_with(&"counter".to_string(), |value| *value += 1);

pub fn put(&self, key: K, value: V) -> Option<(K, V)>

Inserts a key-value pair into the cache.

If the key exists, the value is updated and moved to MRU position. If at capacity, the LRU entry in the target segment is evicted.

§Returns
  • Some((old_key, old_value)) if the key already existed or an LRU entry was evicted
  • None if the entry was inserted without displacing an existing one
§Example
cache.put("key".to_string(), 42);

pub fn put_with_size(&self, key: K, value: V, size: u64) -> Option<(K, V)>

Inserts a key-value pair with explicit size tracking.

Use this for size-aware caching. The size is used for max_size tracking and eviction decisions.

§Arguments
  • key - The key to insert
  • value - The value to cache
  • size - Size of this entry (in your chosen unit)
§Example
let data = vec![0u8; 1024];
cache.put_with_size("file".to_string(), data, 1024);

pub fn remove<Q>(&self, key: &Q) -> Option<V>
where K: Borrow<Q>, Q: ?Sized + Hash + Eq,

Removes a key from the cache.

§Returns
  • Some(value) if the key existed
  • None if the key was not found
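
For illustration, a sketch in the same style as the other examples (assuming the String/i32 cache from above):

cache.put("key".to_string(), 42);
assert_eq!(cache.remove(&"key".to_string()), Some(42));
assert_eq!(cache.remove(&"key".to_string()), None);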

pub fn contains_key<Q>(&self, key: &Q) -> bool
where K: Borrow<Q>, Q: ?Sized + Hash + Eq,

Checks if the cache contains a key.

Note: this updates the entry's recency (moves it to the MRU position within its segment), so it is not a side-effect-free existence check.
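
A short sketch (again assuming the String/i32 cache from above):

cache.put("key".to_string(), 42);
// This check also refreshes the entry's recency within its segment.
assert!(cache.contains_key(&"key".to_string()));
assert!(!cache.contains_key(&"missing".to_string()));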

pub fn clear(&self)

Removes all entries from all segments.

Acquires locks on each segment sequentially.

pub fn current_size(&self) -> u64

Returns the current total size across all segments.

This is the sum of all size values from put_with_size() calls.


pub fn max_size(&self) -> u64

Returns the maximum total content size across all segments.

pub fn record_miss(&self, object_size: u64)

Records a cache miss for metrics tracking.

Call this after a failed get() when you fetch from the origin.
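
As a hedged sketch of that read-through pattern (fetch_from_origin is a hypothetical stand-in for your backing store, not part of this crate):

use cache_rs::concurrent::ConcurrentLruCache;

// Hypothetical stand-in for a real backing store.
fn fetch_from_origin(key: &str) -> Vec<u8> {
    key.as_bytes().to_vec()
}

fn lookup(cache: &ConcurrentLruCache<String, Vec<u8>>, key: &str) -> Vec<u8> {
    if let Some(value) = cache.get(key) {
        return value; // hit: get() already refreshed recency
    }
    let value = fetch_from_origin(key);
    let size = value.len() as u64;
    cache.record_miss(size);                                   // track the miss
    cache.put_with_size(key.to_string(), value.clone(), size); // cache for next time
    value
}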

Trait Implementations§

impl<K, V, S> CacheMetrics for ConcurrentLruCache<K, V, S>
where K: Hash + Eq + Clone + Send, V: Clone + Send, S: BuildHasher + Clone + Send,

fn metrics(&self) -> BTreeMap<String, f64>

Returns all metrics as key-value pairs in deterministic order.

fn algorithm_name(&self) -> &'static str

Returns the algorithm name for identification.
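
For instance, a sketch of reading these (the concrete metric names are whatever the crate exposes and are not listed here):

for (name, value) in cache.metrics() {
    println!("{name}: {value}");
}
println!("algorithm: {}", cache.algorithm_name());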
impl<K, V, S> Debug for ConcurrentLruCache<K, V, S>
where K: Hash + Eq + Clone + Send, V: Clone + Send, S: BuildHasher + Clone + Send,

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<K: Send, V: Send, S: Send> Send for ConcurrentLruCache<K, V, S>

impl<K: Send, V: Send, S: Send + Sync> Sync for ConcurrentLruCache<K, V, S>

Auto Trait Implementations§

impl<K, V, S> Freeze for ConcurrentLruCache<K, V, S>
where S: Freeze,

impl<K, V, S = DefaultHashBuilder> !RefUnwindSafe for ConcurrentLruCache<K, V, S>

impl<K, V, S> Unpin for ConcurrentLruCache<K, V, S>
where S: Unpin,

impl<K, V, S> UnwindSafe for ConcurrentLruCache<K, V, S>

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.