cache_rs/concurrent.rs
//! Concurrent Cache Implementations
//!
//! This module provides thread-safe concurrent cache implementations using the
//! Shared Segment Pattern for high-performance multi-threaded access.
//!
//! # Architecture
//!
//! Each concurrent cache uses segmented storage where:
//! - The key space is partitioned across multiple segments using hash-based
//!   sharding (sketched below)
//! - Each segment is protected by its own lock (using `parking_lot::Mutex`)
//! - Operations only lock the relevant segment, allowing concurrent access to different segments
//!
//! This design provides near-linear scalability with thread count for workloads
//! with good key distribution.
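//!
//! A minimal sketch of the segment-selection step. The helper name
//! `segment_index` and the use of `DefaultHasher` are illustrative, not the
//! actual implementation:
//!
//! ```rust,ignore
//! use std::collections::hash_map::DefaultHasher;
//! use std::hash::{Hash, Hasher};
//!
//! /// Map a key to a segment index. The same key always hashes to the same
//! /// segment, so operations on a given key contend on exactly one lock.
//! fn segment_index<K: Hash>(key: &K, num_segments: usize) -> usize {
//!     let mut hasher = DefaultHasher::new();
//!     key.hash(&mut hasher);
//!     (hasher.finish() as usize) % num_segments
//! }
//! ```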
//!
//! ## Why Mutex Instead of RwLock?
//!
//! Cache algorithms like LRU, LFU, LFUDA, GDSF, and SLRU require **mutable access
//! even for read operations**. Every `get()` call must update internal state:
//!
//! - **LRU**: Moves the accessed item to the front of the recency list
//! - **LFU**: Increments the frequency counter and may move the item between frequency buckets
//! - **LFUDA**: Updates frequency and recalculates priority with the aging factor
//! - **GDSF**: Recalculates priority based on size, frequency, and cost
//! - **SLRU**: May promote items from the probationary to the protected segment
//!
//! Since `get()` is inherently a write operation, using `RwLock` would provide no
//! benefit: every access would still require an exclusive write lock. `Mutex` is
//! preferred because:
//!
//! 1. **Lower overhead**: `Mutex` has less bookkeeping than `RwLock`
//! 2. **No false promises**: Makes it clear that all operations are mutually exclusive
//! 3. **Better performance**: `parking_lot::Mutex` is highly optimized for this use case
//!
//! Concurrency is achieved through **segmentation** instead: different keys can be
//! accessed in parallel as long as they hash to different segments, as the sketch
//! below shows.
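//!
//! A sketch of the resulting `get` path (the `segments` field and the
//! `segment_index` method are illustrative names, not the actual
//! implementation). The segment `Mutex` is taken even for a read, because the
//! inner single-threaded cache's `get()` is `&mut self`:
//!
//! ```rust,ignore
//! // Inside the concurrent cache's impl block:
//! fn get(&self, key: &K) -> Option<V>
//! where
//!     V: Clone,
//! {
//!     // Only the segment that owns this key is locked; all other
//!     // segments stay available to other threads.
//!     let idx = self.segment_index(key);
//!     let mut guard = self.segments[idx].lock(); // parking_lot::Mutex
//!     guard.get(key).cloned() // inner get() mutates recency/frequency state
//! }
//! ```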
//!
//! # Available Concurrent Caches
//!
//! | Type | Description |
//! |------|-------------|
//! | [`ConcurrentLruCache`] | Thread-safe LRU cache with segmented storage |
//! | [`ConcurrentSlruCache`] | Thread-safe Segmented LRU cache |
//! | [`ConcurrentLfuCache`] | Thread-safe LFU cache |
//! | [`ConcurrentLfudaCache`] | Thread-safe LFUDA cache |
//! | [`ConcurrentGdsfCache`] | Thread-safe GDSF cache |
//!
//! # Performance Characteristics
//!
//! - **Read/Write Latency**: O(1) average case, same as the single-threaded variants
//! - **Concurrency**: Near-linear scaling up to the segment count
//! - **Memory Overhead**: one `Mutex` per segment (see [`default_segment_count`])
//!
//! # Default Segment Count
//!
//! By default, concurrent caches use `min(num_cpus * 4, 64)` segments, which provides
//! good parallelism while limiting memory overhead. You can customize this using the
//! `with_segments()` constructor.
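//!
//! For example (assuming `with_segments` takes a capacity followed by a
//! segment count; check the constructor's actual signature):
//!
//! ```rust,ignore
//! // Pin the cache to 32 segments regardless of CPU count.
//! let cache = ConcurrentLruCache::with_segments(1000, 32);
//! ```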
//!
//! # Example
//!
//! ```rust,ignore
//! use cache_rs::concurrent::ConcurrentLruCache;
//! use std::sync::Arc;
//! use std::thread;
//!
//! // Create a concurrent cache (can be shared across threads)
//! let cache = Arc::new(ConcurrentLruCache::new(1000));
//!
//! // Spawn multiple threads that access the cache concurrently
//! let handles: Vec<_> = (0..4).map(|t| {
//!     let cache = Arc::clone(&cache);
//!     thread::spawn(move || {
//!         for i in 0..1000 {
//!             let key = format!("key_{}_{}", t, i);
//!             cache.put(key.clone(), i);
//!             let _ = cache.get(&key);
//!         }
//!     })
//! }).collect();
//!
//! for handle in handles {
//!     handle.join().unwrap();
//! }
//! ```
//!
//! # Thread Safety
//!
//! All concurrent cache types implement `Send` and `Sync`, making them safe to share
//! across threads. They can be wrapped in `Arc` for shared ownership.
//!
//! # Zero-Copy Access
//!
//! For performance-critical code paths, use the `get_with()` method, which provides
//! access to the value while holding the segment lock, avoiding unnecessary cloning:
//!
//! ```rust,ignore
//! let result = cache.get_with(&key, |value| {
//!     // Process the value while the segment lock is held
//!     value.some_method()
//! });
//! ```

mod gdsf;
mod lfu;
mod lfuda;
mod lru;
mod slru;

pub use self::gdsf::ConcurrentGdsfCache;
pub use self::lfu::ConcurrentLfuCache;
pub use self::lfuda::ConcurrentLfudaCache;
pub use self::lru::ConcurrentLruCache;
pub use self::slru::ConcurrentSlruCache;

/// Returns the default number of segments based on CPU count.
///
/// This provides a good balance between parallelism and memory overhead.
/// The formula is `min(num_cpus * 4, 64)`.
#[inline]
pub fn default_segment_count() -> usize {
    // `available_parallelism` is the std equivalent of the `num_cpus` crate;
    // fall back to 4 if the CPU count cannot be determined.
    let cpus = std::thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(4);
    (cpus * 4).min(64)
}