Segmented LRU (SLRU) cache implementation.
Provides a fixed-size cache that uses a segmented approach to evict items based on their usage patterns. This is useful for scenarios where certain items are accessed more frequently than others.
SLRU is a scan-resistant cache algorithm that divides the cache into two segments: a probationary segment for new entries and a protected segment for frequently accessed entries. This design prevents one-time access patterns (scans) from evicting valuable cached items.
§How the Algorithm Works
SLRU uses a two-tier approach to distinguish between items that are accessed once (scans, sequential reads) versus items accessed repeatedly (working set).
§Segment Architecture
┌─────────────────────────────────────────────────────────────────────────────┐
│ SLRU Cache │
│ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ PROTECTED SEGMENT (20%) │ │
│ │ Frequently accessed items - harder to evict │ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │ │
│ │ │ MRU ◀──▶ [hot_1] ◀──▶ [hot_2] ◀──▶ ... ◀──▶ [demote] LRU │ │ │
│ │ └─────────────────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │ demote ▲ promote │
│ ▼ │ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ PROBATIONARY SEGMENT (80%) │ │
│ │ New items and demoted items - easier to evict │ │
│ │ ┌─────────────────────────────────────────────────────────────┐ │ │
│ │ │ MRU ◀──▶ [new_1] ◀──▶ [new_2] ◀──▶ ... ◀──▶ [evict] LRU │ │ │
│ │ └─────────────────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ ▲ │
│ │ insert │
│ new items │
└─────────────────────────────────────────────────────────────────────────────┘
§Entry Lifecycle
- Insert: New items enter the probationary segment
- First access in probationary: Item promoted to protected segment
- Protected segment full: LRU item demoted back to probationary
- Eviction: Always from the LRU end of the probationary segment
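The lifecycle above can be sketched as a toy SLRU built on `HashMap` and `VecDeque`. This is an illustrative model only: `MiniSlru` and its methods are hypothetical names, and the linear scans here are a shortcut where the real cache uses O(1) linked-list operations.

```rust
use std::collections::{HashMap, VecDeque};

// Toy SLRU illustrating the entry lifecycle (front = MRU, back = LRU).
// NOT cache_rs's implementation; linear scans keep the sketch short.
struct MiniSlru {
    protected: VecDeque<u32>,    // frequently accessed keys
    probationary: VecDeque<u32>, // new and demoted keys
    values: HashMap<u32, i32>,
    protected_cap: usize,
    total_cap: usize,
}

impl MiniSlru {
    fn new(total_cap: usize, protected_cap: usize) -> Self {
        Self {
            protected: VecDeque::new(),
            probationary: VecDeque::new(),
            values: HashMap::new(),
            protected_cap,
            total_cap,
        }
    }

    // Insert: new items always enter the probationary segment.
    fn put(&mut self, key: u32, value: i32) {
        if self.values.insert(key, value).is_some() {
            return; // existing key: value updated, position unchanged here
        }
        if self.values.len() > self.total_cap {
            // Eviction: always from the LRU end of probationary.
            if let Some(victim) = self.probationary.pop_back() {
                self.values.remove(&victim);
            }
        }
        self.probationary.push_front(key);
    }

    // Access promotes a probationary key to protected; a full protected
    // segment demotes its own LRU entry back to probationary.
    fn get(&mut self, key: u32) -> Option<i32> {
        let value = *self.values.get(&key)?;
        if let Some(pos) = self.probationary.iter().position(|&k| k == key) {
            let _ = self.probationary.remove(pos);
            self.protected.push_front(key);
            if self.protected.len() > self.protected_cap {
                if let Some(demoted) = self.protected.pop_back() {
                    self.probationary.push_front(demoted);
                }
            }
        } else if let Some(pos) = self.protected.iter().position(|&k| k == key) {
            let _ = self.protected.remove(pos); // refresh recency in protected
            self.protected.push_front(key);
        }
        Some(value)
    }
}
```

A real implementation stores each entry's segment membership alongside intrusive list links, which is how `get` avoids the scans above and stays O(1).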
§Scan Resistance Example
Initial state: Protected=[A, B, C], Probationary=[D, E, F]
(A, B, C are hot items; D, E, F are warm items)
Sequential scan of X, Y, Z (one-time access):
put(X) → Protected=[A, B, C], Probationary=[X, D, E] (F evicted)
put(Y) → Protected=[A, B, C], Probationary=[Y, X, D] (E evicted)
put(Z) → Protected=[A, B, C], Probationary=[Z, Y, X] (D evicted)
Hot items A, B, C remain in protected segment!
The scan only displaced probationary items.
§Operations
| Operation | Action | Time |
|---|---|---|
| get(key) | Promote to protected if in probationary | O(1) |
| put(key, value) | Insert to probationary, may evict | O(1) |
| remove(key) | Remove from whichever segment contains it | O(1) |
§Dual-Limit Capacity
This implementation supports two independent limits:
- max_entries: Maximum total number of items across both segments
- max_size: Maximum total size of cached content
The protected segment size is configured separately (default: 20% of total).
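Dual-limit eviction can be modeled as a loop that pops from the probationary LRU end until both constraints hold. The helper below (`evict_to_fit`) is a hypothetical sketch of that accounting, not part of cache_rs:

```rust
use std::collections::VecDeque;

// Sketch of dual-limit eviction: pop LRU entries until BOTH the entry
// count and the total content size are within their limits.
fn evict_to_fit(
    queue: &mut VecDeque<(u32, u64)>, // (key, entry_size); back = LRU
    total_size: &mut u64,
    max_entries: usize,
    max_size: u64,
) -> Vec<u32> {
    let mut evicted = Vec::new();
    while queue.len() > max_entries || *total_size > max_size {
        match queue.pop_back() {
            Some((key, size)) => {
                *total_size -= size;
                evicted.push(key);
            }
            None => break, // nothing left to evict
        }
    }
    evicted
}
```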
§Performance Characteristics
| Metric | Value |
|---|---|
| Get | O(1) |
| Put | O(1) |
| Remove | O(1) |
| Memory per entry | ~90 bytes overhead + key×2 + value |
Memory overhead includes: two list pointers, location tag, size tracking, HashMap bucket, and allocator overhead.
§When to Use SLRU
Good for:
- Mixed workloads with both hot data and sequential scans
- Database buffer pools
- File system caches
- Any scenario where scans shouldn’t evict the working set
Not ideal for:
- Pure recency-based access patterns (LRU is simpler)
- Frequency-dominant patterns (LFU/LFUDA is better)
- Very small caches where the two-segment overhead isn’t justified
§Tuning the Protected Ratio
The protected segment size controls the trade-off:
- Larger protected: More scan resistance, but new items evicted faster
- Smaller protected: Less scan resistance, but more room for new items
Default is 20% protected, which works well for most workloads.
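For illustration, a ratio can be converted to an absolute protected capacity with a small helper. This function is hypothetical: the config takes the absolute `protected_capacity` value directly, as in the examples below.

```rust
use core::num::NonZeroUsize;

// Hypothetical helper: derive protected_capacity from a percentage.
// Clamps to at least one slot so tiny caches still get a protected segment.
fn protected_capacity(total: NonZeroUsize, percent: usize) -> NonZeroUsize {
    let n = (total.get() * percent / 100).max(1);
    NonZeroUsize::new(n).unwrap()
}
```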
§Thread Safety
SlruCache is not thread-safe. For concurrent access, either:
- Wrap with Mutex or RwLock
- Use ConcurrentSlruCache (requires the concurrent feature)
§Examples
§Basic Usage
use cache_rs::SlruCache;
use cache_rs::config::SlruCacheConfig;
use core::num::NonZeroUsize;
// Total capacity 100, protected segment 20
let config = SlruCacheConfig {
capacity: NonZeroUsize::new(100).unwrap(),
protected_capacity: NonZeroUsize::new(20).unwrap(),
max_size: u64::MAX,
};
let mut cache = SlruCache::init(config, None);
cache.put("a", 1); // Enters probationary
cache.get(&"a"); // Promoted to protected!
cache.put("b", 2); // Enters probationary
assert_eq!(cache.get(&"a"), Some(&1)); // Still in protected
§Scan Resistance Demo
use cache_rs::SlruCache;
use cache_rs::config::SlruCacheConfig;
use core::num::NonZeroUsize;
let config = SlruCacheConfig {
capacity: NonZeroUsize::new(10).unwrap(),
protected_capacity: NonZeroUsize::new(3).unwrap(),
max_size: u64::MAX,
};
let mut cache: SlruCache<i32, i32> = SlruCache::init(config, None);
// Establish hot items in protected segment
for key in [1, 2, 3] {
cache.put(key, 100);
cache.get(&key); // Promote to protected
}
// Simulate a scan - these items only enter probationary
for i in 100..120 {
cache.put(i, i); // One-time insertions
}
// Hot items survive the scan!
assert!(cache.get(&1).is_some());
assert!(cache.get(&2).is_some());
assert!(cache.get(&3).is_some());
§Structs
- SlruCache - An implementation of a Segmented Least Recently Used (SLRU) cache.