Least Recently Used (LRU) cache implementation.
Provides a fixed-size cache that evicts the least recently used items when the capacity is reached.
An LRU cache evicts the least recently accessed item when capacity is reached. This implementation provides O(1) time complexity for all operations using a hash map combined with a doubly-linked list.
§How the Algorithm Works
The LRU algorithm is based on the principle of temporal locality: items accessed recently are likely to be accessed again soon. The cache maintains items ordered by their last access time.
§Data Structure
┌─────────────────────────────────────────────────────────────────┐
│ LRU Cache │
│ │
│ HashMap<K, *Node> Doubly-Linked List │
│ ┌──────────────┐ ┌──────────────────────────────┐ │
│ │ "apple" ──────────────▶ │ MRU ◀──▶ ... ◀──▶ LRU │ │
│ │ "banana" ─────────────▶ │ ▲ │ │ │
│ │ "cherry" ─────────────▶ │ │ ▼ │ │
│ └──────────────┘ │ head tail │ │
│ └──────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
- HashMap: Provides O(1) key lookup, storing pointers to list nodes
- Doubly-Linked List: Maintains access order (most recent at head, least recent at tail)
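The crate's internal layout isn't spelled out here, but a minimal sketch of this structure in Rust might look as follows, using index-based links into a Vec arena rather than raw pointers. All names below are illustrative, not the crate's actual definitions:
use std::collections::HashMap;
// Sentinel index standing in for a null link.
const NIL: usize = usize::MAX;
// Illustrative node: entries are linked in access order.
struct Node<K, V> {
    key: K,
    value: V,
    prev: usize, // neighbor toward the MRU end
    next: usize, // neighbor toward the LRU end
}
// Illustrative cache: the map yields a node index in O(1),
// and the linked list maintains recency order.
struct SketchLru<K, V> {
    map: HashMap<K, usize>, // key -> index into `nodes`
    nodes: Vec<Node<K, V>>,
    head: usize, // MRU
    tail: usize, // LRU
}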
§Operations
| Operation | Action | Time |
|---|---|---|
| get(key) | Move accessed node to head (MRU position) | O(1) |
| put(key, value) | Insert at head, evict from tail if full | O(1) |
| remove(key) | Unlink node from list, remove from map | O(1) |
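Each bound holds because an operation only rewires a constant number of links. Continuing the illustrative SketchLru layout above, the move-to-front step at the heart of get might look like this:
impl<K, V> SketchLru<K, V> {
    // Move node `i` to the head (MRU position) in O(1):
    // unlink it, then relink it at the front.
    fn move_to_front(&mut self, i: usize) {
        if self.head == i {
            return; // already the MRU entry
        }
        // Unlink `i` from its current position.
        let (prev, next) = (self.nodes[i].prev, self.nodes[i].next);
        if prev != NIL {
            self.nodes[prev].next = next;
        }
        if next != NIL {
            self.nodes[next].prev = prev;
        }
        if self.tail == i {
            self.tail = prev;
        }
        // Relink `i` at the head.
        self.nodes[i].prev = NIL;
        self.nodes[i].next = self.head;
        if self.head != NIL {
            self.nodes[self.head].prev = i;
        }
        self.head = i;
    }
}
put and remove follow the same pattern: a constant number of link updates plus one hash-map operation.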
§Eviction Example
Cache capacity: 3
put("a", 1) → [a]
put("b", 2) → [b, a]
put("c", 3) → [c, b, a]
get("a") → [a, c, b] // "a" moved to front (MRU)
put("d", 4) → [d, a, c] // "b" evicted (was LRU)§Dual-Limit Capacity
This implementation supports two independent limits:
- max_entries: Maximum number of items (bounds cache memory overhead)
- max_size: Maximum total size of content (sum of item sizes)
Eviction occurs when either limit would be exceeded.
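A minimal sketch of that eviction rule, assuming the cache tracks an entry count and a running total of recorded sizes (the names and the pop_lru helper are illustrative, not the crate's API):
// Illustrative: after an insert, pop from the LRU tail until both
// limits hold again. `pop_lru` removes the LRU entry and returns
// its recorded size.
fn evict_until_fits(
    max_entries: usize,
    max_size: u64,
    len: &mut usize,
    total_size: &mut u64,
    mut pop_lru: impl FnMut() -> u64,
) {
    while *len > max_entries || *total_size > max_size {
        let freed = pop_lru();
        *len -= 1;
        *total_size -= freed;
    }
}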
§Performance Characteristics
| Metric | Value |
|---|---|
| Get | O(1) |
| Put | O(1) |
| Remove | O(1) |
| Memory per entry | ~80 bytes overhead + key×2 + value |
Memory overhead breakdown (64-bit): list node pointers (16B), CacheEntry metadata (24B),
HashMap bucket (~24B), allocator overhead (~16B). Key is stored twice (in entry and HashMap).
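As a rough worked example under these assumptions, an entry with an 8-byte integer key and an 8-byte value costs about 80 + 2×8 + 8 = 104 bytes.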
§When to Use LRU
Good for:
- General-purpose caching with temporal locality
- Web page caches, database query caches
- Any workload where recent items are accessed again soon
Not ideal for:
- Frequency-based access patterns (use LFU instead)
- Scan-resistant workloads (use SLRU instead)
- Size-aware caching of variable-sized objects (use GDSF instead)
§Thread Safety
LruCache is not thread-safe. For concurrent access, either:
- Wrap with Mutex or RwLock
- Use ConcurrentLruCache (requires the concurrent feature)
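A minimal sketch of the Mutex approach, reusing the configuration style from the examples below (the wrapper is standard library code; only the cache API is taken from this crate):
use std::sync::{Arc, Mutex};
use cache_rs::LruCache;
use cache_rs::config::LruCacheConfig;
use core::num::NonZeroUsize;
let config = LruCacheConfig {
    capacity: NonZeroUsize::new(100).unwrap(),
    max_size: u64::MAX,
};
// Arc makes the cache shareable across threads; Mutex serializes access.
// Clones of `cache` can be moved into worker threads when the key and
// value types are Send.
let cache = Arc::new(Mutex::new(LruCache::init(config, None)));
cache.lock().unwrap().put("a", 1);
assert_eq!(cache.lock().unwrap().get(&"a"), Some(&1));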
§Examples
§Basic Usage
use cache_rs::LruCache;
use cache_rs::config::LruCacheConfig;
use core::num::NonZeroUsize;
let config = LruCacheConfig {
capacity: NonZeroUsize::new(3).unwrap(),
max_size: u64::MAX,
};
let mut cache = LruCache::init(config, None);
cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3);
assert_eq!(cache.get(&"a"), Some(&1)); // "a" is now MRU
cache.put("d", 4); // Evicts "b" (LRU)
assert_eq!(cache.get(&"b"), None);
§Size-Aware Caching
use cache_rs::LruCache;
use cache_rs::config::LruCacheConfig;
use core::num::NonZeroUsize;
// Cache with max 1000 entries and 10MB total size
let config = LruCacheConfig {
capacity: NonZeroUsize::new(1000).unwrap(),
max_size: 10 * 1024 * 1024,
};
let mut cache: LruCache<String, Vec<u8>> = LruCache::init(config, None);
let data = vec![0u8; 1024]; // 1KB
cache.put_with_size("file.bin".to_string(), data.clone(), 1024);
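Each put_with_size call counts against both limits: once another insertion would push the running total past 10MB, or the entry count past 1000, entries are evicted from the LRU end until the new item fits.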