Least Frequently Used (LFU) cache implementation.
Provides a fixed-size cache that evicts the least frequently used items when capacity is reached. Items are tracked by their access frequency.
An LFU cache evicts the least frequently accessed item when capacity is reached. This implementation tracks access frequency for each item and maintains items organized by their frequency count using a combination of hash map and frequency-indexed lists.
§How the Algorithm Works
LFU is based on the principle that items accessed more frequently in the past are more likely to be accessed again in the future. Unlike LRU which only considers recency, LFU considers the total number of accesses.
§Data Structure
┌─────────────────────────────────────────────────────────────────────────────┐
│ LFU Cache │
│ │
│ HashMap<K, *Node> BTreeMap<Frequency, List> │
│ ┌──────────────┐ ┌─────────────────────────────────────────┐ │
│ │ "hot" ──────────────────────│ freq=10: [hot] ◀──▶ [warm] │ │
│ │ "warm" ─────────────────────│ freq=5: [item_a] ◀──▶ [item_b] │ │
│ │ "cold" ─────────────────────│ freq=1: [cold] ◀──▶ [new_item] ← LFU │ │
│ └──────────────┘ └─────────────────────────────────────────┘ │
│ ▲ │
│ │ │
│ min_frequency=1 │
└─────────────────────────────────────────────────────────────────────────────┘
- HashMap: Provides O(1) key lookup, storing pointers to list nodes
- BTreeMap: Maps frequency counts to lists of items with that frequency
- min_frequency: Tracks the lowest frequency for O(1) eviction
§Operations
| Operation | Action | Time |
|---|---|---|
| get(key) | Increment frequency, move to new frequency list | O(log F)* |
| put(key, value) | Insert at frequency 1, evict lowest freq if full | O(log F)* |
| remove(key) | Remove from frequency list, update min_frequency | O(log F)* |

*Effectively O(1) since F (the number of distinct frequencies) is bounded by a small constant.
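The structure and operations above can be sketched as follows. This is a hypothetical toy, not the cache_rs internals: a HashMap gives O(1) key lookup, a BTreeMap maps each frequency to the keys currently at that frequency, and the BTreeMap's first key is always the minimum frequency, so the eviction candidate is found without scanning.

```rust
use std::collections::{BTreeMap, HashMap};
use std::hash::Hash;

// Toy LFU sketch (capacity assumed nonzero, matching the NonZeroUsize
// used by the real config). Not the cache_rs implementation.
struct ToyLfu<K: Hash + Eq + Clone, V> {
    capacity: usize,
    entries: HashMap<K, (V, u64)>,  // key -> (value, frequency)
    by_freq: BTreeMap<u64, Vec<K>>, // frequency -> keys (front = oldest)
}

impl<K: Hash + Eq + Clone, V> ToyLfu<K, V> {
    fn new(capacity: usize) -> Self {
        ToyLfu { capacity, entries: HashMap::new(), by_freq: BTreeMap::new() }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        let old = {
            let (_, freq) = self.entries.get_mut(key)?;
            *freq += 1;
            *freq - 1
        };
        // Move the key from its old frequency bucket to the next one.
        let bucket = self.by_freq.get_mut(&old).unwrap();
        bucket.retain(|k| k != key);
        if bucket.is_empty() {
            self.by_freq.remove(&old);
        }
        self.by_freq.entry(old + 1).or_default().push(key.clone());
        self.entries.get(key).map(|(v, _)| v)
    }

    fn put(&mut self, key: K, value: V) {
        if let Some(entry) = self.entries.get_mut(&key) {
            entry.0 = value; // update in place; frequency bumps happen on get
            return;
        }
        if self.entries.len() == self.capacity {
            // Evict the oldest key in the lowest-frequency bucket.
            let min_freq = *self.by_freq.keys().next().unwrap();
            let bucket = self.by_freq.get_mut(&min_freq).unwrap();
            let victim = bucket.remove(0);
            if bucket.is_empty() {
                self.by_freq.remove(&min_freq);
            }
            self.entries.remove(&victim);
        }
        self.entries.insert(key.clone(), (value, 1));
        self.by_freq.entry(1).or_default().push(key); // new items start at freq 1
    }
}

fn main() {
    let mut cache = ToyLfu::new(3);
    cache.put("a", 1);
    cache.put("b", 2);
    cache.put("c", 3);
    cache.get(&"a"); // "a" climbs to frequency 3 after two gets
    cache.get(&"a");
    cache.put("d", 4); // evicts "b": oldest key at the minimum frequency 1
    assert!(cache.get(&"b").is_none());
    assert_eq!(cache.get(&"a"), Some(&1));
}
```

The sketch uses a Vec per frequency bucket for brevity; a doubly linked list (as in the diagram above) makes the move-between-buckets step O(1) instead of O(bucket length).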
§Access Pattern Example
Cache capacity: 3
put("a", 1) → freq_1: [a]
put("b", 2) → freq_1: [b, a]
put("c", 3) → freq_1: [c, b, a]
get("a") → freq_1: [c, b], freq_2: [a]
get("a") → freq_1: [c, b], freq_3: [a]
put("d", 4) → freq_1: [d, c], freq_3: [a]   // "b" evicted (LFU at freq_1)
§Dual-Limit Capacity
This implementation supports two independent limits:
- max_entries: Maximum number of items (bounds entry count)
- max_size: Maximum total size of content (sum of item sizes)
Eviction occurs when either limit would be exceeded.
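The dual-limit check can be sketched like this. The `admit` function and the tuple representation are hypothetical (not the cache_rs internals): before admitting a new entry, it evicts the lowest-frequency entry until both the entry-count limit and the total-size limit hold.

```rust
// Entries modeled as (key, frequency, size) tuples for illustration.
fn admit(
    entries: &mut Vec<(&'static str, u64, u64)>,
    max_entries: usize,
    max_size: u64,
    new_key: &'static str,
    new_size: u64,
) {
    // Evict while admitting the new entry would exceed either limit.
    while entries.len() + 1 > max_entries
        || entries.iter().map(|e| e.2).sum::<u64>() + new_size > max_size
    {
        // Evict the entry with the lowest frequency.
        let idx = entries
            .iter()
            .enumerate()
            .min_by_key(|(_, e)| e.1)
            .map(|(i, _)| i)
            .expect("limits too small for the new entry");
        entries.remove(idx);
    }
    entries.push((new_key, 1, new_size)); // new entries start at frequency 1
}

fn main() {
    // The entry limit (10) is generous, but the 1000-byte size limit forces
    // two evictions before the 500-byte entry fits.
    let mut entries = vec![("a", 5, 400), ("b", 1, 400), ("c", 3, 300)];
    admit(&mut entries, 10, 1000, "d", 500);
    assert_eq!(entries, vec![("a", 5, 400), ("d", 1, 500)]);
}
```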
§Performance Characteristics
| Metric | Value |
|---|---|
| Get | O(log F) amortized, effectively O(1) |
| Put | O(log F) amortized, effectively O(1) |
| Remove | O(log F) amortized, effectively O(1) |
| Memory per entry | ~100 bytes overhead + 2 × key (stored in both maps) + value |
Where F = the number of distinct frequency values. Since frequencies are small integers (1, 2, 3, …), F is typically bounded by a small constant (< 100 in practice), making operations effectively O(1).
Memory overhead includes: list node pointers (16B), CacheEntry metadata (32B),
frequency metadata (8B), HashMap bucket (~24B), BTreeMap overhead (~16B).
§When to Use LFU
Good for:
- Workloads with stable popularity patterns (some items are consistently popular)
- Database query caches where certain queries are repeated frequently
- CDN edge caches with predictable content popularity
- Scenarios requiring excellent scan resistance
Not ideal for:
- Recency-based access patterns (use LRU instead)
- Workloads where popularity changes over time (use LFUDA instead)
- Short-lived caches where frequency counts don’t accumulate meaningfully
- “Cache pollution” scenarios where old popular items block new ones
§The Cache Pollution Problem
A limitation of pure LFU: items that were popular in the past but are no longer
accessed can persist in the cache indefinitely due to high frequency counts.
For long-running caches with changing popularity, consider LfudaCache
which addresses this with dynamic aging.
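A toy illustration of the pollution effect (the frequencies here are hypothetical, not cache_rs internals): pure LFU always evicts the minimum-frequency key, so an item that accumulated a high count long ago is never the eviction candidate, even when only the fresher items are being accessed now.

```rust
use std::collections::HashMap;

// Pick the eviction candidate the way pure LFU does: minimum frequency.
fn eviction_candidate(freqs: &HashMap<&'static str, u64>) -> &'static str {
    freqs
        .iter()
        .min_by_key(|(_, f)| **f)
        .map(|(k, _)| *k)
        .expect("cache is empty")
}

fn main() {
    let mut freqs = HashMap::new();
    freqs.insert("stale_but_hot", 100); // heavily accessed last week, idle since
    freqs.insert("fresh_a", 3);         // the items actually in use today
    freqs.insert("fresh_b", 2);
    // Pure LFU evicts a fresh item; the stale entry keeps its slot.
    assert_eq!(eviction_candidate(&freqs), "fresh_b");
}
```

LFUDA-style dynamic aging counters this by inflating the priority of newly inserted items so stale high-frequency entries eventually become evictable.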
§Thread Safety
LfuCache is not thread-safe. For concurrent access, either:
- Wrap with Mutex or RwLock
- Use ConcurrentLfuCache (requires the concurrent feature)
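A sketch of the Mutex-wrapping option. A HashMap stands in for LfuCache so the example compiles standalone; the pattern for the real type is the same: share an Arc<Mutex<LfuCache<K, V>>> across threads and take the lock around each get/put.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

fn run_workers() -> HashMap<&'static str, usize> {
    // HashMap stands in for the cache type here.
    let cache = Arc::new(Mutex::new(HashMap::new()));
    let handles: Vec<_> = (0..4usize)
        .map(|i| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                // lock().unwrap() holds the mutex only for this one call.
                cache.lock().unwrap().insert("last_writer", i);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // All threads have joined; recover the inner map.
    Arc::try_unwrap(cache).unwrap().into_inner().unwrap()
}

fn main() {
    let map = run_workers();
    assert!(map.contains_key("last_writer"));
    assert!(map["last_writer"] < 4);
}
```

RwLock is the better fit only if gets vastly outnumber puts; note that even `get` mutates LFU state (it bumps a frequency), so a plain RwLock read guard is not sufficient for it.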
§Examples
§Basic Usage
use cache_rs::LfuCache;
use cache_rs::config::LfuCacheConfig;
use core::num::NonZeroUsize;
let config = LfuCacheConfig {
capacity: NonZeroUsize::new(3).unwrap(),
max_size: u64::MAX,
};
let mut cache = LfuCache::init(config, None);
cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3);
// Access "a" multiple times - increases its frequency
assert_eq!(cache.get(&"a"), Some(&1));
assert_eq!(cache.get(&"a"), Some(&1));
// Add new item - "b" or "c" evicted (both at frequency 1)
cache.put("d", 4);
// "a" survives due to higher frequency
assert_eq!(cache.get(&"a"), Some(&1));
§Size-Aware Caching
use cache_rs::LfuCache;
use cache_rs::config::LfuCacheConfig;
use core::num::NonZeroUsize;
// Cache with max 1000 entries and 10MB total size
let config = LfuCacheConfig {
capacity: NonZeroUsize::new(1000).unwrap(),
max_size: 10 * 1024 * 1024,
};
let mut cache: LfuCache<String, Vec<u8>> = LfuCache::init(config, None);
let data = vec![0u8; 1024]; // 1KB
cache.put_with_size("file.bin".to_string(), data.clone(), 1024);
§Structs
- LfuCache
- An implementation of a Least Frequently Used (LFU) cache.