§Cache Trait Hierarchy
This module defines the trait hierarchy for the cache subsystem, providing a unified
interface for different cache eviction policies (FIFO, LRU, LFU, LRU-K) while ensuring
type safety and policy-appropriate operation sets.
§Architecture
┌─────────────────────────────────────────┐
│ CoreCache<K, V> │
│ │
│ insert(&mut, K, V) → Option<V> │
│ get(&mut, &K) → Option<&V> │
│ contains(&, &K) → bool │
│ len(&) → usize │
│ is_empty(&) → bool │
│ capacity(&) → usize │
│ clear(&mut) │
└──────────────────┬──────────────────────┘
│
┌───────────────────────────┼───────────────────────────┐
│ │ │
▼ ▼ ▼
┌────────────────────────────┐ ┌─────────────────────────┐
│ FIFOCacheTrait<K, V> │ │ MutableCache<K, V> │
│ │ │ │
│ pop_oldest() → (K, V) │ │ remove(&K) → Option<V> │
│ peek_oldest() → (&K, &V) │ │ remove_batch(&[K]) │
│ pop_oldest_batch(n) │ │ │
│ age_rank(&K) → usize │ └───────────┬─────────────┘
│ │ │
│ No arbitrary removal! │ │
└────────────────────────────┘ │
┌───────────┴───────────┐
│ │
▼ ▼
┌──────────────────────────┐ ┌────────────────────────┐
│ LRUCacheTrait<K, V> │ │ LFUCacheTrait<K, V> │
│ │ │ │
│ pop_lru() → (K, V) │ │ pop_lfu() → (K, V) │
│ peek_lru() → (&K, &V) │ │ peek_lfu() → (&K, &V) │
│ touch(&K) → bool │ │ frequency(&K) → u64 │
│ recency_rank(&K) │ │ reset_frequency(&K) │
│ │ │ increment_frequency() │
└──────────────────────────┘ └────────────────────────┘
│
▼
┌──────────────────────────┐
│ LRUKCacheTrait<K, V> │
│ │
│ pop_lru_k() → (K, V) │
│ peek_lru_k() → (&K,&V) │
│ k_value() → usize │
│ access_history(&K) │
│ access_count(&K) │
│ k_distance(&K) → u64 │
│ touch(&K) → bool │
│ k_distance_rank(&K) │
└──────────────────────────┘
§Trait Design Philosophy
┌──────────────────────────────────────────────────────────────────────────┐
│ TRAIT HIERARCHY DESIGN │
│ │
│ 1. CoreCache: Universal operations ALL caches must support │
│ └── insert, get, contains, len, capacity, clear │
│ │
│ 2. MutableCache: Adds arbitrary key-based removal │
│ └── remove(&K) - NOT suitable for FIFO (breaks insertion order) │
│ │
│ 3. Policy-Specific Traits: Add policy-appropriate eviction │
│ ├── FIFO: pop_oldest (no arbitrary removal!) │
│ ├── LRU: pop_lru + touch (recency-based) │
│ ├── LFU: pop_lfu + frequency (frequency-based) │
│ └── LRU-K: pop_lru_k + k_distance (scan-resistant) │
│ │
│ Key Insight: FIFO extends CoreCache directly (NOT MutableCache) │
│ because arbitrary removal would violate FIFO semantics. │
└──────────────────────────────────────────────────────────────────────────┘
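The layering above can be sketched in code. This is a minimal sketch, assuming the signatures shown in the diagram; the real trait definitions may carry additional bounds, methods, or error handling. `MapCache` is a toy backing store invented here purely to show the traits compose.

```rust
use std::collections::HashMap;
use std::hash::Hash;

// 1. Universal operations every cache must support.
trait CoreCache<K, V> {
    fn insert(&mut self, key: K, value: V) -> Option<V>;
    fn get(&mut self, key: &K) -> Option<&V>;
    fn contains(&self, key: &K) -> bool;
    fn len(&self) -> usize;
    fn capacity(&self) -> usize;
    fn clear(&mut self);
    // Default implementation expressed in terms of len().
    fn is_empty(&self) -> bool {
        self.len() == 0
    }
}

// 2. Arbitrary key-based removal -- deliberately NOT implemented by FIFO.
trait MutableCache<K, V>: CoreCache<K, V> {
    fn remove(&mut self, key: &K) -> Option<V>;
}

// Toy HashMap-backed implementation (hypothetical), just to show
// that the two layers compile and compose.
struct MapCache<K, V> {
    map: HashMap<K, V>,
    capacity: usize,
}

impl<K: Eq + Hash, V> CoreCache<K, V> for MapCache<K, V> {
    fn insert(&mut self, key: K, value: V) -> Option<V> {
        self.map.insert(key, value)
    }
    fn get(&mut self, key: &K) -> Option<&V> {
        self.map.get(key)
    }
    fn contains(&self, key: &K) -> bool {
        self.map.contains_key(key)
    }
    fn len(&self) -> usize {
        self.map.len()
    }
    fn capacity(&self) -> usize {
        self.capacity
    }
    fn clear(&mut self) {
        self.map.clear();
    }
}

impl<K: Eq + Hash, V> MutableCache<K, V> for MapCache<K, V> {
    fn remove(&mut self, key: &K) -> Option<V> {
        self.map.remove(key)
    }
}
```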
§Trait Summary
| Trait            | Extends      | Purpose                               |
| CoreCache        | -            | Universal cache operations            |
| MutableCache     | CoreCache    | Adds arbitrary key removal            |
| FIFOCacheTrait   | CoreCache    | FIFO-specific (no remove!)            |
| LRUCacheTrait    | MutableCache | LRU-specific with recency tracking    |
| LFUCacheTrait    | MutableCache | LFU-specific with frequency tracking  |
| LRUKCacheTrait   | MutableCache | LRU-K with K-distance tracking        |
| ConcurrentCache  | Send + Sync  | Marker for thread-safe caches         |
| CacheTierManager | -            | Multi-tier cache management           |
| CacheFactory     | -            | Cache instance creation               |
| AsyncCacheFuture | Send + Sync  | Future async operation support        |
§Why FIFO Doesn’t Extend MutableCache
FIFO Cache Semantics:
═══════════════════════════════════════════════════════════════════════════
VecDeque: [A] ─ [B] ─ [C] ─ [D]
↑ ↑
oldest newest
If we allowed remove(&B):
    VecDeque: [A] ─ [C] ─ [D]   ← Relative order survives, but...
Problem: arbitrary removal from a VecDeque requires either an O(n)
shift of later entries or a lazy "tombstone" that leaves a stale
slot behind:
- Stale entries accumulate
- age_rank() becomes O(n), scanning past invalid entries
- FIFO semantics become muddled
Solution: FIFOCacheTrait extends CoreCache directly, ensuring
only FIFO-appropriate operations are available.
═══════════════════════════════════════════════════════════════════════════
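The restriction above can be made concrete. Below is a minimal FIFO sketch (an assumed shape; the real FIFOCacheTrait layers these methods over CoreCache). The key point is the API surface: there is no `remove(&K)`, so entries only ever leave from the oldest end and the deque always reflects true insertion order.

```rust
use std::collections::VecDeque;

// Toy FIFO cache: front = oldest, back = newest.
struct FifoCache<K, V> {
    queue: VecDeque<(K, V)>,
    capacity: usize,
}

impl<K: PartialEq, V> FifoCache<K, V> {
    fn new(capacity: usize) -> Self {
        Self { queue: VecDeque::new(), capacity }
    }

    // Inserting at capacity evicts the oldest entry, never an arbitrary one.
    fn insert(&mut self, key: K, value: V) -> Option<(K, V)> {
        let evicted = if self.queue.len() == self.capacity {
            self.queue.pop_front()
        } else {
            None
        };
        self.queue.push_back((key, value));
        evicted
    }

    // pop_oldest(): the only explicit way an entry leaves the cache.
    fn pop_oldest(&mut self) -> Option<(K, V)> {
        self.queue.pop_front()
    }

    // age_rank(): position from the oldest end. A linear scan here for
    // brevity; the point is that without tombstones from arbitrary
    // removal, every slot it scans is a live entry.
    fn age_rank(&self, key: &K) -> Option<usize> {
        self.queue.iter().position(|(k, _)| k == key)
    }
}
```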
§Policy Comparison
| Policy | Eviction Basis | Supports Remove | Best For |
| FIFO | Insertion order | ❌ No | Predictable eviction |
| LRU | Last access time | ✅ Yes | Temporal locality |
| LFU | Access frequency | ✅ Yes | Stable hot spots |
| LRU-K | K-th access time | ✅ Yes | Scan resistance |
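The first two rows of the table can be contrasted with a small, deliberately simplified model (invented here for illustration, not part of the module): for the same access trace, LRU picks its victim by recency while LFU picks by frequency, and they disagree.

```rust
use std::collections::HashMap;

// LRU model: keep keys ordered by recency; the victim is the least
// recently used key (front of the order list).
fn lru_victim<'a>(accesses: &[&'a str]) -> &'a str {
    let mut order: Vec<&str> = Vec::new(); // back = most recent
    for &k in accesses {
        order.retain(|&x| x != k);
        order.push(k);
    }
    order[0]
}

// LFU model: count accesses per key; the victim is the key with the
// smallest count (ties broken by first appearance, for determinism).
fn lfu_victim<'a>(accesses: &[&'a str]) -> &'a str {
    let mut counts: HashMap<&str, u64> = HashMap::new();
    let mut first_seen: Vec<&str> = Vec::new();
    for &k in accesses {
        *counts.entry(k).or_insert(0) += 1;
        if !first_seen.contains(&k) {
            first_seen.push(k);
        }
    }
    first_seen.iter().min_by_key(|k| counts[*k]).unwrap()
}
```

For the trace `a, a, a, b, c`, LRU evicts `a` (stale despite being frequent), while LFU evicts `b` (rare despite being recent) — the temporal-locality versus stable-hot-spot trade-off from the table.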
§Utility Traits
┌─────────────────────────────────────────────────────────────────────────┐
│ ConcurrentCache │
│ │
│ Marker trait: Send + Sync │
│ Purpose: Guarantee thread-safe cache implementations │
│ Usage: fn use_cache<C: CoreCache<K, V> + ConcurrentCache>(c: &C) │
└─────────────────────────────────────────────────────────────────────────┘
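The marker-trait pattern above can be sketched as follows. `SharedCounter` and `spawn_writer` are hypothetical names invented for this example; the point is that `ConcurrentCache` adds no methods, it only lets function signatures demand a thread-safe implementation.

```rust
use std::sync::{Arc, Mutex};

// Marker trait: no methods, just a Send + Sync certification.
trait ConcurrentCache: Send + Sync {}

// Toy thread-safe store standing in for a real concurrent cache.
struct SharedCounter {
    inner: Mutex<u64>,
}

// Opting in: SharedCounter is Send + Sync (Mutex provides both),
// so this impl compiles; a Rc- or RefCell-based cache would not.
impl ConcurrentCache for SharedCounter {}

// Callers can require thread safety in the bound itself.
fn spawn_writer<C: ConcurrentCache + 'static>(
    cache: Arc<C>,
    f: impl Fn(&C) + Send + 'static,
) -> std::thread::JoinHandle<()> {
    std::thread::spawn(move || f(&cache))
}
```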
§Cache Tier Management
┌─────────────────────────────────────────────────────────────────────────┐
│ Three-Tier Cache Architecture │
│ │
│ ┌──────────────┐ promote() ┌──────────────┐ promote() │
│ │ Cold Tier │ ───────────────►│ Warm Tier │───────────────► │
│ │ │ │ │ │
│ │ FIFOCache │◄─────────────── │ LFUCache │◄─────────────── │
│ │ │ demote() │ │ demote() │
│ └──────────────┘ └──────────────┘ │
│ │ │ │
│ │ │ ┌──────────────┐ │
│ │ │ │ Hot Tier │ │
│ │ └─────────►│ │ │
│ │ │ LRUCache │ │
│ └──────────────────────────────────────────►│ │ │
│ └──────────────┘ │
│ │
│ locate_key(&K) → CacheTier Which tier has this key? │
│ evict_from_tier(T) → (K, V) Force eviction from tier │
└─────────────────────────────────────────────────────────────────────────┘
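The tier-management surface sketched in the diagram might look like the trait below. These signatures are assumptions inferred from the diagram (`locate_key`, `promote`, `demote`, `evict_from_tier`); `ToyTiers` is a deliberately naive Vec-per-tier implementation added only so the trait can be exercised.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum CacheTier {
    Cold, // FIFO: new, unproven entries
    Warm, // LFU: entries with demonstrated reuse
    Hot,  // LRU: entries in the active working set
}

trait CacheTierManager<K, V> {
    // Which tier currently holds this key, if any?
    fn locate_key(&self, key: &K) -> Option<CacheTier>;
    // Move a key one tier up (Cold -> Warm -> Hot). Returns false if absent or already Hot.
    fn promote(&mut self, key: &K) -> bool;
    // Move a key one tier down (Hot -> Warm -> Cold). Returns false if absent or already Cold.
    fn demote(&mut self, key: &K) -> bool;
    // Force an eviction from the given tier.
    fn evict_from_tier(&mut self, tier: CacheTier) -> Option<(K, V)>;
}

// Naive manager: one Vec per tier, index 0 = next eviction candidate.
struct ToyTiers<K, V> {
    tiers: [Vec<(K, V)>; 3], // indexed Cold=0, Warm=1, Hot=2
}

impl<K: PartialEq, V> ToyTiers<K, V> {
    fn tier_index(&self, key: &K) -> Option<(usize, usize)> {
        for (t, v) in self.tiers.iter().enumerate() {
            if let Some(i) = v.iter().position(|(k, _)| k == key) {
                return Some((t, i));
            }
        }
        None
    }
}

impl<K: PartialEq, V> CacheTierManager<K, V> for ToyTiers<K, V> {
    fn locate_key(&self, key: &K) -> Option<CacheTier> {
        self.tier_index(key)
            .map(|(t, _)| [CacheTier::Cold, CacheTier::Warm, CacheTier::Hot][t])
    }
    fn promote(&mut self, key: &K) -> bool {
        match self.tier_index(key) {
            Some((t, i)) if t < 2 => {
                let entry = self.tiers[t].remove(i);
                self.tiers[t + 1].push(entry);
                true
            }
            _ => false,
        }
    }
    fn demote(&mut self, key: &K) -> bool {
        match self.tier_index(key) {
            Some((t, i)) if t > 0 => {
                let entry = self.tiers[t].remove(i);
                self.tiers[t - 1].push(entry);
                true
            }
            _ => false,
        }
    }
    fn evict_from_tier(&mut self, tier: CacheTier) -> Option<(K, V)> {
        let t = tier as usize;
        if self.tiers[t].is_empty() {
            None
        } else {
            Some(self.tiers[t].remove(0))
        }
    }
}
```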
§CacheConfig
| Field           | Type  | Default | Description                      |
| capacity        | usize | 1000    | Maximum entries                  |
| enable_stats    | bool  | false   | Enable hit/miss tracking         |
| prealloc_memory | bool  | true    | Pre-allocate memory for capacity |
| thread_safe     | bool  | false   | Use internal synchronization     |
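A sketch of the struct matching the table above, with the documented defaults encoded in a `Default` impl; the real definition may include more fields or builder methods.

```rust
#[derive(Debug, Clone, PartialEq)]
struct CacheConfig {
    /// Maximum entries.
    capacity: usize,
    /// Enable hit/miss tracking.
    enable_stats: bool,
    /// Pre-allocate memory for `capacity` entries.
    prealloc_memory: bool,
    /// Use internal synchronization.
    thread_safe: bool,
}

impl Default for CacheConfig {
    fn default() -> Self {
        Self {
            capacity: 1000,
            enable_stats: false,
            prealloc_memory: true,
            thread_safe: false,
        }
    }
}
```

Struct-update syntax then lets callers override a single field: `CacheConfig { capacity: 10_000, ..CacheConfig::default() }`.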
§Example Usage
use crate::storage::disk::async_disk::cache::cache_traits::{
CoreCache, MutableCache, FIFOCacheTrait, LRUCacheTrait, LFUCacheTrait,
};
fn warm_cache<C: CoreCache<u64, Vec<u8>>>(cache: &mut C, data: &[(u64, Vec<u8>)]) {
for (key, value) in data {
cache.insert(*key, value.clone());
}
}
fn invalidate_keys<C: MutableCache<u64, Vec<u8>>>(cache: &mut C, keys: &[u64]) {
for key in keys {
cache.remove(key);
}
}
fn evict_oldest_batch<C: FIFOCacheTrait<u64, Vec<u8>>>(
cache: &mut C,
count: usize,
) -> Vec<(u64, Vec<u8>)> {
cache.pop_oldest_batch(count)
}
fn touch_hot_keys<C: LRUCacheTrait<u64, Vec<u8>>>(cache: &mut C, keys: &[u64]) {
for key in keys {
cache.touch(key);
}
}
fn boost_key_priority<C: LFUCacheTrait<u64, Vec<u8>>>(cache: &mut C, key: &u64) {
cache.increment_frequency(key);
}
use std::sync::Arc;
use crate::storage::disk::async_disk::cache::lru::ConcurrentLRUCache;
let shared_cache = Arc::new(ConcurrentLRUCache::<u64, Vec<u8>>::new(1000));
let cache_clone = shared_cache.clone();
std::thread::spawn(move || {
cache_clone.insert(42, vec![1, 2, 3]);
});
§Thread Safety
- Individual cache implementations are NOT thread-safe by default
- Use the ConcurrentCache marker trait to identify thread-safe implementations
- Wrap non-concurrent caches in Arc<RwLock<C>> for shared access
- Some implementations (e.g., ConcurrentLRUCache) provide built-in concurrency
§Implementation Notes
- Trait Bounds: CoreCache places no bounds on K, V; implementations add bounds as needed
- Default Implementations: is_empty(), total_misses(), remove_batch(), pop_oldest_batch()
- Batch Operations: default implementations loop over the single-item operations
- Async Support: AsyncCacheFuture is prepared for Phase 2 async-trait integration
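The batch-default pattern from the notes can be sketched as follows: the batch method has a default body that loops over the single-item operation, so implementors only write the latter. This is an assumed shape (`Removable` and `VecStore` are invented for the example), not the module's real trait.

```rust
trait Removable<K, V> {
    fn remove(&mut self, key: &K) -> Option<V>;

    // Default: loop over the single-item operation. Implementations
    // with a cheaper bulk path can override this.
    fn remove_batch(&mut self, keys: &[K]) -> Vec<V> {
        keys.iter().filter_map(|k| self.remove(k)).collect()
    }
}

// Toy store that only implements the single-item method and inherits
// remove_batch() from the default.
struct VecStore<V> {
    items: Vec<(u64, V)>,
}

impl<V> Removable<u64, V> for VecStore<V> {
    fn remove(&mut self, key: &u64) -> Option<V> {
        let i = self.items.iter().position(|(k, _)| k == key)?;
        Some(self.items.remove(i).1)
    }
}
```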