Module traits

§Cache Trait Hierarchy

This module defines the trait hierarchy for the cache subsystem, providing a unified interface for different cache eviction policies (FIFO, LRU, LFU, LRU-K) while ensuring type safety and policy-appropriate operation sets.

§Architecture

                            ┌─────────────────────────────────────────┐
                            │            CoreCache<K, V>              │
                            │                                         │
                            │  insert(&mut, K, V) → Option<V>         │
                            │  get(&mut, &K) → Option<&V>             │
                            │  contains(&, &K) → bool                 │
                            │  len(&) → usize                         │
                            │  is_empty(&) → bool                     │
                            │  capacity(&) → usize                    │
                            │  clear(&mut)                            │
                            └──────────────────┬──────────────────────┘
                                               │
                   ┌───────────────────────────┴───────────────────────────┐
                   │                                                       │
                   ▼                                                       ▼
  ┌────────────────────────────┐                      ┌─────────────────────────┐
  │   FIFOCacheTrait<K, V>     │                      │   MutableCache<K, V>    │
  │                            │                      │                         │
  │  pop_oldest() → (K, V)     │                      │  remove(&K) → Option<V> │
  │  peek_oldest() → (&K, &V)  │                      │  remove_batch(&[K])     │
  │  pop_oldest_batch(n)       │                      │                         │
  │  age_rank(&K) → usize      │                      └────────────┬────────────┘
  │                            │                                   │
  │  No arbitrary removal!     │                                   │
  └────────────────────────────┘                                   │
                   ┌──────────────────────────┬────────────────────┴─────┐
                   │                          │                          │
                   ▼                          ▼                          ▼
  ┌────────────────────────┐  ┌────────────────────────┐  ┌────────────────────────┐
  │  LRUCacheTrait<K, V>   │  │  LFUCacheTrait<K, V>   │  │  LRUKCacheTrait<K, V>  │
  │                        │  │                        │  │                        │
  │  pop_lru() → (K, V)    │  │  pop_lfu() → (K, V)    │  │  pop_lru_k() → (K, V)  │
  │  peek_lru() → (&K, &V) │  │  peek_lfu() → (&K, &V) │  │  peek_lru_k() → (&K,&V)│
  │  touch(&K) → bool      │  │  frequency(&K) → u64   │  │  k_value() → usize     │
  │  recency_rank(&K)      │  │  reset_frequency(&K)   │  │  access_history(&K)    │
  │                        │  │  increment_frequency() │  │  access_count(&K)      │
  │                        │  │                        │  │  k_distance(&K) → u64  │
  │                        │  │                        │  │  touch(&K) → bool      │
  │                        │  │                        │  │  k_distance_rank(&K)   │
  └────────────────────────┘  └────────────────────────┘  └────────────────────────┘

§Trait Design Philosophy

  ┌──────────────────────────────────────────────────────────────────────────┐
  │                         TRAIT HIERARCHY DESIGN                           │
  │                                                                          │
  │   1. CoreCache: Universal operations ALL caches must support             │
  │      └── insert, get, contains, len, capacity, clear                     │
  │                                                                          │
  │   2. MutableCache: Adds arbitrary key-based removal                      │
  │      └── remove(&K) - NOT suitable for FIFO (breaks insertion order)     │
  │                                                                          │
  │   3. Policy-Specific Traits: Add policy-appropriate eviction             │
  │      ├── FIFO: pop_oldest (no arbitrary removal!)                        │
  │      ├── LRU:  pop_lru + touch (recency-based)                           │
  │      ├── LFU:  pop_lfu + frequency (frequency-based)                     │
  │      └── LRU-K: pop_lru_k + k_distance (scan-resistant)                  │
  │                                                                          │
  │   Key Insight: FIFO extends CoreCache directly (NOT MutableCache)        │
  │   because arbitrary removal would violate FIFO semantics.                │
  └──────────────────────────────────────────────────────────────────────────┘
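A minimal sketch of how this hierarchy might be declared, with signatures simplified from the diagram above (the module's actual definitions may differ). The toy `MapCache` type is purely illustrative, added here to exercise the traits:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Core operations every cache supports; is_empty() is a provided method
// in terms of len(), as noted under "Implementation Notes".
pub trait CoreCache<K, V> {
    fn insert(&mut self, key: K, value: V) -> Option<V>;
    fn get(&mut self, key: &K) -> Option<&V>;
    fn contains(&self, key: &K) -> bool;
    fn len(&self) -> usize;
    fn capacity(&self) -> usize;
    fn clear(&mut self);

    fn is_empty(&self) -> bool {
        self.len() == 0
    }
}

// Arbitrary removal: suitable for LRU/LFU/LRU-K, deliberately NOT for FIFO.
pub trait MutableCache<K, V>: CoreCache<K, V> {
    fn remove(&mut self, key: &K) -> Option<V>;
}

// Toy HashMap-backed cache (no eviction policy) to exercise the traits.
pub struct MapCache<K, V> {
    map: HashMap<K, V>,
    cap: usize,
}

impl<K, V> MapCache<K, V> {
    pub fn new(cap: usize) -> Self {
        Self { map: HashMap::new(), cap }
    }
}

impl<K: Hash + Eq, V> CoreCache<K, V> for MapCache<K, V> {
    fn insert(&mut self, key: K, value: V) -> Option<V> { self.map.insert(key, value) }
    fn get(&mut self, key: &K) -> Option<&V> { self.map.get(key) }
    fn contains(&self, key: &K) -> bool { self.map.contains_key(key) }
    fn len(&self) -> usize { self.map.len() }
    fn capacity(&self) -> usize { self.cap }
    fn clear(&mut self) { self.map.clear(); }
}

impl<K: Hash + Eq, V> MutableCache<K, V> for MapCache<K, V> {
    fn remove(&mut self, key: &K) -> Option<V> { self.map.remove(key) }
}
```

Because `MutableCache` is a supertrait extension, any function bounded on `CoreCache` also accepts a `MutableCache` implementor, but not vice versa.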

§Trait Summary

  Trait              Extends        Purpose
  ────────────────────────────────────────────────────────────────────
  CoreCache          -              Universal cache operations
  MutableCache       CoreCache      Adds arbitrary key removal
  FIFOCacheTrait     CoreCache      FIFO-specific (no remove!)
  LRUCacheTrait      MutableCache   LRU-specific with recency tracking
  LFUCacheTrait      MutableCache   LFU-specific with frequency tracking
  LRUKCacheTrait     MutableCache   LRU-K with K-distance tracking
  ConcurrentCache    Send + Sync    Marker for thread-safe caches
  ────────────────────────────────────────────────────────────────────
  CacheTierManager   -              Multi-tier cache management
  CacheFactory       -              Cache instance creation
  AsyncCacheFuture   Send + Sync    Future async operation support

§Why FIFO Doesn’t Extend MutableCache

  FIFO Cache Semantics:
  ═══════════════════════════════════════════════════════════════════════════

    VecDeque: [A] ─ [B] ─ [C] ─ [D]
              ↑                 ↑
            oldest           newest

  If we allowed remove(&B), there would be two bad options:

    1. Eagerly splice B out of the VecDeque:
         VecDeque: [A] ─ [C] ─ [D]   ← Order intact, but every remove
                                       is an O(n) scan of the queue.

    2. Lazily tombstone B and skip it at eviction time:
         - Stale entries accumulate in the queue
         - age_rank() becomes O(n), scanning past dead entries
         - FIFO semantics become muddled

  Solution: FIFOCacheTrait extends CoreCache directly, ensuring
  only FIFO-appropriate operations are available.

  ═══════════════════════════════════════════════════════════════════════════
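A minimal VecDeque-plus-map sketch (illustrative only, not this module's implementation) makes the trade-off concrete: front-of-queue eviction is O(1), while an arbitrary remove would either scan the queue in O(n) or leave stale keys behind.

```rust
use std::collections::{HashMap, VecDeque};

// Minimal FIFO sketch: a queue of keys tracks insertion order, a map
// holds the values. Only front-of-queue eviction is exposed.
struct FifoSketch<K, V> {
    order: VecDeque<K>,
    map: HashMap<K, V>,
}

impl<K: std::hash::Hash + Eq + Clone, V> FifoSketch<K, V> {
    fn new() -> Self {
        Self { order: VecDeque::new(), map: HashMap::new() }
    }

    fn insert(&mut self, key: K, value: V) {
        if self.map.insert(key.clone(), value).is_none() {
            self.order.push_back(key); // record insertion order once
        }
    }

    // O(1): the oldest entry is always at the front of the queue.
    fn pop_oldest(&mut self) -> Option<(K, V)> {
        let key = self.order.pop_front()?;
        let value = self.map.remove(&key)?;
        Some((key, value))
    }

    // An arbitrary remove(&K) would force an O(n) scan of `order` (or
    // leave a stale key in it) -- which is why FIFOCacheTrait omits it.
}
```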

§Policy Comparison

  Policy   Eviction Basis     Supports Remove   Best For
  ───────────────────────────────────────────────────────────────
  FIFO     Insertion order    ❌ No             Predictable eviction
  LRU      Last access time   ✅ Yes            Temporal locality
  LFU      Access frequency   ✅ Yes            Stable hot spots
  LRU-K    K-th access time   ✅ Yes            Scan resistance

§Utility Traits

  ┌─────────────────────────────────────────────────────────────────────────┐
  │ ConcurrentCache                                                         │
  │                                                                         │
  │   Marker trait: Send + Sync                                             │
  │   Purpose: Guarantee thread-safe cache implementations                  │
  │   Usage: fn use_cache<C: CoreCache<K, V> + ConcurrentCache>(c: &C)      │
  └─────────────────────────────────────────────────────────────────────────┘
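In code, the marker works as a compile-time guarantee. A sketch, where `LockedCache` is a hypothetical example type (not part of this module):

```rust
use std::sync::Mutex;

// Marker trait as described above: implementors promise thread safety,
// and the Send + Sync supertraits make the promise machine-checkable.
pub trait ConcurrentCache: Send + Sync {}

// A generic bound like this rejects non-thread-safe caches at compile time.
fn assert_shareable<C: ConcurrentCache>(_cache: &C) {}

// Hypothetical example: a type guarding its state with a lock can opt in.
struct LockedCache {
    inner: Mutex<Vec<u64>>,
}
impl ConcurrentCache for LockedCache {}
```

A type holding, say, a bare `Rc` or `RefCell` could not implement the marker, because it would fail the `Send + Sync` supertrait bounds.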

§Cache Tier Management

  ┌─────────────────────────────────────────────────────────────────────────┐
  │                         Three-Tier Cache Architecture                   │
  │                                                                         │
  │   ┌──────────────┐    promote()    ┌──────────────┐    promote()        │
  │   │  Cold Tier   │ ───────────────►│  Warm Tier   │───────────────►     │
  │   │              │                 │              │                     │
  │   │  FIFOCache   │◄─────────────── │  LFUCache    │◄───────────────     │
  │   │              │    demote()     │              │    demote()         │
  │   └──────────────┘                 └──────────────┘                     │
  │         │                                │                              │
  │         │                                │          ┌──────────────┐    │
  │         │                                │          │   Hot Tier   │    │
  │         │                                └─────────►│              │    │
  │         │                                           │  LRUCache    │    │
  │         └──────────────────────────────────────────►│              │    │
  │                                                     └──────────────┘    │
  │                                                                         │
  │   locate_key(&K)       → CacheTier   Which tier has this key?           │
  │   evict_from_tier(T)   → (K, V)      Force eviction from tier           │
  └─────────────────────────────────────────────────────────────────────────┘
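The tier layout above can be sketched as a trait. The signatures here are inferred from the diagram (method names match; parameter and return details are illustrative):

```rust
// Tier layout from the diagram: Cold (FIFOCache) -> Warm (LFUCache)
// -> Hot (LRUCache).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum CacheTier {
    Cold, // FIFOCache
    Warm, // LFUCache
    Hot,  // LRUCache
}

impl CacheTier {
    // The promote() direction shown above; demote() is the reverse.
    pub fn next_hotter(self) -> Option<CacheTier> {
        match self {
            CacheTier::Cold => Some(CacheTier::Warm),
            CacheTier::Warm => Some(CacheTier::Hot),
            CacheTier::Hot => None,
        }
    }
}

pub trait CacheTierManager<K, V> {
    /// Which tier, if any, currently holds this key?
    fn locate_key(&self, key: &K) -> Option<CacheTier>;
    /// Force an eviction from the given tier.
    fn evict_from_tier(&mut self, tier: CacheTier) -> Option<(K, V)>;
    /// Move a key one tier toward Hot.
    fn promote(&mut self, key: &K) -> bool;
    /// Move a key one tier toward Cold.
    fn demote(&mut self, key: &K) -> bool;
}
```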

§CacheConfig

  Field             Type    Default   Description
  ───────────────────────────────────────────────────────────────
  capacity          usize   1000      Maximum entries
  enable_stats      bool    false     Enable hit/miss tracking
  prealloc_memory   bool    true      Pre-allocate memory for capacity
  thread_safe       bool    false     Use internal synchronization
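A sketch of the struct matching the table above (the real definition may differ in derives or extra fields):

```rust
// CacheConfig sketch; defaults mirror the table above.
#[derive(Debug, Clone)]
pub struct CacheConfig {
    pub capacity: usize,       // maximum entries
    pub enable_stats: bool,    // hit/miss tracking
    pub prealloc_memory: bool, // pre-allocate for `capacity`
    pub thread_safe: bool,     // internal synchronization
}

impl Default for CacheConfig {
    fn default() -> Self {
        Self {
            capacity: 1000,
            enable_stats: false,
            prealloc_memory: true,
            thread_safe: false,
        }
    }
}
```

With `Default` in place, callers can override only the fields they care about: `CacheConfig { enable_stats: true, ..Default::default() }`.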

§Example Usage

use crate::storage::disk::async_disk::cache::cache_traits::{
    CoreCache, MutableCache, FIFOCacheTrait, LRUCacheTrait, LFUCacheTrait,
};

// Function accepting any cache
fn warm_cache<C: CoreCache<u64, Vec<u8>>>(cache: &mut C, data: &[(u64, Vec<u8>)]) {
    for (key, value) in data {
        cache.insert(*key, value.clone());
    }
}

// Function requiring removal capability (LRU, LFU - NOT FIFO)
fn invalidate_keys<C: MutableCache<u64, Vec<u8>>>(cache: &mut C, keys: &[u64]) {
    for key in keys {
        cache.remove(key);
    }
}

// FIFO-specific function
fn evict_oldest_batch<C: FIFOCacheTrait<u64, Vec<u8>>>(
    cache: &mut C,
    count: usize,
) -> Vec<(u64, Vec<u8>)> {
    cache.pop_oldest_batch(count)
}

// LRU-specific function
fn touch_hot_keys<C: LRUCacheTrait<u64, Vec<u8>>>(cache: &mut C, keys: &[u64]) {
    for key in keys {
        cache.touch(key); // Mark as recently used without retrieving
    }
}

// LFU-specific function with frequency-based prioritization
fn boost_key_priority<C: LFUCacheTrait<u64, Vec<u8>>>(cache: &mut C, key: &u64) {
    // Increment frequency without accessing value
    cache.increment_frequency(key);
}

// Thread-safe cache usage
use std::sync::{Arc, RwLock};
use crate::storage::disk::async_disk::cache::lru::ConcurrentLRUCache;

let shared_cache = Arc::new(ConcurrentLRUCache::<u64, Vec<u8>>::new(1000));

// Safe to use from multiple threads
let cache_clone = shared_cache.clone();
std::thread::spawn(move || {
    cache_clone.insert(42, vec![1, 2, 3]);
});

§Thread Safety

  • Individual cache implementations are NOT thread-safe by default
  • Use ConcurrentCache marker trait to identify thread-safe implementations
  • Wrap non-concurrent caches in Arc<RwLock<C>> for shared access
  • Some implementations (e.g., ConcurrentLRUCache) provide built-in concurrency

§Implementation Notes

  • Trait Bounds: CoreCache has no bounds on K, V; implementations add as needed
  • Default Implementations: is_empty(), total_misses(), remove_batch(), pop_oldest_batch()
  • Batch Operations: Default implementations loop over single operations
  • Async Support: AsyncCacheFuture prepared for Phase 2 async-trait integration
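The batch-defaults pattern from the notes above, sketched with an illustrative trait name (the module's remove_batch()/pop_oldest_batch() defaults follow the same loop-over-singles shape):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// A provided method that loops over the single-item operation gives
// every implementor batching for free; implementors may override it
// with something faster.
pub trait BatchRemove<K, V> {
    fn remove_one(&mut self, key: &K) -> Option<V>;

    fn remove_batch(&mut self, keys: &[K]) -> Vec<V> {
        keys.iter().filter_map(|k| self.remove_one(k)).collect()
    }
}

impl<K: Hash + Eq, V> BatchRemove<K, V> for HashMap<K, V> {
    fn remove_one(&mut self, key: &K) -> Option<V> {
        self.remove(key)
    }
}
```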

Structs§

CacheConfig
Configuration for cache creation

Enums§

CacheTier
Cache tier enumeration

Traits§

AsyncCacheFuture
Extension trait for async cache operations. Note: will be implemented in Phase 2, when the async-trait dependency is added.
CacheFactory
Factory trait for creating cache instances
CacheTierManager
High-level cache tier management
ConcurrentCache
Marker trait for caches that are safe to use concurrently. Implementors guarantee thread-safe operations.
CoreCache
Core cache operations that all caches support.
FIFOCacheTrait
FIFO-specific operations that respect insertion order. No arbitrary removal: only ordered operations that maintain FIFO semantics.
LFUCacheTrait
LFU-specific operations that respect frequency order
LRUCacheTrait
LRU-specific operations that respect access order
LRUKCacheTrait
LRU-K specific operations that respect K-distance access patterns
MutableCache
Caches that support arbitrary key-based removal. This is appropriate for LRU, LFU, and general hash-map style caches.