
Module lfuda


Least Frequently Used with Dynamic Aging (LFUDA) cache implementation.

An enhanced LFU cache that addresses the aging problem by dynamically adjusting item priorities based on a global aging factor.

LFUDA is an enhanced LFU algorithm that addresses the “cache pollution” problem by incorporating a global age factor. This allows the cache to gradually forget old access patterns, enabling new popular items to compete fairly against historically popular but now cold items.

§How the Algorithm Works

LFUDA maintains a global age that increases whenever an item is evicted. Each item’s priority is the sum of its access frequency and the global age recorded when it was last accessed. This mechanism ensures that:

  1. Recently accessed items get a “boost” from the current global age
  2. Old items with high frequency but no recent access gradually become eviction candidates
  3. New items can compete fairly against established items

§Mathematical Formulation

For each cache entry i:
  - F_i = access frequency of item i
  - A_i = global age when item i was last accessed
  - Priority_i = F_i + A_i

On eviction:
  - Select item j where Priority_j = min{Priority_i for all i}
  - Set global_age = Priority_j

On access:
  - Increment F_i
  - Update A_i = global_age (item gets current age)
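
The update rules translate directly into code. The sketch below is purely illustrative: Entry, on_access, and evict_one are hypothetical names used to show the arithmetic, not part of this crate's API.

// Illustrative only: these names are not part of cache_rs's public API.
struct Entry {
    freq: u64, // F_i: access frequency
    age: u64,  // A_i: global age at last access
}

impl Entry {
    fn priority(&self) -> u64 {
        self.freq + self.age // Priority_i = F_i + A_i
    }
}

// On access: bump the frequency and stamp the entry with the current global age.
fn on_access(entry: &mut Entry, global_age: u64) {
    entry.freq += 1;
    entry.age = global_age;
}

// On eviction: remove the minimum-priority entry and advance the global age to
// the evicted priority (global_age = Priority_j).
fn evict_one(entries: &mut Vec<Entry>, global_age: &mut u64) -> Option<Entry> {
    let idx = entries
        .iter()
        .enumerate()
        .min_by_key(|(_, e)| e.priority())
        .map(|(i, _)| i)?;
    let evicted = entries.swap_remove(idx);
    *global_age = evicted.priority();
    Some(evicted)
}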

§Data Structure

┌──────────────────────────────────────────────────────────────────────────────┐
│                         LFUDA Cache (global_age=100)                          │
│                                                                                │
│  HashMap<K, *Node>             BTreeMap<Priority, List>                       │
│  ┌───────────────┐             ┌──────────────────────────────────────────┐  │
│  │ "hot"   ────────────────────│ pri=105: [hot]    (freq=5,  age=100)     │  │
│  │ "warm"  ────────────────────│ pri=103: [warm]   (freq=3,  age=100)     │  │
│  │ "stale" ────────────────────│ pri=102: [stale]  (freq=50, age=52) ←min │  │
│  └───────────────┘             └──────────────────────────────────────────┘  │
│                                                     ▲                          │
│                                                     │                          │
│                                              min_priority=102                  │
│                                                                                │
│  Note: "stale" has high frequency (50) but an old age (52), making it         │
│        the eviction candidate despite being historically popular.             │
└──────────────────────────────────────────────────────────────────────────────┘
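
The layout in the diagram can be approximated with standard collections. This is a conceptual sketch only; the crate's actual internals differ (the HashMap<K, *Node> in the diagram points at list nodes rather than storing priorities directly).

use std::collections::{BTreeMap, HashMap, VecDeque};

// Conceptual model of the structure above, not the crate's real fields.
struct LfudaSketch<K> {
    global_age: u64,
    index: HashMap<K, u64>,                  // key -> current priority
    by_priority: BTreeMap<u64, VecDeque<K>>, // priority -> keys at that priority
}

impl<K> LfudaSketch<K> {
    // The eviction candidate lives in the first (lowest) priority bucket.
    fn min_priority(&self) -> Option<u64> {
        self.by_priority.keys().next().copied()
    }
}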

§Operations

| Operation       | Action                                                     | Time     |
|-----------------|------------------------------------------------------------|----------|
| get(key)        | Increment frequency, update age to global_age              | O(log P) |
| put(key, value) | Insert with priority = global_age + 1, evict min priority  | O(log P) |
| remove(key)     | Remove from priority list                                  | O(log P) |
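
The O(log P) terms come from the BTreeMap: every get or put moves the touched key from its old priority bucket to a new one. The hypothetical reprioritize below sketches those two bucket operations over plain collections; a real implementation would unlink the entry from its bucket in O(1) via a node pointer rather than scanning.

use std::collections::{BTreeMap, VecDeque};

// Illustrative sketch of the bucket move behind get/put; not the crate's code.
fn reprioritize(
    by_priority: &mut BTreeMap<u64, VecDeque<&'static str>>,
    key: &'static str,
    old_priority: u64,
    new_priority: u64,
) {
    // Detach the key from its old bucket (O(log P) lookup; the scan here is
    // where a real implementation would unlink a list node in O(1)).
    let emptied = match by_priority.get_mut(&old_priority) {
        Some(bucket) => {
            bucket.retain(|k| *k != key);
            bucket.is_empty()
        }
        None => false,
    };
    if emptied {
        by_priority.remove(&old_priority); // drop empty buckets to keep P small
    }
    // Attach the key to the bucket for its new priority (O(log P)).
    by_priority.entry(new_priority).or_default().push_back(key);
}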

§Aging Example

global_age = 0

put("a", 1)    →  a: freq=1, age=0, priority=1
get("a") x10   →  a: freq=11, age=0, priority=11
put("b", 2)    →  b: freq=1, age=0, priority=1
put("c", 3)    →  c: freq=1, age=0, priority=1

[Cache full, add "d"]
- Evict min priority (b or c with priority=1)
- Set global_age = 1

put("d", 4)    →  d: freq=1, age=1, priority=2

[Many evictions later, global_age = 10]
- "a" still has priority=11 (no access since the early phase)
- New item "e" gets priority=11 (global_age + 1)
- "a" now ties with brand-new items at the bottom of the priority order
  and becomes an eviction candidate despite its high frequency!

§Dual-Limit Capacity

This implementation supports two independent limits:

  • max_entries: Maximum number of items
  • max_size: Maximum total size of content

Eviction occurs when either limit would be exceeded.
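
For example, a configuration bounding both dimensions could look like the sketch below. The field values are illustrative; how per-item sizes are reported to the cache depends on its insertion API and is not shown here.

use cache_rs::config::LfudaCacheConfig;
use core::num::NonZeroUsize;

let config = LfudaCacheConfig {
    capacity: NonZeroUsize::new(10_000).unwrap(), // entry limit: at most 10,000 items
    initial_age: 0,
    max_size: 64 * 1024 * 1024,                   // size limit: at most 64 MiB of content
};
assert_eq!(config.max_size, 64 * 1024 * 1024);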

§Performance Characteristics

| Metric           | Value                                |
|------------------|--------------------------------------|
| Get              | O(log P)                             |
| Put              | O(log P) amortized                   |
| Remove           | O(log P)                             |
| Memory per entry | ~110 bytes overhead + key×2 + value  |

Where P = number of distinct priority values. Since priority = frequency + age, P can grow with the number of entries. BTreeMap provides O(log P) lookups.

Slightly higher overhead than LFU due to age tracking per entry.

§When to Use LFUDA

Good for:

  • Long-running caches where item popularity changes over time
  • Web caches, CDN caches with evolving content popularity
  • Systems that need frequency-based eviction but must adapt to trends
  • Preventing “cache pollution” from historically popular but now stale items

Not ideal for:

  • Short-lived caches (aging doesn’t have time to take effect)
  • Static popularity patterns (pure LFU is simpler and equally effective)
  • Strictly recency-based workloads (use LRU instead)

§LFUDA vs LFU

| Aspect                       | LFU             | LFUDA             |
|------------------------------|-----------------|-------------------|
| Adapts to popularity changes | No              | Yes               |
| Memory overhead              | Lower           | Slightly higher   |
| Complexity                   | Simpler         | More complex      |
| Best for                     | Stable patterns | Evolving patterns |

§Thread Safety

LfudaCache is not thread-safe. For concurrent access, either:

  • Wrap with Mutex or RwLock (see the sketch below)
  • Use ConcurrentLfudaCache (requires concurrent feature)
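
A minimal sketch of the first option, locking around each operation; sharing the wrapped cache across threads additionally requires the cache and its key/value types to be Send.

use std::sync::Mutex;
use cache_rs::LfudaCache;
use cache_rs::config::LfudaCacheConfig;
use core::num::NonZeroUsize;

let config = LfudaCacheConfig {
    capacity: NonZeroUsize::new(100).unwrap(),
    initial_age: 0,
    max_size: u64::MAX,
};
// Every operation takes the lock, so concurrent callers serialize.
let cache = Mutex::new(LfudaCache::init(config, None));

cache.lock().unwrap().put("key", 42);
assert_eq!(cache.lock().unwrap().get(&"key"), Some(&42));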

§Examples

§Basic Usage

use cache_rs::LfudaCache;
use cache_rs::config::LfudaCacheConfig;
use core::num::NonZeroUsize;

let config = LfudaCacheConfig {
    capacity: NonZeroUsize::new(3).unwrap(),
    initial_age: 0,
    max_size: u64::MAX,
};
let mut cache = LfudaCache::init(config, None);

cache.put("a", 1);
cache.put("b", 2);
cache.put("c", 3);

// Access "a" to increase its priority
assert_eq!(cache.get(&"a"), Some(&1));
assert_eq!(cache.get(&"a"), Some(&1));

// Add new item - lowest priority item evicted
cache.put("d", 4);

// "a" survives due to higher priority (frequency + age)
assert_eq!(cache.get(&"a"), Some(&1));

§Long-Running Cache with Aging

use cache_rs::LfudaCache;
use cache_rs::config::LfudaCacheConfig;
use core::num::NonZeroUsize;

let config = LfudaCacheConfig {
    capacity: NonZeroUsize::new(100).unwrap(),
    initial_age: 0,
    max_size: u64::MAX,
};
let mut cache = LfudaCache::init(config, None);

// Populate cache with initial data
for i in 0..100 {
    cache.put(i, i * 10);
}

// Access some items frequently
for _ in 0..50 {
    cache.get(&0);  // Item 0 becomes very popular
}

// Later: new content arrives, old content ages out
for i in 100..200 {
    cache.put(i, i * 10);  // Each insert may evict old items
}

// Item 0 may eventually be evicted if not accessed recently,
// despite its historical popularity - this is the power of LFUDA!

§Structs

LfudaCache
An implementation of a Least Frequently Used with Dynamic Aging (LFUDA) cache.