Struct ThreadLocalCache 

Source
pub struct ThreadLocalCache<R: 'static> {
    pub cache: &'static LocalKey<RefCell<HashMap<String, CacheEntry<R>>>>,
    pub order: &'static LocalKey<RefCell<VecDeque<String>>>,
    pub limit: Option<usize>,
    pub max_memory: Option<usize>,
    pub policy: EvictionPolicy,
    pub ttl: Option<u64>,
    pub frequency_weight: Option<f64>,
    pub window_ratio: Option<f64>,
    pub sketch_width: Option<usize>,
    pub sketch_depth: Option<usize>,
    pub decay_interval: Option<u64>,
    pub stats: CacheStats,
}
Expand description

Core cache abstraction that stores values in a thread-local HashMap with configurable limits.

This cache is designed to work with static thread-local maps declared using the thread_local! macro. Each thread maintains its own independent cache, ensuring thread safety without the need for locks.

§Type Parameters

  • R - The type of values stored in the cache. Must be 'static to satisfy thread-local storage requirements and Clone for retrieval.

§Features

  • Thread-local storage: Each thread has its own cache instance
  • Configurable limits: Optional entry count limit and memory limit
  • Eviction policies: FIFO, LRU (default), LFU, ARC, Random, TLRU, and W-TinyLFU
    • FIFO: First In, First Out - simple and predictable
    • LRU: Least Recently Used - evicts least recently accessed entries
    • LFU: Least Frequently Used - evicts least frequently accessed entries
    • ARC: Adaptive Replacement Cache - hybrid policy combining recency and frequency
    • Random: Random replacement - O(1) eviction with minimal overhead
    • TLRU: Time-aware LRU - combines recency, frequency, and age factors
      • Customizable with the frequency_weight parameter
      • Formula: score = frequency^weight × position × age_factor
      • frequency_weight < 1.0: Emphasize recency (time-sensitive data)
      • frequency_weight > 1.0: Emphasize frequency (popular content)
    • W-TinyLFU: Window TinyLFU - frequency-sketch-based admission policy, configured via window_ratio, sketch_width, sketch_depth, and decay_interval
  • TTL support: Optional time-to-live for automatic expiration
  • Result-aware: Special handling for Result<T, E> types
  • Memory-based limits: Optional maximum memory usage (requires MemoryEstimator)
  • Statistics tracking: Optional hit/miss monitoring (requires stats feature)
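
The TLRU score above is plain arithmetic, so the effect of frequency_weight can be illustrated directly. The function below mirrors the documented formula only; how the crate normalizes position and age_factor internally is not shown on this page, so treat this as a sketch of the weight's effect, not the crate's implementation:

```rust
// Sketch of the documented TLRU score: frequency^weight × position × age_factor.
fn tlru_score(frequency: f64, position: f64, age_factor: f64, weight: f64) -> f64 {
    frequency.powf(weight) * position * age_factor
}

fn main() {
    // An entry accessed 10 times, with neutral position/age factors:
    let recency_biased = tlru_score(10.0, 1.0, 1.0, 0.3);   // ≈ 2.0
    let frequency_biased = tlru_score(10.0, 1.0, 1.0, 1.5); // ≈ 31.6
    // A low weight flattens frequency's influence, so recency (position/age)
    // dominates the ranking; a high weight amplifies frequency instead.
    assert!(frequency_biased > recency_biased);
}
```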

§Thread Safety

The cache is thread-safe by design - each thread has its own independent copy of the cache data. This means:

  • No locks or synchronization needed
  • No contention between threads
  • Cache entries are not shared across threads
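
A std-only sketch (independent of this crate) of why thread_local! storage needs no locks: each thread observes its own copy of the map, so an insert on one thread is invisible to another.

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::thread;

thread_local! {
    // Same declaration style the cache expects; a plain i32 value for brevity.
    static LOCAL: RefCell<HashMap<String, i32>> = RefCell::new(HashMap::new());
}

fn main() {
    LOCAL.with(|m| m.borrow_mut().insert("answer".to_string(), 42));

    // The spawned thread gets a fresh, empty copy of LOCAL.
    let seen_elsewhere = thread::spawn(|| {
        LOCAL.with(|m| m.borrow().get("answer").copied())
    })
    .join()
    .unwrap();

    assert_eq!(seen_elsewhere, None); // no sharing, hence no locks needed
    LOCAL.with(|m| assert_eq!(m.borrow().get("answer").copied(), Some(42)));
}
```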

§Examples

§Basic Usage

use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};

thread_local! {
    static MY_CACHE: RefCell<HashMap<String, CacheEntry<i32>>> = RefCell::new(HashMap::new());
    static MY_ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}

let cache = ThreadLocalCache::new(&MY_CACHE, &MY_ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("answer", 42);
assert_eq!(cache.get("answer"), Some(42));

§With Cache Limit and LRU Policy

use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};

thread_local! {
    static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
    static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}

// Cache with limit of 100 entries using LRU eviction
let cache = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::LRU, None, None, None, None, None, None);
cache.insert("key1", "value1".to_string());
cache.insert("key2", "value2".to_string());

// Accessing key1 moves it to the end (most recently used)
let _ = cache.get("key1");
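
The "moves it to the end" step can be illustrated with the same VecDeque type the cache uses for its order queue. This is a std-only sketch of LRU order maintenance; the crate's actual bookkeeping may differ:

```rust
use std::collections::VecDeque;

// Move `key` to the back (most recently used); the front is always the
// next eviction candidate.
fn touch(order: &mut VecDeque<String>, key: &str) {
    if let Some(pos) = order.iter().position(|k| k.as_str() == key) {
        let k = order.remove(pos).unwrap();
        order.push_back(k);
    }
}

fn main() {
    let mut order: VecDeque<String> =
        ["key1", "key2"].iter().map(|s| s.to_string()).collect();
    touch(&mut order, "key1"); // key1 becomes most recently used
    // key2 is now at the front, so it would be evicted first.
    assert_eq!(order.front().map(|s| s.as_str()), Some("key2"));
    assert_eq!(order.back().map(|s| s.as_str()), Some("key1"));
}
```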

§With TTL (Time To Live)

use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};

thread_local! {
    static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
    static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}

// Cache with 60 second TTL
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, Some(60), None, None, None, None, None);
cache.insert("key", "value".to_string());

// Entry will expire after 60 seconds
// get() returns None for expired entries
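
The expiry check implied above can be sketched with std::time alone. The Entry struct and clock handling below are assumptions for illustration; the crate's actual CacheEntry layout is not shown on this page:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Sketch of an entry carrying its insertion time.
struct Entry<V> {
    value: V,
    inserted_at: Instant,
}

// Return the value only while it is younger than the TTL.
fn get_if_fresh<V: Clone>(entry: &Entry<V>, ttl: Duration) -> Option<V> {
    if entry.inserted_at.elapsed() <= ttl {
        Some(entry.value.clone())
    } else {
        None
    }
}

fn main() {
    let entry = Entry { value: "value", inserted_at: Instant::now() };
    assert_eq!(get_if_fresh(&entry, Duration::from_secs(60)), Some("value"));

    thread::sleep(Duration::from_millis(10));
    assert_eq!(get_if_fresh(&entry, Duration::from_millis(1)), None); // expired
}
```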

§TLRU with Custom Frequency Weight

use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};

thread_local! {
    static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
    static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}

// Low frequency_weight (0.3) - emphasizes recency over frequency
// Good for time-sensitive data where freshness matters more than popularity
let cache = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::TLRU, Some(300), Some(0.3), None, None, None, None);

// High frequency_weight (1.5) - emphasizes frequency over recency
// Good for popular content that should stay cached despite age
let cache_popular = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::TLRU, Some(300), Some(1.5), None, None, None, None);

// Default (omit frequency_weight) - balanced approach
let cache_balanced = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::TLRU, Some(300), None, None, None, None, None);

Fields§

§cache: &'static LocalKey<RefCell<HashMap<String, CacheEntry<R>>>>

Reference to the thread-local storage key for the cache HashMap

§order: &'static LocalKey<RefCell<VecDeque<String>>>

Reference to the thread-local storage key for the cache order queue

§limit: Option<usize>

Maximum number of items to store in the cache

§max_memory: Option<usize>

Maximum memory size in bytes

§policy: EvictionPolicy

Eviction policy to use for the cache

§ttl: Option<u64>

Optional TTL (in seconds) for cache entries

§frequency_weight: Option<f64>

Frequency weight for TLRU policy (must be non-negative). Only used when policy is TLRU.

§window_ratio: Option<f64>

Window ratio for W-TinyLFU policy (between 0.0 and 1.0). Only used when policy is WTinyLFU.

§sketch_width: Option<usize>

Sketch width for W-TinyLFU policy. Only used when policy is WTinyLFU.

§sketch_depth: Option<usize>

Sketch depth for W-TinyLFU policy. Only used when policy is WTinyLFU.

§decay_interval: Option<u64>

Decay interval for W-TinyLFU policy. Only used when policy is WTinyLFU.

§stats: CacheStats

Cache statistics (when stats feature is enabled)
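
The sketch_width and sketch_depth fields suggest a count-min-style frequency sketch, as commonly used by W-TinyLFU admission. A minimal std-only illustration of such a sketch (not the crate's implementation; the hashing scheme here is an assumption):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A width×depth table of counters; each row hashes the key independently.
struct CountMin {
    width: usize,
    depth: usize,
    table: Vec<Vec<u32>>,
}

impl CountMin {
    fn new(width: usize, depth: usize) -> Self {
        Self { width, depth, table: vec![vec![0; width]; depth] }
    }

    fn index(&self, key: &str, row: usize) -> usize {
        let mut h = DefaultHasher::new();
        (row as u64).hash(&mut h); // salt the hash per row
        key.hash(&mut h);
        (h.finish() as usize) % self.width
    }

    fn increment(&mut self, key: &str) {
        for row in 0..self.depth {
            let i = self.index(key, row);
            self.table[row][i] += 1;
        }
    }

    // Taking the minimum across rows bounds over-counting from collisions;
    // a count-min sketch never under-estimates.
    fn estimate(&self, key: &str) -> u32 {
        (0..self.depth)
            .map(|r| self.table[r][self.index(key, r)])
            .min()
            .unwrap_or(0)
    }
}

fn main() {
    let mut cm = CountMin::new(64, 4); // sketch_width = 64, sketch_depth = 4
    for _ in 0..5 {
        cm.increment("hot");
    }
    cm.increment("cold");
    assert!(cm.estimate("hot") >= 5);
    assert!(cm.estimate("cold") >= 1);
}
```

The decay_interval field would periodically age these counters so that stale popularity fades; that step is omitted here.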

Implementations§

Source§

impl<R: Clone + 'static> ThreadLocalCache<R>

Source

pub fn new(
    cache: &'static LocalKey<RefCell<HashMap<String, CacheEntry<R>>>>,
    order: &'static LocalKey<RefCell<VecDeque<String>>>,
    limit: Option<usize>,
    max_memory: Option<usize>,
    policy: EvictionPolicy,
    ttl: Option<u64>,
    frequency_weight: Option<f64>,
    window_ratio: Option<f64>,
    sketch_width: Option<usize>,
    sketch_depth: Option<usize>,
    decay_interval: Option<u64>,
) -> Self

Creates a new ThreadLocalCache wrapper around thread-local storage keys.

§Arguments
  • cache - A static reference to a LocalKey that stores the cache HashMap
  • order - A static reference to a LocalKey that stores the eviction order queue
  • limit - Optional maximum number of entries (None for unlimited)
  • max_memory - Optional maximum memory size in bytes (None for unlimited)
  • policy - Eviction policy to use when limit is reached
  • ttl - Optional time-to-live in seconds (None for no expiration)
  • frequency_weight - Optional frequency weight for TLRU policy (non-negative; values below 1.0 emphasize recency, above 1.0 emphasize frequency)
  • window_ratio - Optional window ratio for W-TinyLFU policy (between 0.0 and 1.0)
  • sketch_width - Optional sketch width for W-TinyLFU policy
  • sketch_depth - Optional sketch depth for W-TinyLFU policy
  • decay_interval - Optional decay interval for W-TinyLFU policy
§Examples
use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};

thread_local! {
    static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
    static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}

let cache = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::LRU, Some(60), None, None, None, None, None);
Source

pub fn get(&self, key: &str) -> Option<R>

Retrieves a value from the cache by key.

§Arguments
  • key - The cache key to look up
§Returns
  • Some(value) if the key exists in the cache and is not expired
  • None if the key is not found or has expired
§Examples
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("key", 100);
assert_eq!(cache.get("key"), Some(100));
assert_eq!(cache.get("missing"), None);
Source

pub fn insert(&self, key: &str, value: R)

Inserts a value into the cache with the specified key.

If a value already exists for this key, it will be replaced.

§Arguments
  • key - The cache key
  • value - The value to store
§Examples
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("first", 1);
cache.insert("first", 2); // Replaces previous value
assert_eq!(cache.get("first"), Some(2));
§Note

This method does NOT require the MemoryEstimator trait. It only enforces entry-count limits. If max_memory is configured, use insert_with_memory() instead, which requires the type to implement MemoryEstimator.
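
What the memory-aware path adds, conceptually, is a size check before admission. The sketch below uses a hypothetical estimated_bytes helper standing in for MemoryEstimator, whose real shape is not shown on this page; it is a std-only illustration of memory-bounded eviction, not the crate's code:

```rust
use std::mem::size_of;

// Hypothetical stand-in for the crate's MemoryEstimator trait.
trait EstimateBytes {
    fn estimated_bytes(&self) -> usize;
}

impl EstimateBytes for String {
    fn estimated_bytes(&self) -> usize {
        size_of::<String>() + self.len() // struct header plus heap payload
    }
}

fn main() {
    let max_memory = 80usize; // bytes
    let mut used = 0usize;
    let mut entries: Vec<(String, usize)> = Vec::new(); // insertion order

    for value in ["aaaaaaaaaa", "bbbbbbbbbb", "cccccccccc"] {
        let size = value.to_string().estimated_bytes();
        // Evict oldest entries until the incoming value fits.
        while used + size > max_memory && !entries.is_empty() {
            used -= entries.remove(0).1;
        }
        used += size;
        entries.push((value.to_string(), size));
    }
    assert!(used <= max_memory); // the budget is never exceeded
}
```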

Source

pub fn stats(&self) -> &CacheStats

Returns a reference to the cache statistics.

This method is only available when the stats feature is enabled.

§Examples
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("key1", 100);
let _ = cache.get("key1");
let _ = cache.get("key2");

let stats = cache.stats();
assert_eq!(stats.hits(), 1);
assert_eq!(stats.misses(), 1);
Source§

impl<R: Clone + 'static + MemoryEstimator> ThreadLocalCache<R>

Source

pub fn insert_with_memory(&self, key: &str, value: R)

Insert with memory limit support.

This method requires R to implement MemoryEstimator and handles both memory-based and entry-count-based eviction.

Use this method when max_memory is configured in the cache.

Source§

impl<T: Clone + Debug + 'static, E: Clone + Debug + 'static> ThreadLocalCache<Result<T, E>>

Specialized implementation for caching Result<T, E> return types.

This implementation provides a method to cache only successful (Ok) results, which is useful for functions that may fail - you typically don’t want to cache errors, as retrying the operation might succeed later.

§Type Parameters

  • T - The success type (inner type of Ok)
  • E - The error type (inner type of Err)

§Examples

let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);

// Ok values are cached
cache.insert_result("success", &Ok(42));
assert_eq!(cache.get("success"), Some(Ok(42)));

// Err values are NOT cached
cache.insert_result("failure", &Err("error".to_string()));
assert_eq!(cache.get("failure"), None);
Source

pub fn insert_result(&self, key: &str, value: &Result<T, E>)

Inserts a Result into the cache, but only if it’s an Ok value.

This method is specifically designed for caching functions that return Result<T, E>. It intelligently ignores Err values, as errors typically should not be cached (the operation might succeed on retry).

This version does NOT require MemoryEstimator. Use insert_result_with_memory() when max_memory is configured.

§Arguments
  • key - The cache key
  • value - The Result to potentially cache
§Behavior
  • If value is Ok(v), stores Ok(v.clone()) in the cache
  • If value is Err(_), does nothing (error is not cached)
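
The Ok-only behaviour reduces to a single pattern match; a std-only sketch of that logic:

```rust
use std::collections::HashMap;

// Only successful results are admitted; errors are silently skipped so the
// operation can be retried later. Mirrors the behaviour documented above.
fn cache_ok<T: Clone, E>(store: &mut HashMap<String, T>, key: &str, value: &Result<T, E>) {
    if let Ok(v) = value {
        store.insert(key.to_string(), v.clone());
    }
}

fn main() {
    let mut store: HashMap<String, i32> = HashMap::new();
    cache_ok(&mut store, "success", &Ok::<i32, String>(42));
    cache_ok(&mut store, "failure", &Err::<i32, String>("boom".to_string()));
    assert_eq!(store.get("success"), Some(&42));
    assert_eq!(store.get("failure"), None);
}
```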
Source§

impl<T: Clone + Debug + 'static + MemoryEstimator, E: Clone + Debug + 'static + MemoryEstimator> ThreadLocalCache<Result<T, E>>

Implementation for Result types WITH MemoryEstimator support.

Source

pub fn insert_result_with_memory(&self, key: &str, value: &Result<T, E>)

Inserts a Result into the cache with memory limit support.

This method requires both T and E to implement MemoryEstimator. Use this when max_memory is configured.

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.