pub struct ThreadLocalCache<R: 'static> {
pub cache: &'static LocalKey<RefCell<HashMap<String, CacheEntry<R>>>>,
pub order: &'static LocalKey<RefCell<VecDeque<String>>>,
pub limit: Option<usize>,
pub max_memory: Option<usize>,
pub policy: EvictionPolicy,
pub ttl: Option<u64>,
pub frequency_weight: Option<f64>,
pub window_ratio: Option<f64>,
pub sketch_width: Option<usize>,
pub sketch_depth: Option<usize>,
pub decay_interval: Option<u64>,
pub stats: CacheStats,
}
Core cache abstraction that stores values in a thread-local HashMap with configurable limits.
This cache is designed to work with static thread-local maps declared using
the thread_local! macro. Each thread maintains its own independent cache,
ensuring thread safety without the need for locks.
§Type Parameters
- R: The type of values stored in the cache. Must be 'static to satisfy thread-local storage requirements and Clone for retrieval.
§Features
- Thread-local storage: Each thread has its own cache instance
- Configurable limits: Optional entry count limit and memory limit
- Eviction policies: FIFO, LRU (default), LFU, ARC, Random, TLRU, and W-TinyLFU
  - FIFO: First In, First Out - simple and predictable
  - LRU: Least Recently Used - evicts least recently accessed entries
  - LFU: Least Frequently Used - evicts least frequently accessed entries
  - ARC: Adaptive Replacement Cache - hybrid policy combining recency and frequency
  - Random: Random replacement - O(1) eviction with minimal overhead
  - TLRU: Time-aware LRU - combines recency, frequency, and age factors
    - Customizable with the frequency_weight parameter
    - Formula: score = frequency^weight × position × age_factor
      - frequency_weight < 1.0: emphasizes recency (time-sensitive data)
      - frequency_weight > 1.0: emphasizes frequency (popular content)
  - W-TinyLFU: Windowed TinyLFU - configured via window_ratio, sketch_width, sketch_depth, and decay_interval
- TTL support: Optional time-to-live for automatic expiration
- Result-aware: Special handling for Result<T, E> types
- Memory-based limits: Optional maximum memory usage (requires MemoryEstimator)
- Statistics tracking: Optional hit/miss monitoring (requires the stats feature)
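The TLRU formula above can be sketched in plain Rust to show how frequency_weight shifts the balance. This is a standalone illustration of the documented formula, not the crate's implementation; the exact meaning of position and age_factor is an assumption here (both normalized to (0, 1], higher meaning "more recent").

```rust
// Standalone sketch of the documented TLRU score:
//   score = frequency^weight × position × age_factor
// `position` and `age_factor` semantics are illustrative assumptions.
fn tlru_score(frequency: f64, weight: f64, position: f64, age_factor: f64) -> f64 {
    frequency.powf(weight) * position * age_factor
}

fn main() {
    // A popular-but-old entry vs. a fresh-but-rarely-used entry:
    // (frequency, position, age_factor)
    let popular = (50.0, 0.2, 0.3);
    let fresh = (2.0, 0.9, 1.0);

    // weight < 1.0 dampens frequency, so the fresh entry scores higher.
    let w = 0.3;
    println!(
        "w=0.3: popular={:.3}, fresh={:.3}",
        tlru_score(popular.0, w, popular.1, popular.2),
        tlru_score(fresh.0, w, fresh.1, fresh.2)
    );

    // weight > 1.0 amplifies frequency, so the popular entry scores higher.
    let w = 1.5;
    println!(
        "w=1.5: popular={:.3}, fresh={:.3}",
        tlru_score(popular.0, w, popular.1, popular.2),
        tlru_score(fresh.0, w, fresh.1, fresh.2)
    );
}
```

With these sample numbers, the ranking flips between the two weights, which is the behavior the two bullets above describe.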
§Thread Safety
The cache is thread-safe by design - each thread has its own independent copy of the cache data. This means:
- No locks or synchronization needed
- No contention between threads
- Cache entries are not shared across threads
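The isolation described above can be demonstrated with the standard library alone; this sketch uses a plain thread_local! HashMap rather than cachelito's types.

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::thread;

thread_local! {
    // Each thread gets its own independent copy of this map.
    static CACHE: RefCell<HashMap<String, i32>> = RefCell::new(HashMap::new());
}

fn put(key: &str, value: i32) {
    CACHE.with(|c| c.borrow_mut().insert(key.to_string(), value));
}

fn get(key: &str) -> Option<i32> {
    CACHE.with(|c| c.borrow().get(key).copied())
}

fn main() {
    put("answer", 42);

    // A spawned thread sees an empty cache: entries are not shared.
    let handle = thread::spawn(|| get("answer"));
    assert_eq!(handle.join().unwrap(), None);

    // The main thread still sees its own entry, with no locking anywhere.
    assert_eq!(get("answer"), Some(42));
    println!("thread-local isolation verified");
}
```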
§Examples
§Basic Usage
use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};
thread_local! {
static MY_CACHE: RefCell<HashMap<String, CacheEntry<i32>>> = RefCell::new(HashMap::new());
static MY_ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}
let cache = ThreadLocalCache::new(&MY_CACHE, &MY_ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("answer", 42);
assert_eq!(cache.get("answer"), Some(42));
§With Cache Limit and LRU Policy
use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};
thread_local! {
static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}
// Cache with limit of 100 entries using LRU eviction
let cache = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::LRU, None, None, None, None, None, None);
cache.insert("key1", "value1".to_string());
cache.insert("key2", "value2".to_string());
// Accessing key1 moves it to the end (most recently used)
let _ = cache.get("key1");
§With TTL (Time To Live)
use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};
thread_local! {
static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}
// Cache with 60 second TTL
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, Some(60), None, None, None, None, None);
cache.insert("key", "value".to_string());
// Entry will expire after 60 seconds
// get() returns None for expired entries
§TLRU with Custom Frequency Weight
use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};
thread_local! {
static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}
// Low frequency_weight (0.3) - emphasizes recency over frequency
// Good for time-sensitive data where freshness matters more than popularity
let cache = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::TLRU, Some(300), Some(0.3), None, None, None, None);
// High frequency_weight (1.5) - emphasizes frequency over recency
// Good for popular content that should stay cached despite age
let cache_popular = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::TLRU, Some(300), Some(1.5), None, None, None, None);
// Default (omit frequency_weight) - balanced approach
let cache_balanced = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::TLRU, Some(300), None, None, None, None, None);
§Fields
cache: &'static LocalKey<RefCell<HashMap<String, CacheEntry<R>>>> - Reference to the thread-local storage key for the cache HashMap
order: &'static LocalKey<RefCell<VecDeque<String>>> - Reference to the thread-local storage key for the cache order queue
limit: Option<usize> - Maximum number of items to store in the cache
max_memory: Option<usize> - Maximum memory size in bytes
policy: EvictionPolicy - Eviction policy to use for the cache
ttl: Option<u64> - Optional TTL (in seconds) for cache entries
frequency_weight: Option<f64> - Frequency weight for the TLRU policy (non-negative). Only used when policy is TLRU.
window_ratio: Option<f64> - Window ratio for the W-TinyLFU policy (between 0.0 and 1.0). Only used when policy is WTinyLFU.
sketch_width: Option<usize> - Sketch width for the W-TinyLFU policy. Only used when policy is WTinyLFU.
sketch_depth: Option<usize> - Sketch depth for the W-TinyLFU policy. Only used when policy is WTinyLFU.
decay_interval: Option<u64> - Decay interval for the W-TinyLFU policy. Only used when policy is WTinyLFU.
stats: CacheStats - Cache statistics (when the stats feature is enabled)
§Implementations
impl<R: Clone + 'static> ThreadLocalCache<R>
pub fn new(
cache: &'static LocalKey<RefCell<HashMap<String, CacheEntry<R>>>>,
order: &'static LocalKey<RefCell<VecDeque<String>>>,
limit: Option<usize>,
max_memory: Option<usize>,
policy: EvictionPolicy,
ttl: Option<u64>,
frequency_weight: Option<f64>,
window_ratio: Option<f64>,
sketch_width: Option<usize>,
sketch_depth: Option<usize>,
decay_interval: Option<u64>,
) -> Self
Creates a new ThreadLocalCache wrapper around thread-local storage keys.
§Arguments
- cache - A static reference to a LocalKey that stores the cache HashMap
- order - A static reference to a LocalKey that stores the eviction order queue
- limit - Optional maximum number of entries (None for unlimited)
- max_memory - Optional maximum memory size in bytes (None for unlimited)
- policy - Eviction policy to use when the limit is reached
- ttl - Optional time-to-live in seconds (None for no expiration)
- frequency_weight - Optional frequency weight for the TLRU policy (non-negative)
- window_ratio - Optional window ratio for the W-TinyLFU policy (between 0.0 and 1.0)
- sketch_width - Optional sketch width for the W-TinyLFU policy
- sketch_depth - Optional sketch depth for the W-TinyLFU policy
- decay_interval - Optional decay interval for the W-TinyLFU policy
§Examples
use std::cell::RefCell;
use std::collections::{HashMap, VecDeque};
use cachelito_core::{ThreadLocalCache, EvictionPolicy, CacheEntry};
thread_local! {
static CACHE: RefCell<HashMap<String, CacheEntry<String>>> = RefCell::new(HashMap::new());
static ORDER: RefCell<VecDeque<String>> = RefCell::new(VecDeque::new());
}
let cache = ThreadLocalCache::new(&CACHE, &ORDER, Some(100), None, EvictionPolicy::LRU, Some(60), None, None, None, None, None);
pub fn get(&self, key: &str) -> Option<R>
Retrieves a value from the cache by key.
§Arguments
key- The cache key to look up
§Returns
- Some(value) if the key exists in the cache and has not expired
- None if the key is not found or has expired
§Examples
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("key", 100);
assert_eq!(cache.get("key"), Some(100));
assert_eq!(cache.get("missing"), None);
pub fn insert(&self, key: &str, value: R)
Inserts a value into the cache with the specified key.
If a value already exists for this key, it will be replaced.
§Arguments
- key - The cache key
- value - The value to store
§Examples
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("first", 1);
cache.insert("first", 2); // Replaces previous value
assert_eq!(cache.get("first"), Some(2));
§Note
This method does NOT require the MemoryEstimator trait; it only handles entry-count limits.
If max_memory is configured, use insert_with_memory() instead, which requires
the type to implement MemoryEstimator.
pub fn stats(&self) -> &CacheStats
Returns a reference to the cache statistics.
This method is only available when the stats feature is enabled.
§Examples
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
cache.insert("key1", 100);
let _ = cache.get("key1");
let _ = cache.get("key2");
let stats = cache.stats();
assert_eq!(stats.hits(), 1);
assert_eq!(stats.misses(), 1);
impl<R: Clone + 'static + MemoryEstimator> ThreadLocalCache<R>
pub fn insert_with_memory(&self, key: &str, value: R)
Insert with memory limit support.
This method requires R to implement MemoryEstimator and handles both
memory-based and entry-count-based eviction.
Use this method when max_memory is configured in the cache.
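Conceptually, a byte-budget insert evicts old entries until the new one fits. The self-contained sketch below illustrates that loop; the estimate_size function is a hypothetical stand-in for whatever MemoryEstimator provides, and the FIFO order and bookkeeping are illustrative assumptions, not cachelito's actual implementation.

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical stand-in for MemoryEstimator: count key + value bytes.
fn estimate_size(key: &str, value: &String) -> usize {
    key.len() + value.len()
}

struct BoundedCache {
    map: HashMap<String, String>,
    order: VecDeque<String>, // FIFO eviction order
    used: usize,
    max_memory: usize,
}

impl BoundedCache {
    // Evict oldest entries until the new entry fits within max_memory.
    // (Replacing an existing key is ignored here for brevity.)
    fn insert(&mut self, key: &str, value: String) {
        let size = estimate_size(key, &value);
        while self.used + size > self.max_memory {
            let Some(old) = self.order.pop_front() else { break };
            if let Some(v) = self.map.remove(&old) {
                self.used -= estimate_size(&old, &v);
            }
        }
        self.used += size;
        self.order.push_back(key.to_string());
        self.map.insert(key.to_string(), value);
    }
}

fn main() {
    let mut cache = BoundedCache {
        map: HashMap::new(),
        order: VecDeque::new(),
        used: 0,
        max_memory: 32,
    };
    cache.insert("a", "x".repeat(12)); // 13 bytes used
    cache.insert("b", "y".repeat(12)); // 26 bytes used
    cache.insert("c", "z".repeat(12)); // would exceed 32, so "a" is evicted
    assert!(!cache.map.contains_key("a"));
    assert!(cache.map.contains_key("c"));
    println!("used = {} bytes", cache.used);
}
```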
impl<T: Clone + Debug + 'static, E: Clone + Debug + 'static> ThreadLocalCache<Result<T, E>>
Specialized implementation for caching Result<T, E> return types.
This implementation provides a method to cache only successful (Ok) results,
which is useful for functions that may fail - you typically don’t want to cache
errors, as retrying the operation might succeed later.
§Type Parameters
- T - The success type (inner type of Ok)
- E - The error type (inner type of Err)
§Examples
let cache = ThreadLocalCache::new(&CACHE, &ORDER, None, None, EvictionPolicy::FIFO, None, None, None, None, None, None);
// Ok values are cached
cache.insert_result("success", &Ok(42));
assert_eq!(cache.get("success"), Some(Ok(42)));
// Err values are NOT cached
cache.insert_result("failure", &Err("error".to_string()));
assert_eq!(cache.get("failure"), None);
pub fn insert_result(&self, key: &str, value: &Result<T, E>)
Inserts a Result into the cache, but only if it’s an Ok value.
This method is specifically designed for caching functions that return
Result<T, E>. It intelligently ignores Err values, as errors typically
should not be cached (the operation might succeed on retry).
This version does NOT require MemoryEstimator. Use insert_result_with_memory()
when max_memory is configured.
§Arguments
- key - The cache key
- value - The Result to potentially cache
§Behavior
- If value is Ok(v), stores Ok(v.clone()) in the cache
- If value is Err(_), does nothing (the error is not cached)
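The Ok-only behavior can be sketched with std types alone. The try_insert_ok helper below is a hypothetical stand-in, not cachelito's API; it just mirrors the two bullets above against a plain HashMap.

```rust
use std::collections::HashMap;

// Hypothetical helper mirroring the documented behavior: store a clone
// of the Ok value, silently skip Err so a later retry can still succeed.
fn try_insert_ok<T: Clone, E>(
    cache: &mut HashMap<String, T>,
    key: &str,
    value: &Result<T, E>,
) {
    if let Ok(v) = value {
        cache.insert(key.to_string(), v.clone());
    }
}

fn main() {
    let mut cache: HashMap<String, i32> = HashMap::new();

    // Ok values are cached...
    try_insert_ok(&mut cache, "success", &Ok::<i32, String>(42));
    assert_eq!(cache.get("success"), Some(&42));

    // ...Err values are not.
    try_insert_ok(&mut cache, "failure", &Err::<i32, String>("boom".into()));
    assert_eq!(cache.get("failure"), None);
    println!("only Ok results cached");
}
```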
impl<T: Clone + Debug + 'static + MemoryEstimator, E: Clone + Debug + 'static + MemoryEstimator> ThreadLocalCache<Result<T, E>>
Implementation for Result types WITH MemoryEstimator support.
pub fn insert_result_with_memory(&self, key: &str, value: &Result<T, E>)
Inserts a Result into the cache with memory limit support.
This method requires both T and E to implement MemoryEstimator. Use this when max_memory is configured.