pub struct GlobalCache<R>
where
    R: 'static,
{
    pub map: &'static Lazy<RwLock<RawRwLock, HashMap<String, CacheEntry<R>>>>,
    pub order: &'static Lazy<Mutex<RawMutex, VecDeque<String>>>,
    pub limit: Option<usize>,
    pub max_memory: Option<usize>,
    pub policy: EvictionPolicy,
    pub ttl: Option<u64>,
    pub frequency_weight: Option<f64>,
    pub stats: &'static Lazy<CacheStats>,
}
A thread-safe global cache that can be shared across multiple threads.
Unlike ThreadLocalCache, which uses thread-local storage, GlobalCache stores
cached values in global static variables protected by locks, allowing cache
sharing across all threads in the application.
§Type Parameters
R - The return type to be cached. Must be 'static to be stored in global state.
§Features
- Thread-safe sharing: Multiple threads access the same cache through RwLock/Mutex
- Eviction policies: FIFO, LRU, LFU, ARC, Random, and TLRU
  - FIFO: First In, First Out - simple and predictable
  - LRU: Least Recently Used - evicts least recently accessed entries
  - LFU: Least Frequently Used - evicts least frequently accessed entries
  - ARC: Adaptive Replacement Cache - hybrid policy combining recency and frequency
  - Random: Random replacement - O(1) eviction with minimal overhead
  - TLRU: Time-aware LRU - combines recency, frequency, and age factors
    - Customizable with the frequency_weight parameter
    - Formula: score = frequency^weight × position × age_factor (see the sketch after this list)
      - frequency_weight < 1.0: Emphasize recency (time-sensitive data)
      - frequency_weight > 1.0: Emphasize frequency (popular content)
- Cache limits: Entry count limits (limit) and memory-based limits (max_memory)
- TTL support: Automatic expiration of entries based on age
- Statistics: Optional cache hit/miss tracking (with the stats feature)
- Frequency tracking: For LFU, ARC, and TLRU policies
- Memory estimation: Support for memory-based eviction (requires MemoryEstimator)
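The TLRU score can be illustrated with a small sketch (illustrative only; the function name, the linear age_factor shape, and the parameter handling are assumptions, not the crate's internals):
// Hypothetical TLRU scoring sketch: score = frequency^weight × position × age_factor.
// The entry with the lowest score is the eviction candidate.
fn tlru_score(frequency: u64, position: usize, age_secs: u64, ttl_secs: u64, weight: f64) -> f64 {
    // Assume age_factor decays linearly from 1.0 (fresh) to 0.0 (at TTL).
    let age_factor = 1.0 - (age_secs as f64 / ttl_secs as f64).min(1.0);
    (frequency as f64).powf(weight) * (position as f64 + 1.0) * age_factor
}
fn main() {
    // weight > 1.0 favors frequently accessed entries...
    println!("popular: {:.1}", tlru_score(50, 0, 10, 300, 1.5));
    // ...while weight < 1.0 favors recently accessed ones.
    println!("recent: {:.1}", tlru_score(2, 9, 5, 300, 0.3));
}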
§Cache Entry Structure
Cache entries are stored as CacheEntry<R> which contains:
- value: The cached value of type R
- timestamp: Unix timestamp when the entry was created (for TTL and TLRU age factor)
- frequency: Access counter for LFU, ARC, and TLRU policies
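A layout consistent with that description would be (a sketch inferred from the field list above; the crate's actual CacheEntry<R> may differ in detail):
// Sketch of a cache entry with the three fields described above.
pub struct CacheEntry<R> {
    // The cached value.
    pub value: R,
    // Unix timestamp (seconds) at creation; drives TTL expiration
    // and the TLRU age factor.
    pub timestamp: u64,
    // Access counter consulted by the LFU, ARC, and TLRU policies.
    pub frequency: u64,
}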
§Eviction Behavior
When the cache reaches its limit (entry count or memory), entries are evicted according to the configured policy:
- FIFO: Oldest entry (first in order queue) is evicted
- LRU: Least recently accessed entry (first in order queue) is evicted
- LFU: Entry with lowest frequency counter is evicted
- ARC: Entry with lowest score (frequency × position_weight) is evicted
- Random: Randomly selected entry is evicted
- TLRU: Entry with lowest score (frequency^weight × position × age_factor) is evicted
§Thread Safety
This cache uses parking_lot::RwLock for the cache map and parking_lot::Mutex for the order queue.
The parking_lot implementation provides:
- RwLock for reads: Multiple threads can read concurrently without blocking
- No lock poisoning (simpler API, no Result wrapping)
- Better performance under contention (30-50% faster than std::sync)
- Smaller memory footprint (~40x smaller than std::sync)
- Fair locking algorithm prevents thread starvation
Read-heavy workloads (typical for caches) see 4-5x performance improvement with RwLock compared to Mutex, as multiple threads can read the cache simultaneously.
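This can be demonstrated with parking_lot directly (a standalone sketch using only parking_lot and once_cell, independent of this crate):
use once_cell::sync::Lazy;
use parking_lot::RwLock;
use std::collections::HashMap;
use std::thread;
static MAP: Lazy<RwLock<HashMap<String, i32>>> =
    Lazy::new(|| RwLock::new(HashMap::new()));
fn main() {
    MAP.write().insert("answer".to_string(), 42);
    // Eight readers hold the read lock concurrently; none blocks another.
    // Note there is no Result to unwrap: parking_lot locks cannot be poisoned.
    let readers: Vec<_> = (0..8)
        .map(|_| thread::spawn(|| *MAP.read().get("answer").unwrap()))
        .collect();
    for r in readers {
        assert_eq!(r.join().unwrap(), 42);
    }
}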
§Performance Characteristics
- Get: O(1) for cache lookup, O(n) for LRU/ARC/TLRU reordering
- Insert: O(1) for FIFO/Random, O(n) for LRU/LFU/ARC/TLRU eviction
- Memory: O(n) where n is the number of cached entries
- Synchronization: Lock acquisition overhead on every operation
§Performance Considerations
- Synchronization overhead: Each cache operation requires acquiring locks
- Lock contention: High concurrent access may cause threads to wait
- Read optimization: RwLock allows concurrent reads (no blocking for cache hits)
- Write bottleneck: Only one thread can modify cache at a time
- Shared benefits: All threads benefit from cached results
- Best for: Expensive computations where sharing outweighs synchronization cost
§Examples
§Basic Usage
use cachelito_core::{GlobalCache, EvictionPolicy, CacheEntry};
use once_cell::sync::Lazy;
use parking_lot::{Mutex, RwLock};
use std::collections::{HashMap, VecDeque};
static CACHE_MAP: Lazy<RwLock<HashMap<String, CacheEntry<i32>>>> =
Lazy::new(|| RwLock::new(HashMap::new()));
static CACHE_ORDER: Lazy<Mutex<VecDeque<String>>> =
Lazy::new(|| Mutex::new(VecDeque::new()));
let cache = GlobalCache::new(
&CACHE_MAP,
&CACHE_ORDER,
Some(100), // Max 100 entries
None, // No memory limit
EvictionPolicy::LRU,
Some(60), // 60 second TTL
None, // Default frequency_weight
);
// All threads can access the same cache
cache.insert("key1", 42);
assert_eq!(cache.get("key1"), Some(42));§TLRU with Custom Frequency Weight
use cachelito_core::{GlobalCache, EvictionPolicy};
// Emphasize frequency over recency (good for popular content)
let cache = GlobalCache::new(
&CACHE_MAP,
&CACHE_ORDER,
Some(100),
None,
EvictionPolicy::TLRU,
Some(300),
Some(1.5), // frequency_weight > 1.0
);
// Emphasize recency over frequency (good for time-sensitive data)
let cache = GlobalCache::new(
&CACHE_MAP,
&CACHE_ORDER,
Some(100),
None,
EvictionPolicy::TLRU,
Some(300),
Some(0.3), // frequency_weight < 1.0
);
§With Memory Limits
use cachelito_core::{GlobalCache, EvictionPolicy, MemoryEstimator};
let cache = GlobalCache::new(
&CACHE_MAP,
&CACHE_ORDER,
Some(1000),
Some(100 * 1024 * 1024), // 100MB max
EvictionPolicy::LRU,
Some(300),
None,
);
// Insert with memory tracking (requires MemoryEstimator implementation)
cache.insert_with_memory("key", value);Fields§
§Implementations
impl<R> GlobalCache<R>
where
    R: Clone + 'static,

pub fn new(
    map: &'static Lazy<RwLock<RawRwLock, HashMap<String, CacheEntry<R>>>>,
    order: &'static Lazy<Mutex<RawMutex, VecDeque<String>>>,
    limit: Option<usize>,
    max_memory: Option<usize>,
    policy: EvictionPolicy,
    ttl: Option<u64>,
    frequency_weight: Option<f64>,
    stats: &'static Lazy<CacheStats>,
) -> GlobalCache<R>
Creates a new global cache instance.
§Parameters
- map - Static reference to a RwLock-protected HashMap for storing cache entries
- order - Static reference to a Mutex-protected VecDeque for tracking entry order
- limit - Optional maximum number of entries (None for unlimited)
- max_memory - Optional maximum memory size in bytes (None for unlimited)
- policy - Eviction policy (FIFO, LRU, LFU, ARC, Random, or TLRU)
- ttl - Optional time-to-live in seconds for cache entries (None for no expiration)
- frequency_weight - Optional weight factor for frequency in the TLRU policy
  - Values < 1.0: Emphasize recency and age
  - Values > 1.0: Emphasize frequency
  - None or 1.0: Balanced approach (default)
  - Only used when the policy is TLRU; ignored otherwise
- stats - Static reference to CacheStats for tracking hit/miss statistics (stats feature only)
§Returns
A new GlobalCache instance configured with the provided parameters.
§Examples
§Basic LRU cache with TTL
let cache = GlobalCache::new(
&CACHE_MAP,
&CACHE_ORDER,
Some(1000), // Max 1000 entries
None, // No memory limit
EvictionPolicy::LRU, // LRU eviction
Some(300), // 5 minute TTL
None, // No frequency_weight (not needed for LRU)
#[cfg(feature = "stats")]
&CACHE_STATS,
);
§TLRU with memory limit and custom frequency weight
let cache = GlobalCache::new(
&CACHE_MAP,
&CACHE_ORDER,
Some(1000),
Some(100 * 1024 * 1024), // 100MB max
EvictionPolicy::TLRU, // TLRU eviction
Some(300), // 5 minute TTL
Some(1.5), // Emphasize frequency (popular content)
#[cfg(feature = "stats")]
&CACHE_STATS,
);
pub fn get(&self, key: &str) -> Option<R>
Retrieves a cached value by key.
This method attempts to retrieve a cached value, checking for expiration and updating access patterns based on the eviction policy.
§Parameters
key- The cache key to retrieve
§Returns
- Some(R) - The cached value if found and not expired
- None - If the key is not in cache or the entry has expired
§Behavior by Policy
- FIFO: No updates on cache hit (order remains unchanged)
- LRU: Moves the key to the end of the order queue (most recently used)
- LFU: Increments the frequency counter for the entry
- ARC: Increments frequency counter and updates position in order queue
- Random: No updates on cache hit
- TLRU: Increments frequency counter and updates position in order queue
§TTL Expiration
If a TTL is configured and the entry has expired:
- The entry is removed from both the cache map and order queue
- A cache miss is recorded (if the stats feature is enabled)
- None is returned
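The underlying check is a timestamp comparison; roughly (a sketch, with a helper name that is not part of the crate's API):
use std::time::{SystemTime, UNIX_EPOCH};
// Illustrative TTL check: an entry created at `timestamp` (Unix seconds)
// has expired once more than `ttl` seconds have elapsed.
fn is_expired(timestamp: u64, ttl: Option<u64>) -> bool {
    match ttl {
        None => false, // no TTL configured: entries never expire
        Some(ttl_secs) => {
            let now = SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("system clock before Unix epoch")
                .as_secs();
            now.saturating_sub(timestamp) > ttl_secs
        }
    }
}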
§Statistics
When the stats feature is enabled:
- Cache hits are recorded when a valid entry is found
- Cache misses are recorded when the key doesn’t exist or has expired
§Thread Safety
This method is thread-safe and uses a multi-phase locking strategy:
- Read lock for initial lookup (allows concurrent reads)
- Mutex + Write lock for expired entry removal (if needed)
- Mutex lock for order queue updates (for LRU/ARC/TLRU)
Multiple threads can safely call this method concurrently. Read-heavy workloads benefit from RwLock’s concurrent read capability.
§Performance
- FIFO, Random: O(1) - no reordering needed
- LRU, ARC, TLRU: O(n) - requires finding and moving key in order queue
- LFU: O(1) - only increments counter
- Lock overhead: Read lock for lookup + potential write lock for updates
§Examples
// Insert and retrieve
cache.insert("user:123", user_data);
assert_eq!(cache.get("user:123"), Some(user_data));
// Non-existent key
assert_eq!(cache.get("user:999"), None);
// Expired entry (with TTL)
cache.insert("temp", data);
std::thread::sleep(Duration::from_secs(61)); // Wait for TTL expiration
assert_eq!(cache.get("temp"), None);Sourcepub fn insert(&self, key: &str, value: R)
pub fn insert(&self, key: &str, value: R)
Inserts or updates a value in the cache.
This method stores a new value in the cache or updates an existing one. It handles cache limit enforcement and eviction according to the configured policy.
§Parameters
- key - The cache key
- value - The value to cache
§Behavior
- Creates a new CacheEntry with the current timestamp
- Inserts/updates the entry in the map
- Updates the order queue:
  - If key already exists in queue, removes old position
  - Adds key to the end of the queue
- Enforces cache limit:
  - If limit is set and exceeded, evicts the oldest entry (front of queue)
  - Removes evicted entry from both map and order queue
§Eviction Policies
When the cache limit is reached, entries are evicted according to the policy:
- FIFO: Evicts oldest inserted entry (front of queue)
- LRU: Evicts least recently used entry (front of queue, updated by get())
- LFU: Evicts entry with lowest frequency counter
- ARC: Evicts based on hybrid score (frequency × position_weight)
- Random: Evicts randomly selected entry
- TLRU: Evicts based on TLRU score (frequency^weight × position × age_factor)
§Thread Safety
This method is thread-safe and uses mutex locks to ensure consistency between the map and order queue.
§Example
// Insert a value
cache.insert("user:123", user_data);
// Update existing value
cache.insert("user:123", updated_user_data);
// With limit=2, this will evict the oldest entry
cache.insert("user:456", another_user);
cache.insert("user:789", yet_another_user); // Evicts first entry§Note
This method does NOT require MemoryEstimator trait. It only handles entry-count limits.
If max_memory is configured, use insert_with_memory() instead, which requires
the type to implement MemoryEstimator.
impl<R> GlobalCache<R>
where
    R: Clone + 'static + MemoryEstimator,

pub fn insert_with_memory(&self, key: &str, value: R)
Insert with memory limit support.
This method requires R to implement MemoryEstimator and handles both
memory-based and entry-count-based eviction.
Use this method when max_memory is configured in the cache.
§Arguments
- key - The cache key
- value - The value to cache (must implement MemoryEstimator)
§Memory Management
The method calculates the memory footprint of all cached entries and evicts
entries as needed to stay within the max_memory limit. Eviction follows
the configured policy.
§Safety Check
If the value to be inserted is larger than max_memory, the insertion is
skipped entirely to avoid infinite eviction loops. This ensures the cache
respects the memory limit even if individual values are very large.
§Eviction Behavior by Policy
When memory limit is exceeded:
- FIFO/LRU: Evicts from front of order queue
- LFU: Evicts entry with lowest frequency
- ARC: Evicts based on hybrid score (frequency × position_weight)
- TLRU: Evicts based on TLRU score (frequency^weight × position × age_factor)
- Random: Evicts randomly selected entry
The eviction loop continues until there’s enough memory for the new value.
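The guard and the loop fit together roughly like this (pseudocode-level sketch; evict_one_by_policy and the size bookkeeping are stand-ins, not crate APIs):
// Sketch of the memory-limit flow: reject oversized values, then evict
// until the new value fits within max_memory.
fn insert_with_memory_sketch(new_size: usize, max_memory: usize, mut used: usize) {
    // Safety check: a value larger than the whole budget is never inserted,
    // which prevents an infinite eviction loop.
    if new_size > max_memory {
        return;
    }
    // Evict one entry at a time, per the configured policy, until it fits.
    while used + new_size > max_memory {
        used = used.saturating_sub(evict_one_by_policy());
    }
    // ...then insert the entry and update the order queue.
}
fn evict_one_by_policy() -> usize {
    // Placeholder: would remove one entry chosen by the configured policy
    // and return its estimated size.
    1024
}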
§Entry Count Limit
After satisfying memory constraints, this method also checks the entry count limit (if configured) and evicts additional entries if needed.
§Thread Safety
This method uses write locks to ensure consistency between the map and order queue during eviction and insertion.
§Examples
use cachelito_core::MemoryEstimator;
// Type must implement MemoryEstimator
impl MemoryEstimator for MyLargeStruct {
fn estimate_memory(&self) -> usize {
std::mem::size_of::<Self>() + self.data.capacity()
}
}
// Insert with automatic memory-based eviction
cache.insert_with_memory("large_data", expensive_value);§Performance
- Memory calculation: O(n) - iterates all entries to sum memory
- Eviction: Varies by policy (see individual policy documentation)
- May evict multiple entries in one call if memory limit is tight
pub fn stats(&self) -> &CacheStats
Returns a reference to the cache statistics.
This method is only available when the stats feature is enabled.
§Available Metrics
The returned CacheStats provides:
- hits(): Number of successful cache lookups
- misses(): Number of cache misses (key not found or expired)
- hit_rate(): Ratio of hits to total accesses (0.0 to 1.0)
- total_accesses(): Total number of get operations
§Thread Safety
Statistics use atomic counters (AtomicU64) and can be safely accessed
from multiple threads without additional synchronization.
§Examples
// Get basic statistics
let stats = cache.stats();
println!("Hits: {}", stats.hits());
println!("Misses: {}", stats.misses());
println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
println!("Total accesses: {}", stats.total_accesses());
// Monitor cache performance
let total = stats.total_accesses();
if total > 1000 && stats.hit_rate() < 0.5 {
println!("Warning: Low cache hit rate");
}
§See Also
- CacheStats - The statistics structure
- crate::stats_registry::get() - Access stats by cache name
pub fn clear(&self)
Clears all entries from the cache.
This method removes all entries from both the cache map and the order queue. It’s useful for testing or when you need to completely reset the cache state.
§Thread Safety
This method is thread-safe and can be safely called from multiple threads.
§Example
cache.insert("key1", 42);
cache.insert("key2", 84);
cache.clear();
assert_eq!(cache.get("key1"), None);
assert_eq!(cache.get("key2"), None);Source§impl<T, E> GlobalCache<Result<T, E>>
Implementation of GlobalCache for Result types.
impl<T, E> GlobalCache<Result<T, E>>
Implementation of GlobalCache for Result types.
This specialized implementation provides an insert_result method that only
caches successful (Ok) results, preventing error values from being cached.
§Type Parameters
- T - The success type, must be Clone and Debug
- E - The error type, must be Clone and Debug
§Rationale
Errors are typically transient (network failures, temporary resource unavailability) and should not be cached. Only successful results should be memoized to avoid repeatedly returning stale errors.
§Example
let cache: GlobalCache<Result<String, Error>> = GlobalCache::new(...);
// Only Ok values are cached
let result = fetch_data();
cache.insert_result("key1", &result);
// If result was Err, nothing is cached
// If result was Ok, the value is cached
pub fn insert_result(&self, key: &str, value: &Result<T, E>)
Inserts a Result into the cache, but only if it’s an Ok variant.
This method intelligently caches only successful results, preventing error values from polluting the cache.
This version does NOT require MemoryEstimator. Use insert_result_with_memory()
when max_memory is configured.
§Parameters
- key - The cache key
- value - The Result to potentially cache
§Behavior
- If value is Ok(v): Caches Ok(v.clone()) under the given key
- If value is Err(_): Does nothing, no cache entry is created
§Thread Safety
This method is thread-safe and can be called concurrently from multiple threads.
§Example
fn fetch_user(id: u64) -> Result<User, DbError> {
// ... database query ...
}
let result = fetch_user(123);
cache.insert_result("user:123", &result);
// Success: cached
// Ok(user) -> cache contains Ok(user)
// Failure: not cached (will retry next time)
// Err(db_error) -> cache remains empty for this key
impl<T, E> GlobalCache<Result<T, E>>
Implementation of GlobalCache for Result types WITH MemoryEstimator support.
This specialized implementation provides memory-aware caching for Result types.
§Type Parameters
- T - The success type, must be Clone, Debug, and implement MemoryEstimator
- E - The error type, must be Clone, Debug, and implement MemoryEstimator
pub fn insert_result_with_memory(&self, key: &str, value: &Result<T, E>)
Inserts a Result into the cache with memory limit support.
This method requires both T and E to implement MemoryEstimator. Use this when max_memory is configured.
§Parameters
- key - The cache key
- value - The Result to potentially cache
§Behavior
- If value is Ok(v): Caches Ok(v.clone()) under the given key
- If value is Err(_): Does nothing, no cache entry is created
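§Example
A usage sketch in the style of the earlier examples (Payload, FetchError, and fetch_payload are hypothetical; only the estimate_memory signature follows the MemoryEstimator example shown above):
use cachelito_core::MemoryEstimator;
#[derive(Clone, Debug)]
struct Payload { data: Vec<u8> }
#[derive(Clone, Debug)]
struct FetchError;
// Both the success and error types must implement MemoryEstimator.
impl MemoryEstimator for Payload {
    fn estimate_memory(&self) -> usize {
        std::mem::size_of::<Self>() + self.data.capacity()
    }
}
impl MemoryEstimator for FetchError {
    fn estimate_memory(&self) -> usize {
        std::mem::size_of::<Self>()
    }
}
// Only Ok values are cached; the memory limit is enforced on insert.
let result: Result<Payload, FetchError> = fetch_payload();
cache.insert_result_with_memory("payload:1", &result);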