§Cache-RS
A high-performance, memory-efficient cache library implementing multiple eviction algorithms with both single-threaded and thread-safe concurrent variants.
§Why Cache-RS?
- Zero-Copy Architecture: HashMap stores pointers; values live in cache-friendly linked structures
- O(1) Everything: All operations (get, put, remove) are constant time
- No-std Compatible: Works in embedded environments (with alloc)
- Comprehensive Algorithm Suite: LRU, SLRU, LFU, LFUDA, and GDSF
- Production Ready: Extensive testing, Miri-verified, benchmarked
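The pointer-based design behind the zero-copy and O(1) claims can be illustrated in safe Rust: a HashMap maps each key to an index into an arena whose nodes form a doubly linked recency list, so get, put, and eviction each touch only a constant number of links. `TinyLru` below is a sketch of that idea, not cache-rs's actual internals:

```rust
use std::collections::HashMap;

const NIL: usize = usize::MAX; // sentinel for "no link"

/// Sketch only: the map stores arena indices; arena nodes form a
/// doubly linked recency list as (key, value, prev, next).
struct TinyLru {
    map: HashMap<&'static str, usize>,
    arena: Vec<(&'static str, i32, usize, usize)>,
    head: usize, // most recently used
    tail: usize, // least recently used
    cap: usize,
}

impl TinyLru {
    fn new(cap: usize) -> Self {
        TinyLru { map: HashMap::new(), arena: Vec::new(), head: NIL, tail: NIL, cap }
    }

    // O(1): detach node i from the recency list.
    fn unlink(&mut self, i: usize) {
        let (prev, next) = (self.arena[i].2, self.arena[i].3);
        if prev != NIL { self.arena[prev].3 = next } else { self.head = next }
        if next != NIL { self.arena[next].2 = prev } else { self.tail = prev }
    }

    // O(1): make node i the most recently used.
    fn push_front(&mut self, i: usize) {
        self.arena[i].2 = NIL;
        self.arena[i].3 = self.head;
        if self.head != NIL { self.arena[self.head].2 = i }
        self.head = i;
        if self.tail == NIL { self.tail = i }
    }

    fn get(&mut self, k: &str) -> Option<i32> {
        let &i = self.map.get(k)?;
        self.unlink(i);
        self.push_front(i); // touching a key is two link updates
        Some(self.arena[i].1)
    }

    fn put(&mut self, k: &'static str, v: i32) {
        if let Some(&i) = self.map.get(k) {
            self.arena[i].1 = v;
            self.unlink(i);
            self.push_front(i);
        } else if self.map.len() == self.cap {
            // O(1) eviction: drop the tail entry, reuse its slot.
            let lru = self.tail;
            self.unlink(lru);
            let old_key = self.arena[lru].0;
            self.map.remove(old_key);
            self.arena[lru] = (k, v, NIL, NIL);
            self.map.insert(k, lru);
            self.push_front(lru);
        } else {
            let i = self.arena.len();
            self.arena.push((k, v, NIL, NIL));
            self.map.insert(k, i);
            self.push_front(i);
        }
    }
}

fn main() {
    let mut c = TinyLru::new(2);
    c.put("a", 1);
    c.put("b", 2);
    assert_eq!(c.get("a"), Some(1)); // "a" becomes most recent
    c.put("c", 3);                   // evicts "b"
    assert_eq!(c.get("b"), None);
}
```

cache-rs replaces the Vec arena with its own linked structures, but the relink-on-access mechanics shown here are the essence of the O(1) claim.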
§Quick Start
use cache_rs::LruCache;
use cache_rs::config::LruCacheConfig;
use core::num::NonZeroUsize;
let config = LruCacheConfig {
    capacity: NonZeroUsize::new(100).unwrap(),
    max_size: u64::MAX,
};
let mut cache = LruCache::init(config, None);
cache.put("key", "value");
assert_eq!(cache.get(&"key"), Some(&"value"));
§Algorithm Selection Guide
┌─────────────────────────────────────────────────────────────────────────────┐
│ Which Cache Algorithm Should I Use? │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Is your workload primarily... │
│ │
│ ┌─────────────────┐ │
│ │ Recency-based? │──Yes──▶ Are you worried about scans? │
│ │ (recent = hot) │ │ │
│ └────────┬────────┘ Yes │ No │
│ │ │ │ │
│ No ▼ ▼ │
│ │ ┌──────────┐ ┌──────────┐ │
│ │ │ SLRU │ │ LRU │ │
│ ▼ └──────────┘ └──────────┘ │
│ ┌─────────────────┐ │
│ │ Frequency-based?│──Yes──▶ Does popularity change over time? │
│ │ (popular = hot) │ │ │
│ └────────┬────────┘ Yes │ No │
│ │ │ │ │
│ No ▼ ▼ │
│ │ ┌──────────┐ ┌──────────┐ │
│ │ │ LFUDA │ │ LFU │ │
│ ▼ └──────────┘ └──────────┘ │
│ ┌─────────────────┐ │
│ │ Variable-sized │──Yes──▶ ┌──────────┐ │
│ │ objects? │ │ GDSF │ │
│ └─────────────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
§Available Cache Algorithms
| Algorithm | Description | Best Use Case |
|---|---|---|
| LruCache | Least Recently Used | General purpose, recency-based access |
| SlruCache | Segmented LRU | Mixed workloads with scans |
| LfuCache | Least Frequently Used | Stable popularity patterns |
| LfudaCache | LFU with Dynamic Aging | Long-running, evolving popularity |
| GdsfCache | Greedy Dual Size Frequency | CDNs, variable-sized objects |
§Performance Characteristics
| Algorithm | Get | Put | Remove | Memory/Entry | Scan Resist | Adapts |
|---|---|---|---|---|---|---|
| LRU | O(1) | O(1) | O(1) | ~80 bytes | Poor | N/A |
| SLRU | O(1) | O(1) | O(1) | ~90 bytes | Good | N/A |
| LFU | O(1) | O(1) | O(1) | ~100 bytes | Excellent | No |
| LFUDA | O(1) | O(1) | O(1) | ~110 bytes | Excellent | Yes |
| GDSF | O(1) | O(1) | O(1) | ~120 bytes | Good | Yes |
§Concurrent Caches
Enable the concurrent feature for thread-safe versions:
[dependencies]
cache-rs = { version = "0.2", features = ["concurrent"] }
use cache_rs::ConcurrentLruCache;
use std::sync::Arc;
let cache = Arc::new(ConcurrentLruCache::new(10_000));
// Safe to share across threads
let cache_clone = Arc::clone(&cache);
std::thread::spawn(move || {
    cache_clone.put("key".to_string(), 42);
});
Concurrent caches use lock striping for high throughput:
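A minimal sketch of the striping scheme pictured in the diagram below, built only from std types (`Striped` is illustrative, not the crate's type): each key hashes to one of N independently locked segments, so threads hitting different segments never contend on the same lock.

```rust
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::{Arc, Mutex};

/// Sketch only: N independently locked segments, chosen by key hash.
struct Striped {
    segments: Vec<Mutex<HashMap<String, i32>>>,
}

impl Striped {
    fn new(n: usize) -> Self {
        Striped {
            segments: (0..n).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    // hash(key) % segment_count picks the stripe, as in the diagram.
    fn segment(&self, key: &str) -> &Mutex<HashMap<String, i32>> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.segments[(h.finish() as usize) % self.segments.len()]
    }

    fn put(&self, key: &str, v: i32) {
        self.segment(key).lock().unwrap().insert(key.to_string(), v);
    }

    fn get(&self, key: &str) -> Option<i32> {
        self.segment(key).lock().unwrap().get(key).copied()
    }
}

fn main() {
    let cache = Arc::new(Striped::new(16));
    let handles: Vec<_> = (0..4)
        .map(|t| {
            let c = Arc::clone(&cache);
            std::thread::spawn(move || {
                for i in 0..100 {
                    c.put(&format!("k{t}-{i}"), i);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(cache.get("k0-5"), Some(5));
}
```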
┌────────────────────────────────────────────────────────┐
│ ConcurrentCache (16 segments) │
│ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │Segment 0│ │Segment 1│ │Segment 2│ ... │Segment15│ │
│ │ [Mutex] │ │ [Mutex] │ │ [Mutex] │ │ [Mutex] │ │
│ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │
│ ▲ ▲ ▲ ▲ │
│ │ │ │ │ │
│ hash(k1)%16 hash(k2)%16 hash(k3)%16 hash(kN)%16 │
└────────────────────────────────────────────────────────┘
§Dual-Limit Capacity
All caches support both entry count and size limits:
use cache_rs::LruCache;
use cache_rs::config::LruCacheConfig;
use core::num::NonZeroUsize;
// Limit by both count (1000 entries) AND size (10MB)
let config = LruCacheConfig {
    capacity: NonZeroUsize::new(1000).unwrap(),
    max_size: 10 * 1024 * 1024,
};
let mut cache: LruCache<String, Vec<u8>> = LruCache::init(config, None);
// Track size explicitly
let data = vec![0u8; 1024];
cache.put_with_size("file.bin".to_string(), data, 1024);
§Feature Flags
| Feature | Default | Description |
|---|---|---|
| hashbrown | ✓ | Use hashbrown for no_std HashMap |
| std |  | Enable standard library features |
| concurrent |  | Thread-safe cache implementations |
| nightly |  | Nightly-only optimizations |
§No-std Support
This crate works in no_std environments by default (requires alloc):
// Works in embedded environments!
use cache_rs::LruCache;
use cache_rs::config::LruCacheConfig;
use core::num::NonZeroUsize;
let config = LruCacheConfig {
    capacity: NonZeroUsize::new(100).unwrap(),
    max_size: u64::MAX,
};
let mut cache = LruCache::init(config, None);
cache.put("sensor_1", 42.5f32);
§Detailed Algorithm Descriptions
§LRU (Least Recently Used)
Evicts the item that hasn’t been accessed for the longest time. Simple and effective for workloads with temporal locality.
use cache_rs::LruCache;
use cache_rs::config::LruCacheConfig;
use core::num::NonZeroUsize;
let config = LruCacheConfig {
    capacity: NonZeroUsize::new(2).unwrap(),
    max_size: u64::MAX,
};
let mut cache = LruCache::init(config, None);
cache.put("a", 1);
cache.put("b", 2);
cache.get(&"a"); // "a" becomes most recently used
cache.put("c", 3); // "b" evicted (least recently used)
assert!(cache.get(&"b").is_none());
§SLRU (Segmented LRU)
Divides the cache into probationary and protected segments. New items enter the probationary segment and are promoted to the protected segment on a second access, which gives excellent scan resistance: a one-shot scan churns only the probationary segment.
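The promotion rule can be sketched with plain std queues (`TinySlru` is illustrative and uses O(n) scans for brevity; cache-rs's segments are O(1) linked structures):

```rust
use std::collections::VecDeque;

/// Sketch only. Front = most recent within each segment.
struct TinySlru {
    probationary: VecDeque<&'static str>,
    protected: VecDeque<&'static str>,
    prob_cap: usize,
    prot_cap: usize,
}

impl TinySlru {
    fn new(prob_cap: usize, prot_cap: usize) -> Self {
        TinySlru {
            probationary: VecDeque::new(),
            protected: VecDeque::new(),
            prob_cap,
            prot_cap,
        }
    }

    fn touch(&mut self, k: &'static str) {
        if let Some(i) = self.protected.iter().position(|&x| x == k) {
            // Already protected: just refresh recency.
            self.protected.remove(i);
            self.protected.push_front(k);
        } else if let Some(i) = self.probationary.iter().position(|&x| x == k) {
            // Second access: promote to the protected segment.
            self.probationary.remove(i);
            if self.protected.len() == self.prot_cap {
                // Demote the protected LRU back to probationary.
                let demoted = self.protected.pop_back().unwrap();
                if self.probationary.len() == self.prob_cap {
                    self.probationary.pop_back();
                }
                self.probationary.push_front(demoted);
            }
            self.protected.push_front(k);
        } else {
            // First access: enter probationary; scans only churn here.
            if self.probationary.len() == self.prob_cap {
                self.probationary.pop_back();
            }
            self.probationary.push_front(k);
        }
    }
}

fn main() {
    let mut c = TinySlru::new(3, 2);
    c.touch("hot");
    c.touch("hot"); // promoted to protected
    for k in ["s1", "s2", "s3", "s4"] {
        c.touch(k); // a one-shot scan
    }
    // The scan churned probationary, but "hot" survived in protected.
    assert!(c.protected.contains(&"hot"));
    assert!(!c.probationary.contains(&"s1"));
}
```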
use cache_rs::SlruCache;
use cache_rs::config::SlruCacheConfig;
use core::num::NonZeroUsize;
let config = SlruCacheConfig {
    capacity: NonZeroUsize::new(100).unwrap(),
    protected_capacity: NonZeroUsize::new(20).unwrap(),
    max_size: u64::MAX,
};
let mut cache = SlruCache::init(config, None);
cache.put("hot", 1);
cache.get(&"hot"); // Promoted to protected segment!
§LFU (Least Frequently Used)
Tracks access frequency and evicts the least frequently accessed item. Great for workloads with stable popularity patterns.
use cache_rs::LfuCache;
use cache_rs::config::LfuCacheConfig;
use core::num::NonZeroUsize;
let config = LfuCacheConfig {
    capacity: NonZeroUsize::new(2).unwrap(),
    max_size: u64::MAX,
};
let mut cache = LfuCache::init(config, None);
cache.put("rare", 1);
cache.put("popular", 2);
// Access "popular" multiple times
for _ in 0..10 { cache.get(&"popular"); }
cache.put("new", 3); // "rare" evicted (lowest frequency)
assert!(cache.get(&"popular").is_some());
§LFUDA (LFU with Dynamic Aging)
Addresses LFU’s “cache pollution” problem by incorporating aging. Old popular items gradually lose priority, allowing new items to compete fairly.
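The aging mechanism reduces to a few lines of arithmetic. The numbers and the `priority` helper below are illustrative; they follow the usual LFUDA rule of priority = hits + global age at insertion, with the global age raised to the priority of each evicted item:

```rust
/// Illustrative LFUDA rule: an item's priority is its hit count plus
/// the global age recorded when it was inserted.
fn priority(hits: u64, age_at_insert: u64) -> u64 {
    hits + age_at_insert
}

fn main() {
    // An item inserted at age 0 that accumulated 50 hits:
    let old = priority(50, 0);

    // Evicting an item with priority 50 raises the global age to 50,
    // so a brand-new item with a single hit already competes with the
    // stale favourite instead of being evicted immediately:
    let global_age = old;
    let fresh = priority(1, global_age);
    assert!(fresh > old);
}
```

In plain LFU the new item (frequency 1) would be evicted long before the stale item (frequency 50); the rising age term is what lets newcomers compete.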
use cache_rs::LfudaCache;
use cache_rs::config::LfudaCacheConfig;
use core::num::NonZeroUsize;
let config = LfudaCacheConfig {
    capacity: NonZeroUsize::new(100).unwrap(),
    initial_age: 0,
    max_size: u64::MAX,
};
let mut cache = LfudaCache::init(config, None);
// Old popular items will eventually age out if not accessed
for i in 0..100 {
    cache.put(i, i);
}
§GDSF (Greedy Dual-Size Frequency)
Combines frequency, size, and aging. Priority = (Frequency / Size) + Age. Ideal for caching variable-sized objects like images or API responses.
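The formula is easy to sanity-check with plain arithmetic (`gdsf_priority` is an illustrative helper, not the crate's API): with equal frequency, smaller objects get higher priority, so many small hot objects outlive one large cold one, while enough hits can still keep a large object resident.

```rust
/// Illustrative helper for Priority = Frequency / Size + Age.
fn gdsf_priority(frequency: f64, size: f64, age: f64) -> f64 {
    frequency / size + age
}

fn main() {
    let age = 0.0;
    // A 100-byte and a 10_000-byte object, each accessed once:
    let small = gdsf_priority(1.0, 100.0, age);
    let large = gdsf_priority(1.0, 10_000.0, age);
    assert!(small > large); // the large object is first in line for eviction

    // Frequency compensates for size: a genuinely hot large object wins.
    let large_hot = gdsf_priority(200.0, 10_000.0, age);
    assert!(large_hot > small);
}
```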
use cache_rs::GdsfCache;
use cache_rs::config::GdsfCacheConfig;
use core::num::NonZeroUsize;
let config = GdsfCacheConfig {
    capacity: NonZeroUsize::new(1000).unwrap(),
    initial_age: 0.0,
    max_size: 10 * 1024 * 1024, // 10MB
};
let mut cache: GdsfCache<String, Vec<u8>> = GdsfCache::init(config, None);
// Size-aware insertion
cache.put("small.txt".to_string(), vec![0u8; 100], 100);
cache.put("large.bin".to_string(), vec![0u8; 10000], 10000);
// Small items get higher priority per byte
§Modules
- lru: Least Recently Used cache implementation
- slru: Segmented LRU cache implementation
- lfu: Least Frequently Used cache implementation
- lfuda: LFU with Dynamic Aging cache implementation
- gdsf: Greedy Dual Size Frequency cache implementation
- config: Configuration structures for all cache algorithms
- metrics: Metrics collection for cache performance monitoring
- concurrent: Thread-safe concurrent cache implementations (requires the concurrent feature)
§Re-exports
pub use gdsf::GdsfCache;
pub use lfu::LfuCache;
pub use lfuda::LfudaCache;
pub use lru::LruCache;
pub use slru::SlruCache;
pub use entry::CacheEntry;
pub use meta::GdsfMeta;
pub use meta::LfuMeta;
pub use meta::LfudaMeta;
pub use meta::SlruMeta;
pub use meta::SlruSegment;
§Modules
- config
- Cache configuration structures.
- entry
- Unified cache entry type.
- gdsf
- Greedy Dual-Size Frequency (GDSF) cache implementation.
- lfu
- Least Frequently Used (LFU) cache implementation.
- lfuda
- Least Frequently Used with Dynamic Aging (LFUDA) cache implementation.
- lru
- Least Recently Used (LRU) cache implementation.
- meta
- Algorithm-specific metadata types.
- metrics
- Cache metrics system.
- slru
- Segmented LRU (SLRU) cache implementation.