# cache-rs
A high-performance, memory-efficient cache library for Rust supporting multiple eviction algorithms with O(1) operations.
## ✨ Features
- Multiple eviction algorithms: LRU, LFU, LFUDA, SLRU, GDSF
- High performance: All operations are O(1) with optimized data structures
- Memory efficient: Minimal overhead with careful memory layout
- `no_std` compatible: Works in embedded and resource-constrained environments
- Thread-safe ready: Easy to wrap with `Mutex`/`RwLock` for concurrent access
- Well documented: Comprehensive documentation with usage examples
## 🚀 Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
cache-rs = "0.1.0"
```
Basic usage:

```rust
use cache_rs::LruCache;
use core::num::NonZeroUsize;

// An LRU cache that holds at most two entries.
let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("apple", 3);
assert_eq!(cache.get(&"apple"), Some(&3));
```
## 📚 Algorithm Guide
Choose the right cache algorithm for your use case:
### LRU (Least Recently Used)
Best for: General-purpose caching with temporal locality
```rust
use cache_rs::LruCache;
use core::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
cache.put("key", "value");
```
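To make the recency behavior concrete, here is a hedged sketch using the same `put`/`get` API as above (capacity and keys are illustrative):

```rust
use cache_rs::LruCache;
use core::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("a", 1);
cache.put("b", 2);

cache.get(&"a");   // touch "a": "b" is now the least recently used
cache.put("c", 3); // over capacity: "b" is evicted, "a" survives

assert_eq!(cache.get(&"b"), None);
assert_eq!(cache.get(&"a"), Some(&1));
```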
### SLRU (Segmented LRU)
Best for: Workloads with scan resistance requirements
```rust
use cache_rs::SlruCache;
use core::num::NonZeroUsize;

// Total capacity: 100, Protected segment: 20
let mut cache = SlruCache::new(NonZeroUsize::new(100).unwrap(), NonZeroUsize::new(20).unwrap());
```
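The protected segment is what provides scan resistance. A hedged sketch of the idea, assuming the two-argument constructor above and that a repeated hit promotes an entry into the protected segment:

```rust
use cache_rs::SlruCache;
use core::num::NonZeroUsize;

let mut cache = SlruCache::new(NonZeroUsize::new(100).unwrap(), NonZeroUsize::new(20).unwrap());

// A second access should promote key 0 into the protected segment.
cache.put(0, "hot");
cache.get(&0);

// A one-off scan churns the probationary segment, but the protected
// entry is not displaced by keys that are only ever seen once.
for i in 1..1_000 {
    cache.put(i, "cold");
}
assert_eq!(cache.get(&0), Some(&"hot"));
```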
### LFU (Least Frequently Used)
Best for: Workloads with strong frequency patterns
```rust
use cache_rs::LfuCache;
use core::num::NonZeroUsize;

let mut cache = LfuCache::new(NonZeroUsize::new(100).unwrap());
cache.put("key", "value");
```
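A hedged sketch of frequency-based eviction, assuming hits via `get` raise an entry's frequency count:

```rust
use cache_rs::LfuCache;
use core::num::NonZeroUsize;

let mut cache = LfuCache::new(NonZeroUsize::new(2).unwrap());
cache.put("a", 1);
cache.put("b", 2);

// Raise "a"'s access frequency.
cache.get(&"a");
cache.get(&"a");

// "b" now has the lowest frequency, so it is evicted first.
cache.put("c", 3);
assert_eq!(cache.get(&"b"), None);
assert_eq!(cache.get(&"a"), Some(&1));
```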
### LFUDA (LFU with Dynamic Aging)
Best for: Long-running applications where access patterns change
```rust
use cache_rs::LfudaCache;
use core::num::NonZeroUsize;

let mut cache = LfudaCache::new(NonZeroUsize::new(100).unwrap());
```
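What dynamic aging buys you, in a hedged sketch (the bookkeeping in the comments is the classic LFU-DA scheme; the crate's internals may differ in detail):

```rust
use cache_rs::LfudaCache;
use core::num::NonZeroUsize;

let mut cache = LfudaCache::new(NonZeroUsize::new(2).unwrap());

// Classic LFU-DA: priority(entry) = frequency(entry) + global_age, and
// global_age is raised to the evicted entry's priority on each eviction.
cache.put("old", 1);
for _ in 0..100 {
    cache.get(&"old"); // "old" accumulates a large frequency early on
}

// Under plain LFU, a new entry (frequency 1) would lose to "old" forever.
// With aging, the rising global_age lets newly popular entries out-rank
// "old" once it stops being accessed.
cache.put("new", 2);
```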
### GDSF (Greedy Dual Size Frequency)
Best for: Variable-sized objects (images, files, documents)
```rust
use cache_rs::GdsfCache;
use core::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(1000).unwrap());
cache.put("key", "value", 1); // key, value, size
```
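For intuition: classic GDSF (Cherkasova's algorithm) keeps an inflating clock `L` and ranks each object by `priority = L + frequency / size`, so small, frequently accessed objects are kept in preference to large, rarely accessed ones. Whether this crate uses exactly that formula is an implementation detail, but the `size` argument above is what feeds that trade-off.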
## 📊 Performance Comparison
| Algorithm | Get Operation | Use Case | Memory Overhead |
|-----------|---------------|----------|-----------------|
| LRU | ~887ns | General purpose | Low |
| SLRU | ~983ns | Scan resistance | Medium |
| GDSF | ~7.5µs | Size-aware | Medium |
| LFUDA | ~20.5µs | Aging workloads | Medium |
| LFU | ~22.7µs | Frequency-based | Medium |
*Benchmarks were run on mixed workloads with a Zipf key distribution.*
## 🏗️ no_std Support

Works out of the box in `no_std` environments:
```rust
extern crate alloc;

use cache_rs::LruCache;
use core::num::NonZeroUsize;
use alloc::string::String;

let mut cache = LruCache::new(NonZeroUsize::new(10).unwrap());
cache.put(1u32, String::from("hello"));
```
## ⚙️ Feature Flags
- `hashbrown` (default): Use the `hashbrown` HashMap for better performance
- `nightly`: Enable nightly-only optimizations
- `std`: Enable standard library features (disabled by default)
```toml
# Default: no_std + hashbrown (recommended for most use cases)
cache-rs = "0.1.0"

# std + hashbrown (recommended for std environments)
cache-rs = { version = "0.1.0", features = ["std"] }

# std + hashbrown + nightly optimizations
cache-rs = { version = "0.1.0", features = ["std", "nightly"] }

# no_std + nightly optimizations only
cache-rs = { version = "0.1.0", features = ["nightly"] }

# Only std::HashMap (not recommended - slower than hashbrown)
cache-rs = { version = "0.1.0", default-features = false, features = ["std"] }
```
## 🧵 Thread Safety
The cache types are `!Send` and `!Sync` by design, trading internal synchronization for single-threaded performance. For concurrent access, wrap a cache in `Arc<Mutex<...>>` or `Arc<RwLock<...>>`:
```rust
use cache_rs::LruCache;
use core::num::NonZeroUsize;
use std::sync::{Arc, Mutex};

let cache = Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(100).unwrap())));

// Clone the Arc for use in other threads
let cache_clone = Arc::clone(&cache);
```
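A hedged end-to-end sketch of the pattern (key, value, and capacity are illustrative; `put`/`get` as in Quick Start):

```rust
use cache_rs::LruCache;
use core::num::NonZeroUsize;
use std::sync::{Arc, Mutex};
use std::thread;

let cache = Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(100).unwrap())));
let cache_clone = Arc::clone(&cache);

// The spawned thread takes the clone and locks around each operation
// (coarse-grained, but simple and correct).
let handle = thread::spawn(move || {
    cache_clone.lock().unwrap().put("from-thread", 42);
});
handle.join().unwrap();

assert_eq!(cache.lock().unwrap().get(&"from-thread"), Some(&42));
```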
## 🔧 Advanced Usage
### Custom Hash Function
```rust
use cache_rs::LruCache;
use core::num::NonZeroUsize;
use std::collections::hash_map::RandomState;

let mut cache = LruCache::with_hasher(NonZeroUsize::new(100).unwrap(), RandomState::new());
cache.put("key", "value");
```
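Any `BuildHasher` should slot into the same constructor. For example, a sketch using `std`'s fixed-seed `DefaultHasher` instead of `RandomState` (deterministic across runs, handy for reproducible tests, though not resistant to hash-flooding):

```rust
use cache_rs::LruCache;
use core::num::NonZeroUsize;
use std::collections::hash_map::DefaultHasher;
use std::hash::BuildHasherDefault;

// BuildHasherDefault builds hashers with a fixed seed, unlike RandomState.
let mut cache = LruCache::with_hasher(
    NonZeroUsize::new(100).unwrap(),
    BuildHasherDefault::<DefaultHasher>::default(),
);
cache.put("key", 1);
```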
### Size-aware Caching with GDSF
```rust
use cache_rs::GdsfCache;
use core::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(1000).unwrap());

// Cache different sized objects (key, value, size)
cache.put("thumbnail", vec![0u8; 10], 10);
cache.put("photo", vec![0u8; 100], 100);
cache.put("video", vec![0u8; 1000], 1000);

// GDSF automatically considers size, frequency, and recency
```
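One consequence worth spelling out, as a sketch under the same assumed API: at equal access frequency, the larger object scores the lower priority, so it is the first eviction candidate when the cache needs room.

```rust
use cache_rs::GdsfCache;
use core::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(100).unwrap());
cache.put("big", vec![0u8; 64], 64);
cache.put("small", vec![0u8; 4], 4);

// Both entries have frequency 1, but "big" scores frequency/size = 1/64
// versus 1/4 for "small", so "big" is evicted first under pressure.
```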
## 🏃‍♂️ Benchmarks
Run the included benchmarks to compare performance:
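Assuming the standard Cargo bench setup:

```bash
cargo bench
```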
Example results on modern hardware:
- LRU: Fastest for simple use cases (~887ns per operation)
- SLRU: Good balance of performance and scan resistance (~983ns)
- GDSF: Best for size-aware workloads (~7.5µs)
- LFUDA/LFU: Best for frequency-based patterns (~20µs)
## 📖 Documentation
## 🤝 Contributing
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
### Development
```bash
cargo test                          # Run all tests
cargo fmt -- --check                # Check formatting
cargo clippy                        # Run clippy
cargo build --no-default-features   # Test no_std compatibility
```
## 📄 License
Licensed under the MIT License.
## 🔒 Security
For security concerns, see SECURITY.md.