# cache-rs
A high-performance, memory-efficient cache library for Rust supporting multiple eviction algorithms with O(1) operations.
## ✨ Features

- Multiple eviction algorithms: LRU, LFU, LFUDA, SLRU, GDSF
- High performance: all operations are O(1) with optimized data structures
- Memory efficient: minimal overhead with careful memory layout
- `no_std` compatible: works in embedded and resource-constrained environments
- Thread-safe ready: easy to wrap with `Mutex`/`RwLock` for concurrent access
- Well documented: comprehensive documentation with usage examples
## 🚀 Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
cache-rs = "0.2.0"
```

Basic usage:

```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(2).unwrap());
cache.put("apple", 3);
assert_eq!(cache.get(&"apple"), Some(&3));
```
## 📖 Algorithm Guide

Choose the right cache algorithm for your use case:
### LRU (Least Recently Used)

Best for: general-purpose caching with temporal locality

```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(100).unwrap());
cache.put("key", "value");
```
### SLRU (Segmented LRU)

Best for: workloads with scan resistance requirements

```rust
use cache_rs::SlruCache;
use std::num::NonZeroUsize;

// Total capacity: 100, protected segment: 20
let mut cache = SlruCache::new(
    NonZeroUsize::new(100).unwrap(),
    NonZeroUsize::new(20).unwrap(),
);
```
### LFU (Least Frequently Used)

Best for: workloads with strong frequency patterns

```rust
use cache_rs::LfuCache;
use std::num::NonZeroUsize;

let mut cache = LfuCache::new(NonZeroUsize::new(100).unwrap());
cache.put("key", "value");
```
### LFUDA (LFU with Dynamic Aging)

Best for: long-running applications where access patterns change

```rust
use cache_rs::LfudaCache;
use std::num::NonZeroUsize;

let mut cache = LfudaCache::new(NonZeroUsize::new(100).unwrap());
```
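The dynamic-aging idea can be made concrete with a small sketch. This is a hypothetical illustration of the classic LFUDA scoring rule, not cache-rs's internals: each entry's priority is its hit count plus the cache "age" at insertion, and the age rises to the priority of each evicted entry, so newcomers can displace entries that were popular long ago.

```rust
// Classic LFUDA score: hits plus the global age when the entry was inserted.
fn lfuda_priority(hits: u64, age_at_insert: u64) -> u64 {
    hits + age_at_insert
}

fn main() {
    let age = 50; // the cache has already evicted entries with priority up to 50
    let old_star = lfuda_priority(40, 0);  // 40 hits, inserted when age was 0
    let newcomer = lfuda_priority(5, age); // 5 hits, inserted at the current age
    // Plain LFU would keep the old entry forever; aging lets the new one win.
    assert!(newcomer > old_star);
    println!("old={old_star}, new={newcomer}");
}
```

This is why LFUDA suits long-running workloads: stale popularity decays relative to the ever-rising age instead of pinning old entries in place.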
### GDSF (Greedy Dual Size Frequency)

Best for: variable-sized objects (images, files, documents)

```rust
use cache_rs::GdsfCache;
use std::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(1024).unwrap());
cache.put("logo.png", vec![0u8; 256], 256); // key, value, size
```
## 📊 Performance Comparison
| Algorithm | Get Operation | Use Case | Memory Overhead |
|---|---|---|---|
| LRU | ~887ns | General purpose | Low |
| SLRU | ~983ns | Scan resistance | Medium |
| GDSF | ~7.5µs | Size-aware | Medium |
| LFUDA | ~20.5µs | Aging workloads | Medium |
| LFU | ~22.7µs | Frequency-based | Medium |
*Benchmarks run on mixed workloads with a Zipf distribution.*
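The benchmark note above refers to Zipf-distributed workloads. As a rough, std-only sketch of how such a skewed key stream can be generated (a hypothetical helper, not part of cache-rs; a tiny LCG stands in for a real RNG):

```rust
// Draw `len` keys from a Zipf(s) distribution over `n_keys` ranks:
// rank 1 is hit most often, rank n_keys least often.
fn zipf_stream(n_keys: usize, s: f64, len: usize) -> Vec<usize> {
    // Cumulative (unnormalized) Zipf weights for ranks 1..=n_keys.
    let mut cum = Vec::with_capacity(n_keys);
    let mut total = 0.0;
    for rank in 1..=n_keys {
        total += 1.0 / (rank as f64).powf(s);
        cum.push(total);
    }
    // Deterministic LCG so the stream is reproducible.
    let mut state: u64 = 0x2545_F491_4F6C_DD1D;
    let mut uniform = move || {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (state >> 33) as f64 / (1u64 << 31) as f64 // in [0, 1)
    };
    (0..len)
        .map(|_| {
            let u = uniform() * total;
            // Inverse-CDF lookup: first rank whose cumulative weight covers u.
            cum.iter().position(|&c| u <= c).unwrap_or(n_keys - 1)
        })
        .collect()
}

fn main() {
    let stream = zipf_stream(100, 1.0, 10_000);
    let hot = stream.iter().filter(|&&k| k == 0).count();
    let cold = stream.iter().filter(|&&k| k == 99).count();
    // The head of the distribution dominates the tail.
    assert!(hot > cold);
    println!("key 0: {hot} hits, key 99: {cold} hits");
}
```

Feeding a stream like this into each cache type and counting hits is essentially what a mixed-workload benchmark measures.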
## 🏗️ no_std Support

Works out of the box in `no_std` environments:

```rust
extern crate alloc;

use alloc::string::String;
use cache_rs::LruCache;
use core::num::NonZeroUsize;

let mut cache = LruCache::new(NonZeroUsize::new(10).unwrap());
cache.put(1u32, String::from("value"));
```
## ⚙️ Feature Flags

- `hashbrown` (default): use the hashbrown `HashMap` for better performance
- `nightly`: enable nightly-only optimizations
- `std`: enable standard library features (disabled by default)
- `concurrent`: enable thread-safe concurrent cache types (uses `parking_lot`)

```toml
# Default: no_std + hashbrown (recommended for most use cases)
cache-rs = "0.2.0"

# Concurrent caching (recommended for multi-threaded apps)
cache-rs = { version = "0.2.0", features = ["concurrent"] }

# std + hashbrown (recommended for std environments)
cache-rs = { version = "0.2.0", features = ["std"] }

# std + concurrent + nightly optimizations
cache-rs = { version = "0.2.0", features = ["std", "concurrent", "nightly"] }

# no_std + nightly optimizations only
cache-rs = { version = "0.2.0", features = ["nightly"] }

# Only std::collections::HashMap (not recommended - slower than hashbrown)
cache-rs = { version = "0.2.0", default-features = false, features = ["std"] }
```
## 🧵 Concurrent Cache Support

For high-performance multi-threaded scenarios, cache-rs provides dedicated concurrent cache types behind the `concurrent` feature:

```toml
[dependencies]
cache-rs = { version = "0.2.0", features = ["concurrent"] }
```
### Available Concurrent Types

| Type | Description |
|---|---|
| `ConcurrentLruCache` | Thread-safe LRU with segmented storage |
| `ConcurrentSlruCache` | Thread-safe Segmented LRU |
| `ConcurrentLfuCache` | Thread-safe LFU |
| `ConcurrentLfudaCache` | Thread-safe LFUDA |
| `ConcurrentGdsfCache` | Thread-safe GDSF |
### Usage Example

```rust
use cache_rs::ConcurrentLruCache;
use std::num::NonZeroUsize;
use std::sync::Arc;
use std::thread;

// Create a concurrent cache (default 16 segments)
let cache = Arc::new(ConcurrentLruCache::new(NonZeroUsize::new(100).unwrap()));

// Access from multiple threads
let handles: Vec<_> = (0..4).map(|i| {
    let cache = Arc::clone(&cache);
    thread::spawn(move || cache.put(i, i * 10))
}).collect();

for handle in handles {
    handle.join().unwrap();
}
```
### Zero-Copy Access with `get_with`

Avoid cloning large values by processing them in place:

```rust
use cache_rs::ConcurrentLruCache;
use std::num::NonZeroUsize;

let cache = ConcurrentLruCache::new(NonZeroUsize::new(10).unwrap());
cache.put("data", vec![1, 2, 3, 4]);

// Process the value without cloning it
let sum: Option<i32> = cache.get_with(&"data", |v| v.iter().sum());
```
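The closure-based pattern behind this kind of API can be shown with a minimal, self-contained sketch. The `Cache` type and its `get_with` below are hypothetical stand-ins (a `Mutex`-wrapped `HashMap`, not cache-rs's implementation); the point is that the closure runs while the lock is held, so the value is read in place rather than cloned out.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct Cache(Mutex<HashMap<String, Vec<i32>>>);

impl Cache {
    // Run `f` against the cached value while the internal lock is held,
    // returning the closure's result instead of a clone of the value.
    fn get_with<R>(&self, key: &str, f: impl FnOnce(&Vec<i32>) -> R) -> Option<R> {
        let map = self.0.lock().unwrap();
        map.get(key).map(f)
    }
}

fn main() {
    let cache = Cache(Mutex::new(HashMap::new()));
    cache.0.lock().unwrap().insert("data".into(), vec![1, 2, 3, 4]);

    // Only the i32 sum crosses the lock boundary, not the whole Vec.
    let sum = cache.get_with("data", |v| v.iter().sum::<i32>());
    assert_eq!(sum, Some(10));
}
```

Because only the closure's result leaves the critical section, keep the closure short; heavy work inside it extends how long the entry's lock is held.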
### Segment Tuning

Configure the segment count based on your workload:

```rust
use cache_rs::ConcurrentLruCache;
use std::num::NonZeroUsize;

// More segments = better concurrency, higher memory overhead
let cache: ConcurrentLruCache<u32, u32> =
    ConcurrentLruCache::with_segments(NonZeroUsize::new(1000).unwrap(), 32);
```
### Performance Characteristics
| Segments | 8-Thread Mixed Workload |
|---|---|
| 1 | ~464µs |
| 8 | ~441µs |
| 16 | ~379µs |
| 32 | ~334µs (optimal) |
| 64 | ~372µs |
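Segmented concurrent caches typically shard keys by hash, so threads touching different keys rarely contend on the same lock. This is a hypothetical sketch of that routing step (not necessarily cache-rs's exact scheme):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Route a key to one of `n_segments` independently locked shards.
fn segment_for<K: Hash>(key: &K, n_segments: usize) -> usize {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    (h.finish() as usize) % n_segments
}

fn main() {
    // The same key always lands in the same segment...
    assert_eq!(segment_for(&"user:42", 32), segment_for(&"user:42", 32));
    // ...and every key maps to a valid segment index.
    for k in 0..1000u32 {
        assert!(segment_for(&k, 32) < 32);
    }
    println!("\"user:42\" -> segment {}", segment_for(&"user:42", 32));
}
```

This also explains the table's trade-off: more segments mean fewer writers colliding on the same shard, but each segment carries its own lock and bookkeeping, so returns diminish past the point where contention is already rare.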
### Thread Safety (Manual Wrapping)

For simpler use cases, you can also wrap the single-threaded caches manually:

```rust
use cache_rs::LruCache;
use std::num::NonZeroUsize;
use std::sync::{Arc, Mutex};

let cache = Arc::new(Mutex::new(LruCache::new(NonZeroUsize::new(100).unwrap())));

// Clone the Arc for use in other threads
let cache_clone = Arc::clone(&cache);
cache_clone.lock().unwrap().put("key", "value");
```
## 🔧 Advanced Usage

### Custom Hash Function

```rust
use cache_rs::LruCache;
use std::collections::hash_map::RandomState;
use std::num::NonZeroUsize;

let cache: LruCache<&str, i32> =
    LruCache::with_hasher(NonZeroUsize::new(100).unwrap(), RandomState::new());
```
### Size-aware Caching with GDSF

```rust
use cache_rs::GdsfCache;
use std::num::NonZeroUsize;

let mut cache = GdsfCache::new(NonZeroUsize::new(4096).unwrap());

// Cache different sized objects
cache.put("thumbnail", vec![0u8; 512], 512);
cache.put("avatar", vec![0u8; 1024], 1024);
cache.put("banner", vec![0u8; 2048], 2048);

// GDSF automatically considers size, frequency, and recency
```
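For reference, the classic Greedy-Dual Size Frequency policy scores entries as `priority = clock + frequency / size`, evicting the lowest-priority entry and raising the global clock to its priority. The function below is a hypothetical illustration of that textbook formula; cache-rs may weigh the terms differently.

```rust
// Classic GDSF score: small, hot objects outrank large, cold ones.
// `clock` is a global aging value raised to the priority of each victim.
fn gdsf_priority(clock: f64, frequency: u64, size_bytes: u64) -> f64 {
    clock + frequency as f64 / size_bytes as f64
}

fn main() {
    let small_hot = gdsf_priority(0.0, 10, 512);  // 10 hits, 512 B
    let large_cold = gdsf_priority(0.0, 1, 4096); // 1 hit, 4 KiB
    // The large, cold object has the lower score and is evicted first.
    assert!(small_hot > large_cold);
    println!("small_hot={small_hot:.5}, large_cold={large_cold:.5}");
}
```

Raising the clock on each eviction is what keeps long-resident entries from monopolizing the cache: their fixed scores eventually fall below the rising baseline that new insertions start from.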
## 🏃‍♂️ Benchmarks

Run the included benchmarks (`cargo bench`) to compare performance. Example results on modern hardware:
- LRU: Fastest for simple use cases (~887ns per operation)
- SLRU: Good balance of performance and scan resistance (~983ns)
- GDSF: Best for size-aware workloads (~7.5µs)
- LFUDA/LFU: Best for frequency-based patterns (~20µs)
## 📚 Documentation
## 🤝 Contributing
Contributions welcome! Please see CONTRIBUTING.md for guidelines.
### Development

```sh
# Run all tests
cargo test

# Check formatting
cargo fmt --all -- --check

# Run clippy
cargo clippy --all-targets

# Test no_std compatibility
cargo build --no-default-features

# Run Miri for unsafe code validation (detects undefined behavior)
MIRIFLAGS="-Zmiri-ignore-leaks" cargo +nightly miri test
```
See MIRI_ANALYSIS.md for a detailed Miri usage guide and analysis of findings.
### Release Process

Releases are tag-based. The CI workflow triggers a release only when a version tag is pushed.

```sh
# 1. Update version in Cargo.toml
# 2. Update CHANGELOG.md with release notes
# 3. Commit and push to main
# 4. Create an annotated tag (triggers release)
git tag -a v0.2.0 -m "Release v0.2.0"
git push origin v0.2.0
```
Tag Conventions:

- Format: `vMAJOR.MINOR.PATCH` (e.g., `v0.2.0`, `v1.0.0`)
- Use annotated tags (`git tag -a`), not lightweight tags
- The tag message should summarize the release
What happens on tag push:
- Full CI pipeline runs (test, clippy, doc, no_std, security audit)
- If all checks pass, the crate is published to crates.io
- A GitHub Release is created with auto-generated release notes
Note: Publishing requires the `CARGO_REGISTRY_TOKEN` secret to be configured in repository settings.
## 📄 License
Licensed under the MIT License.
## 🔒 Security
For security concerns, see SECURITY.md.