# 🚀 MiniCache

A fast, lightweight, async-compatible in-memory cache for Rust with TTL (Time-To-Live) support and automatic cleanup. Perfect for async applications that need efficient caching without the complexity of a full caching framework.
## ✨ Features

- 🔥 High Performance: Millions of operations per second
- ⚡ Async/Await Ready: Built for `tokio` and async applications
- ⏰ TTL Support: Automatic expiration with background cleanup
- 🔒 Thread-Safe: Concurrent access via `Arc` + `RwLock`
- 💾 Memory Efficient: Minimal overhead per cache entry
- 📚 Easy to Use: Simple API with comprehensive examples
- 🏆 Battle Tested: Extensive benchmarks and tests included
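The TTL support above boils down to simple per-entry bookkeeping. A minimal, dependency-free sketch of the idea (the `Entry` type here is illustrative, not MiniCache's actual internals):

```rust
use std::time::{Duration, Instant};

// Illustrative sketch of TTL bookkeeping: each entry records when it was
// inserted plus an optional lifetime, and reads treat expired entries as absent.
struct Entry<V> {
    value: V,
    inserted: Instant,
    ttl: Option<Duration>,
}

impl<V> Entry<V> {
    // An entry with no TTL never expires.
    fn is_expired_at(&self, now: Instant) -> bool {
        self.ttl
            .map(|ttl| now.duration_since(self.inserted) >= ttl)
            .unwrap_or(false)
    }

    fn get_at(&self, now: Instant) -> Option<&V> {
        if self.is_expired_at(now) { None } else { Some(&self.value) }
    }
}

fn main() {
    let inserted = Instant::now();
    let entry = Entry { value: "session", inserted, ttl: Some(Duration::from_secs(60)) };

    // Within the TTL the value is visible; past it, the read misses.
    assert_eq!(entry.get_at(inserted + Duration::from_secs(30)), Some(&"session"));
    assert_eq!(entry.get_at(inserted + Duration::from_secs(61)), None);
    println!("TTL bookkeeping behaves as expected");
}
```

A background task sweeping such entries on an interval is what the "automatic cleanup" bullet refers to; reads also skip expired entries lazily.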
## 📦 Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
minicache = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
```
## 🚀 Quick Start

```rust
use minicache::MiniCache;
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Expired entries are swept every 60 seconds
    let cache = MiniCache::new(Duration::from_secs(60));

    // No TTL: the entry lives until removed
    cache.set("user:123", "John Doe", None).await;
    // Per-entry TTL: expires after 5 minutes
    cache.set("session:abc", "temp_data", Some(Duration::from_secs(300))).await;

    if let Some(user) = cache.get(&"user:123").await {
        println!("User: {}", user);
    }

    if cache.contains(&"session:abc").await {
        println!("Session is active");
    }

    cache.remove(&"user:123").await;

    println!("Cache size: {}", cache.len().await);
    println!("All keys: {:?}", cache.keys().await);
}
```
## 📖 Usage Examples

### Basic Operations

```rust
use minicache::MiniCache;
use std::time::Duration;

#[derive(Clone, PartialEq, Debug)]
struct User { id: u32, name: String }

#[tokio::main]
async fn main() {
    // Each cache is typed over a single key type and value type,
    // so use a separate cache per key/value pairing.
    let strings = MiniCache::new(Duration::from_secs(60));
    strings.set("name", "Alice", None).await;
    assert_eq!(strings.get(&"name").await, Some("Alice"));

    let numbers = MiniCache::new(Duration::from_secs(60));
    numbers.set(42, "The Answer", None).await;
    assert_eq!(numbers.get(&42).await, Some("The Answer"));

    // Any Clone + Send + Sync type works as a value
    let users = MiniCache::new(Duration::from_secs(60));
    let user = User { id: 1, name: "Bob".to_string() };
    users.set("user:1", user.clone(), None).await;
    assert_eq!(users.get(&"user:1").await, Some(user));
}
```
### TTL (Time-To-Live) Usage

```rust
use minicache::MiniCache;
use std::time::Duration;
use tokio::time::sleep;

#[tokio::main]
async fn main() {
    let cache = MiniCache::new(Duration::from_millis(100));

    cache.set("temp", "expires soon", Some(Duration::from_millis(200))).await;
    assert_eq!(cache.get(&"temp").await, Some("expires soon"));

    // Once the TTL elapses, the entry is gone
    sleep(Duration::from_millis(250)).await;
    assert_eq!(cache.get(&"temp").await, None);
}
```
### Concurrent Access

```rust
use minicache::MiniCache;
use std::sync::Arc;
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Wrap the cache in Arc so it can be shared across tasks
    let cache = Arc::new(MiniCache::new(Duration::from_secs(60)));
    let mut handles = vec![];

    for i in 0..10 {
        let cache_clone = cache.clone();
        let handle = tokio::spawn(async move {
            for j in 0..1000 {
                let key = format!("task_{}_{}", i, j);
                let value = format!("value_{}_{}", i, j);
                cache_clone.set(key, value, None).await;
            }
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }

    println!("Total entries: {}", cache.len().await);
}
```
### Web Application Example

```rust
use minicache::MiniCache;
use std::sync::Arc;
use std::time::Duration;

type SharedCache = Arc<MiniCache<String, String>>;

async fn get_user_profile(cache: SharedCache, user_id: &str) -> Option<String> {
    let cache_key = format!("user_profile:{}", user_id);

    // Cache hit: return immediately
    if let Some(profile) = cache.get(&cache_key).await {
        return Some(profile);
    }

    // Cache miss: fetch from the backing store and cache for 5 minutes
    let profile = fetch_from_database(user_id).await;
    cache.set(cache_key, profile.clone(), Some(Duration::from_secs(300))).await;
    Some(profile)
}

async fn fetch_from_database(user_id: &str) -> String {
    // Simulate a slow database query
    tokio::time::sleep(Duration::from_millis(100)).await;
    format!("Profile data for user {}", user_id)
}

#[tokio::main]
async fn main() {
    let cache = Arc::new(MiniCache::new(Duration::from_secs(60)));

    // Only the first call pays the database cost; the rest are cache hits
    for _ in 0..5 {
        let profile = get_user_profile(cache.clone(), "123").await;
        println!("Got profile: {:?}", profile);
    }
}
```
## 🔧 API Reference

### Core Methods

| Method | Description |
|--------|-------------|
| `new(cleanup_interval)` | Create a new cache with the given cleanup interval |
| `set(key, value, ttl)` | Store a key-value pair with an optional TTL |
| `get(key)` | Retrieve a value by key |
| `remove(key)` | Delete a specific key |
| `contains(key)` | Check whether a key exists (and has not expired) |
| `clear()` | Remove all entries |
| `len()` | Get the number of valid entries |
| `keys()` | Get all valid keys |
### Generic Types

```rust
MiniCache<K, V>
where
    K: Hash + Eq + Clone + Send + Sync + 'static,
    V: Clone + Send + Sync + 'static,
```
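A custom key type only needs to derive the traits in the bounds above, and the requirement can be checked at compile time. A small sketch (the `UserId` type is hypothetical, for illustration only):

```rust
use std::hash::Hash;

// Compile-time assertion that a type meets the key bounds listed above.
fn assert_cache_key<K: Hash + Eq + Clone + Send + Sync + 'static>() {}

// A hypothetical custom key: deriving these traits is all that's needed.
#[derive(Hash, PartialEq, Eq, Clone)]
struct UserId(u32);

fn main() {
    assert_cache_key::<UserId>();
    assert_cache_key::<String>();
    println!("UserId and String satisfy the key bounds");
}
```

If a type fails a bound (e.g. it contains an `Rc`, which is not `Send`), the `assert_cache_key` call simply won't compile.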
## ⚡ Performance

Based on benchmarks (MacBook Pro M1):

- Basic Reads: ~13.7M operations/second
- Basic Writes: ~9.6M operations/second
- Concurrent Access: ~1.7M operations/second
- Memory Overhead: ~162 bytes per entry
- TTL Cleanup: Sub-millisecond automatic cleanup

Run the benchmarks yourself:

```bash
cargo run --release --example quick_demo
```
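For intuition, throughput figures like those above are typically obtained by timing a tight loop and dividing. A minimal, dependency-free sketch of the measurement method (using a plain `HashMap` as the workload; this is not the crate's actual benchmark harness):

```rust
use std::collections::HashMap;
use std::time::Instant;

// Time a tight insert loop and convert to operations per second.
// A plain HashMap stands in for the cache here, purely to illustrate
// how ops/sec numbers are derived.
fn write_ops_per_sec(n: u64) -> f64 {
    let mut map: HashMap<u64, u64> = HashMap::with_capacity(n as usize);
    let start = Instant::now();
    for i in 0..n {
        map.insert(i, i.wrapping_mul(2));
    }
    let elapsed = start.elapsed().as_secs_f64();
    n as f64 / elapsed
}

fn main() {
    let rate = write_ops_per_sec(1_000_000);
    println!("~{:.1}M writes/sec", rate / 1e6);
}
```

Always benchmark with `--release`; debug builds can be an order of magnitude slower and make any numbers meaningless.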
## 📊 Comparison

| Feature | MiniCache | HashMap | DashMap | moka |
|---------|-----------|---------|---------|------|
| Async/Await | ✅ | ❌ | ❌ | ✅ |
| TTL Support | ✅ | ❌ | ❌ | ✅ |
| Auto Cleanup | ✅ | ❌ | ❌ | ✅ |
| Zero Dependencies\* | ✅ | ✅ | ❌ | ❌ |
| Memory Efficient | ✅ | ✅ | ❌ | ❌ |

\*Except `tokio` for the async runtime
## 🎓 Advanced Usage

### Custom Cleanup Intervals

The cleanup interval controls how often the background task sweeps expired entries:

```rust
// Aggressive cleanup for short-lived data
let fast_cache = MiniCache::new(Duration::from_millis(100));
// Relaxed cleanup for long-lived data
let slow_cache = MiniCache::new(Duration::from_secs(300));
```
### Error Handling

All lookups return `Option`, so cache misses and expired entries are handled without panics:

```rust
use minicache::MiniCache;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let cache = MiniCache::new(Duration::from_secs(60));
    cache.set("key", "value", None).await;

    let _hit = cache.get(&"key").await;

    match cache.get(&"missing").await {
        Some(val) => println!("Found: {}", val),
        None => println!("Key not found or expired"),
    }
}
```
## 📈 Monitoring and Debugging

```rust
use minicache::MiniCache;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let cache = MiniCache::new(Duration::from_secs(60));
    cache.set("key1", "value1", Some(Duration::from_secs(10))).await;
    cache.set("key2", "value2", None).await;

    println!("Cache size: {}", cache.len().await);
    println!("All keys: {:?}", cache.keys().await);

    for key in ["key1", "key2", "key3"] {
        if cache.contains(&key).await {
            println!("{}: exists", key);
        } else {
            println!("{}: missing or expired", key);
        }
    }
}
```
## 🧪 Testing

Run the test suite:

```bash
cargo test
```

Run with output:

```bash
cargo test -- --nocapture
```

Test specific modules:

```bash
cargo test cache_operations
```
## 🏁 Benchmarking

Quick performance demo:

```bash
cargo run --release --example quick_demo
```

Detailed benchmarks:

```bash
cargo bench
```

Memory profiling:

```bash
cargo run --release --example memory_profiler
```
## 🤝 Contributing

Contributions are welcome! Please read our Contributing Guide for details.

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Run tests (`cargo test`)
- Commit changes (`git commit -am 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🔗 Links
## 📝 Changelog
See CHANGELOG.md for version history and breaking changes.
---

Made with ❤️ for the Rust community