Cachelito
A lightweight, thread-safe caching library for Rust that provides automatic memoization through procedural macros.
Features
- 🚀 Easy to use: Simply add the `#[cache]` attribute to any function or method
- 🌐 Global scope by default: Cache shared across all threads (use `scope = "thread"` for thread-local)
- ⚡ High-performance synchronization: Uses `parking_lot::RwLock` for global caches, enabling concurrent reads
- 🔒 Thread-local option: Optional thread-local storage with `scope = "thread"` for maximum performance
- 🎯 Flexible key generation: Supports custom cache key implementations
- 🎨 Result-aware: Intelligently caches only successful `Result::Ok` values
- 🗑️ Cache entry limits: Control growth with a numeric `limit`
- 💾 Memory-based limits (v0.10.0): New `max_memory = "100MB"` attribute for memory-aware eviction
- 📊 Eviction policies: FIFO, LRU (default), LFU (v0.8.0), ARC (v0.9.0), Random (v0.11.0)
- 🎯 ARC (Adaptive Replacement Cache): Self-tuning policy combining recency & frequency
- 🎲 Random Replacement: O(1) eviction for baseline benchmarks and random access patterns
- ⏱️ TTL support: Time-to-live expiration for automatic cache invalidation
- 🔥 Smart Invalidation (v0.12.0): Tag-based, event-driven, and dependency-based cache invalidation
- 🎯 Conditional Invalidation (v0.13.0): Runtime invalidation with custom check functions and named invalidation checks
- 📏 `MemoryEstimator` trait: Used internally for memory-based limits (customizable for user types)
- 📈 Statistics (v0.6.0+): Track hit/miss rates via the `stats` feature and `stats_registry`
- 🔮 Async/await support (v0.7.0): Dedicated `cachelito-async` crate (lock-free DashMap)
- ✅ Type-safe: Full compile-time type checking
- 📦 Minimal dependencies: Uses `parking_lot` for optimal performance
Quick Start
For Synchronous Functions
Add this to your Cargo.toml:
```toml
[dependencies]
cachelito = "0.13.0"
# Or with statistics:
# cachelito = { version = "0.13.0", features = ["stats"] }
```
For Async Functions
Note: `cachelito-async` follows the same versioning as the `cachelito` core crate (0.13.x).
```toml
[dependencies]
cachelito-async = "0.13.0"
tokio = { version = "1", features = ["full"] }
```
Which Version Should I Use?
| Use Case | Crate | Macro | Best For |
|---|---|---|---|
| Sync functions | `cachelito` | `#[cache]` | CPU-bound computations |
| Async functions | `cachelito-async` | `#[cache_async]` | I/O-bound / network operations |
| Thread-local cache | `cachelito` | `#[cache(scope = "thread")]` | Per-thread isolated cache |
| Global shared cache | `cachelito` / `cachelito-async` | `#[cache]` / `#[cache_async]` | Cross-thread/task sharing |
| High concurrency | `cachelito-async` | `#[cache_async]` | Many concurrent async tasks |
| Statistics tracking | `cachelito` (v0.6.0+) | `#[cache]` + `stats` feature | Performance monitoring |
| Memory limits | `cachelito` (v0.10.0+) | `#[cache(max_memory = "64MB")]` | Large objects / controlled memory usage |
Quick Decision:
- 🔄 Synchronous code? → Use `cachelito`
- ⚡ Async/await code? → Use `cachelito-async`
- 💾 Need memory-based eviction? → Use `cachelito` v0.10.0+
Usage
Basic Function Caching
Annotate a function with `#[cache]` (imported via `use cachelito::cache;`) and repeated calls with the same arguments return the memoized result.
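A minimal sketch (the function and its body are illustrative, not taken from the crate docs):

```rust
use cachelito::cache;

// Any deterministic function works; results are keyed by the arguments.
#[cache]
fn fibonacci(n: u64) -> u64 {
    if n <= 1 { n } else { fibonacci(n - 1) + fibonacci(n - 2) }
}

fn main() {
    let first = fibonacci(30); // computed, then cached
    let again = fibonacci(30); // served from the cache
    assert_eq!(first, again);
}
```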
Caching with Methods
The `#[cache]` attribute also works with methods. The `&self` receiver participates in the cache key, so the type typically opts into `DefaultCacheableKey` (or implements `CacheableKey`).
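A sketch, assuming the receiver type opts into the default `Debug`-based key (the `Calculator` type is invented for illustration):

```rust
use cachelito::{cache, DefaultCacheableKey};

#[derive(Debug, Clone)]
struct Calculator {
    precision: u32,
}

// Assumed opt-in: lets `&self` participate in the cache key via Debug.
impl DefaultCacheableKey for Calculator {}

impl Calculator {
    #[cache]
    fn compute(&self, input: u64) -> u64 {
        input.pow(self.precision) // imagine heavier work here
    }
}
```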
Custom Cache Keys
For complex types, you can implement custom cache key generation:
Option 1: Use Default Debug-based Key
Enable default cache key generation based on the type's `Debug` output by opting into `DefaultCacheableKey`.
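A sketch, assuming `DefaultCacheableKey` is an opt-in marker that derives the key from the type's `Debug` representation:

```rust
use cachelito::{cache, DefaultCacheableKey};

#[derive(Debug, Clone)]
struct Point {
    x: i64,
    y: i64,
}

// Opt in: the cache key becomes the Debug representation of Point.
impl DefaultCacheableKey for Point {}

#[cache]
fn distance_from_origin(p: Point) -> f64 {
    (((p.x * p.x) + (p.y * p.y)) as f64).sqrt()
}
```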
Option 2: Custom Key Implementation
Implement `CacheableKey` directly for a more efficient, purpose-built key.
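A sketch, assuming `to_cache_key()` returns a `String` (this README stores keys as Debug-formatted strings, so the shape matches; check the crate docs for the exact trait definition):

```rust
use cachelito::{cache, CacheableKey};

#[derive(Clone)]
struct UserId(u64);

// Custom key: cheaper and more stable than Debug formatting.
impl CacheableKey for UserId {
    fn to_cache_key(&self) -> String {
        self.0.to_string()
    }
}

#[cache]
fn load_user(id: UserId) -> String {
    format!("user-{}", id.0)
}
```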
Caching Result Types
Functions returning `Result<T, E>` only cache successful results; `Err` values are never stored, so failed calls are retried.
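A sketch; `parse_number` is illustrative:

```rust
use cachelito::cache;

// Ok values are cached; an Err re-runs the function on the next call.
#[cache]
fn parse_number(input: String) -> Result<i64, String> {
    input.trim().parse::<i64>().map_err(|e| e.to_string())
}

fn main() {
    assert_eq!(parse_number("42".to_string()), Ok(42)); // cached
    assert!(parse_number("oops".to_string()).is_err()); // never cached
}
```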
Cache Limits and Eviction Policies
Control memory usage by setting cache limits and choosing an eviction policy:
LRU (Least Recently Used) - Default
`#[cache(limit = 100)]` keeps at most 100 entries with LRU eviction. LRU is the default, so `policy = "lru"` may be omitted.
FIFO (First In, First Out)
`#[cache(limit = 100, policy = "fifo")]` keeps at most 100 entries, evicting the oldest insertion first.
LFU (Least Frequently Used)
`#[cache(limit = 100, policy = "lfu")]` keeps at most 100 entries, evicting the least frequently accessed entry.
ARC (Adaptive Replacement Cache)
`#[cache(limit = 100, policy = "arc")]` keeps at most 100 entries, balancing recency and frequency adaptively.
Random Replacement
`#[cache(limit = 100, policy = "random")]` keeps at most 100 entries, evicting a random entry in O(1). All five forms appear together in the sketch below.
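A combined sketch of the five attribute forms; `policy = "fifo"` appears verbatim later in this README, and the other lowercase policy names are assumed to follow the same convention:

```rust
use cachelito::cache;

#[cache(limit = 100)]                    // LRU is the default policy
fn lru_cached(n: u64) -> u64 { n * n }

#[cache(limit = 100, policy = "fifo")]   // evict the oldest insertion
fn fifo_cached(n: u64) -> u64 { n * n }

#[cache(limit = 100, policy = "lfu")]    // evict the least frequently used
fn lfu_cached(n: u64) -> u64 { n * n }

#[cache(limit = 100, policy = "arc")]    // adaptive recency + frequency
fn arc_cached(n: u64) -> u64 { n * n }

#[cache(limit = 100, policy = "random")] // evict a random entry, O(1)
fn random_cached(n: u64) -> u64 { n * n }
```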
Policy Comparison:
| Policy | Evicts | Best For | Performance |
|---|---|---|---|
| LRU | Least recently accessed | Temporal locality (recent items matter) | O(n) on hit |
| FIFO | Oldest inserted | Simple, predictable behavior | O(1) |
| LFU | Least frequently accessed | Frequency patterns (popular items matter) | O(n) on evict |
| ARC | Adaptive (recency + frequency) | Mixed workloads, self-tuning | O(n) on evict/hit |
| Random | Randomly selected | Baseline benchmarks, random access | O(1) |
Choosing the Right Policy:
- FIFO: Simple, predictable, minimal overhead. Use when you just need basic caching.
- LRU: Best for most use cases with temporal locality (recent items are likely to be accessed again).
- LFU: Best when certain items are accessed much more frequently (like "hot" products in e-commerce).
- ARC: Best for workloads with mixed patterns - automatically adapts between recency and frequency.
- Random: Best for baseline benchmarks, truly random access patterns, or when minimizing overhead is critical.
Time-To-Live (TTL) Expiration
Set automatic expiration times for cached entries:
Pass a `ttl` to the attribute to give entries a lifetime; expired entries are removed lazily on access. TTL combines freely with `limit` and `policy`.
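A sketch, assuming `ttl` is given in seconds (this README pairs the attribute with a 60-second expiry):

```rust
use cachelito::cache;

// Entries expire 60 seconds after insertion, then are recomputed lazily.
#[cache(ttl = 60)]
fn current_weather(city: String) -> String {
    format!("forecast for {city}") // imagine a slow API call here
}

// TTL combined with a bounded LRU cache.
#[cache(ttl = 60, limit = 100, policy = "lru")]
fn lookup(id: u64) -> u64 {
    id * 7
}
```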
Benefits:
- Automatic expiration: Old data is automatically removed
- Per-entry tracking: Each entry has its own timestamp
- Lazy eviction: Expired entries removed on access
- Works with policies: Compatible with FIFO and LRU
Global Scope Cache
By default, the cache is shared across all threads (global scope). Use scope = "thread" for thread-local caches where
each thread has its own independent cache:
`#[cache]` is global by default (one cache shared by every thread); `#[cache(scope = "thread")]` gives each thread its own independent cache.
When to use global scope (default):
- ✅ Cross-thread sharing: All threads benefit from cached results
- ✅ Statistics monitoring: Full access to cache statistics via `stats_registry`
- ✅ Expensive operations: Computation cost outweighs synchronization overhead
- ✅ Shared data: Same function called with same arguments across threads
When to use thread-local (scope = "thread"):
- ✅ Maximum performance: No synchronization overhead
- ✅ Thread isolation: Each thread needs independent cache
- ✅ Thread-specific data: Different threads process different data
Performance considerations:
- Global (default): Uses `RwLock` for synchronization, allowing concurrent reads
- Thread-local: No synchronization overhead, but the cache is not shared
For example, a global cache lets worker threads reuse each other's results, while a thread-local cache isolates them:
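A sketch (both functions are illustrative):

```rust
use cachelito::cache;
use std::thread;

// Global by default: all threads share one cache.
#[cache]
fn expensive(n: u64) -> u64 {
    n * n // imagine heavy work here
}

// Thread-local: each spawned thread starts with an empty cache.
#[cache(scope = "thread")]
fn per_thread(n: u64) -> u64 {
    n + 1
}

fn main() {
    let handles: Vec<_> = (0..4)
        .map(|_| thread::spawn(|| expensive(42))) // one thread computes, the rest hit
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```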
Performance with Large Values
The cache clones values on every get operation. For large values (big structs, vectors, strings), this can be expensive. Wrap your return values in `Arc<T>` to share ownership without copying data.
Problem: Expensive Cloning
A cached function that returns a large `Vec<u8>` or `String` directly pays a full deep copy on every cache hit.
Solution: Use Arc
Return `Arc<T>` instead of the value directly; a cache hit then clones only a pointer and a reference count.
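A sketch; `load_blob` is invented to show the shape:

```rust
use cachelito::cache;
use std::sync::Arc;

// Returning Arc<Vec<u8>> means a cache hit clones a pointer, not a megabyte.
#[cache]
fn load_blob(id: u64) -> Arc<Vec<u8>> {
    Arc::new(vec![id as u8; 1_000_000]) // built once per id
}

fn main() {
    let first = load_blob(1);  // computed and cached
    let second = load_blob(1); // cheap Arc clone from the cache
    assert!(Arc::ptr_eq(&first, &second)); // same underlying allocation
}
```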
Real-World Example: Caching Parsed Data
The same pattern suits expensive parsing: return `Arc<ParsedData>` from the cached parser so every caller shares the single parsed instance instead of re-cloning it.
When to Use Arc
Use Arc when:
- ✅ Values are large (>1KB)
- ✅ Values contain collections (Vec, HashMap, String)
- ✅ Values are frequently accessed from cache
- ✅ Multiple parts of your code need access to the same data
Don't need Arc when:
- ❌ Values are small primitives (i32, f64, bool)
- ❌ Values are rarely accessed from cache
- ❌ Clone is already cheap (e.g., types with the `Copy` trait)
Combining Arc with Global Scope
For maximum efficiency in multi-threaded applications, combine the global (default) scope with `Arc` return values, as in the sketch below.
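A sketch (the simulated database call is illustrative):

```rust
use cachelito::cache;
use std::sync::Arc;
use std::thread;

// Global cache + Arc return: one "database call" serves every thread,
// and each hit copies only a pointer and reference count.
#[cache]
fn load_config(env: String) -> Arc<String> {
    Arc::new(format!("config for {env}")) // imagine a DB/API call here
}

fn main() {
    let handles: Vec<_> = (0..8)
        .map(|_| thread::spawn(|| load_config("prod".to_string())))
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```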
Benefits:
- 🚀 Only one database/API call across all threads
- 💾 Minimal memory overhead (Arc clones are just pointer + ref count)
- 🔒 Thread-safe sharing with minimal synchronization cost
- ⚡ Fast cache access with no data copying
Synchronization with parking_lot
Starting from version 0.5.0, Cachelito uses parking_lot for
synchronization in global scope caches. The implementation uses RwLock for the cache map and Mutex for the
eviction queue, providing optimal performance for read-heavy workloads.
Why parking_lot + RwLock?
RwLock Benefits (for the cache map):
- Concurrent reads: Multiple threads can read simultaneously without blocking
- 4-5x faster for read-heavy workloads (typical for caches)
- Perfect for 90/10 read/write ratio (common in cache scenarios)
- Only writes acquire exclusive lock
parking_lot Advantages over std::sync:
- 30-50% faster under high contention scenarios
- Adaptive spinning for short critical sections (faster than kernel-based locks)
- Fair scheduling prevents thread starvation
- No lock poisoning - simpler API without `Result` wrapping
- ~40x smaller memory footprint per lock (~1 byte vs ~40 bytes)
Architecture
GlobalCache Structure:
```text
┌─────────────────────────────────────┐
│ map:   RwLock<HashMap<...>>         │ ← Multiple readers OR one writer
│ order: Mutex<VecDeque<...>>         │ ← Always exclusive (needs modification)
└─────────────────────────────────────┘
```
Read Operation (cache hit):
```text
Thread 1 ──┐
Thread 2 ──┼──> RwLock.read() ──> ✅ Concurrent, no blocking
Thread 3 ──┘
```
Write Operation (cache miss):
```text
Thread 1 ──> RwLock.write() ──> ⏳ Exclusive access
```
Benchmark Results
Performance comparison on concurrent cache access:
Mixed workload (8 threads, 100 operations, 90% reads / 10% writes):
```text
Thread-Local Cache:    1.26ms  (no synchronization baseline)
Global + RwLock:       1.84ms  (concurrent reads)
Global + Mutex only:  ~3.20ms  (all operations serialized)
std::sync::RwLock:    ~2.80ms  (less optimized)
```
RwLock is ~74% faster than a plain Mutex for this read-heavy workload.
Pure concurrent reads (20 threads, 100 reads each):
```text
With RwLock:  ~2ms  (all threads read simultaneously)
With Mutex:  ~40ms  (threads wait in queue)
```
That is a 20x improvement for concurrent reads.
Running the Benchmarks
You can run the included benchmarks to see the performance on your hardware:
The repository includes: the main cache benchmarks (covering RwLock concurrent reads), a RwLock concurrent-reads demo, a parking_lot demo, and a thread-local vs. global comparison.
How It Works
The `#[cache]` macro generates code that:
- Creates the cache storage: a `thread_local!` `RefCell<HashMap>` for `scope = "thread"`, or a shared `RwLock<HashMap>` for the default global scope
- Creates an order queue (`VecDeque`) for eviction tracking
- Wraps cached values in `CacheEntry` to track insertion timestamps
- Builds a cache key from the function arguments using `CacheableKey::to_cache_key()`
- Checks the cache before executing the function body
- Validates TTL expiration if configured, removing expired entries
- Stores the result in the cache after execution
- For `Result<T, E>` types, only caches `Ok` values
- When the cache limit is reached, evicts entries according to the configured policy:
  - FIFO: Removes the oldest inserted entry
  - LRU: Removes the least recently accessed entry
Async/Await Support
Starting with version 0.7.0, Cachelito provides dedicated support for async/await functions through the
cachelito-async crate.
Installation
```toml
[dependencies]
cachelito-async = "0.13.0"
tokio = { version = "1", features = ["full"] }
# or use async-std, smol, etc.
```
Quick Example
Annotate an `async fn` with `#[cache_async]`:
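A sketch, assuming a tokio runtime and that `#[cache_async]` accepts the same `ttl`/`limit` attributes as the sync macro (`fetch_user` is illustrative):

```rust
use cachelito_async::cache_async;
use std::time::Duration;

// Shared across all tasks and threads, keyed by the arguments.
#[cache_async(ttl = 60, limit = 1000)]
async fn fetch_user(id: u64) -> String {
    tokio::time::sleep(Duration::from_millis(100)).await; // simulate I/O
    format!("user-{id}")
}

#[tokio::main]
async fn main() {
    let first = fetch_user(1).await;  // awaits the simulated network call
    let second = fetch_user(1).await; // returns instantly from the cache
    assert_eq!(first, second);
}
```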
Key Features of Async Cache
| Feature | Sync (`#[cache]`) | Async (`#[cache_async]`) |
|---|---|---|
| Scope | Global or thread-local | Always global |
| Storage | `RwLock<HashMap>` or `RefCell<HashMap>` | `DashMap` (lock-free) |
| Concurrency | `parking_lot::RwLock` | Lock-free concurrent |
| Best for | CPU-bound operations | I/O-bound async operations |
| Blocking | May block on lock | No blocking |
| Policies | FIFO, LRU, LFU, ARC, Random | FIFO, LRU, LFU, ARC, Random |
| TTL | ✅ Supported | ✅ Supported |
Why DashMap for Async?
The async version uses DashMap instead of traditional locks because:
- ✅ Lock-free: No blocking, perfect for async contexts
- ✅ High concurrency: Multiple tasks can access cache simultaneously
- ✅ No async overhead: Cache operations don't require `.await`
- ✅ Thread-safe: Safe to share across tasks and threads
- ✅ Performance: Optimized for high-concurrency scenarios
Limitations
- Always Global: No thread-local option (not needed in async context)
- Cache Stampede: Multiple concurrent requests for the same key may execute simultaneously (consider using request coalescing patterns for production use)
Complete Documentation
See the cachelito-async README for:
- Detailed API documentation
- More examples (LRU, concurrent access, TTL)
- Performance considerations
- Migration guide from sync version
Examples
The library includes several comprehensive examples demonstrating different features:
Run Examples
Bundled examples cover: basic caching with custom types (default cache key), custom cache key implementations, `Result` caching (only `Ok` values cached), cache limits with the LRU policy, LRU and FIFO eviction, the default policy, TTL expiration, global-scope caching shared across threads, and async caching (requires `cachelito-async`). Run any of them with `cargo run --example <name>`.
Example Output (LRU Policy):
```text
=== Testing LRU Cache Policy ===
Calling compute_square(1)...
Executing compute_square(1)
Result: 1
Calling compute_square(2)...
Executing compute_square(2)
Result: 4
Calling compute_square(3)...
Executing compute_square(3)
Result: 9
Calling compute_square(2)...
Result: 4 (should be cached)
Calling compute_square(4)...
Executing compute_square(4)
Result: 16
...
Total executions: 6
✅ LRU Policy Test PASSED
```
Performance Considerations
- Thread-local storage (`scope = "thread"`): Each thread has its own cache, so cached data is not shared across threads, but there are no locks or synchronization overhead.
- Global scope (default): The cache is shared across all threads using `parking_lot::RwLock` (plus a `Mutex` for the eviction queue). This adds synchronization overhead but allows cache sharing.
- Memory usage: Without a limit, the cache grows unbounded. Use the `limit` parameter (or `max_memory`) to control memory usage.
- Cache key generation: Uses the `CacheableKey::to_cache_key()` method. The default implementation uses `Debug` formatting, which may be slow for complex types. Consider implementing `CacheableKey` directly for better performance.
- Value cloning: The cache clones values on every access. For large values (>1KB), wrap them in `Arc<T>` to avoid expensive clones. See the Performance with Large Values section for details.
- Cache hit performance: O(1) hash map lookup, with LRU adding an O(n) reordering cost on hits
- FIFO: Minimal overhead, O(1) eviction
- LRU: Slightly higher overhead due to reordering on access, O(n) for reordering but still efficient
Cache Statistics
Available since v0.6.0 with the stats feature flag.
Track cache performance metrics including hit/miss rates and access counts. Statistics are automatically collected for global-scoped caches and can be queried programmatically.
Enabling Statistics
Add the stats feature to your Cargo.toml:
```toml
[dependencies]
cachelito = { version = "0.6.0", features = ["stats"] }
```
Basic Usage
Statistics are automatically tracked for global caches (default):
For example, after two repeated calls each to a cached function:
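A sketch matching the output below; the `Option`-returning shape of `stats_registry::get` is an assumption:

```rust
use cachelito::{cache, stats_registry};

#[cache] // global by default, so stats land in the registry
fn square(n: u64) -> u64 {
    n * n
}

fn main() {
    square(2); // miss
    square(2); // hit
    square(3); // miss
    square(3); // hit

    // Registered under the function name since no `name` attribute is set.
    if let Some(stats) = stats_registry::get("square") {
        println!("Total accesses: {}", stats.total_accesses());
        println!("Cache hits: {}", stats.hits());
        println!("Cache misses: {}", stats.misses());
        println!("Hit rate: {:.2}%", stats.hit_rate() * 100.0);
        println!("Miss rate: {:.2}%", stats.miss_rate() * 100.0);
    }
}
```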
Output:
```text
Total accesses: 4
Cache hits: 2
Cache misses: 2
Hit rate: 50.00%
Miss rate: 50.00%
```
Statistics Registry API
The `stats_registry` module provides centralized access to all cache statistics:
Get Statistics
`stats_registry::get("cache_name")` returns the `CacheStats` for a named cache (see the sketch above).
List All Cached Functions
The registry can enumerate every registered cache name.
Reset Statistics
Call `reset()` on a cache's `CacheStats` to zero its counters.
Statistics Metrics
The `CacheStats` struct provides the following metrics:
- `hits()` - Number of successful cache lookups
- `misses()` - Number of cache misses (computation required)
- `total_accesses()` - Total number of get operations
- `hit_rate()` - Ratio of hits to total accesses (0.0 to 1.0)
- `miss_rate()` - Ratio of misses to total accesses (0.0 to 1.0)
- `reset()` - Reset all counters to zero
Concurrent Statistics Example
Statistics are thread-safe and work correctly with concurrent access:
Spawn several threads that call the same global `#[cache]` function: hit and miss counters are updated atomically, so the totals stay consistent under concurrency.
Monitoring Cache Performance
Use statistics to monitor and optimize cache performance:
Query `stats_registry` after (or during) a workload and alert when `hit_rate()` drops below your target threshold.
Custom Cache Names
Use the `name` attribute to give your caches custom identifiers in the statistics registry; the sketch below registers two API versions under separate names (global scope by default).
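A sketch with invented function names:

```rust
use cachelito::{cache, stats_registry};

// API v1 - tracked under a custom identifier instead of the fn name.
#[cache(name = "user_api_v1")]
fn fetch_user_v1(id: u64) -> String {
    format!("v1:user-{id}")
}

// API v2 - tracked separately, so the two strategies can be compared.
#[cache(name = "user_api_v2")]
fn fetch_user_v2(id: u64) -> String {
    format!("v2:user-{id}")
}

fn main() {
    fetch_user_v1(1);
    fetch_user_v2(1);
    let v1_stats = stats_registry::get("user_api_v1"); // query by custom name
    println!("v1 tracked: {}", v1_stats.is_some());
}
```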
Benefits:
- Descriptive names: Use meaningful identifiers instead of function names
- Multiple versions: Track different implementations separately
- Easier debugging: Identify caches by purpose rather than function name
- Better monitoring: Compare performance of different cache strategies
Default behavior: If `name` is not provided, the function name is used as the identifier.
Smart Cache Invalidation
Starting from version 0.12.0, Cachelito supports smart invalidation mechanisms beyond simple TTL expiration, providing fine-grained control over when and how cached entries are invalidated.
Invalidation Strategies
Cachelito supports three complementary invalidation strategies:
- Tag-based invalidation: Group related entries and invalidate them together
- Event-driven invalidation: Trigger invalidation when specific events occur
- Dependency-based invalidation: Cascade invalidation to dependent caches
Tag-Based Invalidation
Use tags to group related cache entries and invalidate them together:
Declare tags on related cached functions, then call `invalidate_by_tag("user_data")` when the underlying data changes; every cache carrying that tag is invalidated at once.
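A sketch; the `tags = [...]` attribute syntax is an assumption (check the crate docs for the exact form), while `invalidate_by_tag` is the API listed below:

```rust
use cachelito::{cache, invalidate_by_tag};

// Assumed attribute form for grouping related caches under a tag.
#[cache(tags = ["user_data"])]
fn get_user_profile(id: u64) -> String {
    format!("profile-{id}")
}

#[cache(tags = ["user_data"])]
fn get_user_settings(id: u64) -> String {
    format!("settings-{id}")
}

fn on_user_updated() {
    // Clears every cache tagged "user_data"; returns how many were hit.
    let cleared = invalidate_by_tag("user_data");
    println!("invalidated {cleared} caches");
}
```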
Event-Driven Invalidation
Trigger cache invalidation based on application events:
Declare the events a cache reacts to, then fire `invalidate_by_event("permission_changed")` or `invalidate_by_event("profile_updated")` when the corresponding application event occurs. The wiring mirrors the tag-based sketch above.
Dependency-Based Invalidation
Create cascading invalidation when dependent caches change:
Declare a dependency between caches, then call `invalidate_by_dependency("user_cache")` when the upstream data changes; dependent caches such as `get_user_dashboard` are invalidated in cascade.
Combining Multiple Strategies
You can combine tags, events, and dependencies for maximum flexibility:
Tags, events, and dependencies can all be declared on the same `#[cache]` function.
Manual Cache Invalidation
Invalidate specific caches by their name:
`invalidate_cache("name")` clears a single cache, identified by its function name or custom `name` attribute, and reports whether it was found. A sketch:
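```rust
use cachelito::invalidate_cache;

fn clear_profiles() {
    // The cache name defaults to the function name.
    if invalidate_cache("get_user_profile") {
        println!("profile cache cleared");
    } else {
        println!("no such cache registered");
    }
}
```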
Invalidation API
The invalidation API is simple and intuitive:
- `invalidate_by_tag(tag: &str) -> usize` - Returns the number of caches invalidated
- `invalidate_by_event(event: &str) -> usize` - Returns the number of caches invalidated
- `invalidate_by_dependency(dependency: &str) -> usize` - Returns the number of caches invalidated
- `invalidate_cache(cache_name: &str) -> bool` - Returns `true` if the cache was found and invalidated
Benefits
- Fine-grained control: Invalidate only what needs to be invalidated
- Event-driven: React to application events automatically
- Cascading updates: Maintain consistency across dependent caches
- Flexible grouping: Use tags to organize related caches
- Performance: No overhead when invalidation attributes are not used
Conditional Invalidation with Check Functions (v0.13.0)
For even more control, you can use custom check functions (predicates) to selectively invalidate cache entries based on runtime conditions:
Single Cache Conditional Invalidation
Invalidate specific entries in a cache based on custom logic:
Pass the cache name and a predicate over the key to `invalidate_with`; only matching entries are dropped, whether selected by numeric range (e.g. IDs above 1000) or by key pattern. A sketch:
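The predicate's `&String` argument follows the key signature documented for named checks below, and the parsing guards against the Debug-quoted key format:

```rust
use cachelito::invalidate_with;

fn cleanup_users() {
    // Drop entries of the `load_user` cache whose numeric key is > 1000.
    invalidate_with("load_user", |key: &String| {
        key.trim_matches('"')
            .parse::<u64>()
            .map(|id| id > 1000)
            .unwrap_or(false)
    });

    // Drop entries whose key matches a pattern (contains, not ==,
    // because Debug-formatted string keys carry quotes).
    invalidate_with("get_user_profile", |key: &String| key.contains("admin"));
}
```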
Global Conditional Invalidation
Apply a check function across all registered caches:
`invalidate_all_with` runs a predicate over `(cache_name, key)` for every registered cache (for example, dropping all entries with numeric IDs >= 1000) and returns the number of caches it was applied to. A sketch:
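The exact parameter types of the two-argument predicate are an assumption:

```rust
use cachelito::invalidate_all_with;

fn purge_high_ids() {
    // The predicate sees (cache_name, key) for every registered cache.
    let count = invalidate_all_with(|_cache_name: &str, key: &String| {
        key.trim_matches('"')
            .parse::<u64>()
            .map(|id| id >= 1000)
            .unwrap_or(false)
    });
    println!("check applied to {count} caches");
}
```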
Complex Check Conditions
Use any Rust logic in your check functions:
Any Rust logic can drive the predicate: invalidate entries whose ID is divisible by 30, entries within a numeric range, or any other condition, using the same `invalidate_with` calls shown above.
Conditional Invalidation API
- `invalidate_with(cache_name: &str, check_fn: F) -> bool`
  - Invalidates entries in a specific cache where `check_fn(key)` returns `true`
  - Returns `true` if the cache was found and the check function was applied
- `invalidate_all_with(check_fn: F) -> usize`
  - Invalidates entries across all caches where `check_fn(cache_name, key)` returns `true`
  - Returns the number of caches that had the check function applied
Use Cases for Conditional Invalidation
- Time-based cleanup: Invalidate entries older than a specific timestamp
- Range-based invalidation: Remove entries with IDs above/below thresholds
- Pattern matching: Invalidate entries matching specific key patterns
- Selective cleanup: Remove stale data based on business logic
- Multi-cache coordination: Apply consistent invalidation rules across caches
Performance Considerations
- O(n) operation: Conditional invalidation checks all keys in the cache
- Lock acquisition: Briefly holds write lock during key collection and removal
- Automatic registration: All global-scope caches support conditional invalidation
- Thread-safe: Safe to call from multiple threads concurrently
Named Invalidation Check Functions (Macro Attribute)
For automatic validation on every cache access, you can specify an invalidation check function directly in the macro:
Name a check function in the macro with `invalidate_on = function_name`; it is evaluated on every cache access, and the entry is recomputed whenever the check returns `true`. A sketch:
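This uses the documented `invalidate_on = function_name` attribute and the `fn(&String, &T) -> bool` signature; the function bodies are illustrative:

```rust
use cachelito::cache;

// Returning true marks the entry stale and forces re-execution.
fn is_stale(key: &String, _value: &String) -> bool {
    // Keys use Debug formatting, so match with contains(), not ==.
    key.contains("temp")
}

#[cache(invalidate_on = is_stale)]
fn get_user(id: String) -> String {
    format!("user-{id}")
}

fn main() {
    let user = get_user("alice".to_string()); // cached only while is_stale(...) is false
    println!("{user}");
}
```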
How Named Invalidation Checks Work
- Evaluated on every access: The check function runs each time `get()` is called
- Signature: `fn check_fn(key: &String, value: &T) -> bool`
- Return `true` to invalidate: If the function returns `true`, the cached entry is considered stale
- Re-execution: When stale, the function re-executes and the result is cached
- Works with all scopes: Compatible with both `global` and `thread` scope
Common Check Function Patterns
- Time-based staleness: compare a timestamp carried in the value against the current time
- Key-based invalidation: match on key patterns
- Value-based validation: inspect the cached value itself
- Complex conditions: combine any of the above with arbitrary Rust logic
Key Format Note
Cache keys are stored using Rust's `Debug` format (`{:?}`), which means string keys carry surrounding quotes. Use `contains()` instead of exact matching:
```rust
// ✅ Correct: substring match tolerates the Debug quotes
key.contains("admin_123")

// ❌ Won't work (the stored key is "\"admin_123\"", not "admin_123")
key == "admin_123"
```
Complete Example
See `examples/smart_invalidation.rs` and `examples/named_invalidation.rs` for complete working examples demonstrating all invalidation strategies.
Limitations
- Cannot be used with generic functions (lifetime and type parameter support is limited)
- The function must be deterministic for correct caching behavior
- Cache is global by default (use `scope = "thread"` for thread-local isolation)
- LRU policy has O(n) overhead on cache hits for reordering (where n is the number of cached entries)
- Global scope adds synchronization overhead (though optimized with RwLock)
- Statistics are automatically available for global caches (default); thread-local caches track stats internally, but they're not accessible via `stats_registry`
Documentation
For detailed API documentation, run `cargo doc --open`.
Changelog
See CHANGELOG.md for a detailed history of changes.
Latest Release: Version 0.13.0
🎯 Conditional Invalidation with Custom Check Functions!
Version 0.13.0 introduces powerful conditional invalidation, allowing you to selectively invalidate cache entries based on runtime conditions:
New Features:
- 🎯 Conditional Invalidation - Invalidate entries matching custom check functions (predicates)
- 🌐 Global Conditional Invalidation Support - Apply check functions across all registered caches
- 🔑 Key-Based Filtering - Match entries by key patterns, ranges, or any custom logic
- 🏷️ Named Invalidation Check Functions - Automatic validation on every cache access with the `invalidate_on = function_name` attribute
- ⚡ Automatic Registration - All global-scope caches support conditional invalidation by default
- 🔒 Thread-Safe Execution - Safe concurrent check function execution
- 💡 Flexible Conditions - Use any Rust logic in your check functions
Quick Start: see the Conditional Invalidation sections above for named check functions (`invalidate_on = ...`, evaluated on every access), manual `invalidate_with(...)`, and global `invalidate_all_with(...)` usage.
See also:
- `examples/conditional_invalidation.rs` - Manual conditional invalidation
- `examples/named_invalidation.rs` - Named invalidation check functions
Previous Release: Version 0.12.0
🔥 Smart Cache Invalidation!
Version 0.12.0 introduces intelligent cache invalidation mechanisms beyond simple TTL expiration:
New Features:
- 🏷️ Tag-Based Invalidation - Group related caches and invalidate them together
- 📡 Event-Driven Invalidation - Trigger invalidation when application events occur
- 🔗 Dependency-Based Invalidation - Cascade invalidation to dependent caches
- 🎯 Manual Invalidation - Invalidate specific caches by name
- 🔄 Flexible Combinations - Use tags, events, and dependencies together
- ⚡ Zero Overhead - No performance impact when not using invalidation
- 🔒 Thread-Safe - All operations are atomic and concurrent-safe
Quick Start: declare tags, events, and dependencies on your `#[cache]` functions, then call `invalidate_by_tag("user_data")` when data changes or `invalidate_by_event("user_updated")` when the event fires. See the Smart Cache Invalidation section above and `examples/smart_invalidation.rs`.
Previous Release: Version 0.11.0
🎲 Random Replacement Policy!
Version 0.11.0 introduces the Random eviction policy for baseline benchmarking and simple use cases:
New Features:
- 🎲 Random Eviction Policy - Randomly evicts entries when cache is full
- ⚡ O(1) Performance - Constant-time eviction with no access tracking overhead
- 🔒 Thread-Safe RNG - Uses `fastrand` for fast, lock-free random selection
- 📊 Minimal Overhead - No order updates on cache hits (unlike LRU/ARC)
- 🎯 Benchmark Baseline - Ideal for comparing policy effectiveness
- 🔄 All Cache Types - Available in sync (thread-local & global) and async caches
- 📚 Full Support - Works with the `limit`, `ttl`, and `max_memory` attributes
Quick Start: select the policy with `#[cache(limit = 100, policy = "random")]` for simple O(1) eviction, optionally adding `max_memory` for a memory-bounded random cache.
When to Use Random:
- Baseline for performance benchmarks
- Truly random access patterns
- Simplicity preferred over optimization
- Reducing lock contention vs LRU/LFU
See the Cache Limits and Eviction Policies section for complete details.
Previous Release: Version 0.10.0
💾 Memory-Based Limits!
Version 0.10.0 introduces memory-aware caching controls:
New Features:
- 💾 Memory-Based Limits - Control cache size by memory footprint
- 📏 `max_memory` Attribute - Specify a memory limit (e.g. `max_memory = "100MB"`)
- 🔄 Combined Limits - Use both entry count and memory limits together
- ⚙️ Custom Memory Estimation - Implement `MemoryEstimator` for precise control
- 📊 Improved Statistics - Monitor memory usage and hit/miss rates together
Breaking Changes:
- Default policy remains LRU - No change, but now with memory limits!
- `MemoryEstimator` usage - Custom types with heap allocations must implement `MemoryEstimator`
Quick Start: set a pure memory cap, or combine entry-count and memory limits so eviction triggers when either bound is exceeded, as sketched below.
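A sketch of the two attribute forms quoted above (the functions are illustrative):

```rust
use cachelito::cache;

// Pure memory cap: eviction starts once the estimated total exceeds 100MB.
#[cache(max_memory = "100MB")]
fn load_report(id: u64) -> Vec<u8> {
    vec![id as u8; 1024] // placeholder payload
}

// Combined limits: evict when either 500 entries or 128MB is exceeded.
#[cache(limit = 500, max_memory = "128MB")]
fn load_page(id: u64) -> String {
    format!("page-{id}")
}
```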
See the Memory-Based Limits section above for complete details.
Previous Release: Version 0.9.0
🎯 ARC - Adaptive Replacement Cache!
Version 0.9.0 introduces a self-tuning cache policy that automatically adapts to your workload:
New Features:
- 🎯 ARC Eviction Policy - Adaptive Replacement Cache that combines LRU and LFU
- 🧠 Self-Tuning - Automatically balances between recency and frequency
- Scan-Resistant - Protects frequently accessed items from sequential scans
- ⚡ Operation Complexity - Insert is O(1); get and evict are O(n)
- 🔄 Mixed Workloads - Ideal for workloads with varying access patterns
- 📊 Both Sync & Async - ARC available in `cachelito` and `cachelito-async`
Breaking Changes:
- Default policy changed from FIFO to LRU - LRU is more effective for most use cases. To keep FIFO behavior, explicitly use `policy = "fifo"`
See the Cache Limits and Eviction Policies section for complete details.
Previous Release: Version 0.8.0
🔥 LFU Eviction Policy & LRU as Default!
Version 0.8.0 completes the eviction policy trio and improves defaults:
New Features:
- 🔥 LFU Eviction Policy - Least Frequently Used eviction strategy
- 📊 Frequency Tracking - Automatic access frequency counters for each cache entry
- 🎯 Three Policies - Choose between FIFO, LRU (default), and LFU
- 📈 Smart Eviction - LFU keeps frequently accessed items cached longer
- ⚡ Optimized Performance - O(1) cache hits for LFU, O(n) eviction
- 🔄 Both Sync & Async - LFU available in `cachelito` and `cachelito-async`
Breaking Change:
- Default policy changed from FIFO to LRU - LRU is more effective for most use cases. To keep FIFO behavior, explicitly use `policy = "fifo"`
See the Cache Limits and Eviction Policies section for complete details.
Version 0.7.0
🔮 Async/Await Support:
- 🚀 Async Function Caching - Use `#[cache_async]` for async/await functions
- 🔓 Lock-Free Concurrency - DashMap provides non-blocking cache access
- 🌐 Global Async Cache - Shared across all tasks and threads automatically
- ⚡ Zero Blocking - Cache operations don't require `.await`
- 📊 FIFO and LRU Policies - Eviction policies supported
- ⏱️ TTL Support - Time-based expiration for async caches
Previous Release: Version 0.6.0
Statistics & Global Scope:
- 🌐 Global scope by default - Cache is now shared across threads by default
- 📈 Cache Statistics - Track hit/miss rates and performance metrics
- 🎯 Stats Registry - Centralized API: `stats_registry::get("function_name")`
- 🏷️ Custom Cache Names - Use the `name` attribute for custom identifiers
For full details, see the complete changelog.
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
See Also
- CHANGELOG - Detailed version history and release notes
- Macro Expansion Guide - How to view generated code and understand `format!("{:?}")`
- Thread-Local Statistics - Why thread-local cache stats aren't in `stats_registry` and how they work
- API Documentation - Full API reference