#[cache_async]
A procedural macro that adds automatic memoization to async functions and methods.
This macro transforms an async function into a cached version that stores results in a global `DashMap` keyed by the function arguments. Subsequent calls with the same arguments return the cached result instead of re-executing the function body.
§Requirements
- **Function must be async**: The function must be declared with `async fn`
- **Arguments**: Must implement `Debug` for key generation
- **Return type**: Must implement `Clone` for cache storage and retrieval (see the sketch below)
- **Function purity**: For correct behavior, the function should be pure (the same inputs always produce the same outputs, with no side effects)
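For example, a signature that satisfies these bounds might look like the following sketch; the `Query` and `SearchResult` types are illustrative, not part of the crate:

```rust
use cachelito_async::cache_async;

// The argument type implements `Debug` (used for key generation).
#[derive(Debug)]
struct Query {
    term: String,
}

// The return type implements `Clone` (cached values are cloned
// back to callers).
#[derive(Clone)]
struct SearchResult {
    hits: Vec<String>,
}

#[cache_async]
async fn search(query: Query) -> SearchResult {
    // Pure: the same query always yields the same result.
    SearchResult { hits: vec![format!("hit for {:?}", query)] }
}
```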
§Macro Parameters
- `limit` (optional): Maximum number of entries in the cache. When the limit is reached, entries are evicted according to the specified policy. Default: unlimited.
- `policy` (optional): Eviction policy to use when the cache is full. Options:
  - `"fifo"` - First In, First Out
  - `"lru"` - Least Recently Used (default)
  - `"lfu"` - Least Frequently Used
  - `"arc"` - Adaptive Replacement Cache
  - `"random"` - Random Replacement
  - `"tlru"` - Time-aware Least Recently Used (combines recency, frequency, and age)
  - `"w_tinylfu"` - Windowed Tiny LFU (two-segment cache with window and protected segments)
- `ttl` (optional): Time-to-live in seconds. Entries older than this are automatically removed when accessed. Default: None (no expiration).
- `frequency_weight` (optional): Weight factor for frequency in the TLRU policy. Controls the balance between recency and frequency in eviction decisions.
  - Values < 1.0: Emphasize recency and age over frequency (good for time-sensitive data)
  - Value = 1.0 (or omitted): Balanced approach (default TLRU behavior)
  - Values > 1.0: Emphasize frequency over recency (good for popular content)
  - Formula: `score = frequency^weight × position × age_factor`
  - Only applicable when `policy = "tlru"`; ignored for other policies.
  - Example: `frequency_weight = 1.5` makes frequently accessed entries more resistant to eviction.
- `window_ratio` (optional): Window segment size ratio for the W-TinyLFU policy (0.01-0.99, default: 0.20). Controls the balance between recency (window segment) and frequency (protected segment).
  - Values < 0.2 (e.g., 0.1): Emphasize frequency → good for stable workloads
  - Value = 0.2 (default): Balanced approach
  - Values > 0.2 (e.g., 0.3-0.4): Emphasize recency → good for trending content
  - Only applicable when `policy = "w_tinylfu"`; ignored for other policies.
- `name` (optional): Custom identifier for the cache. Default: the function name.
- `max_memory` (optional): Maximum memory usage (e.g., `"100MB"`, `"1GB"`). Requires the return type to implement `MemoryEstimator`.
- `tags` (optional): Array of tags for group invalidation. Example: `["user_data", "profile"]`
- `events` (optional): Array of events that trigger invalidation. Example: `["user_updated"]`
- `dependencies` (optional): Array of cache dependencies. Example: `["get_user"]`
- `invalidate_on` (optional): Function that checks whether a cached entry should be invalidated. Signature: `fn(key: &String, value: &T) -> bool`. Return `true` to invalidate.
- `cache_if` (optional): Function that determines whether a result should be cached. Signature: `fn(key: &String, value: &T) -> bool`. Return `true` to cache the result. When not specified, all results are cached (default behavior; see the sketch after this list).
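As a concrete illustration of the two predicate hooks, the sketch below caches only non-empty payloads and evicts entries that a predicate later deems stale. The `fetch_bytes` helper is hypothetical, and the exact attribute syntax for referencing the predicates is an assumption based on the signatures above:

```rust
use cachelito_async::cache_async;

// `cache_if` predicate: only cache non-empty payloads.
fn worth_caching(_key: &String, value: &Vec<u8>) -> bool {
    !value.is_empty()
}

// `invalidate_on` predicate: treat oversized payloads as stale
// (an arbitrary, illustrative condition).
fn is_stale(_key: &String, value: &Vec<u8>) -> bool {
    value.len() > 10 * 1024 * 1024
}

// The attribute wiring shown here is an assumption; consult the
// crate documentation for the exact form the macro accepts.
#[cache_async(limit = 500, ttl = 60, cache_if = worth_caching, invalidate_on = is_stale)]
async fn download(url: String) -> Vec<u8> {
    fetch_bytes(&url).await // hypothetical async fetch helper
}
```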
§Cache Behavior
- **Global scope**: Cache is ALWAYS shared across all tasks and threads (no thread-local option)
- **Regular async functions**: All results are cached
- **Result-returning async functions**: Only `Ok` values are cached; `Err` values are not (illustrated below)
- **Thread-safe**: Uses a lock-free concurrent hash map (`DashMap`)
- **Eviction**: When the limit is reached, entries are removed according to the policy
- **Expiration**: When a TTL is set, expired entries are removed on access
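To make the `Result` rule concrete, here is a minimal sketch (with a hypothetical `read_config_from_disk` helper): an `Err` is returned to the caller but never stored, so the body runs again on the next call with the same `id`.

```rust
use cachelito_async::cache_async;

#[cache_async(limit = 100)]
async fn load_config(id: u64) -> Result<String, String> {
    // Only `Ok` values end up in the cache; an `Err` here is
    // passed through and the call is retried next time.
    read_config_from_disk(id).await // hypothetical async loader
}
```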
§Examples
§Basic Async Function Caching
```rust
use cachelito_async::cache_async;
use std::time::Duration;

#[derive(Clone, Debug)]
struct User { id: u64, name: String }

#[cache_async]
async fn fetch_user(id: u64) -> User {
    // Simulates an async API call
    tokio::time::sleep(Duration::from_secs(1)).await;
    User { id, name: format!("User {}", id) }
}
```
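A quick usage sketch, assuming a Tokio runtime: the second call with the same `id` is served from the cache and skips the one-second delay.

```rust
#[tokio::main]
async fn main() {
    let first = fetch_user(42).await;  // executes the body (~1 s)
    let second = fetch_user(42).await; // returned from the cache
    assert_eq!(first.name, second.name);
}
```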
§Cache with Invalidation

```rust
use cachelito_async::cache_async;
use cachelito_core::{invalidate_by_tag, invalidate_by_event};

#[cache_async(
    limit = 100,
    policy = "lru",
    tags = ["user_data"],
    events = ["user_updated"]
)]
async fn get_user_profile(user_id: u64) -> UserProfile {
    // Fetch from the database
    fetch_profile_from_db(user_id).await
}

// Later, invalidate all caches with the "user_data" tag
invalidate_by_tag("user_data");

// Or invalidate by event
invalidate_by_event("user_updated");
```
§TLRU with Custom Frequency Weight

```rust
use cachelito_async::cache_async;

// Low frequency_weight (0.3) - emphasizes recency and age.
// Good for time-sensitive data where freshness matters more than popularity.
#[cache_async(
    policy = "tlru",
    limit = 100,
    ttl = 300,
    frequency_weight = 0.3
)]
async fn fetch_realtime_data(source: String) -> Data {
    // Fetch time-sensitive data
    api_client.fetch(source).await
}

// High frequency_weight (1.5) - emphasizes access frequency.
// Good for popular content that should stay cached despite age.
#[cache_async(
    policy = "tlru",
    limit = 100,
    ttl = 300,
    frequency_weight = 1.5
)]
async fn fetch_popular_content(id: u64) -> Content {
    // Frequently accessed entries remain cached longer
    database.fetch_content(id).await
}

// Default behavior (balanced) - omit frequency_weight
#[cache_async(policy = "tlru", limit = 100, ttl = 300)]
async fn fetch_balanced(key: String) -> Value {
    // Balanced approach between recency and frequency
    expensive_operation(key).await
}
```
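§W-TinyLFU with Custom Window Ratio

The `window_ratio` parameter deserves a parallel sketch; the `Product` type and lookup body below are illustrative stand-ins, not part of the crate.

```rust
use cachelito_async::cache_async;

#[derive(Clone, Debug)]
struct Product { id: u64 }

// A small window (0.1) leaves most of the capacity in the
// frequency-protected segment, so repeatedly requested products
// survive bursts of one-off lookups.
#[cache_async(policy = "w_tinylfu", limit = 1000, window_ratio = 0.1)]
async fn fetch_product(id: u64) -> Product {
    // Stand-in for a real datastore lookup.
    Product { id }
}
```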
§Performance Considerations
- **Lock-free**: Uses `DashMap` for concurrent access without blocking
- **Cache key generation**: Uses `Debug` formatting for keys (see the sketch below)
- **Memory usage**: Controlled by the `limit` parameter
- **Async overhead**: Minimal; no `.await` is needed for cache operations
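Since keys come from `Debug` formatting, two calls hit the same entry exactly when their arguments Debug-format identically. A conceptual illustration of that idea (not the macro's actual internals):

```rust
fn main() {
    // The macro's real key format may differ; this only shows the
    // `Debug`-equality notion that determines cache hits.
    let key_a = format!("{:?}", (42u64, "en"));
    let key_b = format!("{:?}", (42u64, "en"));
    assert_eq!(key_a, key_b);
}
```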