pub struct CacheManager { /* private fields */ }
Cache Manager - Unified operations across L1 and L2
Implementations§
impl CacheManager
pub async fn new(l1_cache: Arc<L1Cache>, l2_cache: Arc<L2Cache>) -> Result<Self>
Create a new cache manager from the provided L1 and L2 caches
pub async fn get(&self, key: &str) -> Result<Option<Value>>
Get value from cache (L1 first, then L2 fallback with promotion to L1)
This method includes built-in Cache Stampede protection on cache misses: multiple concurrent requests for the same missing key are coalesced to prevent unnecessary duplicate work on external data sources.
§Arguments
- key - Cache key to retrieve
§Returns
- Ok(Some(value)) - Cache hit, value found in L1 or L2
- Ok(None) - Cache miss, value not found in either cache
- Err(error) - Cache operation failed
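As a usage sketch (the key name is illustrative; Value here is serde_json::Value, as referenced under get_or_compute_typed), a read with explicit hit/miss handling:

match cache_manager.get("user:123:profile").await? {
    Some(value) => println!("cache hit: {value}"),
    None => println!("cache miss"),
}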
pub async fn set_with_strategy(
    &self,
    key: &str,
    value: Value,
    strategy: CacheStrategy,
) -> Result<()>
Set value with specific cache strategy (both L1 and L2)
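A short sketch (key and payload are illustrative; CacheStrategy::MediumTerm is taken from the typed example below, and Value is assumed to be serde_json::Value): write under an explicit strategy, then read the value back with get():

use serde_json::json;

cache_manager
    .set_with_strategy("user:123:profile", json!({ "name": "Alice" }), CacheStrategy::MediumTerm)
    .await?;

// The value is now served from L1 (with an L2 copy) until the strategy's TTL expires.
let cached = cache_manager.get("user:123:profile").await?;
assert!(cached.is_some());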
The related convenience method get_with_fallback combines get() with an optional computation (enhanced backward compatibility): if the value is not found in the cache, it executes the compute function and caches the result automatically.
§Arguments
- key - Cache key
- compute_fn - Optional function to compute value if not in cache
- strategy - Cache strategy for storing computed value (default: ShortTerm)
§Returns
- Ok(Some(value)) - Value found in cache or computed successfully
- Ok(None) - Value not in cache and no compute function provided
- Err(error) - Cache operation or computation failed
§Example
// Simple cache get (existing behavior)
let cached_data = cache_manager.get_with_fallback("my_key", None, None).await?;

// Get with computation fallback (new enhanced behavior)
let api_data = cache_manager.get_with_fallback(
    "api_response",
    Some(|| async { fetch_data_from_api().await }),
    Some(CacheStrategy::RealTime)
).await?;
pub async fn get_or_compute_with<F, Fut>(
    &self,
    key: &str,
    strategy: CacheStrategy,
    compute_fn: F,
) -> Result<Value>
Get or compute value with Cache Stampede protection across L1 + L2 + compute
This method provides comprehensive Cache Stampede protection (see the sketch after this list):
- Check the L1 cache first (uses Moka’s built-in coalescing)
- Check the L2 cache with mutex-based coalescing
- Compute fresh data, with protection against concurrent computations
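A minimal concurrency sketch of the coalescing described above, assuming a tokio runtime, that the manager is shared behind an Arc, and reusing CacheStrategy::MediumTerm from the typed example below; the key, counter, and payload are illustrative:

use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};

// Shared handle to the manager plus a counter for how often the compute closure runs.
let manager = Arc::new(cache_manager);
let calls = Arc::new(AtomicUsize::new(0));

// Several concurrent requests for the same missing key.
let tasks: Vec<_> = (0..8)
    .map(|_| {
        let manager = Arc::clone(&manager);
        let calls = Arc::clone(&calls);
        tokio::spawn(async move {
            manager
                .get_or_compute_with("expensive:report", CacheStrategy::MediumTerm, move || {
                    let calls = Arc::clone(&calls);
                    async move {
                        // Stand-in for a slow upstream call.
                        calls.fetch_add(1, Ordering::SeqCst);
                        Ok(serde_json::json!({ "rows": 42 }))
                    }
                })
                .await
        })
    })
    .collect();

for task in tasks {
    task.await.expect("task panicked").expect("cache error");
}

// With stampede protection, the closure should have run only once.
assert_eq!(calls.load(Ordering::SeqCst), 1);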
§Arguments
- key - Cache key
- strategy - Cache strategy for TTL and storage behavior
- compute_fn - Async function to compute the value if not in any cache
§Example
let api_data = cache_manager.get_or_compute_with(
    "api_response",
    CacheStrategy::RealTime,
    || async {
        fetch_data_from_api().await
    }
).await?;

pub async fn get_or_compute_typed<T, F, Fut>(
    &self,
    key: &str,
    strategy: CacheStrategy,
    compute_fn: F,
) -> Result<T>
Get or compute typed value with Cache Stampede protection (Type-Safe Version)
This method provides the same functionality as get_or_compute_with() but with
type-safe automatic serialization/deserialization. Perfect for database queries,
API calls, or any computation that returns structured data.
§Type Safety
- Returns your actual type T instead of serde_json::Value
- Compiler enforces Serialize + DeserializeOwned bounds
- No manual JSON conversion needed
§Cache Flow
- Check L1 cache → deserialize if found
- Check L2 cache → deserialize + promote to L1 if found
- Execute compute_fn → serialize → store in L1+L2
- Full stampede protection (only ONE request computes)
§Arguments
- key - Cache key
- strategy - Cache strategy for TTL
- compute_fn - Async function returning Result<T>
§Example - Database Query
use serde::{Serialize, Deserialize};

// sqlx::FromRow is required for query_as::<_, User> below
#[derive(Serialize, Deserialize, sqlx::FromRow)]
struct User {
    id: i64,
    name: String,
}

// Type-safe database caching
let user: User = cache_manager.get_or_compute_typed(
    "user:123",
    CacheStrategy::MediumTerm,
    || async {
        // Your database query here
        sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = $1")
            .bind(123)
            .fetch_one(&pool)
            .await
    }
).await?;

§Example - API Call
#[derive(Serialize, Deserialize)]
struct ApiResponse {
    data: String,
    timestamp: i64,
}

let response: ApiResponse = cache_manager.get_or_compute_typed(
    "api:endpoint",
    CacheStrategy::RealTime,
    || async {
        reqwest::get("https://api.example.com/data")
            .await?
            .json::<ApiResponse>()
            .await
    }
).await?;

§Performance
- L1 hit: <1ms + deserialization (~10-50μs for small structs)
- L2 hit: 2-5ms + deserialization + L1 promotion
- Compute: Your function time + serialization + L1+L2 storage
- Stampede protection: 99.6% latency reduction under high concurrency
§Errors
Returns error if:
- Compute function fails
- Serialization fails (invalid type for JSON)
- Deserialization fails (cache data doesn’t match type T)
- Cache operations fail (Redis connection issues)
pub fn get_stats(&self) -> CacheManagerStats
Get comprehensive cache statistics
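A minimal sketch; CacheManagerStats fields are not listed on this page, so the example assumes only that the type implements Debug:

let stats = cache_manager.get_stats();
// Field names depend on the crate; Debug formatting avoids assuming any of them.
println!("cache stats: {stats:?}");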