Struct CacheManager 

Source
pub struct CacheManager { /* private fields */ }

Cache Manager - unified operations across the L1 and L2 cache tiers

Implementations§

Source§

impl CacheManager

Source

pub async fn new(l1_cache: Arc<L1Cache>, l2_cache: Arc<L2Cache>) -> Result<Self>

Create new cache manager

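§Example
A minimal construction sketch; L1Cache::new() and L2Cache::new("redis://…") are hypothetical placeholders for however the two cache tiers are actually built in this crate:

use std::sync::Arc;

// Hypothetical tier constructors; substitute the crate's real ones.
let l1 = Arc::new(L1Cache::new());
let l2 = Arc::new(L2Cache::new("redis://127.0.0.1:6379").await?);

let cache_manager = CacheManager::new(l1, l2).await?;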
Source

pub async fn get(&self, key: &str) -> Result<Option<Value>>

Get value from cache (L1 first, then L2 fallback with promotion)

This method now includes built-in Cache Stampede protection when cache misses occur. Multiple concurrent requests for the same missing key will be coalesced to prevent unnecessary duplicate work on external data sources.

§Arguments
  • key - Cache key to retrieve
§Returns
  • Ok(Some(value)) - Cache hit, value found in L1 or L2
  • Ok(None) - Cache miss, value not found in either cache
  • Err(error) - Cache operation failed
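§Example
A minimal usage sketch; the key and handling are illustrative, and Value is assumed to be serde_json::Value:

// L1 is consulted first; an L2 hit is promoted back into L1.
if let Some(value) = cache_manager.get("user:42").await? {
    println!("cache hit: {value}");
} else {
    println!("cache miss");
}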
Source

pub async fn set_with_strategy( &self, key: &str, value: Value, strategy: CacheStrategy, ) -> Result<()>

Set value with specific cache strategy (both L1 and L2)

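§Example
A minimal sketch, assuming Value is serde_json::Value; the key and payload are illustrative:

use serde_json::json;

// Write the value to both L1 and L2 under the chosen strategy's TTL.
cache_manager
    .set_with_strategy("user:42", json!({ "name": "Ada" }), CacheStrategy::ShortTerm)
    .await?;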
Source

pub async fn get_with_fallback<F, Fut>( &self, key: &str, compute_fn: Option<F>, strategy: Option<CacheStrategy>, ) -> Result<Option<Value>>
where F: FnOnce() -> Fut + Send, Fut: Future<Output = Result<Value>> + Send,

Get value from cache with fallback computation (enhanced backward compatibility)

This is a convenience method that combines get() with an optional computation. If the value is not found in either cache, it executes the compute function and caches the result automatically.

§Arguments
  • key - Cache key
  • compute_fn - Optional function to compute value if not in cache
  • strategy - Cache strategy for storing computed value (default: ShortTerm)
§Returns
  • Ok(Some(value)) - Value found in cache or computed successfully
  • Ok(None) - Value not in cache and no compute function provided
  • Err(error) - Cache operation or computation failed
§Example
// Simple cache get (existing behavior)
let cached_data = cache_manager.get_with_fallback("my_key", None, None).await?;

// Get with computation fallback (new enhanced behavior)
let api_data = cache_manager.get_with_fallback(
    "api_response",
    Some(|| async { fetch_data_from_api().await }),
    Some(CacheStrategy::RealTime)
).await?;

Source

pub async fn get_or_compute_with<F, Fut>( &self, key: &str, strategy: CacheStrategy, compute_fn: F, ) -> Result<Value>
where F: FnOnce() -> Fut + Send, Fut: Future<Output = Result<Value>> + Send,

Get or compute value with Cache Stampede protection across L1+L2+Compute

This method provides comprehensive Cache Stampede protection:

  1. Check L1 cache first (uses Moka’s built-in coalescing)
  2. Check L2 cache with mutex-based coalescing
  3. Compute fresh data with protection against concurrent computations
§Arguments
  • key - Cache key
  • strategy - Cache strategy for TTL and storage behavior
  • compute_fn - Async function to compute the value if not in any cache
§Example
let api_data = cache_manager.get_or_compute_with(
    "api_response",
    CacheStrategy::RealTime,
    || async {
        fetch_data_from_api().await
    }
).await?;
Source

pub fn get_stats(&self) -> CacheManagerStats

Get comprehensive cache statistics

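§Example
A minimal sketch, assuming CacheManagerStats implements Debug:

let stats = cache_manager.get_stats();
// Print the Debug representation rather than guessing at field names.
println!("cache stats: {stats:?}");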
Source

pub async fn publish_to_stream( &self, stream_key: &str, fields: Vec<(String, String)>, maxlen: Option<usize>, ) -> Result<String>

Publish data to Redis Stream

§Arguments
  • stream_key - Name of the stream (e.g., “events_stream”)
  • fields - Field-value pairs to publish
  • maxlen - Optional max length for stream trimming
§Returns

The entry ID generated by Redis

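§Example
A minimal sketch; the stream name, field names, and trim length are illustrative:

// Append one entry and trim the stream to roughly 10_000 entries.
let entry_id = cache_manager
    .publish_to_stream(
        "events_stream",
        vec![
            ("event".to_string(), "user_signup".to_string()),
            ("user_id".to_string(), "42".to_string()),
        ],
        Some(10_000),
    )
    .await?;
println!("published as {entry_id}");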
Source

pub async fn read_stream_latest( &self, stream_key: &str, count: usize, ) -> Result<Vec<(String, Vec<(String, String)>)>>

Read latest entries from Redis Stream

§Arguments
  • stream_key - Name of the stream
  • count - Number of latest entries to retrieve
§Returns

Vector of (entry_id, fields) tuples (newest first)

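§Example
A minimal sketch; the stream name and count are illustrative:

// Fetch the five most recent entries, newest first.
let entries = cache_manager.read_stream_latest("events_stream", 5).await?;
for (entry_id, fields) in entries {
    println!("{entry_id}: {fields:?}");
}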
Source

pub async fn read_stream( &self, stream_key: &str, last_id: &str, count: usize, block_ms: Option<usize>, ) -> Result<Vec<(String, Vec<(String, String)>)>>

Read from Redis Stream with optional blocking

§Arguments
  • stream_key - Name of the stream
  • last_id - Last ID seen (“0” for start, “$” for new only)
  • count - Max entries to retrieve
  • block_ms - Optional blocking timeout in ms
§Returns

Vector of (entry_id, fields) tuples

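§Example
A minimal sketch; "$" asks only for entries published after this call, and the 5-second block is illustrative:

// Block for up to 5 seconds waiting for new entries.
let entries = cache_manager
    .read_stream("events_stream", "$", 10, Some(5_000))
    .await?;
for (entry_id, fields) in entries {
    println!("{entry_id}: {fields:?}");
}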
Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of the pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.