Struct Cache 

Source
pub struct Cache { /* private fields */ }

Thread-safe cache with weight-stratified clock eviction.

The cache can be shared across threads via Arc<Cache>. All methods are synchronous but safe to call from async contexts.

§Weight-Stratified Clock Eviction

This cache uses a two-level eviction strategy that combines policy-based prioritization with recency and frequency tracking:

  1. Policy-based stratification: Each entry has an eviction policy (Critical, Standard, or Volatile). When space is needed, lower-priority entries are evicted first:

    • Volatile (lowest priority) - evicted first
    • Standard (normal priority) - evicted if Volatile is empty
    • Critical (highest priority) - evicted only as a last resort
  2. Clock algorithm: Within each policy bucket, a “clock sweep” finds eviction candidates. The algorithm uses two signals:

    • Clock bit: Set when an entry is accessed. During eviction, if set, the bit is cleared and the entry gets another chance. If clear, the entry is a candidate.
    • Frequency counter (0-255): Tracks access frequency. Even with a clear clock bit, high-frequency entries get additional chances via frequency decay.

This approach ensures that frequently accessed entries resist eviction regardless of policy, while still respecting policy-based priorities during memory pressure.
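
A minimal sketch of the per-bucket clock sweep described above. The entry layout, field names, and halving decay rule are assumptions for illustration; the real entry representation is private:

struct ClockEntry {
    clock_bit: bool, // set on access, cleared during the sweep
    frequency: u8,   // 0-255 access counter
}

// Advance the clock hand over one policy bucket and return the index of an
// eviction candidate, giving recently- and frequently-accessed entries
// extra chances before they become candidates.
fn find_candidate(entries: &mut [ClockEntry], hand: &mut usize) -> Option<usize> {
    if entries.is_empty() {
        return None;
    }
    for _ in 0..entries.len() * 2 {
        let i = *hand % entries.len();
        *hand = hand.wrapping_add(1);
        let entry = &mut entries[i];
        if entry.clock_bit {
            entry.clock_bit = false; // second chance: recently accessed
        } else if entry.frequency > 0 {
            entry.frequency /= 2;    // frequency decay: hot entries survive longer
        } else {
            return Some(i);          // cold entry: evict this one
        }
    }
    None
}

In the cache itself this sweep runs over the Volatile bucket first, then Standard, and only then Critical.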

§Sharding for Concurrency

The cache divides its capacity across multiple shards (up to 64 by default). Each shard has its own lock, reducing contention in concurrent workloads. Keys are distributed to shards via their hash value.

The shard count is automatically scaled based on capacity to ensure each shard has at least 4KB of space. This prevents premature eviction due to uneven hash distribution:

  • Large caches (>= 256KB): 64 shards
  • Smaller caches: Scaled down proportionally (e.g., 64KB → 16 shards, 4KB → 1 shard)
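
The scaling rule works out to roughly the following (a sketch; the exact rounding used internally may differ):

const MAX_SHARDS: usize = 64;
const MIN_SHARD_BYTES: usize = 4 * 1024;

fn shard_count_for(capacity_bytes: usize) -> usize {
    // At least 4KB per shard, capped at 64 shards, never fewer than 1.
    (capacity_bytes / MIN_SHARD_BYTES).clamp(1, MAX_SHARDS)
}

assert_eq!(shard_count_for(256 * 1024), 64); // large cache: full shard count
assert_eq!(shard_count_for(64 * 1024), 16);
assert_eq!(shard_count_for(4 * 1024), 1);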

§Async Usage

use std::sync::Arc;

let cache = Arc::new(Cache::new(1024 * 1024));

// Share across tasks
let cache_clone = cache.clone();
tokio::spawn(async move {
    // Use get_clone() in async contexts to avoid holding guards across await points
    if let Some(value) = cache_clone.get_clone(&key) {
        do_async_work(&value).await;
    }
});

Implementations§

Source§

impl Cache

Source

pub fn new(max_size_bytes: usize) -> Self

Create a new cache with the given maximum size in bytes.

Uses default configuration with automatic shard scaling. The number of shards is chosen to balance concurrency (more shards = less contention) with per-shard capacity (each shard needs enough space to avoid premature eviction).

  • Large caches (>= 256KB): 64 shards
  • Smaller caches: Scaled down to ensure at least 4KB per shard

For explicit control over shard count, use [CacheBuilder] or [with_shards].
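
For example, following the scaling rule above:

// 1 MiB capacity: large enough for the full 64 shards.
let cache = Cache::new(1024 * 1024);

// 8 KiB capacity: scaled down to 2 shards of at least 4KB each.
let small = Cache::new(8 * 1024);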

Source

pub fn with_shards(max_size_bytes: usize, shard_count: usize) -> Self

Create with custom shard count.

More shards reduce contention but increase memory overhead. Recommended: num_cpus * 8 to num_cpus * 16.

Note: The shard count may be reduced if the capacity is too small to support the requested number of shards (minimum 4KB per shard). This prevents premature eviction due to uneven hash distribution.
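
A sketch of the recommended sizing, using the standard library to query the core count:

// Roughly num_cpus * 8 shards, per the recommendation above.
let cores = std::thread::available_parallelism()
    .map(|n| n.get())
    .unwrap_or(1);
let cache = Cache::with_shards(64 * 1024 * 1024, cores * 8);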

Source

pub fn insert<K: CacheKey>(&self, key: K, value: K::Value) -> Option<K::Value>

Insert a key-value pair. Evicts items if necessary.

Returns the previous value if the key existed.

§Runtime Complexity

Expected case: O(1) for successful insertion without eviction.

Worst case: O(n) where n is the number of entries per shard. Eviction happens via clock hand advancement within the shard.
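
Illustrative usage; UserId is a hypothetical key type assumed to implement CacheKey with Value = String, not something provided by this crate:

let cache = Cache::new(1024 * 1024);

// First insert: no previous value for this key.
assert!(cache.insert(UserId(42), "alice".to_string()).is_none());

// Re-inserting the same key returns the value it replaced.
let previous = cache.insert(UserId(42), "bob".to_string());
assert_eq!(previous.as_deref(), Some("alice"));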

Source

pub fn get<K: CacheKey>(&self, key: &K) -> Option<Guard<'_, K::Value>>

Retrieve a value via guard. The guard holds a read lock on the shard.

§Warning

Do NOT hold this guard across .await points. Use get_clone() instead for async contexts.

§Runtime Complexity

Expected case: O(1)

This method performs a zero-allocation hash table lookup and atomically updates the entry's access metadata (clock bit and frequency counter) used by the eviction algorithm.
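
In synchronous code the guard is the cheapest way to read a value; keep its scope short, since it holds the shard read lock (key and do_sync_work are illustrative, as in the async example above):

if let Some(guard) = cache.get(&key) {
    // Borrow the value through the guard; no clone is made.
    do_sync_work(&*guard);
} // guard dropped here, releasing the shard read lock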

Source

pub fn get_clone<K: CacheKey>(&self, key: &K) -> Option<K::Value>
where K::Value: Clone,

Retrieve a cloned value. Safe to hold across .await points.

Requires K::Value: Clone. This is the preferred method for async code.

§Runtime Complexity

Expected case: O(1) + O(m) where m is the cost of cloning the value.

This method performs a hash table lookup in O(1) expected time, then clones the underlying value. If cloning is expensive, consider using Arc<T> as your value type, which makes cloning O(1).
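
A sketch of the Arc<T> pattern, assuming a hypothetical BlobKey whose associated Value is Arc<LargeBlob>:

use std::sync::Arc;

// Store the expensive value behind an Arc so get_clone() only bumps a
// reference count instead of deep-copying the data.
cache.insert(BlobKey(7), Arc::new(load_blob())); // load_blob is illustrative

if let Some(blob) = cache.get_clone(&BlobKey(7)) {
    // blob is an Arc<LargeBlob> clone: O(1), and safe to hold across .await.
    process_blob(&blob).await; // hypothetical async helper
}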

Source

pub fn remove<K: CacheKey>(&self, key: &K) -> Option<K::Value>

Remove a key from the cache.

§Runtime Complexity

Expected case: O(1)

Worst case: O(n) where n is the number of entries per shard.

The worst case occurs when the entry is removed from an IndexMap segment, which preserves insertion order and may require shifting elements. In practice, most removals are O(1) amortized.

Source

pub fn contains<K: CacheKey>(&self, key: &K) -> bool

Check if a key exists (zero allocation).

Source

pub fn size(&self) -> usize

Current total size in bytes.

Source

pub fn len(&self) -> usize

Number of entries across all types.

Source

pub fn is_empty(&self) -> bool

Check if cache is empty.

Source

pub fn clear(&self)

Clear all entries.

§Runtime Complexity

O(n) where n is the total number of entries in the cache.

This method acquires a write lock on each shard sequentially and clears all data structures (HashMap and IndexMaps).

Trait Implementations§

Auto Trait Implementations§

§

impl !Freeze for Cache

§

impl !RefUnwindSafe for Cache

§

impl Unpin for Cache

§

impl !UnwindSafe for Cache

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

Source§

fn vzip(self) -> V

Source§

impl<G1, G2> Within<G2> for G1
where G2: Contains<G1>,

Source§

fn is_within(&self, b: &G2) -> bool