pub struct Cache { /* private fields */ }
Thread-safe cache with weight-stratified clock eviction.
The cache can be shared across threads via Arc<Cache>. All methods are synchronous
but safe to call from async contexts.
§Weight-Stratified Clock Eviction
This cache uses a two-level eviction strategy that combines policy-based prioritization with recency and frequency tracking:
- Policy-based stratification: Each entry has an eviction policy (Critical, Standard, or Volatile). When space is needed, lower-priority entries are evicted first:
- Volatile (lowest priority) - evicted first
- Standard (normal priority) - evicted if Volatile is empty
- Critical (highest priority) - evicted only as a last resort
- Clock algorithm: Within each policy bucket, a “clock sweep” finds eviction candidates. The algorithm uses two signals:
- Clock bit: Set when an entry is accessed. During eviction, if set, the bit is cleared and the entry gets another chance. If clear, the entry is a candidate.
- Frequency counter (0-255): Tracks access frequency. Even with a clear clock bit, high-frequency entries get additional chances via frequency decay.
This approach ensures that frequently-accessed entries resist eviction regardless of policy, while still respecting policy-based priorities during memory pressure.
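The sweep described above can be sketched with plain std types. This is an illustrative reconstruction, not the crate's actual internals: the names `Entry` and `sweep`, and the halving decay step, are assumptions made for the example.

```rust
// Illustrative clock sweep with second chances and frequency decay.
// Precondition: `entries` is non-empty.
struct Entry {
    key: u32,
    clock_bit: bool, // set on access; cleared by the sweep
    freq: u8,        // access-frequency counter (0-255)
}

/// Advance the clock hand until a victim is found and removed.
fn sweep(entries: &mut Vec<Entry>, hand: &mut usize) -> u32 {
    loop {
        let i = *hand % entries.len();
        *hand = (i + 1) % entries.len();
        let e = &mut entries[i];
        if e.clock_bit {
            e.clock_bit = false; // second chance: survive this pass
        } else if e.freq > 0 {
            e.freq /= 2; // frequency decay: hot entries survive longer
        } else {
            return entries.remove(i).key; // cold entry: evict it
        }
    }
}
```

A hot entry (high `freq`) needs several passes of the hand before it becomes a candidate, which is what lets frequently-accessed entries resist eviction.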
§Sharding for Concurrency
The cache divides its capacity across multiple shards (up to 64 by default). Each shard has its own lock, reducing contention in concurrent workloads. Keys are distributed to shards via their hash value.
The shard count is automatically scaled based on capacity to ensure each shard has at least 4KB of space. This prevents premature eviction due to uneven hash distribution:
- Large caches (>= 256KB): 64 shards
- Smaller caches: Scaled down proportionally (e.g., 64KB → 16 shards, 4KB → 1 shard)
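The scaling rule above admits a simple reconstruction: divide capacity by the 4KB per-shard minimum and clamp to the 1..=64 range. This is a sketch of plausible arithmetic, not the crate's verbatim implementation.

```rust
// Hypothetical reconstruction of automatic shard scaling:
// at least 4 KiB per shard, capped at 64 shards.
const MAX_SHARDS: usize = 64;
const MIN_SHARD_BYTES: usize = 4096;

fn auto_shard_count(max_size_bytes: usize) -> usize {
    (max_size_bytes / MIN_SHARD_BYTES).clamp(1, MAX_SHARDS)
}
```

This reproduces the figures quoted above: 256KB and larger gives 64 shards, 64KB gives 16, and 4KB gives a single shard.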
§Async Usage
use std::sync::Arc;

let cache = Arc::new(Cache::new(1024 * 1024));
// Share across tasks
let cache_clone = cache.clone();
tokio::spawn(async move {
// Use get_clone() in async contexts to avoid holding guards across await points
if let Some(value) = cache_clone.get_clone(&key) {
do_async_work(&value).await;
}
});
Implementations§
impl Cache
pub fn new(max_size_bytes: usize) -> Self
Create a new cache with the given maximum size in bytes.
Uses default configuration with automatic shard scaling. The number of shards is chosen to balance concurrency (more shards = less contention) with per-shard capacity (each shard needs enough space to avoid premature eviction).
- Large caches (>= 256KB): 64 shards
- Smaller caches: Scaled down to ensure at least 4KB per shard
For explicit control over shard count, use [CacheBuilder] or [with_shards].
pub fn with_shards(max_size_bytes: usize, shard_count: usize) -> Self
Create with custom shard count.
More shards reduce contention but increase memory overhead.
Recommended: num_cpus * 8 to num_cpus * 16.
Note: The shard count may be reduced if the capacity is too small to support the requested number of shards (minimum 4KB per shard). This prevents premature eviction due to uneven hash distribution.
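The num_cpus-based recommendation can be computed with the standard library alone. A sketch; `recommended_shards` is not part of this crate, and the lower end of the recommended range is an arbitrary choice for the example.

```rust
use std::thread;

// Sketch: derive a shard count from the recommendation of
// num_cpus * 8 to num_cpus * 16 (using the lower end here).
fn recommended_shards() -> usize {
    let cpus = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    cpus * 8
}
```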
pub fn insert<K: CacheKey>(&self, key: K, value: K::Value) -> Option<K::Value>
Insert a key-value pair. Evicts items if necessary.
Returns the previous value if the key existed.
§Runtime Complexity
Expected case: O(1) for successful insertion without eviction.
Worst case: O(n) where n is the number of entries per shard. Eviction happens via clock hand advancement within the shard.
pub fn get<K: CacheKey>(&self, key: &K) -> Option<Guard<'_, K::Value>>
Retrieve a value via guard. The guard holds a read lock on the shard.
§Warning
Do NOT hold this guard across .await points. Use get_clone() instead
for async contexts.
§Runtime Complexity
Expected case: O(1)
This method performs a zero-allocation hash table lookup and atomically increments the reference counter for Clock-PRO.
pub fn get_clone<K: CacheKey>(&self, key: &K) -> Option<K::Value>
Retrieve a cloned value. Safe to hold across .await points.
Requires V: Clone. This is the preferred method for async code.
§Runtime Complexity
Expected case: O(1) + O(m) where m is the cost of cloning the value.
This method performs a hash table lookup in O(1) expected time, then clones
the underlying value. If cloning is expensive, consider using Arc<T> as your
value type, which makes cloning O(1).
pub fn get_by<K, Q>(&self, key: &Q) -> Option<Guard<'_, K::Value>>
Retrieve a value via guard using a borrowed lookup key (zero allocation).
This method allows looking up entries using a borrowed key type Q that
implements CacheKeyLookup<K>, enabling zero-allocation lookups. For example,
you can use (&str, &str) to look up entries stored with (String, String) keys.
§Warning
Do NOT hold this guard across .await points. Use get_clone_by() instead
for async contexts.
§Example
use std::hash::{Hash, Hasher};

// Define borrowed lookup type
struct DbCacheKeyRef<'a>(&'a str, &'a str);
impl Hash for DbCacheKeyRef<'_> {
fn hash<H: Hasher>(&self, state: &mut H) {
self.0.hash(state);
self.1.hash(state);
}
}
impl CacheKeyLookup<DbCacheKey> for DbCacheKeyRef<'_> {
fn eq_key(&self, key: &DbCacheKey) -> bool {
self.0 == key.0 && self.1 == key.1
}
}
// Zero-allocation lookup
let value = cache.get_by::<DbCacheKey, _>(&DbCacheKeyRef(ns, db));
pub fn get_clone_by<K, Q>(&self, key: &Q) -> Option<K::Value>
Retrieve a cloned value using a borrowed lookup key. Safe to hold across .await points.
This method allows looking up entries using a borrowed key type Q that
implements CacheKeyLookup<K>, enabling zero-allocation lookups. For example,
you can use (&str, &str) to look up entries stored with (String, String) keys.
Requires V: Clone. This is the preferred method for async code.
§Example
// Zero-allocation lookup with cloned result
let value = cache.get_clone_by::<DbCacheKey, _>(&DbCacheKeyRef(ns, db));
pub fn remove<K: CacheKey>(&self, key: &K) -> Option<K::Value>
Remove a key from the cache.
§Runtime Complexity
Expected case: O(1)
Worst case: O(n) where n is the number of entries per shard.
The worst case occurs when the entry is removed from an IndexMap segment, which preserves insertion order and may require shifting elements. In practice, most removals are O(1) amortized.
pub fn contains_by<K, Q>(&self, key: &Q) -> bool
Check if a borrowed lookup key exists (zero allocation).
This method allows checking existence using a borrowed key type Q that
implements CacheKeyLookup<K>, enabling zero-allocation lookups.
§Example
// Zero-allocation existence check
let exists = cache.contains_by::<DbCacheKey, _>(&DbCacheKeyRef(ns, db));
Trait Implementations§
Auto Trait Implementations§
impl !Freeze for Cache
impl !RefUnwindSafe for Cache
impl Unpin for Cache
impl !UnwindSafe for Cache
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true, and into a Right variant otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, and into a Right variant otherwise.