pub struct ClockCache<K, V> { /* private fields */ }
An implementation of a concurrent Clock cache.
The Clock algorithm is an efficient $O(1)$ approximation of Least Recently Used (LRU). A circular “clock hand” iterates over slots and evicts entries that have not been recently accessed.
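The policy can be sketched single-threaded. Everything below (`MiniClock`, the `referenced` flag) is illustrative, not this crate's API; the real implementation replaces the boolean flag with atomic tag bits:

```rust
// A minimal, single-threaded sketch of the Clock (second-chance) policy.
struct Slot<K, V> {
    key: K,
    value: V,
    referenced: bool, // the "recently accessed" bit the clock hand inspects
}

struct MiniClock<K: PartialEq, V> {
    slots: Vec<Option<Slot<K, V>>>,
    hand: usize, // the circular clock hand
}

impl<K: PartialEq, V> MiniClock<K, V> {
    fn new(capacity: usize) -> Self {
        Self { slots: (0..capacity).map(|_| None).collect(), hand: 0 }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        for slot in self.slots.iter_mut().flatten() {
            if &slot.key == key {
                slot.referenced = true; // mark Hot: survives the next hand pass
                return Some(&slot.value);
            }
        }
        None
    }

    fn insert(&mut self, key: K, value: V) {
        loop {
            let i = self.hand;
            self.hand = (self.hand + 1) % self.slots.len();
            let second_chance = match &mut self.slots[i] {
                Some(slot) if slot.referenced => {
                    slot.referenced = false; // demote Hot -> Cold, keep looking
                    true
                }
                _ => false,
            };
            if !second_chance {
                // Slot is vacant or Cold: claim it for the new entry.
                self.slots[i] = Some(Slot { key, value, referenced: false });
                return;
            }
        }
    }
}
```

Each `insert` advances the hand past Hot slots (clearing their bit) until it finds a vacant or Cold victim, which is what makes the cost amortized $O(1)$ per operation.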
§Concurrency Model
This implementation is lock-free for reads and uses fine-grained atomic state transitions for concurrent writes and evictions.
- Tag-based Synchronization: Each slot is protected by an `AtomicU64` tag that combines a versioned index, a frequency bit (Hot/Cold), and a “Busy” bit.
- Memory Safety: Uses epoch-based reclamation (via `crossbeam-epoch`) to ensure safe destruction of evicted entries while other threads hold references.
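Such a packed tag might look like the following sketch. The bit layout here is assumed for illustration; the crate's actual bit assignment is private:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Assumed layout (illustrative only):
// bits 0..62 -> versioned slot index
// bit  62    -> frequency bit (1 = Hot, 0 = Cold)
// bit  63    -> Busy bit (a writer currently owns the slot)
const HOT_BIT: u64 = 1 << 62;
const BUSY_BIT: u64 = 1 << 63;
const INDEX_MASK: u64 = HOT_BIT - 1;

struct Tag(AtomicU64);

impl Tag {
    fn new(versioned_index: u64) -> Self {
        Tag(AtomicU64::new(versioned_index & INDEX_MASK))
    }
    fn index(&self) -> u64 {
        self.0.load(Ordering::Acquire) & INDEX_MASK
    }
    fn is_hot(&self) -> bool {
        self.0.load(Ordering::Acquire) & HOT_BIT != 0
    }
    fn is_busy(&self) -> bool {
        self.0.load(Ordering::Acquire) & BUSY_BIT != 0
    }
}
```

Packing all three fields into one `AtomicU64` lets a single compare-and-swap atomically check the version, the frequency state, and writer ownership at once.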
Implementations§
impl<K, V> ClockCache<K, V>

pub fn new(capacity: usize, metrics_config: MetricsConfig) -> Self
Creates a new Clock cache with a fixed capacity.
§Initialization Logic
- Slots: Allocates a contiguous block of memory for `capacity` slots. Each slot is wrapped in `CachePadded` to eliminate cache-line contention (false sharing) between concurrent readers and writers.
- Index Pool: Initializes the MPMC `RingQueue`. Note: the pool is populated with all available slot indices (`0..capacity`) immediately. This “cold start” state treats every slot as vacant and ready for the first wave of insertions.
- Metrics: Sets up sharded atomic counters for tracking hits, misses, and eviction latency without creating a global bottleneck.
pub fn get<Q>(&self, key: &Q, context: &ThreadContext) -> Option<Ref<K, V>>
Retrieves an entry from the cache.
Uses an Acquire load on the slot tag to synchronize with the Release
store of the last writer. If the entry is Cold, it is upgraded to Hot
via a compare_exchange to protect it from the next eviction cycle.
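The Cold-to-Hot upgrade can be sketched as a single `compare_exchange` on a packed tag word (bit positions assumed, as elsewhere; `upgrade_to_hot` is a hypothetical helper). Losing the race needs no retry: if the tag changed, another thread already touched the slot.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const HOT_BIT: u64 = 1 << 62;  // assumed position of the frequency bit
const BUSY_BIT: u64 = 1 << 63; // assumed position of the Busy bit

/// Attempt to promote a Cold slot to Hot so the clock hand skips it.
/// Returns false if the slot is already Hot, is owned by a writer,
/// or the tag changed concurrently (no retry is needed in any case).
fn upgrade_to_hot(tag: &AtomicU64) -> bool {
    let observed = tag.load(Ordering::Acquire); // pairs with the writer's Release
    if observed & (HOT_BIT | BUSY_BIT) != 0 {
        return false;
    }
    tag.compare_exchange(observed, observed | HOT_BIT, Ordering::AcqRel, Ordering::Acquire)
        .is_ok()
}
```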
pub fn insert(
    &self,
    entry: Entry<K, V>,
    context: &ThreadContext,
    quota: &mut RequestQuota,
)
Inserts a new entry or updates an existing one.
If the key is missing, the thread initiates the Eviction Loop:
- Pops an index from the `index_pool`.
- If the slot is Hot: decrements its frequency and pushes the index back (Second Chance).
- If the slot is Cold: claims the slot using a `Busy` bit, swaps the entry, and updates the `IndexTable`.
§Memory Ordering Logic
- Claiming: `AcqRel` on the tag ensures exclusive access to the slot.
- Writing: `Relaxed` store on the entry is safe because it is guarded by the surrounding tag barriers.
- Publishing: `Release` store on the final tag makes the new entry visible to all concurrent readers.
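This claim/write/publish sequence is the standard message-passing pattern: a `Relaxed` data write is made visible by a `Release` store on the tag, which a reader's `Acquire` load synchronizes with. A two-thread sketch, reduced to one data word (`publish_and_read` is a hypothetical demo function, not a crate API):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn publish_and_read() -> u64 {
    let entry = Arc::new(AtomicU64::new(0)); // stands in for the slot's entry
    let tag = Arc::new(AtomicU64::new(0));   // stands in for the slot's tag

    let (e, t) = (Arc::clone(&entry), Arc::clone(&tag));
    let writer = thread::spawn(move || {
        e.store(42, Ordering::Relaxed); // Writing: Relaxed is safe...
        t.store(1, Ordering::Release);  // Publishing: ...because this Release fences it
    });

    let (e, t) = (Arc::clone(&entry), Arc::clone(&tag));
    let reader = thread::spawn(move || {
        // Acquire pairs with the writer's Release: once the tag is observed,
        // the Relaxed entry write is guaranteed to be visible.
        while t.load(Ordering::Acquire) == 0 {
            std::hint::spin_loop();
        }
        e.load(Ordering::Relaxed)
    });

    writer.join().unwrap();
    reader.join().unwrap()
}
```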
pub fn remove<Q>(&self, key: &Q, context: &ThreadContext) -> bool
Removes an entry from the cache and resets its physical slot.
This operation performs a two-stage teardown:
- Logical Removal: Atomically removes the key from the `index_table`.
- Physical Invalidation: Transitions the associated slot’s `Tag` to a reset/vacant state via a `compare_exchange` loop.
§Synchronization
- Tag Match: If the `Tag` no longer matches the expected index or hash (due to a concurrent eviction), the method returns `true` immediately, as the target entry is already gone.
- Reset: A successful `tag.reset()` ensures that any “in-flight” `get` requests observing this slot will see a signature mismatch and return `None`.
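The physical-invalidation loop might be sketched as follows (the function name `invalidate`, the mask values, and the `VACANT` signature of 0 are all assumptions for illustration):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const HOT_BIT: u64 = 1 << 62;          // assumed bit layout
const INDEX_MASK: u64 = HOT_BIT - 1;
const VACANT: u64 = 0;                 // assumed reset/vacant signature

/// Reset the slot's tag if it still carries the expected versioned index.
/// Always reports the entry as gone: either we reset it, or a concurrent
/// eviction already recycled the slot (Tag Match case).
fn invalidate(tag: &AtomicU64, expected_index: u64) -> bool {
    loop {
        let cur = tag.load(Ordering::Acquire);
        // Tag Match: a different index/version means a concurrent eviction
        // already removed the target entry; nothing left to do.
        if cur & INDEX_MASK != expected_index {
            return true;
        }
        // Reset: clear the tag so in-flight `get`s see a signature mismatch.
        if tag
            .compare_exchange(cur, VACANT, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
        {
            return true;
        }
        // CAS lost a race (e.g. a reader flipped the Hot bit): re-read and retry.
    }
}
```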
§Memory Reclamation
Note that while the Tag is reset, the Entry itself remains in the slot
and the index is not pushed back to the index_pool. The physical
memory is reclaimed by the insert logic when the Clock hand eventually
encounters this reset slot.
§Parameters
- `key`: The key to be removed.
- `context`: The thread context for backoff coordination during contention.
§Returns
Returns `true` if the entry was found and successfully invalidated.