
Struct S3FIFOCache 

Source
pub struct S3FIFOCache<K, V>
where K: Eq + Hash,
{ /* private fields */ }

A high-concurrency, segmented cache implementing the S3-FIFO eviction algorithm.

This structure organizes memory into a multi-tiered hierarchy to achieve scan-resistance and high hit rates, specifically optimized for modern multicore processors.

§Architecture

S3-FIFO (Simple, Scalable caching with three Static FIFO queues) extends traditional FIFO by using three distinct queues:

  1. Probationary: A small FIFO queue (typically 10% of capacity) for new entries.
  2. Protected: A large FIFO queue for frequently accessed entries.
  3. Ghost: A “shadow” queue that tracks the hashes of evicted entries to inform future admission decisions.

§Concurrency & Performance

  • Lock-Free Design: Uses atomic operations and crossbeam-epoch for thread-safe access without global mutexes.
  • False Sharing Protection: Slots are wrapped in CachePadded to ensure different threads don’t invalidate each other’s CPU cache lines.
  • Index Pool: A dedicated queue manages slot reuse, eliminating the need for expensive memory allocations during the steady state.
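The false-sharing protection above can be sketched with plain std atomics. This is a hypothetical stand-in for CachePadded (the names PaddedSlot and share_cache_line are illustrative, not the crate's API): aligning each slot to a 64-byte cache line guarantees neighbouring slots never share a line.

```rust
use std::sync::atomic::AtomicU64;

// Hypothetical sketch of the role CachePadded plays here: forcing each
// slot onto its own 64-byte cache line so a writer updating one slot's
// tag cannot invalidate the line holding a neighbouring slot.
#[repr(align(64))]
pub struct PaddedSlot {
    pub tag: AtomicU64, // packed metadata: signature, frequency, busy bit
}

// Two slots share a cache line iff their addresses fall inside the same
// 64-byte-aligned window.
pub fn share_cache_line(a: &PaddedSlot, b: &PaddedSlot) -> bool {
    (a as *const PaddedSlot as usize) / 64 == (b as *const PaddedSlot as usize) / 64
}
```

Because `#[repr(align(64))]` rounds the struct size up to its alignment, consecutive slots in a `Vec<PaddedSlot>` land on distinct lines with no manual padding bytes.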

Implementations§

Source§

impl<K, V> S3FIFOCache<K, V>
where K: Eq + Hash,

Source

pub fn new(capacity: usize, metrics_config: MetricsConfig) -> Self

Initializes a new cache instance with a segmented S3-FIFO architecture.

This constructor allocates the underlying slot storage and partitions the cache capacity into functional segments designed to balance scan-resistance with high hit rates.

§Segmentation Logic
  • Probation Segment (10%): Acts as the initial landing zone for new entries. It prevents “one-hit wonders” from polluting the main cache body.
  • Protected Segment (90%): Houses frequency-proven entries. Data here has survived a probationary period or was identified as frequent via the ghost filter.
  • Ghost Queue & Filter: A historical tracking mechanism sized to match total capacity. It records the hashes of recently evicted items, allowing the admission policy to “remember” and promote returning keys.
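A minimal sketch of the 10%/90% split described above. The function name and rounding behaviour are assumptions for illustration; the real constructor's fields are private.

```rust
// Hypothetical sketch of the probation/protected capacity split.
// Rounding and the minimum-of-one rule are assumptions.
pub fn partition_capacity(capacity: usize) -> (usize, usize) {
    assert!(capacity > 0, "capacity must be non-zero");
    let probation = (capacity / 10).max(1); // ~10%, at least one slot
    let protected = capacity - probation;   // the remaining ~90%
    (probation, protected)
}
```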
§Resource Initialization
  • Index Pool: Pre-populated with all available slot indices (0..capacity). This acts as a lock-free allocator for cache slots.
  • Cache Padding: Each Slot is wrapped in CachePadded to prevent “false sharing,” a critical optimization for performance on high-core-count processors like the M1 Pro.
  • Ghost Filter: Uses a CountMinSketch with a depth of 4 to provide space-efficient frequency estimation for the admission policy.
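A depth-4 count-min sketch, as used by the ghost filter, can be outlined in a few lines. This is a generic sketch, not the crate's implementation: the width, the use of RandomState hashers, and the method names are all assumptions.

```rust
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};

// Minimal count-min sketch with depth 4, illustrating the ghost
// filter's space-efficient frequency estimation. Width and hashers
// are assumptions, not the crate's actual choices.
pub struct CountMinSketch {
    rows: [Vec<u32>; 4],
    hashers: [RandomState; 4],
}

impl CountMinSketch {
    pub fn new(width: usize) -> Self {
        Self {
            rows: std::array::from_fn(|_| vec![0u32; width]),
            hashers: std::array::from_fn(|_| RandomState::new()),
        }
    }

    fn index<T: Hash>(&self, row: usize, item: &T) -> usize {
        let mut h = self.hashers[row].build_hasher();
        item.hash(&mut h);
        (h.finish() as usize) % self.rows[row].len()
    }

    pub fn increment<T: Hash>(&mut self, item: &T) {
        for row in 0..4 {
            let i = self.index(row, item);
            self.rows[row][i] = self.rows[row][i].saturating_add(1);
        }
    }

    // The estimate is the minimum across the four rows, which bounds
    // over-counting caused by hash collisions in any single row.
    pub fn estimate<T: Hash>(&self, item: &T) -> u32 {
        (0..4)
            .map(|row| self.rows[row][self.index(row, item)])
            .min()
            .unwrap()
    }
}
```

The depth of 4 trades memory for accuracy: each extra row multiplies the chance that a collision inflates the minimum.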
§Parameters
  • capacity: The total number of entries the cache can hold. This value is distributed between the probation and protected segments.
  • metrics_config: Configuration for hit/miss/latency tracking.
Source

pub fn get<Q>(&self, key: &Q, context: &ThreadContext) -> Option<Ref<K, V>>
where Key<K>: Borrow<Q>, Q: Eq + Hash + ?Sized,

Retrieves a value from the cache, upgrading its frequency on a successful match.

This method implements a lock-free read path that utilizes atomic tags for fast validation before accessing the actual entry memory.

§Control Flow
  1. Index Resolution: Performs a lookup in the index_table. If the key is not found, records a miss and returns None.
  2. Tag Validation: Loads the slot’s Tag using Acquire semantics and validates it against the provided hash and index. This prevents accessing a slot that has been repurposed (ABA protection).
  3. Liveness & Expiration:
    • Checks if the Entry is null or if the stored key has changed.
    • Validates the entry’s TTL. If expired, it records a miss.
  4. Frequency Upgrade:
    • Attempts to increment the access frequency in the Tag via compare_exchange_weak.
    • On success, the entry is considered “Hot,” potentially protecting it from future eviction.
    • On failure (contention), the thread performs a backoff and retries the loop.
  5. Reference Return: On a successful hit, returns a Ref which wraps the entry and the Guard, ensuring memory remains valid for the caller.
§Memory Model & Synchronization
  • Acquire/Release: The Tag load (Acquire) synchronizes with the insert or evict stores (Release), ensuring the entry pointer is valid.
  • Lock-Free Reads: Readers never block writers. The frequency update is optimistic and handles contention via the ThreadContext wait/decay mechanism.
§Parameters
  • key: The key to look up.
  • context: Thread-local state for frequency decay and contention management.
§Returns
  • Some(Ref<K, V>): A handle to the entry if found and valid.
  • None: If the key is missing, expired, or a signature mismatch occurs.
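Steps 2 and 4 above can be sketched with a packed atomic tag. The bit layout, names, and saturation threshold here are assumptions for illustration; only the shape of the validate-then-CAS loop mirrors the description.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Assumed tag layout (not the crate's real one):
//   bits 0..=1  frequency (saturates at 3, as in S3-FIFO)
//   bit  2      busy flag held by writers
//   bits 3..=63 key-hash signature
const FREQ_MASK: u64 = 0b011;
const BUSY_BIT: u64 = 0b100;

pub fn pack(signature: u64, freq: u64) -> u64 {
    (signature << 3) | (freq & FREQ_MASK)
}

// Optimistic frequency bump on a hit: validate the signature with an
// Acquire load, then CAS the tag. Returns false if the slot was
// repurposed (ABA) or is locked by a writer.
pub fn try_bump_frequency(tag: &AtomicU64, signature: u64) -> bool {
    loop {
        let cur = tag.load(Ordering::Acquire);
        if cur >> 3 != signature || cur & BUSY_BIT != 0 {
            return false; // slot repurposed or busy
        }
        let freq = (cur & FREQ_MASK).min(2) + 1; // saturate at 3
        let next = (cur & !FREQ_MASK) | freq;
        match tag.compare_exchange_weak(cur, next, Ordering::AcqRel, Ordering::Acquire) {
            Ok(_) => return true,
            Err(_) => continue, // contention: retry (real code backs off here)
        }
    }
}
```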
Source

pub fn insert(&self, entry: Entry<K, V>, context: &ThreadContext, quota: &mut RequestQuota)

The main entry point for inserting or updating data within the cache.

This method implements an adaptive admission policy by distinguishing between existing entries, known “hot” candidates (via the ghost filter), and new arrivals.

§Control Flow
  1. Index Lookup: Checks the index_table to see if the key is already resident.
  2. Update Path (Resident Key):
    • If found, it attempts to lock the specific slot by transitioning its Tag to busy.
    • It validates the signature and index to ensure the slot hasn’t been repurposed (ABA protection).
    • Upon a successful lock, it swaps the Entry, increments the access frequency in the Tag, and releases the lock.
    • If the lock fails or a mismatch is detected, the thread performs a backoff and retries.
  3. Admission Path (New Key):
    • If the key is not in the index, the ghost_filter is consulted.
    • Promotion: If the key’s hash is present in the ghost filter (indicating it was recently evicted or seen), it is inserted directly into the protected_segment.
    • Probation: If the key is entirely new, it is placed in the probation_queue.
§Memory Model & Synchronization
  • AcqRel/Release: The compare_exchange_weak and subsequent store on the Tag ensure that the Entry swap is safely published to readers.
  • Epoch-Based Reclamation: guard.defer_destroy ensures the old_entry is only deallocated when no concurrent readers hold a reference to it.
  • Adaptive Backoff: Uses context.wait() and context.decay() to handle contention gracefully on highly active slots.
§Parameters
  • entry: The key-value pair to insert.
  • context: Thread-local state for synchronization and performance metrics.
  • quota: Budget for the operation, primarily used if insertion triggers cascading evictions.
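The admission branch for a non-resident key (step 3 above) reduces to a single decision. In this sketch a HashSet of evicted key hashes stands in for the ghost filter, which in reality is a count-min sketch; the enum and function names are illustrative.

```rust
use std::collections::HashSet;

#[derive(Debug, PartialEq)]
pub enum Admission {
    Protected, // promoted straight into the main segment
    Probation, // quarantined in the small FIFO first
}

// Sketch of the admission decision for a key not found in the index.
// A HashSet of recently evicted key hashes approximates the ghost filter.
pub fn admit(ghost: &HashSet<u64>, key_hash: u64) -> Admission {
    if ghost.contains(&key_hash) {
        Admission::Protected // recently evicted or seen: promote
    } else {
        Admission::Probation // entirely new: prove frequency first
    }
}
```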
Source

pub fn remove<Q>(&self, key: &Q, context: &ThreadContext) -> bool
where Key<K>: Borrow<Q>, Q: Eq + Hash + ?Sized,

Removes an entry from the cache by key, ensuring safe synchronization with concurrent readers and writers.

This method uses a two-phase approach to safely invalidate a slot: first by locking the metadata via a busy bit, and then by verifying the key identity before final removal.

§Control Flow
  1. Index Resolution: Performs a lookup in the index_table. Returns false immediately if the key is not present.
  2. Epoch & Liveness Check: Validates the Tag against the provided index to ensure the slot hasn’t been repurposed (ABA protection). If the slot is busy, it performs an adaptive backoff.
  3. Atomic Lock: Attempts to CAS the slot tag to a busy state. This grants exclusive access to the slot’s entry pointer for the duration of the removal.
  4. Key Verification: Once locked, it loads the Entry and performs a final check to ensure the resident key matches the target key.
    • Mismatch: If the key changed during lock acquisition, the tag is restored to its original state and the method returns false.
    • Match: The key is removed from the index_table, and the slot tag is reset (clearing frequency and busy bits) before being released.
§Memory Model & Synchronization
  • AcqRel/Release: Ensures that the IndexTable removal and any local modifications are globally visible before the slot’s busy bit is cleared.
  • Spin-Reduction: Utilizes context.wait() and context.decay() to prevent CPU churn when multiple threads attempt to remove or update the same hot key.
  • Epoch Safety: Uses a pin() guard to safely inspect the entry pointer without risking a use-after-free, even if another thread is concurrently evicting the slot.
§Parameters
  • key: The key of the entry to be removed.
  • context: Thread-local state for managing backoff and contention.
§Returns
  • true: The entry was found and successfully removed.
  • false: The entry was not found or the key did not match the current slot occupant.
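The two-phase approach can be sketched with the same kind of packed tag. The layout and names are assumptions, and the `resident_sig` parameter stands in for re-reading the locked entry's key; only the claim/verify/reset shape mirrors the flow above.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Assumed tag layout: bit 2 = busy flag, bits 3..=63 = key-hash signature.
const BUSY: u64 = 0b100;

// Two-phase removal sketch: (1) claim the slot by CAS-ing the busy bit,
// (2) re-verify the resident key, then reset the tag or restore it.
pub fn try_remove(tag: &AtomicU64, target_sig: u64, resident_sig: u64) -> bool {
    let cur = tag.load(Ordering::Acquire);
    if cur & BUSY != 0 || cur >> 3 != target_sig {
        return false; // slot busy or already repurposed (ABA protection)
    }
    // Phase 1: take exclusive ownership of the slot's metadata.
    if tag
        .compare_exchange(cur, cur | BUSY, Ordering::AcqRel, Ordering::Acquire)
        .is_err()
    {
        return false; // lost the race; the caller backs off and retries
    }
    // Phase 2: confirm the resident key still matches the target.
    if resident_sig != target_sig {
        tag.store(cur, Ordering::Release); // mismatch: restore and bail out
        return false;
    }
    tag.store(0, Ordering::Release); // reset: clears frequency and busy bits
    true
}
```

The Release store that clears the busy bit is what makes the index-table removal visible to other threads before the slot can be reused.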

Trait Implementations§

Source§

impl<K, V> CacheEngine<K, V> for S3FIFOCache<K, V>
where K: Eq + Hash,

Source§

fn get<Q>(&self, key: &Q, context: &ThreadContext) -> Option<Ref<K, V>>
where Key<K>: Borrow<Q>, Q: Eq + Hash + ?Sized,

Retrieves a protected reference to a cached value. Read more
Source§

fn insert(&self, entry: Entry<K, V>, context: &ThreadContext, quota: &mut RequestQuota)

Inserts a key-value pair into the cache. Read more
Source§

fn remove<Q>(&self, key: &Q, context: &ThreadContext) -> bool
where Key<K>: Borrow<Q>, Q: Eq + Hash + ?Sized,

Removes an entry from the cache. Read more
Source§

fn capacity(&self) -> usize

Returns the total number of slots allocated for the cache.
Source§

fn metrics(&self) -> MetricsSnapshot

Returns a snapshot of internal cache statistics.
Source§

impl<K, V> Drop for S3FIFOCache<K, V>
where K: Eq + Hash,

Source§

fn drop(&mut self)

Executes the destructor for this type. Read more

Auto Trait Implementations§

§

impl<K, V> !Freeze for S3FIFOCache<K, V>

§

impl<K, V> !RefUnwindSafe for S3FIFOCache<K, V>

§

impl<K, V> Send for S3FIFOCache<K, V>
where V: Send + Sync, K: Sync + Send,

§

impl<K, V> Sync for S3FIFOCache<K, V>
where V: Send + Sync, K: Sync + Send,

§

impl<K, V> Unpin for S3FIFOCache<K, V>

§

impl<K, V> UnsafeUnpin for S3FIFOCache<K, V>

§

impl<K, V> !UnwindSafe for S3FIFOCache<K, V>

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

Source§

fn vzip(self) -> V