Crate xfetch

This crate implements the XFetch probabilistic early expiration algorithm.

It can be used in conjunction with cache containers such as an LRU cache to implement cache expiration and re-computation in parallel environments such as multi-threaded or multi-process systems.

It is very efficient because the algorithm does not need coordination (no locks) between processes.


Create a single cache entry and test its expiration:

```rust
use xfetch::CacheEntry;
use std::time::Duration;

let entry = CacheEntry::builder(|| 42u64)
    .with_ttl(|_| Duration::from_millis(10000))
    .build();

assert!(!entry.is_expired());
```

The CacheEntry can be used with any cache library. For example the lru crate:

```rust
use lru::LruCache;
use xfetch::CacheEntry;
use std::time::Duration;

struct SomeValue {
    value: u64,
    ttl: u64
}

fn recompute_value(n: u64) -> SomeValue {
    SomeValue { value: n, ttl: 10000 }
}

let mut cache = LruCache::new(2);

cache.put("apple", CacheEntry::builder(|| recompute_value(3))
    .with_ttl(|v| Duration::from_millis(v.ttl))
    .build());
cache.put("banana", CacheEntry::builder(|| recompute_value(2))
    .with_ttl(|v| Duration::from_millis(v.ttl))
    .build());

if let Some(entry) = cache.get(&"apple") {
    if !entry.is_expired() {
        assert_eq!(entry.get().value, 3);
    } else {
        cache.put("apple", CacheEntry::builder(|| recompute_value(3))
            .with_ttl(|v| Duration::from_millis(v.ttl))
            .build());
    }
}
```

The Algorithm

Cascading failure can occur when massively parallel computing systems with caching mechanisms come under very high load.

Under normal load, a cache miss triggers a recomputation that refreshes the cache, and other processes or threads continue as before.

Under heavy load, a cache miss may cause multiple processes or threads to try to refresh the content at the same time, adding more load to the underlying resource that the cache was meant to protect.

Several approaches can be used to mitigate this issue. The algorithm used here was proposed by Vattani, A.; Chierichetti, F.; Lowenstein, K. (2015) in the paper Optimal Probabilistic Cache Stampede Prevention.

The idea is that any worker can volunteer to recompute the value before it expires. Each worker makes an independent decision, with a probability that increases as the cache entry approaches expiration. The cache stampede is mitigated because it is unlikely that many workers decide to recompute at the same moment.
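Concretely, the per-worker chance of volunteering has a closed form: a worker recomputes when -delta * beta * ln(rand()) exceeds the time remaining until expiry (see the pseudo code below), which for rand() uniform on (0, 1] happens with probability exp(-remaining / (delta * beta)). A quick sketch, with a helper name invented for illustration:

```rust
// Hypothetical helper, not part of the xfetch crate: the probability that
// a single worker volunteers to recompute, given the remaining time until
// expiry, the recomputation time `delta` (both in seconds), and `beta`.
// A worker recomputes when -delta * beta * ln(rand()) >= remaining, which
// for rand() uniform on (0, 1] has probability exp(-remaining / (delta * beta)).
fn volunteer_probability(remaining: f64, delta: f64, beta: f64) -> f64 {
    (-remaining / (delta * beta)).exp()
}

fn main() {
    // With delta = 0.1 s and beta = 1.0:
    assert!(volunteer_probability(1.0, 0.1, 1.0) < 1e-4);   // 1 s out: almost never
    assert!((volunteer_probability(0.1, 0.1, 1.0) - 0.36788).abs() < 1e-3); // ~e^-1
    assert_eq!(volunteer_probability(0.0, 0.1, 1.0), 1.0);  // at expiry: certain
}
```

Far from expiry almost no worker volunteers, and the probability rises to 1 at the expiry time, so some worker almost surely refreshes the entry shortly before it actually expires.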

The following is the algorithm in pseudo code:
```text
function XFetch(key, ttl; beta = 1)
    value, delta, expiry <- cache_read(key)
    if !value or time() - delta * beta * ln(rand()) >= expiry then
        start <- time()
        value <- recompute_value()
        delta <- time() - start
        cache_write(key, (value, delta), ttl)
    return value
```


The parameter beta can be set greater than 1.0 to favor earlier recomputation, or less than 1.0 to favor later recomputation. The default of 1.0 is optimal for most use cases.

rand() is a random number drawn uniformly from the range (0, 1].

delta is the time required for the recomputation. The longer a value takes to recompute, the earlier the algorithm tends to trigger recomputation.
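The early-expiration test from the pseudo code can be sketched in Rust as follows. `should_recompute` is a hypothetical helper, not part of this crate's API, and the caller is assumed to supply the uniform random draw from (0, 1]:

```rust
use std::time::{Duration, Instant};

// Sketch only: `should_recompute` is a hypothetical helper, not part of
// the xfetch crate's API. `rand` must be drawn uniformly from (0, 1].
fn should_recompute(now: Instant, delta: Duration, expiry: Instant, beta: f64, rand: f64) -> bool {
    // ln(rand) <= 0 for rand in (0, 1], so the offset is non-negative;
    // the entry is tested as if "now" were a little in the future.
    let offset = Duration::from_secs_f64(delta.as_secs_f64() * beta * -rand.ln());
    now + offset >= expiry
}

fn main() {
    let now = Instant::now();
    let delta = Duration::from_millis(100);     // recomputation took 100 ms
    let expiry = now + Duration::from_secs(10); // entry expires in 10 s

    // rand = 1.0 gives a zero offset: the entry is still considered fresh.
    assert!(!should_recompute(now, delta, expiry, 1.0, 1.0));
    // A very small rand gives a large offset and triggers early recomputation.
    assert!(should_recompute(now, delta, expiry, 1.0, 1e-50));
}
```

In a real cache each worker would call such a test on every read, refreshing the entry (and its recorded delta) whenever the test fires.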




Structs

CacheEntry
A cache entry that employs probabilistic early expiration.

CacheEntryBuilder
The builder for building a CacheEntry with supplied parameters.