An efficient implementation of a concurrent (lockless) PLRU cache replacement policy.
Cache replacement policies are used to determine which cache lines (buffers) should be replaced because they're no longer "hot" (likely to be used in the near future). A number of different approaches exist.
LRU is the name of the cache replacement policy which replaces the cache line that was used least recently.
PLRU is the name of the class of cache replacement policies deriving from LRU, but instead of an exact measure they use an approximate measure of the recently used lines. This allows for much more efficient cache management (exact LRU bookkeeping is notoriously slow) at the cost of possibly worse replacement decisions. PLRU caches are used in many major CPUs to maintain CPU cache lines.
We use a specific version of PLRU: BPLRU or bit PLRU. Tree PLRU is an alternative, but it is not easy to make concurrent.
Heap-allocated caches of dynamic size can be created through
plru::create() unless the
no_std feature is enabled.
Our implementation is generic over fixed-size arrays (or heap allocated vectors) without
relying on any external library. We abstract over
AsRef<[AtomicU64]> in order to allow a wide
variety of different forms of arrays.
For convenience, we define a bunch of type aliases for various sizes of caches.
This implementation of BPLRU makes use of atomic integers to store the bit flags, and thus forms a concurrent and entirely lockless cache manager with excellent performance.
The cache lines are organized into 64-bit integers storing the bit flags ("hot" and "cold"). Each of these integers is called a "bulk". Whenever a cache line is touched (used), its bit flag is set.
Finding the cache line to replace is simply done by finding an unset bit in one of the bulks. If all cache lines in the bulk are hot, they're flipped so that they're all cold. This process is called cache inflation.
In order to avoid double cache replacement, we use a counter to maximize the time between the same bulk being used to find a cache line replacement.
The algorithms are described in detail in the documentation of each method.
In short, the algorithm works by cache lines holding an entry until their slot (bulk) is filled; upon cache replacement, every cache line in that bulk is then assumed to be inactive until it reclaims its place.
An analogy is parking lots. We only care about parking lot space if the lot is full (if it isn't, we can just place our car, and we're good). When it's full, we need to make sure that the owner of every car has been there in the last N hours (or: since the parking lot got filled) to avoid owners occupying spaces forever. When new cars come in and a car has been there for enough time, it has to be forcibly removed to open up new space.
Now, think of each parking lot as a bulk. And each car as a cache line.
✔ Conforms to Rust's style guidelines
✔ Idiomatic Rust
A pseudo-LRU cache tracker.
Create a heap-allocated cache with a number of cache lines that is chosen dynamically but fixed after creation.
A big cache (512 lines).
A dynamic cache.
A huge cache (2048 lines).
A medium size cache (256 lines).
A micro cache (64 lines).
A small cache (128 lines).