pub struct CacheAdvisor { /* private fields */ }
A simple eviction manager with 256 shards and two segments per shard to provide scan resistance. Tells you when to evict items from a cache.
Features:
- two-segment LRU: protects against cache pollution from single-hit items
- 256 shards accessed via non-blocking flat-combining
- local access buffer that must fill up before accessing shared state
- compresses the cost associated with each item to a `u8` using a compression technique that converges to the true overall sum of costs over time, but allows much less memory to be used for accounting
Examples

```rust
use cache_advisor::CacheAdvisor;

// each shard stores 10 bytes, 10% of that is in the entry cache
let mut ca = CacheAdvisor::new(256 * 10, 10);

// add item 0 into entry cache
let should_evict = ca.accessed(0, 1);
assert!(should_evict.is_empty());

// promote item 0 into main cache
let should_evict = ca.accessed(0, 1);
assert!(should_evict.is_empty());

// hit other items only once, like a big scan
for i in 1..5000 {
    let id = i * 256;
    let evicted = ca.accessed(id, 1);

    // assert that 0 is never evicted while scanning
    assert!(!evicted.contains(&(0, 1)));
}

let mut zero_evicted = false;

// hit other items more than once, and assert that item 0
// does eventually get evicted.
for i in 1..5000 {
    let id = i * 256;
    zero_evicted |= ca.accessed(id, 1).contains(&(0, 1));
    zero_evicted |= ca.accessed(id, 1).contains(&(0, 1));
    zero_evicted |= ca.accessed(id, 1).contains(&(0, 1));
}

assert!(zero_evicted);
```
Implementations
impl CacheAdvisor

pub fn new(capacity: usize, entry_percent: u8) -> Self
Instantiates a new CacheAdvisor eviction manager.

`entry_percent` is how much of the cache should be devoted to the "entry" cache. When new items are added to the system, they are inserted into the entry cache first. If they are accessed at some point while still in the entry cache, they are promoted to the main cache. This provides "scan resistance": the cache avoids being destroyed by workloads like a scan that could otherwise push out all of the frequently-accessed items. A value of 20 is a reasonable default, reserving 20% of the cache capacity for the entry cache and 80% for the main cache. This value must be less than or equal to 100. Until the main cache has filled to the point where items are evicted, items pushed out of the entry cache flow into the main cache, so you don't need to worry about under-utilizing available memory; the split only changes behavior once the cache is full, to prevent scans from kicking other items out.
pub fn accessed(&mut self, id: u64, cost: usize) -> Vec<(u64, usize)>
Called when an item is accessed. Returns a Vec of items to be evicted. Avoids blocking under contention by using flat-combining on 256 LRU shards.
pub fn accessed_reuse_buffer(&mut self, id: u64, cost: usize) -> &[(u64, usize)]
Similar to `accessed`, except this reuses an internal vector for storing items to be evicted, which is returned to callers by reference. If the returned slice is huge and you would like to reclaim the underlying memory, call the `reset_internal_access_buffer` method. This can improve throughput by around 10% in some cases compared to the simpler `accessed` method above, which may need to copy items several times as the returned vector is expanded.
pub fn reset_internal_access_buffer(&mut self)
Resets the internal access buffer, freeing any memory it may have been holding onto. This should only be called in combination with `accessed_reuse_buffer`, if you want to release the memory that the internal buffer may be consuming. You probably don't need to call this unless the previous slice returned by `accessed_reuse_buffer` was over a few thousand items long, if not an order of magnitude or two larger than that, which should ideally be a rare event in workloads where most inserted items are somewhat clustered in size.
Trait Implementations
impl Clone for CacheAdvisor

fn clone(&self) -> CacheAdvisor

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from `source`.

impl Debug for CacheAdvisor
impl Default for CacheAdvisor

fn default() -> CacheAdvisor
Returns a CacheAdvisor with a default capacity of 1 million and a 20% entry cache.