Struct moka::sync::Cache
pub struct Cache<K, V, S = RandomState> { /* fields omitted */ }
A thread-safe concurrent in-memory cache.

Cache supports full concurrency of retrievals and a high expected concurrency for updates. Cache utilizes a lock-free concurrent hash table SegmentedHashMap from the moka-cht crate for the central key-value storage. Cache performs a best-effort bounding of the map using an entry replacement algorithm to determine which entries to evict when the capacity is exceeded.
Examples
Cache entries are manually added using the insert method, and are stored in the cache until either evicted or manually invalidated.
Here’s an example of reading and updating a cache by using multiple threads:
```rust
use moka::sync::Cache;
use std::thread;

fn value(n: usize) -> String {
    format!("value {}", n)
}

const NUM_THREADS: usize = 16;
const NUM_KEYS_PER_THREAD: usize = 64;

// Create a cache that can store up to 10,000 entries.
let cache = Cache::new(10_000);

// Spawn threads and read and update the cache simultaneously.
let threads: Vec<_> = (0..NUM_THREADS)
    .map(|i| {
        // To share the same cache across the threads, clone it.
        // This is a cheap operation.
        let my_cache = cache.clone();
        let start = i * NUM_KEYS_PER_THREAD;
        let end = (i + 1) * NUM_KEYS_PER_THREAD;

        thread::spawn(move || {
            // Insert 64 entries. (NUM_KEYS_PER_THREAD = 64)
            for key in start..end {
                my_cache.insert(key, value(key));
                // get() returns Option<String>, a clone of the stored value.
                assert_eq!(my_cache.get(&key), Some(value(key)));
            }

            // Invalidate every 4th element of the inserted entries.
            for key in (start..end).step_by(4) {
                my_cache.invalidate(&key);
            }
        })
    })
    .collect();

// Wait for all threads to complete.
threads.into_iter().for_each(|t| t.join().expect("Failed"));

// Verify the result.
for key in 0..(NUM_THREADS * NUM_KEYS_PER_THREAD) {
    if key % 4 == 0 {
        assert_eq!(cache.get(&key), None);
    } else {
        assert_eq!(cache.get(&key), Some(value(key)));
    }
}
```
Thread Safety
All methods provided by Cache are considered thread-safe and can be safely accessed by multiple concurrent threads.

Cache<K, V, S> requires the trait bounds Send, Sync and 'static for K (key), V (value) and S (hasher state). Cache<K, V, S> will implement Send and Sync.
Sharing a cache across threads
To share a cache across threads, do one of the following:

- Create a clone of the cache by calling its clone method and pass it to the other threads.
- Wrap the cache in a sync::OnceCell or sync::Lazy from the once_cell crate, and assign it to a static variable.

Cloning is a cheap operation for Cache as it only creates thread-safe reference-counted pointers to the internal data structures.
Avoiding cloning the value at get

The return type of the get method is Option<V> instead of Option<&V>. Every time get is called for an existing key, it creates a clone of the stored value V and returns it. This is because Cache allows concurrent updates from threads, so a value stored in the cache can be dropped or replaced at any time by any other thread. get cannot return a reference &V as it is impossible to guarantee the value outlives the reference.

If you want to store values that will be expensive to clone, wrap them in std::sync::Arc before storing them in a cache. Arc is a thread-safe reference-counted pointer and its clone() method is cheap.
Expiration Policies
Cache supports the following expiration policies:

- Time to live: A cached entry will be expired after the specified duration has passed since insert.
- Time to idle: A cached entry will be expired after the specified duration has passed since get or insert.

See the CacheBuilder's doc for how to configure a cache with them.
Hashing Algorithm
By default, Cache uses a hashing algorithm selected to provide resistance against HashDoS attacks. It will be the same one used by std::collections::HashMap, which is currently SipHash 1-3.

While SipHash's performance is very competitive for medium-sized keys, other hashing algorithms will outperform it for small keys such as integers, as well as large keys such as long strings. However, those algorithms will typically not protect against attacks such as HashDoS.

The hashing algorithm can be replaced on a per-Cache basis using the build_with_hasher method of the CacheBuilder. Many alternative algorithms are available on crates.io, such as the aHash crate.
Implementations
Constructs a new Cache<K, V> that will store up to max_capacity entries.

To adjust various configuration knobs such as initial_capacity or time_to_live, use the CacheBuilder.
Returns a clone of the value corresponding to the key.

If you want to store values that will be expensive to clone, wrap them in std::sync::Arc before storing them in a cache. Arc is a thread-safe reference-counted pointer and its clone() method is cheap.

The key may be any borrowed form of the cache's key type, but Hash and Eq on the borrowed form must match those for the key type.
Ensures that the value for the key exists by inserting the result of the init function if it does not exist, and returns a clone of the value.

This method prevents the init function from being evaluated multiple times for the same key, even if the method is called concurrently by many threads; only one of the calls evaluates its function, and the other calls wait for that function to complete.
Tries to ensure that the value for the key exists by inserting an Ok result of the init function if it does not exist, and returns a clone of the value or the Err returned by the function.

This method prevents the init function from being evaluated multiple times for the same key, even if the method is called concurrently by many threads; only one of the calls evaluates its function, and the other calls wait for that function to complete.
Inserts a key-value pair into the cache.
If the cache has this key present, the value is updated.
Discards any cached value for the key.
The key may be any borrowed form of the cache's key type, but Hash and Eq on the borrowed form must match those for the key type.
Discards all cached values.
This method returns immediately, and a background thread will evict all the cached values inserted before the time when this method was called. It is guaranteed that the get method will not return these invalidated values even if they have not yet been evicted.

Like the invalidate method, this method does not clear the historic popularity estimator of keys, so it retains the client activities of trying to retrieve an item.
pub fn invalidate_entries_if<F>(
    &self,
    predicate: F,
) -> Result<PredicateId, PredicateError>
where
    F: Fn(&K, &V) -> bool + Send + Sync + 'static,
Discards cached values that satisfy a predicate.
invalidate_entries_if takes a closure that returns true or false. This method returns immediately, and a background thread will apply the closure to each cached value inserted before the time when invalidate_entries_if was called. If the closure returns true for a value, that value will be evicted from the cache.

The get method will also apply the closure to a value to determine whether it should have been invalidated. Therefore, it is guaranteed that the get method will not return invalidated values.

Note that you must call CacheBuilder::support_invalidation_closures at cache creation time, as the cache needs to maintain additional internal data structures to support this method. Otherwise, calling this method will fail with PredicateError::InvalidationClosuresDisabled.

Like the invalidate method, this method does not clear the historic popularity estimator of keys, so it retains the client activities of trying to retrieve an item.
Returns the max_capacity of this cache.

Returns the time_to_live of this cache.

Returns the time_to_idle of this cache.

Returns the number of internal segments of this cache. Cache always returns 1.
Trait Implementations
Auto Trait Implementations
impl<K, V, S = RandomState> !RefUnwindSafe for Cache<K, V, S>
impl<K, V, S = RandomState> !UnwindSafe for Cache<K, V, S>
Blanket Implementations
Mutably borrows from an owned value.