Struct moka::future::Cache

pub struct Cache<K, V, S = RandomState> { /* fields omitted */ }

A thread-safe, futures-aware concurrent in-memory cache.

Cache supports full concurrency of retrievals and a high expected concurrency for updates. It can be accessed inside and outside of asynchronous contexts.

Cache utilizes a lock-free concurrent hash table cht::SegmentedHashMap from the cht crate for the central key-value storage. Cache performs a best-effort bounding of the map using an entry replacement algorithm to determine which entries to evict when the capacity is exceeded.

To use this cache, enable a crate feature called “future”.

Examples

Cache entries are manually added using an insert method, and are stored in the cache until either evicted or manually invalidated:

  • Inside an async context (async fn or async block), use the insert or invalidate methods to update the cache, and await them.
  • Outside any async context, use the blocking_insert or blocking_invalidate methods. They may block for a short time under heavy concurrent updates.

Here’s an example of reading and updating a cache using multiple asynchronous tasks with the Tokio runtime:

 // Cargo.toml
 //
 // [dependencies]
 // moka = { version = "0.3", features = ["future"] }
 // tokio = { version = "1", features = ["rt-multi-thread", "macros" ] }
 // futures = "0.3"

 use moka::future::Cache;

 #[tokio::main]
 async fn main() {
     const NUM_TASKS: usize = 16;
     const NUM_KEYS_PER_TASK: usize = 64;

     fn value(n: usize) -> String {
         format!("value {}", n)
     }

     // Create a cache that can store up to 10,000 entries.
     let cache = Cache::new(10_000);

     // Spawn async tasks and write to and read from the cache.
     let tasks: Vec<_> = (0..NUM_TASKS)
         .map(|i| {
             // To share the same cache across the async tasks, clone it.
             // This is a cheap operation.
             let my_cache = cache.clone();
             let start = i * NUM_KEYS_PER_TASK;
             let end = (i + 1) * NUM_KEYS_PER_TASK;

             tokio::spawn(async move {
                 // Insert 64 entries. (NUM_KEYS_PER_TASK = 64)
                 for key in start..end {
                     // insert() is an async method, so await it.
                     my_cache.insert(key, value(key)).await;
                     // get() returns Option<String>, a clone of the stored value.
                     assert_eq!(my_cache.get(&key), Some(value(key)));
                 }

                 // Invalidate every 4th element of the inserted entries.
                 for key in (start..end).step_by(4) {
                     // invalidate() is an async method, so await it.
                     my_cache.invalidate(&key).await;
                 }
             })
         })
         .collect();

     // Wait for all tasks to complete.
     futures::future::join_all(tasks).await;

     // Verify the result.
     for key in 0..(NUM_TASKS * NUM_KEYS_PER_TASK) {
         if key % 4 == 0 {
             assert_eq!(cache.get(&key), None);
         } else {
             assert_eq!(cache.get(&key), Some(value(key)));
         }
     }
 }

Thread Safety

All methods provided by the Cache are considered thread-safe, and can be safely accessed by multiple concurrent threads.

Cache<K, V, S> will implement Send when all of the following conditions are met:

  • K (key) and V (value) implement Send and Sync.
  • S (the hash-map state) implements Send.

and will implement Sync when all of the following conditions are met:

  • K (key) and V (value) implement Send and Sync.
  • S (the hash-map state) implements Sync.

Sharing a cache across asynchronous tasks

To share a cache across async tasks (or OS threads), do one of the following:

  • Create a clone of the cache by calling its clone method and pass the clone to the other task.
  • Wrap the cache in a sync::OnceCell or sync::Lazy from the once_cell crate, and assign it to a static variable.

Cloning is a cheap operation for Cache as it only creates thread-safe reference-counted pointers to the internal data structures.

Avoiding cloning the value at get

The return type of the get method is Option<V> instead of Option<&V>. Every time get is called for an existing key, it creates a clone of the stored value V and returns it. This is because Cache allows concurrent updates, so a value stored in the cache can be dropped or replaced at any time by another thread. get cannot return a reference &V because it is impossible to guarantee that the value will outlive the reference.

If you want to store values that will be expensive to clone, wrap them in std::sync::Arc before storing them in a cache. Arc is a thread-safe reference-counted pointer and its clone() method is cheap.

Expiration Policies

Cache supports the following expiration policies:

  • Time to live: A cached entry will expire after the specified duration has elapsed since insert.
  • Time to idle: A cached entry will expire after the specified duration has elapsed since the last get or insert.

See the CacheBuilder’s doc for how to configure a cache with them.

Hashing Algorithm

By default, Cache uses a hashing algorithm selected to provide resistance against HashDoS attacks.

The default hashing algorithm is the one used by std::collections::HashMap, which is currently SipHash 1-3.

While its performance is very competitive for medium-sized keys, other hashing algorithms will outperform it for small keys such as integers, as well as for large keys such as long strings. However, those algorithms typically do not protect against attacks such as HashDoS.

The hashing algorithm can be replaced on a per-Cache basis using the build_with_hasher method of the CacheBuilder. Many alternative algorithms are available on crates.io, such as the aHash crate.

Implementations

impl<K, V> Cache<K, V, RandomState> where
    K: Hash + Eq,
    V: Clone
[src]

pub fn new(max_capacity: usize) -> Self[src]

Constructs a new Cache<K, V> that will store up to max_capacity entries.

To adjust various configuration knobs such as initial_capacity or time_to_live, use the CacheBuilder.

impl<K, V, S> Cache<K, V, S> where
    K: Hash + Eq,
    V: Clone,
    S: BuildHasher + Clone
[src]

pub fn get<Q: ?Sized>(&self, key: &Q) -> Option<V> where
    Arc<K>: Borrow<Q>,
    Q: Hash + Eq
[src]

Returns a clone of the value corresponding to the key.

If you want to store values that will be expensive to clone, wrap them in std::sync::Arc before storing them in a cache. Arc is a thread-safe reference-counted pointer and its clone() method is cheap.

The key may be any borrowed form of the cache’s key type, but Hash and Eq on the borrowed form must match those for the key type.

pub async fn insert(&self, key: K, value: V)[src]

Inserts a key-value pair into the cache.

If the cache has this key present, the value is updated.

pub fn blocking_insert(&self, key: K, value: V)[src]

Blocking insert to call outside of asynchronous contexts.

This method is intended for use cases where you are inserting from synchronous code.

pub async fn invalidate<Q: ?Sized>(&self, key: &Q) where
    Arc<K>: Borrow<Q>,
    Q: Hash + Eq
[src]

Discards any cached value for the key.

The key may be any borrowed form of the cache’s key type, but Hash and Eq on the borrowed form must match those for the key type.

pub fn blocking_invalidate<Q: ?Sized>(&self, key: &Q) where
    Arc<K>: Borrow<Q>,
    Q: Hash + Eq
[src]

Blocking invalidate to call outside of asynchronous contexts.

This method is intended for use cases where you are invalidating from synchronous code.

pub fn invalidate_all(&self)[src]

Discards all cached values.

This method returns immediately, and a background thread will evict all the cached values that were inserted before the time this method was called. The get method is guaranteed not to return these invalidated values, even if they have not yet been evicted.

Like the invalidate method, this method does not clear the historic popularity estimator of keys, so the record of past attempts to retrieve an item is retained.

pub fn max_capacity(&self) -> usize[src]

Returns the max_capacity of this cache.

pub fn time_to_live(&self) -> Option<Duration>[src]

Returns the time_to_live of this cache.

pub fn time_to_idle(&self) -> Option<Duration>[src]

Returns the time_to_idle of this cache.

pub fn num_segments(&self) -> usize[src]

Returns the number of internal segments of this cache.

Cache always returns 1.

Trait Implementations

impl<K: Clone, V: Clone, S: Clone> Clone for Cache<K, V, S>[src]

impl<K, V, S> ConcurrentCacheExt<K, V> for Cache<K, V, S> where
    K: Hash + Eq,
    S: BuildHasher + Clone
[src]

impl<K, V, S> Send for Cache<K, V, S> where
    K: Send + Sync,
    V: Send + Sync,
    S: Send
[src]

impl<K, V, S> Sync for Cache<K, V, S> where
    K: Send + Sync,
    V: Send + Sync,
    S: Sync
[src]

Auto Trait Implementations

impl<K, V, S = RandomState> !RefUnwindSafe for Cache<K, V, S>

impl<K, V, S> Unpin for Cache<K, V, S>

impl<K, V, S = RandomState> !UnwindSafe for Cache<K, V, S>

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T> ToOwned for T where
    T: Clone
[src]

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.