
This crate implements an LRU (least-recently-used) cache that is limited by the total size of its entries. As more entries are added than fit in the specified memory bound, the least-recently-used ones are ejected. The cache supports average-case O(1) insertion, retrieval, and removal.

Note that the memory required for each entry is only an estimate, and some auxiliary structure is disregarded. With some data structures (such as HashMap or HashSet), internal data is not accessible, so the required memory is underestimated even further. The cache can therefore occupy more memory than was assigned; however, in most cases this overshoot should not be excessive.
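To illustrate what such an estimate looks like, the sketch below computes a rough heap-size figure for a Vec<String> using only the standard library. The function name is made up for this example and is not part of the crate's API; the crate's own estimates follow the same principle of summing the container buffer and each element's owned heap allocation.

```rust
use std::mem;

// Rough heap-size estimate for a Vec<String>, in the spirit of the
// estimates described above. Illustrative only, not crate API.
fn estimate_vec_string_heap_size(v: &Vec<String>) -> usize {
    // Memory of the String headers stored in the Vec's heap buffer ...
    let buffer = v.capacity() * mem::size_of::<String>();
    // ... plus the character buffer owned by each String.
    let strings: usize = v.iter().map(|s| s.capacity()).sum();
    buffer + strings
}

fn main() {
    let v = vec![String::from("hello"), String::from("world")];
    let estimate = estimate_vec_string_heap_size(&v);
    // Two String headers (24 bytes each on 64-bit targets) plus
    // 5 bytes of character data each.
    assert!(estimate >= 2 * mem::size_of::<String>() + 10);
    println!("estimated heap size: {} bytes", estimate);
}
```

Note that such an estimate deliberately ignores allocator overhead and padding, which is one reason the bound is approximate rather than exact.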

Motivating example

Imagine we are building a web server that sends large responses to clients. To reduce the load, each response is split into sections, and the client is given a token to access the individual sections. However, recomputing the sections on each request causes too much server load, so they need to be cached. An LRU cache is useful in this situation, since a client's requests for sections of the same response tend to be temporally localized.

Now consider the situation where most responses are very small, but some may be large. Bounding the cache by element count would either force it to be conservatively sized, allowing fewer cached responses than memory would normally permit, or liberally sized, potentially exhausting memory if too many large responses have to be cached. To prevent this, the cache is given an upper bound on its memory usage instead of its number of elements.
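The core of this eviction policy can be sketched in a few lines of plain standard-library Rust. The type below is a simplified illustration, not this crate's implementation: it tracks a running byte total and ejects the least-recently-inserted entries until a new entry fits (a real LRU cache would additionally move entries to the back on access).

```rust
use std::collections::VecDeque;

// Minimal sketch of a byte-bounded cache. Illustrative only.
struct ByteBoundedCache {
    max_size: usize,
    current_size: usize,
    // Front = least recently used; entries are (key, value, size).
    entries: VecDeque<(u128, Vec<String>, usize)>,
}

impl ByteBoundedCache {
    fn new(max_size: usize) -> Self {
        ByteBoundedCache { max_size, current_size: 0, entries: VecDeque::new() }
    }

    fn insert(&mut self, key: u128, value: Vec<String>, size: usize) -> Result<(), ()> {
        if size > self.max_size {
            return Err(()); // this entry can never fit
        }
        // Eject least-recently-used entries until the new one fits.
        while self.current_size + size > self.max_size {
            if let Some((_, _, freed)) = self.entries.pop_front() {
                self.current_size -= freed;
            }
        }
        self.current_size += size;
        self.entries.push_back((key, value, size));
        Ok(())
    }
}

fn main() {
    let mut cache = ByteBoundedCache::new(100);
    cache.insert(1, vec![String::from("small")], 40).unwrap();
    cache.insert(2, vec![String::from("large")], 80).unwrap();
    // The first entry was ejected to make room for the second.
    assert_eq!(cache.entries.len(), 1);
    assert_eq!(cache.current_size, 80);
}
```

Note how a single large entry can displace many small ones, which is exactly the behavior a count-bounded cache cannot express.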

The code below shows how the basic structure might look.

use lru_mem::LruCache;

struct WebServer {
    cache: LruCache<u128, Vec<String>>
}

fn random_token() -> u128 {
    // A cryptographically secure random token.
    42
}

fn generate_sections(input: String) -> Vec<String> {
    // A complicated set of sections that is highly variable in size.
    vec![input.clone(), input]
}

impl WebServer {
    fn new(max_size: usize) -> WebServer {
        // Create a new web server with a cache that holds at most max_size
        // bytes of elements.
        WebServer {
            cache: LruCache::new(max_size)
        }
    }

    fn on_query(&mut self, input: String) -> u128 {
        // Generate sections, store them in the cache, and return token.
        let token = random_token();
        let sections = generate_sections(input);
        self.cache.insert(token, sections)
            .expect("sections do not fit in the cache");
 
        token
    }

    fn on_section_request(&mut self, token: u128, index: usize)
            -> Option<&String> {
        // Lookup the token and get the section with given index.
        self.cache.get(&token).and_then(|s| s.get(index))
    }
}

For further details on how to use the cache, see the LruCache struct.

Structs

An iterator that drains key-value-pairs from an LruCache ordered from least- to most-recently-used. This is obtained by calling LruCache::drain.

An iterator that takes ownership of an LruCache and iterates over its entries as key-value-pairs ordered from least- to most-recently-used. This is obtained by calling IntoIterator::into_iter on the cache.

An iterator that takes ownership of an LruCache and iterates over its keys ordered from least- to most-recently-used. This is obtained by calling LruCache::into_keys.

An iterator that takes ownership of an LruCache and iterates over its values ordered from least- to most-recently-used. This is obtained by calling LruCache::into_values.

An iterator over references to the entries of an LruCache ordered from least- to most-recently-used. This is obtained by calling LruCache::iter.

An iterator over references to the keys of an LruCache ordered from least- to most-recently-used. This is obtained by calling LruCache::keys.

An LRU (least-recently-used) cache that stores values associated with keys. Insertion, retrieval, and removal all have average-case complexity in O(1). The cache has an upper memory bound, which is set at construction time. This is enforced using estimates on the memory requirement of each key-value-pair. Note that some auxiliary data structures may allocate more memory. So, this data structure may require more than the given limit.

An iterator over references to the values of an LruCache ordered from least- to most-recently-used. This is obtained by calling LruCache::values.

Enums

An enumeration of the different errors that can occur when calling LruCache::insert.

An enumeration of the different errors that can occur when calling LruCache methods.

An enumeration of the different errors that can occur when calling LruCache methods.

Traits

A trait for types whose size on the heap can be determined at runtime. Note that for all Sized types, it is sufficient to implement this trait, as a blanket implementation of MemSize is already provided. The latter is required for the LruCache to track the size of its entries. This trait has implementations for most common data types and containers.

A trait for types whose total size in memory can be determined at runtime. This is required for the LruCache to track the size of entries. It has implementations for most common data types and containers.
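The split between heap size and total size described above can be sketched with local traits. The trait and method names below are defined in this example to mirror the pattern and are not the crate's API (consult the HeapSize and MemSize documentation for the real signatures): a type reports only its heap usage, and a blanket implementation derives the total size by adding the inline size known at compile time.

```rust
use std::mem;

// Illustrative local stand-in for the heap-size trait. Not crate API.
trait SketchHeapSize {
    fn heap_size(&self) -> usize;
}

// Illustrative local stand-in for the total-size trait. Not crate API.
trait SketchMemSize {
    fn mem_size(&self) -> usize;
}

// For Sized types, total size = inline size + heap size, which is why a
// blanket implementation of the total-size trait is possible.
impl<T: SketchHeapSize> SketchMemSize for T {
    fn mem_size(&self) -> usize {
        mem::size_of::<T>() + self.heap_size()
    }
}

#[allow(dead_code)]
struct SessionData {
    id: u64,
    payload: String,
}

impl SketchHeapSize for SessionData {
    fn heap_size(&self) -> usize {
        // Only the String's buffer lives on the heap; `id` is inline.
        self.payload.capacity()
    }
}

fn main() {
    let session = SessionData { id: 7, payload: String::from("abc") };
    // Total size = inline struct size + 3 bytes of heap-allocated text.
    assert_eq!(session.mem_size(), mem::size_of::<SessionData>() + 3);
}
```

This division means custom value types only need to describe their heap allocations; the inline portion is handled generically.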

Functions

Gets the memory an entry with the given key and value would occupy in an LRU cache, in bytes. This is also the function used internally; thus, if the returned number of bytes fits inside the cache (as can be determined using LruCache::current_size and LruCache::max_size), inserting the entry is guaranteed not to eject an element.