fn_cache/lib.rs
//! This crate implements an easy way to cache the results of
//! a function. If you have a slow-running function, this
//! can be used to speed up successive runs dramatically.
//! It is also quite useful for memoization of recursive
//! functions, to prevent calculating the same value
//! twice in different calls.
//!
//! Of particular note, this caching is done without
//! cloning or copying: functions may return large objects,
//! while the cache hands out references to them instead of
//! copies.
//!
//! # Allowed functions
//! This crate attempts to remain fairly flexible with
//! the functions it accepts. All of the following are
//! allowed:
//! * [`fn`][fn primitive] types.
//! * [`Fn`] types that capture no references.
//! * [`Fn`] + `'static` types that take only `'static` references.
//! * [`Fn`] + `'a` types that take references of lifetime `'a`.
//!
//! For obvious reasons, [`FnMut`] and [`FnOnce`] are not allowed,
//! as the functions need to be re-runnable and pure.
//!
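//! As a std-only sketch of the accepted callable kinds (the `apply`
//! function here is just a stand-in, not part of this crate), all of
//! the following implement [`Fn`] and could back a cache:
//!
//! ```rust
//! fn apply<F: Fn(u32) -> u32>(f: F, x: u32) -> u32 {
//!     f(x)
//! }
//!
//! fn double(x: u32) -> u32 {
//!     x * 2
//! }
//!
//! // A plain `fn` item.
//! assert_eq!(apply(double, 3), 6);
//!
//! // A non-capturing closure.
//! assert_eq!(apply(|x| x + 1, 3), 4);
//!
//! // A closure capturing a local still implements `Fn`,
//! // since calling it only needs `&self`.
//! let offset = 10;
//! let add_offset = |x| x + offset;
//! assert_eq!(apply(add_offset, 3), 13);
//! ```
//!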
//! # Examples
//!
//! The caches can handle recursive functions, with a shortcut
//! for defining non-recursive ones. Each cache has a
//! `recursive` constructor for creating a recursion-capable
//! cache, which requires the function to accept the cache as
//! its first argument. Each `new` constructor takes a function
//! that does not take the cache as an argument.
//!
//! ## Non-recursive
//!
//! Here is an example of a function that takes a while to
//! calculate. Instead of running the calculation each time,
//! you'd like to run it just once and recall the value. The
//! results are stored in a [`HashCache`] for random access.
//!
//! ```rust
//! use fn_cache::{FnCache, HashCache};
//! use std::{thread, time};
//!
//! let sleep_time = time::Duration::from_secs(3);
//!
//! let mut cache = HashCache::new(|&x| {
//!     thread::sleep(sleep_time);
//!     x
//! });
//!
//! let start = time::Instant::now();
//! assert_eq!(cache.get(100), &100);
//! assert_eq!(cache.get(100), &100);
//!
//! // The elapsed time is only slightly longer than the sleep time,
//! // far less than twice it.
//! assert!(time::Instant::now() - start < sleep_time.mul_f32(1.1));
//! ```
//!
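//! To see what the cache is doing for you, here is a minimal,
//! std-only sketch of the same idea (an illustration only, not this
//! crate's implementation): run the function on a miss, store the
//! result, and hand back a reference to it.
//!
//! ```rust
//! use std::collections::HashMap;
//!
//! struct Memo<F> {
//!     f: F,
//!     store: HashMap<u64, u64>,
//!     calls: usize, // how often the wrapped function actually ran
//! }
//!
//! impl<F: Fn(u64) -> u64> Memo<F> {
//!     fn get(&mut self, x: u64) -> &u64 {
//!         if !self.store.contains_key(&x) {
//!             self.calls += 1;
//!             let v = (self.f)(x);
//!             self.store.insert(x, v);
//!         }
//!         &self.store[&x]
//!     }
//! }
//!
//! let mut cache = Memo { f: |x| x * x, store: HashMap::new(), calls: 0 };
//! assert_eq!(*cache.get(100), 10_000);
//! assert_eq!(*cache.get(100), 10_000);
//! // The wrapped function ran only once for the repeated input.
//! assert_eq!(cache.calls, 1);
//! ```
//!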
//! ## Recursive
//!
//! The following example shows a recursive Fibonacci
//! implementation, which would be O(2ⁿ) without
//! memoization (caching). With memoization, it becomes
//! O(n), and can easily be calculated.
//!
//! ```rust
//! use fn_cache::{FnCache, HashCache};
//!
//! let mut cache = HashCache::<u8, u128>::recursive(|cache, x|
//!     match x {
//!         0 => 0,
//!         1 => 1,
//!         _ => *cache.get(x - 1) + *cache.get(x - 2),
//!     }
//! );
//!
//! assert_eq!(
//!     *cache.get(186),
//!     332_825_110_087_067_562_321_196_029_789_634_457_848
//! );
//! ```
//!
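//! The same memoization can be written by hand with only the
//! standard library; this sketch (not this crate's code) shows the
//! plumbing that `recursive` takes care of for you:
//!
//! ```rust
//! use std::collections::HashMap;
//!
//! // Look the value up first; compute and store it only on a miss.
//! fn fib(n: u8, memo: &mut HashMap<u8, u128>) -> u128 {
//!     if let Some(&v) = memo.get(&n) {
//!         return v;
//!     }
//!     let v = match n {
//!         0 => 0,
//!         1 => 1,
//!         _ => fib(n - 1, memo) + fib(n - 2, memo),
//!     };
//!     memo.insert(n, v);
//!     v
//! }
//!
//! assert_eq!(
//!     fib(186, &mut HashMap::new()),
//!     332_825_110_087_067_562_321_196_029_789_634_457_848
//! );
//! ```
//!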
//! For even bigger results, the [num] crate might be employed.
//! To avoid copying the `BigUint`s when accessing the cache twice,
//! [`FnCacheMany::get_many`] can retrieve multiple values at once,
//! sidestepping the need to hold a reference while mutating the
//! cache again. Additionally, since the inputs start at 0 and each
//! value must be filled in before the next is calculated, you
//! might use a [`VecCache`] as an optimization.
//!
//! ```rust
//! use fn_cache::{FnCache, FnCacheMany, VecCache};
//! use num_bigint::BigUint;
//!
//! let mut cache = VecCache::recursive(|cache, x|
//!     match x {
//!         0 => BigUint::new(vec![0]),
//!         1 => BigUint::new(vec![1]),
//!         _ => cache.get_many([x - 1, x - 2]).into_iter().sum(),
//!     }
//! );
//!
//! assert_eq!(
//!     cache.get(999),
//!     &BigUint::parse_bytes(b"26863810024485359386146727202142923967616609318986952340123175997617981700247881689338369654483356564191827856161443356312976673642210350324634850410377680367334151172899169723197082763985615764450078474174626", 10).unwrap()
//! );
//! ```
//!
//! ## Implementing your own Container
//!
//! Maybe you've got a more efficient container for your use case or access pattern. You can have
//! [`GenericCache`] do most of the heavy lifting (in fact, that's what `HashCache` and `BTreeCache`
//! do!) by simply implementing [`SparseContainer`] on your storage type.
//!
//! ```rust
//! use std::collections::BTreeMap;
//! use fn_cache::{GenericCache, container::SparseContainer};
//!
//! type MyCache<'f, I, O> = GenericCache<'f, MyContainer<I, O>>;
//!
//! struct MyContainer<I, O>(BTreeMap<I, O>);
//!
//! impl<I: Ord, O> SparseContainer for MyContainer<I, O> {
//!     type Input = I;
//!     type Output = O;
//!
//!     fn get(&self, input: &Self::Input) -> Option<&Self::Output> {
//!         self.0.get(input)
//!     }
//!
//!     fn put(&mut self, input: Self::Input, output: Self::Output) -> &Self::Output {
//!         self.0.entry(input).or_insert(output)
//!     }
//! }
//! ```
//!
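//! A `put` written this way leans on `BTreeMap`'s entry API:
//! `entry(input).or_insert(output)` inserts only when the key is
//! absent and always returns a reference to the stored value, so a
//! repeated `put` for the same input keeps the first result. A
//! std-only illustration of that behavior:
//!
//! ```rust
//! use std::collections::BTreeMap;
//!
//! let mut map: BTreeMap<u32, &str> = BTreeMap::new();
//!
//! // The first call inserts and returns a reference to the new value.
//! assert_eq!(*map.entry(1).or_insert("first"), "first");
//!
//! // A second call with the same key keeps the existing value and
//! // returns a reference to it; "second" is discarded.
//! assert_eq!(*map.entry(1).or_insert("second"), "first");
//!
//! assert_eq!(map.len(), 1);
//! ```
//!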
//! [fn primitive]: https://doc.rust-lang.org/std/primitive.fn.html
//! [num]: https://docs.rs/num/
pub mod btree_cache;
pub mod container;
pub mod fn_cache;
pub mod generic_cache;
pub mod hash_cache;
pub mod vec_cache;

#[cfg(test)]
mod tests;

pub use crate::btree_cache::BTreeCache;
pub use crate::fn_cache::{FnCache, FnCacheMany};
pub use crate::generic_cache::GenericCache;
pub use crate::hash_cache::HashCache;
pub use crate::vec_cache::VecCache;