CacheDb implements a concurrent (bucketed) key/value store. Keys must implement ‘Bucketize’, which has laxer requirements than a full ‘Hash’ implementation. ‘N’ is the number of buckets to use. This is const because it avoids dereferencing and management overhead. Buckets by themselves are not very expensive, so it is recommended to use a generously large number here; think of the expected number of concurrent accesses times four.
Implementations
impl<K, V, const N: usize> CacheDb<K, V, N> where
    K: KeyTraits,
pub fn get<'a, M>(
    &'a self,
    method: M,
    key: &K
) -> DynResult<EntryReadGuard<'_, K, V, N>> where
    M: 'a + ReadLockMethod,
Queries the Entry associated with key for reading. On success an EntryReadGuard protecting the looked-up value is returned. When a default constructor is configured (see with_constructor()) it tries to construct missing entries with get_or_insert(); when that fails, the constructor’s error is returned. Otherwise Error::NoEntry is returned when the queried item is not in the cache.
The ‘method’ defines how entries are locked and can be one of:
- Blocking: normal blocking lock; returns when the lock is acquired.
- TryLock: tries to lock the entry; returns ‘Error::LockUnavailable’ when the lock can’t be obtained instantly.
- Duration: tries to lock the entry with a timeout; returns ‘Error::LockUnavailable’ when the lock can’t be obtained within this time.
- Instant: tries to lock the entry until some point in time; returns ‘Error::LockUnavailable’ when the lock can’t be obtained in time.
For read locks the methods above can be wrapped in ‘Recursive()’ to allow a thread to relock any lock it already holds. Write/mutable locks do not support recursive locking.
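The lock methods above can be sketched as follows. This is an illustration only: the exact import paths for the lock-method types and the ‘CacheDb::new()’ constructor name are assumptions, and the snippet requires the crate to compile.

```rust
// Sketch only: import paths and `CacheDb::new()` are assumed; consult the
// crate for the exact names.
use std::time::Duration;
use cachedb::{Blocking, CacheDb, Recursive, TryLock};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 64 buckets, chosen generously as the docs recommend.
    let db: CacheDb<String, u64, 64> = CacheDb::new();
    db.insert(&"hits".to_string(), |_key| Ok(0))?;

    // Blocking lock: waits until the read lock is acquired.
    let guard = db.get(Blocking, &"hits".to_string())?;

    // Recursive try-lock: a thread may relock a read lock it already holds.
    let again = db.get(Recursive(TryLock), &"hits".to_string())?;
    drop((guard, again));

    // Timeout-based lock: fails with Error::LockUnavailable after 100ms.
    let _timed = db.get(Duration::from_millis(100), &"hits".to_string())?;
    Ok(())
}
```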
pub fn get_mut<'a, M>(
    &'a self,
    method: M,
    key: &K
) -> DynResult<EntryWriteGuard<'_, K, V, N>> where
    M: 'a + ReadLockMethod + WriteLockMethod,
Queries the Entry associated with key for writing. On success an EntryWriteGuard protecting the looked-up value is returned. When a default constructor is configured (see with_constructor()) it tries to construct missing entries with get_or_insert_mut(); when that fails, the constructor’s error is returned. Otherwise Error::NoEntry is returned when the queried item is not in the cache.
For locking methods see get().
pub fn insert<F>(&self, key: &K, ctor: F) -> DynResult<bool> where
    F: FnOnce(&K) -> DynResult<V>,
Tries to insert an entry with the given constructor. Returns Ok(true) when the constructor was called, Ok(false) when an item is already present under the given key, or an Err() in case the constructor failed.
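The three outcomes can be sketched like this (‘CacheDb::new()’ is an assumed constructor name, for illustration only):

```rust
// Sketch: Ok(true) means the constructor ran; Ok(false) means the key
// already existed and the constructor was never called.
use cachedb::CacheDb;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db: CacheDb<u32, String, 16> = CacheDb::new(); // constructor name assumed

    let created = db.insert(&1, |_key| Ok("first".to_string()))?;
    assert!(created); // constructor was called, entry inserted

    let created = db.insert(&1, |_key| Ok("second".to_string()))?;
    assert!(!created); // key already present, constructor skipped
    Ok(())
}
```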
pub fn get_or_insert<'a, M, F>(
    &'a self,
    method: M,
    key: &K,
    ctor: F
) -> DynResult<EntryReadGuard<'_, K, V, N>> where
    F: FnOnce(&K) -> DynResult<V>,
    M: 'a + ReadLockMethod,
Query an Entry for reading or construct it (atomically).
For locking methods see get().
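A typical use is "construct on first access, read on every later access", sketched below. The ‘CacheDb::new()’ constructor name and the guard dereferencing to the value are assumptions for illustration.

```rust
// Sketch: look up the entry or construct it atomically, then read it
// through the returned guard. Racing threads see exactly one construction.
use cachedb::{Blocking, CacheDb};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db: CacheDb<String, Vec<u8>, 16> = CacheDb::new(); // name assumed

    let guard = db.get_or_insert(Blocking, &"blob".to_string(), |_key| {
        Ok(vec![0u8; 1024]) // only runs when the key is absent
    })?;
    assert_eq!(guard.len(), 1024); // guard derefs to the value (assumed)
    Ok(())
}
```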
pub fn get_or_insert_mut<'a, M, F>(
    &'a self,
    method: M,
    key: &K,
    ctor: F
) -> DynResult<EntryWriteGuard<'_, K, V, N>> where
    F: FnOnce(&K) -> DynResult<V>,
    M: 'a + ReadLockMethod + WriteLockMethod,
Query an Entry for writing or construct it (atomically).
For locking methods see get().
pub fn remove(&self, key: &K)
Removes an element from the cache. When the element is not in use it is dropped immediately. When it is in use, the expire bit is set, so it will be evicted with priority.
pub fn disable_lru_eviction(&self) -> &Self
Disables the LRU eviction. Can be called multiple times; every call should eventually be paired with an ‘enable_lru_eviction()’ call to re-enable the LRU. Failing to do so may keep the CacheDb filling up forever; however, this might be intentional to disable LRU expiration entirely.
pub fn enable_lru_eviction(&self) -> &Self
Re-enables the LRU eviction after it was disabled. Every call must be preceded by a call to ‘disable_lru_eviction()’. Calling it without a matching ‘disable_lru_eviction()’ will panic with an integer underflow.
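The pairing requirement can be sketched as a bulk-load helper (‘CacheDb::new()’ and the scenario are assumptions for illustration):

```rust
// Sketch: every disable must be matched by exactly one enable, since the
// calls maintain an internal counter (an unmatched enable panics on underflow).
use cachedb::CacheDb;

fn bulk_load(
    db: &CacheDb<u64, u64, 16>,
    items: &[(u64, u64)],
) -> Result<(), Box<dyn std::error::Error>> {
    db.disable_lru_eviction(); // suspend eviction during the load
    for (k, v) in items {
        let v = *v;
        db.insert(k, move |_key| Ok(v))?;
    }
    db.enable_lru_eviction(); // matching call restores eviction
    Ok(())
}
```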
pub fn contains_key(&self, key: &K) -> bool
Checks if the CacheDb has the given key stored. Note that this can be racy when other threads access the CacheDb at the same time.
pub fn with_constructor(
    self,
    ctor: &'static (dyn Fn(&K) -> DynResult<V> + Sync + Send)
) -> Self
Registers a default constructor with the CacheDb. When present, cachedb.get() and cachedb.get_mut() will try to construct missing items.
pub fn stats(&self) -> (usize, usize, usize)
Gets some basic stats about utilization. Returns a tuple of (capacity, len, cached) summed over all buckets. The results are approximate because other threads may modify the underlying cache at the same time.
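Destructuring the tuple makes the three counters explicit; a minimal sketch:

```rust
// Sketch: the three counters are sums over all buckets and are only
// approximate while other threads access the CacheDb.
use cachedb::CacheDb;

fn report(db: &CacheDb<u64, u64, 16>) {
    let (capacity, len, cached) = db.stats();
    println!("capacity={capacity} len={len} cached={cached}");
}
```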
pub fn config_highwater(&self, highwater: usize) -> &Self
The ‘highwater’ limit is the maximum number of elements each bucket may hold. When it is exceeded, unused elements are evicted from the LRU list. Note that the number of elements in use can still exceed this limit. For performance reasons this limit is per bucket; when exact accounting is needed, use a one-shard CacheDb. Defaults to ‘usize::MAX’, which means no upper limit is set.
pub fn config_target_cooldown(&self, target_cooldown: u32) -> &Self
The ‘cache_target’ is only recalculated after this many inserts in a bucket. Should be in the lower hundreds. Defaults to 100.
pub fn config_min_capacity_limit(&self, min_capacity_limit: usize) -> &Self
Sets the lower limit of the ‘cache_target’ linear interpolation region. Some hundreds to thousands of entries are recommended. Should be less than ‘max_capacity_limit’. Defaults to 1000.
pub fn config_max_capacity_limit(&self, max_capacity_limit: usize) -> &Self
Sets the upper limit of the ‘cache_target’ linear interpolation region. The recommended value is around the maximum expected number of entries. Defaults to 10000000.
pub fn config_min_cache_percent(&self, min_cache_percent: u8) -> &Self
Sets the lower limit for the ‘cache_target’ in percent at ‘max_capacity_limit’. When a high number of entries is stored, it is desirable to have a lower percentage of cached items so less memory is wasted. Note that this counts against the ‘capacity’ of the underlying container, not the stored entries. Recommended values are around 5%, but this may vary with access patterns. Should be lower than ‘max_cache_percent’. Defaults to 5%.
pub fn config_max_cache_percent(&self, max_cache_percent: u8) -> &Self
Sets the upper limit for the ‘cache_target’ in percent at ‘min_capacity_limit’. When only few entries are stored in a CacheDb it is reasonable to use a lot of space for caching. Note that this counts against the ‘capacity’ of the underlying container, thus it should not be significantly over 60% at most. Defaults to 60%.
pub fn config_evict_batch(&self, evict_batch: u8) -> &Self
Sets the number of entries removed at once when evicting entries from the cache. Since evicting branches into the code paths for removing the entries and calling their destructors, it is a bit more cache friendly to batch a few such operations together. Defaults to 16.
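Since the config_* methods above return &Self, they can be chained at startup. A sketch of tuning a fresh instance; ‘CacheDb::new()’ and every concrete value here are assumptions for illustration, not recommendations:

```rust
// Sketch: the config_* methods return &Self, so tuning calls chain.
use cachedb::CacheDb;

fn main() {
    let db: CacheDb<u64, u64, 64> = CacheDb::new(); // constructor name assumed
    db.config_highwater(100_000)          // per-bucket cap on held elements
        .config_target_cooldown(200)      // recalc cache_target every 200 inserts
        .config_min_capacity_limit(2_000) // lower end of interpolation region
        .config_max_capacity_limit(5_000_000) // upper end of interpolation region
        .config_min_cache_percent(5)      // cache share at max_capacity_limit
        .config_max_cache_percent(50)     // cache share at min_capacity_limit
        .config_evict_batch(32);          // entries removed per eviction batch
}
```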
pub fn evict(&self, number: usize) -> usize
Evicts up to number entries. The implementation is simple: it tries to evict number/N entries from each bucket, so when the distribution is not optimal fewer elements will be removed. Removes no entries when LRU eviction is disabled. Returns the number of items that were evicted.
Trait Implementations
Auto Trait Implementations
impl<K, V, const N: usize> !RefUnwindSafe for CacheDb<K, V, N>
impl<K, V, const N: usize> Send for CacheDb<K, V, N> where
K: Send,
V: Send,
impl<K, V, const N: usize> Sync for CacheDb<K, V, N> where
K: Send,
V: Send,
impl<K, V, const N: usize> Unpin for CacheDb<K, V, N>
impl<K, V, const N: usize> !UnwindSafe for CacheDb<K, V, N>
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.