```rust
pub enum SyncLimit<M, V, H, P, E, OnEvictFn>
where
    M: ArcMutexMapLike,
    H: Hooks<M::V>,
    M::V: Borrow<V> + BorrowMut<V> + FromInto<V, H>,
    P: Borrow<LockableMapImpl<M, V, H>>,
    OnEvictFn: FnMut(Vec<Guard<M, V, H, P>>) -> Result<(), E>,
{
    NoLimit { /* private fields */ },
    SoftLimit {
        max_entries: NonZeroUsize,
        on_evict: OnEvictFn,
    },
}
```
An instance of this enum defines a limit on the number of entries in a LockableLruCache or a LockableHashMap. It can be used to cause old entries to be evicted if a limit on the number of entries is exceeded in a call to the following functions:
| LockableLruCache | LockableHashMap |
|---|---|
| blocking_lock | blocking_lock |
| blocking_lock_owned | blocking_lock_owned |
| try_lock | try_lock |
| try_lock_owned | try_lock_owned |
The purpose of this enum is the same as the purpose of AsyncLimit, but it takes a synchronous callback to evict entries instead of an async callback.
§Example (without limit)
```rust
use lockable::{LockableHashMap, SyncLimit};

let lockable_map = LockableHashMap::<i64, String>::new();
let guard = lockable_map.blocking_lock(4, SyncLimit::no_limit())?;
```
§Example (with limit)
```rust
use lockable::{LockableLruCache, SyncLimit};

let lockable_map = LockableLruCache::<i64, String>::new();

// Insert two entries
lockable_map
    .blocking_lock(4, SyncLimit::no_limit())?
    .insert("Value 4".to_string());
lockable_map
    .blocking_lock(5, SyncLimit::no_limit())?
    .insert("Value 5".to_string());

// Lock a third entry but set a limit of 2 entries.
// Collect any evicted entries in the `evicted` vector.
let mut evicted = vec![];
let guard = lockable_map.blocking_lock(6, SyncLimit::SoftLimit {
    max_entries: 2.try_into().unwrap(),
    on_evict: |entries| {
        for mut entry in entries {
            evicted.push(*entry.key());
            // We have to remove the entry from the map, the map doesn't do it for us.
            // If we don't remove it, we could end up in an infinite loop because the
            // map is still above the limit.
            entry.remove();
        }
        Ok::<(), lockable::Never>(())
    },
})?;

// We evicted the entry with key 4 because it was the least recently used
assert_eq!(evicted.len(), 1);
assert!(evicted.contains(&4));
```
§Variants
NoLimit
This enum variant specifies that there is no limit on the number of entries. If the locking operation causes a new entry to be created, it will be created without evicting anything.
Use SyncLimit::no_limit to create an instance.
SoftLimit
Fields
max_entries: NonZeroUsize
The maximum allowed number of entries in the cache. If this number gets exceeded by a locking call with this SyncLimit set, the on_evict callback will be called.
on_evict: OnEvictFn
This callback will be called if max_entries is exceeded. It will be passed a list of guards for entries and is expected to delete those entries from the LockableHashMap or LockableLruCache using Guard::remove. This callback can also perform any operations you need to clean up or flush data from those entries before you delete them. It is not async. If you need an async callback, take a look at the functions taking AsyncLimit instead of SyncLimit.
Setting a SyncLimit::SoftLimit for a locking call means that there is a limit on the number of entries. Entries that either have a value or that don’t have a value but are currently locked count towards that limit, see LockableHashMap::num_entries_or_locked or LockableLruCache::num_entries_or_locked.
If the locking call would cause the limit to be exceeded, the given on_evict callback will be called with some other entries. Those entries are already locked for you and on_evict is expected to delete them. It is possible that on_evict is called multiple times if the limit is still exceeded after the call.
The on_evict callback is responsible for deleting those entries; LockableHashMap and LockableLruCache will not delete any entries for you. If on_evict doesn't delete any entries, you will end up in an infinite loop because the total number of entries never gets below the limit.
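The eviction loop can be pictured with a standalone sketch. This is not the crate's actual implementation: it uses a plain `HashMap` and a hypothetical `evict_until_under_limit` helper to show why a callback that never removes entries would spin forever.

```rust
use std::collections::HashMap;

// Sketch of the soft-limit eviction loop. `on_evict` receives keys of
// candidate entries and must remove them from the map itself; the loop
// only terminates once the entry count is back under the limit.
fn evict_until_under_limit<E>(
    map: &mut HashMap<i64, String>,
    max_entries: usize,
    mut on_evict: impl FnMut(&mut HashMap<i64, String>, Vec<i64>) -> Result<(), E>,
) -> Result<(), E> {
    while map.len() >= max_entries {
        // Pick some candidate entries. A real LRU cache would pick the
        // least recently used ones here.
        let candidates: Vec<i64> = map.keys().copied().take(1).collect();
        on_evict(map, candidates)?;
    }
    Ok(())
}

fn main() {
    let mut map: HashMap<i64, String> = HashMap::new();
    map.insert(4, "Value 4".to_string());
    map.insert(5, "Value 5".to_string());

    // A well-behaved callback removes the entries it is given.
    evict_until_under_limit::<()>(&mut map, 2, |map, keys| {
        for key in keys {
            // Forgetting this remove would make the loop spin forever.
            map.remove(&key);
        }
        Ok(())
    })
    .unwrap();

    assert!(map.len() < 2);
}
```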
There is one exception, and this is why this is called a "soft" limit. If a call to a locking function has a SyncLimit::SoftLimit set but there are no entries in the cache that are currently unlocked and could be passed to an on_evict callback, i.e. if the limit is exceeded but at the same time all entries are currently locked, then exceeding the limit will be allowed, on_evict will not be called, and the locking function will lock and return successfully. This protects against a deadlock that would otherwise be hard to avoid, where multiple threads or tasks each hold locks on different keys and want to lock more keys, but the limit would block them and no thread/task wants to give up its held locks. Note that this only protects against a deadlock caused by the limit. If those threads or tasks are trying to acquire each other's locks, you will still run into a deadlock.
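The "only unlocked entries are eviction candidates" rule can be illustrated with `std::sync::Mutex::try_lock`. This is a standalone sketch of the principle, not the crate's internals; `unlocked_candidates` is a hypothetical helper.

```rust
use std::sync::Mutex;

// Sketch of the "soft" escape hatch: only entries that can be locked
// without blocking qualify as eviction candidates. If every entry is
// currently locked, the candidate list is empty and eviction is skipped,
// so the limit may be exceeded temporarily.
fn unlocked_candidates(entries: &[Mutex<String>]) -> Vec<usize> {
    entries
        .iter()
        .enumerate()
        // try_lock never blocks; it fails for entries someone else holds.
        .filter(|(_, entry)| entry.try_lock().is_ok())
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    let entries = [Mutex::new("a".to_string()), Mutex::new("b".to_string())];
    // Simulate another task currently holding a lock on the first entry.
    let _held = entries[0].lock().unwrap();
    // Only the second entry is available for eviction.
    assert_eq!(unlocked_candidates(&entries), vec![1]);
}
```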
If this is used in a LockableLruCache, then on_evict will be called with the least recently used entries, to allow for LRU-style pruning.
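The LRU ordering of candidates can be sketched with logical "last used" ticks. This only illustrates the selection order; LockableLruCache tracks recency internally, and `lru_candidates` is a hypothetical helper, not part of the crate's API.

```rust
// Sketch of LRU-ordered candidate selection: each entry carries a logical
// "last used" tick, and eviction picks the smallest ticks first.
fn lru_candidates(entries: &[(i64, u64)], how_many: usize) -> Vec<i64> {
    let mut sorted: Vec<(i64, u64)> = entries.to_vec();
    // Oldest tick first = least recently used first.
    sorted.sort_by_key(|&(_key, last_used)| last_used);
    sorted.into_iter().take(how_many).map(|(key, _)| key).collect()
}

fn main() {
    // Key 4 was last used at tick 1, key 5 at tick 2,
    // so key 4 is the least recently used.
    let entries = [(4, 1), (5, 2)];
    assert_eq!(lru_candidates(&entries, 1), vec![4]);
}
```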
§Implementations

```rust
impl<M, V, H, P> SyncLimit<M, V, H, P, Never, fn(_: Vec<Guard<M, V, H, P>>) -> Result<(), Never>> {
    pub fn no_limit() -> Self;
}
```

See SyncLimit::NoLimit. This helper function can be used to create an instance of SyncLimit::NoLimit without having to specify all the PhantomData members.