pub struct SegmentedCache<
    K,
    V,
    S: BuildHasher = RandomState,
    W: Weigher<K, V> = One,
    L: Lifecycle<K, V> = DefaultLifecycle,
> { /* private fields */ }
Implementations
impl<K, V, S, W, L> SegmentedCache<K, V, S, W, L>
pub fn new_with_lifecycle( segments: usize, max_weight: usize, lifecycle: L, ) -> Self
pub fn put(&self, key: K, value: V)
pub fn remove(&self, key: &K) -> Option<V>
pub fn get<Q>(&self, key: &Q) -> Option<V>
pub fn evict_all(&self)
Clears the cache, via eviction.

If you want to handle the evicted entries, use the Lifecycle trait. The order of eviction is unspecified.

In the presence of concurrent writes, this function does not guarantee that the cache is empty afterwards. If you want to empty the cache, stop writes first, then call this function. If you just want to reset the cache, you can call it with concurrent writes: that is safe, the cache just may not end up empty.

It blocks 1/segments of the cache at a time, for a duration proportional to that segment's size, so it can be fairly intrusive with large segments, especially with an expensive on_eviction Lifecycle if you have one.
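The segment-at-a-time locking described above can be sketched with a simplified stand-in, using only the standard library. `SegmentedMap` and its internals are illustrative assumptions, not the crate's real implementation; the point is that each key hashes to one segment behind its own lock, so clearing proceeds one segment at a time:

```rust
use std::collections::HashMap;
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hash, Hasher};
use std::sync::Mutex;

// Illustrative sketch: keys hash to one of N independently locked
// segments, so `evict_all` only ever blocks 1/segments of the map.
struct SegmentedMap<K, V> {
    hasher: RandomState,
    segments: Vec<Mutex<HashMap<K, V>>>,
}

impl<K: Hash + Eq, V: Clone> SegmentedMap<K, V> {
    fn new(segments: usize) -> Self {
        Self {
            hasher: RandomState::new(),
            segments: (0..segments).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    // Pick the segment for a key by hashing it.
    fn segment_for(&self, key: &K) -> usize {
        let mut h = self.hasher.build_hasher();
        key.hash(&mut h);
        (h.finish() as usize) % self.segments.len()
    }

    fn put(&self, key: K, value: V) {
        let idx = self.segment_for(&key);
        self.segments[idx].lock().unwrap().insert(key, value);
    }

    fn get(&self, key: &K) -> Option<V> {
        let idx = self.segment_for(key);
        self.segments[idx].lock().unwrap().get(key).cloned()
    }

    // Drain segments one by one, holding only one segment's lock at a
    // time. Concurrent writers can repopulate already-drained segments,
    // which is why the map is not guaranteed empty afterwards.
    fn evict_all(&self) {
        for seg in &self.segments {
            seg.lock().unwrap().clear();
        }
    }
}

fn main() {
    let cache = SegmentedMap::new(4);
    cache.put("a", 1);
    cache.put("b", 2);
    assert_eq!(cache.get(&"a"), Some(1));
    cache.evict_all();
    assert_eq!(cache.get(&"a"), None);
}
```

Note that `evict_all` here never holds more than one segment lock, which is what keeps the operation only partially intrusive: reads and writes to the other segments proceed unhindered while one segment is being drained.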
Trait Implementations

Auto Trait Implementations
impl<K, V, S, W, L> Freeze for SegmentedCache<K, V, S, W, L>
where
    S: Freeze,
impl<K, V, S, W, L> !RefUnwindSafe for SegmentedCache<K, V, S, W, L>
impl<K, V, S, W, L> Send for SegmentedCache<K, V, S, W, L>
impl<K, V, S, W, L> Sync for SegmentedCache<K, V, S, W, L>
impl<K, V, S, W, L> Unpin for SegmentedCache<K, V, S, W, L>
impl<K, V, S, W, L> UnwindSafe for SegmentedCache<K, V, S, W, L>
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.