pub struct LockableHashMap<K, V>
where K: Eq + PartialEq + Hash + Clone,
{ /* private fields */ }

A thread-safe hash map where individual keys can be locked/unlocked, even if there is no entry for that key in the map. All keys are initially considered “unlocked”; once a key is locked, a second thread trying to acquire a lock for the same key has to wait until the first lock is released.

use lockable::{AsyncLimit, LockableHashMap};

let lockable_map: LockableHashMap<i64, String> = LockableHashMap::new();
let entry1 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
let entry2 = lockable_map.async_lock(5, AsyncLimit::no_limit()).await?;

// This next line would cause a deadlock or panic because `4` is already locked on this thread
// let entry3 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(entry1);
let entry3 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;

A guard holding the lock for a key can be used to insert an entry for that key into the hash map, remove it from the hash map, or modify the value of an existing entry.

use lockable::{AsyncLimit, LockableHashMap};

async fn insert_entry(
    lockable_map: &LockableHashMap<i64, String>,
) -> Result<(), lockable::Never> {
    let mut entry_guard = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
    entry_guard.insert(String::from("Hello World"));
    Ok(())
}

async fn remove_entry(
    lockable_map: &LockableHashMap<i64, String>,
) -> Result<(), lockable::Never> {
    let mut entry_guard = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
    entry_guard.remove();
    Ok(())
}

let lockable_map: LockableHashMap<i64, String> = LockableHashMap::new();
assert_eq!(
    None,
    lockable_map
        .async_lock(4, AsyncLimit::no_limit())
        .await?
        .value()
);
insert_entry(&lockable_map).await?;
assert_eq!(
    Some(&String::from("Hello World")),
    lockable_map
        .async_lock(4, AsyncLimit::no_limit())
        .await?
        .value()
);
remove_entry(&lockable_map).await?;
assert_eq!(
    None,
    lockable_map
        .async_lock(4, AsyncLimit::no_limit())
        .await?
        .value()
);
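
A guard can also be used to modify the value of an existing entry in place. The following is only a sketch: it assumes the entry guard exposes a value_mut accessor returning Option<&mut V>; check the Guard documentation of your crate version for the exact method name.

use lockable::{AsyncLimit, LockableHashMap};

async fn append_suffix(
    lockable_map: &LockableHashMap<i64, String>,
) -> Result<(), lockable::Never> {
    let mut entry_guard = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
    // `value_mut` (assumed here) gives mutable access to the value if the entry exists
    if let Some(value) = entry_guard.value_mut() {
        value.push_str(" (modified)");
    }
    Ok(())
}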

You can use an arbitrary type to index hash map entries, as long as that type implements PartialEq + Eq + Hash + Clone.

use lockable::{AsyncLimit, LockableHashMap};

#[derive(PartialEq, Eq, Hash, Clone)]
struct CustomLockKey(u32);

let lockable_map: LockableHashMap<CustomLockKey, String> = LockableHashMap::new();
let guard = lockable_map
    .async_lock(CustomLockKey(4), AsyncLimit::no_limit())
    .await?;

Under the hood, a LockableHashMap is a std::collections::HashMap of Mutexes, with some logic making sure that empty entries can also be locked and that there aren’t any race conditions when adding or removing entries.
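
As a rough mental model only (not the crate's actual implementation), one can picture a briefly held outer lock that hands out a per-key async mutex, which is what the returned guard then holds. The sketch below assumes a Tokio dependency for the per-key mutex.

use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Mutex};

// Conceptual sketch: each key maps to its own async mutex around an optional
// value. Locking a missing key creates a placeholder holding `None`; the real
// crate additionally removes such placeholders again once they are unlocked
// without a value having been inserted.
struct ConceptualLockableMap<K, V> {
    entries: Mutex<HashMap<K, Arc<tokio::sync::Mutex<Option<V>>>>>,
}

impl<K: Eq + Hash, V> ConceptualLockableMap<K, V> {
    async fn lock(&self, key: K) -> tokio::sync::OwnedMutexGuard<Option<V>> {
        let per_key_mutex = {
            // The outer lock is only held long enough to look up or create the
            // per-key mutex, so locking one key never blocks access to others.
            let mut entries = self.entries.lock().unwrap();
            Arc::clone(entries.entry(key).or_default())
        };
        per_key_mutex.lock_owned().await
    }
}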

Implementations§


impl<K, V> LockableHashMap<K, V>
where K: Eq + PartialEq + Hash + Clone,


pub fn new() -> Self

Create a new hash map with no entries and no locked keys.

§Examples
use lockable::{AsyncLimit, LockableHashMap};

let lockable_map: LockableHashMap<i64, String> = LockableHashMap::new();
let guard = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;

pub fn num_entries_or_locked(&self) -> usize

Return the number of map entries.

Corner case: Currently locked keys are counted even if they don’t exist in the map.

§Examples
use lockable::{AsyncLimit, LockableHashMap};

let lockable_map = LockableHashMap::<i64, String>::new();

// Insert two entries
lockable_map
    .async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 4"));
lockable_map
    .async_lock(5, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 5"));
// Keep a lock on a third entry but don't insert it
let guard = lockable_map.async_lock(6, AsyncLimit::no_limit()).await?;

// Now we have two entries and one additional locked guard
assert_eq!(3, lockable_map.num_entries_or_locked());

pub fn blocking_lock<'a, E, OnEvictFn>( &'a self, key: K, limit: <Self as Lockable<K, V>>::SyncLimit<'a, OnEvictFn, E> ) -> Result<<Self as Lockable<K, V>>::Guard<'a>, E>
where OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::Guard<'a>>) -> Result<(), E>,

Lock a key and return a guard with any potential map entry for that key. Any changes to that entry will be persisted in the map. Locking a key prevents any other threads from locking the same key, but the action of locking a key doesn’t insert a map entry by itself. Map entries can be inserted and removed using Guard::insert and Guard::remove on the returned entry guard.

If the lock with this key is currently locked by a different thread, then the current thread blocks until it becomes available. Upon returning, the thread is the only thread with the lock held. A RAII guard is returned to allow scoped unlock of the lock. When the guard goes out of scope, the lock will be unlocked.

This function can only be used from non-async contexts and will panic if used from async contexts.

The exact behavior on locking a lock in the thread which already holds the lock is left unspecified. However, this function will not return on the second call (it might panic or deadlock, for example).

The limit parameter can be used to set a limit on the number of entries in the map; see the documentation of SyncLimit for an explanation of how exactly it works.

§Panics
  • This function might panic when called if the lock is already held by the current thread.
  • This function will also panic when called from an async context. See documentation of tokio::sync::Mutex for details.
§Examples
use lockable::{LockableHashMap, SyncLimit};

let lockable_map = LockableHashMap::<i64, String>::new();
let guard1 = lockable_map.blocking_lock(4, SyncLimit::no_limit())?;
let guard2 = lockable_map.blocking_lock(5, SyncLimit::no_limit())?;

// This next line would cause a deadlock or panic because `4` is already locked on this thread
// let guard3 = lockable_map.blocking_lock(4, SyncLimit::no_limit())?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map.blocking_lock(4, SyncLimit::no_limit())?;

pub fn blocking_lock_owned<E, OnEvictFn>( self: &Arc<Self>, key: K, limit: <Self as Lockable<K, V>>::SyncLimitOwned<OnEvictFn, E> ) -> Result<<Self as Lockable<K, V>>::OwnedGuard, E>
where OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::OwnedGuard>) -> Result<(), E>,

Lock a key and return a guard with any potential map entry for that key.

This is identical to LockableHashMap::blocking_lock; please see the documentation of that function for more information. Unlike LockableHashMap::blocking_lock, however, LockableHashMap::blocking_lock_owned works on an Arc<LockableHashMap> instead of a LockableHashMap and returns a Lockable::OwnedGuard that binds its lifetime to the LockableHashMap in that Arc. Such a Lockable::OwnedGuard can be more easily moved around or cloned than the Lockable::Guard returned by LockableHashMap::blocking_lock.

§Examples
use lockable::{LockableHashMap, SyncLimit};
use std::sync::Arc;

let lockable_map = Arc::new(LockableHashMap::<i64, String>::new());
let guard1 = lockable_map.blocking_lock_owned(4, SyncLimit::no_limit())?;
let guard2 = lockable_map.blocking_lock_owned(5, SyncLimit::no_limit())?;

// This next line would cause a deadlock or panic because `4` is already locked on this thread
// let guard3 = lockable_map.blocking_lock_owned(4, SyncLimit::no_limit())?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map.blocking_lock_owned(4, SyncLimit::no_limit())?;
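
Because the owned guard keeps the map alive through its Arc and is not tied to a borrow, it can, for example, be moved into another thread. A sketch of that pattern (the key stays locked until the moved guard is dropped):

use lockable::{LockableHashMap, SyncLimit};
use std::sync::Arc;

let lockable_map = Arc::new(LockableHashMap::<i64, String>::new());
let guard = lockable_map.blocking_lock_owned(4, SyncLimit::no_limit())?;

// Move the owned guard into another thread; key `4` stays locked until the
// guard is dropped at the end of the closure.
let handle = std::thread::spawn(move || {
    let mut guard = guard;
    guard.insert(String::from("inserted from another thread"));
});
handle.join().unwrap();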

pub fn try_lock<'a, E, OnEvictFn>( &'a self, key: K, limit: <Self as Lockable<K, V>>::SyncLimit<'a, OnEvictFn, E> ) -> Result<Option<<Self as Lockable<K, V>>::Guard<'a>>, E>
where OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::Guard<'a>>) -> Result<(), E>,

Attempts to acquire the lock with the given key and if successful, returns a guard with any potential map entry for that key. Any changes to that entry will be persisted in the map. Locking a key prevents any other threads from locking the same key, but the action of locking a key doesn’t insert a map entry by itself. Map entries can be inserted and removed using Guard::insert and Guard::remove on the returned entry guard.

If the lock could not be acquired because it is already locked, then Ok(None) is returned. Otherwise, a RAII guard is returned. The lock will be unlocked when the guard is dropped.

This function does not block and can be used from both async and non-async contexts.

The limit parameter can be used to set a limit on the number of entries in the map; see the documentation of SyncLimit for an explanation of how exactly it works.

§Examples
use lockable::{LockableHashMap, SyncLimit};

let lockable_map: LockableHashMap<i64, String> = LockableHashMap::new();
let guard1 = lockable_map.blocking_lock(4, SyncLimit::no_limit())?;
let guard2 = lockable_map.blocking_lock(5, SyncLimit::no_limit())?;

// This next line cannot acquire the lock because `4` is already locked on this thread
let guard3 = lockable_map.try_lock(4, SyncLimit::no_limit())?;
assert!(guard3.is_none());

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map.try_lock(4, SyncLimit::no_limit())?;
assert!(guard3.is_some());

pub fn try_lock_owned<E, OnEvictFn>( self: &Arc<Self>, key: K, limit: <Self as Lockable<K, V>>::SyncLimitOwned<OnEvictFn, E> ) -> Result<Option<<Self as Lockable<K, V>>::OwnedGuard>, E>
where OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::OwnedGuard>) -> Result<(), E>,

Attempts to acquire the lock with the given key and if successful, returns a guard with any potential map entry for that key.

This is identical to LockableHashMap::try_lock; please see the documentation of that function for more information. Unlike LockableHashMap::try_lock, however, LockableHashMap::try_lock_owned works on an Arc<LockableHashMap> instead of a LockableHashMap and returns a Lockable::OwnedGuard that binds its lifetime to the LockableHashMap in that Arc. Such a Lockable::OwnedGuard can be more easily moved around or cloned than the Lockable::Guard returned by LockableHashMap::try_lock.

§Examples
use lockable::{LockableHashMap, SyncLimit};
use std::sync::Arc;

let lockable_map = Arc::new(LockableHashMap::<i64, String>::new());
let guard1 = lockable_map.blocking_lock(4, SyncLimit::no_limit())?;
let guard2 = lockable_map.blocking_lock(5, SyncLimit::no_limit())?;

// This next line cannot acquire the lock because `4` is already locked on this thread
let guard3 = lockable_map.try_lock_owned(4, SyncLimit::no_limit())?;
assert!(guard3.is_none());

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map.try_lock_owned(4, SyncLimit::no_limit())?;
assert!(guard3.is_some());

pub async fn try_lock_async<'a, E, F, OnEvictFn>( &'a self, key: K, limit: <Self as Lockable<K, V>>::AsyncLimit<'a, OnEvictFn, E, F> ) -> Result<Option<<Self as Lockable<K, V>>::Guard<'a>>, E>
where F: Future<Output = Result<(), E>>, OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::Guard<'a>>) -> F,

Attempts to acquire the lock with the given key and if successful, returns a guard with any potential map entry for that key.

This is identical to LockableHashMap::try_lock; please see the documentation of that function for more information. Unlike LockableHashMap::try_lock, however, LockableHashMap::try_lock_async takes an AsyncLimit instead of a SyncLimit and therefore allows an async callback to be specified for when the cache reaches its limit.

This function does not block and can be used in async contexts.

§Examples
use lockable::{AsyncLimit, LockableHashMap};
use std::sync::Arc;

let lockable_map = LockableHashMap::<i64, String>::new();
let guard1 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
let guard2 = lockable_map.async_lock(5, AsyncLimit::no_limit()).await?;

// This next line cannot acquire the lock because `4` is already locked on this thread
let guard3 = lockable_map
    .try_lock_async(4, AsyncLimit::no_limit())
    .await?;
assert!(guard3.is_none());

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map
    .try_lock_async(4, AsyncLimit::no_limit())
    .await?;
assert!(guard3.is_some());

pub async fn try_lock_owned_async<E, F, OnEvictFn>( self: &Arc<Self>, key: K, limit: <Self as Lockable<K, V>>::AsyncLimitOwned<OnEvictFn, E, F> ) -> Result<Option<<Self as Lockable<K, V>>::OwnedGuard>, E>
where F: Future<Output = Result<(), E>>, OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::OwnedGuard>) -> F,

Attempts to acquire the lock with the given key and if successful, returns a guard with any potential map entry for that key.

This is identical to LockableHashMap::try_lock_async; please see the documentation of that function for more information. Unlike LockableHashMap::try_lock_async, however, LockableHashMap::try_lock_owned_async works on an Arc<LockableHashMap> instead of a LockableHashMap and returns a Lockable::OwnedGuard that binds its lifetime to the LockableHashMap in that Arc. Such a Lockable::OwnedGuard can be more easily moved around or cloned than the Lockable::Guard returned by LockableHashMap::try_lock_async.

It is also identical to LockableHashMap::try_lock_owned, except that LockableHashMap::try_lock_owned_async takes an AsyncLimit instead of a SyncLimit and therefore allows an async callback to be specified for when the cache reaches its limit.

§Examples
use lockable::{AsyncLimit, LockableHashMap};
use std::sync::Arc;

let lockable_map = Arc::new(LockableHashMap::<i64, String>::new());
let guard1 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
let guard2 = lockable_map.async_lock(5, AsyncLimit::no_limit()).await?;

// This next line cannot acquire the lock because `4` is already locked on this thread
let guard3 = lockable_map
    .try_lock_owned_async(4, AsyncLimit::no_limit())
    .await?;
assert!(guard3.is_none());

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map
    .try_lock_owned_async(4, AsyncLimit::no_limit())
    .await?;
assert!(guard3.is_some());

pub async fn async_lock<'a, E, F, OnEvictFn>( &'a self, key: K, limit: <Self as Lockable<K, V>>::AsyncLimit<'a, OnEvictFn, E, F> ) -> Result<<Self as Lockable<K, V>>::Guard<'a>, E>
where F: Future<Output = Result<(), E>>, OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::Guard<'a>>) -> F,

Lock a key and return a guard with any potential map entry for that key. Any changes to that entry will be persisted in the map. Locking a key prevents any other tasks from locking the same key, but the action of locking a key doesn’t insert a map entry by itself. Map entries can be inserted and removed using Guard::insert and Guard::remove on the returned entry guard.

If the lock with this key is currently locked by a different task, then the current task awaits until it becomes available. Upon returning, this task is the only task with the lock held. A RAII guard is returned to allow scoped unlock of the lock. When the guard goes out of scope, the lock will be unlocked.

The limit parameter can be used to set a limit on the number of entries in the map; see the documentation of AsyncLimit for an explanation of how exactly it works.

§Examples
use lockable::{AsyncLimit, LockableHashMap};

let lockable_map = LockableHashMap::<i64, String>::new();
let guard1 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
let guard2 = lockable_map.async_lock(5, AsyncLimit::no_limit()).await?;

// This next line would cause a deadlock or panic because `4` is already locked on this thread
// let guard3 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
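
Since the key stays locked for the whole lifetime of the guard, async_lock can be used for an atomic get-or-insert, with no other task able to race in between the check and the insert. A sketch using only the value and insert methods shown above (the default value is purely illustrative):

use lockable::{AsyncLimit, LockableHashMap};

async fn get_or_insert(
    lockable_map: &LockableHashMap<i64, String>,
    key: i64,
) -> Result<String, lockable::Never> {
    let mut guard = lockable_map.async_lock(key, AsyncLimit::no_limit()).await?;
    if guard.value().is_none() {
        // No other task can insert or remove this entry while we hold the guard
        guard.insert(String::from("default value"));
    }
    Ok(guard.value().expect("just inserted").clone())
}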

pub async fn async_lock_owned<E, F, OnEvictFn>( self: &Arc<Self>, key: K, limit: <Self as Lockable<K, V>>::AsyncLimitOwned<OnEvictFn, E, F> ) -> Result<<Self as Lockable<K, V>>::OwnedGuard, E>
where F: Future<Output = Result<(), E>>, OnEvictFn: FnMut(Vec<<Self as Lockable<K, V>>::OwnedGuard>) -> F,

Lock a key and return a guard with any potential map entry for that key. Any changes to that entry will be persisted in the map. Locking a key prevents any other tasks from locking the same key, but the action of locking a key doesn’t insert a map entry by itself. Map entries can be inserted and removed using Guard::insert and Guard::remove on the returned entry guard.

This is identical to LockableHashMap::async_lock; please see the documentation of that function for more information. Unlike LockableHashMap::async_lock, however, LockableHashMap::async_lock_owned works on an Arc<LockableHashMap> instead of a LockableHashMap and returns a Lockable::OwnedGuard that binds its lifetime to the LockableHashMap in that Arc. Such a Lockable::OwnedGuard can be more easily moved around or cloned than the Lockable::Guard returned by LockableHashMap::async_lock.

§Examples
use lockable::{AsyncLimit, LockableHashMap};
use std::sync::Arc;

let lockable_map = Arc::new(LockableHashMap::<i64, String>::new());
let guard1 = lockable_map
    .async_lock_owned(4, AsyncLimit::no_limit())
    .await?;
let guard2 = lockable_map
    .async_lock_owned(5, AsyncLimit::no_limit())
    .await?;

// This next line would cause a deadlock or panic because `4` is already locked on this thread
// let guard3 = lockable_map.async_lock_owned(4, AsyncLimit::no_limit()).await?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockable_map
    .async_lock_owned(4, AsyncLimit::no_limit())
    .await?;
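
Because the owned guard is not bound to a borrow of the map, it can also be moved into a spawned task. A sketch, assuming a Tokio runtime:

use lockable::{AsyncLimit, LockableHashMap};
use std::sync::Arc;

let lockable_map = Arc::new(LockableHashMap::<i64, String>::new());
let guard = lockable_map
    .async_lock_owned(4, AsyncLimit::no_limit())
    .await?;

// Move the owned guard into a spawned task; key `4` stays locked until the
// task drops the guard.
let task = tokio::spawn(async move {
    let mut guard = guard;
    guard.insert(String::from("inserted from a spawned task"));
});
task.await.unwrap();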

pub fn into_entries_unordered(self) -> impl Iterator<Item = (K, V)>

Consumes the hash map and returns an iterator over all of its entries.

§Examples
use lockable::{AsyncLimit, LockableHashMap};

let lockable_map = LockableHashMap::<i64, String>::new();

// Insert two entries
lockable_map
    .async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 4"));
lockable_map
    .async_lock(5, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 5"));

let entries: Vec<(i64, String)> = lockable_map.into_entries_unordered().collect();

// `entries` now contains both entries, but in an arbitrary order
assert_eq!(2, entries.len());
assert!(entries.contains(&(4, String::from("Value 4"))));
assert!(entries.contains(&(5, String::from("Value 5"))));

pub fn keys_with_entries_or_locked(&self) -> Vec<K>

Returns all of the keys that currently have an entry in the map. Caveat: Currently locked keys are listed even if they don’t carry a value.

This function has a high performance cost because it needs to lock the whole map to get a consistent snapshot and clone all the keys.

§Examples
use lockable::{AsyncLimit, LockableHashMap};

let lockable_map = LockableHashMap::<i64, String>::new();

// Insert two entries
lockable_map
    .async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 4"));
lockable_map
    .async_lock(5, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 5"));
// Keep a lock on a third entry but don't insert it
let guard = lockable_map.async_lock(6, AsyncLimit::no_limit()).await?;

let keys: Vec<i64> = lockable_map.keys_with_entries_or_locked();

// `keys` now contains all three keys
assert_eq!(3, keys.len());
assert!(keys.contains(&4));
assert!(keys.contains(&5));
assert!(keys.contains(&6));

pub async fn lock_all_entries( &self ) -> impl Stream<Item = <Self as Lockable<K, V>>::Guard<'_>>

Lock all entries of the map once. The result is a Stream that will produce the corresponding lock guards. If entries are currently locked, the Stream will produce them as they become unlocked and can be locked by the stream.

The returned stream is async and therefore may return items much later than when this function was called, but it only returns an entry if it existed or was locked at the time this function was called, and still exists when the stream is returning the entry. For any entry currently locked by another thread or task while this function is called, the following rules apply:

  • If that thread/task creates the entry => the stream will return it
  • If that thread/task removes the entry => the stream will not return it
  • If the entry was not pre-existing and that thread/task does not create it => the stream will not return it.
§Examples
use futures::stream::StreamExt;
use lockable::{AsyncLimit, LockableHashMap};

let lockable_map = LockableHashMap::<i64, String>::new();

// Insert two entries
lockable_map
    .async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 4"));
lockable_map
    .async_lock(5, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 5"));

// Lock all entries and add them to an `entries` vector
let mut entries: Vec<(i64, String)> = Vec::new();
let mut stream = lockable_map.lock_all_entries().await;
while let Some(guard) = stream.next().await {
    entries.push((*guard.key(), guard.value().unwrap().clone()));
}

// `entries` now contains both entries, but in an arbitrary order
assert_eq!(2, entries.len());
assert!(entries.contains(&(4, String::from("Value 4"))));
assert!(entries.contains(&(5, String::from("Value 5"))));
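
The stream can also be used to modify or remove entries while their locks are held, for example to drain the map. A sketch:

use futures::stream::StreamExt;
use lockable::{AsyncLimit, LockableHashMap};

let lockable_map = LockableHashMap::<i64, String>::new();
lockable_map
    .async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 4"));

// Visit each entry with its lock held and remove it
let mut stream = lockable_map.lock_all_entries().await;
while let Some(mut guard) = stream.next().await {
    guard.remove();
}
drop(stream);

assert_eq!(0, lockable_map.num_entries_or_locked());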

pub async fn lock_all_entries_owned( self: &Arc<Self> ) -> impl Stream<Item = <Self as Lockable<K, V>>::OwnedGuard>

Lock all entries of the map once. The result is a Stream that will produce the corresponding lock guards. If entries are currently locked, the Stream will produce them as they become unlocked and can be locked by the stream.

This is identical to LockableHashMap::lock_all_entries, but it works on an Arc<LockableHashMap> instead of a LockableHashMap and returns a Lockable::OwnedGuard that binds its lifetime to the LockableHashMap in that Arc. Such a Lockable::OwnedGuard can be more easily moved around or cloned.

§Examples
use futures::stream::StreamExt;
use lockable::{AsyncLimit, LockableHashMap};
use std::sync::Arc;

let lockable_map = Arc::new(LockableHashMap::<i64, String>::new());

// Insert two entries
lockable_map
    .async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 4"));
lockable_map
    .async_lock(5, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value 5"));

// Lock all entries and add them to an `entries` vector
let mut entries: Vec<(i64, String)> = Vec::new();
let mut stream = lockable_map.lock_all_entries_owned().await;
while let Some(guard) = stream.next().await {
    entries.push((*guard.key(), guard.value().unwrap().clone()));
}

// `entries` now contains both entries, but in an arbitrary order
assert_eq!(2, entries.len());
assert!(entries.contains(&(4, String::from("Value 4"))));
assert!(entries.contains(&(5, String::from("Value 5"))));

Trait Implementations§


impl<K, V: Debug> Debug for LockableHashMap<K, V>
where K: Eq + PartialEq + Hash + Clone + Debug,


fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<K, V> Default for LockableHashMap<K, V>
where K: Eq + PartialEq + Hash + Clone,


fn default() -> Self

Returns the “default value” for a type.

impl<K, V> Lockable<K, V> for LockableHashMap<K, V>
where K: Eq + PartialEq + Hash + Clone,


type Guard<'a> = Guard<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks, &'a LockableMapImpl<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks>> where K: 'a, V: 'a

A non-owning guard holding a lock for an entry in a LockableHashMap or a LockableLruCache. This guard is created via LockableHashMap::blocking_lock, LockableHashMap::async_lock or LockableHashMap::try_lock, or the corresponding LockableLruCache methods, and its lifetime is bound to the lifetime of the LockableHashMap/LockableLruCache.

type OwnedGuard = Guard<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks, Arc<LockableHashMap<K, V>>>

An owning guard holding a lock for an entry in a LockableHashMap or a LockableLruCache. This guard is created via LockableHashMap::blocking_lock_owned, LockableHashMap::async_lock_owned or LockableHashMap::try_lock_owned, or the corresponding LockableLruCache methods, and its lifetime is bound to the lifetime of the LockableHashMap/LockableLruCache within its Arc.

type SyncLimit<'a, OnEvictFn, E> = SyncLimit<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks, &'a LockableMapImpl<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks>, E, OnEvictFn> where OnEvictFn: FnMut(Vec<Self::Guard<'a>>) -> Result<(), E>, K: 'a, V: 'a

TODO Documentation

type SyncLimitOwned<OnEvictFn, E> = SyncLimit<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks, Arc<LockableHashMap<K, V>>, E, OnEvictFn> where OnEvictFn: FnMut(Vec<Self::OwnedGuard>) -> Result<(), E>

TODO Documentation

type AsyncLimit<'a, OnEvictFn, E, F> = AsyncLimit<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks, &'a LockableMapImpl<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks>, E, F, OnEvictFn> where F: Future<Output = Result<(), E>>, OnEvictFn: FnMut(Vec<Self::Guard<'a>>) -> F, K: 'a, V: 'a

TODO Documentation

type AsyncLimitOwned<OnEvictFn, E, F> = AsyncLimit<HashMap<K, Arc<Mutex<EntryValue<V>>>>, V, NoopHooks, Arc<LockableHashMap<K, V>>, E, F, OnEvictFn> where F: Future<Output = Result<(), E>>, OnEvictFn: FnMut(Vec<Self::OwnedGuard>) -> F

TODO Documentation

Auto Trait Implementations§


impl<K, V> RefUnwindSafe for LockableHashMap<K, V>
where V: RefUnwindSafe,


impl<K, V> Send for LockableHashMap<K, V>
where K: Send, V: Send,


impl<K, V> Sync for LockableHashMap<K, V>
where K: Send, V: Sync + Send,


impl<K, V> Unpin for LockableHashMap<K, V>
where K: Unpin, V: Unpin,


impl<K, V> UnwindSafe for LockableHashMap<K, V>
where V: UnwindSafe,

Blanket Implementations§


impl<T> Any for T
where T: 'static + ?Sized,


fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,


fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,


fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T


fn from(t: T) -> T

Returns the argument unchanged.


impl<T, U> Into<U> for T
where U: From<T>,


fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.


impl<T, U> TryFrom<U> for T
where U: Into<T>,


type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,


type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.