lockable 0.0.4

This library offers hash map and cache data structures where individual entries can be locked.


lockable

The lockable library offers thread-safe HashMap (see LockableHashMap) and LruCache (see LockableLruCache) types where individual keys can be locked/unlocked, even if there is no entry for that key in the map.

This can be very useful for synchronizing access to an underlying key-value store or for building cache data structures on top of such a key-value store.
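For example, a per-key lock can serialize read-modify-write cycles against an external key-value store. The sketch below is illustrative only and follows the same snippet style as the examples further down (it assumes an async context in which .await and ? are available); load_from_store and save_to_store are hypothetical placeholders for whatever storage layer you use, not part of lockable.

use lockable::{AsyncLimit, LockableHashMap};

// Hypothetical stand-ins for an external key-value store (not part of lockable).
async fn load_from_store(_key: i64) -> Option<String> { None }
async fn save_to_store(_key: i64, _value: String) {}

// Per-key lock pool guarding the external store. The value type is `()` because
// the map is only used for locking, not for storing data.
let locks: LockableHashMap<i64, ()> = LockableHashMap::new();

// Lock the key before touching the store so that concurrent tasks cannot
// interleave their read-modify-write cycles for the same key.
let guard = locks.async_lock(4, AsyncLimit::no_limit()).await?;
let value = load_from_store(4).await.unwrap_or_default();
save_to_store(4, value + " (updated)").await;

// Dropping the guard releases the lock so other tasks can lock key 4 again.
std::mem::drop(guard);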

LRU cache example

This example builds a simple LRU cache and locks some entries. Like the other snippets in this README, it assumes an async context in which .await and ? are available.

use lockable::{AsyncLimit, LockableLruCache};

let lockable_cache: LockableLruCache<i64, String> = LockableLruCache::new();

// Insert an entry
lockable_cache.async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value"));

// Hold a lock on a different entry
let guard = lockable_cache.async_lock(5, AsyncLimit::no_limit())
    .await?;

// This next line would wait until the lock gets released, which in this case
// would deadlock because we're still holding the lock in the same task.
// let guard2 = lockable_cache.async_lock(5, AsyncLimit::no_limit()).await?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard);
let guard2 = lockable_cache.async_lock(5, AsyncLimit::no_limit()).await?;

Lockpool example

This example builds a simple lock pool using the LockableHashMap data structure. A lock pool is a pool of locks addressable by key. In this example, the entries don't have a value assigned to them (the value type is ()), and the lock pool is only used to synchronize access to some keyed resource.

use lockable::{AsyncLimit, LockableHashMap};

let lockable_map: LockableHashMap<i64, ()> = LockableHashMap::new();
let entry1 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;
let entry2 = lockable_map.async_lock(5, AsyncLimit::no_limit()).await?;

// This next line would wait until the lock gets released, which in this case
// would deadlock because we're still holding the lock in the same task.
// let entry3 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(entry1);
let entry3 = lockable_map.async_lock(4, AsyncLimit::no_limit()).await?;

License: MIT OR Apache-2.0