lockable 0.2.0

This library offers hash map, hash set and cache data structures where individual entries can be locked.

The lockable library offers thread-safe HashMap (see LockableHashMap), LruCache (see LockableLruCache) and LockPool (see LockPool) types. In all of these data types, individual keys can be locked/unlocked, even if there is no entry for this key in the map or cache.

This can be very useful for synchronizing access to an underlying key-value store or for building cache data structures on top of such a key-value store.

LRU cache example

This example builds a simple LRU cache and locks some entries.

use lockable::{AsyncLimit, LockableLruCache};
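
// Note: this snippet assumes an async context, e.g. a #[tokio::main] async fn main()
// that returns a Result, so that .await and ? can be used.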

let lockable_cache = LockableLruCache::<i64, String>::new();

// Insert an entry
lockable_cache.async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value"));

// Hold a lock on a different entry
let guard = lockable_cache.async_lock(5, AsyncLimit::no_limit())
    .await?;

// This next line would wait until the lock gets released,
// which in this case would cause a deadlock because we're
// on the same thread
// let guard2 = lockable_cache.async_lock(5, AsyncLimit::no_limit())
//    .await?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard);
let guard2 = lockable_cache.async_lock(5, AsyncLimit::no_limit())
    .await?;

LockPool example

This example builds a simple lock pool using the LockPool data structure. A lock pool is a pool of locks that are addressable by key. It can be used if you don't need a cache but just need a way to synchronize access to underlying resources.

use lockable::LockPool;
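
// Note: this snippet assumes an async context, e.g. a #[tokio::main] async fn main(),
// so that .await can be used.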

let lockpool = LockPool::new();
let guard1 = lockpool.async_lock(4).await;
let guard2 = lockpool.async_lock(5).await;

// This next line would wait until the lock gets released,
// which in this case would cause a deadlock because we're
// on the same thread.
// let guard3 = lockpool.async_lock(4).await;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard1);
let guard3 = lockpool.async_lock(4).await;
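
For instance, here is a minimal sketch of the "synchronize access to an underlying key-value store" use case mentioned in the introduction. The Store type is just a stand-in for whatever backend you actually use, and increment is a hypothetical helper, not part of the lockable API:

use std::collections::HashMap;
use std::sync::Mutex;

use lockable::LockPool;

// Stand-in for an external key-value backend (a database, one file per key, ...).
struct Store {
    data: Mutex<HashMap<String, u64>>,
}

impl Store {
    fn get(&self, key: &str) -> Option<u64> {
        self.data.lock().unwrap().get(key).copied()
    }

    fn put(&self, key: &str, value: u64) {
        self.data.lock().unwrap().insert(key.to_string(), value);
    }
}

// Read-modify-write on a single key. The per-key lock ensures that two
// concurrent calls for the same key cannot interleave between get and put,
// while calls for different keys can still run in parallel.
async fn increment(locks: &LockPool<String>, store: &Store, key: &str) {
    let _guard = locks.async_lock(key.to_string()).await;
    let current = store.get(key).unwrap_or(0);
    store.put(key, current + 1);
    // The per-key lock is released when _guard is dropped.
}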

HashMap example

If you need a lockable key-value store but don't need the LRU ordering, you can use LockableHashMap.

use lockable::{AsyncLimit, LockableHashMap};
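
// Note: like the LRU cache example above, this snippet assumes an async context
// in which .await and ? can be used.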

let lockable_map = LockableHashMap::<i64, String>::new();

// Insert an entry
lockable_map.async_lock(4, AsyncLimit::no_limit())
    .await?
    .insert(String::from("Value"));

// Hold a lock on a different entry
let guard = lockable_map.async_lock(5, AsyncLimit::no_limit())
    .await?;

// This next line would wait until the lock gets released,
// which in this case would cause a deadlock because we're
// on the same thread
// let guard2 = lockable_map.async_lock(5, AsyncLimit::no_limit())
//    .await?;

// After dropping the corresponding guard, we can lock it again
std::mem::drop(guard);
let guard2 = lockable_map.async_lock(5, AsyncLimit::no_limit())
    .await?;

WARNING: Deadlocks

These data structures are powerful, and with great power comes great danger (or something like that). If concurrent threads or tasks lock keys in an arbitrary order, you can easily end up with a deadlock: one thread waits for a lock held by a second thread, while that second thread waits for a lock held by the first. Be careful and apply common deadlock prevention strategies, e.g. always lock keys in the same order.
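
The sketch below shows one such strategy, using the LockPool API from the example above: whenever a task needs to hold two keys at once, it locks the smaller key first, so two tasks that need the same pair of keys can never end up waiting on each other. The lock_both helper is hypothetical, not part of the lockable API.

use lockable::LockPool;

// A minimal sketch of the "always lock keys in the same order" strategy.
async fn lock_both(pool: &LockPool<i64>, a: i64, b: i64) {
    // Locking the same key twice from the same task would deadlock.
    assert_ne!(a, b);

    // Sort the keys so that every caller acquires them in the same order.
    let (first, second) = if a < b { (a, b) } else { (b, a) };
    let _guard_first = pool.async_lock(first).await;
    let _guard_second = pool.async_lock(second).await;

    // ... access the resources protected by both keys here ...

    // Both locks are released when the guards go out of scope.
}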

Crate Features

  • lru: Enables the LockableLruCache type, which adds a dependency on the lru crate (see the Cargo.toml snippet after this list).
  • slow_assertions: Enables slow assertions that check internal invariants of the lockable crate. Don't use this in production code; it is very slow. It is useful for finding bugs within lockable itself, not for finding bugs in user code. If you enable it and an assertion fails, please report it in a GitHub issue.
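
For example, to get access to the LockableLruCache type you would enable the lru feature in your Cargo.toml. The version number below is the one documented on this page:

[dependencies]
lockable = { version = "0.2.0", features = ["lru"] }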

License: MIT OR Apache-2.0