// happylock/lib.rs
#![warn(clippy::pedantic)]
#![warn(clippy::nursery)]
#![allow(clippy::module_name_repetitions)]
#![allow(clippy::declare_interior_mutable_const)]
#![allow(clippy::semicolon_if_nothing_returned)]
#![allow(clippy::module_inception)]
#![allow(clippy::single_match_else)]

//! As it turns out, the Rust borrow checker is powerful enough that, if the
//! standard library supported it, we could've made deadlocks undefined
//! behavior. This library currently serves as a proof of concept for how that
//! would work.
//!
//! # Theory
//!
//! Four conditions are necessary for a deadlock to occur. To prevent
//! deadlocks, we only need to prevent one of the following:
//!
//! 1. mutual exclusion
//! 2. non-preemptive allocation
//! 3. circular wait
//! 4. **partial allocation**
//!
//! This library seeks to solve **partial allocation** by requiring total
//! allocation. All of the resources a thread needs must be allocated at the
//! same time, and to request new resources, the old resources must be
//! dropped first. Requesting multiple resources at once is atomic: you
//! either get all of the requested resources or none at all.
//!
//! As an optimization, this library also often prevents **circular wait**.
//! Many collections sort their locks by memory address; as long as the locks
//! are always acquired in that order, no time is wasted releasing locks
//! after a failed acquisition and re-acquiring them later.
//!
//! # Examples
//!
//! Simple example:
//! ```
//! use std::thread;
//! use happylock::{Mutex, ThreadKey};
//!
//! const N: usize = 10;
//!
//! static DATA: Mutex<i32> = Mutex::new(0);
//!
//! for _ in 0..N {
//!     thread::spawn(move || {
//!         // each thread gets one thread key
//!         let key = ThreadKey::get().unwrap();
//!
//!         // unlocking a mutex requires a ThreadKey
//!         let mut data = DATA.lock(key);
//!         *data += 1;
//!
//!         // the mutex unlocks at the end of the scope, releasing the key
//!     });
//! }
//!
//! let key = ThreadKey::get().unwrap();
//! let data = DATA.lock(key);
//! println!("{}", *data);
//! ```
//!
//! To lock multiple mutexes at a time, create a [`LockCollection`]:
//!
//! ```
//! use std::thread;
//! use happylock::{LockCollection, Mutex, ThreadKey};
//!
//! const N: usize = 10;
//!
//! static DATA_1: Mutex<i32> = Mutex::new(0);
//! static DATA_2: Mutex<String> = Mutex::new(String::new());
//!
//! for _ in 0..N {
//!     thread::spawn(move || {
//!         let key = ThreadKey::get().unwrap();
//!
//!         // happylock ensures at runtime there are no duplicate locks
//!         let collection = LockCollection::try_new((&DATA_1, &DATA_2)).unwrap();
//!         let mut guard = collection.lock(key);
//!
//!         *guard.1 = (100 - *guard.0).to_string();
//!         *guard.0 += 1;
//!     });
//! }
//!
//! let key = ThreadKey::get().unwrap();
//! let data = LockCollection::try_new((&DATA_1, &DATA_2)).unwrap();
//! let data = data.lock(key);
//! println!("{}", *data.0);
//! println!("{}", *data.1);
//! ```
//!
//! In many cases, the [`LockCollection::new`] or [`LockCollection::new_ref`]
//! methods can be used instead, improving performance.
//!
//! ```rust
//! use std::thread;
//! use happylock::{LockCollection, Mutex, ThreadKey};
//!
//! const N: usize = 32;
//!
//! static DATA: [Mutex<i32>; 2] = [Mutex::new(0), Mutex::new(1)];
//!
//! for _ in 0..N {
//!     thread::spawn(move || {
//!         let key = ThreadKey::get().unwrap();
//!
//!         // a reference to a type that implements `OwnedLockable` will never
//!         // contain duplicates, so no duplicate checking is needed.
//!         let collection = LockCollection::new_ref(&DATA);
//!         let mut guard = collection.lock(key);
//!
//!         let x = *guard[1];
//!         *guard[1] += *guard[0];
//!         *guard[0] = x;
//!     });
//! }
//!
//! let key = ThreadKey::get().unwrap();
//! let data = LockCollection::new_ref(&DATA);
//! let data = data.lock(key);
//! println!("{}", data[0]);
//! println!("{}", data[1]);
//! ```
//!
//! # Performance
//!
//! **The `ThreadKey` is a mostly zero-cost abstraction.** It doesn't use any
//! memory, and it doesn't really exist at runtime. The only cost comes from
//! calling `ThreadKey::get()`, because the function has to ensure at runtime
//! that the key hasn't already been taken. Dropping the key also has a small
//! cost.
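//!
//! The key's per-thread uniqueness can be observed directly. A minimal
//! sketch (assuming `ThreadKey::get` returns `Option<ThreadKey>`, as its use
//! with `unwrap` in the examples above suggests):
//!
//! ```
//! use happylock::ThreadKey;
//!
//! let key = ThreadKey::get().unwrap();
//!
//! // the key is already taken, so a second request on this thread fails
//! assert!(ThreadKey::get().is_none());
//!
//! // dropping the key makes it available to this thread again
//! drop(key);
//! assert!(ThreadKey::get().is_some());
//! ```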
//!
//! **Consider [`OwnedLockCollection`].** This will almost always be the
//! fastest lock collection. It doesn't expose the underlying collection
//! immutably, which means it will always be locked in the same order and
//! doesn't need any sorting.
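//!
//! For example (a sketch, assuming `OwnedLockCollection::new` accepts any
//! owned [`OwnedLockable`] value, such as a tuple of mutexes):
//!
//! ```
//! use happylock::{Mutex, ThreadKey};
//! use happylock::collection::OwnedLockCollection;
//!
//! let key = ThreadKey::get().unwrap();
//!
//! // the collection owns its locks, so no duplicate check is needed and
//! // the locking order never changes
//! let collection = OwnedLockCollection::new((Mutex::new(0), Mutex::new(String::new())));
//! let mut guard = collection.lock(key);
//! *guard.0 += 1;
//! guard.1.push_str("hello");
//! ```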
//!
//! **Avoid [`LockCollection::try_new`].** This constructor checks that the
//! collection contains no duplicate locks. In most cases this check is
//! O(n log n), where n is the number of locks in the collection, but in the
//! case of [`RetryingLockCollection`], it's close to O(n).
//! [`LockCollection::new`] and [`LockCollection::new_ref`] don't need these
//! checks because they use [`OwnedLockable`], which is guaranteed to be
//! unique as long as it is accessible. As a last resort,
//! [`LockCollection::new_unchecked`] skips the check entirely, but is unsafe
//! to call.
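//!
//! The duplicate check can be seen in action. A sketch (assuming
//! [`LockCollection::try_new`] returns `None` on duplicates, as its use with
//! `unwrap` above suggests):
//!
//! ```
//! use happylock::{LockCollection, Mutex};
//!
//! let mutex = Mutex::new(0);
//!
//! // the same lock appears twice, so the collection is refused
//! assert!(LockCollection::try_new((&mutex, &mutex)).is_none());
//!
//! // distinct locks are accepted
//! let other = Mutex::new(1);
//! assert!(LockCollection::try_new((&mutex, &other)).is_some());
//! ```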
//!
//! **Know how to use [`RetryingLockCollection`].** This collection doesn't
//! do any sorting, but uses a wasteful locking algorithm. It can't rely on
//! the order of the locks being the same across threads, so if it finds a
//! lock that it can't acquire without blocking, it first releases all of the
//! locks it has already acquired, to avoid blocking other threads. This is
//! wasteful because the algorithm may end up re-acquiring the same lock
//! multiple times. To avoid this, ensure that (1) the first lock in the
//! collection is always the first lock in any collection it appears in, and
//! (2) the other locks in the collection are always preceded by that first
//! lock. This prevents any time being wasted on re-acquiring locks. If
//! you're unsure, [`LockCollection`] is a sensible default.
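//!
//! A sketch of the recommended layout (assuming `RetryingLockCollection` has
//! a `try_new` constructor analogous to [`LockCollection::try_new`]):
//!
//! ```
//! use happylock::collection::RetryingLockCollection;
//! use happylock::{Mutex, ThreadKey};
//!
//! static A: Mutex<i32> = Mutex::new(0);
//! static B: Mutex<i32> = Mutex::new(1);
//! static C: Mutex<i32> = Mutex::new(2);
//!
//! let key = ThreadKey::get().unwrap();
//!
//! // `A` is first in every collection it appears in, and `B` and `C` are
//! // always preceded by `A`, so no locks are ever released and re-acquired
//! let collection = RetryingLockCollection::try_new((&A, &B, &C)).unwrap();
//! let guard = collection.lock(key);
//! println!("{} {} {}", *guard.0, *guard.1, *guard.2);
//! ```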
//!
//! [`OwnedLockable`]: `lockable::OwnedLockable`
//! [`OwnedLockCollection`]: `collection::OwnedLockCollection`
//! [`RetryingLockCollection`]: `collection::RetryingLockCollection`

mod handle_unwind;
mod key;

pub mod collection;
pub mod lockable;
pub mod mutex;
pub mod poisonable;
pub mod rwlock;

pub use key::{Keyable, ThreadKey};

#[cfg(feature = "spin")]
pub use mutex::SpinLock;

// Personally, I think re-exports look ugly in the Rust documentation, so I
// went with type aliases instead.

/// A collection of locks that can be acquired simultaneously.
///
/// This re-exports [`BoxedLockCollection`] as a sensible default.
///
/// [`BoxedLockCollection`]: collection::BoxedLockCollection
pub type LockCollection<L> = collection::BoxedLockCollection<L>;

/// A re-export of [`poisonable::Poisonable`].
pub type Poisonable<L> = poisonable::Poisonable<L>;

/// A mutual exclusion primitive useful for protecting shared data, which
/// cannot deadlock.
///
/// By default, this uses `parking_lot` as a backend.
#[cfg(feature = "parking_lot")]
pub type Mutex<T> = mutex::Mutex<T, parking_lot::RawMutex>;

/// A reader-writer lock.
///
/// By default, this uses `parking_lot` as a backend.
#[cfg(feature = "parking_lot")]
pub type RwLock<T> = rwlock::RwLock<T, parking_lot::RawRwLock>;