A simple and correct implementation of the MCS lock
The MCS lock is a list-based queuing lock that avoids network contention by having threads spin on local memory locations. The main properties of this mechanism are:
- guarantees FIFO ordering of lock acquisitions;
- spins on locally-accessible flag variables only;
- requires a small constant amount of space per lock; and
- works equally well (requiring only O(1) network transactions per lock acquisition) on machines with and without coherent caches.
This algorithm and several others were introduced in the Mellor-Crummey and Scott paper. A simpler correctness proof of the MCS lock was later proposed by Johnson and Harathi.
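For intuition, here is a compact, illustrative sketch of the core MCS protocol (not this crate's actual code): each thread enqueues a node of its own by swapping the lock's tail pointer, then spins on a flag inside that node, so all busy-waiting happens on waiter-local memory.

```rust
use core::ptr;
use core::sync::atomic::{AtomicBool, AtomicPtr, Ordering};

// One queue node per lock attempt; each waiter spins only on its own `locked` flag.
#[derive(Default)]
pub struct Node {
    next: AtomicPtr<Node>,
    locked: AtomicBool,
}

pub struct McsLock {
    // Tail of the waiting queue; null means the lock is free.
    tail: AtomicPtr<Node>,
}

impl McsLock {
    pub const fn new() -> Self {
        Self { tail: AtomicPtr::new(ptr::null_mut()) }
    }

    /// Acquire the lock. `node` must not be moved or reused until `unlock` returns.
    pub fn lock(&self, node: &Node) {
        node.next.store(ptr::null_mut(), Ordering::Relaxed);
        node.locked.store(true, Ordering::Relaxed);
        // Enqueue ourselves by becoming the new tail.
        let prev = self.tail.swap(node as *const Node as *mut Node, Ordering::AcqRel);
        if !prev.is_null() {
            // A predecessor exists: link behind it, then spin on our local flag.
            unsafe { (*prev).next.store(node as *const Node as *mut Node, Ordering::Release) };
            while node.locked.load(Ordering::Acquire) {
                core::hint::spin_loop();
            }
        }
    }

    /// Release the lock, handing it to the successor if one is queued.
    pub fn unlock(&self, node: &Node) {
        let mut next = node.next.load(Ordering::Acquire);
        if next.is_null() {
            // No visible successor: try to swing the tail back to null.
            let this = node as *const Node as *mut Node;
            if self
                .tail
                .compare_exchange(this, ptr::null_mut(), Ordering::Release, Ordering::Relaxed)
                .is_ok()
            {
                return;
            }
            // A successor is mid-enqueue: wait for it to link itself in.
            while next.is_null() {
                core::hint::spin_loop();
                next = node.next.load(Ordering::Acquire);
            }
        }
        // Pass ownership by clearing the successor's local spin flag.
        unsafe { (*next).locked.store(false, Ordering::Release) };
    }
}
```

Note how both the handoff (`locked`) and the queue link (`next`) live in the waiter's own node: releasing the lock touches remote memory exactly once, which is where the O(1) network transactions property comes from.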
Use cases
It is worth noting that spinlocks are usually not what you want. The
majority of use cases are well covered by OS-based mutexes like
std::sync::Mutex or parking_lot::Mutex. These implementations will notify
the system that the waiting thread should be parked, freeing the processor to
work on something else.
Spinlocks are only efficient in the few circumstances where the overhead
of context switching or process rescheduling is greater than busy waiting
for very short periods. Spinlocks can be useful inside operating-system kernels,
on embedded systems, or even to complement other locking designs. As a reference
use case, some Linux kernel mutexes run a customized MCS lock specifically
tailored for optimistic spinning during contention before actually sleeping.
This implementation is no_std by default, so it's useful in those environments.
Install
Run the following Cargo command in your project directory:

```sh
cargo add mcslock
```
Or add an entry under the [dependencies] section in your Cargo.toml:
```toml
# Cargo.toml
[dependencies]
# Available features: `yield`, `barging`, `thread_local` and `lock_api`.
mcslock = { version = "0.1", features = ["barging"] }
```
Documentation
This project's documentation is hosted at docs.rs, or you can build it locally with the following command:

```sh
RUSTDOCFLAGS="--cfg docsrs" cargo +nightly doc --all-features --open
```
Raw MCS lock
This implementation operates under FIFO. Raw locking APIs require exclusive
access to a locally accessible queue node. This node is represented by the
MutexNode type. Callers are responsible for instantiating the queue nodes
themselves. This implementation is no_std compatible. See raw module for
more information.
```rust
use std::sync::Arc;
use std::thread;

use mcslock::raw::{spins::Mutex, MutexNode};
```
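Putting those imports to work, here is a minimal usage sketch. The `spins::Mutex` alias and the `lock(&mut node)` signature are assumptions based on the raw API described above; treat them as illustrative and check the crate docs for exact paths and signatures.

```rust
use std::sync::Arc;
use std::thread;

use mcslock::raw::{spins::Mutex, MutexNode};

fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);

    thread::spawn(move || {
        // The caller instantiates the queue node and lends it to the lock.
        let mut node = MutexNode::new();
        *c_mutex.lock(&mut node) = 10;
    })
    .join()
    .expect("thread failed");

    let mut node = MutexNode::new();
    assert_eq!(*mutex.lock(&mut node), 10);
}
```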
Barging MCS lock
This implementation will have non-waiting threads race for the lock against
the thread at the front of the waiting queue, which means it is an unfair lock.
This implementation is suitable for no_std environments, and the locking
APIs are compatible with the lock_api crate. See barging and lock_api
modules for more information.
```rust
use std::sync::Arc;
use std::thread;

// Requires `barging` feature.
use mcslock::barging::spins::Mutex;
```
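A minimal usage sketch under the same caveats as above: since barging locks manage queue nodes internally, `lock` takes no node argument here (an assumption based on the module description; check the crate docs).

```rust
use std::sync::Arc;
use std::thread;

// Requires `barging` feature.
use mcslock::barging::spins::Mutex;

fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);

    thread::spawn(move || {
        // No queue node to manage; the guard unlocks on drop.
        *c_mutex.lock() = 10;
    })
    .join()
    .expect("thread failed");

    assert_eq!(*mutex.lock(), 10);
}
```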
Thread local MCS lock
This implementation also operates under FIFO. The locking APIs provided
by this module do not require user-side node allocation; critical
sections must be provided as closures, and at most one lock can be held at
any time within a thread. It is not no_std compatible and can be enabled
through the thread_local feature. See thread_local module for more
information.
```rust
use std::sync::Arc;
use std::thread;

// Requires `thread_local` feature.
use mcslock::thread_local::spins::Mutex;
```
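A minimal usage sketch: the critical section is a closure receiving the guard, while the queue node lives in thread-local storage. The `lock_with` name and closure shape are assumptions based on the closure-based API described above; check the crate docs.

```rust
use std::sync::Arc;
use std::thread;

// Requires `thread_local` feature.
use mcslock::thread_local::spins::Mutex;

fn main() {
    let mutex = Arc::new(Mutex::new(0));
    let c_mutex = Arc::clone(&mutex);

    thread::spawn(move || {
        // The critical section runs inside the closure; the queue node
        // is managed transparently in the thread's local storage.
        c_mutex.lock_with(|mut guard| *guard = 10);
    })
    .join()
    .expect("thread failed");

    assert_eq!(mutex.lock_with(|guard| *guard), 10);
}
```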
Features
This crate does not provide any default features. Features that can be enabled are:
yield
The yield feature requires linking to the standard library, so it is not
suitable for no_std environments. By enabling the yield feature, instead
of busy-waiting during lock acquisitions and releases, this will call
std::thread::yield_now, which cooperatively gives up a timeslice to the
OS scheduler. This may cause a context switch, so you may not want to enable
this feature if your intention is to actually do optimistic spinning. The
default implementation calls core::hint::spin_loop, which simply busy-waits.
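For intuition, the difference between the two wait strategies looks roughly like this (illustrative only, not the crate's internal code):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Default strategy: stay on-CPU and hint the processor that we are spinning.
fn wait_spinning(ready: &AtomicBool) {
    while !ready.load(Ordering::Acquire) {
        core::hint::spin_loop();
    }
}

// `yield` strategy: hand the timeslice back to the OS scheduler on each iteration.
fn wait_yielding(ready: &AtomicBool) {
    while !ready.load(Ordering::Acquire) {
        std::thread::yield_now();
    }
}
```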
barging
The barging feature provides locking APIs that are compatible with the
lock_api crate. It does not require node allocations from the caller,
and it is suitable for no_std environments. This implementation is not
fair (does not guarantee FIFO), but can improve throughput when the lock
is heavily contended.
thread_local
The thread_local feature provides locking APIs that do not require user-side
node allocation, but critical sections must be provided as closures. This
implementation handles the queue's nodes transparently, by storing them in
the thread local storage of the waiting threads. This locking implementation
will panic if more than one guard is alive within a single thread. It is
not no_std compatible.
lock_api
This feature implements the RawMutex trait from the lock_api crate for
barging::Mutex. Aliases are provided by the lock_api module. This feature
is no_std compatible.
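For example, with both features enabled, one might use the lock_api-based aliases like so. The `mcslock::lock_api::spins::Mutex` path is an assumption based on the module description above; check the crate docs for the actual aliases.

```rust
// Requires `barging` and `lock_api` features.
// Path assumed from the description above; consult the crate docs.
use mcslock::lock_api::spins::Mutex;

fn main() {
    let mutex = Mutex::new(0);
    // The lock_api-style guard unlocks when dropped.
    *mutex.lock() += 1;
    assert_eq!(*mutex.lock(), 1);
}
```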
Minimum Supported Rust Version (MSRV)
This crate is guaranteed to compile on a Minimum Supported Rust Version (MSRV) of 1.65.0 and above. This version will not be changed without a minor version bump. If you intend to use this crate but can only target an older Rust version, feel free to open an issue with your specific target; it is possible to lower this crate's MSRV substantially, it just has not been explored yet.
Related projects
These projects provide MCS lock implementations with slightly different APIs, implementation details, or compiler requirements; you can check their repositories:
License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE or http://apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
Code review
It is recommended to always use cargo-crev to verify the trustworthiness of each of your dependencies, including this one.