//! Monotonic lock identity.
//!
//! Every [`Mutex`](crate::mutex::Mutex) is assigned a unique [`LockId`] at creation
//! time via a global atomic counter. The total order on `LockId` values
//! is the foundation of [`LockSet`](crate::set::LockSet)'s deadlock prevention: locks
//! are always acquired in ascending `LockId` order.
//!
//! By default, `LockId` uses `AtomicU32` (works on all targets including
//! 32-bit embedded). Enable the `atomic-u64` feature (on by default with
//! `std`) for `AtomicU64`, or `portable-atomic` for targets without
//! native CAS support (e.g., thumbv6m).
use core::sync::atomic::AtomicU32;

// `AtomicU32` by default; the `atomic-u64` and `portable-atomic` features
// (see module docs) substitute other counter types.
static NEXT_ID: AtomicU32 = AtomicU32::new(0);
/// Unique, totally-ordered lock identifier.
///
/// Assigned once at [`Mutex`](crate::mutex::Mutex) creation, immutable thereafter.
/// The ordering on `LockId` values determines the acquisition order
/// within a [`LockSet`](crate::set::LockSet).
///
/// Uses `u32` by default (works on all targets). Enable `atomic-u64`
/// for `u64` identifiers.
///
/// `Relaxed` ordering on the counter is sufficient: we need uniqueness
/// and a total order on the _values_, not a happens-before relationship
/// between the `fetch_add` calls.
///
/// The counter wraps on overflow (`u32::MAX` or `u64::MAX` allocations).
/// With `atomic-u64` (default on `std`), exhaustion takes ~584 years at
/// one allocation per nanosecond. With the default `u32` counter on
/// `no_std`, wrap occurs at ~4 billion allocations -- safe for typical
/// embedded use, but a concern if mutexes are created in a hot loop.
/// A `debug_assert!` fires on wrap in debug builds.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct LockId(u32);
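// A minimal sketch of the allocator the docs above describe: one `Relaxed`
// `fetch_add` per Mutex creation. The name `next` and its `pub(crate)`
// visibility are assumptions, not confirmed crate API.
impl LockId {
    /// Allocate the next identifier from the global counter.
    ///
    /// `Relaxed` suffices here: uniqueness and a total order on the
    /// returned values are all that matter, not a happens-before
    /// relationship between allocations.
    pub(crate) fn next() -> Self {
        let id = NEXT_ID.fetch_add(1, core::sync::atomic::Ordering::Relaxed);
        // `fetch_add` wraps silently in release builds; surface counter
        // exhaustion in debug builds, as the docs above promise.
        debug_assert!(id != u32::MAX, "LockId counter wrapped");
        Self(id)
    }
}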