reclaim/
lib.rs

//! A generic trait-based interface for abstracting over various schemes for
//! concurrent memory reclamation.
//!
//! # Memory Management in Rust
//!
//! Unlike garbage-collected languages such as *Go* or *Java*, memory
//! management in *Rust* is primarily scope- or ownership-based and more akin
//! to *C++*.
//! Rust's ownership model in combination with the standard library's smart
//! pointer types `Box`, `Rc` and `Arc` makes memory management as painless as
//! possible and is able to handle the vast majority of use-cases, while at the
//! same time preventing the classic memory bugs such as *use-after-free*,
//! *double-free* or memory leaks.
//! Consequently, there is usually little need for the relatively small
//! additional comfort provided by a fully automated **Garbage Collector** (GC).
//! ## The Need for Automatic Memory Reclamation
//!
//! In the domain of concurrent lock-free data structures, however, the
//! aforementioned memory management schemes are insufficient for determining
//! when a removed entry can actually be dropped and de-allocated:
//! Just because an entry has been removed (*unlinked*) from some shared data
//! structure does not guarantee that no other thread could still be in the
//! process of reading that same entry at the same time.
//! This is due to the possible existence of stale references that were created
//! by other threads before the unlinking occurred.
//! The only fact that can be ascertained, due to the nature of atomic *swap*
//! and *compare-and-swap* operations, is that other threads can not acquire
//! *new* references after an entry has been unlinked.
//! ## Extending the Grace Period
//!
//! Concurrent memory reclamation schemes work by granting every value
//! (*record*) earmarked for deletion (*retired*) a certain **grace period**
//! before it is actually dropped and de-allocated.
//! During this period the value will be cached and can still be safely read by
//! other threads with live references to it, but no new references must be
//! possible.
//! Determining the exact length of this grace period is up to each individual
//! reclamation scheme.
//! It is usually either not possible or not practical to determine the exact
//! moment at which it becomes safe to reclaim a retired record.
//! Hence, reclamation schemes commonly tend to only guarantee grace periods
//! that are *at least* long enough to ensure that no references can possibly
//! exist afterwards.
//!
//! # The Reclaim Interface
//!
//! Lock-free data structures are usually required to work on atomic pointers
//! to heap allocated memory.
//! This is due to the restriction of atomic CPU instructions to machine-word
//! sized values, such as pointers.
//! Working with raw pointers is inherently *unsafe* in the Rust sense.
//! Consequently, this crate avoids and discourages the use of raw pointers as
//! much as possible in favor of safer abstractions with stricter constraints
//! on their usage.
//! In effect, the vast majority of this crate's public API is safe to use under
//! any circumstances.
//! This, however, is achieved by shifting and concentrating the burden of
//! manually maintaining safety invariants into one specific key aspect:
//! the retiring (and eventual reclamation) of records.
//!
//! ## Traits and Types
//!
//! The `reclaim` crate primarily exposes four different traits, which are
//! relevant for users of generic code and implementors of reclamation schemes
//! alike.
//! The first trait is [`Reclaim`], which provides generic functionality for
//! retiring records.
//! Note that this trait does not presume the presence of an operating system
//! and functionality like thread local storage.
//! Hence, this trait can even be used in `#[no_std]` environments.
//! However, in order to use this trait's associated methods, an explicit
//! reference to the current thread's (local) state is required.
//! For environments with implicit access to thread local storage, the
//! [`GlobalReclaim`] trait exists as an extension to [`Reclaim`].
//! This trait additionally requires an associated type
//! [`Guard`][GlobalReclaim::Guard], which must implement both the [`Default`]
//! and the [`Protect`] trait.
//!
//! Types implementing the [`Protect`] trait can be used to safely read values
//! from shared memory that are subsequently safe from concurrent reclamation
//! and can hence be safely de-referenced.
//! Note that a single guard can only protect one value at a time.
//! This follows the design of many reclamation schemes, such as *hazard
//! pointers*.
//! This is represented by the requirement to pass a *mutable* reference to a
//! guard in order to safely load a shared value.
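//!
//! For instance, with this crate's own `leak` module (whose guard type
//! implements [`Protect`]), loading a value through a guard looks like this:
//!
//! ```
//! use core::sync::atomic::Ordering::Relaxed;
//!
//! use reclaim::typenum::U0;
//! use reclaim::prelude::*;
//! use reclaim::leak::Guard;
//!
//! type Atomic<T> = reclaim::leak::Atomic<T, U0>;
//!
//! let atomic = Atomic::new(1);
//!
//! // the guard is borrowed *mutably* by the load
//! let mut guard = Guard::new();
//! let shared = atomic.load(Relaxed, &mut guard).unwrap();
//! assert_eq!(&*shared, &1);
//! ```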
//!
//! Some reclamation schemes (e.g. epoch based ones) do not require individual
//! protection of values, but instead protect an arbitrary number of shared
//! values at once.
//! The guard types for these schemes can additionally implement the
//! [`ProtectRegion`] trait.
//! Such guards do not have to carry any state and protect values simply by
//! their existence.
//! Consequently, it is also possible to call e.g. [`Atomic::load`] with a
//! shared reference to a guard implementing that trait.
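//!
//! The `leak` module's `Guard` also implements [`ProtectRegion`], so several
//! values can be loaded through a *shared* reference to the same guard:
//!
//! ```
//! use core::sync::atomic::Ordering::Relaxed;
//!
//! use reclaim::typenum::U0;
//! use reclaim::prelude::*;
//! use reclaim::leak::Guard;
//!
//! type Atomic<T> = reclaim::leak::Atomic<T, U0>;
//!
//! let atomic = Atomic::new(1);
//! let other = Atomic::new(0);
//!
//! // the guard protects all loaded values by its mere existence
//! let guard = Guard::new();
//! let shared1 = atomic.load(Relaxed, &guard).unwrap();
//! let shared0 = other.load(Relaxed, &guard).unwrap();
//! assert_eq!(&*shared1, &1);
//! assert_eq!(&*shared0, &0);
//! ```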
//!
//! ## The `Atomic` Type
//!
//! The [`Atomic`] markable concurrent pointer type is the main point of
//! interaction with this crate.
//! It can only be safely created as a `null` pointer or with a valid heap
//! allocation backing it up.
//! It supports all common atomic operations like `load`, `store`,
//! `compare_exchange`, etc.
//! The key aspect of this type is that, together with a guard, shared values
//! can be safely loaded and de-referenced while other threads can concurrently
//! reclaim removed values.
//! In addition to the [`Shared`] type, which represents a shared reference that
//! is protected from reclamation, other atomic operations can yield
//! [`Unprotected`] or [`Unlinked`] values.
//! The former are explicitly not protected from reclamation and can be loaded
//! without any guards.
//! They are not safe to de-reference, but can be used to e.g. swing a pointer
//! from one linked list node to another.
//! [`Unlinked`] values are the result of *swap* or *compare-and-swap*
//! operations and represent values/references to which no new references can
//! be acquired any more by other threads.
//! They are like *owned* values that are also borrowed, since other threads
//! may still reference them.
//! All three of these types are guaranteed at the type level to never be null.
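//!
//! The following is a hedged sketch of this interplay (`R` stands for some
//! [`GlobalReclaim`] implementation and `swap` is used as a representative
//! *swap* operation; type parameters are elided):
//!
//! ```ignore
//! use core::sync::atomic::Ordering::{Acquire, Release};
//!
//! let atomic = Atomic::new(1); // allocates a new record
//!
//! // a protected load, the returned `Shared` is safe to de-reference
//! let mut guard = R::guard();
//! let shared = atomic.load(Acquire, &mut guard).unwrap();
//! assert_eq!(*shared, 1);
//!
//! // a successful swap returns the previous value as `Unlinked`
//! let unlinked = atomic.swap(Owned::new(2), Release).unwrap();
//! // SAFETY: the record is fully unlinked, no new references can be acquired
//! unsafe { unlinked.retire() };
//! ```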
//!
//! ## Marked Pointer & Reference Types
//!
//! It is a ubiquitous technique in lock-free programming to use the lower bits
//! of a pointer address to store additional information alongside an address.
//! Common use-cases are ABA problem mitigation or marking a node of a linked
//! list for removal.
//!
//! Accordingly, this crate allows all pointer and reference types
//! ([`MarkedPtr`], [`Shared`], etc.) to be marked.
//! The number of usable mark bits is encoded in the type itself as a generic
//! parameter `N`.
//! However, the number of available mark bits has a physical upper bound, which
//! is dictated by the alignment of the pointed-to type.
//! For instance, a `bool` has an alignment of 1, hence pointers to boolean
//! values can not, in fact, be marked.
//! On a 64-bit system, a `usize` has an alignment of 8, which means a pointer
//! to one can use up to 3 mark bits.
//! Since the number `N` is encoded in the pointer types themselves, attempting
//! to declare a type with more mark bits than the pointed-to type's alignment
//! allows will lead to a (currently fairly cryptic) compile time error.
//! Note that tags are allowed to overflow. This can lead to surprising results
//! when attempting to e.g. mark a pointer that is declared to support zero mark
//! bits (`N = 0`), as the tag will be silently truncated.
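//!
//! A short sketch of composing and decomposing a marked pointer (the
//! `compose` and `decompose` method names are assumed here for illustration
//! and not established by the text above):
//!
//! ```ignore
//! use reclaim::typenum::U1;
//! use reclaim::MarkedPtr;
//!
//! // `u64` has an alignment of 8, so its pointers can spare their lowest bit
//! let raw = Box::into_raw(Box::new(0u64));
//! let marked: MarkedPtr<u64, U1> = MarkedPtr::compose(raw, 0b1);
//!
//! // the tag travels alongside the pointer and can be split off again
//! let (ptr, tag) = marked.decompose();
//! assert_eq!((ptr, tag), (raw, 0b1));
//! ```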
//!
//! # Terminology
//!
//! Throughout this crate's API and its documentation a certain terminology is
//! used consistently, which is summarized below:
//!
//! - record
//!
//!   A heap allocated value which is managed by some reclamation scheme.
//!
//! - unlink
//!
//!   The act of removing the pointer to a *record* from shared memory through
//!   an atomic operation such as *compare-and-swap*.
//!
//! - retire
//!
//!   The act of marking an *unlinked* record as no longer in use and handing
//!   off the responsibility for de-allocation to the reclamation scheme.
//!
//! - reclaim
//!
//!   The act of dropping and de-allocating a *retired* record.
//!   The reclamation scheme is responsible for guaranteeing that *retired*
//!   records are kept alive (cached) **at least** until their respective
//!   *grace periods* have expired.

#![cfg_attr(not(any(test, feature = "std")), no_std)]
#![warn(missing_docs)]

#[cfg(not(feature = "std"))]
extern crate alloc;

#[macro_use]
mod macros;

pub mod align;
pub mod leak;
pub mod prelude {
    //! Useful and/or required types, discriminants and traits for the `reclaim`
    //! crate.

    pub use crate::pointer::{
        Marked::{self, Null, Value},
        MarkedPointer, NonNullable,
    };

    pub use crate::GlobalReclaim;
    pub use crate::Protect;
    pub use crate::ProtectRegion;
    pub use crate::Reclaim;
}

mod atomic;
mod internal;
mod owned;
mod pointer;
mod retired;
mod shared;
mod unlinked;
mod unprotected;

#[cfg(feature = "std")]
use std::error::Error;

use core::fmt;
use core::marker::PhantomData;
use core::mem;
use core::ptr::NonNull;
use core::sync::atomic::Ordering;

// TODO: replace with const generics once available
pub use typenum;

use memoffset::offset_of;
use typenum::Unsigned;

pub use crate::atomic::{Atomic, CompareExchangeFailure};
pub use crate::pointer::{
    AtomicMarkedPtr, InvalidNullError, Marked, MarkedNonNull, MarkedPointer, MarkedPtr, NonNullable,
};
pub use crate::retired::Retired;

////////////////////////////////////////////////////////////////////////////////////////////////////
// GlobalReclaim (trait)
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A trait for retiring and reclaiming entries removed from concurrent
/// collections and data structures.
///
/// Implementing this trait requires first implementing the [`Reclaim`]
/// trait for the same type and is usually only possible in `std` environments
/// with access to thread local storage.
///
/// # Examples
///
/// Defining a concurrent data structure generic over the employed reclamation
/// scheme:
///
/// ```
/// use reclaim::typenum::U0;
/// use reclaim::GlobalReclaim;
///
/// type Atomic<T, R> = reclaim::Atomic<T, R, U0>;
///
/// pub struct Stack<T, R: GlobalReclaim> {
///     head: Atomic<Node<T, R>, R>,
/// }
///
/// struct Node<T, R: GlobalReclaim> {
///     elem: T,
///     next: Atomic<Node<T, R>, R>,
/// }
/// ```
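///
/// A hedged sketch of a corresponding `pop` operation, tying together
/// protected loads, unlinking and retiring (the exact method signatures and
/// the de-reference of `Unlinked` are assumed here for illustration):
///
/// ```ignore
/// use core::sync::atomic::Ordering::{Acquire, Relaxed, Release};
///
/// impl<T: Clone + 'static, R: GlobalReclaim> Stack<T, R> {
///     pub fn pop(&self) -> Option<T> {
///         let mut guard = R::guard();
///         // load the current head, protected from concurrent reclamation
///         while let Some(head) = self.head.load(Acquire, &mut guard) {
///             let next = head.next.load_unprotected(Relaxed);
///             if let Ok(unlinked) =
///                 self.head.compare_exchange(head, next, Release, Relaxed)
///             {
///                 let elem = unlinked.elem.clone();
///                 // SAFETY: the node is fully unlinked from the stack
///                 unsafe { unlinked.retire() };
///                 return Some(elem);
///             }
///         }
///         None
///     }
/// }
/// ```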
pub unsafe trait GlobalReclaim
where
    Self: Reclaim,
{
    /// The type used for protecting concurrently shared references.
    type Guard: Protect<Reclaimer = Self> + Default;

    /// Creates a new [`Guard`][GlobalReclaim::Guard].
    ///
    /// When `Self::Guard` implements [`ProtectRegion`], this operation
    /// instantly establishes protection for loaded values.
    /// Otherwise, the guard must first explicitly protect a specific shared
    /// value.
    fn guard() -> Self::Guard {
        Self::Guard::default()
    }

    /// Attempts to safely reclaim some retired records.
    ///
    /// Retired records are often cached locally by each thread to some
    /// extent.
    /// In cases where opportunities for reclaiming these records (usually
    /// actions requiring protection of shared values) materialize only rarely,
    /// this function can be used to initiate the (safe) reclamation manually.
    fn try_flush();

    /// Retires a record and caches it **at least** until it is safe to
    /// deallocate it.
    ///
    /// For further information, refer to the documentation of
    /// [`retire_local`][`Reclaim::retire_local`].
    ///
    /// # Safety
    ///
    /// The same caveats as with [`retire_local`][`Reclaim::retire_local`]
    /// apply.
    unsafe fn retire<T: 'static, N: Unsigned>(unlinked: Unlinked<T, Self, N>);

    /// Retires a record and caches it **at least** until it is safe to
    /// deallocate it.
    ///
    /// For further information, refer to the documentation of
    /// [`retire_local_unchecked`][`Reclaim::retire_local_unchecked`].
    ///
    /// # Safety
    ///
    /// The same caveats as with
    /// [`retire_local_unchecked`][`Reclaim::retire_local_unchecked`] apply.
    unsafe fn retire_unchecked<T, N: Unsigned>(unlinked: Unlinked<T, Self, N>);

    /// Retires a raw marked pointer to a record.
    ///
    /// # Safety
    ///
    /// The same caveats as with [`retire_local_raw`][`Reclaim::retire_local_raw`]
    /// apply.
    ///
    /// # Panics
    ///
    /// In debug mode, this function panics if `ptr` is `null`.
    unsafe fn retire_raw<T, N: Unsigned>(ptr: MarkedPtr<T, N>) {
        debug_assert!(!ptr.is_null());
        Self::retire_unchecked(Unlinked::from_marked_ptr(ptr));
    }
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Reclaim (trait)
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A trait which constitutes the foundation for the [`GlobalReclaim`] trait.
///
/// This trait is specifically intended to be fully compatible with `#[no_std]`
/// environments.
/// This is expressed by the requirement to explicitly pass references to thread
/// local state or storage when calling functions that retire records.
///
/// If a reclamation scheme does not require or deliberately chooses to avoid
/// using thread local storage for the sake of simplicity or portability
/// (usually at the cost of performance), it is valid to implement `Self::Local`
/// as `()` and pass all retired records directly through to some global state.
/// Note that this will generally require more, and more frequent,
/// synchronization.
/// For cases in which `Local` is defined to be `()`, there exists a blanket
/// implementation of [`GlobalReclaim`].
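///
/// A minimal sketch of such a `Local = ()` implementation (`MyScheme` and the
/// hand-off to global state are hypothetical):
///
/// ```ignore
/// unsafe impl Reclaim for MyScheme {
///     // no thread local state, every retired record goes to global state
///     type Local = ();
///     type RecordHeader = ();
///
///     unsafe fn retire_local<T: 'static, N: Unsigned>(
///         _: &Self::Local,
///         unlinked: Unlinked<T, Self, N>,
///     ) {
///         // hand the record off to some (hypothetical) global queue; a real
///         // scheme must eventually reclaim it after its grace period
///     }
///
///     // ...the other required methods are implemented analogously
/// }
/// ```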
pub unsafe trait Reclaim
where
    Self: Sized + 'static,
{
    /// The type used for storing all relevant thread local state.
    type Local: Sized;

    /// Every record allocates this type alongside itself to store additional
    /// reclamation scheme specific data.
    /// When no such data is required, `()` is the recommended choice.
    type RecordHeader: Default + Sync + Sized;

    /// Retires a record and caches it **at least** until it is safe to
    /// deallocate it.
    ///
    /// How to determine that no other thread can possibly have any (protected)
    /// reference to a record depends on the respective reclamation scheme.
    ///
    /// # Safety
    ///
    /// The caller has to guarantee that the record is **fully** unlinked from
    /// any data structure it was previously inserted in:
    /// There must be no way for another thread to acquire a *new* reference to
    /// the given `unlinked` record.
    ///
    /// While an [`Unlinked`] value can only safely be obtained by atomic
    /// operations that do in fact remove a value from its place in memory (i.e.
    /// *swap* or *compare-and-swap*), this is only the *necessary* condition
    /// for safe reclamation, but not always a *sufficient* one.
    /// When a unique address to heap allocated memory is inserted in more than
    /// one element of a shared data structure, it is still possible for other
    /// threads to access this address even if it is unlinked from one spot.
    ///
    /// This invariant also mandates that correct synchronization of atomic
    /// operations around calls to functions that retire records is ensured.
    /// Consider the following (incorrect) example:
    ///
    /// ```ignore
    /// use core::sync::atomic::Ordering::Relaxed;
    ///
    /// // `g` is an `Atomic` shared between both threads
    /// let g = Atomic::from(Owned::new(1));
    ///
    /// // thread 1
    /// let expected = g.load_unprotected(Relaxed); // reads &1
    /// let unlinked = g
    ///     .compare_exchange(expected, Owned::null(), Relaxed, Relaxed)
    ///     .unwrap();
    ///
    /// unsafe { unlinked.retire() };
    ///
    /// // thread 2
    /// let mut guard = R::guard(); // `R` being the employed reclamation scheme
    /// if let Some(shared) = g.load(Relaxed, &mut guard) {
    ///     assert_eq!(*shared, 1); // !!! may read freed memory
    /// }
    /// ```
    ///
    /// In this example, the invariant can not be guaranteed to be maintained,
    /// due to the incorrect (relaxed) memory orderings.
    /// Thread 1 can potentially unlink the shared value, retire and reclaim it,
    /// without the `compare_exchange` operation ever becoming visible to
    /// thread 2.
    /// That thread could then proceed to load and read the previous value
    /// instead of the inserted `null`, accessing freed memory.
    unsafe fn retire_local<T: 'static, N: Unsigned>(
        local: &Self::Local,
        unlinked: Unlinked<T, Self, N>,
    );

    /// Retires a record and caches it **at least** until it is safe to
    /// deallocate it.
    ///
    /// How to determine that no other thread can possibly have any (protected)
    /// reference to a record depends on the respective reclamation scheme.
    ///
    /// # Safety
    ///
    /// The same restrictions as with the [`retire_local`][Reclaim::retire_local]
    /// function apply here as well.
    ///
    /// In addition to these invariants, this method requires the caller to
    /// ensure that any `Drop` implementation for `T` or any contained type
    /// does not access any **non-static** references.
    /// The `reclaim` interface makes no guarantees about the precise time a
    /// retired record is actually reclaimed.
    /// Hence, it is not possible to ensure that any references stored within
    /// the record have not become invalid at that point.
    unsafe fn retire_local_unchecked<T, N: Unsigned>(
        local: &Self::Local,
        unlinked: Unlinked<T, Self, N>,
    );

    /// Retires a raw marked pointer to a record.
    ///
    /// # Safety
    ///
    /// The same restrictions as with [`retire_local_unchecked`][Reclaim::retire_local_unchecked]
    /// apply.
    /// Since this function accepts a raw pointer, no type level checks of its
    /// validity are possible; these are hence the responsibility of the caller.
    ///
    /// # Panics
    ///
    /// In debug mode, this function panics if `ptr` is `null`.
    unsafe fn retire_local_raw<T, N: Unsigned>(local: &Self::Local, ptr: MarkedPtr<T, N>) {
        debug_assert!(!ptr.is_null());
        Self::retire_local_unchecked(local, Unlinked::from_marked_ptr(ptr));
    }
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Protect (trait)
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A trait for guard types that *protect* a specific value from reclamation
/// during the lifetime of the protecting guard.
///
/// # Examples
///
/// ```
/// use core::sync::atomic::Ordering::Relaxed;
///
/// use reclaim::typenum::U0;
/// use reclaim::prelude::*;
/// // `Leaking` implements both `Protect` and `ProtectRegion`
/// use reclaim::leak::Guard;
///
/// type Atomic<T> = reclaim::leak::Atomic<T, U0>;
///
/// let atomic = Atomic::new(1);
///
/// let mut guard = Guard::new();
/// let shared = atomic.load(Relaxed, &mut guard).unwrap();
/// assert_eq!(&*shared, &1);
/// ```
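///
/// A value can also be protected through the guard itself, e.g. with the
/// [`protect`][Protect::protect] method (a minimal sketch using the same
/// `leak` module):
///
/// ```
/// use core::sync::atomic::Ordering::Relaxed;
///
/// use reclaim::typenum::U0;
/// use reclaim::prelude::*;
/// use reclaim::leak::Guard;
///
/// type Atomic<T> = reclaim::leak::Atomic<T, U0>;
///
/// let atomic = Atomic::new(1);
/// let mut guard = Guard::new();
///
/// // `protect` takes a snapshot of `atomic` and returns a `Marked<Shared>`
/// if let Value(shared) = guard.protect(&atomic, Relaxed) {
///     assert_eq!(&*shared, &1);
/// }
/// ```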
pub unsafe trait Protect
where
    Self: Clone + Sized,
{
    /// The reclamation scheme associated with this type of guard.
    type Reclaimer: Reclaim;

    /// Converts the guard into a [`Guarded`] by fusing it with a value loaded
    /// from `atomic`.
    ///
    /// # Errors
    ///
    /// If the value loaded from `atomic` is `null`, this method instead
    /// returns `self` again, wrapped in an [`Err`].
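    ///
    /// # Examples
    ///
    /// A minimal sketch using this crate's `leak` module:
    ///
    /// ```
    /// use core::sync::atomic::Ordering::Relaxed;
    ///
    /// use reclaim::typenum::U0;
    /// use reclaim::prelude::*;
    /// use reclaim::leak::Guard;
    ///
    /// type Atomic<T> = reclaim::leak::Atomic<T, U0>;
    ///
    /// let atomic = Atomic::new(1);
    ///
    /// // fusing succeeds, since the loaded value is non-null
    /// if let Ok(guarded) = Guard::new().try_fuse(&atomic, Relaxed) {
    ///     assert_eq!(&*guarded.shared(), &1);
    /// }
    /// ```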
    #[inline]
    fn try_fuse<T, N: Unsigned>(
        mut self,
        atomic: &Atomic<T, Self::Reclaimer, N>,
        order: Ordering,
    ) -> Result<Guarded<T, Self, N>, Self> {
        if let Marked::Value(shared) = self.protect(atomic, order) {
            let ptr = Shared::into_marked_non_null(shared);
            Ok(Guarded { guard: self, ptr })
        } else {
            Err(self)
        }
    }

    /// Releases any current protection that may be provided by the guard.
    ///
    /// By borrowing `self` mutably it is ensured that no loaded values
    /// protected by this guard can be used after calling this method.
    /// If `Self` additionally implements [`ProtectRegion`], this is a no-op.
    fn release(&mut self);

    /// Atomically takes a snapshot of `atomic` and returns a protected
    /// [`Shared`] reference to it, wrapped in a [`Marked`].
    ///
    /// The loaded value is stored within `self`. If the value of `atomic` is
    /// `null` or a pure tag (marked `null` pointer), no protection has to be
    /// established. Any previously protected value will be overwritten and be
    /// no longer protected, regardless of the loaded value.
    ///
    /// # Panics
    ///
    /// *May* panic if `order` is [`Release`][release] or [`AcqRel`][acq_rel].
    ///
    /// [release]: core::sync::atomic::Ordering::Release
    /// [acq_rel]: core::sync::atomic::Ordering::AcqRel
    fn protect<T, N: Unsigned>(
        &mut self,
        atomic: &Atomic<T, Self::Reclaimer, N>,
        order: Ordering,
    ) -> Marked<Shared<T, Self::Reclaimer, N>>;

    /// Atomically takes a snapshot of `atomic` and returns a protected
    /// [`Shared`] reference to it, wrapped in a [`Marked`], **if** the loaded
    /// value is equal to `expected`.
    ///
    /// A *successfully* loaded value is stored within `self`. If the value of
    /// `atomic` is `null` or a pure tag (marked `null` pointer), no protection
    /// has to be established. After a *successful* load, any previously
    /// protected value will be overwritten and be no longer protected,
    /// regardless of the loaded value. In case of an unsuccessful load, the
    /// previously protected value does not change.
    ///
    /// # Errors
    ///
    /// This method returns an [`Err(NotEqualError)`][NotEqualError] result, if
    /// the atomically loaded snapshot from `atomic` does not match the
    /// `expected` value.
    ///
    /// # Panics
    ///
    /// *May* panic if `order` is [`Release`][release] or [`AcqRel`][acq_rel].
    ///
    /// [release]: core::sync::atomic::Ordering::Release
    /// [acq_rel]: core::sync::atomic::Ordering::AcqRel
    fn protect_if_equal<T, N: Unsigned>(
        &mut self,
        atomic: &Atomic<T, Self::Reclaimer, N>,
        expected: MarkedPtr<T, N>,
        order: Ordering,
    ) -> AcquireResult<T, Self::Reclaimer, N>;
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// ProtectRegion (trait)
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A trait for guard types that protect *all* values loaded during their
/// existence from reclamation.
///
/// # Examples
///
/// ```
/// use core::sync::atomic::Ordering::Relaxed;
///
/// use reclaim::typenum::U0;
/// use reclaim::prelude::*;
/// // `Leaking` implements both `Protect` and `ProtectRegion`
/// use reclaim::leak::Guard;
///
/// type Atomic<T> = reclaim::leak::Atomic<T, U0>;
///
/// let atomic = Atomic::new(1);
/// let other = Atomic::new(0);
///
/// let guard = Guard::new();
/// let shared1 = atomic.load(Relaxed, &guard).unwrap();
/// let shared0 = other.load(Relaxed, &guard).unwrap();
/// assert_eq!(&*shared1, &1);
/// assert_eq!(&*shared0, &0);
/// ```
pub unsafe trait ProtectRegion
where
    Self: Protect,
{
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// AcquireResult
////////////////////////////////////////////////////////////////////////////////////////////////////

/// Result type for [`protect_if_equal`][Protect::protect_if_equal] operations.
pub type AcquireResult<'g, T, R, N> = Result<Marked<Shared<'g, T, R, N>>, NotEqualError>;

////////////////////////////////////////////////////////////////////////////////////////////////////
// NotEqualError
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A zero-size marker type that represents the failure state of a
/// [`protect_if_equal`][Protect::protect_if_equal] operation.
#[derive(Clone, Copy, Debug, Default, Eq, Ord, PartialEq, PartialOrd)]
pub struct NotEqualError;

impl fmt::Display for NotEqualError {
    #[inline]
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "acquired value does not match `expected`")
    }
}

#[cfg(feature = "std")]
impl Error for NotEqualError {}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Record
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A record type that is associated with a specific reclamation scheme.
///
/// Whenever a new [`Owned`] or (non-null) [`Atomic`] is created, a value of
/// this type is allocated on the heap as a wrapper for the desired record.
/// The record and its header are never directly exposed to the data structure
/// using a given memory reclamation scheme and should only be accessed by the
/// reclamation scheme itself.
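///
/// A brief sketch of constructing a record directly (using the `leak`
/// module's `Leaking` as the reclamation scheme; whether its header type
/// carries any data is an implementation detail of that scheme):
///
/// ```ignore
/// use reclaim::Record;
/// use reclaim::leak::Leaking;
///
/// // wraps the element together with `Leaking`'s record header
/// let record: Record<i32, Leaking> = Record::new(1);
/// assert_eq!(record.elem(), &1);
/// ```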
pub struct Record<T, R: Reclaim> {
    /// The record's header.
    header: R::RecordHeader,
    /// The record's wrapped (inner) element.
    elem: T,
}

impl<T, R: Reclaim> Record<T, R> {
    /// Creates a new record with the specified `elem` and a default header.
    #[inline]
    pub fn new(elem: T) -> Self {
        Self { header: Default::default(), elem }
    }

    /// Creates a new record with the specified `elem` and `header`.
    #[inline]
    pub fn with_header(elem: T, header: R::RecordHeader) -> Self {
        Self { header, elem }
    }

    /// Returns a reference to the record's header.
    #[inline]
    pub fn header(&self) -> &R::RecordHeader {
        &self.header
    }

    /// Returns a reference to the record's element.
    #[inline]
    pub fn elem(&self) -> &T {
        &self.elem
    }

    /// Calculates the address of the [`Record`] for the given pointer to a
    /// wrapped non-nullable `elem` and returns the resulting pointer.
    ///
    /// # Safety
    ///
    /// The `elem` pointer must be a valid pointer to an instance of `T` that
    /// was constructed as part of a [`Record`]. Otherwise, the pointer
    /// arithmetic used to determine the address will result in a pointer to
    /// unrelated memory, which is likely to lead to undefined behavior.
    #[inline]
    pub unsafe fn from_raw_non_null(elem: NonNull<T>) -> NonNull<Self> {
        Self::from_raw(elem.as_ptr())
    }

    /// Calculates the address of the [`Record`] for the given pointer to a
    /// wrapped `elem` and returns the resulting pointer.
    ///
    /// # Safety
    ///
    /// The `elem` pointer must be a valid pointer to an instance of `T` that
    /// was constructed as part of a [`Record`]. Otherwise, the pointer
    /// arithmetic used to determine the address will result in a pointer to
    /// unrelated memory, which is likely to lead to undefined behavior.
    #[inline]
    pub unsafe fn from_raw(elem: *mut T) -> NonNull<Self> {
        let addr = (elem as usize) - Self::offset_elem();
        NonNull::new_unchecked(addr as *mut _)
    }

    /// Returns a reference to the header for the record at the pointed-to
    /// location of the pointer `elem`.
    ///
    /// # Safety
    ///
    /// The pointer `elem` must be a valid pointer to an instance of `T` that
    /// was allocated as part of a `Record`.
    /// Otherwise, the pointer arithmetic used to calculate the header's address
    /// will be incorrect and lead to undefined behavior.
    #[inline]
    pub unsafe fn header_from_raw<'a>(elem: *mut T) -> &'a R::RecordHeader {
        let header = (elem as usize) - Self::offset_elem() + Self::offset_header();
        &*(header as *mut _)
    }

    /// Returns a reference to the header for the record at the pointed-to
    /// location of the non-nullable pointer `elem`.
    ///
    /// # Safety
    ///
    /// The pointer `elem` must be a valid pointer to an instance of `T` that
    /// was allocated as part of a `Record`.
    /// Otherwise, the pointer arithmetic used to calculate the header's address
    /// will be incorrect and lead to undefined behavior.
    #[inline]
    pub unsafe fn header_from_raw_non_null<'a>(elem: NonNull<T>) -> &'a R::RecordHeader {
        let header = (elem.as_ptr() as usize) - Self::offset_elem() + Self::offset_header();
        &*(header as *mut _)
    }

    /// Returns the offset in bytes from the address of a record to its header
    /// field.
    #[inline]
    pub fn offset_header() -> usize {
        // FIXME: the offset_of! macro is unsound, this allows at least avoiding using it
        //        in many cases until a better solution becomes available
        // https://internals.rust-lang.org/t/pre-rfc-add-a-new-offset-of-macro-to-core-mem/9273
        if mem::size_of::<R::RecordHeader>() == 0 {
            0
        } else {
            offset_of!(Self, header)
        }
    }

    /// Returns the offset in bytes from the address of a record to its element
    /// field.
    #[inline]
    pub fn offset_elem() -> usize {
        // FIXME: the offset_of! macro is unsound, this allows at least avoiding using it
        //        in many cases until a better solution becomes available
        // https://internals.rust-lang.org/t/pre-rfc-add-a-new-offset-of-macro-to-core-mem/9273
        if mem::size_of::<R::RecordHeader>() == 0 {
            0
        } else {
            offset_of!(Self, elem)
        }
    }
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Guarded
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A guard type fused with a protected value.
#[derive(Debug)]
pub struct Guarded<T, G, N: Unsigned> {
    guard: G,
    ptr: MarkedNonNull<T, N>,
}

impl<T, G: Protect, N: Unsigned> Guarded<T, G, N> {
    /// Returns a [`Shared`] reference borrowed from the [`Guarded`].
    #[inline]
    pub fn shared(&self) -> Shared<T, G::Reclaimer, N> {
        Shared { inner: self.ptr, _marker: PhantomData }
    }

    /// Converts the [`Guarded`] into the internally stored guard.
    ///
    /// If `G` does not implement [`ProtectRegion`], the guard is guaranteed
    /// to be [`released`][Protect::release] before it is returned.
    #[inline]
    pub fn into_guard(self) -> G {
        let mut guard = self.guard;
        guard.release();
        guard
    }
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Owned
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A pointer type for heap allocated values similar to `Box`.
///
/// `Owned` values function like marked pointers and are also guaranteed to
/// allocate the appropriate [`RecordHeader`][Reclaim::RecordHeader] type for
/// their generic [`Reclaim`] parameter alongside their actual content.
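///
/// A brief sketch (type parameters elided; `Atomic::from` is shown in the
/// trait documentation above):
///
/// ```ignore
/// // allocates a full record (header + element), not just the element
/// let owned = Owned::new(1);
/// // ownership can be transferred into an `Atomic`
/// let atomic = Atomic::from(owned);
/// ```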
#[derive(Eq, Ord, PartialEq, PartialOrd)]
pub struct Owned<T, R: Reclaim, N: Unsigned> {
    inner: MarkedNonNull<T, N>,
    _marker: PhantomData<(T, R)>,
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Shared
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A shared reference to a value that is actively protected from reclamation
/// by other threads.
///
/// `Shared` values have similar semantics to shared references (`&'g T`), i.e.
/// they can be trivially copied, cloned and (safely) de-referenced.
/// However, they do retain any potential mark bits of the atomic value from
/// which they were originally read.
/// They are also usually borrowed from guard values implementing the
/// [`Protect`] trait.
pub struct Shared<'g, T, R, N> {
    inner: MarkedNonNull<T, N>,
    _marker: PhantomData<(&'g T, R)>,
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Unlinked
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A reference to a value that has been removed from its previous location in
/// memory and is hence no longer reachable by other threads.
///
/// `Unlinked` values are the result of (successful) atomic *swap* or
/// *compare-and-swap* operations on [`Atomic`] values.
/// They are move-only types, but they don't have full ownership semantics,
/// either.
/// Dropping an `Unlinked` value without explicitly retiring it almost certainly
/// results in a memory leak.
///
/// The safety invariants around retiring `Unlinked` references are explained
/// in detail in the documentation for [`retire_local`][Reclaim::retire_local].
#[derive(Eq, Ord, PartialEq, PartialOrd)]
#[must_use = "unlinked values are meant to be retired, otherwise a memory leak is highly likely"]
pub struct Unlinked<T, R, N> {
    inner: MarkedNonNull<T, N>,
    _marker: PhantomData<(T, R)>,
}

////////////////////////////////////////////////////////////////////////////////////////////////////
// Unprotected
////////////////////////////////////////////////////////////////////////////////////////////////////

/// A reference to a value loaded from an [`Atomic`] that is not actively
/// protected from reclamation.
///
/// `Unprotected` values can not be safely de-referenced under usual
/// circumstances (i.e. when other threads can retire and reclaim unlinked
/// records).
/// They do, however, have stronger guarantees than raw (marked) pointers:
/// Since they are loaded from [`Atomic`] values, they must (at least at one
/// point) have been *valid* references.
#[derive(Eq, Ord, PartialEq, PartialOrd)]
pub struct Unprotected<T, R, N> {
    inner: MarkedNonNull<T, N>,
    _marker: PhantomData<R>,
}