attachable-slab-allocator 0.1.0

A high-performance, $O(1)$, Master-Slave slab allocator designed for `no_std` environments, kernels, and embedded systems. This library provides fixed-size memory management with RAII safety while remaining completely agnostic to the underlying memory provider.
Documentation
//! # Spin-based Mutual Exclusion
//!
//! A simple, low-level SpinLock implementation using atomic primitives.
//! This lock is suitable for environments where `std::sync::Mutex` is unavailable
//! (like `no_std` kernels) and critical sections are very short.
//!
//! ## Mechanism
//! The lock uses an `AtomicBool` with `Acquire`/`Release` memory ordering to
//! ensure that memory operations performed while holding the lock are not
//! reordered outside of the protected section.

use super::lock_trait::LockTrait;
use core::sync::atomic::AtomicBool;
use core::sync::atomic::Ordering;

/// A simple spinlock using an `AtomicBool`.
///
/// Size: 1 byte (well within the 16-byte limit).
pub struct SpinLock {
    locked: AtomicBool,
}

/// RAII Guard for [`SpinLock`].
///
/// When dropped, it stores `false` in the lock's `AtomicBool` with `Release` ordering.
pub struct SpinLockGuard<'a> {
    lock: &'a SpinLock,
}

impl<'a> Drop for SpinLockGuard<'a> {
    fn drop(&mut self) {
        self.lock.locked.store(false, Ordering::Release);
    }
}

impl LockTrait for SpinLock {
    type Guard<'a>
        = SpinLockGuard<'a>
    where
        Self: 'a;

    /// Continuously polls the lock until it can be acquired.
    ///
    /// Uses `core::hint::spin_loop()` to signal the CPU that it is in a
    /// busy-wait loop, which can save power or improve performance on some architectures.
    fn lock(&self) -> Self::Guard<'_> {
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            // Hint to the CPU that this is a busy-wait loop (e.g. PAUSE on x86),
            // allowing it to reduce power or yield pipeline resources.
            core::hint::spin_loop();
        }

        SpinLockGuard { lock: self }
    }

    /// Creates a new, unlocked `SpinLock`.
    fn init() -> Self {
        Self {
            locked: AtomicBool::new(false),
        }
    }
}

/// Note: `SpinLock` is already `Sync` automatically, since its only field
/// (`AtomicBool`) is `Sync`. This manual implementation is therefore redundant;
/// it is kept to document explicitly that the lock is designed to be shared
/// across threads.
unsafe impl Sync for SpinLock {}