# attachable-slab-allocator 0.1.0

A high-performance, $O(1)$, Master-Slave slab allocator designed for `no_std` environments, kernels, and embedded systems. This library provides fixed-size memory management with RAII safety while remaining completely agnostic to the underlying memory provider.
## Documentation
//! # SlabCache: The Governor of the Master-Slave Allocator
//!
//! `SlabCache` serves as the high-level management interface for the slab allocation system.
//! While the `Slab` module handles the raw memory mechanics, `SlabCache` manages the
//! lifecycle, ownership, and thread-safe distribution of these resources.
//!
//! ## The Master-Slave Lifecycle
//!
//! A `SlabCache` holds a reference to a **Master Slab**.
//! - **Initialization**: When a cache is created, it allocates the first Master slab.
//! - **Shared Ownership**: Because `SlabCache` implements `Clone`, multiple handles can
//!   point to the same Master slab. The Master slab uses internal reference counting
//!   to stay alive as long as at least one `SlabCache` or one `Slave Slab` exists.
//! - **Automatic Cleanup**: When the last `SlabCache` is dropped and all allocated slots
//!   are returned, the entire hierarchy (Master and all Slaves) is automatically freed.
//!
//! ## Memory Layout & The Power-of-Two Rule
//!
//! For the "Global Deallocation" mechanism to work (where you can free a pointer without
//! knowing which slab it came from), this allocator relies on **Address Alignment**.
//!
//! 1. **Alignment == Size**: The `SLAB_SIZE` must be a power of two.
//! 2. **Header Location**: By aligning the start of the slab to its size, we guarantee
//!    that for any pointer `P` inside the slab, `P & !(SLAB_SIZE - 1)` points exactly
//!    to the [`Slab`] header.
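//!
//! The masking rule in step 2 can be sketched with plain address arithmetic (standalone; the `SLAB_SIZE` value here is just an example):

```rust
// Assumption from the docs: slabs are allocated with alignment == SLAB_SIZE,
// and SLAB_SIZE is a power of two, so clearing the low bits of any interior
// address yields the slab's base (where the header lives).
const SLAB_SIZE: usize = 4096;

fn slab_base(addr: usize) -> usize {
    addr & !(SLAB_SIZE - 1)
}

fn main() {
    // Any address inside the slab maps back to the slab's base address.
    assert_eq!(slab_base(0x7000_1230), 0x7000_1000);
    // The base itself is a fixed point of the mask.
    assert_eq!(slab_base(0x7000_1000), 0x7000_1000);
}
```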
//!
//! ## Compile-Time Safety (Static Assertions)
//!
//! This module uses Rust's `const` evaluation to perform "sanity checks" before your
//! code even runs:
//! - **Minimum Slot Size**: Rejects types smaller than `u32`, ensuring the internal
//!   metadata tracking (freelist index/pointer) fits within the slot.
//! - **Maximum Slot Size**: Ensures that at least two slots fit into a single slab
//!   alongside the header, preventing inefficient "one-slot-per-page" allocations.
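//!
//! The same pattern can be reproduced in a standalone sketch (the `SLAB_SIZE` and `HEADER_SIZE` values are assumed for illustration, not taken from this crate):

```rust
use std::mem::size_of;

// Illustrative constants, not this crate's real values.
const SLAB_SIZE: usize = 4096;
const HEADER_SIZE: usize = 64;

const fn slots_per_slab(slot_size: usize) -> usize {
    (SLAB_SIZE - HEADER_SIZE) / slot_size
}

// These fail to *compile* if the invariants are violated:
const _: () = assert!(size_of::<u64>() >= size_of::<u32>()); // minimum slot size
const _: () = assert!(slots_per_slab(size_of::<u64>()) >= 2); // at least two slots

fn main() {
    // (4096 - 64) / 8 slots of u64 fit in one slab under these assumptions.
    assert_eq!(slots_per_slab(size_of::<u64>()), 504);
}
```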
//!
//! ## Thread Safety & Synchronization (`Send` + `Sync`)
//!
//! By default, `SlabCache` contains a raw pointer (`NonNull<Slab<T, LOCK>>`). In Rust, raw pointers
//! are explicitly marked as `!Send` and `!Sync` because the compiler cannot automatically
//! guarantee that sharing or moving them across thread boundaries is safe.
//!
//! To make `SlabCache` multi-threaded ready, we manually implement `Send` and `Sync` with strict
//! trait bounds:
//!
//! 1. **Why `LOCK: LockTrait + Sync`?**
//!    Even though `alloc(&mut self)` requires a mutable reference to the local cache handle, `SlabCache`
//!    is designed to be `Clone`. When cloned, multiple threads hold their own mutable handles pointing
//!    to the **same Master Slab**. The synchronization bottleneck therefore shifts entirely
//!    to the internal `LOCK` inside the slab header. If the `LOCK` is `Sync`, multiple threads can safely
//!    mutate the internal slots and lists concurrently.
//!
//! 2. **Why `T: Send`?**
//!    Since data allocated within a slab can be wrapped in a `SlabBox` and moved to another thread to be
//!    used or dropped (freed), the underlying type `T` must be safe to transfer across thread boundaries.

use crate::{
    locks::LockTrait,
    mem_lay::get_layout,
    prelude::*,
    slab::{Slab, free_master},
    slab_box::SlabBox,
};

use core::{alloc::Layout, ptr::NonNull};

/// A thread-safe, reference-counted handle to a Master-Slave slab hierarchy.
///
/// `SlabCache` allows you to allocate fixed-size objects of type `T` with $O(1)$
/// complexity. It manages the expansion of memory by adding Slave slabs when needed.
pub struct SlabCache<T, LOCK, const SLAB_SIZE: usize>
where
    LOCK: LockTrait,
{
    /// Pointer to the primary Master Slab.
    master: Option<NonNull<Slab<T, LOCK>>>,
}

unsafe impl<T, LOCK, const SLAB_SIZE: usize> Sync for SlabCache<T, LOCK, SLAB_SIZE>
where
    T: Send,
    LOCK: LockTrait + Sync,
{
}

unsafe impl<T, LOCK, const SLAB_SIZE: usize> Send for SlabCache<T, LOCK, SLAB_SIZE>
where
    T: Send,
    LOCK: LockTrait + Sync,
{
}

impl<T, LOCK, const SLAB_SIZE: usize> Clone for SlabCache<T, LOCK, SLAB_SIZE>
where
    LOCK: LockTrait,
{
    /// Creates a new handle to the same Master slab.
    ///
    /// This increments the internal reference count of the Master slab,
    /// ensuring it remains valid even if the original `SlabCache` is dropped.
    fn clone(&self) -> Self {
        let master_ptr = self
            .master
            .expect("cannot clone an uninitialized SlabCache");
        Slab::<T, LOCK>::atomic_ref_up(master_ptr).expect("internal error");

        Self {
            master: Some(master_ptr),
        }
    }
}

impl<T, LOCK, const SLAB_SIZE: usize> Drop for SlabCache<T, LOCK, SLAB_SIZE>
where
    LOCK: LockTrait,
{
    /// Drops the handle and potentially cleans up the allocator.
    ///
    /// Decrements the Master's reference count. If this was the last handle
    /// and the Master has no active Slaves or used slots, the memory is returned to the system.
    fn drop(&mut self) {
        self.free_self();
    }
}

impl<T, LOCK, const SLAB_SIZE: usize> SlabCache<T, LOCK, SLAB_SIZE>
where
    LOCK: LockTrait,
{
    /// Compile-time configuration and validation block.
    const SLAB_SIZE_CHECK: usize = const {
        let size: usize = size_of::<T>();

        // Requirement 1: Slot must be large enough to hold a 32-bit index/pointer for the freelist.
        assert!(
            size >= size_of::<u32>(),
            "Slot Type Smaller Than `u32` Not Allowed At SlabCache"
        );

        let slab_header_size = size_of::<Slab<T, LOCK>>();
        let slot_per_slab = (SLAB_SIZE - slab_header_size) / size;

        // Requirement 2: The Slab must be large enough to hold the header AND at least two slots.
        assert!(
            slot_per_slab >= 2,
            "Slot Type Size Is Greater Than SlabSize (must fit at least 2 slots per slab)"
        );
        size
    };

    /// Calculated layout for the slab, ensuring alignment equals the power-of-two size.
    const SLAB_LAYOUT: Layout = get_layout(SLAB_SIZE);

    /// Creates a new `SlabCache` and initializes the Master Slab.
    ///
    /// This will trigger an initial system allocation for the first Master segment.
    ///
    /// # Errors
    /// - `SlabError::OutOfMemory`: If the initial system allocation fails.
    ///
    /// # Example
    /// ```ignore
    /// let mut cache = SlabCache::<MyStruct, SpinLock, 4096>::new()?;
    /// let my_obj = cache.alloc()?;
    /// ```
    pub fn new() -> Result<SlabCache<T, LOCK, SLAB_SIZE>> {
        let _ = Self::SLAB_SIZE_CHECK; // Trigger const assertions

        let slab_master = Slab::<T, LOCK>::alloc_slab_ptr(Self::SLAB_LAYOUT, None, free_master)?;

        // The Cache itself counts as one reference to the Master.
        Slab::<T, LOCK>::atomic_ref_up(slab_master)?;
        Ok(Self {
            master: Some(slab_master),
        })
    }

    /// Internal helper to release the Master reference during Drop or cleanup.
    fn free_self(&mut self) {
        if let Some(master) = self.master.take() {
            Slab::<T, LOCK>::atomic_release_master(master, Self::SLAB_LAYOUT)
                .expect("internal corruption");
        }
    }

    /// Allocates a new object of type `T` from the cache.
    ///
    /// The returned [`SlabBox`] acts like a `Box<T>`, automatically returning
    /// the memory to the slab when it goes out of scope.
    ///
    /// If the current Master and its Slaves are full, this method will
    /// automatically allocate a new Slave slab and link it to the hierarchy.
    ///
    /// # Errors
    /// - `SlabError::OutOfMemory`: If no space is available and a new Slave cannot be created.
    /// - `SlabError::FatalError`: If the cache's master pointer is null.
    pub fn alloc(&mut self) -> Result<SlabBox<T, LOCK, SLAB_SIZE>> {
        if let Some(master) = self.master {
            let ptr = Slab::<T, LOCK>::alloc_slot(master, Self::SLAB_LAYOUT)?;

            let ptr_box = SlabBox::<T, LOCK, SLAB_SIZE>::new(ptr);

            Ok(ptr_box)
        } else {
            Err(SlabError::FatalError)
        }
    }
}
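
The alignment-equals-size rule that `get_layout` is expected to uphold can be sketched with the standard `Layout` type (standalone; `slab_layout` is an illustrative name, not this crate's function):

```rust
use std::alloc::Layout;

// Each slab is allocated with alignment equal to its power-of-two size,
// which is what makes the base-address masking described in the docs valid.
fn slab_layout(slab_size: usize) -> Layout {
    assert!(slab_size.is_power_of_two(), "SLAB_SIZE must be a power of two");
    Layout::from_size_align(slab_size, slab_size).unwrap()
}

fn main() {
    let layout = slab_layout(4096);
    assert_eq!(layout.size(), 4096);
    assert_eq!(layout.align(), 4096);
}
```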