// attachable_slab_allocator/slab_cache.rs
//! # SlabCache: The Governor of the Master-Slave Allocator
//!
//! `SlabCache` serves as the high-level management interface for the slab allocation system.
//! While the `Slab` module handles the raw memory mechanics, `SlabCache` manages the
//! lifecycle, ownership, and thread-safe distribution of these resources.
//!
//! ## The Master-Slave Lifecycle
//!
//! A `SlabCache` holds a reference to a **Master Slab**.
//! - **Initialization**: When a cache is created, it allocates the first Master slab.
//! - **Shared Ownership**: Because `SlabCache` implements `Clone`, multiple handles can
//!   point to the same Master slab. The Master slab uses internal reference counting
//!   to stay alive as long as at least one `SlabCache` or one Slave slab exists.
//! - **Automatic Cleanup**: When the last `SlabCache` is dropped and all allocated slots
//!   are returned, the entire hierarchy (Master and all Slaves) is automatically freed.
//!
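//! The shared-ownership semantics are analogous to `Arc` (a hedged illustration
//! only; the crate uses its own atomic counter, not `Arc`):
//!
//! ```
//! use std::sync::Arc;
//!
//! let master = Arc::new([0u8; 64]);   // stand-in for the Master slab
//! let handle_a = Arc::clone(&master); // like cloning a SlabCache handle
//! drop(master);                       // the Master outlives the original handle...
//! assert_eq!(Arc::strong_count(&handle_a), 1); // ...until the last handle drops
//! ```
//!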
//! ## Memory Layout & The Power-of-Two Rule
//!
//! For the "Global Deallocation" mechanism to work (where you can free a pointer without
//! knowing which slab it came from), this allocator relies on **Address Alignment**:
//!
//! 1. **Alignment == Size**: The `SLAB_SIZE` must be a power of two.
//! 2. **Header Location**: By aligning the start of the slab to its size, we guarantee
//!    that for any pointer `P` inside the slab, `P & !(SLAB_SIZE - 1)` points exactly
//!    to the [`Slab`] header.
//!
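//! The mask arithmetic can be sketched with plain integers (`SLAB_SIZE` here is an
//! illustrative constant, not the crate's configuration):
//!
//! ```
//! const SLAB_SIZE: usize = 4096; // must be a power of two
//! // A slab aligned to its own size starts at a multiple of SLAB_SIZE.
//! let slab_base: usize = 7 * SLAB_SIZE;
//! // Any pointer inside that slab...
//! let p: usize = slab_base + 1234;
//! // ...masks back to the slab header address.
//! assert_eq!(p & !(SLAB_SIZE - 1), slab_base);
//! ```
//!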
//! ## Compile-Time Safety (Static Assertions)
//!
//! This module uses Rust's `const` evaluation to perform sanity checks before your
//! code even runs:
//! - **Minimum Slot Size**: Rejects types smaller than `u32`, so that the internal
//!   metadata tracking (freelist indices) always fits within a slot.
//! - **Maximum Slot Size**: Rejects types so large that fewer than two slots fit into
//!   a slab after its header, preventing inefficient "one-slot-per-slab" allocations.
//!
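//! The underlying pattern is an ordinary `const` assertion, for example:
//!
//! ```
//! // Evaluated at compile time; a type that violates the bound fails the build.
//! const _: () = assert!(core::mem::size_of::<u64>() >= core::mem::size_of::<u32>());
//! ```
//!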
//! ## Thread Safety & Synchronization (`Send` + `Sync`)
//!
//! By default, `SlabCache` contains a raw pointer (`NonNull<Slab<T, LOCK>>`). In Rust, raw pointers
//! are `!Send` and `!Sync` because the compiler cannot automatically
//! guarantee that sharing or moving them across thread boundaries is safe.
//!
//! To make `SlabCache` multi-threaded ready, we manually implement `Send` and `Sync` with strict
//! trait bounds:
//!
//! 1. **Why `LOCK: LockTrait + Sync`?**
//!    Even though `alloc(&mut self)` requires a mutable reference to the local cache handle, `SlabCache`
//!    is designed to be `Clone`. When cloned, multiple threads hold their own mutable handles pointing
//!    to the **same heap-allocated Master Slab**. Therefore, the synchronization bottleneck shifts entirely
//!    to the internal `LOCK` inside the slab header. If the `LOCK` is `Sync`, multiple threads can safely
//!    mutate the internal slots and lists concurrently.
//!
//! 2. **Why `T: Send`?**
//!    Since data allocated within a slab can be wrapped in a `SlabBox` and moved to another thread to be
//!    used or dropped (freed), the underlying type `T` must be safe to transfer across thread boundaries.
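//!
//! As a self-contained sketch of the opt-in pattern (`Handle` is illustrative,
//! not part of this crate):
//!
//! ```
//! use core::ptr::NonNull;
//!
//! // A minimal handle holding a raw pointer, mirroring SlabCache's layout.
//! struct Handle<T>(NonNull<T>);
//!
//! // Raw pointers make Handle `!Send`/`!Sync` by default; we opt back in,
//! // bounded by `T: Send` just as SlabCache does.
//! unsafe impl<T: Send> Send for Handle<T> {}
//! unsafe impl<T: Send> Sync for Handle<T> {}
//!
//! fn assert_send_sync<V: Send + Sync>() {}
//! assert_send_sync::<Handle<u32>>(); // compiles only because of the impls above
//! ```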

use crate::{
    locks::LockTrait,
    mem_lay::get_layout,
    prelude::*,
    slab::{Slab, free_master},
    slab_box::SlabBox,
};

use core::{alloc::Layout, ptr::NonNull};

/// A thread-safe, reference-counted handle to a Master-Slave slab hierarchy.
///
/// `SlabCache` allows you to allocate fixed-size objects of type `T` with O(1)
/// complexity. It manages the expansion of memory by adding Slave slabs when needed.
pub struct SlabCache<T, LOCK, const SLAB_SIZE: usize>
where
    LOCK: LockTrait,
{
    /// Pointer to the primary Master Slab.
    master: Option<NonNull<Slab<T, LOCK>>>,
}

unsafe impl<T, LOCK, const SLAB_SIZE: usize> Sync for SlabCache<T, LOCK, SLAB_SIZE>
where
    T: Send,
    LOCK: LockTrait + Sync,
{
}

unsafe impl<T, LOCK, const SLAB_SIZE: usize> Send for SlabCache<T, LOCK, SLAB_SIZE>
where
    T: Send,
    LOCK: LockTrait + Sync,
{
}

impl<T, LOCK, const SLAB_SIZE: usize> Clone for SlabCache<T, LOCK, SLAB_SIZE>
where
    LOCK: LockTrait,
{
    /// Creates a new handle to the same Master slab.
    ///
    /// This increments the internal reference count of the Master slab,
    /// ensuring it remains valid even if the original `SlabCache` is dropped.
    fn clone(&self) -> Self {
        let master_ptr = self
            .master
            .expect("cannot clone an uninitialized SlabCache");
        Slab::<T, LOCK>::atomic_ref_up(master_ptr).expect("internal error");

        Self {
            master: Some(master_ptr),
        }
    }
}

impl<T, LOCK, const SLAB_SIZE: usize> Drop for SlabCache<T, LOCK, SLAB_SIZE>
where
    LOCK: LockTrait,
{
    /// Drops the handle and potentially cleans up the allocator.
    ///
    /// Decrements the Master's reference count. If this was the last handle
    /// and the Master has no active Slaves or used slots, the memory is returned to the system.
    fn drop(&mut self) {
        self.free_self();
    }
}

impl<T, LOCK, const SLAB_SIZE: usize> SlabCache<T, LOCK, SLAB_SIZE>
where
    LOCK: LockTrait,
{
    /// Compile-time configuration and validation block.
    const SLAB_SIZE_CHECK: usize = const {
        let size: usize = size_of::<T>();

        // Requirement 1: Slot must be large enough to hold a 32-bit index/pointer for the freelist.
        assert!(
            size >= size_of::<u32>(),
            "Slot Type Smaller Than `u32` Not Allowed At SlabCache"
        );

        let slab_header_size = size_of::<Slab<T, LOCK>>();
        let slot_per_slab = (SLAB_SIZE - slab_header_size) / size;

        // Requirement 2: The Slab must be large enough to hold the header AND at least two slots.
        assert!(
            slot_per_slab >= 2,
            "Slot Type Size Is Greater Than SlabSize (must fit at least 2 slots per slab)"
        );
        size
    };

    /// Calculated layout for the slab, ensuring alignment equals the power-of-two size.
    const SLAB_LAYOUT: Layout = get_layout(SLAB_SIZE);

    /// Creates a new `SlabCache` and initializes the Master Slab.
    ///
    /// This will trigger an initial system allocation for the first Master segment.
    ///
    /// # Errors
    /// - `SlabError::OutOfMemory`: If the initial system allocation fails.
    ///
    /// # Example
    /// ```ignore
    /// let mut cache = SlabCache::<MyStruct, SpinLock, 4096>::new()?;
    /// let my_obj = cache.alloc()?;
    /// ```
    pub fn new() -> Result<SlabCache<T, LOCK, SLAB_SIZE>> {
        let _ = Self::SLAB_SIZE_CHECK; // Trigger const assertions

        let slab_master = Slab::<T, LOCK>::alloc_slab_ptr(Self::SLAB_LAYOUT, None, free_master)?;

        // The Cache itself counts as one reference to the Master.
        Slab::<T, LOCK>::atomic_ref_up(slab_master)?;
        Ok(Self {
            master: Some(slab_master),
        })
    }

    /// Internal helper to release the Master reference during Drop or cleanup.
    fn free_self(&mut self) {
        if let Some(master) = self.master.take() {
            Slab::<T, LOCK>::atomic_release_master(master, Self::SLAB_LAYOUT)
                .expect("internal corruption");
        }
    }

    /// Allocates a new object of type `T` from the cache.
    ///
    /// The returned [`SlabBox`] acts like a `Box<T>`, automatically returning
    /// the memory to the slab when it goes out of scope.
    ///
    /// If the current Master and its Slaves are full, this method will
    /// automatically allocate a new Slave slab and link it to the hierarchy.
    ///
    /// # Errors
    /// - `SlabError::OutOfMemory`: If no space is available and a new Slave cannot be created.
    /// - `SlabError::FatalError`: If the cache's master pointer is missing (the handle was already released).
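    ///
    /// # Example
    /// A sketch mirroring the constructor's example (types are illustrative):
    /// ```ignore
    /// let mut cache = SlabCache::<MyStruct, SpinLock, 4096>::new()?;
    /// let boxed = cache.alloc()?; // O(1) slot grab
    /// drop(boxed);                // slot returns to its slab automatically
    /// ```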
    pub fn alloc(&mut self) -> Result<SlabBox<T, LOCK, SLAB_SIZE>> {
        if let Some(master) = self.master {
            let ptr = Slab::<T, LOCK>::alloc_slot(master, Self::SLAB_LAYOUT)?;
            let ptr_box = SlabBox::<T, LOCK, SLAB_SIZE>::new(ptr);

            Ok(ptr_box)
        } else {
            Err(SlabError::FatalError)
        }
    }
}