attachable-slab-allocator 0.1.0

A high-performance, $O(1)$, Master-Slave slab allocator designed for `no_std` environments, kernels, and embedded systems. This library provides fixed-size memory management with RAII safety while remaining completely agnostic of the underlying memory provider.
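A minimal usage sketch (illustrative only: `buffer_ptr` and `BUF_LEN` stand in for memory supplied by whatever provider you attach; error handling elided):

```rust
use attachable_slab_allocator::slots::Slots;
use core::ptr::NonNull;

// `buffer_ptr: NonNull<u8>` and `BUF_LEN: usize` come from your memory provider.
let mut slots = Slots::<u64>::new(buffer_ptr, BUF_LEN)?;

// O(1) allocation and deallocation of fixed-size slots.
let slot: NonNull<u64> = slots.try_pop_slot()?;
unsafe { slot.write(42) };
slots.try_push_slot(slot)?;
```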
//! # Slots Module
//!
//! This module provides a low-level, intrusive slab allocator segment, represented by the [`Slots`] struct.
//!
//! A `Slots` container manages a contiguous memory buffer, dividing it into uniform, fixed-size
//! blocks (slots) of type `T`. It uses an **intrusive linked list** approach to track free slots,
//! where the "next" free index is stored directly within the free slots themselves when they are not in use.
//!
//! ### Key Features:
//! - **Intrusive Metadata**: Minimal overhead as no extra memory is needed for a linked list structure.
//! - **Generic**: Works with any type `T` that satisfies alignment requirements.
//! - **Safe Pointer Validation**: Includes rigorous checks to ensure only valid, aligned, and
//!   in-bounds pointers are returned or accepted.
//! - **`no_std` Ready**: Designed to work in environments without a standard library.
//!
//! ### Safety Considerations:
//! - The implementation makes heavy use of `unsafe` for pointer arithmetic.
//! - Consumers must not use a slot after returning it via `try_push_slot`,
//!   as its memory is immediately reused to store the linked-list metadata.
//! - `Slots` does not track if a pointer has already been freed; double-freeing will lead
//!   to circular references in the free-list and subsequent memory corruption.
//!
//! ### The Double-Free Problem
//! This module **does not** track whether a slot has already been freed.
//! Due to the intrusive nature of the freelist, pushing the same pointer twice
//! will corrupt the internal linked list by creating a cycle.
//!
//! **Prevention Strategy:**
//! Safety is intended to be enforced at a higher level via RAII (Resource Acquisition
//! Is Initialization). By wrapping the pointers returned by this module in a
//! smart pointer (like `SlabBox<T>`) that handles deallocation in its `Drop`
//! implementation, Rust's ownership system will compile-time-guarantee that
//! a double-free cannot occur.
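//!
//! A minimal sketch of that RAII wrapper (illustrative only; `SlabBox` is not defined in this module):
//!
//! ```ignore
//! pub struct SlabBox<'a, T> {
//!     ptr: NonNull<T>,
//!     slots: &'a mut Slots<T>,
//! }
//!
//! impl<T> Drop for SlabBox<'_, T> {
//!     fn drop(&mut self) {
//!         // The slot is returned exactly once, when the box goes out of scope,
//!         // so safe code cannot express a double-free.
//!         let _ = self.slots.try_push_slot(self.ptr);
//!     }
//! }
//! ```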
//!
//! ### Compile-Time Safety Tests
//!
//! The following code fails to compile because `u8` is smaller than `u16`,
//! and the intrusive list requires enough space in each slot to store a `u16` index.
//!
//! ```compile_fail, ignore
//! use attachable_slab_allocator::slots::Slots;
//! // This triggers the const assert: "Slots Size Is Too Small"
//! let _ = Slots::<u8>::type_validate();
//! ```
//!
//! The following code fails to compile if the alignment is incompatible
//! (example using a custom struct with `repr(packed)`):
//!
//! ```compile_fail, ignore
//! use attachable_slab_allocator::slots::Slots;
//! #[repr(packed)]
//! struct Tiny { a: u8 }
//! // Even if the size were okay, an alignment below that of u16 fails.
//! let _ = Slots::<Tiny>::type_validate();
//! ```
use crate::prelude::*;
use core::marker::PhantomData;
use core::ptr::NonNull;

type SlotIndex = u16;

#[derive(PartialEq)]
pub enum SlotsState {
    Free,
    Partial,
    Full,
}

#[derive(Debug)]
pub struct Slots<T> {
    pub buffer_ptr: NonNull<T>,
    pub total_slots: SlotIndex,
    pub free_slots: SlotIndex,
    pub free_count: SlotIndex,
    pub used_count: SlotIndex,

    _marker: PhantomData<T>,
}

impl<T> Slots<T> {
    const SLOT_SIZE: usize = const {
        let size: usize = size_of::<T>();
        Self::type_validate();
        size
    };
    const SLOT_ALIGNMENT: usize = align_of::<T>();

    /// Validates that type `T` meets the requirements of the intrusive freelist:
    /// `T` must be at least as large as `SlotIndex`, and its alignment must be compatible with it.
    /// Evaluated in a `const` context (via `SLOT_SIZE`), so violations are compile-time errors.
    pub const fn type_validate() {
        let type_size = size_of::<T>();
        let type_align = align_of::<T>();

        let base_size = size_of::<SlotIndex>();
        let base_align = align_of::<SlotIndex>();

        assert!(
            type_align.is_multiple_of(base_align),
            "Slot Type Alignment Must Be A Multiple Of u16 Alignment"
        );
        assert!(
            type_size >= base_size,
            "Slots Size Is Too Small \n Generic Slot Type Must Be At Least As Large As u16"
        );
    }

    /// Initializes a new `Slots` manager by aligning the provided buffer to the required type alignment.
    /// Calculates the maximum number of slots possible and sets up initial state with no pre-initialization.
    ///
    /// # Errors
    /// Returns `SlabError::OutOfMemory` if the provided buffer is too small to be
    /// properly aligned or cannot fit at least one slot of type `T`.
    pub fn new(buffer_ptr: NonNull<u8>, buffer_size: usize) -> Result<Self> {
        let (al_buffer_ptr, al_buffer_size) = buffer_ptr
            .align_up_buffer(Self::SLOT_ALIGNMENT, buffer_size)
            .ok_or(SlabError::OutOfMemory)?;

        let total_slots = Self::calculate_total_slots(al_buffer_size);

        (total_slots != 0).on_err(SlabError::OutOfMemory)?;

        let type_buffer_ptr = al_buffer_ptr.cast();

        Ok(Self {
            buffer_ptr: type_buffer_ptr,
            total_slots,
            free_slots: SlotIndex::MAX,
            free_count: 0,
            used_count: 0,
            //
            _marker: PhantomData,
        })
    }

    /// Determines the maximum number of slots of type `T` that can fit into the given buffer size.
    /// Caps the result at `SlotIndex::MAX - 1` to ensure the sentinel value remains reserved.
    /// Used during initialization to define the bounds of the managed memory segment.
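    ///
    /// For example, with 8-byte slots (`SLOT_SIZE == 8`):
    /// ```ignore
    /// calculate_total_slots(4096)        // == 512
    /// calculate_total_slots(usize::MAX)  // == u16::MAX - 1 (capped; u16::MAX is the sentinel)
    /// ```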
    pub const fn calculate_total_slots(buffer_size: usize) -> SlotIndex {
        let count = buffer_size / Self::SLOT_SIZE;
        // We must cap at MAX - 1 because MAX is our sentinel value
        if count >= SlotIndex::MAX as usize {
            SlotIndex::MAX - 1
        } else {
            count as SlotIndex
        }
    }

    /// Attempts to allocate a slot by either taking from the recycled freelist or carving a new one.
    /// Updates internal counters and shifts the freelist head if a recycled slot is used.
    /// Returns a pointer to the allocated memory or a `SlabError::OutOfMemory` if no slots are available.
    ///
    /// # Errors
    /// Returns `SlabError::OutOfMemory` if the segment is full (i.e., all available
    /// slots are currently allocated and no recycled slots exist in the freelist).
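    ///
    /// A usage sketch (illustrative only; `buffer_ptr` and `buffer_len` are assumed to come
    /// from the caller's memory provider — see this module's tests for a complete setup):
    /// ```ignore
    /// let mut slots = Slots::<u64>::new(buffer_ptr, buffer_len)?;
    /// let slot = slots.try_pop_slot()?;   // O(1): pop from the freelist or carve a new slot
    /// unsafe { slot.write(42) };          // the caller now owns this slot's memory
    /// slots.try_push_slot(slot)?;         // hand it back for reuse
    /// ```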
    #[inline]
    pub fn try_pop_slot(&mut self) -> Result<NonNull<T>> {
        let index = match self.free_slots {
            SlotIndex::MAX => {
                if self.used_count >= self.total_slots {
                    return Err(SlabError::OutOfMemory);
                }
                let index = self.used_count;
                self.used_count += 1;

                index
            }
            _ => {
                let index = self.free_slots;
                let next_index = self.read_from_index(index);
                self.free_slots = next_index;
                self.free_count -= 1;

                index
            }
        };

        self.check_index(index)?;
        Ok(self.index_to_ptr(index))
    }

    /// Returns a previously allocated slot to the manager, making it available for future use.
    /// Performs rigorous pointer validation to ensure the memory belongs to this slab and is correctly aligned.
    /// Links the slot into the intrusive freelist, potentially overwriting user data with metadata.
    ///
    /// # Errors
    /// Returns `SlabError::InvalidPointer` if the provided pointer is:
    /// - Outside the memory range managed by this segment.
    /// - Not aligned to the start of a valid slot boundary.
    #[inline]
    pub fn try_push_slot(&mut self, ptr: NonNull<T>) -> Result<()> {
        self.check_ptr(ptr)?;
        let index = self.ptr_to_index(ptr)?;
        self.check_index(index)?;

        let root_index = self.free_slots;
        self.free_slots = index;
        self.write_at_index(index, root_index);
        self.free_count += 1;

        Ok(())
    }

    /// Checks if there is at least one slot available for allocation in this segment.
    /// Evaluates both the recycled freelist and the uncarved portion of the buffer.
    /// Used to quickly determine if an allocation request can be satisfied.
    #[inline]
    #[allow(unused)]
    pub fn has_free(&self) -> bool {
        self.free_count > 0 || self.used_count < self.total_slots
    }

    /// Returns the current state of the segment (Free, Partial, or Full) based on occupancy.
    /// Helps higher-level managers decide which slab to prioritize for allocations or deallocations.
    /// Provides a high-level overview of the segment's utilization.
    #[inline]
    pub fn get_state(&self) -> SlotsState {
        let free_slots = self.get_free_slots();
        let total_slot = self.total_slots;

        if free_slots == 0 {
            SlotsState::Full
        } else if free_slots == total_slot {
            SlotsState::Free
        } else {
            SlotsState::Partial
        }
    }

    /// Calculates the total number of currently available slots in this segment.
    /// Combines the count of recycled slots with the remaining uncarved memory space.
    /// Useful for monitoring and reporting memory usage statistics.
    #[inline]
    pub fn get_free_slots(&self) -> SlotIndex {
        (self.total_slots - self.used_count) + self.free_count
    }

    /// Returns the total capacity of the segment in terms of individual slots.
    /// This value is determined at creation time and remains constant throughout the slab's life.
    /// Provides a baseline for calculating utilization percentages.
    #[inline]
    #[allow(unused)]
    pub fn get_slot_count(&self) -> SlotIndex {
        self.total_slots
    }

    /// Writes a `SlotIndex` value into the memory location designated by the provided index.
    /// This is used to maintain the intrusive linked list by storing "next" pointers in free slots.
    /// Internal helper that performs direct pointer writes; must be used with valid indices.
    #[inline]
    fn write_at_index(&mut self, index: SlotIndex, value: SlotIndex) {
        unsafe {
            self.buffer_ptr
                .add(index.into())
                .cast::<SlotIndex>()
                .write(value);
        }
    }

    /// Reads a `SlotIndex` value from the memory location designated by the provided index.
    /// Allows traversing the intrusive freelist to find the next available recycled slot.
    /// Internal helper that performs direct pointer reads; must be used with valid indices.
    #[inline]
    fn read_from_index(&self, index: SlotIndex) -> SlotIndex {
        unsafe {
            self.buffer_ptr.add(index.into()).cast::<SlotIndex>().read()
        }
    }

    /// Converts a raw pointer back into its corresponding numerical slot index.
    /// Includes validation to ensure the pointer sits exactly on a slot boundary.
    /// Essential for identifying which slot is being returned during a push operation.
    ///
    /// # Errors
    /// Returns `SlabError::InvalidPointer` if the pointer's offset from the base
    /// buffer is not a valid multiple of the slot size or exceeds the buffer bounds.
    #[inline]
    fn ptr_to_index(&self, ptr: NonNull<T>) -> Result<SlotIndex> {
        let offset = unsafe { ptr.offset_from(self.buffer_ptr) };

        let index = offset.try_into().map_err(|_| SlabError::InvalidPointer)?;

        self.check_index(index)?;

        Ok(index)
    }

    /// Computes the memory address of a slot given its numerical index.
    /// Uses fast pointer arithmetic based on the base buffer address and fixed slot size.
    /// Internal helper used to generate the return values for pop operations.
    #[inline]
    fn index_to_ptr(&self, index: SlotIndex) -> NonNull<T> {
        // SAFETY: `index` is validated by callers (via `check_index`) before conversion.
        unsafe { self.buffer_ptr.add(index.into()) }
    }

    /// Validates that a given `SlotIndex` falls within the range of total slots in this segment.
    /// Prevents out-of-bounds memory access during internal operations.
    /// Used as a safety guard before performing pointer arithmetic or reads/writes.
    ///
    /// # Errors
    /// Returns `SlabError::InvalidPointer` if the index is greater than or equal
    /// to the total number of slots managed by this segment.
    #[inline]
    fn check_index(&self, index: SlotIndex) -> Result<()> {
        (index < self.total_slots).as_result((), SlabError::InvalidPointer)
    }

    /// Performs a comprehensive check to verify that a pointer is valid for this segment.
    /// Ensures the pointer is within bounds and aligned precisely to the start of a slot.
    /// Protects against memory corruption from invalid or misaligned deallocation requests.
    ///
    /// # Errors
    /// Returns `SlabError::InvalidPointer` if the pointer is null, out of bounds,
    /// or not aligned to the size of type `T`.
    #[inline]
    fn check_ptr(&self, ptr: NonNull<T>) -> Result<()> {
        let address = ptr.as_address();
        let base_address = self.buffer_ptr.as_address();

        let buffer_bound_address = base_address + (self.total_slots as usize * Self::SLOT_SIZE);

        // 1. Range check first, so the offset subtraction below cannot underflow.
        let valid_range = address >= base_address && address < buffer_bound_address;
        valid_range.on_err(SlabError::InvalidPointer)?;

        // 2. The pointer must sit exactly on a slot boundary, i.e. the offset must be a
        //    multiple of SLOT_SIZE (`is_aligned_to` also handles non-power-of-two sizes).
        let offset = address - base_address;
        offset
            .is_aligned_to(Self::SLOT_SIZE)
            .on_err(SlabError::InvalidPointer)
    }
}

#[cfg(test)]
mod tests {
    extern crate std;
    use bolero::AnySliceMutExt;
    use std::vec::Vec;

    use super::*;

    use std::alloc::{Layout, alloc_zeroed, dealloc};

    use std::ptr::NonNull;

    use crate::slots::Slots;

    #[test]
    fn slab_container_test() {
        type SlotType = usize;
        let alignment = align_of::<SlotType>();

        const BUF_SZ: usize = size_of::<SlotType>() * 700;

        let layout = Layout::from_size_align(BUF_SZ, alignment).expect("memory layout failed");
        let ptr = unsafe { alloc_zeroed(layout) };

        let buffer_ptr = NonNull::new(ptr).unwrap().cast::<SlotType>();
        let unsafe_buffer_ptr = unsafe { buffer_ptr.cast::<u8>().add(1) };

        Slots::<SlotType>::new(unsafe_buffer_ptr.cast(), 1).expect_err("Slots::new failed");

        let mut slots = Slots::<SlotType>::new(buffer_ptr.cast(), BUF_SZ).unwrap();

        assert!(
            slots.check_ptr(buffer_ptr).is_ok(),
            "Slots::check_ptr failed 0"
        );
        assert!(
            slots.check_ptr(unsafe_buffer_ptr.cast()).is_err(),
            "Slots::check_ptr failed 1"
        );

        let slots_count = Slots::<SlotType>::calculate_total_slots(BUF_SZ);

        let mut v = Vec::new();

        // println!("{}", slots.get_free_slots());
        let mut count = 0;
        while let Ok(l) = slots.try_pop_slot() {
            count += 1;
            v.push(l);
        }

        assert_eq!(count, 700, "Slots buffer allocation failed");
        // println!("{}", slots.get_free_slots());
        v.shuffle();

        assert!(slots.get_free_slots() == 0, "Slots::get_free_slots failed");

        for ptr in v.iter() {
            let unsafe_ptr = unsafe { ptr.cast::<u8>().add(1) };

            slots
                .try_push_slot(unsafe_ptr.cast())
                .expect_err("Slots::try_push_slot failed 0");

            slots
                .try_push_slot(*ptr)
                .expect("Slots::try_push_slot failed 1");
        }

        assert!(slots.has_free(), "Slots::has_free failed");

        assert!(
            slots.get_free_slots() == slots_count,
            "Slots::get_free_slots failed"
        );
        // println!("{}", slots.get_free_slots());
        let mut allocated_total: u16 = 0u16;

        while let Ok(_) = slots.try_pop_slot() {
            allocated_total += 1;
        }

        assert!(
            allocated_total == slots_count,
            "Slots::try_pop_slot failed 1"
        );

        unsafe {
            dealloc(ptr, layout);
        }
    }

    #[test]
    #[cfg(not(miri))]
    fn pressure_test() {
        type SlotType = usize;
        let alignment = align_of::<SlotType>();

        const BUF_SZ: usize = (size_of::<SlotType>() * 200) + 1;

        let layout = Layout::from_size_align(BUF_SZ, alignment).expect("memory layout failed");
        let ptr = unsafe { alloc_zeroed(layout) };

        let buffer_ptr = NonNull::new(ptr).unwrap();

        let mut slots = Slots::<SlotType>::new(buffer_ptr, BUF_SZ).expect("Slots::new failed 0");

        #[derive(Debug, bolero::TypeGenerator)]
        enum Op {
            PUSH,
            POP,
        }

        bolero::check!()
            .with_type::<(usize, Vec<Op>)>()
            .for_each(|(rand, ops)| {
                let mut holder_v = Vec::with_capacity(ops.len());

                for op in ops {
                    match op {
                        Op::PUSH => {
                            if let Ok(ptr) = slots.try_pop_slot() {
                                let address_c = ptr.as_address() ^ rand;
                                unsafe { ptr.cast::<usize>().write(address_c) }
                                holder_v.push(ptr);
                            }
                        }
                        Op::POP => {
                            if let Some(ptr) = holder_v.pop() {
                                let address_c = ptr.as_address() ^ rand;
                                let cookie = unsafe { ptr.cast::<usize>().read() };
                                assert_eq!(address_c, cookie, "data corrupted");
                                slots
                                    .try_push_slot(ptr)
                                    .expect("Slots::try_push_slot failed 0");
                            }
                        }
                    }
                }

                while let Some(ptr) = holder_v.pop() {
                    let address_c = ptr.as_address() ^ rand;
                    let cookie = unsafe { ptr.cast::<usize>().read() };
                    assert_eq!(address_c, cookie, "data corrupted");
                    slots
                        .try_push_slot(ptr)
                        .expect("Slots::try_push_slot failed 0");
                }
            });

        unsafe {
            dealloc(ptr, layout);
        }
    }
}