pub struct Tlsf<'pool, FLBitmap, SLBitmap, const FLLEN: usize, const SLLEN: usize> { /* private fields */ }
The TLSF header (top-level) data structure.
Data Structure Overview
Properties
- The allocation granularity (GRANULARITY) is size_of::<usize>() * 4 bytes, which is the minimum size of a free block.
- The maximum block size is (GRANULARITY << FLLEN) - GRANULARITY.
Implementations
impl<'pool, FLBitmap: BinInteger, SLBitmap: BinInteger, const FLLEN: usize, const SLLEN: usize> Tlsf<'pool, FLBitmap, SLBitmap, FLLEN, SLLEN>
pub unsafe fn insert_free_block_ptr(
    &mut self,
    block: NonNull<[u8]>
) -> Option<NonZeroUsize>
Create a new memory pool at the location specified by a slice pointer.
Returns the actual number of bytes (counted from the beginning of block) used to create the memory pool. This value is necessary to calculate the start address to pass to Self::append_free_block_ptr.
This method does nothing and returns None if the given memory block is too small.
Time Complexity
This method will complete in linear time (O(block.len())) because it might need to divide the memory block to meet the maximum block size requirement ((GRANULARITY << FLLEN) - GRANULARITY).
Examples
use rlsf::Tlsf;
use std::{mem::MaybeUninit, ptr::NonNull};
static mut POOL: MaybeUninit<[u8; 1024]> = MaybeUninit::uninit();
let mut tlsf: Tlsf<u8, u8, 8, 8> = Tlsf::new();
unsafe {
tlsf.insert_free_block_ptr(NonNull::new(POOL.as_mut_ptr()).unwrap());
}
Safety
The memory block will be considered owned by self. The memory block must outlive self.
Panics
This method never panics.
pub unsafe fn append_free_block_ptr(&mut self, block: NonNull<[u8]>) -> usize
Extend an existing memory pool by incorporating the specified memory block.
Returns the number of incorporated bytes, counted from the beginning of block.
In the current implementation, this method can coalesce memory pools only if the maximum pool size is outside the range of usize, i.e., log2(GRANULARITY) + FLLEN >= usize::BITS. This is because it does not track each pool’s size and cannot check whether the resulting pool will have a valid size.
Time Complexity
This method will complete in linear time (O(block.len())) because it might need to divide the memory block to meet the maximum block size requirement ((GRANULARITY << FLLEN) - GRANULARITY).
Examples
use rlsf::Tlsf;
use std::{mem::MaybeUninit, ptr::NonNull};
static mut POOL: MaybeUninit<[u8; 1024]> = MaybeUninit::uninit();
let mut cursor = unsafe { POOL.as_mut_ptr() } as *mut u8;
let mut remaining_len = 1024;
let mut tlsf: Tlsf<u8, u8, 8, 8> = Tlsf::new();
let pool0_len = unsafe {
tlsf.insert_free_block_ptr(nonnull_slice_from_raw_parts(
NonNull::new(cursor).unwrap(), remaining_len / 2))
}.unwrap().get();
cursor = cursor.wrapping_add(pool0_len);
remaining_len -= pool0_len;
let pool1_len = unsafe {
tlsf.append_free_block_ptr(nonnull_slice_from_raw_parts(
NonNull::new(cursor).unwrap(), remaining_len / 2))
};
cursor = cursor.wrapping_add(pool1_len);
remaining_len -= pool1_len;
let pool2_len = unsafe {
tlsf.append_free_block_ptr(nonnull_slice_from_raw_parts(
NonNull::new(cursor).unwrap(), remaining_len))
};
cursor = cursor.wrapping_add(pool2_len);
remaining_len -= pool2_len;
// polyfill for <https://github.com/rust-lang/rust/issues/71941>
fn nonnull_slice_from_raw_parts<T>(ptr: NonNull<T>, len: usize) -> NonNull<[T]> {
NonNull::new(std::ptr::slice_from_raw_parts_mut(ptr.as_ptr(), len)).unwrap()
}
Safety
The memory block will be considered owned by self. The memory block must outlive self.
block’s starting address must match an existing memory pool’s ending address. See the above example for how to obtain one.
Panics
This method never panics.
pub fn insert_free_block(
    &mut self,
    block: &'pool mut [MaybeUninit<u8>]
) -> impl Send + Sync
Create a new memory pool at the location specified by a slice.
This method does nothing if the given memory block is too small.
(The return type is yet to be determined.)
Time Complexity
See Self::insert_free_block_ptr.
Examples
use rlsf::Tlsf;
use std::mem::MaybeUninit;
let mut pool = [MaybeUninit::uninit(); 1024];
let mut tlsf: Tlsf<u8, u8, 8, 8> = Tlsf::new();
tlsf.insert_free_block(&mut pool);
The inserted memory block must outlive self:
use rlsf::Tlsf;
use std::mem::MaybeUninit;
let mut tlsf: Tlsf<u8, u8, 8, 8> = Tlsf::new();
let mut pool = [MaybeUninit::uninit(); 1024];
tlsf.insert_free_block(&mut pool);
drop(pool); // dropping the memory block first is not allowed
drop(tlsf);
Panics
This method never panics.
pub fn allocate(&mut self, layout: Layout) -> Option<NonNull<u8>>
Attempt to allocate a block of memory.
Returns the starting address of the allocated memory block on success; None otherwise.
Time Complexity
This method will complete in constant time.
pub unsafe fn deallocate(&mut self, ptr: NonNull<u8>, align: usize)
Deallocate a previously allocated memory block.
Time Complexity
This method will complete in constant time.
Safety
- ptr must denote a memory block previously allocated via self.
- The memory block must have been allocated with the same alignment (Layout::align) as align.
pub unsafe fn allocation_usable_size(ptr: NonNull<u8>) -> usize
Available on crate feature unstable only.
Get the actual usable size of a previously allocated memory block.
Safety
- ptr must denote a memory block previously allocated via some instance of Self.
- The call must happen-before the deallocation or reallocation of the memory block.
pub unsafe fn reallocate(
    &mut self,
    ptr: NonNull<u8>,
    new_layout: Layout
) -> Option<NonNull<u8>>
Shrink or grow a previously allocated memory block.
Returns the new starting address of the memory block on success; None otherwise.
Time Complexity
Unlike other methods, this method will complete in linear time (O(old_size)).
Safety
- ptr must denote a memory block previously allocated via self.
- The memory block must have been allocated with the same alignment (Layout::align) as new_layout.
pub unsafe fn iter_blocks(
    &self,
    pool: NonNull<[u8]>
) -> impl Iterator<Item = BlockInfo<'_>> + Send + '_
Available on crate feature unstable only.
Enumerate memory blocks in the specified memory pool.
Safety
pool must precisely represent a memory pool that belongs to self. Specifically, its starting address must be the one that was previously passed to Self::insert_free_block_ptr, and its length must be the sum of the return values of that call to insert_free_block_ptr and all subsequent calls to Self::append_free_block_ptr that have been made to expand this memory pool.
Examples
use rlsf::Tlsf;
use std::{mem::MaybeUninit, alloc::Layout, ptr::{NonNull, slice_from_raw_parts_mut}};
static mut POOL: MaybeUninit<[u8; 1024]> = MaybeUninit::uninit();
let pool_ptr = NonNull::new(unsafe { POOL.as_mut_ptr() }).unwrap();
let mut tlsf: Tlsf<'_, u16, u16, 12, 16> = Tlsf::new();
// Insert a memory pool. We need to remember the actual pool size
// to safely use `Tlsf::iter_blocks` later.
let pool_len = unsafe { tlsf.insert_free_block_ptr(pool_ptr) }.unwrap().get();
let pool_ptr = NonNull::new(
slice_from_raw_parts_mut(pool_ptr.as_ptr() as *mut u8, pool_len)
).unwrap();
// A closure to calculate the remaining free space
let free_bytes = |p: &Tlsf<'_, _, _, 12, 16>| unsafe { p.iter_blocks(pool_ptr) }
.filter(|block_info| !block_info.is_occupied())
.map(|block_info| block_info.max_payload_size())
.sum::<usize>();
// Allocate memory
let free_bytes1 = dbg!(free_bytes(&tlsf));
tlsf.allocate(Layout::new::<u64>()).unwrap();
let free_bytes2 = dbg!(free_bytes(&tlsf));
// Since we have allocated memory, we should have less free space now
assert!(free_bytes2 < free_bytes1);