Struct static_alloc::bump::Bump

pub struct Bump<T> { /* private fields */ }

Allocator drawing from an inner, statically sized memory resource.

The type parameter T is used only to annotate the required size and alignment of the region and has no further use. Note that in particular there is no safe way to retrieve or unwrap an inner instance even if the Bump was not constructed as a shared global static. Nevertheless, the choice of type makes it easier to reason about potentially required extra space due to alignment padding.

This type is always Sync to allow creating static instances. This works only because there is no actual instance of T contained inside.

§Usage as global allocator

You can use the stable rust attribute to use an instance of this type as the global allocator.

use static_alloc::Bump;

#[global_allocator]
static A: Bump<[u8; 1 << 16]> = Bump::uninit();

fn main() { }

Take care, some runtime features of Rust will allocate some memory before or after your own code. In particular, it was found to be tricky to predict the usage of the builtin test framework, which seemingly allocates some structures per test.

§Usage as a non-dropping local allocator

It is also possible to use a Bump as a stack local allocator or a specialized allocator. The interface offers some utilities for allocating values from references to shared or unshared instances directly. Note: this will never call the Drop implementation of the allocated type. In particular, it would almost surely not be safe to Pin the values, unless there is a guarantee that the Bump itself is not deallocated either.

use static_alloc::Bump;

let local: Bump<[u64; 3]> = Bump::uninit();

let one = local.leak(0_u64).unwrap();
let two = local.leak(1_u64).unwrap();
let three = local.leak(2_u64).unwrap();

// Exhausted the space.
assert!(local.leak(3_u64).is_err());

Mind that the supplied type parameter influences both size and alignment, and a [u8; 24] does not guarantee being able to allocate three u64 even though most targets have a minimum alignment requirement of 16 and it works fine on those.

// Just enough space for `u128` but no alignment requirement.
let local: Bump<[u8; 16]> = Bump::uninit();

// May or may not return an err.
let _ = local.leak(0_u128);

Instead use the type parameter to Bump as a hint for the best alignment.

// Enough space and align for `u128`.
let local: Bump<[u128; 1]> = Bump::uninit();

assert!(local.leak(0_u128).is_ok());

§Usage as a (local) bag of bits

It is of course entirely possible to use a local instance instead of a single global allocator. For example you could utilize the pointer interface directly to build a #[no_std] dynamic data structure in an environment without the external alloc crate. This feature was the original motivation behind the crate but no such data structures are provided here, so a quick sketch of the idea must do:

use core::alloc;
use static_alloc::Bump;

#[repr(align(4096))]
struct PageTable {
    // some non-trivial type.
}

impl PageTable {
    /// Avoid stack allocation of the full struct.
    pub unsafe fn new(into: *mut u8) -> &'static mut Self {
        // ...
    }
}

// Allocator for pages for page tables. Provides 64 pages. When the
// program/kernel is provided as an ELF the bootloader reserves
// memory for us as part of the loading process that we can use
// purely for page tables. Replaces asm `paging: .BYTE <size>;`
static Paging: Bump<[u8; 1 << 18]> = Bump::uninit();

fn main() {
    let layout = alloc::Layout::new::<PageTable>();
    let memory = Paging.alloc(layout).unwrap();
    let table = unsafe {
        PageTable::new(memory.as_ptr())
    };
}

A similar structure would of course work to allocate some non-'static objects from a temporary Bump.

§More insights

The ordering used is currently SeqCst. This enforces a single global sequence of observed effects on the slab level. The author is fully aware that this is not strictly necessary. In fact, even AcqRel may not be required as the monotonic bump allocator does not synchronize other memory itself. If you bring forward a PR with a formalized reasoning for relaxing the requirements to Relaxed (llvm Monotonic) it will be greatly appreciated (even more if you demonstrate performance gains).

WIP: slices.

Implementations§


impl<T> Bump<T>


pub const fn uninit() -> Self

Make a new allocatable slab of certain byte size and alignment.

The storage will contain uninitialized bytes.
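For instance, a minimal sketch of a stack-local allocator (assuming nothing beyond the constructors and leak documented on this page):

use static_alloc::Bump;

// Room and alignment for four `u32` values, left uninitialized.
let local: Bump<[u32; 4]> = Bump::uninit();
assert!(local.leak(0_u32).is_ok());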


pub fn zeroed() -> Self

Make a new allocatable slab of certain byte size and alignment.

The storage will contain zeroed bytes. This is not yet available as a const fn, which currently limits its potential usefulness, but there is no good reason not to provide it regardless.
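A short sketch; since the constructor is not const it is shown here as a local rather than a static:

use static_alloc::Bump;

// Same as `uninit`, but the backing bytes start out as zero.
let local: Bump<[u8; 32]> = Bump::zeroed();
assert!(local.leak(0_u8).is_ok());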


pub const fn new(storage: T) -> Self

Make a new allocatable slab provided with some bytes it can hand out.

Note that storage will never be dropped and there is no way to get it back.
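A minimal sketch of moving the storage in explicitly; since the constructor is a const fn it also works for a static:

use static_alloc::Bump;

// The array is moved into the allocator and can never be recovered.
static BUMP: Bump<[u8; 64]> = Bump::new([0u8; 64]);

fn main() {
    assert!(BUMP.leak(1_u8).is_ok());
}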


pub fn reset(&mut self)

Reset the bump allocator.

Requires a mutable reference, as no allocations can be active when doing it. This behaves as if a fresh instance was assigned but it does not overwrite the bytes in the backing storage. (You can unsafely rely on this).

§Usage
use static_alloc::Bump;

let mut stack_buf = Bump::<usize>::uninit();

let bytes = stack_buf.leak(0usize.to_be_bytes()).unwrap();
// Now the bump allocator is full.
assert!(stack_buf.leak(0u8).is_err());

// We can reuse if we are okay with forgetting the previous value.
stack_buf.reset();
let val = stack_buf.leak(0usize).unwrap();

Trying to use the previous value does not work, as the Bump is still borrowed. Note that any user unsafely tracking the lifetime themselves must also ensure this through proper lifetimes that guarantee the borrows are alive for the appropriate time.

// error[E0502]: cannot borrow `stack_buf` as mutable because it is also borrowed as immutable
let mut stack_buf = Bump::<usize>::uninit();

let bytes = stack_buf.leak(0usize).unwrap();
//          --------- immutably borrow occurs here
stack_buf.reset();
// ^^^^^^^ mutable borrow occurs here.
let other = stack_buf.leak(0usize).unwrap();

*bytes += *other;
// ------------- immutable borrow later used here

pub fn alloc(&self, layout: Layout) -> Option<NonNull<u8>>

Allocate a region of memory.

This is a safe alternative to GlobalAlloc::alloc.

§Panics

This function will panic if the requested layout has a size of 0. For use in a GlobalAlloc this is explicitly forbidden to request and would allow any behaviour, but we instead check it strictly.
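A minimal sketch of the safe allocation path (the layout and assertions are illustrative only):

use core::alloc::Layout;
use static_alloc::Bump;

let local: Bump<[u8; 64]> = Bump::uninit();

// A non-zero-size layout; a zero-size request would panic as described above.
let layout = Layout::new::<u32>();
let ptr = local.alloc(layout).expect("enough room for a u32");
// The returned pointer satisfies the requested alignment.
assert_eq!(ptr.as_ptr() as usize % layout.align(), 0);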


pub fn alloc_at( &self, layout: Layout, level: Level ) -> Result<Allocation<'_>, Failure>

Try to allocate some layout with a precise base location.

The base location is the currently consumed byte count, without correction for the alignment of the allocation. This will succeed if it can be allocated exactly at the expected location.

§Panics

This function may panic if the provided level is from a different slab.
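A sketch of pairing level with alloc_at, assuming a stale level from the same slab is reported as an error rather than a panic:

use core::alloc::Layout;
use static_alloc::Bump;

let local: Bump<[u32; 4]> = Bump::uninit();
let layout = Layout::new::<u32>();

// Request the allocation exactly at the currently observed level.
let level = local.level();
assert!(local.alloc_at(layout, level).is_ok());
// The now outdated level no longer matches the consumed byte count.
assert!(local.alloc_at(layout, level).is_err());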


pub fn get_layout(&self, layout: Layout) -> Option<Allocation<'_>>

Get an allocation with detailed layout.

Provides an Uninit wrapping several aspects of initialization in a safe interface, bound by the lifetime of the reference to the allocator.
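For instance, a minimal sketch that only checks the request succeeds:

use core::alloc::Layout;
use static_alloc::Bump;

let local: Bump<[u8; 64]> = Bump::uninit();

// A successful request hands back an allocation bound to `local`'s lifetime.
assert!(local.get_layout(Layout::new::<u64>()).is_some());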


pub fn get_layout_at( &self, layout: Layout, at: Level ) -> Result<Allocation<'_>, Failure>

Get an allocation with detailed layout at a specific level.

Provides an Uninit wrapping several aspects of initialization in a safe interface, bound by the lifetime of the reference to the allocator.

Since the underlying allocation is the same, it would be unsafe but justified to fuse this allocation with the preceding or succeeding one.
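A short sketch combining a freshly observed level with a detailed layout request (illustrative only):

use core::alloc::Layout;
use static_alloc::Bump;

let local: Bump<[u32; 4]> = Bump::uninit();

// Place the allocation exactly at the level we just observed.
let level = local.level();
assert!(local.get_layout_at(Layout::new::<u32>(), level).is_ok());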


pub fn get<V>(&self) -> Option<Allocation<'_, V>>

Get an allocation for a specific type.

It is not yet initialized but provides a safe interface for that initialization.

§Usage
use core::cell::{Ref, RefCell};
use static_alloc::Bump;

let slab: Bump<[Ref<'static, usize>; 1]> = Bump::uninit();
let data = RefCell::new(0xff);

// We can place a `Ref` here but we did not yet.
let alloc = slab.get::<Ref<usize>>().unwrap();
let cell_ref = unsafe {
    alloc.leak(data.borrow())
};

assert_eq!(**cell_ref, 0xff);

pub fn get_at<V>(&self, level: Level) -> Result<Allocation<'_, V>, Failure>

Get an allocation for a specific type at a specific level.

See get for usage.


pub fn leak_box<V>(&self, val: V) -> Option<LeakBox<'_, V>>

Move a value into an owned allocation.

For safely initializing a value after a successful allocation, see LeakBox::write.

§Usage

This can be used to push the value into a caller provided stack buffer where it lives longer than the current stack frame. For example, you might create a linked list with a dynamic number of values living in the frame below while still being dropped properly. This is impossible to do with a return value.

use static_alloc::Bump;
use static_alloc::leaked::LeakBox;

fn rand() -> usize { 4 }

enum Chain<'buf, T> {
   Tail,
   Link(T, LeakBox<'buf, Self>),
}

fn make_chain<Buf, T>(buf: &Bump<Buf>, mut new_node: impl FnMut() -> T)
    -> Option<Chain<'_, T>>
{
    let count = rand();
    let mut chain = Chain::Tail;
    for _ in 0..count {
        let node = new_node();
        chain = Chain::Link(node, buf.leak_box(chain)?);
    }
    Some(chain)
}

struct Node (usize);
impl Drop for Node {
    fn drop(&mut self) {
        println!("Dropped {}", self.0);
    }
}
let mut counter = 0..;
let new_node = || Node(counter.next().unwrap());

let buffer: Bump<[u8; 128]> = Bump::uninit();
let head = make_chain(&buffer, new_node).unwrap();

// Prints the message in reverse order.
// Dropped 3
// Dropped 2
// Dropped 1
// Dropped 0
drop(head);

pub fn leak_box_at<V>( &self, val: V, level: Level ) -> Result<LeakBox<'_, V>, Failure>

Move a value into an owned allocation.

See leak_box for usage.


pub fn level(&self) -> Level

Observe the current level.

Keep in mind that concurrent usage of the same slab may modify the level before you are able to use it in alloc_at. Calling this method also provides no other guarantees on synchronization of memory accesses, only that the values observed by the caller form a monotonically increasing sequence while a shared reference exists.
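A small sketch of feeding the observed level into a placed allocation (using get_at, documented above, and assuming a stale level is rejected with an error):

use static_alloc::Bump;

let local: Bump<[u32; 4]> = Bump::uninit();

// Capture a level and use it to place the very next allocation.
let level = local.level();
assert!(local.get_at::<u32>(level).is_ok());
// The captured level is stale after the allocation above and is rejected.
assert!(local.get_at::<u32>(level).is_err());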


pub fn leak<V>(&self, val: V) -> Result<&mut V, LeakError<V>>

Allocate a value for the lifetime of the allocator.

The value is leaked in the sense that

  1. the drop implementation of the allocated value is never called;
  2. reusing the memory for another allocation in the same Bump requires manual unsafe code to handle dropping and reinitialization.

However, it does not mean that the underlying memory used for the allocated value is never reclaimed. If the Bump itself is a stack value then it will get reclaimed together with it.

§Safety notice

It is important to understand that it is undefined behaviour to reuse the allocation for the whole lifetime of the returned reference. That is, dropping the allocation in-place while the reference is still within its lifetime comes with the exact same unsafety caveats as ManuallyDrop::drop.

use static_alloc::Bump;

#[derive(Debug, Default)]
struct FooBar {
    // ...
}

let local: Bump<[FooBar; 3]> = Bump::uninit();
let one = local.leak(FooBar::default()).unwrap();

// Dangerous but justifiable.
let one = unsafe {
    // Ensures there is no current mutable borrow.
    core::ptr::drop_in_place(&mut *one);
};
§Usage
use static_alloc::Bump;

let local: Bump<[u64; 3]> = Bump::uninit();

let one = local.leak(0_u64).unwrap();
assert_eq!(*one, 0);
*one = 42;
§Limitations

Only sized values can be allocated in this manner for now; unsized values are blocked on stabilization of ptr::slice_from_raw_parts. We can not otherwise get a fat pointer to the allocated region.

TODO: will be deprecated sooner or later in favor of a method that does not move the resource on failure.


pub fn leak_at<V>( &self, val: V, level: Level ) -> Result<(&mut V, Level), LeakError<V>>

Allocate a value with a precise location.

See leak for basics on allocation of values.

The level is an identifier for a base location (more at level). This will succeed if it can be allocated exactly at the expected location.

This method will return the new level of the slab allocator. A next allocation at the returned level will be placed next to this allocation, only separated by necessary padding from alignment. In particular, this is the same strategy as applied for the placement of #[repr(C)] struct members. (Except for the final padding at the last member to the full struct alignment.)

§Usage
use static_alloc::Bump;

let local: Bump<[u64; 3]> = Bump::uninit();

let base = local.level();
let (one, level) = local.leak_at(1_u64, base).unwrap();
// Will panic when an allocation happens in between.
let (two, _) = local.leak_at(2_u64, level).unwrap();

assert_eq!((one as *const u64).wrapping_offset(1), two as *const u64);

TODO: will be deprecated sooner or later in favor of a method that does not move the resource on failure.

Trait Implementations§


impl<T> GlobalAlloc for Bump<T>


unsafe fn alloc(&self, layout: Layout) -> *mut u8

Allocate memory as described by the given layout. Read more

unsafe fn realloc( &self, ptr: *mut u8, current: Layout, new_size: usize ) -> *mut u8

Shrink or grow a block of memory to the given new_size in bytes. The block is described by the given ptr pointer and layout. Read more

unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout)

Deallocate the block of memory at the given ptr pointer with the given layout. Read more

unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8

Behaves like alloc, but also ensures that the contents are set to zero before being returned. Read more

impl<'alloc, T> LocalAlloc<'alloc> for Bump<T>


unsafe fn realloc( &'alloc self, alloc: Allocation<'alloc>, layout: NonZeroLayout ) -> Option<Allocation<'alloc>>

Reallocates if the layout is strictly smaller and the allocation aligned.

Note that this may succeed spuriously if the previous allocation is incidentally aligned to a larger alignment than had been requested.

Also note, reallocating to a smaller layout is NOT useless.

It confirms that this allocator does not need the allocated layout to re/deallocate. Otherwise, even reallocating to a strictly smaller layout would be impossible without storing the prior layout.


fn alloc(&'alloc self, layout: NonZeroLayout) -> Option<Allocation<'alloc>>

Allocate one block of memory. Read more

unsafe fn dealloc(&'alloc self, _: Allocation<'alloc>)

Deallocate a block previously allocated. Read more

fn alloc_zeroed( &'alloc self, layout: NonZeroLayout ) -> Option<Allocation<'alloc>>

Allocate a block of memory initialized with zeros. Read more

impl<T> Sync for Bump<T>

Auto Trait Implementations§


impl<T> !RefUnwindSafe for Bump<T>


impl<T> Send for Bump<T>
where T: Send,


impl<T> Unpin for Bump<T>
where T: Unpin,


impl<T> UnwindSafe for Bump<T>
where T: UnwindSafe,

Blanket Implementations§


impl<T> Any for T
where T: 'static + ?Sized,


fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,


fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,


fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T


fn from(t: T) -> T

Returns the argument unchanged.


impl<T, U> Into<U> for T
where U: From<T>,


fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.


impl<T, U> TryFrom<U> for T
where U: Into<T>,


type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,


type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.