//! A heap allocator for Cortex-M processors
//!
//! # Example
//!
//! ```
//! #![feature(alloc)]
//! #![feature(global_allocator)]
//! #![feature(lang_items)]
//!
//! // Plug in the allocator crate
//! extern crate alloc;
//! extern crate alloc_cortex_m;
//! #[macro_use]
//! extern crate cortex_m_rt as rt; // v0.5.x
//!
//! use alloc::vec::Vec;
//! use alloc_cortex_m::CortexMHeap;
//!
//! #[global_allocator]
//! static ALLOCATOR: CortexMHeap = CortexMHeap::empty();
//!
//! entry!(main);
//!
//! fn main() -> ! {
//!     // Initialize the allocator BEFORE you use it
//!     let start = rt::heap_start() as usize;
//!     let size = 1024; // in bytes
//!     unsafe { ALLOCATOR.init(start, size) }
//!
//!     let mut xs = Vec::new();
//!     xs.push(1);
//!
//!     loop { /* .. */ }
//! }
//!
//! // Required: define how Out Of Memory (OOM) conditions should be handled
//! // *if* no other crate has already defined `oom`
//! #[lang = "oom"]
//! #[no_mangle]
//! pub fn rust_oom() -> ! {
//!     // ..
//! }
//!
//! // Omitted: exception handlers
//! ```

#![feature(alloc)]
#![feature(allocator_api)]
#![feature(const_fn)]
#![no_std]

extern crate alloc;
extern crate cortex_m;
extern crate linked_list_allocator;

use core::alloc::{GlobalAlloc, Layout, Opaque};
use core::ptr::NonNull;

use cortex_m::interrupt::Mutex;
use linked_list_allocator::Heap;

pub struct CortexMHeap {
    heap: Mutex<Heap>,
}

impl CortexMHeap {
    /// Creates a new UNINITIALIZED heap allocator
    ///
    /// You must initialize this heap using the
    /// [`init`](struct.CortexMHeap.html#method.init) method before using the allocator.
    pub const fn empty() -> CortexMHeap {
        CortexMHeap {
            heap: Mutex::new(Heap::empty()),
        }
    }

    /// Initializes the heap
    ///
    /// This function must be called BEFORE you run any code that makes use of the
    /// allocator.
    ///
    /// `start_addr` is the address where the heap will be located.
    ///
    /// `size` is the size of the heap in bytes.
    ///
    /// Note that:
    ///
    /// - The heap grows "upwards", towards larger addresses. Thus `end_addr`
    ///   (`start_addr + size`) must be larger than `start_addr`.
    ///
    /// - The size of the heap is `(end_addr as usize) - (start_addr as usize)`. The
    ///   allocator won't use the byte at `end_addr`.
    ///
    /// # Unsafety
    ///
    /// Obey these or Bad Stuff will happen.
    ///
    /// - This function must be called exactly ONCE.
    /// - `size > 0`
    pub unsafe fn init(&self, start_addr: usize, size: usize) {
        self.heap.lock(|heap| heap.init(start_addr, size));
    }
}

unsafe impl GlobalAlloc for CortexMHeap {
    unsafe fn alloc(&self, layout: Layout) -> *mut Opaque {
        // On success, return the allocation's pointer; on OOM, return a null
        // pointer so the caller (or the `oom` lang item) can handle it.
        self.heap
            .lock(|heap| heap.allocate_first_fit(layout))
            .ok()
            .map_or(0 as *mut Opaque, |allocation| allocation.as_ptr())
    }

    unsafe fn dealloc(&self, ptr: *mut Opaque, layout: Layout) {
        // `dealloc` is only ever called with pointers previously returned by
        // `alloc`, so `ptr` is known to be non-null here.
        self.heap
            .lock(|heap| heap.deallocate(NonNull::new_unchecked(ptr), layout));
    }
}