virtual_buffer/
lib.rs

//! This crate provides a cross-platform API for dealing with buffers backed by raw virtual memory.
//!
//! Apart from providing protection and isolation between processes, paging, and memory-mapped
//! hardware, virtual memory solves another critical issue: the issue of the virtual
//! buffer. It allows us to [reserve] a range of memory only in the process's virtual address
//! space, without actually [committing] any of the memory. This can be used to create a buffer
//! that's infinitely growable and shrinkable *in-place*, without wasting any physical memory or
//! even overcommitting any memory. It can also be used to create sparse data structures that don't
//! overcommit memory.
//!
//! The property of growing in-place is very valuable when reallocation is impossible, for example
//! because the data structure needs to be concurrent or otherwise pinned. It may also be of use
//! for single-threaded use cases if reallocation is too expensive (say, tens to hundreds of MB).
//! However, it's probably easier to use something like [`Vec::with_capacity`] in that case.
//!
//! See also [the `vec` module] for an implementation of a concurrent vector.
//!
//! # Reserving
//!
//! Reserving memory involves allocating a range of virtual address space, such that other
//! allocations within the same process can't reserve any of the same virtual address space for
//! anything else. Memory that has been reserved has zero memory cost; however, it can't be
//! accessed. In order to access any of the pages, you have to commit them first.
//!
//! # Committing
//!
//! A range of reserved memory can be committed to make it accessible. Memory that has been freshly
//! committed doesn't use up any physical memory. It merely counts towards overcommitment, which
//! may increase the likelihood of being OOM-killed, may take up space for page tables, and may
//! use some space in the page file. A committed page is only ever backed by a physical page after
//! being written to for the first time (being "faulted"), or when it was [prefaulted].
//!
//! Committed memory can be committed again without issue, so there is no need to keep track of
//! which pages have been committed in order to safely commit some of them.
//!
//! # Decommitting
//!
//! A range of committed memory can be decommitted, making it inaccessible again and releasing any
//! physical memory that may have been used for those pages back to the operating system.
//! Decommitted memory is still reserved.
//!
//! Reserved but uncommitted memory can be decommitted without issue, so there is no need to keep
//! track of which pages have been committed in order to safely decommit some of them.
//!
//! # Unreserving
//!
//! Memory that is unreserved is available for new allocations to reserve again.
//!
//! Committed memory can be unreserved without needing to be decommitted first. However, it's not
//! possible to unreserve part of a range of reserved memory, only the entire allocation.
//!
//! # Prefaulting
//!
//! By default, each committed page is only ever backed by physical memory after it is first
//! written to. Since this happens for every page, and can be slightly costly due to the overhead
//! of a context switch, operating systems provide a way to *prefault* multiple pages at once.
//!
//! # Pages
//!
//! A page is the granularity at which the processor's Memory Management Unit operates; the page
//! size varies between processor architectures. As such, virtual memory operations can only
//! affect ranges that are aligned to the *page size*.
//!
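//! # Examples
//!
//! A minimal sketch of the workflow described above, assuming this crate is named
//! `virtual_buffer`: reserve a large region up front, commit only the first page, write to it,
//! and decommit it again.
//!
//! ```no_run
//! use virtual_buffer::{align_up, page_size, Allocation};
//!
//! # fn main() -> virtual_buffer::Result<()> {
//! // Reserve 1 GiB of virtual address space. None of it is committed yet, so this costs no
//! // physical memory.
//! let allocation = Allocation::new(align_up(1 << 30, page_size()))?;
//!
//! // Commit the first page so that it becomes accessible.
//! allocation.commit(allocation.ptr(), page_size())?;
//!
//! // SAFETY: The page was just committed, and the write is in bounds of the allocation.
//! unsafe { allocation.ptr().write(42) };
//!
//! // Decommit the page, releasing any physical memory backing it to the operating system.
//! allocation.decommit(allocation.ptr(), page_size())?;
//! # Ok(())
//! # }
//! ```
//!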
//! # Cargo features
//!
//! | Feature | Description                                       |
//! |---------|---------------------------------------------------|
//! | std     | Enables the use of `std::error` and `std::borrow` |
//!
//! [reserve]: self#reserving
//! [committing]: self#committing
//! [`Vec::with_capacity`]: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.with_capacity
//! [the `vec` module]: self::vec
//! [prefaulted]: self#prefaulting

#![allow(
    unused_unsafe,
    clippy::doc_markdown,
    clippy::inline_always,
    clippy::unused_self
)]
#![forbid(unsafe_op_in_unsafe_fn)]
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(not(any(unix, windows)))]
compile_error!("unsupported platform");

#[cfg(unix)]
use self::unix as sys;
#[cfg(windows)]
use self::windows as sys;
use core::{fmt, mem};

pub mod vec;

/// An allocation backed by raw virtual memory, giving you the power to directly manipulate the
/// pages within it.
///
/// See also [the crate-level documentation] for more information about virtual memory.
///
/// [the crate-level documentation]: self
pub struct Allocation {
    inner: sys::Allocation,
}

impl Allocation {
    /// Allocates a new region in the process's virtual address space.
    ///
    /// `size` is the size to [reserve] in bytes. This number can be excessively huge, as none of
    /// the memory is [committed] until you call [`commit`]. The memory is [unreserved] when the
    /// `Allocation` is dropped.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// - Panics if `size` is not aligned to the [page size].
    /// - Panics if `size` is zero.
    ///
    /// [reserve]: self#reserving
    /// [page size]: self#pages
    /// [committed]: self#committing
    /// [unreserved]: self#unreserving
    /// [`commit`]: Self::commit
    pub fn new(size: usize) -> Result<Self> {
        assert!(is_aligned(size, page_size()));
        assert_ne!(size, 0);

        let inner = sys::Allocation::new(size)?;

        Ok(Allocation { inner })
    }

    /// Creates a dangling `Allocation`, that is, an allocation with a dangling pointer and zero
    /// size.
    ///
    /// This is useful as a placeholder value to defer allocation until later or if no allocation
    /// is needed.
    ///
    /// `alignment` is the alignment of the allocation's pointer, and must be a power of two.
    ///
    /// # Panics
    ///
    /// Panics if `alignment` is not a power of two.
    #[inline]
    #[must_use]
    pub const fn dangling(alignment: usize) -> Allocation {
        let inner = sys::Allocation::dangling(alignment);

        Allocation { inner }
    }

    /// Returns the pointer to the beginning of the allocation.
    ///
    /// The returned pointer is always valid, including for [dangling allocations], for reads and
    /// writes of [`size()`] bytes in the sense that it can never lead to undefined behavior.
    /// However, a read or write access to [pages] that have not been [committed] will result
    /// in the process receiving SIGSEGV / STATUS_ACCESS_VIOLATION.
    ///
    /// The pointer must not be accessed after `self` has been dropped.
    ///
    /// [dangling allocations]: Self::dangling
    /// [pages]: self#pages
    /// [committed]: self#committing
    /// [`size()`]: Self::size
    #[inline(always)]
    #[must_use]
    pub const fn ptr(&self) -> *mut u8 {
        self.inner.ptr().cast()
    }

    /// Returns the size that was used to [allocate] `self`.
    ///
    /// [allocate]: Self::new
    #[inline(always)]
    #[must_use]
    pub const fn size(&self) -> usize {
        self.inner.size()
    }

    /// [Commits] the given region of memory.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// - Panics if the allocation is [dangling].
    /// - Panics if `ptr` and `size` denote a region that is out of bounds of the allocation.
    /// - Panics if `ptr` and/or `size` is not aligned to the [page size].
    /// - Panics if `size` is zero.
    ///
    /// [Commits]: self#committing
    /// [dangling]: Self::dangling
    /// [page size]: self#pages
    pub fn commit(&self, ptr: *mut u8, size: usize) -> Result<()> {
        self.check_range(ptr, size);

        // SAFETY: Enforced by the `check_range` call above.
        unsafe { self.inner.commit(ptr.cast(), size) }
    }

    /// [Decommits] the given region of memory.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// - Panics if the allocation is [dangling].
    /// - Panics if `ptr` and `size` denote a region that is out of bounds of the allocation.
    /// - Panics if `ptr` and/or `size` is not aligned to the [page size].
    /// - Panics if `size` is zero.
    ///
    /// [Decommits]: self#decommitting
    /// [dangling]: Self::dangling
    /// [page size]: self#pages
    pub fn decommit(&self, ptr: *mut u8, size: usize) -> Result<()> {
        self.check_range(ptr, size);

        // SAFETY: Enforced by the `check_range` call above.
        unsafe { self.inner.decommit(ptr.cast(), size) }
    }

    /// [Prefaults] the given region of memory.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// - Panics if the allocation is [dangling].
    /// - Panics if `ptr` and `size` denote a region that is out of bounds of the allocation.
    /// - Panics if `ptr` and/or `size` is not aligned to the [page size].
    /// - Panics if `size` is zero.
    ///
    /// [Prefaults]: self#prefaulting
    /// [dangling]: Self::dangling
    /// [page size]: self#pages
    pub fn prefault(&self, ptr: *mut u8, size: usize) -> Result<()> {
        self.check_range(ptr, size);

        // SAFETY: Enforced by the `check_range` call above.
        unsafe { self.inner.prefault(ptr.cast(), size) }
    }

    #[inline(never)]
    fn check_range(&self, ptr: *mut u8, size: usize) {
        assert_ne!(self.size(), 0, "the allocation is dangling");
        assert_ne!(size, 0);

        let allocated_range = addr(self.ptr())..addr(self.ptr()) + self.size();
        let requested_range = addr(ptr)..addr(ptr).checked_add(size).unwrap();
        assert!(allocated_range.start <= requested_range.start);
        assert!(requested_range.end <= allocated_range.end);

        let page_size = page_size();
        assert!(is_aligned(addr(ptr), page_size));
        assert!(is_aligned(size, page_size));
    }
}

impl fmt::Debug for Allocation {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&self.inner, f)
    }
}

/// Returns the [page size] of the system.
///
/// The value is cached globally and very fast to retrieve.
///
/// [page size]: self#pages
#[inline(always)]
#[must_use]
pub fn page_size() -> usize {
    sys::page_size()
}

/// Returns the smallest value greater than or equal to `val` that is a multiple of `alignment`.
/// Returns zero on overflow.
///
/// You may use this together with [`page_size`] to align your regions for committing/decommitting.
///
/// `alignment` must be a power of two (which implies that it must be non-zero).
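///
/// # Examples
///
/// For example, rounding a requested byte count up to a 4096-byte page (4096 here is
/// illustrative; use [`page_size`] at runtime):
///
/// ```
/// use virtual_buffer::align_up;
///
/// assert_eq!(align_up(5000, 4096), 8192);
/// assert_eq!(align_up(4096, 4096), 4096);
/// // Overflow returns zero.
/// assert_eq!(align_up(usize::MAX, 4096), 0);
/// ```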
#[inline(always)]
#[must_use]
pub const fn align_up(val: usize, alignment: usize) -> usize {
    debug_assert!(alignment.is_power_of_two());

    val.wrapping_add(alignment - 1) & !(alignment - 1)
}

/// Returns the largest value smaller than or equal to `val` that is a multiple of `alignment`.
///
/// You may use this together with [`page_size`] to align your regions for committing/decommitting.
///
/// `alignment` must be a power of two (which implies that it must be non-zero).
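///
/// # Examples
///
/// For example, rounding an offset down to a 4096-byte page (4096 here is illustrative; use
/// [`page_size`] at runtime):
///
/// ```
/// use virtual_buffer::align_down;
///
/// assert_eq!(align_down(5000, 4096), 4096);
/// assert_eq!(align_down(8192, 4096), 8192);
/// assert_eq!(align_down(123, 4096), 0);
/// ```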
#[inline(always)]
#[must_use]
pub const fn align_down(val: usize, alignment: usize) -> usize {
    debug_assert!(alignment.is_power_of_two());

    val & !(alignment - 1)
}

fn is_aligned(val: usize, alignment: usize) -> bool {
    debug_assert!(alignment.is_power_of_two());

    val & (alignment - 1) == 0
}

/// The type returned by the various [`Allocation`] methods.
pub type Result<T, E = Error> = ::core::result::Result<T, E>;

/// Represents an OS error that can be returned by the various [`Allocation`] methods.
#[derive(Debug)]
pub struct Error {
    code: i32,
}

impl Error {
    /// Returns the OS error that this error represents.
    #[inline]
    #[must_use]
    pub fn as_raw_os_error(&self) -> i32 {
        self.code
    }
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        sys::format_error(self.code, f)
    }
}

#[cfg(feature = "std")]
impl std::error::Error for Error {}

#[cfg(unix)]
mod unix {
    #![allow(non_camel_case_types)]

    use super::{without_provenance_mut, Result};
    use core::{
        ffi::{c_char, c_int, c_void, CStr},
        fmt,
        ptr::{self, NonNull},
        str,
        sync::atomic::{AtomicUsize, Ordering},
    };

    #[derive(Debug)]
    pub struct Allocation {
        ptr: NonNull<c_void>,
        size: usize,
    }

    // SAFETY: It is safe to send `Allocation::ptr` to another thread because the user would have
    // to use unsafe code themselves by dereferencing it.
    unsafe impl Send for Allocation {}

    // SAFETY: It is safe to share `Allocation::ptr` between threads because the user would have
    // to use unsafe code themselves by dereferencing it.
    unsafe impl Sync for Allocation {}

    impl Allocation {
        pub fn new(size: usize) -> Result<Self> {
            // Miri doesn't support protections other than read/write.
            #[cfg(not(miri))]
            let prot = libc::PROT_NONE;
            #[cfg(miri)]
            let prot = libc::PROT_READ | libc::PROT_WRITE;

            let flags = libc::MAP_PRIVATE | libc::MAP_ANONYMOUS;

            // SAFETY: Enforced by the fact that we are passing in a null pointer as the address,
            // so that no existing mappings can be affected in any way.
            let ptr = unsafe { libc::mmap(ptr::null_mut(), size, prot, flags, -1, 0) };

            result(ptr != libc::MAP_FAILED)?;

            Ok(Allocation {
                ptr: NonNull::new(ptr).unwrap(),
                size,
            })
        }

        #[inline]
        pub const fn dangling(alignment: usize) -> Self {
            assert!(alignment.is_power_of_two());

            Allocation {
                // SAFETY: We checked that `alignment` is a power of two, which means it must be
                // non-zero.
                ptr: unsafe { NonNull::new_unchecked(without_provenance_mut(alignment).cast()) },
                size: 0,
            }
        }

        #[inline(always)]
        pub const fn ptr(&self) -> *mut c_void {
            self.ptr.as_ptr()
        }

        #[inline(always)]
        pub const fn size(&self) -> usize {
            self.size
        }

        #[cfg(not(miri))]
        pub unsafe fn commit(&self, ptr: *mut c_void, size: usize) -> Result<()> {
            // SAFETY: The caller must guarantee that `ptr` and `size` are in bounds of the
            // allocation such that no other allocations can be affected and that `ptr` is aligned
            // to the page size. As for this allocation, the only way to access it is by unsafely
            // dereferencing its pointer, where the user has the responsibility to make sure that
            // that is valid.
            result(unsafe { libc::mprotect(ptr, size, libc::PROT_READ | libc::PROT_WRITE) } == 0)
        }

        #[cfg(miri)]
        pub unsafe fn commit(&self, _ptr: *mut c_void, _size: usize) -> Result<()> {
            // Committing memory has no effect on the operational semantics, so there's nothing for
            // Miri to test anyway except hitting a segmentation fault, which is perfectly defined
            // behavior.
            Ok(())
        }

        #[cfg(not(miri))]
        pub unsafe fn decommit(&self, ptr: *mut c_void, size: usize) -> Result<()> {
            // God forbid this be one syscall :ferrisPensive:

            // SAFETY: The caller must guarantee that `ptr` and `size` are in bounds of the
            // allocation such that no other allocations can be affected and that `ptr` is aligned
            // to the page size. As for this allocation, the only way to access it is by unsafely
            // dereferencing its pointer, where the user has the responsibility to make sure that
            // that is valid.
            result(unsafe { libc::madvise(ptr, size, libc::MADV_DONTNEED) } == 0)?;

            // SAFETY: Same as the previous.
            result(unsafe { libc::mprotect(ptr, size, libc::PROT_NONE) } == 0)?;

            Ok(())
        }

        #[cfg(miri)]
        pub unsafe fn decommit(&self, _ptr: *mut c_void, _size: usize) -> Result<()> {
            // Decommitting memory has no effect on the operational semantics, so there's nothing
            // for Miri to test anyway except hitting a segmentation fault, which is perfectly
            // defined behavior.
            Ok(())
        }

        #[cfg(not(miri))]
        pub unsafe fn prefault(&self, ptr: *mut c_void, size: usize) -> Result<()> {
            // SAFETY: The caller must guarantee that `ptr` and `size` are in bounds of the
            // allocation such that no other allocations can be affected and that `ptr` is aligned
            // to the page size. This call is otherwise purely an optimization hint and can't
            // change program behavior.
            result(unsafe { libc::madvise(ptr, size, libc::MADV_WILLNEED) } == 0)
        }

        #[cfg(miri)]
        pub unsafe fn prefault(&self, _ptr: *mut c_void, _size: usize) -> Result<()> {
            Ok(())
        }
    }

    impl Drop for Allocation {
        fn drop(&mut self) {
            if self.size != 0 {
                // SAFETY: It is the responsibility of the user who is unsafely dereferencing the
                // allocation's pointer to ensure that those accesses don't happen after the
                // allocation has been dropped. We know the pointer and its size are valid because
                // we allocated it.
                unsafe { libc::munmap(self.ptr(), self.size) };
            }
        }
    }

    #[inline(always)]
    pub fn page_size() -> usize {
        static PAGE_SIZE: AtomicUsize = AtomicUsize::new(0);

        #[cold]
        fn page_size_slow() -> usize {
            // SAFETY: `sysconf` is always safe to call.
            let page_size = usize::try_from(unsafe { libc::sysconf(libc::_SC_PAGE_SIZE) }).unwrap();
            PAGE_SIZE.store(page_size, Ordering::Relaxed);

            page_size
        }

        let cached = PAGE_SIZE.load(Ordering::Relaxed);

        if cached != 0 {
            cached
        } else {
            page_size_slow()
        }
    }

    fn result(condition: bool) -> Result<()> {
        if condition {
            Ok(())
        } else {
            Err(super::Error { code: errno() })
        }
    }

    #[cfg(not(target_os = "vxworks"))]
    fn errno() -> i32 {
        // SAFETY: `errno_location` returns a valid pointer to the thread-local `errno`.
        unsafe { *errno_location() as i32 }
    }

    #[cfg(target_os = "vxworks")]
    fn errno() -> i32 {
        unsafe { libc::errnoGet() as i32 }
    }

    pub fn format_error(errnum: i32, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let mut buf = [0 as c_char; 128];

        // SAFETY: The pointer and length describe the valid stack buffer above.
        let res = unsafe {
            libc::strerror_r(errnum as c_int, buf.as_mut_ptr(), buf.len() as libc::size_t)
        };

        assert!(res >= 0, "strerror_r failure");

        // SAFETY: `strerror_r` wrote a null-terminated string into the buffer.
        let buf = unsafe { CStr::from_ptr(buf.as_ptr()) }.to_bytes();

        let s = str::from_utf8(buf).unwrap_or_else(|err| {
            // SAFETY: The `from_utf8` call above checked that `err.valid_up_to()` bytes are valid.
            unsafe { str::from_utf8_unchecked(&buf[..err.valid_up_to()]) }
        });

        f.write_str(s)
    }

    extern "C" {
        #[cfg(not(target_os = "vxworks"))]
        #[cfg_attr(
            any(
                target_os = "linux",
                target_os = "emscripten",
                target_os = "fuchsia",
                target_os = "l4re",
                target_os = "hurd",
                target_os = "dragonfly"
            ),
            link_name = "__errno_location"
        )]
        #[cfg_attr(
            any(
                target_os = "netbsd",
                target_os = "openbsd",
                target_os = "android",
                target_os = "redox",
                target_env = "newlib"
            ),
            link_name = "__errno"
        )]
        #[cfg_attr(
            any(target_os = "solaris", target_os = "illumos"),
            link_name = "___errno"
        )]
        #[cfg_attr(target_os = "nto", link_name = "__get_errno_ptr")]
        #[cfg_attr(
            any(target_os = "freebsd", target_vendor = "apple"),
            link_name = "__error"
        )]
        #[cfg_attr(target_os = "haiku", link_name = "_errnop")]
        #[cfg_attr(target_os = "aix", link_name = "_Errno")]
        fn errno_location() -> *mut c_int;
    }
}

#[cfg(windows)]
mod windows {
    #![allow(non_camel_case_types, non_snake_case)]

    use super::{without_provenance_mut, Result};
    use core::{
        ffi::c_void,
        fmt, mem,
        ptr::{self, NonNull},
        str,
        sync::atomic::{AtomicUsize, Ordering},
    };

    #[derive(Debug)]
    pub struct Allocation {
        ptr: NonNull<c_void>,
        size: usize,
    }

    // SAFETY: It is safe to send `Allocation::ptr` to another thread because the user would have
    // to use unsafe code themselves by dereferencing it.
    unsafe impl Send for Allocation {}

    // SAFETY: It is safe to share `Allocation::ptr` between threads because the user would have
    // to use unsafe code themselves by dereferencing it.
    unsafe impl Sync for Allocation {}

    impl Allocation {
        pub fn new(size: usize) -> Result<Self> {
            // Miri doesn't support protections other than read/write.
            #[cfg(not(miri))]
            let protect = PAGE_NOACCESS;
            #[cfg(miri)]
            let protect = PAGE_READWRITE;

            // SAFETY: Enforced by the fact that we are passing in a null pointer as the address,
            // so that no existing mappings can be affected in any way.
            let ptr = unsafe { VirtualAlloc(ptr::null_mut(), size, MEM_RESERVE, protect) };

            result(!ptr.is_null())?;

            Ok(Allocation {
                ptr: NonNull::new(ptr).unwrap(),
                size,
            })
        }

        #[inline]
        pub const fn dangling(alignment: usize) -> Self {
            assert!(alignment.is_power_of_two());

            Allocation {
                // SAFETY: We checked that `alignment` is a power of two, which means it must be
                // non-zero.
                ptr: unsafe { NonNull::new_unchecked(without_provenance_mut(alignment).cast()) },
                size: 0,
            }
        }

        #[inline(always)]
        pub const fn ptr(&self) -> *mut c_void {
            self.ptr.as_ptr()
        }

        #[inline(always)]
        pub const fn size(&self) -> usize {
            self.size
        }

        #[cfg(not(miri))]
        pub unsafe fn commit(&self, ptr: *mut c_void, size: usize) -> Result<()> {
            // SAFETY: The caller must guarantee that `ptr` and `size` are in bounds of the
            // allocation such that no other allocations can be affected. As for this allocation,
            // the only way to access it is by unsafely dereferencing its pointer, where the user
            // has the responsibility to make sure that that is valid.
            result(!unsafe { VirtualAlloc(ptr, size, MEM_COMMIT, PAGE_READWRITE) }.is_null())
        }

        #[cfg(miri)]
        pub unsafe fn commit(&self, _ptr: *mut c_void, _size: usize) -> Result<()> {
            // Committing memory has no effect on the operational semantics, so there's nothing for
            // Miri to test anyway except hitting a segmentation fault, which is perfectly defined
            // behavior.
            Ok(())
        }

        #[cfg(not(miri))]
        pub unsafe fn decommit(&self, ptr: *mut c_void, size: usize) -> Result<()> {
            // SAFETY: The caller must guarantee that `ptr` and `size` are in bounds of the
            // allocation such that no other allocations can be affected. As for this allocation,
            // the only way to access it is by unsafely dereferencing its pointer, where the user
            // has the responsibility to make sure that that is valid.
            result(unsafe { VirtualFree(ptr, size, MEM_DECOMMIT) } != 0)
        }

        #[cfg(miri)]
        pub unsafe fn decommit(&self, _ptr: *mut c_void, _size: usize) -> Result<()> {
            // Decommitting memory has no effect on the operational semantics, so there's nothing
            // for Miri to test anyway except hitting a segmentation fault, which is perfectly
            // defined behavior.
            Ok(())
        }

        #[cfg(all(not(miri), not(target_vendor = "win7")))]
        pub unsafe fn prefault(&self, ptr: *mut c_void, size: usize) -> Result<()> {
            let entry = WIN32_MEMORY_RANGE_ENTRY {
                VirtualAddress: ptr,
                NumberOfBytes: size,
            };

            // SAFETY: The caller must guarantee that `ptr` and `size` are in bounds of the
            // allocation such that no other allocations can be affected. We are targeting our own
            // process, the pointer points to a valid and initialized memory location above, and
            // the size of 1 is correct as there is only one entry. This call is otherwise purely
            // an optimization hint and can't change program behavior.
            result(unsafe { PrefetchVirtualMemory(GetCurrentProcess(), 1, &entry, 0) } != 0)
        }

        #[cfg(any(miri, target_vendor = "win7"))]
        pub unsafe fn prefault(&self, _ptr: *mut c_void, _size: usize) -> Result<()> {
            Ok(())
        }
    }

    impl Drop for Allocation {
        fn drop(&mut self) {
            if self.size != 0 {
                // SAFETY: It is the responsibility of the user who is unsafely dereferencing the
                // allocation's pointer to ensure that those accesses don't happen after the
                // allocation has been dropped. We know that the pointer is valid because we
                // allocated it, and we are passing in 0 as the size as required for `MEM_RELEASE`.
                unsafe { VirtualFree(self.ptr(), 0, MEM_RELEASE) };
            }
        }
    }

    #[inline(always)]
    pub fn page_size() -> usize {
        static PAGE_SIZE: AtomicUsize = AtomicUsize::new(0);

        #[cold]
        fn page_size_slow() -> usize {
            // SAFETY: `SYSTEM_INFO` is composed only of primitive types.
            let mut system_info = unsafe { mem::zeroed() };
            // SAFETY: The pointer points to a valid memory location above.
            unsafe { GetSystemInfo(&mut system_info) };
            let page_size = usize::try_from(system_info.dwPageSize).unwrap();
            PAGE_SIZE.store(page_size, Ordering::Relaxed);

            page_size
        }

        let cached = PAGE_SIZE.load(Ordering::Relaxed);

        if cached != 0 {
            cached
        } else {
            page_size_slow()
        }
    }

    fn result(condition: bool) -> Result<()> {
        if condition {
            Ok(())
        } else {
            Err(super::Error { code: errno() })
        }
    }

    fn errno() -> i32 {
        // SAFETY: `GetLastError` is always safe to call.
        unsafe { GetLastError() as i32 }
    }

    pub fn format_error(mut errnum: i32, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let mut buf = [0u16; 2048];
        let mut module = ptr::null_mut();
        let mut flags = 0;

        // NTSTATUS errors may be encoded as HRESULT, which may be returned from
        // GetLastError. For more information about Windows error codes, see
        // `[MS-ERREF]`: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/0642cb2f-2075-4469-918c-4441e69c548a
        if (errnum & FACILITY_NT_BIT as i32) != 0 {
            // Format according to https://support.microsoft.com/en-us/help/259693
            const NTDLL_DLL: &[u16] = &[
                'N' as _, 'T' as _, 'D' as _, 'L' as _, 'L' as _, '.' as _, 'D' as _, 'L' as _,
                'L' as _, 0,
            ];

            // SAFETY: `NTDLL_DLL` is a valid null-terminated UTF-16 string.
            module = unsafe { GetModuleHandleW(NTDLL_DLL.as_ptr()) };

            if !module.is_null() {
                errnum ^= FACILITY_NT_BIT as i32;
                flags = FORMAT_MESSAGE_FROM_HMODULE;
            }
        }

        // SAFETY: The pointer and length describe the valid stack buffer above.
        let res = unsafe {
            FormatMessageW(
                flags | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                module,
                errnum as u32,
                0,
                buf.as_mut_ptr(),
                buf.len() as u32,
                ptr::null(),
            ) as usize
        };

        if res == 0 {
            // Sometimes FormatMessageW can fail, e.g. if the system doesn't like 0 as the langId.
            let fm_err = errno();
            return write!(
                f,
                "OS Error {errnum} (FormatMessageW() returned error {fm_err})",
            );
        }

        let mut output_len = 0;
        let mut output = [0u8; 2048];

        for c in char::decode_utf16(buf[..res].iter().copied()) {
            let Ok(c) = c else {
                return write!(
                    f,
                    "OS Error {errnum} (FormatMessageW() returned invalid UTF-16)",
                );
            };

            let len = c.len_utf8();

            if len > output.len() - output_len {
                break;
            }

            c.encode_utf8(&mut output[output_len..]);
            output_len += len;
        }

        // SAFETY: The `encode_utf8` calls above were used to encode valid UTF-8.
        let s = unsafe { str::from_utf8_unchecked(&output[..output_len]) };

        f.write_str(s)
    }

    windows_targets::link!("kernel32.dll" "system" fn GetSystemInfo(
        lpSystemInfo: *mut SYSTEM_INFO,
    ));

    windows_targets::link!("kernel32.dll" "system" fn VirtualAlloc(
        lpAddress: *mut c_void,
        dwSize: usize,
        flAllocationType: u32,
        flProtect: u32,
    ) -> *mut c_void);

    windows_targets::link!("kernel32.dll" "system" fn VirtualFree(
        lpAddress: *mut c_void,
        dwSize: usize,
        dwFreeType: u32,
    ) -> i32);

    #[cfg(not(target_vendor = "win7"))]
    windows_targets::link!("kernel32.dll" "system" fn GetCurrentProcess() -> HANDLE);

    #[cfg(not(target_vendor = "win7"))]
    windows_targets::link!("kernel32.dll" "system" fn PrefetchVirtualMemory(
        hProcess: HANDLE,
        NumberOfEntries: usize,
        VirtualAddresses: *const WIN32_MEMORY_RANGE_ENTRY,
        Flags: u32,
    ) -> i32);

    windows_targets::link!("kernel32.dll" "system" fn GetLastError() -> u32);

    windows_targets::link!("kernel32.dll" "system" fn FormatMessageW(
        dwFlags: u32,
        lpSource: *const c_void,
        dwMessageId: u32,
        dwLanguageId: u32,
        lpBuffer: *mut u16,
        nSize: u32,
        arguments: *const *const i8,
    ) -> u32);

    windows_targets::link!("kernel32.dll" "system" fn GetModuleHandleW(
        lpModuleName: *const u16,
    ) -> HMODULE);

    #[repr(C)]
    struct SYSTEM_INFO {
        wProcessorArchitecture: u16,
        wReserved: u16,
        dwPageSize: u32,
        lpMinimumApplicationAddress: *mut c_void,
        lpMaximumApplicationAddress: *mut c_void,
        dwActiveProcessorMask: usize,
        dwNumberOfProcessors: u32,
        dwProcessorType: u32,
        dwAllocationGranularity: u32,
        wProcessorLevel: u16,
        wProcessorRevision: u16,
    }

    const MEM_COMMIT: u32 = 1 << 12;
    const MEM_RESERVE: u32 = 1 << 13;
    const MEM_DECOMMIT: u32 = 1 << 14;
    const MEM_RELEASE: u32 = 1 << 15;

    const PAGE_NOACCESS: u32 = 1 << 0;
    const PAGE_READWRITE: u32 = 1 << 2;

    #[cfg(not(target_vendor = "win7"))]
    type HANDLE = isize;

    #[cfg(not(target_vendor = "win7"))]
    #[repr(C)]
    struct WIN32_MEMORY_RANGE_ENTRY {
        VirtualAddress: *mut c_void,
        NumberOfBytes: usize,
    }

    const FACILITY_NT_BIT: u32 = 1 << 28;

    const FORMAT_MESSAGE_FROM_HMODULE: u32 = 1 << 11;
    const FORMAT_MESSAGE_FROM_SYSTEM: u32 = 1 << 12;
    const FORMAT_MESSAGE_IGNORE_INSERTS: u32 = 1 << 9;

    type HMODULE = *mut c_void;
}

// TODO: Replace this with `<*const u8>::addr` once we release a breaking version.
#[allow(clippy::transmutes_expressible_as_ptr_casts)]
fn addr(ptr: *const u8) -> usize {
    // SAFETY: `*const u8` and `usize` have the same layout.
    unsafe { mem::transmute::<*const u8, usize>(ptr) }
}

// TODO: Replace this with `ptr::without_provenance_mut` once we release a breaking version.
#[allow(clippy::useless_transmute)]
const fn without_provenance_mut(addr: usize) -> *mut u8 {
    // SAFETY: `usize` and `*mut u8` have the same layout.
    unsafe { mem::transmute::<usize, *mut u8>(addr) }
}