virtual_buffer/lib.rs
//! This crate provides a cross-platform API for dealing with buffers backed by raw virtual memory.
//!
//! Apart from providing protection and isolation between processes, paging, and memory-mapped
//! hardware, virtual memory serves to solve another critical issue: the issue of the virtual
//! buffer. It allows us to [reserve] a range of memory only in the process's virtual address
//! space, without actually [committing] any of the memory. This can be used to create a buffer
//! that's infinitely growable and shrinkable *in-place*, without wasting any physical memory or
//! even overcommitting any memory. It can also be used to create sparse data structures that
//! don't overcommit memory.
//!
//! The property of growing in-place is very valuable when reallocation is impossible, for example
//! because the data structure needs to be concurrent or otherwise pinned. It may also be of use
//! for single-threaded use cases if reallocation is too expensive (say, tens to hundreds of MB).
//! However, it's probably easier to use something like [`Vec::with_capacity`] in that case.
//!
//! See also [the `vec` module] for an implementation of a vector and [the `concurrent::vec`
//! module] for an implementation of a concurrent vector.
//!
//! # Reserving
//!
//! Reserving memory involves allocating a range of virtual address space such that other
//! allocations within the same process can't reserve any of the same virtual address space for
//! anything else. Memory that has been reserved has zero memory cost; however, it can't be
//! accessed. In order to access any of the pages, you will have to commit them first.
//!
//! # Committing
//!
//! A range of reserved memory can be committed to make it accessible. Memory that has been
//! freshly committed doesn't use up any physical memory. It merely counts towards
//! overcommitment, which may increase the likelihood of being OOM-killed, and may take up space
//! in page tables and in the page file. A committed page is only ever backed by a physical page
//! after being written to for the first time (being "faulted"), or when it was [prefaulted].
//!
//! Committed memory can be committed again without issue, so there is no need to keep track of
//! which pages have been committed in order to safely commit some of them.
//!
//! # Decommitting
//!
//! A range of committed memory can be decommitted, making it inaccessible again and releasing
//! any physical memory that may have been used for it back to the operating system. Decommitted
//! memory is still reserved.
//!
//! Reserved but uncommitted memory can be decommitted without issue, so there is no need to keep
//! track of which pages have been committed in order to safely decommit some of them.
//!
//! # Unreserving
//!
//! Memory that is unreserved is available for new allocations to reserve again.
//!
//! Committed memory can be unreserved without needing to be decommitted first. However, it's not
//! possible to unreserve only part of a reserved range; only the entire allocation can be
//! unreserved.
//!
//! # Prefaulting
//!
//! By default, each committed page is only ever backed by physical memory after it was first
//! written to. Since this happens for every page, and can be slightly costly due to the overhead
//! of a context switch, operating systems provide a way to *prefault* multiple pages at once.
//!
//! # Pages
//!
//! A page refers to the granularity at which the processor's Memory Management Unit operates and
//! varies between processor architectures. As such, virtual memory operations can only affect
//! ranges that are aligned to the *page size*. You can retrieve the page size using
//! [`page_size`], and you can align regions to the page size using [`align_up`] and
//! [`align_down`].
//!
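//! # Examples
//!
//! A minimal sketch of reserving a large buffer up front and committing pages on demand. The
//! sizes here are arbitrary, and errors are simply propagated:
//!
//! ```no_run
//! use virtual_buffer::{align_up, page_size, Allocation};
//!
//! # fn main() -> virtual_buffer::Result {
//! // Reserve 1 GiB of address space; none of it uses physical memory yet.
//! let size = align_up(1 << 30, page_size());
//! let allocation = Allocation::new(size)?;
//!
//! // Commit the first page, making it accessible.
//! allocation.commit(allocation.ptr(), page_size())?;
//!
//! // The committed page can now be written to.
//! unsafe { allocation.ptr().write(42) };
//!
//! // Release the physical memory back to the OS; the address range stays reserved.
//! allocation.decommit(allocation.ptr(), page_size())?;
//! # Ok(())
//! # }
//! ```
//!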
//! # Cargo features
//!
//! | Feature | Description                                         |
//! |---------|-----------------------------------------------------|
//! | std     | Enables `libc/std` and `alloc`. Enabled by default. |
//! | alloc   | Enables the use of `alloc::borrow`.                 |
//!
//! [reserve]: self#reserving
//! [committing]: self#committing
//! [the `vec` module]: self::vec
//! [the `concurrent::vec` module]: self::concurrent::vec
//! [prefaulted]: self#prefaulting

#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(not(any(unix, windows)))]
compile_error!("unsupported platform");

#[cfg(feature = "alloc")]
extern crate alloc;

#[cfg(unix)]
use self::unix as sys;
#[cfg(windows)]
use self::windows as sys;
use core::{
    fmt,
    ptr::{self, NonNull},
    sync::atomic::{AtomicUsize, Ordering},
};

pub mod concurrent;
pub mod vec;

/// An allocation backed by raw virtual memory, giving you the power to directly manipulate the
/// pages within it.
///
/// See also [the crate-level documentation] for more information about virtual memory.
///
/// [the crate-level documentation]: self
#[derive(Debug)]
pub struct Allocation {
    ptr: NonNull<u8>,
    size: usize,
}

// SAFETY: It is safe to send `Allocation::ptr` to another thread because it is a heap allocation
// and we own it.
unsafe impl Send for Allocation {}

// SAFETY: It is safe to share `Allocation::ptr` between threads because the user would have to
// use unsafe code themselves to dereference it.
unsafe impl Sync for Allocation {}

impl Allocation {
    /// Allocates a new region in the process's virtual address space.
    ///
    /// `size` is the size to [reserve] in bytes. This number can be excessively huge, as none of
    /// the memory is [committed] until you call [`commit`]. The memory is [unreserved] when the
    /// `Allocation` is dropped.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// Panics if `size` is not aligned to the [page size].
    ///
    /// [reserve]: self#reserving
    /// [committed]: self#committing
    /// [`commit`]: Self::commit
    /// [unreserved]: self#unreserving
    /// [page size]: self#pages
    #[track_caller]
    pub fn new(size: usize) -> Result<Self> {
        let page_size = page_size();

        assert!(is_aligned(size, page_size));

        let ptr = if size == 0 {
            ptr::without_provenance_mut(page_size)
        } else {
            sys::reserve(size)?
        };

        Ok(Allocation {
            ptr: NonNull::new(ptr.cast()).unwrap(),
            size,
        })
    }

    /// Creates a dangling `Allocation`, that is, an allocation with a dangling pointer and zero
    /// size.
    ///
    /// This is useful as a placeholder value to defer allocation until later or if no allocation
    /// is needed.
    ///
    /// `alignment` is the alignment of the allocation's pointer, and must be a power of two.
    ///
    /// # Panics
    ///
    /// Panics if `alignment` is not a power of two.
    #[inline]
    #[must_use]
    pub const fn dangling(alignment: usize) -> Allocation {
        assert!(alignment.is_power_of_two());

        Allocation {
            // SAFETY: We checked that `alignment` is a power of two, which means it must be
            // nonzero.
            ptr: unsafe { NonNull::new_unchecked(ptr::without_provenance_mut(alignment)) },
            size: 0,
        }
    }

    /// Returns the pointer to the beginning of the allocation.
    ///
    /// The returned pointer is always valid, including for [dangling allocations], for reads and
    /// writes of [`size()`] bytes in the sense that it can never lead to undefined behavior,
    /// assuming the [pages] have been [committed]. Doing a read or write access to pages that
    /// have not been committed will result in the process receiving SIGSEGV /
    /// STATUS_ACCESS_VIOLATION. This means in particular that the pointer stays valid until
    /// `self` is dropped.
    ///
    /// The pointer must not be accessed after `self` has been dropped.
    ///
    /// [dangling allocations]: Self::dangling
    /// [pages]: self#pages
    /// [committed]: self#committing
    /// [`size()`]: Self::size
    #[inline]
    #[must_use]
    pub const fn ptr(&self) -> *mut u8 {
        self.ptr.as_ptr()
    }

    /// Returns the size that was used to [allocate] `self`.
    ///
    /// [allocate]: Self::new
    #[inline]
    #[must_use]
    pub const fn size(&self) -> usize {
        self.size
    }

    /// [Commits] the given region of memory.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// - Panics if the allocation is [dangling].
    /// - Panics if `ptr` and `size` denote a region that is out of bounds of the allocation.
    /// - Panics if `ptr` and/or `size` is not aligned to the [page size].
    /// - Panics if `size` is zero.
    ///
    /// [Commits]: self#committing
    /// [dangling]: Self::dangling
    /// [page size]: self#pages
    #[track_caller]
    pub fn commit(&self, ptr: *mut u8, size: usize) -> Result {
        self.check_range(ptr, size);

        // SAFETY: We checked that `ptr` and `size` are in bounds of the allocation such that no
        // other allocations can be affected and that `ptr` is aligned to the page size. As for
        // this allocation, the only way to access it is by unsafely dereferencing its pointer,
        // where the user has the responsibility to make sure that that is valid.
        unsafe { sys::commit(ptr.cast(), size) }
    }

    /// [Decommits] the given region of memory.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// - Panics if the allocation is [dangling].
    /// - Panics if `ptr` and `size` denote a region that is out of bounds of the allocation.
    /// - Panics if `ptr` and/or `size` is not aligned to the [page size].
    /// - Panics if `size` is zero.
    ///
    /// [Decommits]: self#decommitting
    /// [dangling]: Self::dangling
    /// [page size]: self#pages
    #[track_caller]
    pub fn decommit(&self, ptr: *mut u8, size: usize) -> Result {
        self.check_range(ptr, size);

        // SAFETY: We checked that `ptr` and `size` are in bounds of the allocation such that no
        // other allocations can be affected and that `ptr` is aligned to the page size. As for
        // this allocation, the only way to access it is by unsafely dereferencing its pointer,
        // where the user has the responsibility to make sure that that is valid.
        unsafe { sys::decommit(ptr.cast(), size) }
    }

    /// [Prefaults] the given region of memory.
    ///
    /// # Errors
    ///
    /// Returns an error if the operating system returns an error.
    ///
    /// # Panics
    ///
    /// - Panics if the allocation is [dangling].
    /// - Panics if `ptr` and `size` denote a region that is out of bounds of the allocation.
    /// - Panics if `ptr` and/or `size` is not aligned to the [page size].
    /// - Panics if `size` is zero.
    ///
    /// [Prefaults]: self#prefaulting
    /// [dangling]: Self::dangling
    /// [page size]: self#pages
    #[track_caller]
    pub fn prefault(&self, ptr: *mut u8, size: usize) -> Result {
        self.check_range(ptr, size);

        sys::prefault(ptr.cast(), size)
    }

    #[inline(never)]
    #[track_caller]
    fn check_range(&self, ptr: *mut u8, size: usize) {
        assert_ne!(self.size(), 0, "the allocation is dangling");
        assert_ne!(size, 0);

        let allocated_range = self.ptr().addr()..self.ptr().addr() + self.size();
        let requested_range = ptr.addr()..ptr.addr().checked_add(size).unwrap();
        assert!(allocated_range.start <= requested_range.start);
        assert!(requested_range.end <= allocated_range.end);

        let page_size = page_size();
        assert!(is_aligned(ptr.addr(), page_size));
        assert!(is_aligned(size, page_size));
    }
}
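The validation that `check_range` performs can be sketched standalone, using plain integer addresses instead of pointers. The `range_is_valid` function and all of its parameters are hypothetical names introduced here purely for illustration:

```rust
/// Returns whether `[ptr, ptr + size)` lies within `[base, base + len)`, is
/// nonempty, and is aligned to `page_size` (which must be a power of two).
fn range_is_valid(base: usize, len: usize, ptr: usize, size: usize, page_size: usize) -> bool {
    // Reject requests whose end would overflow the address space.
    let Some(end) = ptr.checked_add(size) else {
        return false;
    };

    size != 0
        && base <= ptr
        && end <= base + len
        && ptr & (page_size - 1) == 0
        && size & (page_size - 1) == 0
}

fn main() {
    let (base, len, page) = (0x1000, 0x4000, 0x1000);

    assert!(range_is_valid(base, len, 0x2000, 0x1000, page)); // in bounds, aligned
    assert!(!range_is_valid(base, len, 0x2000, 0x4000, page)); // runs past the end
    assert!(!range_is_valid(base, len, 0x2400, 0x1000, page)); // unaligned pointer
    assert!(!range_is_valid(base, len, 0x2000, 0, page)); // zero size
}
```

Unlike this sketch, the real method panics instead of returning `false`, so that a misuse is caught loudly at the caller (hence `#[track_caller]`).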

impl Drop for Allocation {
    fn drop(&mut self) {
        if self.size != 0 {
            // SAFETY: It is the responsibility of the user who is unsafely dereferencing the
            // allocation's pointer to ensure that those accesses don't happen after the
            // allocation has been dropped. We know the pointer and its size are valid because we
            // allocated it.
            unsafe { sys::unreserve(self.ptr.as_ptr().cast(), self.size) };
        }
    }
}

/// Returns the [page size] of the system.
///
/// The value is cached globally and very fast to retrieve.
///
/// You can align regions to the page size using [`align_up`] and [`align_down`].
///
/// [page size]: self#pages
#[inline]
#[must_use]
pub fn page_size() -> usize {
    static PAGE_SIZE: AtomicUsize = AtomicUsize::new(0);

    #[cold]
    #[inline(never)]
    fn page_size_slow() -> usize {
        let page_size = sys::page_size();
        assert!(page_size.is_power_of_two());
        PAGE_SIZE.store(page_size, Ordering::Relaxed);

        page_size
    }

    let cached = PAGE_SIZE.load(Ordering::Relaxed);

    if cached != 0 {
        cached
    } else {
        page_size_slow()
    }
}
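The caching idiom used above can be sketched standalone. The `expensive_query` function is a stand-in for the actual OS query; zero serves as the "not yet cached" sentinel, which is why the real code asserts the page size is a power of two (and therefore nonzero):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for the expensive OS query; always returns the same nonzero value.
fn expensive_query() -> usize {
    4096
}

/// Caches a nonzero value globally. Relaxed ordering suffices because the
/// value itself is the only data being communicated; at worst, two racing
/// threads both perform the query and store the same result.
fn cached_query() -> usize {
    static CACHE: AtomicUsize = AtomicUsize::new(0);

    let cached = CACHE.load(Ordering::Relaxed);

    if cached != 0 {
        cached
    } else {
        let value = expensive_query();
        CACHE.store(value, Ordering::Relaxed);
        value
    }
}

fn main() {
    assert_eq!(cached_query(), 4096); // first call performs the query
    assert_eq!(cached_query(), 4096); // subsequent calls hit the cache
}
```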

/// Returns the smallest value greater than or equal to `val` that is a multiple of `alignment`.
/// Returns zero on overflow.
///
/// You can use this together with [`page_size`] to align your regions for committing/decommitting.
///
/// `alignment` must be a power of two (which implies that it must be nonzero).
#[inline(always)]
#[must_use]
pub const fn align_up(val: usize, alignment: usize) -> usize {
    debug_assert!(alignment.is_power_of_two());

    val.wrapping_add(alignment - 1) & !(alignment - 1)
}

/// Returns the largest value smaller than or equal to `val` that is a multiple of `alignment`.
///
/// You can use this together with [`page_size`] to align your regions for committing/decommitting.
///
/// `alignment` must be a power of two (which implies that it must be nonzero).
#[inline(always)]
#[must_use]
pub const fn align_down(val: usize, alignment: usize) -> usize {
    debug_assert!(alignment.is_power_of_two());

    val & !(alignment - 1)
}
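The bit trick behind these helpers relies on the alignment being a power of two: masking with `!(alignment - 1)` clears the low bits and rounds down, while adding `alignment - 1` first rounds up instead. Standalone copies of the two one-liners, so the arithmetic can be checked in isolation from the crate:

```rust
const fn align_up(val: usize, alignment: usize) -> usize {
    // `wrapping_add` makes the overflow case well-defined: it wraps to zero.
    val.wrapping_add(alignment - 1) & !(alignment - 1)
}

const fn align_down(val: usize, alignment: usize) -> usize {
    val & !(alignment - 1)
}

fn main() {
    assert_eq!(align_up(1, 4096), 4096);
    assert_eq!(align_up(4096, 4096), 4096); // already aligned
    assert_eq!(align_up(4097, 4096), 8192);
    assert_eq!(align_up(usize::MAX, 4096), 0); // overflow wraps to zero

    assert_eq!(align_down(4095, 4096), 0);
    assert_eq!(align_down(8191, 4096), 4096);
}
```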

const fn is_aligned(val: usize, alignment: usize) -> bool {
    debug_assert!(alignment.is_power_of_two());

    val & (alignment - 1) == 0
}

trait SizedTypeProperties: Sized {
    const IS_ZST: bool = size_of::<Self>() == 0;
}

impl<T> SizedTypeProperties for T {}

macro_rules! assert_unsafe_precondition {
    ($condition:expr, $message:expr $(,)?) => {
        // The nesting is intentional. There is a special path in the compiler for `if false`,
        // facilitating conditional compilation without `#[cfg]` and the problems that come with
        // it.
        if cfg!(debug_assertions) {
            if !$condition {
                crate::panic_nounwind(concat!("unsafe precondition(s) violated: ", $message));
            }
        }
    };
}
use assert_unsafe_precondition;

/// Polyfill for `core::panicking::panic_nounwind`.
#[cold]
#[inline(never)]
const fn panic_nounwind(message: &'static str) -> ! {
    // This is an `extern "C"` function because these are guaranteed to abort instead of unwinding
    // as of Rust 1.81.0. They also get the `nounwind` LLVM attribute, so this approach optimizes
    // better than the alternatives. We wrap it in a Rust function for the better calling
    // convention for DST references and to make it clearer that this is not actually used from C.
    #[allow(improper_ctypes_definitions)]
    #[inline]
    const extern "C" fn inner(message: &'static str) -> ! {
        panic!("{}", message);
    }

    inner(message)
}

/// The type returned by the various [`Allocation`] methods.
pub type Result<T = (), E = Error> = ::core::result::Result<T, E>;

/// Represents an OS error that can be returned by the various [`Allocation`] methods.
#[derive(Debug)]
pub struct Error {
    code: i32,
}

impl Error {
    /// Returns the OS error that this error represents.
    #[inline]
    #[must_use]
    pub fn as_raw_os_error(&self) -> i32 {
        self.code
    }
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        sys::format_error(self.code, f)
    }
}

impl core::error::Error for Error {}

#[cfg(unix)]
mod unix {
    #![allow(non_camel_case_types)]

    use super::Result;
    use core::{
        ffi::{CStr, c_char, c_int, c_void},
        fmt, ptr, str,
    };

    pub fn reserve(size: usize) -> Result<*mut c_void> {
        let prot = if cfg!(miri) {
            // Miri doesn't support protections other than read/write.
            libc::PROT_READ | libc::PROT_WRITE
        } else {
            libc::PROT_NONE
        };

        let flags = libc::MAP_PRIVATE | libc::MAP_ANONYMOUS;

        // SAFETY: Enforced by the fact that we are passing in a null pointer as the address so
        // that no existing mappings can be affected in any way.
        let ptr = unsafe { libc::mmap(ptr::null_mut(), size, prot, flags, -1, 0) };

        result(ptr != libc::MAP_FAILED)?;

        Ok(ptr)
    }

    pub unsafe fn commit(ptr: *mut c_void, size: usize) -> Result {
        if cfg!(miri) {
            // There is no equivalent to committing memory in the AM, so there's nothing for Miri
            // to check here. The worst that can happen is an unintentional segfault.
            Ok(())
        } else {
            result(unsafe { libc::mprotect(ptr, size, libc::PROT_READ | libc::PROT_WRITE) } == 0)
        }
    }

    pub unsafe fn decommit(ptr: *mut c_void, size: usize) -> Result {
        if cfg!(miri) {
            // There is no equivalent to decommitting memory in the AM, so there's nothing for
            // Miri to check here. The worst that can happen is an unintentional segfault.
            Ok(())
        } else {
            // God forbid this be one syscall :ferrisPensive:
            result(unsafe { libc::madvise(ptr, size, libc::MADV_DONTNEED) } == 0)?;
            result(unsafe { libc::mprotect(ptr, size, libc::PROT_NONE) } == 0)?;

            Ok(())
        }
    }

    pub fn prefault(ptr: *mut c_void, size: usize) -> Result {
        if cfg!(miri) {
            // Prefaulting is just an optimization hint and can't change program behavior.
            Ok(())
        } else {
            // SAFETY: Prefaulting is just an optimization hint and can't change program behavior.
            result(unsafe { libc::madvise(ptr, size, libc::MADV_WILLNEED) } == 0)
        }
    }

    pub unsafe fn unreserve(ptr: *mut c_void, size: usize) {
        unsafe { libc::munmap(ptr, size) };
    }

    pub fn page_size() -> usize {
        // SAFETY: `sysconf` is always safe to call with a valid name constant.
        usize::try_from(unsafe { libc::sysconf(libc::_SC_PAGE_SIZE) }).unwrap()
    }

    fn result(condition: bool) -> Result {
        if condition {
            Ok(())
        } else {
            Err(super::Error { code: errno() })
        }
    }

    #[cfg(not(target_os = "vxworks"))]
    fn errno() -> i32 {
        unsafe { *errno_location() as i32 }
    }

    #[cfg(target_os = "vxworks")]
    fn errno() -> i32 {
        unsafe { libc::errnoGet() as i32 }
    }

    pub fn format_error(errnum: i32, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let mut buf = [0 as c_char; 128];

        let res = unsafe {
            libc::strerror_r(errnum as c_int, buf.as_mut_ptr(), buf.len() as libc::size_t)
        };

        assert!(res >= 0, "strerror_r failure");

        let buf = unsafe { CStr::from_ptr(buf.as_ptr()) }.to_bytes();

        let s = str::from_utf8(buf).unwrap_or_else(|err| {
            // SAFETY: The `from_utf8` call above checked that `err.valid_up_to()` bytes are
            // valid.
            unsafe { str::from_utf8_unchecked(&buf[..err.valid_up_to()]) }
        });

        f.write_str(s)
    }

    unsafe extern "C" {
        #[cfg(not(target_os = "vxworks"))]
        #[cfg_attr(
            any(
                target_os = "linux",
                target_os = "emscripten",
                target_os = "fuchsia",
                target_os = "l4re",
                target_os = "hurd",
                target_os = "dragonfly"
            ),
            link_name = "__errno_location"
        )]
        #[cfg_attr(
            any(
                target_os = "netbsd",
                target_os = "openbsd",
                target_os = "android",
                target_os = "redox",
                target_env = "newlib"
            ),
            link_name = "__errno"
        )]
        #[cfg_attr(
            any(target_os = "solaris", target_os = "illumos"),
            link_name = "___errno"
        )]
        #[cfg_attr(target_os = "nto", link_name = "__get_errno_ptr")]
        #[cfg_attr(
            any(target_os = "freebsd", target_vendor = "apple"),
            link_name = "__error"
        )]
        #[cfg_attr(target_os = "haiku", link_name = "_errnop")]
        #[cfg_attr(target_os = "aix", link_name = "_Errno")]
        fn errno_location() -> *mut c_int;
    }
}

#[cfg(windows)]
mod windows {
    #![allow(non_camel_case_types, non_snake_case, clippy::upper_case_acronyms)]

    use super::Result;
    use core::{ffi::c_void, fmt, mem, ptr, str};

    pub fn reserve(size: usize) -> Result<*mut c_void> {
        let protect = if cfg!(miri) {
            // Miri doesn't support protections other than read/write.
            PAGE_READWRITE
        } else {
            PAGE_NOACCESS
        };

        // SAFETY: Enforced by the fact that we are passing in a null pointer as the address so
        // that no existing mappings can be affected in any way.
        let ptr = unsafe { VirtualAlloc(ptr::null_mut(), size, MEM_RESERVE, protect) };

        result(!ptr.is_null())?;

        Ok(ptr)
    }

    pub unsafe fn commit(ptr: *mut c_void, size: usize) -> Result {
        if cfg!(miri) {
            // There is no equivalent to committing memory in the AM, so there's nothing for Miri
            // to check here. The worst that can happen is an unintentional segfault.
            Ok(())
        } else {
            result(!unsafe { VirtualAlloc(ptr, size, MEM_COMMIT, PAGE_READWRITE) }.is_null())
        }
    }

    pub unsafe fn decommit(ptr: *mut c_void, size: usize) -> Result {
        if cfg!(miri) {
            // There is no equivalent to decommitting memory in the AM, so there's nothing for
            // Miri to check here. The worst that can happen is an unintentional segfault.
            Ok(())
        } else {
            result(unsafe { VirtualFree(ptr, size, MEM_DECOMMIT) } != 0)
        }
    }

    #[cfg(not(target_vendor = "win7"))]
    pub fn prefault(ptr: *mut c_void, size: usize) -> Result {
        if cfg!(miri) {
            // Prefaulting is just an optimization hint and can't change program behavior.
            Ok(())
        } else {
            let entry = WIN32_MEMORY_RANGE_ENTRY {
                VirtualAddress: ptr,
                NumberOfBytes: size,
            };

            // SAFETY: Prefaulting is just an optimization hint and can't change program behavior.
            result(unsafe { PrefetchVirtualMemory(GetCurrentProcess(), 1, &entry, 0) } != 0)
        }
    }

    #[cfg(target_vendor = "win7")]
    pub fn prefault(_ptr: *mut c_void, _size: usize) -> Result {
        // Prefaulting is just an optimization hint and can't change program behavior.
        Ok(())
    }

    pub unsafe fn unreserve(ptr: *mut c_void, _size: usize) {
        unsafe { VirtualFree(ptr, 0, MEM_RELEASE) };
    }

    pub fn page_size() -> usize {
        // SAFETY: `SYSTEM_INFO` is composed only of primitive types.
        let mut system_info = unsafe { mem::zeroed() };

        // SAFETY: The pointer points to a valid memory location above.
        unsafe { GetSystemInfo(&mut system_info) };

        usize::try_from(system_info.dwPageSize).unwrap()
    }

    fn result(condition: bool) -> Result {
        if condition {
            Ok(())
        } else {
            Err(super::Error { code: errno() })
        }
    }

    fn errno() -> i32 {
        unsafe { GetLastError() }.cast_signed()
    }

    pub fn format_error(errnum: i32, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        const BUF_LEN: u32 = 2048;

        let mut errnum = errnum.cast_unsigned();
        let mut buf = [0u16; BUF_LEN as usize];
        let mut module = ptr::null_mut();
        let mut flags = 0;

        // NTSTATUS errors may be encoded as HRESULT, which may be returned from
        // GetLastError. For more information about Windows error codes, see
        // `[MS-ERREF]`: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/0642cb2f-2075-4469-918c-4441e69c548a
        if (errnum & FACILITY_NT_BIT) != 0 {
            // Format according to https://support.microsoft.com/en-us/help/259693
            const NTDLL_DLL: &[u16] = &[
                'N' as _, 'T' as _, 'D' as _, 'L' as _, 'L' as _, '.' as _, 'D' as _, 'L' as _,
                'L' as _, 0,
            ];

            module = unsafe { GetModuleHandleW(NTDLL_DLL.as_ptr()) };

            if !module.is_null() {
                errnum ^= FACILITY_NT_BIT;
                flags = FORMAT_MESSAGE_FROM_HMODULE;
            }
        }

        let res = unsafe {
            FormatMessageW(
                flags | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                module,
                errnum,
                0,
                buf.as_mut_ptr(),
                BUF_LEN,
                ptr::null(),
            ) as usize
        };

        if res == 0 {
            // Sometimes FormatMessageW can fail, e.g. if the system doesn't like 0 as the langId.
            let fm_err = errno();
            return write!(
                f,
                "OS Error {errnum} (FormatMessageW() returned error {fm_err})",
            );
        }

        let mut output_len = 0;
        let mut output = [0u8; BUF_LEN as usize];

        for c in char::decode_utf16(buf[..res].iter().copied()) {
            let Ok(c) = c else {
                return write!(
                    f,
                    "OS Error {errnum} (FormatMessageW() returned invalid UTF-16)",
                );
            };

            let len = c.len_utf8();

            if len > output.len() - output_len {
                break;
            }

            c.encode_utf8(&mut output[output_len..]);
            output_len += len;
        }

        // SAFETY: The `encode_utf8` calls above were used to encode valid UTF-8.
        let s = unsafe { str::from_utf8_unchecked(&output[..output_len]) };

        f.write_str(s)
    }
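The UTF-16 to UTF-8 re-encoding loop above, which bails out on the first invalid code unit and stops early when the fixed output buffer would overflow, can be sketched standalone. The `utf16_to_utf8` helper is a hypothetical name introduced here for illustration:

```rust
// Decode UTF-16 code units and pack the characters into a fixed-size UTF-8
// buffer, returning the number of bytes written, or an error on invalid input.
fn utf16_to_utf8(input: &[u16], output: &mut [u8]) -> Result<usize, ()> {
    let mut output_len = 0;

    for c in char::decode_utf16(input.iter().copied()) {
        let Ok(c) = c else {
            return Err(()); // unpaired surrogate
        };

        let len = c.len_utf8();

        if len > output.len() - output_len {
            break; // output buffer full; truncate at a character boundary
        }

        c.encode_utf8(&mut output[output_len..]);
        output_len += len;
    }

    Ok(output_len)
}

fn main() {
    let mut buf = [0u8; 16];

    // "hé" is [0x0068, 0x00E9] in UTF-16 and three bytes in UTF-8.
    let len = utf16_to_utf8(&[0x0068, 0x00E9], &mut buf).unwrap();
    assert_eq!(&buf[..len], "hé".as_bytes());

    // A lone surrogate is invalid UTF-16.
    assert!(utf16_to_utf8(&[0xD800], &mut buf).is_err());
}
```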

    windows_targets::link!("kernel32.dll" "system" fn GetSystemInfo(
        lpSystemInfo: *mut SYSTEM_INFO,
    ));

    windows_targets::link!("kernel32.dll" "system" fn VirtualAlloc(
        lpAddress: *mut c_void,
        dwSize: usize,
        flAllocationType: u32,
        flProtect: u32,
    ) -> *mut c_void);

    windows_targets::link!("kernel32.dll" "system" fn VirtualFree(
        lpAddress: *mut c_void,
        dwSize: usize,
        dwFreeType: u32,
    ) -> i32);

    #[cfg(not(target_vendor = "win7"))]
    windows_targets::link!("kernel32.dll" "system" fn GetCurrentProcess() -> HANDLE);

    #[cfg(not(target_vendor = "win7"))]
    windows_targets::link!("kernel32.dll" "system" fn PrefetchVirtualMemory(
        hProcess: HANDLE,
        NumberOfEntries: usize,
        VirtualAddresses: *const WIN32_MEMORY_RANGE_ENTRY,
        Flags: u32,
    ) -> i32);

    windows_targets::link!("kernel32.dll" "system" fn GetLastError() -> u32);

    windows_targets::link!("kernel32.dll" "system" fn FormatMessageW(
        dwFlags: u32,
        lpSource: *const c_void,
        dwMessageId: u32,
        dwLanguageId: u32,
        lpBuffer: *mut u16,
        nSize: u32,
        arguments: *const *const i8,
    ) -> u32);

    windows_targets::link!("kernel32.dll" "system" fn GetModuleHandleW(
        lpModuleName: *const u16,
    ) -> HMODULE);

    #[repr(C)]
    struct SYSTEM_INFO {
        wProcessorArchitecture: u16,
        wReserved: u16,
        dwPageSize: u32,
        lpMinimumApplicationAddress: *mut c_void,
        lpMaximumApplicationAddress: *mut c_void,
        dwActiveProcessorMask: usize,
        dwNumberOfProcessors: u32,
        dwProcessorType: u32,
        dwAllocationGranularity: u32,
        wProcessorLevel: u16,
        wProcessorRevision: u16,
    }

    const MEM_COMMIT: u32 = 1 << 12;
    const MEM_RESERVE: u32 = 1 << 13;
    const MEM_DECOMMIT: u32 = 1 << 14;
    const MEM_RELEASE: u32 = 1 << 15;

    const PAGE_NOACCESS: u32 = 1 << 0;
    const PAGE_READWRITE: u32 = 1 << 2;

    #[cfg(not(target_vendor = "win7"))]
    type HANDLE = isize;

    #[cfg(not(target_vendor = "win7"))]
    #[repr(C)]
    struct WIN32_MEMORY_RANGE_ENTRY {
        VirtualAddress: *mut c_void,
        NumberOfBytes: usize,
    }

    const FACILITY_NT_BIT: u32 = 1 << 28;

    const FORMAT_MESSAGE_FROM_HMODULE: u32 = 1 << 11;
    const FORMAT_MESSAGE_FROM_SYSTEM: u32 = 1 << 12;
    const FORMAT_MESSAGE_IGNORE_INSERTS: u32 = 1 << 9;

    type HMODULE = *mut c_void;
}