Struct aarch64_paging::idmap::IdMap

pub struct IdMap { /* private fields */ }

Manages a level 1 page table using identity mapping, where every virtual address is either unmapped or mapped to the identical IPA.

This assumes that identity mapping is used both for the page table being managed, and for code that is managing it.

Mappings should be added with map_range before calling activate to start using the new page table. To make changes which may require break-before-make semantics you must first call deactivate to switch back to a previous static page table, and then activate again after making the desired changes.

Example

use aarch64_paging::{
    idmap::IdMap,
    paging::{Attributes, MemoryRegion},
};

const ASID: usize = 1;
const ROOT_LEVEL: usize = 1;

// Create a new page table with identity mapping.
let mut idmap = IdMap::new(ASID, ROOT_LEVEL);
// Map a 2 MiB region of memory as read-write.
idmap.map_range(
    &MemoryRegion::new(0x80200000, 0x80400000),
    Attributes::NORMAL | Attributes::NON_GLOBAL | Attributes::VALID,
).unwrap();
// SAFETY: Everything the program uses is within the 2 MiB region mapped above.
unsafe {
    // Set `TTBR0_EL1` to activate the page table.
    idmap.activate();
}

// Write something to the memory...

// SAFETY: The program will only use memory within the initially mapped region until `idmap` is
// reactivated below.
unsafe {
    // Restore `TTBR0_EL1` to its earlier value while we modify the page table.
    idmap.deactivate();
}
// Now change the mapping to read-only and executable.
idmap.map_range(
    &MemoryRegion::new(0x80200000, 0x80400000),
    Attributes::NORMAL | Attributes::NON_GLOBAL | Attributes::READ_ONLY | Attributes::VALID,
).unwrap();
// SAFETY: Everything the program will use is mapped in by this page table.
unsafe {
    idmap.activate();
}

Implementations§

impl IdMap

pub fn new(asid: usize, rootlevel: usize) -> Self

Creates a new identity-mapping page table with the given ASID and root level.

pub unsafe fn activate(&mut self)

Activates the page table by setting TTBR0_EL1 to point to it, and saves the previous value of TTBR0_EL1 so that it may later be restored by deactivate.

Panics if a previous value of TTBR0_EL1 is already saved and has not yet been restored by a call to deactivate.

In test builds or builds that do not target aarch64, the TTBR0_EL1 access is omitted.

Safety

The caller must ensure that the page table doesn’t unmap any memory which the program is using, or introduce aliases which break Rust’s aliasing rules. The page table must not be dropped as long as its mappings are required, as it will automatically be deactivated when it is dropped.

pub unsafe fn deactivate(&mut self)

Deactivates the page table, by setting TTBR0_EL1 back to the value it had before activate was called, and invalidating the TLB for this page table’s configured ASID.

Panics if there is no saved TTBR0_EL1 value because activate has not previously been called.

In test builds or builds that do not target aarch64, the TTBR0_EL1 access is omitted.

Safety

The caller must ensure that the previous page table which this is switching back to doesn’t unmap any memory which the program is using.

pub fn map_range(&mut self, range: &MemoryRegion, flags: Attributes) -> Result<(), MapError>

Maps the given range of virtual addresses to the identical physical addresses with the given flags.

This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active. This function writes block and page entries, but only maps them if flags contains Attributes::VALID, otherwise the entries remain invalid.

Errors

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::InvalidFlags if the flags argument has unsupported attributes set.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.

pub fn map_range_with_constraints(&mut self, range: &MemoryRegion, flags: Attributes, constraints: Constraints) -> Result<(), MapError>

Maps the given range of virtual addresses to the identical physical addresses with the given flags, taking the given constraints into account.

This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active. This function writes block and page entries, but only maps them if flags contains Attributes::VALID, otherwise the entries remain invalid.

Errors

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::InvalidFlags if the flags argument has unsupported attributes set.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.

pub fn modify_range<F>(&mut self, range: &MemoryRegion, f: &F) -> Result<(), MapError> where F: Fn(&MemoryRegion, &mut Descriptor, usize) -> Result<(), ()> + ?Sized

Applies the provided updater function to the page table descriptors covering a given memory range.

This may involve splitting block entries if the provided range is not currently mapped down to its precise boundaries. For visiting all the descriptors covering a memory range without potential splitting (and no descriptor updates), use walk_range instead.

The updater function receives the following arguments:

  • The virtual address range mapped by each page table descriptor. A new descriptor will have been allocated before the invocation of the updater function if a page table split was needed.
  • A mutable reference to the page table descriptor that permits modifications.
  • The level of the translation table that the descriptor belongs to.

The updater function should return:

  • Ok to continue updating the remaining entries.
  • Err to signal an error and stop updating the remaining entries.

This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active.

Errors

Returns MapError::PteUpdateFault if the updater function returns an error.

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.

pub fn walk_range<F>(&self, range: &MemoryRegion, f: &mut F) -> Result<(), MapError> where F: FnMut(&MemoryRegion, &Descriptor, usize) -> Result<(), ()>

Applies the provided callback function to the page table descriptors covering a given memory range.

The callback function receives the following arguments:

  • The full virtual address range mapped by each visited page table descriptor, which may exceed the original range passed to walk_range, due to alignment to block boundaries.
  • The page table descriptor itself.
  • The level of the translation table that the descriptor belongs to.

The callback function should return:

  • Ok to continue visiting the remaining entries.
  • Err to signal an error and stop visiting the remaining entries.

Errors

Returns MapError::PteUpdateFault if the callback function returns an error.

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Trait Implementations§

impl Debug for IdMap

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

Auto Trait Implementations§

impl RefUnwindSafe for IdMap

impl Send for IdMap

impl !Sync for IdMap

impl Unpin for IdMap

impl UnwindSafe for IdMap

Blanket Implementations§

impl<T> Any for T where T: 'static + ?Sized

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T where T: ?Sized

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T where U: From<T>

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T where U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.