Struct aarch64_paging::idmap::IdMap

pub struct IdMap { /* private fields */ }
Manages a level 1 page table using identity mapping, where every virtual address is either unmapped or mapped to the identical IPA.
This assumes that identity mapping is used both for the page table being managed, and for code that is managing it.
Mappings should be added with map_range before calling activate to start using the new page table. To make changes which may require break-before-make semantics you must first call deactivate to switch back to a previous static page table, and then activate again after making the desired changes.
Example
use aarch64_paging::{
idmap::IdMap,
paging::{Attributes, MemoryRegion},
};
const ASID: usize = 1;
const ROOT_LEVEL: usize = 1;
// Create a new page table with identity mapping.
let mut idmap = IdMap::new(ASID, ROOT_LEVEL);
// Map a 2 MiB region of memory as read-write.
idmap.map_range(
&MemoryRegion::new(0x80200000, 0x80400000),
Attributes::NORMAL | Attributes::NON_GLOBAL | Attributes::VALID,
).unwrap();
// SAFETY: Everything the program uses is within the 2 MiB region mapped above.
unsafe {
// Set `TTBR0_EL1` to activate the page table.
idmap.activate();
}
// Write something to the memory...
// SAFETY: The program will only use memory within the initially mapped region until `idmap` is
// reactivated below.
unsafe {
// Restore `TTBR0_EL1` to its earlier value while we modify the page table.
idmap.deactivate();
}
// Now change the mapping to read-only and executable.
idmap.map_range(
&MemoryRegion::new(0x80200000, 0x80400000),
Attributes::NORMAL | Attributes::NON_GLOBAL | Attributes::READ_ONLY | Attributes::VALID,
).unwrap();
// SAFETY: Everything the program will use is mapped in by this page table.
unsafe {
idmap.activate();
}
Implementations

impl IdMap

pub fn new(asid: usize, rootlevel: usize) -> Self
Creates a new identity-mapping page table with the given ASID and root level.
pub unsafe fn activate(&mut self)

Activates the page table by setting TTBR0_EL1 to point to it, and saves the previous value of TTBR0_EL1 so that it may later be restored by deactivate.

Panics if a previous value of TTBR0_EL1 is already saved and not yet used by a call to deactivate.

In test builds or builds that do not target aarch64, the TTBR0_EL1 access is omitted.
Safety
The caller must ensure that the page table doesn’t unmap any memory which the program is using, or introduce aliases which break Rust’s aliasing rules. The page table must not be dropped as long as its mappings are required, as it will automatically be deactivated when it is dropped.
pub unsafe fn deactivate(&mut self)

Deactivates the page table, by setting TTBR0_EL1 back to the value it had before activate was called, and invalidating the TLB for this page table’s configured ASID.

Panics if there is no saved TTBR0_EL1 value because activate has not previously been called.

In test builds or builds that do not target aarch64, the TTBR0_EL1 access is omitted.
Safety
The caller must ensure that the previous page table which this is switching back to doesn’t unmap any memory which the program is using.
pub fn map_range(
    &mut self,
    range: &MemoryRegion,
    flags: Attributes
) -> Result<(), MapError>
Maps the given range of virtual addresses to the identical physical addresses with the given flags.
This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active. This function writes block and page entries, but only maps them if flags contains Attributes::VALID; otherwise the entries remain invalid.
Errors

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::InvalidFlags if the flags argument has unsupported attributes set.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.
pub fn map_range_with_constraints(
    &mut self,
    range: &MemoryRegion,
    flags: Attributes,
    constraints: Constraints
) -> Result<(), MapError>
Maps the given range of virtual addresses to the identical physical addresses with the given flags, taking the given constraints into account.
This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active. This function writes block and page entries, but only maps them if flags contains Attributes::VALID; otherwise the entries remain invalid.
Errors

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::InvalidFlags if the flags argument has unsupported attributes set.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.
pub fn modify_range<F>(
    &mut self,
    range: &MemoryRegion,
    f: &F
) -> Result<(), MapError>
where
    F: Fn(&MemoryRegion, &mut Descriptor, usize) -> Result<(), ()> + ?Sized,
Applies the provided updater function to the page table descriptors covering a given memory range.
This may involve splitting block entries if the provided range is not currently mapped down to its precise boundaries. For visiting all the descriptors covering a memory range without potential splitting (and no descriptor updates), use walk_range instead.
The updater function receives the following arguments:
- The virtual address range mapped by each page table descriptor. A new descriptor will have been allocated before the invocation of the updater function if a page table split was needed.
- A mutable reference to the page table descriptor that permits modifications.
- The level of the translation table the descriptor belongs to.
The updater function should return:
- Ok to continue updating the remaining entries.
- Err to signal an error and stop updating the remaining entries.
This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active.
Errors

Returns MapError::PteUpdateFault if the updater function returns an error.

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.
pub fn walk_range<F>(
    &self,
    range: &MemoryRegion,
    f: &mut F
) -> Result<(), MapError>
where
    F: FnMut(&MemoryRegion, &Descriptor, usize) -> Result<(), ()>,
Applies the provided callback function to the page table descriptors covering a given memory range.
The callback function receives the following arguments:
- The full virtual address range mapped by each visited page table descriptor, which may exceed the original range passed to walk_range, due to alignment to block boundaries.
- The page table descriptor itself.
- The level of the translation table the descriptor belongs to.
The callback function should return:
- Ok to continue visiting the remaining entries.
- Err to signal an error and stop visiting the remaining entries.
Errors

Returns MapError::PteUpdateFault if the callback function returns an error.

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.