Struct aarch64_paging::Mapping
pub struct Mapping<T: Translation + Clone> { /* private fields */ }
Manages a level 1 page table and associated state.
Mappings should be added with map_range before calling activate to start using the new page table. To make changes which may require break-before-make semantics you must first call deactivate to switch back to a previous static page table, and then activate again after making the desired changes.
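The call ordering described above can be modeled on the host. The sketch below is a minimal stand-in, not the crate's real implementation: `MappingModel` is a hypothetical type that only tracks activation state, to illustrate the documented discipline (map while inactive, activate, deactivate before break-before-make changes). It does not touch real page tables or registers.

```rust
// Hypothetical stand-in for the documented Mapping lifecycle; it tracks only
// activation state, not real page tables or TTBRn_EL1.
#[derive(Debug, PartialEq)]
enum LifecycleError {
    /// Modeled after the rule that BBM-relevant changes need an inactive table.
    TableActive,
}

struct MappingModel {
    active: bool,
    mapped_ranges: Vec<(usize, usize)>, // (start, end) virtual address ranges
}

impl MappingModel {
    fn new() -> Self {
        Self { active: false, mapped_ranges: Vec::new() }
    }

    /// Like `map_range`: changes that may need break-before-make must happen
    /// while inactive. Mapping a fresh range while active would actually be
    /// allowed, but this simple model rejects all changes to an active table.
    fn map_range(&mut self, start: usize, end: usize) -> Result<(), LifecycleError> {
        if self.active {
            return Err(LifecycleError::TableActive);
        }
        self.mapped_ranges.push((start, end));
        Ok(())
    }

    /// Like `activate`/`deactivate`, minus the actual register writes.
    fn activate(&mut self) {
        self.active = true;
    }
    fn deactivate(&mut self) {
        self.active = false;
    }
}

fn main() {
    let mut mapping = MappingModel::new();
    // Map before activating, as the documentation requires.
    mapping.map_range(0x8000_0000, 0x8020_0000).unwrap();
    mapping.activate();
    // A change while active is rejected by this model.
    assert_eq!(
        mapping.map_range(0x9000_0000, 0x9010_0000),
        Err(LifecycleError::TableActive)
    );
    // Deactivate first, make the change, then activate again.
    mapping.deactivate();
    mapping.map_range(0x9000_0000, 0x9010_0000).unwrap();
    mapping.activate();
    assert_eq!(mapping.mapped_ranges.len(), 2);
    println!("lifecycle ok");
}
```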
Implementations

impl<T: Translation + Clone> Mapping<T>
pub fn new(
    translation: T,
    asid: usize,
    rootlevel: usize,
    va_range: VaRange
) -> Self
Creates a new page table with the given ASID, root level and translation mapping.
pub unsafe fn activate(&mut self)
Activates the page table by setting TTBRn_EL1 to point to it, and saves the previous value of TTBRn_EL1 so that it may later be restored by deactivate.

Panics if a previous value of TTBRn_EL1 is already saved and not yet used by a call to deactivate.

In test builds or builds that do not target aarch64, the TTBRn_EL1 access is omitted.
Safety
The caller must ensure that the page table doesn’t unmap any memory which the program is using, or introduce aliases which break Rust’s aliasing rules. The page table must not be dropped as long as its mappings are required, as it will automatically be deactivated when it is dropped.
pub unsafe fn deactivate(&mut self)
Deactivates the page table, by setting TTBRn_EL1 back to the value it had before activate was called, and invalidating the TLB for this page table’s configured ASID.

Panics if there is no saved TTBRn_EL1 value because activate has not previously been called.

In test builds or builds that do not target aarch64, the TTBRn_EL1 access is omitted.
Safety
The caller must ensure that the previous page table which this is switching back to doesn’t unmap any memory which the program is using.
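The panic conditions documented for activate and deactivate follow a simple save/restore discipline. The sketch below models that contract with a hypothetical `TtbrSlot` type; the real code saves and restores TTBRn_EL1 and invalidates the TLB by ASID, which this model replaces with plain integers.

```rust
// Hypothetical model of the saved-TTBR contract: `activate` must not be
// called twice without an intervening `deactivate`, and `deactivate`
// requires a previously saved value. Real code would write TTBRn_EL1 and
// invalidate TLB entries for the mapping's ASID; this just moves integers.
struct TtbrSlot {
    previous_ttbr: Option<u64>,
    current_ttbr: u64,
}

impl TtbrSlot {
    fn new(initial_ttbr: u64) -> Self {
        Self { previous_ttbr: None, current_ttbr: initial_ttbr }
    }

    fn activate(&mut self, new_root: u64) {
        // Mirrors the documented panic: a saved value that deactivate has
        // not yet consumed means activate was called twice in a row.
        assert!(
            self.previous_ttbr.is_none(),
            "activate called with a saved TTBR value still pending"
        );
        self.previous_ttbr = Some(self.current_ttbr);
        self.current_ttbr = new_root;
    }

    fn deactivate(&mut self) {
        // Mirrors the documented panic: nothing to restore if activate was
        // never called.
        let saved = self
            .previous_ttbr
            .take()
            .expect("deactivate called without a prior activate");
        self.current_ttbr = saved;
        // Real code would also invalidate the TLB for this ASID here.
    }
}

fn main() {
    let mut slot = TtbrSlot::new(0x4000);
    slot.activate(0x8000);
    assert_eq!(slot.current_ttbr, 0x8000);
    slot.deactivate();
    // The original root is restored once deactivate consumes the saved value.
    assert_eq!(slot.current_ttbr, 0x4000);
    println!("save/restore ok");
}
```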
pub fn map_range(
    &mut self,
    range: &MemoryRegion,
    pa: PhysicalAddress,
    flags: Attributes,
    constraints: Constraints
) -> Result<(), MapError>
Maps the given range of virtual addresses to the corresponding range of physical addresses starting at pa, with the given flags, taking the given constraints into account.

This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active. This function writes block and page entries, but only maps them if flags contains Attributes::VALID; otherwise the entries remain invalid.
Errors

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::InvalidFlags if the flags argument has unsupported attributes set.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.
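The first two error cases can be checked with plain address arithmetic. The sketch below assumes a 4 KiB translation granule, where a table rooted at level N covers 48 − 9·N bits of virtual address space (so a level 1 root covers 39 bits, i.e. 512 GiB); the function and error names are illustrative stand-ins, not the crate's.

```rust
// Illustrative range checks mirroring MapError::RegionBackwards and
// MapError::AddressRange. Assumes a 4 KiB granule, where a root table at
// level `root_level` covers 48 - 9 * root_level bits of virtual address.
#[derive(Debug, PartialEq)]
enum RangeError {
    RegionBackwards,
    AddressRange,
}

fn check_range(start: u64, end: u64, root_level: u64) -> Result<(), RangeError> {
    if end < start {
        return Err(RangeError::RegionBackwards);
    }
    let va_bits = 48 - 9 * root_level;
    let max_va = 1u64 << va_bits; // exclusive upper bound of the covered space
    if end > max_va {
        return Err(RangeError::AddressRange);
    }
    Ok(())
}

fn main() {
    // A level 1 root covers 2^39 bytes (512 GiB).
    assert_eq!(check_range(0x1000, 0x2000, 1), Ok(()));
    assert_eq!(check_range(0x2000, 0x1000, 1), Err(RangeError::RegionBackwards));
    assert_eq!(check_range(0, 1 << 40, 1), Err(RangeError::AddressRange));
    println!("range checks ok");
}
```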
pub fn modify_range<F>(
    &mut self,
    range: &MemoryRegion,
    f: &F
) -> Result<(), MapError>
where
    F: Fn(&MemoryRegion, &mut Descriptor, usize) -> Result<(), ()> + ?Sized,
Applies the provided updater function to a number of PTEs corresponding to a given memory range.

This may involve splitting block entries if the provided range is not currently mapped down to its precise boundaries. For visiting all the descriptors covering a memory range without potential splitting (and no descriptor updates), use walk_range instead.
This should generally only be called while the page table is not active. In particular, any change that may require break-before-make per the architecture must be made while the page table is inactive. Mapping a previously unmapped memory range may be done while the page table is active.
Errors

Returns MapError::PteUpdateFault if the updater function returns an error.

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.

Returns MapError::BreakBeforeMakeViolation if the range intersects with live mappings, and modifying those would violate architectural break-before-make (BBM) requirements.
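The updater signature, Fn(&MemoryRegion, &mut Descriptor, usize) -> Result<(), ()>, lets one closure update every descriptor in a range and abort the walk by returning Err(()). The sketch below imitates that shape with a descriptor modeled as a bare u64; the read-only bit position (AP[2], bit 7 in a stage 1 descriptor) and the error type are assumptions for illustration, not the crate's types.

```rust
// Descriptor modeled as a raw u64; in a stage 1 descriptor the AP[2] bit
// (bit 7) makes a mapping read-only — treat that as an assumption here.
const READ_ONLY_BIT: u64 = 1 << 7;

#[derive(Debug, PartialEq)]
enum UpdateError {
    /// Stands in for MapError::PteUpdateFault.
    PteUpdateFault,
}

/// Applies `f` to each descriptor, stopping at the first Err(()) the way
/// modify_range is documented to surface updater failures.
fn apply_updater<F>(descriptors: &mut [u64], f: &F) -> Result<(), UpdateError>
where
    F: Fn(&mut u64) -> Result<(), ()> + ?Sized,
{
    for d in descriptors.iter_mut() {
        f(d).map_err(|()| UpdateError::PteUpdateFault)?;
    }
    Ok(())
}

fn main() {
    let mut table = [0x701u64, 0x703, 0x705];
    // Mark every descriptor read-only.
    apply_updater(&mut table, &|d: &mut u64| {
        *d |= READ_ONLY_BIT;
        Ok(())
    })
    .unwrap();
    assert!(table.iter().all(|d| d & READ_ONLY_BIT != 0));

    // An updater that rejects a descriptor aborts with the fault error,
    // like MapError::PteUpdateFault. Bit 0 clear means "invalid" here.
    let mut partly_invalid = [0x701u64, 0x700];
    let result = apply_updater(&mut partly_invalid, &|d: &mut u64| {
        if *d & 1 == 1 { Ok(()) } else { Err(()) }
    });
    assert_eq!(result, Err(UpdateError::PteUpdateFault));
    println!("updater ok");
}
```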
pub fn walk_range<F>(
    &self,
    range: &MemoryRegion,
    f: &mut F
) -> Result<(), MapError>
where
    F: FnMut(&MemoryRegion, &Descriptor, usize) -> Result<(), ()>,
Applies the provided function to a number of PTEs corresponding to a given memory range.

The virtual address range passed to the callback function may be expanded compared to the range parameter, due to alignment to block boundaries.
Errors

Returns MapError::PteUpdateFault if the callback function returns an error.

Returns MapError::RegionBackwards if the range is backwards.

Returns MapError::AddressRange if the largest address in the range is greater than the largest virtual address covered by the page table given its root level.
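The boundary expansion mentioned above can be illustrated with alignment arithmetic. Assuming a 4 KiB granule, a level 2 block covers 2 MiB; a callback visiting such a block sees the whole 2 MiB region even if the requested range covers only part of it. The helper below is a sketch under that assumption, not the crate's code.

```rust
// Expands a requested range outward to the enclosing block boundaries, the
// way walk_range's callback may see a wider region than was asked for.
// Assumes a 2 MiB block size (a level 2 block with a 4 KiB granule).
const BLOCK_SIZE: u64 = 2 * 1024 * 1024;

fn expand_to_blocks(start: u64, end: u64) -> (u64, u64) {
    let aligned_start = start & !(BLOCK_SIZE - 1); // round down
    let aligned_end = (end + BLOCK_SIZE - 1) & !(BLOCK_SIZE - 1); // round up
    (aligned_start, aligned_end)
}

fn main() {
    // A 4 KiB request in the middle of a 2 MiB block expands to the block.
    let (s, e) = expand_to_blocks(0x20_1000, 0x20_2000);
    assert_eq!(s, 0x20_0000);
    assert_eq!(e, 0x40_0000);
    // An already block-aligned range is unchanged.
    assert_eq!(expand_to_blocks(0x40_0000, 0x80_0000), (0x40_0000, 0x80_0000));
    println!("expansion ok");
}
```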