pub struct PageTable64<M: PagingMetaData, PTE: GenericPTE, H: PagingHandler> { /* private fields */ }
A generic page table struct for 64-bit platforms.
It also tracks all intermediate-level tables, which will be deallocated
when the PageTable64 itself is dropped.
Implementations
impl<M: PagingMetaData, PTE: GenericPTE, H: PagingHandler> PageTable64<M, PTE, H>
pub fn try_new() -> PagingResult<Self>
Creates a new page table instance, or returns an error.
It will allocate a new page for the root page table.
pub const fn root_paddr(&self) -> PhysAddr
Returns the physical address of the root page table.
pub fn map(
    &mut self,
    vaddr: M::VirtAddr,
    target: PhysAddr,
    page_size: PageSize,
    flags: MappingFlags,
) -> PagingResult<TlbFlush<M>>
Maps a virtual page to a physical frame with the given page_size
and mapping flags.
The virtual page starts at vaddr, and the physical frame starts at
target. If the addresses are not aligned to the page size, they will be
aligned down automatically.
Returns Err(PagingError::AlreadyMapped)
if the mapping is already present.
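The align-down behavior can be pictured with a minimal self-contained sketch. `align_down` below is a hypothetical helper written for illustration, not part of this crate's API; the page-size constants assume the usual x86_64/AArch64 sizes.

```rust
// Illustration only: how an address is aligned down to a page-size
// boundary, as `map` does when `vaddr` or `target` is unaligned.
// `align_down` is a hypothetical helper, not this crate's API.
fn align_down(addr: usize, page_size: usize) -> usize {
    // page_size is assumed to be a power of two (4K, 2M, 1G).
    addr & !(page_size - 1)
}

fn main() {
    const SIZE_4K: usize = 0x1000;
    const SIZE_2M: usize = 0x20_0000;
    // 0x1234 falls inside the 4K page starting at 0x1000.
    assert_eq!(align_down(0x1234, SIZE_4K), 0x1000);
    // 0x20_1000 falls inside the 2M page starting at 0x20_0000.
    assert_eq!(align_down(0x20_1000, SIZE_2M), 0x20_0000);
}
```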
pub fn remap(
    &mut self,
    vaddr: M::VirtAddr,
    paddr: PhysAddr,
    flags: MappingFlags,
) -> PagingResult<(PageSize, TlbFlush<M>)>
Remaps the mapping that starts at vaddr, updating both the physical
address and the flags.
Returns the page size of the mapping.
Returns Err(PagingError::NotMapped) if the
intermediate-level tables of the mapping are not present.
pub fn protect(
    &mut self,
    vaddr: M::VirtAddr,
    flags: MappingFlags,
) -> PagingResult<(PageSize, TlbFlush<M>)>
Updates the flags of the mapping that starts at vaddr.
Returns the page size of the mapping.
Returns Err(PagingError::NotMapped) if the
mapping is not present.
pub fn unmap(
    &mut self,
    vaddr: M::VirtAddr,
) -> PagingResult<(PhysAddr, PageSize, TlbFlush<M>)>
Unmaps the mapping that starts at vaddr.
Returns Err(PagingError::NotMapped) if the
mapping is not present.
pub fn query(
    &self,
    vaddr: M::VirtAddr,
) -> PagingResult<(PhysAddr, MappingFlags, PageSize)>
Queries the mapping that starts at vaddr.
Returns the physical address of the target frame, mapping flags, and the page size.
Returns Err(PagingError::NotMapped) if the
mapping is not present.
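Since the returned physical address is the base of the target frame, translating an arbitrary virtual address inside a (possibly huge) page means adding the in-page offset back. The sketch below is illustrative arithmetic, not this crate's API; `translate` and the concrete addresses are made up for the example.

```rust
// Illustration only: recovering the exact physical address for a
// virtual address inside a mapped page, given `query`'s results
// (frame base address and page size). `translate` is hypothetical.
fn translate(vaddr: usize, frame_base: usize, page_size: usize) -> usize {
    // Keep the offset within the page, then rebase it onto the frame.
    frame_base + (vaddr & (page_size - 1))
}

fn main() {
    const SIZE_2M: usize = 0x20_0000;
    // Suppose query reported frame base 0x8000_0000 with a 2M page size
    // for a virtual address whose in-page offset is 0x12_3456.
    let paddr = translate(0x0012_3456, 0x8000_0000, SIZE_2M);
    assert_eq!(paddr, 0x8012_3456);
}
```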
pub fn map_region(
    &mut self,
    vaddr: M::VirtAddr,
    get_paddr: impl Fn(M::VirtAddr) -> PhysAddr,
    size: usize,
    flags: MappingFlags,
    allow_huge: bool,
    flush_tlb_by_page: bool,
) -> PagingResult<TlbFlushAll<M>>
Maps a contiguous virtual memory region to a contiguous physical memory
region with the given mapping flags.
The virtual memory region starts at vaddr with size size; the physical
address of each page is obtained by calling get_paddr on its virtual
address. The addresses and size must be aligned to 4K, otherwise it
returns Err(PagingError::NotAligned).
When allow_huge is true, it will try to map the region with huge pages
if possible. Otherwise, it will map the region with 4K pages.
When flush_tlb_by_page is true, it will flush the TLB immediately after
mapping each page. Otherwise, the TLB flush should be handled by the caller.
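The general technique behind huge-page mapping can be sketched in a few lines: at each step, pick the largest page size that divides both the current virtual and physical addresses and still fits in the remaining region. This mirrors the documented behavior under typical x86_64/AArch64 page sizes; it is not this crate's actual implementation, and `pick_page_size` is a hypothetical helper.

```rust
// Sketch of huge-page selection when `allow_huge` is true: a candidate
// size is usable only if both addresses are aligned to it and the
// remaining region is at least that large. Illustrative, not crate code.
const SIZE_4K: usize = 0x1000;
const SIZE_2M: usize = 0x20_0000;
const SIZE_1G: usize = 0x4000_0000;

fn pick_page_size(vaddr: usize, paddr: usize, remaining: usize) -> usize {
    for &size in &[SIZE_1G, SIZE_2M] {
        if vaddr % size == 0 && paddr % size == 0 && remaining >= size {
            return size;
        }
    }
    SIZE_4K // always legal: map_region requires 4K alignment up front
}

fn main() {
    // 1G-aligned addresses with >= 1G remaining: use a 1G page.
    assert_eq!(pick_page_size(0x4000_0000, 0x8000_0000, 0x4000_0000), SIZE_1G);
    // Only 2M-aligned: fall back to a 2M page.
    assert_eq!(pick_page_size(0x20_0000, 0x40_0000, 0x40_0000), SIZE_2M);
    // Physical address only 4K-aligned: 4K pages only.
    assert_eq!(pick_page_size(0x20_0000, 0x1000, 0x40_0000), SIZE_4K);
}
```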
pub fn unmap_region(
    &mut self,
    vaddr: M::VirtAddr,
    size: usize,
    flush_tlb_by_page: bool,
) -> PagingResult<TlbFlushAll<M>>
Unmaps a contiguous virtual memory region.
The region must have been mapped previously with PageTable64::map_region,
or unexpected behaviors may occur. It can deal with huge pages automatically.
When flush_tlb_by_page is true, it will flush the TLB immediately after
unmapping each page. Otherwise, the TLB flush should be handled by the caller.
pub fn protect_region(
    &mut self,
    vaddr: M::VirtAddr,
    size: usize,
    flags: MappingFlags,
    flush_tlb_by_page: bool,
) -> PagingResult<TlbFlushAll<M>>
Updates mapping flags of a contiguous virtual memory region.
The region must have been mapped previously with PageTable64::map_region,
or unexpected behaviors may occur. It can deal with huge pages automatically.
When flush_tlb_by_page is true, it will flush the TLB immediately after
updating each page. Otherwise, the TLB flush should be handled by the caller.
pub fn walk<F>(
    &self,
    limit: usize,
    pre_func: Option<&F>,
    post_func: Option<&F>,
) -> PagingResult
where
    F: Fn(usize, usize, M::VirtAddr, &PTE),
Walks the page table recursively.
When reaching a page table entry, pre_func and post_func are called on
the entry if they are provided: pre_func before and post_func after
recursively walking the entry's next-level table. The maximum number of
entries enumerated in one table is limited by limit.
The arguments of *_func are:
- The current level (starts from 0): usize
- The index of the entry in the current-level table: usize
- The virtual address that is mapped to the entry: M::VirtAddr
- A reference to the entry: &PTE
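The pre/post ordering can be seen with a toy model. Below, nested vectors stand in for a two-level page table and `u64` values stand in for PTEs; `walk_toy` and its 2M/4K address arithmetic are illustrative inventions, not this crate's implementation. The point is the callback order: pre on an entry, then its children, then post.

```rust
// Toy model of the `walk` traversal: `pre` fires on an entry before its
// next-level table is walked, `post` after. Callback arguments mirror
// the documented (level, index, vaddr, entry) order.
fn walk_toy(
    table: &[Vec<u64>],
    pre: &dyn Fn(usize, usize, usize, u64),
    post: &dyn Fn(usize, usize, usize, u64),
) {
    for (i, next) in table.iter().enumerate() {
        let base = i << 21; // pretend each level-0 entry spans 2M
        pre(0, i, base, i as u64);
        for (j, &pte) in next.iter().enumerate() {
            let v = base | (j << 12); // each level-1 entry spans 4K
            pre(1, j, v, pte);
            post(1, j, v, pte);
        }
        post(0, i, base, i as u64);
    }
}

fn main() {
    use std::cell::RefCell;
    let log = RefCell::new(Vec::new());
    let table = vec![vec![0xA, 0xB]];
    walk_toy(
        &table,
        &|lvl, idx, _v, _e| log.borrow_mut().push(format!("pre{lvl}.{idx}")),
        &|lvl, idx, _v, _e| log.borrow_mut().push(format!("post{lvl}.{idx}")),
    );
    assert_eq!(
        log.borrow().join(" "),
        "pre0.0 pre1.0 post1.0 pre1.1 post1.1 post0.0"
    );
}
```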