Struct hyperpom::memory::PageTableManager
pub struct PageTableManager { /* private fields */ }
Implements the paging model that allows mapping virtual addresses to physical ones.
Role of the Page Table Manager in the Fuzzer
Using a unique virtual address space for each guest gives us better control over the memory accessible to them and also prevents inadvertent accesses to each other's memory while fuzzing (e.g. an OOB access that goes undetected because it landed on a page allocated for another guest). But to create these virtual address spaces, we must use translation tables that map virtual addresses to physical ones.
Page Tables Implementation
Addressable Virtual Memory
When we're fuzzing a userland application, even though we're only testing non-privileged code, there are still some privileged operations that need to take place: cache maintenance, exception handling, etc. Handling these operations requires having dedicated code available at fixed addresses in memory, and we need to make sure that it doesn't collide with the program's address ranges.
To solve this problem, based on the assumption that most userland binaries expect to be mapped at lower addresses, this fuzzer splits a guest address space into two virtual address ranges.
- The lower address range, for non-privileged mappings. It is translated using TTBR0_EL1 and spans from 0x0000_0000_0000_0000 to 0x0000_ffff_ffff_ffff by setting TCR_EL1.T0SZ to 16.
- The upper address range, for privileged mappings. It is translated using TTBR1_EL1 and spans from 0xffff_0000_0000_0000 to 0xffff_ffff_ffff_ffff by setting TCR_EL1.T1SZ to 16.
0xffff_ffff_ffff_ffff +---------------------+
| |
| TTBR1_EL1 |
| REGION |
| |
0xffff_0000_0000_0000 +---------------------+ ----> TCR_EL1.T1SZ == 16
| ///////////////// |
| ///////////////// |
| ///////////////// |
| |
| ACCESSES GENERATE |
| TRANSLATION FAULT |
| |
| ///////////////// |
| ///////////////// |
| ///////////////// |
0x0000_ffff_ffff_ffff +---------------------+ ----> TCR_EL1.T0SZ == 16
| |
| TTBR0_EL1 |
| REGION |
| |
0x0000_0000_0000_0000 +---------------------+
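The split above can be sketched as a small classification helper. This is an illustrative snippet, not part of the hyperpom API; the constants simply encode the two 48-bit ranges from the diagram.

```rust
// Hypothetical helper illustrating the T0SZ/T1SZ == 16 split described
// above; the region names and this function are not hyperpom APIs.
const TTBR0_END: u64 = 0x0000_ffff_ffff_ffff;
const TTBR1_START: u64 = 0xffff_0000_0000_0000;

/// Returns which translation table base register covers `addr`, if any.
fn region(addr: u64) -> Option<&'static str> {
    match addr {
        0..=TTBR0_END => Some("TTBR0_EL1"),
        TTBR1_START..=u64::MAX => Some("TTBR1_EL1"),
        // Addresses in the gap generate a translation fault.
        _ => None,
    }
}

fn main() {
    assert_eq!(region(0xdead_beef_cafe), Some("TTBR0_EL1"));
    assert_eq!(region(0xffff_0000_1000_0000), Some("TTBR1_EL1"));
    assert_eq!(region(0x0001_0000_0000_0000), None);
    println!("ok");
}
```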
Note: While it’s possible to have privileged mappings in lower addresses and non-privileged in higher ones, keep in mind that some addresses in the upper virtual address range are reserved by the fuzzer. If you wish to map addresses in the upper VA, make sure they don’t overlap or alter existing mappings.
Paging Model
We'll use two separate page tables, one for each region: one referenced by TTBR0_EL1 and the other by TTBR1_EL1. But before we move on to the actual implementation, we need to determine the number of page table levels necessary based on our requirements. In the rest of this section, we'll explain the reasoning for the region covered by TTBR0_EL1, but the same applies to its counterpart.
One of our requirements is to have regions with a total addressable memory size of 0x0001_0000_0000_0000 bytes, which means that a virtual address in these regions is 48 bits long. The second requirement is that the granule size is 4KB.
With a 4KB granule size, the last 12 bits of the address are directly used as an offset into the corresponding physical page and they don’t need to be taken into account during the translation process. But we still need to determine how to split the remaining 36 bits.
Since the granule size is 4KB, page tables are also 4KB long. And because the descriptors we store in these tables are 8 bytes long, we can store at most 512 of them per table. Therefore, 9 address bits are resolved in one level of lookup. If you need more convincing, take the example of a last-level page table lookup starting at address 0. The 512 descriptors it contains span from the page corresponding to address 0 to the one corresponding to address 0x1ff000, with 0x1ff being 9 bits long.
All in all, if one level of lookup resolves 9 bits and we need to resolve 36 of them, it means that our page table should have 4 levels.
Input Address -> 48 bits
+--> Level 0: bits [47:39]
+--> Level 1: bits [38:30]
+--> Level 2: bits [29:21]
+--> Level 3: bits [20:12]
+--> Page offset: bits [11:0]
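The split above boils down to shifting the address by each level's starting bit and masking off 9 bits. A minimal sketch (the names here are illustrative, not hyperpom's):

```rust
// Shift amounts for levels 0 through 3 of the 48-bit/4KB-granule walk.
const LEVEL_SHIFTS: [u32; 4] = [39, 30, 21, 12];

/// Returns the four 9-bit table indices for a 48-bit virtual address.
fn table_indices(addr: u64) -> [u64; 4] {
    LEVEL_SHIFTS.map(|s| (addr >> s) & 0x1ff)
}

fn main() {
    // Matches the worked example for 0xdead_beef_cafe.
    assert_eq!(table_indices(0xdead_beef_cafe), [0x1bd, 0xb6, 0x1f7, 0xfc]);
    // The page offset is simply the low 12 bits.
    assert_eq!(0xdead_beef_cafe_u64 & 0xfff, 0xafe);
    println!("ok");
}
```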
To address these four levels in the fuzzer, we shamelessly stole Linux's naming convention:
- PageGlobalDirectory at level 0;
- PageUpperDirectory at level 1;
- PageMiddleDirectory at level 2;
- PageTable at level 3.
Each of these structures contains a SlabObject that points to the physical memory region holding the descriptors used during memory translation, as well as a hashmap providing a convenient mapping between a descriptor's index and the object it corresponds to (e.g. in a page upper directory, the hashmap maps indices to page middle directories).
We now need to figure out how to fill these objects to actually map a virtual address.
Mapping a Virtual Address
If we want to map, for example, a memory page at address 0xdead_beef_c000, we first extract the indices into the page table levels from the input virtual address:
Input Address -> 0xdead_beef_cafe
+--> Level 0: bits [47:39] = (0xdead_beef_cafe >> 39) & 0x1ff = 0x1bd
+--> Level 1: bits [38:30] = (0xdead_beef_cafe >> 30) & 0x1ff = 0xb6
+--> Level 2: bits [29:21] = (0xdead_beef_cafe >> 21) & 0x1ff = 0x1f7
+--> Level 3: bits [20:12] = (0xdead_beef_cafe >> 12) & 0x1ff = 0xfc
Then, we check whether the entries exist in the corresponding levels, starting with the page global directory:
- If an entry exists in the PageGlobalDirectory's hashmap for index 0x1bd, we get the corresponding PageUpperDirectory and continue.
- Otherwise, if the entry doesn't exist yet, we create a new PageUpperDirectory object, add the PUD descriptor to the physical memory page of the PageGlobalDirectory at index 0x1bd, and insert the PUD object into the PGD's hashmap.

We repeat this process for the PageUpperDirectory and PageMiddleDirectory.
When we reach the PageTable level, there should be no entry at index 0xfc; otherwise we return a MemoryError::AlreadyMapped error. We can now create a Page object and add it to the PageTable's hashmap, as well as its descriptor into the PT's memory page.
+-----------+
| TTBR0_EL1 |
+-----------+
|
|
v
+-----------------------+
| Page Global Directory |
+-----------------------+
|
+--> Index 0x000: [...]
•
•
• +----------------------+
+--> Index 0x1bd: | Page Upper Directory |
• +----------------------+
|
+--> Index 0x000: [...]
•
•
• +-----------------------+
+--> Index 0x0b6: | Page Middle Directory |
• +-----------------------+
|
+--> Index 0x000: [...]
•
•
• +------------+
+--> Index 0x1f7: | Page Table |
• +------------+
|
+--> Index 0x000: [...]
•
•
•
+--> Index 0x0fc: Page
•
The MMU can now use our page tables to resolve the physical page that corresponds to the virtual address 0xdead_beef_c000.
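The walk described above can be modeled with nested hashmaps. This is a heavily simplified sketch: the real PageTableManager also writes an 8-byte descriptor into the 4KB physical page backing each level, which is elided here, and the type and error names are illustrative.

```rust
use std::collections::HashMap;

// Simplified model of one page table level: a map from a 9-bit index to
// the next level down (descriptor writes into physical pages elided).
#[derive(Default)]
struct Table(HashMap<u64, Table>);

#[derive(Debug, PartialEq)]
enum MapError {
    AlreadyMapped,
}

fn map_page(pgd: &mut Table, addr: u64) -> Result<(), MapError> {
    let idx = |s: u32| (addr >> s) & 0x1ff;
    // Intermediate levels (PUD, PMD, PT) are created on demand.
    let pt = [39u32, 30, 21]
        .iter()
        .fold(pgd, |t, &s| t.0.entry(idx(s)).or_default());
    // The last level must not already contain an entry for this page.
    if pt.0.contains_key(&idx(12)) {
        return Err(MapError::AlreadyMapped);
    }
    pt.0.insert(idx(12), Table::default());
    Ok(())
}

fn main() {
    let mut pgd = Table::default();
    assert_eq!(map_page(&mut pgd, 0xdead_beef_cafe), Ok(()));
    // A second address in the same 4KB page maps to the same PT entry.
    assert_eq!(
        map_page(&mut pgd, 0xdead_beef_c123),
        Err(MapError::AlreadyMapped)
    );
    println!("ok");
}
```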
At this stage, even if we need a bit more abstraction to create a real virtual memory allocator that maps memory, performs read/write operations, etc., most of the heavy lifting is done by the PageTableManager.
You can refer to VirtMemAllocator
for more information about the virtual memory allocator
used by the fuzzer.
Handling Dirty Bits
Another useful feature that we want for our virtual memory management is the ability to detect pages that have been modified. This is especially important for a fuzzer because it allows us to restore only the pages that have been modified, thus reducing the downtime between iterations.
Revision v8.1 of the ARM architecture introduces hardware management of the dirty state, where a page descriptor is updated directly by the processor when the page is modified. However, this feature is not implemented on Apple Silicon chips, according to the ID_AA64MMFR1_EL1 register.
// Value read from the CPU
ID_AA64MMFR1_EL1 = 0x11212000
ID_AA64MMFR1_EL1[3:0] = 0b0000
  -> HAFDBS, bits [3:0]: Hardware updates to Access flag and Dirty state in
     translation tables.
  -> 0b0000: Hardware update of the Access flag and dirty state are not supported.
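The field decoding above is just a mask of the low four bits. A tiny sketch, using the value hardcoded from the dump above (the helper name is illustrative):

```rust
/// Extracts the HAFDBS field, bits [3:0] of ID_AA64MMFR1_EL1.
/// 0b0000 means hardware Access flag / dirty state updates are unsupported.
fn hafdbs(id_aa64mmfr1_el1: u64) -> u64 {
    id_aa64mmfr1_el1 & 0xf
}

fn main() {
    // Value read on Apple Silicon: feature not supported.
    assert_eq!(hafdbs(0x11212000), 0b0000);
    println!("ok");
}
```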
But since we still want this feature, we'll have to emulate it in software. To achieve this, we simply remap writable pages to read-only ones (using PageDescriptor::read_only) and store a copy of the original writable mapping descriptor.
When the page is written to for the first time, a data abort exception is raised. If the page descriptor currently in use differs from the saved one, it means that the page was remapped with read-only permissions for the purpose of detecting write accesses to it. In that case, the page is remapped with its original intended permissions; the fault handler then resumes execution at the faulting address and retries the access.
This time around, if an exception occurs again, we know it’s not related to the handling of dirty states, but an actual exception that needs to be propagated to the corresponding handler.
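The fault-handling logic above can be sketched as follows. This is a minimal model with made-up types (Descriptor, DirtyTracker): it only captures the compare-descriptors-then-restore decision, not the actual ARM descriptor format or exception plumbing.

```rust
use std::collections::HashMap;

// Illustrative stand-in for a page descriptor; not hyperpom's type.
#[derive(Clone, Copy, PartialEq)]
struct Descriptor {
    read_only: bool,
}

// Tracks per-page descriptors for software dirty-state emulation.
struct DirtyTracker {
    current: HashMap<u64, Descriptor>, // descriptor currently in use
    saved: HashMap<u64, Descriptor>,   // original writable descriptor
    dirty: Vec<u64>,                   // pages to restore between iterations
}

impl DirtyTracker {
    /// Remaps a writable page read-only and keeps the original descriptor.
    fn arm(&mut self, page: u64, desc: Descriptor) {
        self.saved.insert(page, desc);
        self.current.insert(page, Descriptor { read_only: true });
    }

    /// Called from the data abort handler on a write fault. Returns true
    /// if the fault was ours and the access should simply be retried.
    fn on_write_fault(&mut self, page: u64) -> bool {
        let cur = self.current.get(&page).copied();
        let orig = self.saved.get(&page).copied();
        match (cur, orig) {
            // Descriptors differ: this page was remapped read-only to
            // detect writes. Restore permissions and mark it dirty.
            (Some(cur), Some(orig)) if cur != orig => {
                self.current.insert(page, orig);
                self.dirty.push(page);
                true
            }
            // Descriptors match: a genuine fault, propagate it.
            _ => false,
        }
    }
}

fn main() {
    let mut t = DirtyTracker {
        current: HashMap::new(),
        saved: HashMap::new(),
        dirty: Vec::new(),
    };
    t.arm(0xdead_beef_c000, Descriptor { read_only: false });
    assert!(t.on_write_fault(0xdead_beef_c000)); // first write: handled
    assert!(!t.on_write_fault(0xdead_beef_c000)); // second fault: propagated
    assert_eq!(t.dirty, vec![0xdead_beef_c000]);
    println!("ok");
}
```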
Implementations
impl PageTableManager
pub fn new(pma: PhysMemAllocator) -> Result<Self>
Creates a new page table manager using pma as the physical memory page provider.
pub fn map(
    &mut self,
    addr: u64,
    size: usize,
    perms: MemPerms,
    privileged: bool
) -> Result<()>
Maps the virtual address range of size size starting at virtual address addr with permissions perms. privileged determines whether the mapping should be privileged or not (i.e. whether or not instructions running at EL0 can access it).
pub fn unmap(&mut self, addr: u64, size: usize) -> Result<()>
Unmaps the virtual address range of size size starting at address addr.
pub fn get_page_by_addr(&self, addr: u64) -> Result<Rc<RefCell<Page>>>
Finds a Page by its address and returns a reference to it.
pub fn del_entry(idx: usize, ents: &mut SlabObject) -> Result<()>
Removes the descriptor at index idx from the SlabObject ents that corresponds to a page table level.
Trait Implementations
impl Clone for PageTableManager
fn clone(&self) -> PageTableManager
fn clone_from(&mut self, source: &Self)