#[repr(C)]
pub struct AtomicMatrix {
    pub id: Uuid,
    pub fl_bitmap: AtomicU32,
    pub sl_bitmaps: [AtomicU32; 32],
    pub matrix: [[AtomicU32; 8]; 32],
    pub mmap: MmapMut,
    pub sector_boundaries: [AtomicU32; 4],
    pub total_size: u32,
}
The structural core of the matrix.
It is a non-blocking, SHM-backed memory arena that uses a segmented TLSF (Two-Level Segregated Fit) inspired mapping for O(1) allocation, paired with custom Kinetic Coalescing logic.
§Memory Layout
The matrix is designed to be mapped directly into `/dev/shm`. It starts with a 16-byte `init_guard`, followed by the struct itself, and then the sectorized raw memory blocks.
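The layout described above can be sketched as follows. This is an illustration only, not the crate's actual code: the `align_up` helper and the 8-byte alignment are assumptions.

```rust
// Sketch of the SHM segment layout: a 16-byte init_guard, then the struct,
// then the raw data region. Helper names and alignment are hypothetical.
const INIT_GUARD: usize = 16;

/// Round `off` up to the next multiple of `align` (align must be a power of two).
fn align_up(off: usize, align: usize) -> usize {
    (off + align - 1) & !(align - 1)
}

/// Offset of the sectorized raw memory blocks within the segment.
fn data_offset(struct_size: usize) -> usize {
    align_up(INIT_GUARD + struct_size, 8)
}
```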
Implementations§
impl AtomicMatrix
pub fn bootstrap(id: Option<Uuid>, size: usize) -> Result<MatrixHandler, String>
The entry point of the matrix struct.
It initializes the SHM segment, binds to it, performs the initial formatting, prepares both the matrix and handler structs, and returns the high-level API to the caller.
§Params:
@id: The ID of a new or existing matrix (if it already exists, formatting is skipped and the call simply binds to it)
@size: The SHM allocation size
§Returns
The matrix handler API, or an error to be handled
pub fn allocate(
    &self,
    base_ptr: *const u8,
    size: u32,
) -> Result<RelativePtr<u8>, String>
Allocates a block in the matrix for the caller.
It acts as a greedy allocator: each call either gets a block allocated in the matrix or raises an OOM Contention flag. It achieves this by politely trying to claim a block for itself; if the CAS loop fails, it simply jumps to the next free block in the chain, yielding a lock-free allocation scheme.
Each allocation may retry up to 512 times to confirm the matrix is indeed out of memory before aborting the function.
§Params:
@base_ptr: The starting offset of the SHM mapping.
@size: The allocation size of the block
§Returns:
Either the relative pointer to the allocated block, or the OOM Contention flag.
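The claim-or-hop loop described above can be sketched with plain atomics. This is a simplified stand-in, not the crate's implementation: block states are reduced to a flat slice of `AtomicU32` flags, and the `FREE`/`CLAIMED` constants are hypothetical.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

const FREE: u32 = 0;
const CLAIMED: u32 = 1;
const MAX_RETRIES: usize = 512; // matches the documented retry budget

/// Walk the chain of block states, CAS-claiming the first FREE one.
/// On a lost CAS, hop to the next block instead of spinning on this one;
/// give up after MAX_RETRIES and report contention/OOM.
fn claim_block(blocks: &[AtomicU32]) -> Result<usize, &'static str> {
    let mut i = 0;
    for _ in 0..MAX_RETRIES {
        let idx = i % blocks.len();
        if blocks[idx]
            .compare_exchange(FREE, CLAIMED, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
        {
            return Ok(idx);
        }
        // CAS lost: jump to the next block in the chain.
        i += 1;
    }
    Err("OOM Contention")
}
```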
pub fn ack(&self, ptr: &RelativePtr<BlockHeader>, base_ptr: *const u8)
Acknowledges the freedom of a block and pushes it to the to_be_freed queue.
If the to_be_freed queue is full, it immediately drains the queue and coalesces every block present before pushing the newly acknowledged block. If there is space available, it simply pushes it and moves on.
§Params:
@ptr: The relative pointer of the block to acknowledge
@base_ptr: The offset from the start of the SHM segment.
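The push-or-drain behaviour can be sketched as a bounded queue. The `FreeQueue` type and its fields are hypothetical illustrations; the real queue lives in shared memory and the drain step would call `coalesce` on each block.

```rust
/// Simplified stand-in for the to_be_freed queue (names are hypothetical).
struct FreeQueue {
    pending: Vec<u32>, // block offsets awaiting coalescing
    cap: usize,        // queue capacity
}

impl FreeQueue {
    /// Push an acknowledged block; if the queue is full, drain and
    /// coalesce every pending block first, then push the new one.
    fn ack(&mut self, block: u32, mut coalesce: impl FnMut(u32)) {
        if self.pending.len() == self.cap {
            for b in self.pending.drain(..) {
                coalesce(b);
            }
        }
        self.pending.push(block);
    }
}
```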
pub fn coalesce(&self, ptr: &RelativePtr<BlockHeader>, base_ptr: *const u8)
Tries to merge neighbouring blocks to the left until the end of the matrix is reached or the neighbouring block is not ACKED/FREE.
This is the implementation of the Kinetic Coalescence process. It receives the initial block that starts the ripple and traverses the matrix to the left (monotonicity guard). If any race condition is met along the way (another coalescing just started, or a module just claimed the block), it stops the coalescing and moves on (permissive healing).
It then tries to update the next neighbour's previous-physical-offset metadata to point at the start of the new free block. If this exchange fails, due to the end of the sector or a just-claimed block, it skips the marking in the hope that when that block is eventually coalesced, it will passively merge backwards into the ripple and fix the marking on its own header (horizon boundary).
Together, these three mechanisms compose the Propagation Principle of Atomic Coalescence and enable the matrix's high throughput.
§Params:
@ptr: The relative pointer of the block to coalesce.
@base_ptr: The offset from the start of the SHM segment.
§Throws:
TidalRippleContentionError: Two coalescing ripples executing simultaneously on the same blocks.
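The leftward ripple with its stop conditions can be sketched single-threaded (the real version uses atomic exchanges on block headers). The `State` enum, the parallel `sizes` slice, and the merge direction are simplifying assumptions.

```rust
/// Simplified block states (hypothetical; real headers are atomic words).
#[derive(Clone, Copy, PartialEq, Debug)]
enum State { Free, Acked, Claimed }

/// Starting at `start`, absorb left neighbours while they are ACKED or FREE,
/// stopping at the matrix edge or a claimed block (permissive healing would
/// also stop here on a lost CAS). Returns the index of the merged block.
fn ripple_left(states: &mut [State], sizes: &mut [u32], start: usize) -> usize {
    let mut cur = start;
    while cur > 0 && matches!(states[cur - 1], State::Acked | State::Free) {
        // Merge cur into cur-1: the left block grows, the current one empties.
        sizes[cur - 1] += sizes[cur];
        sizes[cur] = 0;
        states[cur - 1] = State::Free;
        cur -= 1;
    }
    cur
}
```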
pub fn query(&self, offset: u32) -> RelativePtr<u8>
pub fn find_suitable_block(&self, fl: u32, sl: u32) -> Option<(u32, u32)>
Queries the TLSF bitmaps in search of a block.
It takes the first most-suitable index flag (according to the find_indices function) and performs a bitwise check for an available block. On a match, it returns the FL coordinate together with the CTZ result from the SL bitmap. If there is no match, it performs CTZ on the first level to return the first available coordinate.
§Params:
@fl: Calculated first level coordinate
@sl: Calculated second level coordinate
§Returns:
A tuple containing the FL/SL coordinates or nothing if there is no space available in the matrix.
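The two-level probe follows the classic TLSF search pattern; a sketch under that assumption (bitmap widths and masking details may differ from the crate's actual code):

```rust
/// Probe the requested (fl, sl) bucket; on a miss, CTZ-scan strictly
/// larger first-level classes. Returns None if no suitable block exists.
fn find_suitable(fl_bitmap: u32, sl_bitmaps: &[u32; 32], fl: u32, sl: u32) -> Option<(u32, u32)> {
    // Mask off second-level entries below the requested size class.
    let sl_map = sl_bitmaps[fl as usize] & (!0u32).checked_shl(sl).unwrap_or(0);
    if sl_map != 0 {
        return Some((fl, sl_map.trailing_zeros()));
    }
    // Nothing in this row: look only at strictly larger first-level classes.
    let fl_map = fl_bitmap & (!0u32).checked_shl(fl + 1).unwrap_or(0);
    if fl_map == 0 {
        return None; // no free block large enough anywhere in the matrix
    }
    let fl2 = fl_map.trailing_zeros();
    Some((fl2, sl_bitmaps[fl2 as usize].trailing_zeros()))
}
```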
pub fn remove_free_block(
    &self,
    base_ptr: *const u8,
    fl: u32,
    sl: u32,
) -> Result<u32, String>
Pops a free block from the TLSF bitmap.
It atomically tries to claim ownership of the header inside the map. If successful, it swaps the current head for the next free head in the chain, or 0 if there is none. If it fails, it assumes someone else claimed the buffer first and issues a hint::spin_loop instruction before retrying to claim a head. If, in one of the iterations, the bucket returns 0, it breaks out of the function with an error.
§Params:
@base_ptr: The offset from the start of the SHM segment
@fl: First level coordinates of the bucket
@sl: Second level coordinates of the head.
§Returns
A result containing either the head of the newly acquired block, or an EmptyBitmapError
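The pop described above is essentially a lock-free stack pop; a sketch under that assumption, where `next_of` stands in for reading the next-link out of a block header:

```rust
use std::hint;
use std::sync::atomic::{AtomicU32, Ordering};

/// CAS the bucket head to its next link. A head of 0 means the bucket is
/// empty; on a lost CAS, spin briefly and retry.
fn pop_head(
    head: &AtomicU32,
    next_of: impl Fn(u32) -> u32, // hypothetical: reads the header's next link
) -> Result<u32, &'static str> {
    loop {
        let cur = head.load(Ordering::Acquire);
        if cur == 0 {
            return Err("EmptyBitmapError"); // bucket drained under us
        }
        let next = next_of(cur);
        if head
            .compare_exchange(cur, next, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
        {
            return Ok(cur);
        }
        hint::spin_loop(); // someone else claimed the head first; retry
    }
}
```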
pub fn insert_free_block(
    &self,
    base_ptr: *const u8,
    offset: u32,
    fl: u32,
    sl: u32,
)
Stores a new header inside a bucket.
It does essentially the exact opposite of remove_free_block.
§Params:
@base_ptr: The offset from the beginning of the SHM segment
@offset: The header offset to be inserted into the bucket
@fl: The first level insertion coordinates
@sl: The second level insertion coordinates
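As the inverse of the pop, the insert amounts to a lock-free stack push; a sketch under that assumption, where `set_next` stands in for writing the next-link into the new block's header:

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Link the new header to the current head, then CAS the head to point at
/// it; on a lost CAS, re-link against the new head and retry.
fn push_head(
    head: &AtomicU32,
    offset: u32,
    set_next: impl Fn(u32, u32), // hypothetical: writes the header's next link
) {
    loop {
        let cur = head.load(Ordering::Acquire);
        set_next(offset, cur); // new block points at the old head
        if head
            .compare_exchange(cur, offset, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
        {
            return;
        }
    }
}
```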