Struct AtomicMatrix
#[repr(C)]
pub struct AtomicMatrix {
    pub id: Uuid,
    pub fl_bitmap: AtomicU32,
    pub sl_bitmaps: [AtomicU32; 32],
    pub matrix: [[AtomicU32; 8]; 32],
    pub mmap: MmapMut,
    pub sector_boundaries: [AtomicU32; 4],
    pub total_size: u32,
}

The structural core of the matrix.

It is a non-blocking, SHM-backed memory arena that uses a segmented, TLSF-inspired (Two-Level Segregated Fit) mapping for O(1) allocation, paired with custom Kinetic Coalescing logic.
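As an illustration of the two-level mapping, the following sketch derives first- and second-level indices from a request size. The function name and the exact bit math are assumptions inferred from the field shapes (a 32-bit fl_bitmap and a [[AtomicU32; 8]; 32] matrix), not the crate's actual code:

```rust
// 8 second-level slots per first-level class, matching the
// [[AtomicU32; 8]; 32] matrix shape (assumed constants).
const SL_COUNT: u32 = 8;
const SL_SHIFT: u32 = 3; // log2(SL_COUNT)

// Map a requested size to hypothetical TLSF (fl, sl) indices.
fn mapping(size: u32) -> (u32, u32) {
    debug_assert!(size >= 32); // smallest block class is 32 bytes
    // fl = index of the highest set bit (the power-of-two class).
    let fl = 31 - size.leading_zeros();
    // sl = the next SL_SHIFT bits below the top bit, subdividing the class.
    let sl = (size >> fl.saturating_sub(SL_SHIFT)) & (SL_COUNT - 1);
    (fl, sl)
}

fn main() {
    assert_eq!(mapping(32), (5, 0)); // exact power of two lands in slot 0
    assert_eq!(mapping(48), (5, 4)); // 48 = 32 + 16, halfway through class 5
    assert_eq!(mapping(1024), (10, 0));
}
```

With this shape, a set bit in fl_bitmap marks a non-empty first-level class and a set bit in the matching sl_bitmaps entry marks a non-empty slot, which is what makes the free-list lookup O(1).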

§Memory Layout

The matrix is designed to be mapped directly into /dev/shm. It starts with a 16-byte init_guard, followed by the struct itself, and then the sectorized raw memory blocks.

Fields§

  • id: Uuid
  • fl_bitmap: AtomicU32
  • sl_bitmaps: [AtomicU32; 32]
  • matrix: [[AtomicU32; 8]; 32]
  • mmap: MmapMut
  • sector_boundaries: [AtomicU32; 4]
  • total_size: u32

Implementations§

impl AtomicMatrix

pub fn init(ptr: *mut AtomicMatrix, id: Uuid, size: u32) -> &'static mut Self

Initializes the matrix struct and returns it.

This function initializes both TLSF level bitmaps and the matrix map of free blocks, assigns all the required metadata, and returns the ready-to-use object.

§Params

@ptr: The pointer to the beginning of the matrix segment
@id: The ID of this matrix instance
@size: The total size of the SHM allocation

§Returns

A reference to the matrix struct with a 'static lifetime.


pub fn bootstrap( id: Option<Uuid>, size: usize, sector_barriers: (u32, u32), ) -> Result<MatrixHandler, String>

The entry point of the matrix struct.

It initializes the SHM segment, binds to it, performs the initial formatting, prepares both the matrix and handler structs, and returns the high-level API to the caller.

§Params:

@id: The ID of a new or existing matrix (if it already exists, formatting is skipped and the call simply binds to it)
@size: The SHM allocation size
@sector_barriers: The desired size percentages of the small and medium sectors

§Returns

The matrix handler API, or an error to be handled.


pub fn sectorize( &self, base_ptr: *const u8, total_file_size: usize, small_percent: u8, medium_percent: u8, ) -> Result<(), String>

Sectorizes the SHM segment into three different zones of allocation. These zones are classified as Small, Medium and Large.

  • Small Sector: For data objects between 32 bytes and 1 KB.
  • Medium Sector: For data objects between 1 KB and 1 MB.
  • Large Sector: For data objects bigger than 1 MB.

This ensures three main safeties for the matrix:

  • Size integrity: Blocks with similar sizes are required to stay together, ensuring that we don’t deal with a huge size variety in coalescing.
  • Propagation granularity: The healing propagation only occurs inside the block’s sector, ensuring that high-operation sectors don’t cause a tide of coalescing into lower-operation sectors.
  • Search optimization: Since small blocks are always together, it reduces the TLSF search index, as the size you need is almost always guaranteed to exist.

Sectorize also limits the sectors based on the chosen matrix size, ensuring that a small matrix (e.g., 1 MB) does not allocate an unnecessarily large sector.

§Params:

@base_ptr: The starting offset of the SHM mapping.
@total_file_size: The total size of SHM segment
@small_percent: The desired size percentage of the small sector
@medium_percent: The desired size percentage of the medium sector

§Returns:

Any error that arises during sectorizing; otherwise, an Ok flag.
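The percentage-based split described above might look roughly like this; the function name, the validation rule, and the return shape are illustrative assumptions, not the crate's implementation:

```rust
// Illustrative sketch (assumed names, not the crate's actual logic):
// split the segment into small/medium/large boundaries by percentage.
fn sector_boundaries(
    total: u32,
    small_percent: u8,
    medium_percent: u8,
) -> Result<(u32, u32, u32), String> {
    if small_percent as u32 + medium_percent as u32 > 100 {
        return Err("sector percentages exceed 100%".to_string());
    }
    // Widen to u64 so the percentage multiply cannot overflow.
    let small_end = (total as u64 * small_percent as u64 / 100) as u32;
    let medium_end = small_end + (total as u64 * medium_percent as u64 / 100) as u32;
    Ok((small_end, medium_end, total)) // the large sector takes the remainder
}

fn main() {
    // A 1 MiB matrix, 40% small, 40% medium, remainder large.
    let (s, m, t) = sector_boundaries(1 << 20, 40, 40).unwrap();
    assert_eq!((s, m, t), (419430, 838860, 1 << 20));
    assert!(sector_boundaries(100, 60, 60).is_err());
}
```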


pub fn allocate( &self, base_ptr: *const u8, size: u32, ) -> Result<RelativePtr<u8>, String>

Allocates a block in the matrix for the caller.

It acts as a greedy allocator: each call will either get a block allocated in the matrix or raise an OOM Contention flag. It achieves this by politely trying to claim a block for itself; if the CAS loop fails, it simply jumps to the next free block in the chain, granting a lock-free allocation paradigm.

Each allocation is allowed to retry up to 512 times, to confirm the matrix is indeed out of memory, before aborting the function.

§Params:

@base_ptr: The starting offset of the SHM mapping.
@size: The allocation size of the block

§Returns:

Either the relative pointer to the allocated block, or the OOM Contention flag.
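The claim-or-move-on strategy can be sketched on a flat array of per-block state words; the state encoding, names, and retry structure here are hypothetical (the real allocator walks the TLSF free chain rather than a plain array):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical block states packed into an AtomicU32 header word.
const FREE: u32 = 0;
const CLAIMED: u32 = 1;
const MAX_RETRIES: usize = 512; // matches the documented retry budget

// Try to CAS a block from FREE to CLAIMED; on failure, move on to the
// next candidate instead of spinning, and report OOM contention only
// once the retry budget is exhausted.
fn claim_first_free(blocks: &[AtomicU32]) -> Result<usize, String> {
    for _ in 0..MAX_RETRIES {
        for (i, b) in blocks.iter().enumerate() {
            if b.compare_exchange(FREE, CLAIMED, Ordering::AcqRel, Ordering::Acquire)
                .is_ok()
            {
                return Ok(i);
            }
        }
    }
    Err("OOM Contention".to_string())
}

fn main() {
    let blocks = [AtomicU32::new(CLAIMED), AtomicU32::new(FREE)];
    assert_eq!(claim_first_free(&blocks), Ok(1)); // skips the claimed block
    assert_eq!(blocks[1].load(Ordering::Acquire), CLAIMED);
}
```

Because a failed CAS means some other thread made progress, no caller ever blocks waiting on another: that is what makes the scheme lock-free rather than merely low-contention.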


pub fn ack(&self, ptr: &RelativePtr<BlockHeader>, base_ptr: *const u8)

Acknowledges that a block has been freed and pushes it to the to_be_freed queue.

If the to_be_freed queue is full, it immediately triggers drainage of the queue and coalesces every block present before pushing the newly acked block into the queue. If there is space available, it simply pushes the block and moves on.

§Params:

@ptr: The relative pointer of the block to acknowledge
@base_ptr: The offset from the start of the SHM segment.
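A minimal sketch of the push-or-drain policy, assuming a hypothetical queue shape (the real to_be_freed queue and its coalescing hook are internal to the matrix):

```rust
// Illustrative shape, not the crate's actual queue.
struct FreeQueue {
    items: Vec<u32>, // block offsets awaiting coalescing
    capacity: usize,
}

impl FreeQueue {
    // Push an acked block; if the queue is full, drain and coalesce
    // every pending block first, then push.
    fn ack(&mut self, offset: u32, mut coalesce: impl FnMut(u32)) {
        if self.items.len() == self.capacity {
            for off in self.items.drain(..) {
                coalesce(off);
            }
        }
        self.items.push(offset);
    }
}

fn main() {
    let mut q = FreeQueue { items: vec![1, 2], capacity: 2 };
    let mut coalesced = Vec::new();
    q.ack(3, |off| coalesced.push(off)); // full: drains 1 and 2, then pushes 3
    assert_eq!(coalesced, vec![1, 2]);
    assert_eq!(q.items, vec![3]);
}
```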


pub fn coalesce(&self, ptr: &RelativePtr<BlockHeader>, base_ptr: *const u8)

Tries to merge neighbouring blocks to the left until the edge of the matrix is reached or a neighbouring block is not ACKED/FREE.

This is the elegant implementation of the Kinetic Coalescence process. It receives the initial block that starts the ripple and traverses the matrix to the left (monotonicity guard). If any race condition is met along the way (another coalescing just started, or a module just claimed this block), it stops the coalescing and moves on (permissive healing).

It then tries to update the next neighbour’s previous-physical-offset metadata to point to the start of the new free block. If this exchange fails due to the end of the sector, or a just-claimed block, it skips this marking in the hope that, when that block is eventually coalesced, it will passively merge backwards with the ripple and fix the marking on its header by itself (horizon boundary).

These three core mechanisms together compose the Propagation Principle of Atomic Coalescence and enable the matrix to achieve such high throughput.

§Params:

@ptr: The relative pointer of the block to coalesce.
@base_ptr: The offset from the start of the SHM segment.

§Throws:

TidalRippleContentionError: Two coalescing ripples executing simultaneously on the same blocks.
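On a simplified block map, the leftward ripple with a permissive stop can be sketched as follows; the State enum and function are illustrative stand-ins, not the crate's types:

```rust
#[derive(Clone, Copy)]
enum State {
    Free,
    Acked,
    Claimed,
}

// Extend the merge region to the left while neighbours are FREE/ACKED,
// stopping permissively at the first claimed block or the sector start
// (monotonicity guard: the scan only ever moves left).
fn ripple_left(states: &[State], idx: usize) -> usize {
    let mut start = idx;
    while start > 0 && matches!(states[start - 1], State::Free | State::Acked) {
        start -= 1;
    }
    start // the merged block now begins here
}

fn main() {
    use State::*;
    // The ripple starting at index 3 absorbs blocks 2 and 1, then stops
    // at the claimed block 0.
    assert_eq!(ripple_left(&[Claimed, Acked, Free, Acked], 3), 1);
}
```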


pub fn query(&self, offset: u32) -> RelativePtr<u8>

Queries a block offset inside of the matrix.

Not much to say about this; the name is pretty self-explanatory.

§Params:

@offset: The offset of the block to be queried

§Returns:

The Relative Pointer to the queried block
