// atomic_matrix/handlers.rs

//! # Matrix High-Level API Handles
//!
//! This module wraps the raw matrix primitives in a more ergonomic API. It
//! abstracts away the manual, repetitive steps required to interact with the
//! matrix correctly, and adds safe pre-built functions that extend what can
//! be done with the primitives alone.
//!
//! # Abstraction Layers
//!
//! ```text
//! [ internals -> Matrix Internal Frameworks ]   + iter, workers, tables, ...
//!     * builds on
//! [ MatrixHandler ]                             + typed blocks, lifecycle, sharing
//!     * escape hatch
//! [ AtomicMatrix ]                              + raw offsets, sizes, bytes
//!     * maps
//! [ /dev/shm ]                                  * physical shared memory
//! ```
//!
//! # Handler Scope
//!
//! The handler owns the SHM mapping and provides:
//! - Typed block allocation (`allocate<T>`) and deallocation (`free<T>`)
//! - Raw byte allocation for unknown types (`allocate_raw`)
//! - Zero-copy typed read and write on allocated blocks
//! - User-defined lifecycle state management (states 49+)
//! - Atomic state transitions with user-defined ordering
//! - Thread sharing via [`SharedHandler`]
//! - Escape hatches to the raw matrix and base pointer
//!
//! > Any high-level datasets and operators will be implemented in the **internals**
//! > folder.
//!
//! # Lifecycle States
//!
//! States 0–48 are reserved for internal matrix operations:
//! - `0` — `STATE_FREE`
//! - `1` — `STATE_ALLOCATED`
//! - `2` — `STATE_ACKED`
//! - `3` — `STATE_COALESCING`
//!
//! States 49 and above are available for user-defined lifecycles.
//! The matrix coalescing engine ignores any state beyond the ones described above —
//! a block in state 112 is never reclaimed automatically. Call `free()` explicitly
//! when done.
//!
//! **Note:** States 4–48 are reserved for future internal state management
//! implementations that have not been planned yet. Better safe than sorry.
//!
//! # Thread Sharing
//!
//! [`MatrixHandler`] owns the mmap and is not `Clone`. Use `share()` to produce a
//! [`SharedHandler`] that can be sent to other threads. The original handler must
//! outlive all shared handles derived from it.

use std::sync::atomic::Ordering;
use crate::matrix::core::{AtomicMatrix, BlockHeader, RelativePtr};
use memmap2::MmapMut;

/// Minimum state value available for user-defined lifecycles.
/// States 0–48 are reserved for internal matrix and future framework use.
/// Currently only 0–3 are assigned — the remaining range (4–48) is reserved
/// for future internal lifecycle states without breaking user code.
pub const USER_STATE_MIN: u32 = 49;

/// Errors produced by [`MatrixHandler`] and [`SharedHandler`] operations.
#[derive(Debug, PartialEq)]
pub enum HandlerError {
    /// The allocator could not find a free block. Either OOM or contention.
    AllocationFailed(String),
    /// Caller attempted to set or transition to a reserved internal state (0–48).
    ReservedStatus(u32),
    /// Atomic state transition failed — block was not in the expected state.
    /// Contains the actual state found.
    TransitionFailed(u32),
    /// The block offset is outside the valid segment range.
    InvalidOffset(u32),
}

/// A typed handle to an allocated block in the matrix.
///
/// Since the matrix operates entirely on raw pointer addresses and internal
/// types, `Block<T>` is provided at the API level to wrap allocations into
/// a typed, ergonomic handle. The raw [`RelativePtr`] returned by the matrix
/// is reinterpreted as `T` and wrapped in `Block<T>` to maintain type
/// information at the surface layer. All pointer arithmetic is delegated to
/// the inner [`RelativePtr<T>`], referred to as **pointer**.
///
/// # Validity
///
/// A `Block<T>` is valid as long as:
/// - The originating [`MatrixHandler`] (and its mmap) is alive.
/// - The block has not been freed via `handler.free()`.
///
/// Blocks carry no lifetime parameter. The caller is responsible for not using
/// a block after freeing it or after the handler is dropped.
pub struct Block<T> {
    /// Payload offset from SHM base — points past the `BlockHeader`.
    pointer: RelativePtr<T>,
}

/// A lightweight reflection of the original handler that can be safely sent
/// across threads.
///
/// Produced by [`MatrixHandler::share()`]. Holds raw pointers into the SHM
/// segment. The originating [`MatrixHandler`] **must** outlive all
/// `SharedHandler` instances derived from it.
///
/// `SharedHandler` exposes the same allocation, I/O, lifecycle, and escape
/// hatch API as [`MatrixHandler`] via the [`HandlerFunctions`] trait —
/// it does not own the mmap.
pub struct SharedHandler {
    matrix_addr: usize,
    base_addr: usize,
    segment_size: u32,
    first_block_offset: u32,
}

/// The primary interface for interacting with an [`AtomicMatrix`].
///
/// Owns the SHM mapping. Cannot be cloned — use [`share()`] to produce a
/// [`SharedHandler`] for other threads.
///
/// See module documentation for the full abstraction layer diagram.
pub struct MatrixHandler {
    matrix: &'static mut AtomicMatrix,
    mmap: MmapMut,
    first_block_offset: u32,
}

impl<T> Block<T> {
    /// Constructs a `Block<T>` from a raw payload offset.
    ///
    /// The offset must point past the [`BlockHeader`] (i.e. `header_offset + 32`).
    /// Type `T` is introduced here — the matrix has no knowledge of it.
    pub(crate) fn from_offset(offset: u32) -> Self {
        Self { pointer: RelativePtr::new(offset) }
    }
}
140
141impl MatrixHandler {
142    /// Internal constructor. Called exclusively by [`AtomicMatrix::bootstrap`].
143    pub(crate) fn new(
144        matrix: &'static mut AtomicMatrix,
145        mmap: MmapMut,
146        first_block_offset: u32
147    ) -> Self {
148        Self { matrix, mmap, first_block_offset }
149    }
150
151    /// Produces a lightweight [`SharedHandler`] that can be sent to other threads.
152    ///
153    /// [`SharedHandler`] holds raw pointers into the SHM segment. This handler
154    /// **must** outlive all shared handles derived from it — Rust cannot enforce
155    /// this lifetime relationship automatically because `SharedHandler` uses raw
156    /// pointers. Violating this contract is undefined behaviour.
157    pub fn share(&self) -> SharedHandler {
158        SharedHandler {
159            matrix_addr: self.matrix as *const AtomicMatrix as usize,
160            base_addr: self.base_ptr() as usize,
161            segment_size: self.segment_size(),
162            first_block_offset: self.first_block_offset,
163        }
164    }
165}

impl HandlerFunctions for MatrixHandler {
    fn base_ptr(&self) -> *const u8 { self.mmap.as_ptr() }
    fn matrix(&self) -> &AtomicMatrix { self.matrix }
    fn first_block_offset(&self) -> u32 { self.first_block_offset }
    fn segment_size(&self) -> u32 { self.mmap.len() as u32 }
}

// Safety: AtomicMatrix uses only atomic operations internally.
// Caller guarantees the originating MatrixHandler outlives all SharedHandlers.
unsafe impl Send for SharedHandler {}
unsafe impl Sync for SharedHandler {}

impl HandlerFunctions for SharedHandler {
    fn base_ptr(&self) -> *const u8 { self.base_addr as *const u8 }
    fn matrix(&self) -> &AtomicMatrix {
        unsafe { &*(self.matrix_addr as *const AtomicMatrix) }
    }
    fn first_block_offset(&self) -> u32 { self.first_block_offset }
    fn segment_size(&self) -> u32 { self.segment_size }
}

/// Defines the core interaction surface for any matrix handle.
///
/// Implemented by both [`MatrixHandler`] and [`SharedHandler`]. All matrix
/// operations — allocation, I/O, lifecycle management, and escape hatches —
/// are provided through this trait so that framework code in `internals` can
/// operate generically over either handle type via `impl HandlerFunctions`.
///
/// # Implementing this trait
///
/// Implementors must provide four primitive accessors:
/// - [`base_ptr()`] — the SHM base pointer for this process's mapping
/// - [`matrix()`] — reference to the underlying [`AtomicMatrix`]
/// - [`first_block_offset()`] — offset of the first data block in the segment
/// - [`segment_size()`] — total segment size in bytes
///
/// All other methods have default implementations built on these four.
pub trait HandlerFunctions {
    /// Returns the SHM base pointer for this process's mapping.
    fn base_ptr(&self) -> *const u8;

    /// Returns a reference to the underlying [`AtomicMatrix`].
    fn matrix(&self) -> &AtomicMatrix;

    /// Returns the offset of the first data block in the segment.
    /// Used by `internals` iterators as the physical chain walk start point.
    fn first_block_offset(&self) -> u32;

    /// Returns the total segment size in bytes.
    fn segment_size(&self) -> u32;

    /// Allocates a block sized to hold `T`.
    ///
    /// Size is computed from `size_of::<T>()` and rounded up to the 16-byte
    /// minimum payload if necessary. The matrix remains typeless — type
    /// information exists only in the returned [`Block<T>`].
    ///
    /// # Errors
    /// Returns [`HandlerError::AllocationFailed`] if the matrix is out of
    /// memory or under contention after 512 retries.
    fn allocate<T>(&self) -> Result<Block<T>, HandlerError> {
        let size = (std::mem::size_of::<T>() as u32).max(16);
        self.matrix()
            .allocate(self.base_ptr(), size)
            .map(|ptr| Block::from_offset(ptr.offset()))
            .map_err(HandlerError::AllocationFailed)
    }

    /// Allocates a raw byte block of the given size.
    ///
    /// Returns a [`RelativePtr<u8>`] directly — use when the payload type is
    /// not known at allocation time, or when building `internals` framework
    /// primitives that operate on raw offsets. The caller is responsible for
    /// all casting and interpretation of the memory.
    ///
    /// # Errors
    /// Returns [`HandlerError::AllocationFailed`] if OOM or contention.
    fn allocate_raw(&self, size: u32) -> Result<RelativePtr<u8>, HandlerError> {
        self.matrix()
            .allocate(self.base_ptr(), size)
            .map_err(HandlerError::AllocationFailed)
    }

    /// Writes a value of type `T` into an allocated block.
    ///
    /// # Safety
    /// - `block` must be in `STATE_ALLOCATED`.
    /// - `block` must have been allocated with sufficient size to hold `T`.
    ///   This is guaranteed if the block was produced by [`allocate::<T>()`].
    /// - No other thread may be reading or writing this block concurrently.
    ///   The caller is responsible for all synchronization beyond the atomic
    ///   state transitions provided by [`set_state`] and [`transition_state`].
    unsafe fn write<T>(&self, block: &mut Block<T>, value: T) {
        unsafe { block.pointer.write(self.base_ptr(), value) }
    }

    /// Reads a shared reference to `T` from an allocated block.
    ///
    /// # Safety
    /// - `block` must be in `STATE_ALLOCATED`.
    /// - A value of type `T` must have been previously written via [`write`].
    /// - The returned reference is valid as long as the SHM mapping is alive
    ///   and the block has not been freed. It is **not** tied to the lifetime
    ///   of the [`Block<T>`] handle — the caller must ensure the block is not
    ///   freed while the reference is in use.
    /// - No other thread may be writing to this block concurrently.
    unsafe fn read<'a, T>(&self, block: &Block<T>) -> &'a T {
        unsafe { block.pointer.resolve(self.base_ptr()) }
    }

    /// Reads a mutable reference to `T` from an allocated block.
    ///
    /// # Safety
    /// - `block` must be in `STATE_ALLOCATED`.
    /// - A value of type `T` must have been previously written via [`write`].
    /// - The returned reference is valid as long as the SHM mapping is alive
    ///   and the block has not been freed. It is **not** tied to the lifetime
    ///   of the [`Block<T>`] handle — the caller must ensure the block is not
    ///   freed while the reference is in use.
    /// - No other thread may be reading or writing this block concurrently.
    ///   Two simultaneous `read_mut` calls on the same block are undefined behaviour.
    unsafe fn read_mut<'a, T>(&self, block: &Block<T>) -> &'a mut T {
        unsafe { block.pointer.resolve_mut(self.base_ptr()) }
    }

    /// Frees a typed block.
    ///
    /// Marks the block `STATE_ACKED` and immediately triggers coalescing.
    /// The block is invalid after this call — using it in any way is
    /// undefined behaviour.
    fn free<T>(&self, block: Block<T>) {
        let header_ptr = RelativePtr::<BlockHeader>::new(block.pointer.offset() - 32);
        self.matrix().ack(&header_ptr, self.base_ptr());
    }

    /// Frees a block by its header offset directly.
    ///
    /// Used by `internals` framework code that operates on raw offsets
    /// rather than typed [`Block<T>`] handles. `header_offset` must point
    /// to a valid [`BlockHeader`] within the segment.
    fn free_at(&self, header_offset: u32) {
        let header_ptr = RelativePtr::<BlockHeader>::new(header_offset);
        self.matrix().ack(&header_ptr, self.base_ptr());
    }

    /// Sets a user-defined lifecycle state on a block.
    ///
    /// The state must be >= [`USER_STATE_MIN`] (49). Attempting to set an
    /// internal state (0–48) returns [`HandlerError::ReservedStatus`].
    ///
    /// User states are invisible to the coalescing engine — a block in any
    /// user state will never be automatically reclaimed. Call [`free`]
    /// explicitly when the lifecycle is complete.
    ///
    /// # Errors
    /// Returns [`HandlerError::ReservedStatus`] if `state < USER_STATE_MIN`.
    fn set_state<T>(&self, block: &Block<T>, state: u32) -> Result<(), HandlerError> {
        if state < USER_STATE_MIN {
            return Err(HandlerError::ReservedStatus(state));
        }
        unsafe {
            block.pointer
                .resolve_header_mut(self.base_ptr())
                .state
                .store(state, Ordering::Release);
        }
        Ok(())
    }

    /// Returns the current state of a block.
    ///
    /// `order` controls the memory ordering of the atomic load. Use
    /// `Ordering::Acquire` for the general case. Use `Ordering::Relaxed`
    /// only if you do not need to synchronize with writes to the block's
    /// payload.
    fn get_state<T>(&self, block: &Block<T>, order: Ordering) -> u32 {
        unsafe {
            block.pointer
                .resolve_header(self.base_ptr())
                .state
                .load(order)
        }
    }

    /// Atomically transitions a block from one state to another.
    ///
    /// Succeeds only if the block is currently in `expected`. `next` must
    /// be >= [`USER_STATE_MIN`] — transitioning into an internal state is
    /// not permitted.
    ///
    /// `success_order` controls the memory ordering on success. Use
    /// `Ordering::AcqRel` for the general case. The failure ordering is
    /// always `Ordering::Relaxed`.
    ///
    /// Returns `Ok(expected)` on success — the value that was replaced.
    ///
    /// # Errors
    /// - [`HandlerError::ReservedStatus`] if `next < USER_STATE_MIN`.
    /// - [`HandlerError::TransitionFailed(actual)`] if the block was not
    ///   in `expected` — `actual` is the state that was observed instead.
    fn transition_state<T>(
        &self,
        block: &Block<T>,
        expected: u32,
        next: u32,
        success_order: Ordering
    ) -> Result<u32, HandlerError> {
        if next < USER_STATE_MIN {
            return Err(HandlerError::ReservedStatus(next));
        }
        unsafe {
            block.pointer
                .resolve_header_mut(self.base_ptr())
                .state
                .compare_exchange(expected, next, success_order, Ordering::Relaxed)
                .map_err(HandlerError::TransitionFailed)
        }
    }

    /// Returns a raw reference to the underlying [`AtomicMatrix`].
    ///
    /// For `internals` framework authors who need allocator primitives
    /// directly. Bypasses all handler abstractions — use with care.
    fn raw_matrix(&self) -> &AtomicMatrix {
        self.matrix()
    }

    /// Returns the raw SHM base pointer for this process's mapping.
    ///
    /// Use alongside [`raw_matrix()`] when building `internals` that need
    /// direct access to block memory beyond what the typed API provides.
    fn raw_base_ptr(&self) -> *const u8 {
        self.base_ptr()
    }
}