pub struct MemRanges { /* private fields */ }
Cumulative dataflow state for a sequence of concurrent dispatches.
Direct port of struct ggml_mem_ranges in
ggml-metal-common.cpp:19-23. The state is reset every time a
barrier is emitted; between barriers, all recorded dispatches are
considered to run concurrently and their R/W ranges accumulate.
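The conflict rule that drives barrier placement can be sketched as follows. This is a simplified stand-in (hypothetical `Range` and `Access` types; the real state tracks per-buffer byte ranges via BufferRange): two ranges conflict when they touch overlapping bytes of the same buffer and at least one side writes.

```rust
// Hypothetical simplified types; the real MemRanges records BufferRange
// entries (buffer identity + byte span + src/dst role) per dispatch.
#[derive(Clone, Copy, PartialEq)]
enum Access { Read, Write }

#[derive(Clone, Copy)]
struct Range {
    buf: usize, // buffer identity
    start: u64, // byte offset, inclusive
    end: u64,   // byte offset, exclusive
    access: Access,
}

// Same buffer and overlapping byte spans.
fn overlaps(a: &Range, b: &Range) -> bool {
    a.buf == b.buf && a.start < b.end && b.start < a.end
}

// R/W, W/R and W/W on overlapping bytes force a barrier; R/R does not.
fn conflicts(a: &Range, b: &Range) -> bool {
    overlaps(a, b) && (a.access == Access::Write || b.access == Access::Write)
}

fn main() {
    let r  = Range { buf: 0, start: 0,  end: 64, access: Access::Read };
    let w  = Range { buf: 0, start: 32, end: 96, access: Access::Write };
    let r2 = Range { buf: 0, start: 0,  end: 64, access: Access::Read };
    println!("{}", conflicts(&r, &w));  // true: a read overlaps a write
    println!("{}", conflicts(&r, &r2)); // false: reads may run concurrently
}
```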
Implementations

impl MemRanges
pub fn new() -> Self
New empty state. Pre-allocates capacity matching llama.cpp’s
reserve(256) (line 28).
pub fn reset(&mut self)
Drop all recorded ranges (called after emitting a barrier).
Mirrors ggml_mem_ranges_reset.
pub fn checks(&self) -> u64
Number of check() calls performed since construction
(diagnostic, monotone).
pub fn barriers_forced(&self) -> u64
Number of check() calls that returned false, forcing a
barrier (diagnostic, monotone). When tracking is enabled at
every dispatch, total_dispatches - barriers_forced gives the
number of barriers elided relative to the
unconditional-barrier baseline.
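The diagnostic arithmetic above can be illustrated with hypothetical counter values (the numbers are invented for the example):

```rust
fn main() {
    // Hypothetical run where tracking is enabled at every dispatch:
    let total_dispatches: u64 = 1000; // every dispatch went through check()
    let barriers_forced: u64 = 120;   // check() returned false 120 times

    // Barriers elided vs. the baseline that barriers after every dispatch:
    let barriers_elided = total_dispatches - barriers_forced;
    println!("{barriers_elided}"); // 880
}
```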
pub fn push(&mut self, range: BufferRange)
Push a single range onto the cumulative state without checking.
Used internally by Self::add and Self::add_dispatch.
Public so unit tests can construct adversarial states.
pub fn add_dispatch(&mut self, reads: &[&MlxBuffer], writes: &[&MlxBuffer])
Record a dispatch’s read-buffer and write-buffer ranges.
Mirrors ggml_mem_ranges_add(tensor) at
ggml-metal-common.cpp:114-122: pushes one Src range per
tensor->src[i] and one Dst range for tensor itself.
Caller is expected to have already invoked
Self::check_dispatch and emitted a barrier on conflict; the
barrier-emit + reset() is the responsibility of the
integration site (typically CommandEncoder).
pub fn check_dispatch(
    &mut self,
    reads: &[&MlxBuffer],
    writes: &[&MlxBuffer],
) -> bool
Check whether a candidate dispatch can run concurrently with the recorded state.
Returns true iff none of the candidate’s reads or writes
conflict with any recorded range. Exactly mirrors
ggml_mem_ranges_check(tensor) at ggml-metal-common.cpp:175-185:
each src is checked against existing ranges, then the dst is
checked against existing ranges.
Increments Self::checks. On false return, also
increments Self::barriers_forced — so the diagnostic
counter is accurate even when callers ignore the return value.
pub fn check_and_record(
    &mut self,
    reads: &[&MlxBuffer],
    writes: &[&MlxBuffer],
) -> bool
Combined check + add. Returns true if the dispatch was added
to the current concurrent group (no conflict, no barrier
needed); returns false if the caller must emit a barrier and
reset() before adding the dispatch’s ranges.
On false return, the caller’s responsibility is:
- Emit the underlying memoryBarrierWithScope: on the live encoder.
- Call Self::reset.
- Call Self::add_dispatch with the same reads/writes to seed the new concurrent group.
This mirrors the call pattern at ggml-metal-ops.cpp:220-225.
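The integration-site pattern described above can be sketched with stand-in types. Everything here is hypothetical scaffolding: `MemRangesSketch` uses whole-buffer granularity instead of byte ranges, and `Encoder::memory_barrier` stands in for the real memoryBarrierWithScope: call on the live Metal encoder.

```rust
// Stand-in for the live command encoder; only counts barriers.
struct Encoder { barriers: u32 }
impl Encoder {
    fn memory_barrier(&mut self) { self.barriers += 1; }
}

// Toy MemRanges: tracks which buffer ids were read/written in the
// current concurrent group (the real type tracks byte ranges).
#[derive(Default)]
struct MemRangesSketch {
    reads: Vec<usize>,
    writes: Vec<usize>,
}

impl MemRangesSketch {
    fn reset(&mut self) { self.reads.clear(); self.writes.clear(); }

    fn add_dispatch(&mut self, reads: &[usize], writes: &[usize]) {
        self.reads.extend_from_slice(reads);
        self.writes.extend_from_slice(writes);
    }

    fn check_dispatch(&self, reads: &[usize], writes: &[usize]) -> bool {
        // A candidate read conflicts with a recorded write; a candidate
        // write conflicts with any recorded access to the same buffer.
        let reads_ok = reads.iter().all(|b| !self.writes.contains(b));
        let writes_ok = writes
            .iter()
            .all(|b| !self.writes.contains(b) && !self.reads.contains(b));
        reads_ok && writes_ok
    }

    // Adds only on success; on failure the caller barriers + resets.
    fn check_and_record(&mut self, reads: &[usize], writes: &[usize]) -> bool {
        if self.check_dispatch(reads, writes) {
            self.add_dispatch(reads, writes);
            true
        } else {
            false
        }
    }
}

// The three-step recovery sequence on a false return.
fn dispatch(enc: &mut Encoder, mr: &mut MemRangesSketch,
            reads: &[usize], writes: &[usize]) {
    if !mr.check_and_record(reads, writes) {
        enc.memory_barrier();           // 1. barrier on the live encoder
        mr.reset();                     // 2. start a new concurrent group
        mr.add_dispatch(reads, writes); // 3. seed it with this dispatch
    }
    // ... encode the actual kernel here ...
}

fn main() {
    let mut enc = Encoder { barriers: 0 };
    let mut mr = MemRangesSketch::default();
    dispatch(&mut enc, &mut mr, &[0], &[1]); // first dispatch: trivially ok
    dispatch(&mut enc, &mut mr, &[2], &[3]); // disjoint buffers: concurrent
    dispatch(&mut enc, &mut mr, &[1], &[4]); // reads buf 1, written above
    println!("{}", enc.barriers); // 1: only the third dispatch forced one
}
```

The key design point mirrored from ggml is that check_and_record does not add the ranges on a conflict: the reset must happen first, so the failing dispatch seeds a fresh group rather than accumulating onto the stale one.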