#[non_exhaustive]
pub enum MemoryOrdering {
    Relaxed,
    Acquire,
    Release,
    AcqRel,
    SeqCst,
    GridSync,
}
Memory ordering attached to atomic and barrier operations.
Variants (Non-exhaustive)
This enum is marked as non-exhaustive.
Relaxed
No synchronization beyond atomicity of the operation.
Acquire
Subsequent reads observe writes released by another participant.
Release
Prior writes become visible to acquiring participants.
AcqRel
Acquire and release semantics in one operation.
SeqCst
Single total order across sequentially consistent operations within the issuing thread’s workgroup.
GridSync
Cross-grid synchronization. Every thread in the dispatch waits
here, and every prior write is globally visible after the
barrier returns. This is strictly stronger than SeqCst, which
only synchronizes within a workgroup. GridSync is required
when a fused kernel has an arm with divergent stores
(e.g. if invocation_id == K { store ... }) followed by an arm
that reads what was stored — without grid-level sync, threads
in non-K blocks observe stale state. Backends that lack a
native grid barrier (workgroup-only fences, no cooperative
launch) must lower this to a kernel-split: emit two separate
dispatches that share the underlying buffers.
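The kernel-split lowering described above can be sketched as a pass over a kernel's op stream: cut at every grid-sync barrier and emit each segment as its own dispatch. This is a minimal illustration, not this crate's real IR; the `Op` type and the local `MemoryOrdering` mirror are hypothetical.

```rust
// Hypothetical mirror of the documented enum, for a self-contained sketch.
#[derive(Clone, Copy, PartialEq, Debug)]
enum MemoryOrdering { Relaxed, Acquire, Release, AcqRel, SeqCst, GridSync }

// Hypothetical kernel IR: stores, loads, and barriers over named buffers.
#[derive(Clone, PartialEq, Debug)]
enum Op {
    Store(&'static str),
    Load(&'static str),
    Barrier(MemoryOrdering),
}

/// On a backend with no native grid barrier, cut the op stream at each
/// `Barrier(GridSync)`: every segment becomes a separate dispatch, and the
/// gap between dispatches supplies the grid-wide ordering. The dispatches
/// share the underlying buffers, so state written before the split is
/// visible to every thread after it.
fn lower_to_dispatches(ops: &[Op]) -> Vec<Vec<Op>> {
    let mut dispatches = vec![Vec::new()];
    for op in ops {
        if *op == Op::Barrier(MemoryOrdering::GridSync) {
            dispatches.push(Vec::new()); // kernel split: start a new dispatch
        } else {
            dispatches.last_mut().unwrap().push(op.clone());
        }
    }
    dispatches
}

fn main() {
    // Fused kernel: a divergent store, a grid sync, then a read of the store.
    let ops = [
        Op::Store("acc"),
        Op::Barrier(MemoryOrdering::GridSync),
        Op::Load("acc"),
    ];
    let dispatches = lower_to_dispatches(&ops);
    assert_eq!(dispatches.len(), 2);
    assert_eq!(dispatches[0], vec![Op::Store("acc")]);
    assert_eq!(dispatches[1], vec![Op::Load("acc")]);
    println!("{} dispatches", dispatches.len());
}
```

Backends with a native grid barrier (e.g. via cooperative launch) would instead emit a single barrier instruction and keep one dispatch.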
Implementations
impl MemoryOrdering
pub fn from_wire_tag(tag: u8) -> Result<Self, String>
Decode a stable wire tag.
Errors
Returns an actionable error when tag is not assigned to a memory
ordering in this schema.
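A decoder with the documented shape might look like the sketch below. The tag assignments (0 through 5) are assumptions for illustration only; the real schema's values are not shown on this page.

```rust
// Hypothetical mirror of the documented enum, for a self-contained sketch.
#[derive(Clone, Copy, PartialEq, Debug)]
enum MemoryOrdering { Relaxed, Acquire, Release, AcqRel, SeqCst, GridSync }

// Assumed tag layout (0..=5): NOT the crate's actual wire schema.
fn from_wire_tag(tag: u8) -> Result<MemoryOrdering, String> {
    use MemoryOrdering::*;
    match tag {
        0 => Ok(Relaxed),
        1 => Ok(Acquire),
        2 => Ok(Release),
        3 => Ok(AcqRel),
        4 => Ok(SeqCst),
        5 => Ok(GridSync),
        // An actionable error names the bad value and the accepted range.
        other => Err(format!(
            "unknown MemoryOrdering wire tag {other}; expected 0..=5"
        )),
    }
}

fn main() {
    assert_eq!(from_wire_tag(4), Ok(MemoryOrdering::SeqCst));
    assert!(from_wire_tag(9).is_err());
    println!("ok");
}
```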
pub const fn is_valid_for_atomic_rmw(self) -> bool
Whether this ordering is valid for an atomic RMW operation.
GridSync is barrier-only and not a valid atomic ordering.
pub const fn is_valid_for_barrier(self) -> bool
Whether this ordering is valid for a barrier.
pub const fn requires_grid_sync(self) -> bool
Whether this ordering requires cross-grid synchronization. Backends with a native grid barrier emit one instruction; backends without must split the kernel.
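Under the semantics documented above, the three predicates could be sketched as below. This is a local mirror, not the crate's own type; in particular, treating every ordering as barrier-valid is an assumption, since this page only states that GridSync is barrier-only.

```rust
// Hypothetical mirror of the documented enum, for a self-contained sketch.
#[derive(Clone, Copy, PartialEq, Debug)]
enum MemoryOrdering { Relaxed, Acquire, Release, AcqRel, SeqCst, GridSync }

impl MemoryOrdering {
    // GridSync is barrier-only, so it is the one invalid RMW ordering.
    const fn is_valid_for_atomic_rmw(self) -> bool {
        !matches!(self, MemoryOrdering::GridSync)
    }
    // ASSUMPTION: every documented ordering may be attached to a barrier.
    const fn is_valid_for_barrier(self) -> bool {
        true
    }
    // Only GridSync forces grid-wide synchronization, and hence a
    // kernel split on backends without a native grid barrier.
    const fn requires_grid_sync(self) -> bool {
        matches!(self, MemoryOrdering::GridSync)
    }
}

fn main() {
    assert!(!MemoryOrdering::GridSync.is_valid_for_atomic_rmw());
    assert!(MemoryOrdering::SeqCst.is_valid_for_atomic_rmw());
    assert!(MemoryOrdering::GridSync.requires_grid_sync());
    assert!(!MemoryOrdering::Release.requires_grid_sync());
    println!("ok");
}
```

Keeping these predicates `const` lets a backend check ordering validity at compile time when lowering a fixed kernel.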
Trait Implementations
impl Clone for MemoryOrdering
fn clone(&self) -> MemoryOrdering
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for MemoryOrdering
impl Default for MemoryOrdering
impl<'de> Deserialize<'de> for MemoryOrdering
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
impl Hash for MemoryOrdering
impl PartialEq for MemoryOrdering
impl Serialize for MemoryOrdering
impl Copy for MemoryOrdering
impl Eq for MemoryOrdering
impl StructuralPartialEq for MemoryOrdering
Auto Trait Implementations
impl Freeze for MemoryOrdering
impl RefUnwindSafe for MemoryOrdering
impl Send for MemoryOrdering
impl Sync for MemoryOrdering
impl Unpin for MemoryOrdering
impl UnsafeUnpin for MemoryOrdering
impl UnwindSafe for MemoryOrdering
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<Q, K> Equivalent<K> for Q
impl<Q, K> Equivalent<K> for Q
fn equivalent(&self, key: &K) -> bool
Compare self to key and return true if they are equal.