pub struct AtomicFixedVec<T> { /* private fields */ }

A thread-safe, compressed, randomly accessible vector of integers with fixed-width encoding, backed by u64 atomic words.
Implementations§

impl<T> AtomicFixedVec<T>

pub fn builder() -> AtomicFixedVecBuilder<T>

Creates a builder for constructing an AtomicFixedVec from a slice.
§Examples
use compressed_intvec::prelude::*;
use compressed_intvec::fixed::{AtomicFixedVec, UAtomicFixedVec, BitWidth};
let data: &[i16] = &[-100, 0, 100, 200];
let vec: UAtomicFixedVec<i16> = AtomicFixedVec::builder()
.bit_width(BitWidth::PowerOfTwo) // Force 16 bits for signed values
.build(data)
.unwrap();
assert_eq!(vec.len(), 4);
assert_eq!(vec.bit_width(), 16);
pub fn as_slice(&self) -> &[AtomicU64]
Returns a read-only slice of the underlying atomic storage words.
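A minimal sketch of inspecting the raw storage (illustrative; the exact number of words depends on the chosen bit width, so the assertion below is deliberately loose):
use compressed_intvec::prelude::*;
let data = vec![1u32, 2, 3, 4];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
// The packed values occupy one or more 64-bit atomic words.
assert!(!vec.as_slice().is_empty());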
pub fn load(&self, index: usize, order: Ordering) -> T

Atomically loads the value at index.
load takes an Ordering argument which describes the memory ordering of this operation. For more information, see the Rust documentation on memory ordering.
§Panics
Panics if index is out of bounds.
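A small usage sketch (illustrative; it reuses the builder pattern shown in the examples on this page):
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
let data = vec![3u32, 5, 7];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
// Each load is an independent atomic read.
assert_eq!(vec.load(1, Ordering::SeqCst), 5);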
pub unsafe fn load_unchecked(&self, index: usize, order: Ordering) -> T

Atomically loads the value at index without bounds checking.
load_unchecked takes an Ordering argument which describes the memory ordering of this operation. For more information, see the Rust documentation on memory ordering.
§Safety
Calling this method with an out-of-bounds index is undefined behavior.
pub fn store(&self, index: usize, value: T, order: Ordering)

Atomically stores value at index.
§Panics
Panics if index is out of bounds. Note that the stored value is not checked for whether it fits in the configured bit_width and will be truncated if it is too large.
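A small usage sketch (illustrative; an explicit bit width is chosen so the stored value fits without truncation):
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
let data = vec![1u32, 2, 3];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4)) // room for values up to 15
    .build(&data)
    .unwrap();
vec.store(0, 9, Ordering::SeqCst);
assert_eq!(vec.load(0, Ordering::SeqCst), 9);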
pub unsafe fn store_unchecked(&self, index: usize, value: T, order: Ordering)

Atomically stores value at index without bounds checking.
§Safety
Calling this method with an out-of-bounds index is undefined behavior.
Note that the stored value is not checked for whether it fits in the configured bit_width and will be truncated if it is too large.
pub fn swap(&self, index: usize, value: T, order: Ordering) -> T

Atomically swaps the value at index with value, returning the previous value.
§Panics
Panics if index is out of bounds.
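A small usage sketch (illustrative values; the bit width is widened so the incoming value fits):
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
let data = vec![7u32, 8];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4))
    .build(&data)
    .unwrap();
// swap returns the value that was previously stored at the index.
let previous = vec.swap(0, 3, Ordering::SeqCst);
assert_eq!(previous, 7);
assert_eq!(vec.load(0, Ordering::SeqCst), 3);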
pub unsafe fn swap_unchecked(
    &self,
    index: usize,
    value: T,
    order: Ordering,
) -> T

Atomically swaps the value at index with value without bounds checking.
§Safety
Calling this method with an out-of-bounds index is undefined behavior.
pub fn compare_exchange(
    &self,
    index: usize,
    current: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T>

Atomically compares the value at index with current and, if they are equal, replaces it with new.
Returns Ok with the previous value on success, or Err with the actual value if the comparison fails. This is also known as a “compare-and-set” (CAS) operation.
§Panics
Panics if index is out of bounds.
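A small usage sketch (illustrative; it shows one successful and one failed exchange):
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
let data = vec![5u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4))
    .build(&data)
    .unwrap();
// Succeeds: the current value is 5, so it is replaced with 9.
assert_eq!(vec.compare_exchange(0, 5, 9, Ordering::SeqCst, Ordering::Relaxed), Ok(5));
// Fails: the current value is now 9, not 5; the actual value is returned.
assert_eq!(vec.compare_exchange(0, 5, 1, Ordering::SeqCst, Ordering::Relaxed), Err(9));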
pub unsafe fn compare_exchange_unchecked(
    &self,
    index: usize,
    current: T,
    new: T,
    success: Ordering,
    failure: Ordering,
) -> Result<T, T>

Atomically compares the value at index with current and, if they are equal, replaces it with new, without bounds checking.
Returns Ok with the previous value on success, or Err with the actual value if the comparison fails. This is also known as a “compare-and-set” (CAS) operation.
§Safety
Calling this method with an out-of-bounds index is undefined behavior.
pub fn get(&self, index: usize) -> Option<T>

Returns the element at index, or None if out of bounds.
This is an ergonomic wrapper around load that uses Ordering::SeqCst.
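A small usage sketch (illustrative):
use compressed_intvec::prelude::*;
let data = vec![4u32, 5, 6];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
assert_eq!(vec.get(1), Some(5));
// Out-of-bounds access returns None instead of panicking.
assert_eq!(vec.get(10), None);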
pub unsafe fn get_unchecked(&self, index: usize) -> T

Returns the element at index without bounds checking.
§Safety
Calling this method with an out-of-bounds index is undefined behavior.
pub fn iter(&self) -> impl Iterator<Item = T> + '_

Returns an iterator over the elements of the vector.
The iterator atomically loads each element using Ordering::SeqCst.
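A small usage sketch (illustrative; collecting the iterator back into a Vec round-trips the original data):
use compressed_intvec::prelude::*;
let data = vec![1u32, 2, 3];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
let decoded: Vec<u32> = vec.iter().collect();
assert_eq!(decoded, data);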
pub fn par_iter(&self) -> impl ParallelIterator<Item = T> + '_

Returns a parallel iterator over the elements of the vector.
The iterator atomically loads each element using Ordering::Relaxed.
This operation is highly parallelizable as each element can be loaded independently.
§Examples
use compressed_intvec::prelude::*;
use compressed_intvec::fixed::{AtomicFixedVec, UAtomicFixedVec, BitWidth};
use rayon::prelude::*;
use std::sync::atomic::Ordering;
let data: Vec<u32> = (0..1000).collect();
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.build(&data)
.unwrap();
// Sum the elements in parallel.
let sum: u32 = vec.par_iter().sum();
assert_eq!(sum, (0..1000).sum());
pub fn par_iter_mut(
    &self,
) -> impl ParallelIterator<Item = AtomicMutProxy<'_, T>>

Returns a parallel iterator that allows modifying elements of the vector in place.
Each element is accessed via an AtomicMutProxy, which ensures that all modifications are written back atomically.
§Examples
use compressed_intvec::prelude::*;
use compressed_intvec::fixed::{AtomicFixedVec, UAtomicFixedVec, BitWidth};
use rayon::prelude::*;
use std::sync::atomic::Ordering;
let data: Vec<u32> = (0..100).collect();
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(8)) // 2*99 = 198, needs 8 bits
.build(&data)
.unwrap();
vec.par_iter_mut().for_each(|mut proxy| {
*proxy *= 2;
});
assert_eq!(vec.load(50, Ordering::Relaxed), 100);
impl<T> AtomicFixedVec<T>
pub fn fetch_add(&self, index: usize, val: T, order: Ordering) -> T
where
    T: WrappingAdd,

Atomically adds to the value at index, returning the previous value.
This operation is a “read-modify-write” (RMW) operation. It atomically reads the value at index, adds val to it (with wrapping on overflow), and writes the result back.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// The initial value is 10. The result will be 15; 5 bits are chosen because the other element, 20, needs 5 bits.
let data = vec![10u32, 20];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(5))
.build(&data)
.unwrap();
let previous = vec.fetch_add(0, 5, Ordering::SeqCst);
assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 15);
pub fn fetch_sub(&self, index: usize, val: T, order: Ordering) -> T
where
    T: WrappingSub,

Atomically subtracts from the value at index, returning the previous value.
This is an atomic “read-modify-write” (RMW) operation.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// The initial value is 10. The result will be 5, which fits.
let data = vec![10u32, 20];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(5))
.build(&data)
.unwrap();
let previous = vec.fetch_sub(0, 5, Ordering::SeqCst);
assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 5);
pub fn fetch_and(&self, index: usize, val: T, order: Ordering) -> T
where
    T: BitAnd<Output = T>,

Atomically performs a bitwise AND on the value at index, returning the previous value.
This is an atomic “read-modify-write” (RMW) operation.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// 0b1100 = 12. Needs 4 bits.
let data = vec![12u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(4))
.build(&data)
.unwrap();
// 0b1010 = 10
let previous = vec.fetch_and(0, 10, Ordering::SeqCst);
assert_eq!(previous, 12);
// 0b1100 & 0b1010 = 0b1000 = 8
assert_eq!(vec.load(0, Ordering::SeqCst), 8);
pub fn fetch_or(&self, index: usize, val: T, order: Ordering) -> T
where
    T: BitOr<Output = T>,

Atomically performs a bitwise OR on the value at index, returning the previous value.
This is an atomic “read-modify-write” (RMW) operation.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// 0b1100 = 12. Needs 4 bits.
let data = vec![12u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(4))
.build(&data)
.unwrap();
// 0b1010 = 10
let previous = vec.fetch_or(0, 10, Ordering::SeqCst);
assert_eq!(previous, 12);
// 0b1100 | 0b1010 = 0b1110 = 14
assert_eq!(vec.load(0, Ordering::SeqCst), 14);
pub fn fetch_xor(&self, index: usize, val: T, order: Ordering) -> T
where
    T: BitXor<Output = T>,

Atomically performs a bitwise XOR on the value at index, returning the previous value.
This is an atomic “read-modify-write” (RMW) operation.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// 0b1100 = 12. Needs 4 bits.
let data = vec![12u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(4))
.build(&data)
.unwrap();
// 0b1010 = 10
let previous = vec.fetch_xor(0, 10, Ordering::SeqCst);
assert_eq!(previous, 12);
// 0b1100 ^ 0b1010 = 0b0110 = 6
assert_eq!(vec.load(0, Ordering::SeqCst), 6);
pub fn fetch_max(&self, index: usize, val: T, order: Ordering) -> T
where
    T: Ord,

Atomically computes the maximum of the value at index and val, returning the previous value.
This is an atomic “read-modify-write” (RMW) operation.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// Value 20 needs 6 bits with zig-zag encoding.
let data = vec![10i32];
let vec: SAtomicFixedVec<i32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(6))
.build(&data)
.unwrap();
// Attempt to store a larger value
let previous = vec.fetch_max(0, 20, Ordering::SeqCst);
assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 20);
// Attempt to store a smaller value
let previous2 = vec.fetch_max(0, 5, Ordering::SeqCst);
assert_eq!(previous2, 20);
assert_eq!(vec.load(0, Ordering::SeqCst), 20); // Value is unchanged
pub fn fetch_min(&self, index: usize, val: T, order: Ordering) -> T
where
    T: Ord,

Atomically computes the minimum of the value at index and val, returning the previous value.
This is an atomic “read-modify-write” (RMW) operation.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// Value 10 needs 5 bits with zig-zag encoding.
let data = vec![10i32];
let vec: SAtomicFixedVec<i32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(5))
.build(&data)
.unwrap();
// Attempt to store a smaller value
let previous = vec.fetch_min(0, 5, Ordering::SeqCst);
assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 5);
// Attempt to store a larger value
let previous2 = vec.fetch_min(0, 20, Ordering::SeqCst);
assert_eq!(previous2, 5);
assert_eq!(vec.load(0, Ordering::SeqCst), 5); // Value is unchanged
pub fn fetch_update<F>(
    &self,
    index: usize,
    success: Ordering,
    failure: Ordering,
    f: F,
) -> Result<T, T>

Atomically modifies the value at index using a closure.
Reads the value, applies the function f, and attempts to write the new value back. If the value has been changed by another thread in the meantime, the function is re-evaluated with the new current value.
The closure f can return None to abort the update.
§Panics
Panics if index is out of bounds.
§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;
// Value 20 needs 5 bits.
let data = vec![10u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
.bit_width(BitWidth::Explicit(5))
.build(&data)
.unwrap();
// Successfully update the value
let result = vec.fetch_update(0, Ordering::SeqCst, Ordering::Relaxed, |val| {
Some(val * 2)
});
assert_eq!(result, Ok(10));
assert_eq!(vec.load(0, Ordering::SeqCst), 20);
// Abort the update
let result_aborted = vec.fetch_update(0, Ordering::SeqCst, Ordering::Relaxed, |val| {
if val > 15 {
None // Abort if value is > 15
} else {
Some(val + 1)
}
});
assert_eq!(result_aborted, Err(20));
assert_eq!(vec.load(0, Ordering::SeqCst), 20); // Value remains unchanged
Trait Implementations§

impl<T> Debug for AtomicFixedVec<T>

impl<T> From<AtomicFixedVec<T>> for FixedVec<T, u64, LE, Vec<u64>>

fn from(atomic_vec: AtomicFixedVec<T>) -> Self

Creates a FixedVec from an owned AtomicFixedVec.
This is a zero-copy operation that re-uses the allocated buffer.
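A conversion sketch (illustrative; it assumes FixedVec and LE are available through the prelude, as the target type of this impl suggests):
use compressed_intvec::prelude::*;
let data = vec![1u32, 2, 3];
let atomic: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
// Consumes the atomic vector and reuses its buffer; no data is copied.
let plain: FixedVec<u32, u64, LE, Vec<u64>> = FixedVec::from(atomic);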
impl<T, W, E> From<FixedVec<T, W, E>> for AtomicFixedVec<T>

impl<'a, T> IntoIterator for &'a AtomicFixedVec<T>

impl<T: Storable<u64>> MemDbgImpl for AtomicFixedVec<T>

fn _mem_dbg_rec_on(&self, writer: &mut impl Write, total_size: usize, max_depth: usize, prefix: &mut String, _is_last: bool, flags: DbgFlags) -> Result

fn _mem_dbg_depth_on(&self, writer: &mut impl Write, total_size: usize, max_depth: usize, prefix: &mut String, field_name: Option<&str>, is_last: bool, padded_size: usize, flags: DbgFlags) -> Result<(), Error>
impl<T> MemSize for AtomicFixedVec<T>

impl<T> PartialEq for AtomicFixedVec<T>

fn eq(&self, other: &Self) -> bool

Checks for equality between two AtomicFixedVec instances.
This comparison is performed by iterating over both vectors and comparing their elements one by one. The reads are done atomically, but the overall comparison is not a single atomic operation.
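A small sketch of element-wise equality (illustrative):
use compressed_intvec::prelude::*;
let data = vec![1u32, 2, 3];
let a: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
let b: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
// Equality compares the decoded elements one by one, each read atomically.
assert_eq!(a, b);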
impl<T> TryFrom<&[T]> for AtomicFixedVec<T>

impl<T> Eq for AtomicFixedVec<T>
Auto Trait Implementations§

impl<T> Freeze for AtomicFixedVec<T>
impl<T> !RefUnwindSafe for AtomicFixedVec<T>
impl<T> Send for AtomicFixedVec<T> where T: Send
impl<T> Sync for AtomicFixedVec<T> where T: Sync
impl<T> Unpin for AtomicFixedVec<T> where T: Unpin
impl<T> UnwindSafe for AtomicFixedVec<T> where T: UnwindSafe
Blanket Implementations§

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

impl<T, U> CastableInto<U> for T
where
    U: CastableFrom<T>,

impl<T> DowncastableFrom<T> for T

fn downcast_from(value: T) -> T

impl<T, U> DowncastableInto<U> for T
where
    U: DowncastableFrom<T>,

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> MemDbg for T
where
    T: MemDbgImpl,

fn mem_dbg(&self, flags: DbgFlags) -> Result<(), Error>

fn mem_dbg_on(&self, writer: &mut impl Write, flags: DbgFlags) -> Result<(), Error>

Writes to a core::fmt::Write debug infos about the structure memory usage, expanding all levels of nested structures.

fn mem_dbg_depth(&self, max_depth: usize, flags: DbgFlags) -> Result<(), Error>

As mem_dbg, but expanding only up to max_depth levels of nested structures.

fn mem_dbg_depth_on(&self, writer: &mut impl Write, max_depth: usize, flags: DbgFlags) -> Result<(), Error>

Writes to a core::fmt::Write debug infos about the structure memory usage as mem_dbg_on, but expanding only up to max_depth levels of nested structures.