Struct AtomicFixedVec

pub struct AtomicFixedVec<T>
where T: Storable<u64>,
{ /* private fields */ }

A thread-safe, compressed, randomly accessible vector of integers with fixed-width encoding, backed by u64 atomic words.

Implementations§

impl<T> AtomicFixedVec<T>
where T: Storable<u64> + Copy + ToPrimitive,

pub fn builder() -> AtomicFixedVecBuilder<T>

Creates a builder for constructing an AtomicFixedVec from a slice.

§Examples
use compressed_intvec::prelude::*;
use compressed_intvec::fixed::{AtomicFixedVec, SAtomicFixedVec, BitWidth};

let data: &[i16] = &[-100, 0, 100, 200];
let vec: SAtomicFixedVec<i16> = AtomicFixedVec::builder()
    .bit_width(BitWidth::PowerOfTwo) // Rounds the minimal signed width (9 bits after zig-zag) up to 16
    .build(data)
    .unwrap();

assert_eq!(vec.len(), 4);
assert_eq!(vec.bit_width(), 16);

pub fn len(&self) -> usize

Returns the number of elements in the vector.

pub fn is_empty(&self) -> bool

Returns true if the vector contains no elements.
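
§Examples
A minimal sketch (illustrative, not taken from the crate's own docs) covering len and is_empty, using the builder shown above:

use compressed_intvec::prelude::*;

let data = vec![1u32, 2, 3];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

assert_eq!(vec.len(), 3);
assert!(!vec.is_empty());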

pub fn bit_width(&self) -> usize

Returns the number of bits used to encode each element.

pub fn as_slice(&self) -> &[AtomicU64]

Returns a read-only slice of the underlying atomic storage words.
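
§Examples
An illustrative sketch (not from the crate's own docs); the exact word count depends on how elements are packed, so it is only printed here:

use compressed_intvec::prelude::*;
use std::sync::atomic::AtomicU64;

let data = vec![1u32, 2, 3];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

let words: &[AtomicU64] = vec.as_slice();
// Small values pack many elements into each u64 word.
println!("{} elements packed into {} u64 words", vec.len(), words.len());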

pub fn load(&self, index: usize, order: Ordering) -> T

Atomically loads the value at index.

load takes an Ordering argument which describes the memory ordering of this operation. For more information, see the Rust documentation on memory ordering.

§Panics

Panics if index is out of bounds.
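
§Examples
A minimal sketch (illustrative, not from the crate's own docs):

use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

let data = vec![7u32, 8, 9];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

assert_eq!(vec.load(1, Ordering::SeqCst), 8);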

pub unsafe fn load_unchecked(&self, index: usize, order: Ordering) -> T

Atomically loads the value at index without bounds checking.

load_unchecked takes an Ordering argument which describes the memory ordering of this operation. For more information, see the Rust documentation on memory ordering.

§Safety

Calling this method with an out-of-bounds index is undefined behavior.
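
§Examples
A sketch of sound usage (illustrative, not from the crate's own docs): the index is checked against len before the unsafe call.

use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

let data = vec![7u32, 8, 9];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

let index = 2;
assert!(index < vec.len());
// SAFETY: index was just checked to be in bounds.
let value = unsafe { vec.load_unchecked(index, Ordering::Relaxed) };
assert_eq!(value, 9);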

pub fn store(&self, index: usize, value: T, order: Ordering)

Atomically stores value at index.

§Panics

Panics if index is out of bounds. Note that value is not checked against the configured bit_width; a value too wide to encode is silently truncated.
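
§Examples
A minimal sketch (illustrative, not from the crate's own docs); the explicit width leaves room for the new value:

use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

let data = vec![1u32, 2];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4)) // 9 fits in 4 bits
    .build(&data)
    .unwrap();

vec.store(0, 9, Ordering::SeqCst);
assert_eq!(vec.load(0, Ordering::SeqCst), 9);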

pub unsafe fn store_unchecked(&self, index: usize, value: T, order: Ordering)

Atomically stores value at index without bounds checking.

§Safety

Calling this method with an out-of-bounds index is undefined behavior. Note that value is not checked against the configured bit_width; a value too wide to encode is silently truncated.

pub fn swap(&self, index: usize, value: T, order: Ordering) -> T

Atomically swaps the value at index with value, returning the previous value.

§Panics

Panics if index is out of bounds.
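
§Examples
A minimal sketch (illustrative, not from the crate's own docs):

use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

let data = vec![3u32, 4];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4)) // room for the new value 7
    .build(&data)
    .unwrap();

let previous = vec.swap(0, 7, Ordering::SeqCst);
assert_eq!(previous, 3);
assert_eq!(vec.load(0, Ordering::SeqCst), 7);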

pub unsafe fn swap_unchecked(&self, index: usize, value: T, order: Ordering) -> T

Atomically swaps the value at index with value without bounds checking.

§Safety

Calling this method with an out-of-bounds index is undefined behavior.

pub fn compare_exchange(&self, index: usize, current: T, new: T, success: Ordering, failure: Ordering) -> Result<T, T>

Atomically compares the value at index with current and, if they are equal, replaces it with new.

Returns Ok with the previous value on success, or Err with the actual value if the comparison fails. This is also known as a “compare-and-set” (CAS) operation.

§Panics

Panics if index is out of bounds.
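
§Examples
A minimal sketch (illustrative, not from the crate's own docs) showing both outcomes:

use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

let data = vec![5u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4))
    .build(&data)
    .unwrap();

// Succeeds: the current value is 5.
assert_eq!(
    vec.compare_exchange(0, 5, 6, Ordering::SeqCst, Ordering::Relaxed),
    Ok(5)
);
// Fails: the current value is now 6, not 5.
assert_eq!(
    vec.compare_exchange(0, 5, 7, Ordering::SeqCst, Ordering::Relaxed),
    Err(6)
);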

pub unsafe fn compare_exchange_unchecked(&self, index: usize, current: T, new: T, success: Ordering, failure: Ordering) -> Result<T, T>

Atomically compares the value at index with current and, if they are equal, replaces it with new, without bounds checking.

Returns Ok with the previous value on success, or Err with the actual value if the comparison fails. This is also known as a “compare-and-set” (CAS) operation.

§Safety

Calling this method with an out-of-bounds index is undefined behavior.

pub fn get(&self, index: usize) -> Option<T>

Returns the element at index, or None if out of bounds.

This is an ergonomic wrapper around load that uses Ordering::SeqCst.
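
§Examples
A minimal sketch (illustrative, not from the crate's own docs):

use compressed_intvec::prelude::*;

let data = vec![1u32, 2];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

assert_eq!(vec.get(1), Some(2));
assert_eq!(vec.get(5), None);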

pub unsafe fn get_unchecked(&self, index: usize) -> T

Returns the element at index without bounds checking.

§Safety

Calling this method with an out-of-bounds index is undefined behavior.

pub fn iter(&self) -> impl Iterator<Item = T> + '_

Returns an iterator over the elements of the vector.

The iterator atomically loads each element using Ordering::SeqCst.
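
§Examples
A minimal sketch (illustrative, not from the crate's own docs):

use compressed_intvec::prelude::*;

let data = vec![1u32, 2, 3];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

let collected: Vec<u32> = vec.iter().collect();
assert_eq!(collected, data);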

pub fn par_iter(&self) -> impl ParallelIterator<Item = T> + '_
where T: Send + Sync,

Returns a parallel iterator over the elements of the vector.

The iterator atomically loads each element using Ordering::Relaxed. This operation is highly parallelizable as each element can be loaded independently.

§Examples
use compressed_intvec::prelude::*;
use compressed_intvec::fixed::{AtomicFixedVec, UAtomicFixedVec, BitWidth};
use rayon::prelude::*;
use std::sync::atomic::Ordering;

let data: Vec<u32> = (0..1000).collect();
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .build(&data)
    .unwrap();

// Sum the elements in parallel.
let sum: u32 = vec.par_iter().sum();
assert_eq!(sum, (0..1000).sum());

pub fn par_iter_mut(&self) -> impl ParallelIterator<Item = AtomicMutProxy<'_, T>>
where T: Send + Sync,

Returns a parallel iterator that allows modifying elements of the vector in place.

Each element is accessed via an AtomicMutProxy, which ensures that all modifications are written back atomically.

§Examples
use compressed_intvec::prelude::*;
use compressed_intvec::fixed::{AtomicFixedVec, UAtomicFixedVec, BitWidth};
use rayon::prelude::*;
use std::sync::atomic::Ordering;

let data: Vec<u32> = (0..100).collect();
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(8)) // 2*99 = 198, needs 8 bits
    .build(&data)
    .unwrap();

vec.par_iter_mut().for_each(|mut proxy| {
    *proxy *= 2;
});

assert_eq!(vec.load(50, Ordering::Relaxed), 100);

impl<T> AtomicFixedVec<T>

pub fn fetch_add(&self, index: usize, val: T, order: Ordering) -> T
where T: WrappingAdd,

Atomically adds to the value at index, returning the previous value.

This operation is a “read-modify-write” (RMW) operation. It atomically reads the value at index, adds val to it (with wrapping on overflow), and writes the result back.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// The initial value is 10; adding 5 yields 15, which fits in the explicit 5-bit width.
let data = vec![10u32, 20];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(5))
    .build(&data)
    .unwrap();

let previous = vec.fetch_add(0, 5, Ordering::SeqCst);

assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 15);

pub fn fetch_sub(&self, index: usize, val: T, order: Ordering) -> T
where T: WrappingSub,

Atomically subtracts from the value at index, returning the previous value.

This is an atomic “read-modify-write” (RMW) operation.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// The initial value is 10. The result will be 5, which fits.
let data = vec![10u32, 20];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(5))
    .build(&data)
    .unwrap();

let previous = vec.fetch_sub(0, 5, Ordering::SeqCst);

assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 5);

pub fn fetch_and(&self, index: usize, val: T, order: Ordering) -> T
where T: BitAnd<Output = T>,

Atomically performs a bitwise AND on the value at index, returning the previous value.

This is an atomic “read-modify-write” (RMW) operation.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// 0b1100 = 12. Needs 4 bits.
let data = vec![12u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4))
    .build(&data)
    .unwrap();

// 0b1010 = 10
let previous = vec.fetch_and(0, 10, Ordering::SeqCst);

assert_eq!(previous, 12);
// 0b1100 & 0b1010 = 0b1000 = 8
assert_eq!(vec.load(0, Ordering::SeqCst), 8);

pub fn fetch_or(&self, index: usize, val: T, order: Ordering) -> T
where T: BitOr<Output = T>,

Atomically performs a bitwise OR on the value at index, returning the previous value.

This is an atomic “read-modify-write” (RMW) operation.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// 0b1100 = 12. Needs 4 bits.
let data = vec![12u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4))
    .build(&data)
    .unwrap();

// 0b1010 = 10
let previous = vec.fetch_or(0, 10, Ordering::SeqCst);

assert_eq!(previous, 12);
// 0b1100 | 0b1010 = 0b1110 = 14
assert_eq!(vec.load(0, Ordering::SeqCst), 14);

pub fn fetch_xor(&self, index: usize, val: T, order: Ordering) -> T
where T: BitXor<Output = T>,

Atomically performs a bitwise XOR on the value at index, returning the previous value.

This is an atomic “read-modify-write” (RMW) operation.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// 0b1100 = 12. Needs 4 bits.
let data = vec![12u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(4))
    .build(&data)
    .unwrap();

// 0b1010 = 10
let previous = vec.fetch_xor(0, 10, Ordering::SeqCst);

assert_eq!(previous, 12);
// 0b1100 ^ 0b1010 = 0b0110 = 6
assert_eq!(vec.load(0, Ordering::SeqCst), 6);

pub fn fetch_max(&self, index: usize, val: T, order: Ordering) -> T
where T: Ord,

Atomically computes the maximum of the value at index and val, returning the previous value.

This is an atomic “read-modify-write” (RMW) operation.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// Value 20 needs 6 bits with zig-zag encoding.
let data = vec![10i32];
let vec: SAtomicFixedVec<i32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(6))
    .build(&data)
    .unwrap();

// Attempt to store a larger value
let previous = vec.fetch_max(0, 20, Ordering::SeqCst);
assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 20);

// Attempt to store a smaller value
let previous2 = vec.fetch_max(0, 5, Ordering::SeqCst);
assert_eq!(previous2, 20);
assert_eq!(vec.load(0, Ordering::SeqCst), 20); // Value is unchanged

pub fn fetch_min(&self, index: usize, val: T, order: Ordering) -> T
where T: Ord,

Atomically computes the minimum of the value at index and val, returning the previous value.

This is an atomic “read-modify-write” (RMW) operation.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// Value 10 needs 5 bits with zig-zag encoding.
let data = vec![10i32];
let vec: SAtomicFixedVec<i32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(5))
    .build(&data)
    .unwrap();

// Attempt to store a smaller value
let previous = vec.fetch_min(0, 5, Ordering::SeqCst);
assert_eq!(previous, 10);
assert_eq!(vec.load(0, Ordering::SeqCst), 5);

// Attempt to store a larger value
let previous2 = vec.fetch_min(0, 20, Ordering::SeqCst);
assert_eq!(previous2, 5);
assert_eq!(vec.load(0, Ordering::SeqCst), 5); // Value is unchanged

pub fn fetch_update<F>(&self, index: usize, success: Ordering, failure: Ordering, f: F) -> Result<T, T>
where F: FnMut(T) -> Option<T>,

Atomically modifies the value at index using a closure.

Reads the value, applies the function f, and attempts to write the new value back. If the value has been changed by another thread in the meantime, the function is re-evaluated with the new current value.

The closure f can return None to abort the update.

§Panics

Panics if index is out of bounds.

§Examples
use compressed_intvec::prelude::*;
use std::sync::atomic::Ordering;

// Value 20 needs 5 bits.
let data = vec![10u32];
let vec: UAtomicFixedVec<u32> = AtomicFixedVec::builder()
    .bit_width(BitWidth::Explicit(5))
    .build(&data)
    .unwrap();

// Successfully update the value
let result = vec.fetch_update(0, Ordering::SeqCst, Ordering::Relaxed, |val| {
    Some(val * 2)
});
assert_eq!(result, Ok(10));
assert_eq!(vec.load(0, Ordering::SeqCst), 20);

// Abort the update
let result_aborted = vec.fetch_update(0, Ordering::SeqCst, Ordering::Relaxed, |val| {
    if val > 15 {
        None // Abort if value is > 15
    } else {
        Some(val + 1)
    }
});
assert_eq!(result_aborted, Err(20));
assert_eq!(vec.load(0, Ordering::SeqCst), 20); // Value remains unchanged

Trait Implementations§

impl<T> Debug for AtomicFixedVec<T>
where T: Storable<u64> + Debug,

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<T> From<AtomicFixedVec<T>> for FixedVec<T, u64, LE, Vec<u64>>
where T: Storable<u64>,

fn from(atomic_vec: AtomicFixedVec<T>) -> Self

Creates a FixedVec from an owned AtomicFixedVec. This is a zero-copy operation that reuses the allocated buffer.
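
A sketch of the conversion (illustrative; it assumes FixedVec and the LE endianness marker are importable via the prelude, which this page does not confirm):

use compressed_intvec::prelude::*;

let data = vec![1u32, 2, 3];
let atomic: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

// Consumes the atomic vector and reuses its buffer without copying.
let fixed: FixedVec<u32, u64, LE, Vec<u64>> = atomic.into();
// Assumes FixedVec exposes len() like its atomic counterpart.
assert_eq!(fixed.len(), 3);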

impl<T, W, E> From<FixedVec<T, W, E>> for AtomicFixedVec<T>
where T: Storable<W> + Storable<u64>, W: Word, E: Endianness,

fn from(fixed_vec: FixedVec<T, W, E, Vec<W>>) -> Self

Creates an AtomicFixedVec from an owned FixedVec. This is a zero-copy operation that reuses the allocated buffer.

impl<'a, T> IntoIterator for &'a AtomicFixedVec<T>
where T: Storable<u64> + Copy + ToPrimitive,

type Item = T

The type of the elements being iterated over.

type IntoIter = AtomicFixedVecIter<'a, T>

Which kind of iterator are we turning this into?

fn into_iter(self) -> Self::IntoIter

Creates an iterator from a value.

impl<T: Storable<u64>> MemDbgImpl for AtomicFixedVec<T>

fn _mem_dbg_rec_on(&self, writer: &mut impl Write, total_size: usize, max_depth: usize, prefix: &mut String, _is_last: bool, flags: DbgFlags) -> Result

fn _mem_dbg_depth_on(&self, writer: &mut impl Write, total_size: usize, max_depth: usize, prefix: &mut String, field_name: Option<&str>, is_last: bool, padded_size: usize, flags: DbgFlags) -> Result<(), Error>

impl<T> MemSize for AtomicFixedVec<T>
where T: Storable<u64>,

fn mem_size(&self, flags: SizeFlags) -> usize

Returns the (recursively computed) overall memory size of the structure in bytes.

impl<T> PartialEq for AtomicFixedVec<T>

fn eq(&self, other: &Self) -> bool

Checks for equality between two AtomicFixedVec instances.

This comparison is performed by iterating over both vectors and comparing their elements one by one. The reads are done atomically but the overall comparison is not a single atomic operation.
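
A minimal sketch (illustrative, not from the crate's own docs):

use compressed_intvec::prelude::*;

let data = vec![1u32, 2, 3];
let a: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();
let b: UAtomicFixedVec<u32> = AtomicFixedVec::builder().build(&data).unwrap();

// Element-wise comparison; each element is loaded atomically.
assert_eq!(a, b);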

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl<T> TryFrom<&[T]> for AtomicFixedVec<T>
where T: Storable<u64> + Copy + ToPrimitive,

fn try_from(slice: &[T]) -> Result<Self, Self::Error>

Creates an AtomicFixedVec<T> from a slice using BitWidth::Minimal.
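
A minimal sketch (illustrative, not from the crate's own docs), assuming BitWidth::Minimal selects exactly the number of bits needed for the largest value:

use compressed_intvec::prelude::*;

let slice: &[u32] = &[1, 2, 3];
let vec: AtomicFixedVec<u32> = AtomicFixedVec::try_from(slice).unwrap();

assert_eq!(vec.len(), 3);
// Largest value is 3, which needs 2 bits.
assert_eq!(vec.bit_width(), 2);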

type Error = Error

The type returned in the event of a conversion error.

impl<T> Eq for AtomicFixedVec<T>
where T: Storable<u64> + Eq + Copy + ToPrimitive,

Auto Trait Implementations§

impl<T> Freeze for AtomicFixedVec<T>

impl<T> !RefUnwindSafe for AtomicFixedVec<T>

impl<T> Send for AtomicFixedVec<T>
where T: Send,

impl<T> Sync for AtomicFixedVec<T>
where T: Sync,

impl<T> Unpin for AtomicFixedVec<T>
where T: Unpin,

impl<T> UnwindSafe for AtomicFixedVec<T>
where T: UnwindSafe,
