Struct ConcurrentVec

Source
pub struct ConcurrentVec<T, P = SplitVec<ConcurrentElement<T>, Doubling>>{ /* private fields */ }
Expand description

A thread-safe, efficient and lock-free vector allowing concurrent grow, read and update operations.

ConcurrentVec provides a safe API for the following three sets of concurrent operations: grow, read and update.

§Examples

use orx_concurrent_vec::*;
use std::time::Duration;

#[derive(Debug, Default)]
struct Metric {
    sum: i32,
    count: i32,
}
impl Metric {
    fn aggregate(self, value: &i32) -> Self {
        Self {
            sum: self.sum + value,
            count: self.count + 1,
        }
    }
}

// record measurements in random intervals, roughly every 2ms
let measurements = ConcurrentVec::new();

// collect metrics every 100 milliseconds
let metrics = ConcurrentVec::new();

std::thread::scope(|s| {
    // thread to store measurements as they arrive
    s.spawn(|| {
        for i in 0..100 {
            std::thread::sleep(Duration::from_millis(i % 5));

            // collect measurements and push to measurements vec
            measurements.push(i as i32);
        }
    });

    // thread to collect metrics every 100 milliseconds
    s.spawn(|| {
        for _ in 0..10 {
            // safely read from measurements vec to compute the metric at that instant
            let metric =
                measurements.fold(Metric::default(), |x, value| x.aggregate(value));

            // push result to metrics
            metrics.push(metric);

            std::thread::sleep(Duration::from_millis(100));
        }
    });
});

let measurements: Vec<_> = measurements.to_vec();
let averages: Vec<_> = metrics.to_vec();

assert_eq!(measurements.len(), 100);
assert_eq!(averages.len(), 10);

Implementations§

Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn clear(&mut self)

Clears the concurrent bag.

Source

pub fn reserve_maximum_capacity(&mut self, new_maximum_capacity: usize) -> usize

Note that ConcurrentVec::maximum_capacity returns the maximum possible number of elements that the underlying pinned vector can grow to without an explicit reservation.

In other words, the pinned vector can automatically grow up to the ConcurrentVec::maximum_capacity with write and write_n_items methods, using only a shared reference.

When a larger maximum capacity is required, this method attempts to increase it, requiring a mutable reference.

Importantly, note that the maximum capacity does not correspond to already allocated memory.

Among the common pinned vector implementations:

  • SplitVec<_, Doubling>: supports this method; however, it is not needed for any practical size.
  • SplitVec<_, Linear>: is guaranteed to succeed and increase its maximum capacity to the required value.
  • FixedVec<_>: is the most strict pinned vector which cannot grow even in a single-threaded setting. Currently, it will always return an error to this call.
§Safety

This method requires care since the concurrent pinned vector might contain gaps. The vector must be gap-free while the maximum capacity is being increased.

This method can safely be called if the entries in all positions 0..len are written.

Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn push(&self, value: T) -> usize

Concurrent, thread-safe method that pushes the given value to the back of the bag and returns the position, or index, of the pushed value.

It preserves the order of elements with respect to the order in which the push method is called.

§Panics

Panics if the concurrent bag is already at its maximum capacity; i.e., if self.len() == self.maximum_capacity().

Note that this is an important safety assertion in the concurrent context; however, not a practical limitation. Please see the orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity for details.

§Examples

We can directly take a shared reference of the bag, share it among threads and collect results concurrently.

use orx_concurrent_vec::*;

let (num_threads, num_items_per_thread) = (4, 1_024);

let vec = ConcurrentVec::new();

std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in 0..num_items_per_thread {
                // concurrently collect results simply by calling `push`
                vec.push(i * 1000 + j);
            }
        });
    }
});

let mut vec = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
§Performance Notes - False Sharing

The ConcurrentVec::push implementation is lock-free and focuses on efficiency. However, we need to be aware of the potential false sharing risk, which might lead to significant performance degradation; fortunately, it can be avoided in many cases.

§When?

Performance degradation due to false sharing might be observed when both of the following conditions hold:

  • small data: data to be pushed is small, the more elements fitting in a cache line the bigger the risk,
  • little work: multiple threads/cores are pushing to the concurrent bag with high frequency; i.e.,
    • very little or negligible work / time is required in between push calls.

The example above fits this situation. Each thread only performs one multiplication and addition in between pushing elements, and the elements to be pushed are very small, just one usize.

§Why?
  • ConcurrentVec assigns unique positions to each value to be pushed. There is no true sharing among threads at the position level.
  • However, cache lines contain more than one position.
  • One thread updating a particular position invalidates the entire cache line on another thread.
  • Threads end up frequently reloading cache lines instead of doing the actual work of writing elements to the bag.
  • This might lead to significant performance degradation.
§Solution: extend rather than push

One very simple, effective and memory efficient solution to this problem is to use ConcurrentVec::extend rather than push in small data & little work situations.

Assume that we will have 4 threads and each will push 1_024 elements. Instead of making 1_024 push calls from each thread, we can make one extend call from each. This would give the best performance. Further, it has zero buffer or memory cost:

  • it is important to note that the batch of 1_024 elements is not stored temporarily in another buffer,
  • there is no additional allocation,
  • extend does nothing more than reserving the position range for the thread by incrementing the atomic counter accordingly.

However, we do not need such perfect information about the number of elements to be pushed. Performance gains beyond the cache line size are much smaller.

For instance, consider the challenging super small element size case, where we are collecting i32s. We can already achieve a very high performance by simply extending the bag by batches of 16 elements.

As the element size gets larger, required batch size to achieve a high performance gets smaller and smaller.

The required change from push to extend is not significant. The example above can be revised as follows to avoid the performance degradation due to false sharing.

use orx_concurrent_vec::*;

let (num_threads, num_items_per_thread) = (4, 1_024);

let vec = ConcurrentVec::new();
let batch_size = 16;

std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in (0..num_items_per_thread).step_by(batch_size) {
                let iter = (j..(j + batch_size)).map(|j| i * 1000 + j);
                // concurrently collect results simply by calling `extend`
                vec.extend(iter);
            }
        });
    }
});

let mut vec = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
Source

pub fn push_for_idx<F>(&self, f: F) -> usize
where F: FnOnce(usize) -> T,

Pushes the value which will be computed as a function of the index where it will be written.

Note that we cannot guarantee the index of the element by pushing since there might be many pushes happening concurrently. In cases where we absolutely need to know the index, in other words, when the value depends on the index, we can use push_for_idx.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::new();
vec.push(0);
vec.push_for_idx(|i| i * 2);
vec.push_for_idx(|i| i + 10);
vec.push(42);

assert_eq!(&vec, &[0, 2, 12, 42]);
Source

pub fn extend<IntoIter, Iter>(&self, values: IntoIter) -> usize
where IntoIter: IntoIterator<Item = T, IntoIter = Iter>, Iter: Iterator<Item = T> + ExactSizeIterator,

Concurrent, thread-safe method to push all values that the given iterator will yield to the back of the bag. The method returns the position or index of the first pushed value (returns the length of the concurrent bag if the iterator is empty).

All values in the iterator will be added to the bag consecutively:

  • the first yielded value will be written to the position which is equal to the current length of the bag, say begin_idx, which is the returned value,
  • the second yielded value will be written to the begin_idx + 1-th position,
  • and the last value will be written to the begin_idx + values.count() - 1-th position of the bag.

Important notes:

  • This method does not allocate into a temporary buffer.
  • All it does is to increment the atomic counter by the length of the iterator (push would increment by 1) and reserve the range of positions for this operation.
  • If there is not sufficient space, the vector grows first; iterating over and writing elements to the vec happens afterwards.
  • Therefore, other threads do not wait for the extend method to complete, they can concurrently write.
  • This is a simple and effective approach to deal with the false sharing problem.

For this reason, the method requires an ExactSizeIterator. There exists the variant ConcurrentVec::extend_n_items method which accepts any iterator together with the correct length to be passed by the caller. It is unsafe as the caller must guarantee that the iterator yields at least the number of elements explicitly passed in as an argument.

§Panics

Panics if not all of the values fit in the concurrent bag’s maximum capacity.

Note that this is an important safety assertion in the concurrent context; however, not a practical limitation. Please see the orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity for details.

§Examples

We can directly take a shared reference of the bag and share it among threads.

use orx_concurrent_vec::*;

let (num_threads, num_items_per_thread) = (4, 1_024);

let vec = ConcurrentVec::new();
let batch_size = 16;

std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in (0..num_items_per_thread).step_by(batch_size) {
                let iter = (j..(j + batch_size)).map(|j| i * 1000 + j);
                // concurrently collect results simply by calling `extend`
                vec.extend(iter);
            }
        });
    }
});

let mut vec: Vec<_> = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
§Performance Notes - False Sharing

The ConcurrentVec::push implementation is simple, lock-free and efficient. However, we need to be aware of the potential false sharing risk. False sharing might lead to significant performance degradation; fortunately, it can be avoided in many cases.

§When?

Performance degradation due to false sharing might be observed when both of the following conditions hold:

  • small data: data to be pushed is small, the more elements fitting in a cache line the bigger the risk,
  • little work: multiple threads/cores are pushing to the concurrent bag with high frequency; i.e.,
    • very little or negligible work / time is required in between push calls.

The example above fits this situation. Each thread only performs one multiplication and addition for computing elements, and the elements to be pushed are very small, just one usize.

§Why?
  • ConcurrentVec assigns unique positions to each value to be pushed. There is no true sharing among threads at the position level.
  • However, cache lines contain more than one position.
  • One thread updating a particular position invalidates the entire cache line on another thread.
  • Threads end up frequently reloading cache lines instead of doing the actual work of writing elements to the bag.
  • This might lead to significant performance degradation.
§Solution: extend rather than push

One very simple, effective and memory efficient solution to the false sharing problem is to use ConcurrentVec::extend rather than push in small data & little work situations.

Assume that we will have 4 threads and each will push 1_024 elements. Instead of making 1_024 push calls from each thread, we can make one extend call from each. This would give the best performance. Further, it has zero buffer or memory cost:

  • it is important to note that the batch of 1_024 elements is not stored temporarily in another buffer,
  • there is no additional allocation,
  • extend does nothing more than reserving the position range for the thread by incrementing the atomic counter accordingly.

However, we do not need such perfect information about the number of elements to be pushed. Performance gains beyond the cache line size are much smaller.

For instance, consider the challenging super small element size case, where we are collecting i32s. We can already achieve a very high performance by simply extending the bag by batches of 16 elements.

As the element size gets larger, required batch size to achieve a high performance gets smaller and smaller.

The example code above already demonstrates the solution to a potentially problematic case in the ConcurrentVec::push example.

Source

pub fn extend_for_idx<IntoIter, Iter, F>(&self, f: F, num_items: usize) -> usize
where IntoIter: IntoIterator<Item = T, IntoIter = Iter>, Iter: Iterator<Item = T> + ExactSizeIterator, F: FnOnce(usize) -> IntoIter,

Extends the vector with the values of the iterator which is created as a function of the index that the first element of the iterator will be written to.

Note that we cannot guarantee the index of the element by extending since there might be many pushes or extends happening concurrently. In cases where we absolutely need to know the index, in other words, when the values depend on the indices, we can use extend_for_idx.

§Panics

Panics if the iterator created by f does not yield num_items elements.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::new();

vec.push(0);

let iter = |begin_idx: usize| ((begin_idx..(begin_idx + 3)).map(|i| i * 5));
vec.extend_for_idx(|begin_idx| iter(begin_idx), 3);
vec.push(42);

assert_eq!(&vec, &[0, 5, 10, 15, 42]);
Source

pub unsafe fn extend_n_items<IntoIter>( &self, values: IntoIter, num_items: usize, ) -> usize
where IntoIter: IntoIterator<Item = T>,

Concurrent, thread-safe method to push num_items elements yielded by the values iterator to the back of the bag. The method returns the position or index of the first pushed value (returns the length of the concurrent bag if the iterator is empty).

All values in the iterator will be added to the bag consecutively:

  • the first yielded value will be written to the position which is equal to the current length of the bag, say begin_idx, which is the returned value,
  • the second yielded value will be written to the begin_idx + 1-th position,
  • and the last value will be written to the begin_idx + num_items - 1-th position of the bag.

Important notes:

  • This method does not allocate at all to buffer the elements to be pushed.
  • All it does is to increment the atomic counter by the length of the iterator (push would increment by 1) and reserve the range of positions for this operation.
  • Iterating over and writing elements to the vec happens afterwards.
  • This is a simple, effective and memory efficient solution to the false sharing problem.

For this reason, the method requires the additional num_items argument. There exists the variant ConcurrentVec::extend method which accepts only an ExactSizeIterator.

§Panics

Panics if the values iterator does not yield num_items elements.

§Examples

We can directly take a shared reference of the bag and share it among threads.

use orx_concurrent_vec::*;

let (num_threads, num_items_per_thread) = (4, 1_024);

let vec = ConcurrentVec::new();
let batch_size = 16;

std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in (0..num_items_per_thread).step_by(batch_size) {
                let iter = (j..(j + batch_size)).map(|j| i * 1000 + j);
                // concurrently collect results simply by calling `extend_n_items`
                unsafe { vec.extend_n_items(iter, batch_size) };
            }
        });
    }
});

let mut vec: Vec<_> = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
§Performance Notes - False Sharing

The ConcurrentVec::push implementation is simple, lock-free and efficient. However, we need to be aware of the potential false sharing risk. False sharing might lead to significant performance degradation; fortunately, it can be avoided in many cases.

§When?

Performance degradation due to false sharing might be observed when both of the following conditions hold:

  • small data: data to be pushed is small, the more elements fitting in a cache line the bigger the risk,
  • little work: multiple threads/cores are pushing to the concurrent bag with high frequency; i.e.,
    • very little or negligible work / time is required in between push calls.

The example above fits this situation. Each thread only performs one multiplication and addition for computing elements, and the elements to be pushed are very small, just one usize.

§Why?
  • ConcurrentVec assigns unique positions to each value to be pushed. There is no true sharing among threads at the position level.
  • However, cache lines contain more than one position.
  • One thread updating a particular position invalidates the entire cache line on another thread.
  • Threads end up frequently reloading cache lines instead of doing the actual work of writing elements to the bag.
  • This might lead to significant performance degradation.
§Solution: extend rather than push

One very simple, effective and memory efficient solution to the false sharing problem is to use ConcurrentVec::extend rather than push in small data & little work situations.

Assume that we will have 4 threads and each will push 1_024 elements. Instead of making 1_024 push calls from each thread, we can make one extend call from each. This would give the best performance. Further, it has zero buffer or memory cost:

  • it is important to note that the batch of 1_024 elements is not stored temporarily in another buffer,
  • there is no additional allocation,
  • extend does nothing more than reserving the position range for the thread by incrementing the atomic counter accordingly.

However, we do not need such perfect information about the number of elements to be pushed. Performance gains beyond the cache line size are much smaller.

For instance, consider the challenging super small element size case, where we are collecting i32s. We can already achieve a very high performance by simply extending the bag by batches of 16 elements.

As the element size gets larger, required batch size to achieve a high performance gets smaller and smaller.

The example code above already demonstrates the solution to a potentially problematic case in the ConcurrentVec::push example.

Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn map<'a, F, U>(&'a self, f: F) -> impl Iterator<Item = U> + 'a
where F: FnMut(&T) -> U + 'a,

Returns an iterator to values obtained by mapping elements of the vec by f.

Note that vec.map(f) is a shorthand for vec.iter().map(move |elem| elem.map(|x: &T| f(x))).

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(0..4);

let doubles: Vec<_> = vec.map(|x| x * 2).collect();
assert_eq!(doubles, [0, 2, 4, 6]);
Source

pub fn filter<'a, F>( &'a self, f: F, ) -> impl Iterator<Item = &'a ConcurrentElement<T>> + 'a
where F: FnMut(&T) -> bool + 'a,

Returns an iterator to elements filtered by using the predicate f on the values.

Note that vec.filter(f) is a shorthand for vec.iter().filter(move |elem| elem.map(|x: &T| f(x))).

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(0..4);

let mut evens = vec.filter(|x| x % 2 == 0);
assert_eq!(evens.next().unwrap(), &0);
assert_eq!(evens.next().unwrap(), &2);
assert_eq!(evens.next(), None);
Source

pub fn fold<F, U>(&self, init: U, f: F) -> U
where F: FnMut(U, &T) -> U,

Folds the values of the vec starting from the init using the fold function f.

Note that vec.fold(init, f) is a shorthand for vec.iter().fold(init, |agg, elem| elem.map(|x| f(agg, x))).

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(0..4);

let sum = vec.fold(0, |sum, x| sum + x);
assert_eq!(sum, 6);
Source

pub fn reduce<F>(&self, f: F) -> Option<T>
where T: Clone, F: FnMut(&T, &T) -> T,

Reduces the values of the vec using the reduction f; returns None if the vec is empty.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::new();
let sum = vec.reduce(|a, b| a + b);
assert_eq!(sum, None);

vec.push(42);
let sum = vec.reduce(|a, b| a + b);
assert_eq!(sum, Some(42));

vec.extend([6, 2]);
let sum = vec.reduce(|a, b| a + b);
assert_eq!(sum, Some(50));
Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn swap(&self, i: usize, j: usize) -> bool

Swaps two elements in the vector.

Returns:

  • true if both i and j are in bounds and the values are swapped,
  • false if at least one of the indices is out of bounds.
§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

let swapped = vec.swap(0, 2);
assert_eq!(swapped, true);
assert_eq!(&vec, &[2, 1, 0, 3]);

let swapped = vec.swap(0, 4);
assert_eq!(swapped, false);
assert_eq!(&vec, &[2, 1, 0, 3]);
Source

pub fn fill(&self, value: T)
where T: Clone,

Fills all positions of the vec with the given value.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

vec.fill(42);
assert_eq!(&vec, &[42, 42, 42, 42]);
Source

pub fn fill_with<F>(&self, value: F)
where F: FnMut(usize) -> T,

Fills all positions of the vec with the values created by successively calling value(i) for each position.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

let mut current = 0;
vec.fill_with(|i| {
    current += i as i32;
    current
});
assert_eq!(&vec, &[0, 1, 3, 6]);
Source§

impl<T> ConcurrentVec<T, SplitVec<ConcurrentElement<T>, Doubling>>

Source

pub fn new() -> Self

Creates a new concurrent bag by creating and wrapping up a new SplitVec<T, Doubling> as the underlying storage.

Source

pub fn with_doubling_growth() -> Self

Creates a new concurrent bag by creating and wrapping up a new SplitVec<T, Doubling> as the underlying storage.

Source§

impl<T> ConcurrentVec<T, SplitVec<ConcurrentElement<T>, Linear>>

Source

pub fn with_linear_growth( constant_fragment_capacity_exponent: usize, fragments_capacity: usize, ) -> Self

Creates a new concurrent bag by creating and wrapping up a new SplitVec<T, Linear> as the underlying storage.

  • Each fragment of the split vector will have a capacity of 2 ^ constant_fragment_capacity_exponent.
  • Further, the fragments collection of the split vector will have a capacity of fragments_capacity on initialization.

This leads to a orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity of fragments_capacity * 2 ^ constant_fragment_capacity_exponent.

Whenever this capacity is not sufficient, fragments capacity can be increased by using the orx_pinned_concurrent_col::PinnedConcurrentCol::reserve_maximum_capacity method.

Source§

impl<T> ConcurrentVec<T, FixedVec<ConcurrentElement<T>>>

Source

pub fn with_fixed_capacity(fixed_capacity: usize) -> Self

Creates a new concurrent bag by creating and wrapping up a new FixedVec<T> as the underlying storage.

§Safety

Note that a FixedVec cannot grow; i.e., it has a hard upper bound on the number of elements it can hold, which is the fixed_capacity.

Pushing to the vector beyond this capacity leads to an “out-of-capacity” error.

This maximum capacity can be accessed by orx_pinned_concurrent_col::PinnedConcurrentCol::capacity or orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity methods.

Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn index_of(&self, value: &T) -> Option<usize>

Returns the index of the first element equal to the given value. Returns None if the value is absent.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(['a', 'b', 'c']);

assert_eq!(vec.index_of(&'c'), Some(2));
assert_eq!(vec.index_of(&'d'), None);
Source

pub fn contains(&self, value: &T) -> bool

Returns whether an element equal to the given value exists or not.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(['a', 'b', 'c']);

assert_eq!(vec.contains(&'c'), true);
assert_eq!(vec.contains(&'d'), false);
Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn split_at( &self, mid: usize, ) -> (ConcurrentSlice<'_, T, P>, ConcurrentSlice<'_, T, P>)

Divides the vec into two slices at an index:

  • the first will contain elements in positions [0, mid),
  • the second will contain elements in positions [mid, self.len()).
§Panics

Panics if mid > self.len().

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(0..8);

let (a, b) = vec.split_at(3);
assert_eq!(a, [0, 1, 2]);
assert_eq!(b, [3, 4, 5, 6, 7]);
Source

pub fn split_first( &self, ) -> Option<(&ConcurrentElement<T>, ConcurrentSlice<'_, T, P>)>

Returns the first and all the rest of the elements of the slice, or None if it is empty.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(0..4);

let (a, b) = vec.split_first().unwrap();
assert_eq!(a, &0);
assert_eq!(b, [1, 2, 3]);

// empty
let slice = vec.slice(0..0);
assert!(slice.split_first().is_none());

// single element
let slice = vec.slice(2..3);
let (a, b) = slice.split_first().unwrap();
assert_eq!(a, &2);
assert_eq!(b, []);
Source

pub fn split_last( &self, ) -> Option<(&ConcurrentElement<T>, ConcurrentSlice<'_, T, P>)>

Returns the last and all the rest of the elements of the slice, or None if it is empty.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter(0..4);

let (a, b) = vec.split_last().unwrap();
assert_eq!(a, &3);
assert_eq!(b, [0, 1, 2]);

// empty
let slice = vec.slice(0..0);
assert!(slice.split_last().is_none());

// single element
let slice = vec.slice(2..3);
let (a, b) = slice.split_last().unwrap();
assert_eq!(a, &2);
assert_eq!(b, []);
Source

pub fn chunks( &self, chunk_size: usize, ) -> impl ExactSizeIterator<Item = ConcurrentSlice<'_, T, P>>

Returns an iterator over chunk_size elements of the slice at a time, starting at the beginning of the slice.

The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.

§Panics

Panics if chunk_size is 0.

§Examples
use orx_concurrent_vec::*;

let vec: ConcurrentVec<_> = ['l', 'o', 'r', 'e', 'm'].into_iter().collect();

let mut iter = vec.chunks(2);
assert_eq!(iter.next().unwrap(), ['l', 'o']);
assert_eq!(iter.next().unwrap(), ['r', 'e']);
assert_eq!(iter.next().unwrap(), ['m']);
assert!(iter.next().is_none());
Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn to_vec(self) -> Vec<T>

Transforms the concurrent vec into a regular vector.

Source

pub fn clone_to_vec(&self) -> Vec<T>
where T: Clone,

Without consuming the concurrent vector, clones the values of its elements into a regular vector.

Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn get_raw(&self, i: usize) -> Option<*const T>

Returns:

  • a raw *const T pointer to the underlying data if element at the i-th position is pushed,
  • None otherwise.
§Safety

Please see below the safety guarantees and the potential risks of using the pointer obtained by this method.

§Safety Guarantees

Pointer obtained by this method will be valid:

  • ConcurrentVec prevents access to elements which are not added yet.
  • ConcurrentOption wrapper prevents access during initialization, and hence, prevents data race during initialization.
  • PinnedVec storage makes sure that memory location of the elements never change.

Therefore, the caller can hold on to the obtained pointer throughout the lifetime of the vec. It is guaranteed to remain valid, pointing to the correct position with initialized data.

§Unsafe Bits

However, this method still leaks out a pointer, and using it can cause data races as follows:

  • The value of the position can be replaced or set or updated concurrently by another thread.
  • If at the same instant, we attempt to read using this pointer, we would end up with a data-race.
§Safe Usage

This method can be safely used as long as the caller is able to guarantee that the position is not being mutated while the pointer is used to directly access the data.

A common use case is the grow-only scenario where added elements are not mutated:

  • elements can be added to the vector by multiple threads,
  • while already pushed elements can safely be accessed by other threads using get_raw.
Source

pub unsafe fn get_ref(&self, i: usize) -> Option<&T>

Returns a reference to the element at the i-th position of the vec. It returns None if the index is out of bounds.

See also get and get_cloned for thread-safe alternatives of concurrent access to data.

§Safety

All methods that leak out &T or &mut T references are marked as unsafe. Please see the reason and possible scenarios to use it safely below.

§Safety Guarantees

Reference obtained by this method will be valid:

  • ConcurrentVec prevents access to elements which are not added yet.
  • ConcurrentOption wrapper prevents access during initialization, and hence, prevents data race during initialization.
  • PinnedVec storage makes sure that memory location of the elements never change.

Therefore, the caller can hold on to the obtained reference throughout the lifetime of the vec. It is guaranteed that the reference will remain valid, pointing to the correct position.

§Unsafe Bits

However, this method still leaks out a reference, and using it can cause data races as follows:

  • The value of the position can be replaced or set or updated concurrently by another thread.
  • If at the same instant, we attempt to read using this reference, we would end up with a data-race.
§Safe Usage

This method can be safely used as long as the caller is able to guarantee that the position is not being mutated while the reference is used to directly access the data.

A common use case is the grow-only scenario where added elements are not mutated:

  • elements can be added to the vector by multiple threads,
  • while already pushed elements can safely be accessed by other threads using get_ref.
§Examples

As explained above, the following constructs a safe usage example of the unsafe get_ref method.

use orx_concurrent_vec::*;
use std::time::Duration;

#[derive(Debug, Default)]
struct Metric {
    sum: i32,
    count: i32,
}

impl Metric {
    fn aggregate(self, value: &i32) -> Self {
        Self {
            sum: self.sum + value,
            count: self.count + 1,
        }
    }
}

// record measurements in random intervals, roughly every 2ms
let measurements = ConcurrentVec::new();

// collect metrics every 100 milliseconds
let metrics = ConcurrentVec::new();

std::thread::scope(|s| {
    // thread to store measurements as they arrive
    s.spawn(|| {
        for i in 0..100 {
            std::thread::sleep(Duration::from_millis(i % 5));

            // collect measurements and push to measurements vec
            measurements.push(i as i32);
        }
    });

    // thread to collect metrics every 100 milliseconds
    s.spawn(|| {
        for _ in 0..10 {
            // safely read from measurements vec to compute the metric
            // since pushed elements are not being mutated
            let len = measurements.len();
            let mut metric = Metric::default();
            for i in 0..len {
                if let Some(value) = unsafe { measurements.get_ref(i) } {
                    metric = metric.aggregate(value);
                }
            }

            // push result to metrics
            metrics.push(metric);

            std::thread::sleep(Duration::from_millis(100));
        }
    });
});

let measurements: Vec<_> = measurements.to_vec();
let averages: Vec<_> = metrics.to_vec();

assert_eq!(measurements.len(), 100);
assert_eq!(averages.len(), 10);
Source

pub unsafe fn iter_ref(&self) -> impl Iterator<Item = &T>

Returns an iterator over references to the elements of the vec.

See also iter and iter_cloned for thread-safe alternatives for concurrent access to the elements.

§Safety

All methods that leak out &T or &mut T references are marked as unsafe. Please see below for the reasoning and the scenarios in which they can be used safely.

§Safety Guarantees

The references obtained by this method will be valid:

  • ConcurrentVec prevents access to elements which are not added yet.
  • The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
  • The PinnedVec storage ensures that the memory locations of elements never change.

Therefore, the caller can hold on to the obtained references throughout the lifetime of the vec. It is guaranteed that the references will remain valid and point to the correct positions.

§Unsafe Bits

However, this method still leaks out references which can cause data races as follows:

  • Values of elements in the vector can be concurrently mutated by methods such as replace or update called from other threads.
  • If at the same instant we attempt to read through these references, we end up with a data race.
§Safe Usage

This method can be used safely as long as the caller guarantees that the elements are not concurrently mutated while these references are used to directly access the data.

A common use case for this is grow-only scenarios where added elements are not mutated:

  • elements can be added to the vector by multiple threads,
  • while already pushed elements can safely be accessed by other threads using iter_ref.
§Examples

As explained above, the following constructs a safe usage example of the unsafe iter_ref method.

use orx_concurrent_vec::*;
use std::time::Duration;

#[derive(Debug, Default)]
struct Metric {
    sum: i32,
    count: i32,
}

impl Metric {
    fn aggregate(self, value: &i32) -> Self {
        Self {
            sum: self.sum + value,
            count: self.count + 1,
        }
    }
}

// record measurements in random intervals, roughly every 2ms
let measurements = ConcurrentVec::new();

// collect metrics every 100 milliseconds
let metrics = ConcurrentVec::new();

std::thread::scope(|s| {
    // thread to store measurements as they arrive
    s.spawn(|| {
        for i in 0..100 {
            std::thread::sleep(Duration::from_millis(i % 5));

            // collect measurements and push to measurements vec
            measurements.push(i as i32);
        }
    });

    // thread to collect metrics every 100 milliseconds
    s.spawn(|| {
        for _ in 0..10 {
            // safely read from measurements vec to compute the metric
            // since pushed elements are never mutated
            let metric = unsafe {
                measurements
                    .iter_ref()
                    .fold(Metric::default(), |x, value| x.aggregate(value))
            };

            // push result to metrics
            metrics.push(metric);

            std::thread::sleep(Duration::from_millis(100));
        }
    });
});

let measurements: Vec<_> = measurements.to_vec();
let averages: Vec<_> = metrics.to_vec();

assert_eq!(measurements.len(), 100);
assert_eq!(averages.len(), 10);
Source

pub fn get_raw_mut(&self, i: usize) -> Option<*mut T>

Returns:

  • a raw *mut T pointer to the underlying data if element at the i-th position is pushed,
  • None otherwise.
§Safety

Please see below for the safety guarantees and the potential risks of using the pointer obtained by this method.

§Safety Guarantees

The pointer obtained by this method will be valid:

  • ConcurrentVec prevents access to elements which are not added yet.
  • The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
  • The PinnedVec storage ensures that the memory locations of elements never change.

Therefore, the caller can hold on to the obtained pointer throughout the lifetime of the vec. It is guaranteed that the pointer will remain valid, pointing to the correct position with initialized data.

§Unsafe Bits

However, this method still leaks out a pointer, the use of which can cause data races as follows:

  • The value at the position can be replaced, set or updated concurrently by another thread.
  • If at the same instant we attempt to read through this pointer, we end up with a data race.
§Safe Usage

This method can be used safely as long as the caller guarantees that the position is not concurrently read or written by another thread while the pointer is used to directly access the data.

Source

pub unsafe fn get_mut(&self, i: usize) -> Option<&mut T>

Returns a mutable reference to the element at the i-th position of the vec; returns None if the index is out of bounds.

§Safety

All methods that leak out &T or &mut T references are marked as unsafe. Please see below for the reasoning and the scenarios in which they can be used safely.

§Safety Guarantees

The reference obtained by this method will be valid:

  • ConcurrentVec prevents access to elements which are not added yet.
  • The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
  • The PinnedVec storage ensures that the memory locations of elements never change.

Therefore, the caller can hold on to the obtained reference throughout the lifetime of the vec. It is guaranteed that the reference will remain valid and point to the correct position.

§Unsafe Bits

However, this method still leaks out a reference, which can cause data races as follows:

  • The value at the position can be replaced, set or updated concurrently by another thread.
  • It may also be read by safe access methods such as map or cloned.
  • If at the same instant we attempt to read or write through this reference, we end up with a data race.
§Safe Usage

This method can be used safely as long as the caller guarantees that the position is not concurrently read or written by another thread while the reference is used to directly access the data.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::new();
vec.extend(['a', 'b', 'c', 'd']);

assert_eq!(unsafe { vec.get_mut(4) }, None);

*unsafe { vec.get_mut(1).unwrap() } = 'x';
assert_eq!(unsafe { vec.get_ref(1) }, Some(&'x'));

assert_eq!(&vec, &['a', 'x', 'c', 'd']);
Source

pub unsafe fn iter_mut(&self) -> impl Iterator<Item = &mut T>

Returns an iterator over mutable references to the elements of the vec.

See also iter for a thread-safe alternative for concurrent mutation of the elements.

§Safety

All methods that leak out &T or &mut T references are marked as unsafe. Please see below for the reasoning and the scenarios in which they can be used safely.

§Safety Guarantees

The references obtained by this method will be valid:

  • ConcurrentVec prevents access to elements which are not added yet.
  • The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
  • The PinnedVec storage ensures that the memory locations of elements never change.

Therefore, the caller can hold on to the obtained references throughout the lifetime of the vec. It is guaranteed that the references will remain valid and point to the correct positions.

§Unsafe Bits

However, this method still leaks out references, which can cause data races as follows:

  • Values of elements can be concurrently read by other threads.
  • Likewise, they can be concurrently mutated by thread-safe mutation methods.
  • If at the same instant we attempt to read or write through these references, we end up with a data race.
§Safe Usage

This method can be used safely as long as the caller guarantees that the elements are not concurrently read or written by another thread while these references are used to directly access the data.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

let iter = unsafe { vec.iter_mut() };
for x in iter {
    *x *= 2;
}

assert_eq!(&vec, &[0, 2, 4, 6]);
Source§

impl<T, P> ConcurrentVec<T, P>

Source

pub fn into_inner(self) -> P

Consumes the concurrent vec and returns the underlying pinned vector.

Any PinnedVec implementation can be converted to a ConcurrentVec using the From trait. Similarly, the underlying pinned vector can be obtained back by calling the consuming into_inner method.

Source

pub fn len(&self) -> usize

Returns the number of elements which are pushed to the vec, excluding elements which have received their reserved positions but are still being pushed.

§Examples
use orx_concurrent_vec::ConcurrentVec;

let vec = ConcurrentVec::new();
vec.push('a');
vec.push('b');

assert_eq!(2, vec.len());
Source

pub fn is_empty(&self) -> bool

Returns whether or not the vec is empty.

§Examples
use orx_concurrent_vec::ConcurrentVec;

let mut vec = ConcurrentVec::new();

assert!(vec.is_empty());

vec.push('a');
vec.push('b');

assert!(!vec.is_empty());

vec.clear();
assert!(vec.is_empty());
Source

pub fn capacity(&self) -> usize

Returns the current allocated capacity of the collection.

Source

pub fn maximum_capacity(&self) -> usize

Returns the maximum possible capacity that the collection can reach without calling ConcurrentVec::reserve_maximum_capacity.

Importantly, note that the maximum capacity does not correspond to already allocated memory.

Source

pub fn slice<R: RangeBounds<usize>>( &self, range: R, ) -> ConcurrentSlice<'_, T, P>

Creates and returns a slice of a ConcurrentVec or another ConcurrentSlice.

Concurrent counterpart of a slice for a standard vec or an array.

A ConcurrentSlice provides a focused / restricted view on a slice of the vector. It provides all methods of the concurrent vector except for the ones which grow the size of the vector.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3, 4]);

let slice = vec.slice(1..);
assert_eq!(&slice, &[1, 2, 3, 4]);

let slice = vec.slice(1..4);
assert_eq!(&slice, &[1, 2, 3]);

let slice = vec.slice(..3);
assert_eq!(&slice, &[0, 1, 2]);

let slice = vec.slice(3..10);
assert_eq!(&slice, &[3, 4]);

let slice = vec.slice(7..9);
assert_eq!(&slice, &[]);

// slices can also be sliced

let slice = vec.slice(1..=4);
assert_eq!(&slice, &[1, 2, 3, 4]);

let sub_slice = slice.slice(1..3);
assert_eq!(&sub_slice, &[2, 3]);
Source

pub fn as_slice(&self) -> ConcurrentSlice<'_, T, P>

Creates and returns a slice of all elements of the vec.

Note that vec.as_slice() is equivalent to vec.slice(..).

A ConcurrentSlice provides a focused / restricted view on a slice of the vector. It provides all methods of the concurrent vector except for the ones which grow the size of the vector.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3, 4]);

let slice = vec.as_slice();
assert_eq!(&slice, &[0, 1, 2, 3, 4]);
Source

pub fn get(&self, i: usize) -> Option<&ConcurrentElement<T>>

Returns the element at the i-th position; returns None if the index is out of bounds.

The safe api of the ConcurrentVec never gives out &T or &mut T references. Instead, it returns a ConcurrentElement which provides thread-safe concurrent read and write methods on the element.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

assert!(vec.get(4).is_none());

let cloned = vec.get(2).map(|elem| elem.cloned());
assert_eq!(cloned, Some(2));

let double = vec.get(2).map(|elem| elem.map(|x| x * 2));
assert_eq!(double, Some(4));

let elem = vec.get(2).unwrap();
assert_eq!(elem, &2);

elem.set(42);
assert_eq!(elem, &42);

elem.update(|x| *x = *x / 2);
assert_eq!(elem, &21);

let old = elem.replace(7);
assert_eq!(old, 21);
assert_eq!(elem, &7);

assert_eq!(&vec, &[0, 1, 7, 3]);
Source

pub fn get_cloned(&self, i: usize) -> Option<T>
where T: Clone,

Returns the cloned value of element at the i-th position; returns None if the index is out of bounds.

Note that vec.get_cloned(i) is short-hand for vec.get(i).map(|elem| elem.cloned()).

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

assert_eq!(vec.get_cloned(2), Some(2));
assert_eq!(vec.get_cloned(4), None);
Source

pub fn get_copied(&self, i: usize) -> Option<T>
where T: Copy,

Returns the copied value of element at the i-th position; returns None if the index is out of bounds.

Note that vec.get_copied(i) is short-hand for vec.get(i).map(|elem| elem.copied()).

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

assert_eq!(vec.get_copied(2), Some(2));
assert_eq!(vec.get_copied(4), None);
Source

pub fn iter(&self) -> impl Iterator<Item = &ConcurrentElement<T>>

Returns an iterator to the elements of the vec.

The safe api of the ConcurrentVec never gives out &T or &mut T references. Instead, the iterator yields ConcurrentElements which provide thread-safe concurrent read and write methods on the elements.

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);

// read - map

let doubles: Vec<_> = vec.iter().map(|elem| elem.map(|x| x * 2)).collect();
assert_eq!(doubles, [0, 2, 4, 6]);

// read - reduce

let sum: i32 = vec.iter().map(|elem| elem.cloned()).sum();
assert_eq!(sum, 6);

// mutate

for (i, elem) in vec.iter().enumerate() {
    match i {
        2 => elem.set(42),
        _ => elem.update(|x| *x *= 2),
    }
}
assert_eq!(&vec, &[0, 2, 42, 6]);

let old_vals: Vec<_> = vec.iter().map(|elem| elem.replace(7)).collect();
assert_eq!(&old_vals, &[0, 2, 42, 6]);
assert_eq!(&vec, &[7, 7, 7, 7]);
Source

pub fn iter_cloned(&self) -> impl Iterator<Item = T> + '_
where T: Clone,

Returns an iterator to cloned values of the elements of the vec.

Note that vec.iter_cloned() is short-hand for vec.iter().map(|elem| elem.cloned()).

§Examples
use orx_concurrent_vec::*;

let vec = ConcurrentVec::new();
vec.extend([42, 7]);

let mut iter = vec.iter_cloned();

assert_eq!(iter.next(), Some(42));
assert_eq!(iter.next(), Some(7));
assert_eq!(iter.next(), None);

let sum: i32 = vec.iter_cloned().sum();
assert_eq!(sum, 49);

Trait Implementations§

Source§

impl<T> Clone for ConcurrentVec<T>
where T: Clone,

Source§

fn clone(&self) -> Self

A thread-safe method to clone the concurrent vec.

§Example
use orx_concurrent_vec::*;

let vec: ConcurrentVec<_> = (0..4).into_iter().collect();
let clone = vec.clone();

assert_eq!(&clone, &[0, 1, 2, 3]);
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl<T, P> Debug for ConcurrentVec<T, P>

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl<T> Default for ConcurrentVec<T, SplitVec<ConcurrentElement<T>, Doubling>>

Source§

fn default() -> Self

Creates a new concurrent vec by creating and wrapping up a new SplitVec<ConcurrentElement<T>, Doubling> as the underlying storage.

Source§

impl<T, P> From<P> for ConcurrentVec<T, P>

Source§

fn from(pinned_vec: P) -> Self

ConcurrentVec<T> uses any PinnedVec<T> implementation as the underlying storage.

Therefore, without any cost:

  • ConcurrentVec<T> can be constructed from any PinnedVec<T>, and
  • the underlying PinnedVec<T> can be obtained back by the consuming ConcurrentVec::into_inner(self) method.
Source§

impl<T> FromIterator<T> for ConcurrentVec<T>

Source§

fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self

Creates a value from an iterator. Read more
Source§

impl<P, T> Index<usize> for ConcurrentVec<T, P>

Source§

fn index(&self, i: usize) -> &Self::Output

Returns a reference to the concurrent element at the i-th position of the vec.

Note that vec[i] is a shorthand for vec.get(i).unwrap().

§Panics

Panics if i is out of bounds.

Source§

type Output = ConcurrentElement<T>

The returned type after indexing.
Source§

impl<T, P> IntoIterator for ConcurrentVec<T, P>

Source§

type Item = T

The type of the elements being iterated over.
Source§

type IntoIter = ElementValuesIter<T, P>

Which kind of iterator are we turning this into?
Source§

fn into_iter(self) -> Self::IntoIter

Creates an iterator from a value. Read more
Source§

impl<T: PartialEq> PartialEq<[T]> for ConcurrentVec<T>

Source§

fn eq(&self, other: &[T]) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl<const N: usize, T: PartialEq> PartialEq<[T; N]> for ConcurrentVec<T>

Source§

fn eq(&self, other: &[T; N]) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl<T: PartialEq> PartialEq<ConcurrentSlice<'_, T>> for ConcurrentVec<T>

Source§

fn eq(&self, other: &ConcurrentSlice<'_, T>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl<T: PartialEq> PartialEq<ConcurrentVec<T>> for ConcurrentSlice<'_, T>

Source§

fn eq(&self, other: &ConcurrentVec<T>) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl<T: PartialEq> PartialEq for ConcurrentVec<T>

Source§

fn eq(&self, other: &Self) -> bool

Tests for self and other values to be equal, and is used by ==.
1.0.0 · Source§

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.
Source§

impl<T: Send, P: IntoConcurrentPinnedVec<ConcurrentElement<T>>> Send for ConcurrentVec<T, P>

Source§

impl<T: Sync, P: IntoConcurrentPinnedVec<ConcurrentElement<T>>> Sync for ConcurrentVec<T, P>

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> SoM<T> for T

Source§

fn get_ref(&self) -> &T

Returns a reference to self.
Source§

fn get_mut(&mut self) -> &mut T

Returns a mutable reference to self.
Source§

impl<T> SoR<T> for T

Source§

fn get_ref(&self) -> &T

Returns a reference to self.
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.