pub struct ConcurrentVec<T, P = SplitVec<ConcurrentElement<T>, Doubling>>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,
{ /* private fields */ }
A thread-safe, efficient and lock-free vector allowing concurrent grow, read and update operations.
ConcurrentVec provides a safe API for these three sets of concurrent operations: grow, read and update.
§Examples
use orx_concurrent_vec::*;
use std::time::Duration;
#[derive(Debug, Default)]
struct Metric {
    sum: i32,
    count: i32,
}

impl Metric {
    fn aggregate(self, value: &i32) -> Self {
        Self {
            sum: self.sum + value,
            count: self.count + 1,
        }
    }
}
// record measurements in random intervals, roughly every 2ms
let measurements = ConcurrentVec::new();
// collect metrics every 100 milliseconds
let metrics = ConcurrentVec::new();
std::thread::scope(|s| {
    // thread to store measurements as they arrive
    s.spawn(|| {
        for i in 0..100 {
            std::thread::sleep(Duration::from_millis(i % 5));
            // collect measurements and push to measurements vec
            measurements.push(i as i32);
        }
    });

    // thread to collect metrics every 100 milliseconds
    s.spawn(|| {
        for _ in 0..10 {
            // safely read from measurements vec to compute the metric at that instant
            let metric = measurements.fold(Metric::default(), |x, value| x.aggregate(value));
            // push result to metrics
            metrics.push(metric);
            std::thread::sleep(Duration::from_millis(100));
        }
    });
});
let measurements: Vec<_> = measurements.to_vec();
let averages: Vec<_> = metrics.to_vec();
assert_eq!(measurements.len(), 100);
assert_eq!(averages.len(), 10);
§Implementations
impl<T, P> ConcurrentVec<T, P>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,

pub fn reserve_maximum_capacity(&mut self, new_maximum_capacity: usize) -> usize
Note that ConcurrentVec::maximum_capacity returns the maximum possible number of elements that the underlying pinned vector can grow to without reserving maximum capacity. In other words, the pinned vector can automatically grow up to ConcurrentVec::maximum_capacity with the write and write_n_items methods, using only a shared reference. When required, this method can be used to attempt to increase the maximum capacity, which requires a mutable reference.
Importantly, note that the maximum capacity does not correspond to the allocated memory.
Among the common pinned vector implementations:
- SplitVec<_, Doubling>: supports this method; however, it does not require it for any practical size.
- SplitVec<_, Linear>: is guaranteed to succeed and increase its maximum capacity to the required value.
- FixedVec<_>: is the most strict pinned vector which cannot grow even in a single-threaded setting. Currently, it will always return an error to this call.
§Safety
This method is unsafe since the concurrent pinned vector might contain gaps. The vector must be gap-free while increasing the maximum capacity.
This method can safely be called if entries in all positions 0..len are written.
impl<T, P> ConcurrentVec<T, P>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,

pub fn push(&self, value: T) -> usize
Concurrent, thread-safe method to push the given value to the back of the bag, returning the position or index of the pushed value. It preserves the order of elements with respect to the order in which the push method is called.
§Panics
Panics if the concurrent bag is already at its maximum capacity; i.e., if self.len() == self.maximum_capacity(). Note that this is an important safety assertion in the concurrent context; however, it is not a practical limitation. Please see orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity for details.
§Examples
We can directly take a shared reference of the bag, share it among threads and collect results concurrently.
use orx_concurrent_vec::*;
let (num_threads, num_items_per_thread) = (4, 1_024);
let vec = ConcurrentVec::new();
std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in 0..num_items_per_thread {
                // concurrently collect results simply by calling `push`
                vec.push(i * 1000 + j);
            }
        });
    }
});
let mut vec = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
§Performance Notes - False Sharing
The ConcurrentVec::push implementation is lock-free and focuses on efficiency. However, we need to be aware of the potential false sharing risk. False sharing might lead to significant performance degradation; however, it is possible to avoid in many cases.
§When?
Performance degradation due to false sharing might be observed when both of the following conditions hold:
- small data: the data to be pushed is small; the more elements fit in a cache line, the bigger the risk,
- little work: multiple threads/cores are pushing to the concurrent bag with high frequency; i.e., very little or negligible work/time is required in between push calls.
The example above fits this situation.
Each thread only performs one multiplication and addition in between pushing elements, and the elements to be pushed are very small, just one usize.
§Why?
ConcurrentBag assigns unique positions to each value to be pushed; there is no true sharing among threads at the position level.
- However, cache lines contain more than one position.
- One thread updating a particular position invalidates the entire cache line on another thread.
- Threads end up frequently reloading cache lines instead of doing the actual work of writing elements to the bag.
- This might lead to a significant performance degradation.
§Solution: extend rather than push
One very simple, effective and memory efficient solution to this problem is to use ConcurrentVec::extend rather than push in small data & little work situations.
Assume that we will have 4 threads and each will push 1_024 elements. Instead of making 1_024 push calls from each thread, we can make one extend call from each. This would give the best performance. Further, it has zero buffer or memory cost:
- it is important to note that the batch of 1_024 elements is not stored temporarily in another buffer,
- there is no additional allocation,
- extend does nothing more than reserving the position range for the thread by incrementing the atomic counter accordingly.
However, we do not need such perfect information about the number of elements to be pushed. Performance gains after reaching the cache line size are much smaller.
For instance, consider the challenging super small element size case, where we are collecting i32s. We can already achieve a very high performance by simply extending the bag in batches of 16 elements. As the element size gets larger, the batch size required to achieve high performance gets smaller and smaller. The required change in the code from push to extend is not significant.
The example above could be revised as follows to avoid the performance degradation of false sharing.
use orx_concurrent_vec::*;
let (num_threads, num_items_per_thread) = (4, 1_024);
let vec = ConcurrentVec::new();
let batch_size = 16;
std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in (0..num_items_per_thread).step_by(batch_size) {
                let iter = (j..(j + batch_size)).map(|j| i * 1000 + j);
                // concurrently collect results simply by calling `extend`
                vec.extend(iter);
            }
        });
    }
});
let mut vec = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
pub fn push_for_idx<F>(&self, f: F) -> usize
Pushes the value which will be computed as a function of the index where it will be written.
Note that we cannot guarantee the index of an element added by push, since many pushes might be happening concurrently. In cases where we absolutely need to know the index, in other words, when the value depends on the index, we can use push_for_idx.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::new();
vec.push(0);
vec.push_for_idx(|i| i * 2);
vec.push_for_idx(|i| i + 10);
vec.push(42);
assert_eq!(&vec, &[0, 2, 12, 42]);
pub fn extend<IntoIter, Iter>(&self, values: IntoIter) -> usize
where
    IntoIter: IntoIterator<Item = T, IntoIter = Iter>,
    Iter: Iterator<Item = T> + ExactSizeIterator,
Concurrent, thread-safe method to push all values that the given iterator will yield to the back of the bag. The method returns the position or index of the first pushed value (returns the length of the concurrent bag if the iterator is empty).
All values in the iterator will be added to the bag consecutively:
- the first yielded value will be written to the position which is equal to the current length of the bag, say begin_idx, which is the returned value,
- the second yielded value will be written to the (begin_idx + 1)-th position,
- …
- and the last value will be written to the (begin_idx + values.count() - 1)-th position of the bag.
Important notes:
- This method does not allocate a buffer.
- All it does is increment the atomic counter by the length of the iterator (push would increment by 1) and reserve the range of positions for this operation.
- If there is not sufficient space, the vector grows first; iterating over and writing elements to the vec happens afterwards.
- Therefore, other threads do not wait for the extend method to complete; they can concurrently write.
- This is a simple and effective approach to deal with the false sharing problem.
For this reason, the method requires an ExactSizeIterator.
There exists the variant ConcurrentVec::extend_n_items method which accepts any iterator together with the correct length to be passed by the caller. It is unsafe as the caller must guarantee that the iterator yields at least the number of elements explicitly passed in as an argument.
§Panics
Panics if not all of the values fit in the concurrent bag's maximum capacity. Note that this is an important safety assertion in the concurrent context; however, it is not a practical limitation. Please see orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity for details.
§Examples
We can directly take a shared reference of the bag and share it among threads.
use orx_concurrent_vec::*;
let (num_threads, num_items_per_thread) = (4, 1_024);
let vec = ConcurrentVec::new();
let batch_size = 16;
std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in (0..num_items_per_thread).step_by(batch_size) {
                let iter = (j..(j + batch_size)).map(|j| i * 1000 + j);
                // concurrently collect results simply by calling `extend`
                vec.extend(iter);
            }
        });
    }
});
let mut vec: Vec<_> = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
§Performance Notes - False Sharing
The ConcurrentVec::push implementation is simple, lock-free and efficient. However, we need to be aware of the potential false sharing risk. False sharing might lead to significant performance degradation; fortunately, it is possible to avoid in many cases.
§When?
Performance degradation due to false sharing might be observed when both of the following conditions hold:
- small data: the data to be pushed is small; the more elements fit in a cache line, the bigger the risk,
- little work: multiple threads/cores are pushing to the concurrent bag with high frequency; i.e., very little or negligible work/time is required in between push calls.
The example above fits this situation.
Each thread only performs one multiplication and addition for computing elements, and the elements to be pushed are very small, just one usize.
§Why?
ConcurrentBag assigns unique positions to each value to be pushed; there is no true sharing among threads at the position level.
- However, cache lines contain more than one position.
- One thread updating a particular position invalidates the entire cache line on another thread.
- Threads end up frequently reloading cache lines instead of doing the actual work of writing elements to the bag.
- This might lead to a significant performance degradation.
§Solution: extend rather than push
One very simple, effective and memory efficient solution to the false sharing problem is to use ConcurrentVec::extend rather than push in small data & little work situations.
Assume that we will have 4 threads and each will push 1_024 elements. Instead of making 1_024 push calls from each thread, we can make one extend call from each. This would give the best performance. Further, it has zero buffer or memory cost:
- it is important to note that the batch of 1_024 elements is not stored temporarily in another buffer,
- there is no additional allocation,
- extend does nothing more than reserving the position range for the thread by incrementing the atomic counter accordingly.
However, we do not need such perfect information about the number of elements to be pushed. Performance gains after reaching the cache line size are much smaller.
For instance, consider the challenging super small element size case, where we are collecting i32s. We can already achieve a very high performance by simply extending the bag in batches of 16 elements. As the element size gets larger, the batch size required to achieve high performance gets smaller and smaller. The example code above already demonstrates the solution to a potentially problematic case in the ConcurrentVec::push example.
pub fn extend_for_idx<IntoIter, Iter, F>(&self, f: F, num_items: usize) -> usize
where
    IntoIter: IntoIterator<Item = T, IntoIter = Iter>,
    Iter: Iterator<Item = T> + ExactSizeIterator,
    F: FnOnce(usize) -> IntoIter,
Extends the vector with the values of the iterator which is created as a function of the index that the first element of the iterator will be written to.
Note that we cannot guarantee the indices of elements added by extend, since many pushes or extends might be happening concurrently. In cases where we absolutely need to know the indices, in other words, when the values depend on the indices, we can use extend_for_idx.
§Panics
Panics if the iterator created by f does not yield num_items elements.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::new();
vec.push(0);
let iter = |begin_idx: usize| ((begin_idx..(begin_idx + 3)).map(|i| i * 5));
vec.extend_for_idx(|begin_idx| iter(begin_idx), 3);
vec.push(42);
assert_eq!(&vec, &[0, 5, 10, 15, 42]);
pub fn extend_n_items<IntoIter>(&self, values: IntoIter, num_items: usize) -> usize
where
    IntoIter: IntoIterator<Item = T>,
Concurrent, thread-safe method to push num_items elements yielded by the values iterator to the back of the bag. The method returns the position or index of the first pushed value (returns the length of the concurrent bag if the iterator is empty).
All values in the iterator will be added to the bag consecutively:
- the first yielded value will be written to the position which is equal to the current length of the bag, say begin_idx, which is the returned value,
- the second yielded value will be written to the (begin_idx + 1)-th position,
- …
- and the last value will be written to the (begin_idx + num_items - 1)-th position of the bag.
Important notes:
- This method does not allocate at all to buffer the elements to be pushed.
- All it does is increment the atomic counter by the length of the iterator (push would increment by 1) and reserve the range of positions for this operation.
- Iterating over and writing elements to the vec happens afterwards.
- This is a simple, effective and memory efficient solution to the false sharing problem.
For this reason, the method requires the additional num_items argument. There exists the variant ConcurrentVec::extend method which accepts only an ExactSizeIterator.
§Panics
Panics if the values iterator does not yield num_items elements.
§Examples
We can directly take a shared reference of the bag and share it among threads.
use orx_concurrent_vec::*;
let (num_threads, num_items_per_thread) = (4, 1_024);
let vec = ConcurrentVec::new();
let batch_size = 16;
std::thread::scope(|s| {
    let vec = &vec;
    for i in 0..num_threads {
        s.spawn(move || {
            for j in (0..num_items_per_thread).step_by(batch_size) {
                let iter = (j..(j + batch_size)).map(|j| i * 1000 + j);
                // concurrently collect results simply by calling `extend_n_items`
                unsafe { vec.extend_n_items(iter, batch_size) };
            }
        });
    }
});
let mut vec: Vec<_> = vec.to_vec();
vec.sort();
let mut expected: Vec<_> = (0..num_threads).flat_map(|i| (0..num_items_per_thread).map(move |j| i * 1000 + j)).collect();
expected.sort();
assert_eq!(vec, expected);
§Performance Notes - False Sharing
The ConcurrentVec::push implementation is simple, lock-free and efficient. However, we need to be aware of the potential false sharing risk. False sharing might lead to significant performance degradation; fortunately, it is possible to avoid in many cases.
§When?
Performance degradation due to false sharing might be observed when both of the following conditions hold:
- small data: the data to be pushed is small; the more elements fit in a cache line, the bigger the risk,
- little work: multiple threads/cores are pushing to the concurrent bag with high frequency; i.e., very little or negligible work/time is required in between push calls.
The example above fits this situation.
Each thread only performs one multiplication and addition for computing elements, and the elements to be pushed are very small, just one usize.
§Why?
ConcurrentBag assigns unique positions to each value to be pushed; there is no true sharing among threads at the position level.
- However, cache lines contain more than one position.
- One thread updating a particular position invalidates the entire cache line on another thread.
- Threads end up frequently reloading cache lines instead of doing the actual work of writing elements to the bag.
- This might lead to a significant performance degradation.
§Solution: extend rather than push
One very simple, effective and memory efficient solution to the false sharing problem is to use ConcurrentVec::extend rather than push in small data & little work situations.
Assume that we will have 4 threads and each will push 1_024 elements. Instead of making 1_024 push calls from each thread, we can make one extend call from each. This would give the best performance. Further, it has zero buffer or memory cost:
- it is important to note that the batch of 1_024 elements is not stored temporarily in another buffer,
- there is no additional allocation,
- extend does nothing more than reserving the position range for the thread by incrementing the atomic counter accordingly.
However, we do not need such perfect information about the number of elements to be pushed. Performance gains after reaching the cache line size are much smaller.
For instance, consider the challenging super small element size case, where we are collecting i32s. We can already achieve a very high performance by simply extending the bag in batches of 16 elements. As the element size gets larger, the batch size required to achieve high performance gets smaller and smaller. The example code above already demonstrates the solution to a potentially problematic case in the ConcurrentVec::push example.
impl<T, P> ConcurrentVec<T, P>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,
pub fn map<'a, F, U>(&'a self, f: F) -> impl Iterator<Item = U> + 'a
Returns an iterator to values obtained by mapping elements of the vec by f.
Note that vec.map(f) is a shorthand for vec.iter().map(move |elem| elem.map(|x: &T| f(x))).
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter(0..4);
let doubles: Vec<_> = vec.map(|x| x * 2).collect();
assert_eq!(doubles, [0, 2, 4, 6]);
pub fn filter<'a, F>(&'a self, f: F) -> impl Iterator<Item = &'a ConcurrentElement<T>> + 'a
Returns an iterator to elements filtered by using the predicate f on the values.
Note that vec.filter(f) is a shorthand for vec.iter().filter(move |elem| elem.map(|x: &T| f(x))).
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter(0..4);
let mut evens = vec.filter(|x| x % 2 == 0);
assert_eq!(evens.next().unwrap(), &0);
assert_eq!(evens.next().unwrap(), &2);
assert_eq!(evens.next(), None);
pub fn fold<F, U>(&self, init: U, f: F) -> U
Folds the values of the vec starting from init using the fold function f.
Note that vec.fold(init, f) is a shorthand for vec.iter().fold(init, |agg, elem| elem.map(|x| f(agg, x))).
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter(0..4);
let sum = vec.fold(0, |sum, x| sum + x);
assert_eq!(sum, 6);
pub fn reduce<F>(&self, f: F) -> Option<T>
Reduces the values of the vec using the reduction f; returns None if the vec is empty.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::new();
let sum = vec.reduce(|a, b| a + b);
assert_eq!(sum, None);
vec.push(42);
let sum = vec.reduce(|a, b| a + b);
assert_eq!(sum, Some(42));
vec.extend([6, 2]);
let sum = vec.reduce(|a, b| a + b);
assert_eq!(sum, Some(50));
impl<T, P> ConcurrentVec<T, P>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,

pub fn swap(&self, i: usize, j: usize) -> bool
Swaps two elements in the vector.
Returns:
- true if both i and j are in bounds and the values are swapped,
- false if at least one of the indices is out of bounds.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
let swapped = vec.swap(0, 2);
assert_eq!(swapped, true);
assert_eq!(&vec, &[2, 1, 0, 3]);
let swapped = vec.swap(0, 4);
assert_eq!(swapped, false);
assert_eq!(&vec, &[2, 1, 0, 3]);
pub fn fill(&self, value: T)
where
    T: Clone,
Fills all positions of the vec with the given value.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
vec.fill(42);
assert_eq!(&vec, &[42, 42, 42, 42]);
pub fn fill_with<F>(&self, value: F)

Fills all positions of the vec with the values created by successively calling value(i) for each position.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
let mut current = 0;
vec.fill_with(|i| {
    current += i as i32;
    current
});
assert_eq!(&vec, &[0, 1, 3, 6]);
impl<T> ConcurrentVec<T, SplitVec<ConcurrentElement<T>, Doubling>>

pub fn new() -> Self
Creates a new concurrent bag by creating and wrapping up a new SplitVec<T, Doubling>
as the underlying storage.
pub fn with_doubling_growth() -> Self
Creates a new concurrent bag by creating and wrapping up a new SplitVec<T, Doubling>
as the underlying storage.
impl<T> ConcurrentVec<T, SplitVec<ConcurrentElement<T>, Linear>>

pub fn with_linear_growth(
    constant_fragment_capacity_exponent: usize,
    fragments_capacity: usize,
) -> Self
Creates a new concurrent bag by creating and wrapping up a new SplitVec<T, Linear> as the underlying storage.
- Each fragment of the split vector will have a capacity of 2 ^ constant_fragment_capacity_exponent.
- Further, the fragments collection of the split vector will have a capacity of fragments_capacity on initialization.
This leads to an orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity of fragments_capacity * 2 ^ constant_fragment_capacity_exponent. Whenever this capacity is not sufficient, the fragments capacity can be increased by using the orx_pinned_concurrent_col::PinnedConcurrentCol::reserve_maximum_capacity method.
impl<T> ConcurrentVec<T, FixedVec<ConcurrentElement<T>>>

pub fn with_fixed_capacity(fixed_capacity: usize) -> Self
Creates a new concurrent bag by creating and wrapping up a new FixedVec<T>
as the underlying storage.
§Safety
Note that a FixedVec cannot grow; i.e., it has a hard upper bound on the number of elements it can hold, which is fixed_capacity. Pushing to the vector beyond this capacity leads to an "out-of-capacity" error. This maximum capacity can be accessed by the orx_pinned_concurrent_col::PinnedConcurrentCol::capacity or orx_pinned_concurrent_col::PinnedConcurrentCol::maximum_capacity methods.
impl<T, P> ConcurrentVec<T, P>

pub fn index_of(&self, value: &T) -> Option<usize>
Returns the index of the first element equal to the given value; returns None if the value is absent.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter(['a', 'b', 'c']);
assert_eq!(vec.index_of(&'c'), Some(2));
assert_eq!(vec.index_of(&'d'), None);
impl<T, P> ConcurrentVec<T, P>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,

pub fn split_at(&self, mid: usize) -> (ConcurrentSlice<'_, T, P>, ConcurrentSlice<'_, T, P>)
Divides one slice into two at an index:
- the first will contain elements in positions [0, mid),
- the second will contain elements in positions [mid, self.len()).
§Panics
Panics if mid > self.len().
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter(0..8);
let (a, b) = vec.split_at(3);
assert_eq!(a, [0, 1, 2]);
assert_eq!(b, [3, 4, 5, 6, 7]);
pub fn split_first(&self) -> Option<(&ConcurrentElement<T>, ConcurrentSlice<'_, T, P>)>
Returns the first and all the rest of the elements of the slice, or None if it is empty.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter(0..4);
let (a, b) = vec.split_first().unwrap();
assert_eq!(a, &0);
assert_eq!(b, [1, 2, 3]);
// empty
let slice = vec.slice(0..0);
assert!(slice.split_first().is_none());
// single element
let slice = vec.slice(2..3);
let (a, b) = slice.split_first().unwrap();
assert_eq!(a, &2);
assert_eq!(b, []);
pub fn split_last(&self) -> Option<(&ConcurrentElement<T>, ConcurrentSlice<'_, T, P>)>
Returns the last and all the rest of the elements of the slice, or None if it is empty.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter(0..4);
let (a, b) = vec.split_last().unwrap();
assert_eq!(a, &3);
assert_eq!(b, [0, 1, 2]);
// empty
let slice = vec.slice(0..0);
assert!(slice.split_last().is_none());
// single element
let slice = vec.slice(2..3);
let (a, b) = slice.split_last().unwrap();
assert_eq!(a, &2);
assert_eq!(b, []);
pub fn chunks(&self, chunk_size: usize) -> impl ExactSizeIterator<Item = ConcurrentSlice<'_, T, P>>
Returns an iterator over chunk_size elements of the slice at a time, starting at the beginning of the slice. The chunks are slices and do not overlap. If chunk_size does not divide the length of the slice, then the last chunk will not have length chunk_size.
§Panics
Panics if chunk_size is 0.
§Examples
use orx_concurrent_vec::*;
let vec: ConcurrentVec<_> = ['l', 'o', 'r', 'e', 'm'].into_iter().collect();
let mut iter = vec.chunks(2);
assert_eq!(iter.next().unwrap(), ['l', 'o']);
assert_eq!(iter.next().unwrap(), ['r', 'e']);
assert_eq!(iter.next().unwrap(), ['m']);
assert!(iter.next().is_none());
impl<T, P> ConcurrentVec<T, P>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,

pub fn get_raw(&self, i: usize) -> Option<*const T>
Returns:
- a raw *const T pointer to the underlying data if the element at the i-th position is pushed,
- None otherwise.
§Safety
Please see below the safety guarantees and the potential safety risks of using the pointer obtained by this method.
§Safety Guarantees
The pointer obtained by this method will be valid:
- ConcurrentVec prevents access to elements which are not added yet.
- The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
- PinnedVec storage makes sure that the memory locations of the elements never change.
Therefore, the caller can hold on to the obtained pointer throughout the lifetime of the vec. It is guaranteed that it will remain valid, pointing to the correct position with initialized data.
§Unsafe Bits
However, this method still leaks out a pointer, using which can cause data races as follows:
- The value at the position can be replaced, set or updated concurrently by another thread.
- If at the same instant we attempt to read using this pointer, we would end up with a data race.
§Safe Usage
This method can be safely used as long as the caller is able to guarantee that the position will not be mutated while the pointer is used to directly access the data.
A common use case for this is grow-only scenarios where added elements are not mutated:
- elements can be added to the vector by multiple threads,
- while already pushed elements can safely be accessed by other threads using get_raw.
pub unsafe fn get_ref(&self, i: usize) -> Option<&T>
Returns a reference to the element at the i-th position of the vec; returns None if the index is out of bounds. See also get and get_cloned for thread-safe alternatives of concurrent access to data.
§Safety
All methods that leak out &T or &mut T references are marked as unsafe. Please see below for the reason and for possible scenarios to use this method safely.
§Safety Guarantees
The reference obtained by this method will be valid:
- ConcurrentVec prevents access to elements which are not added yet.
- The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
- PinnedVec storage makes sure that the memory locations of the elements never change.
Therefore, the caller can hold on to the obtained reference throughout the lifetime of the vec. It is guaranteed that the reference will remain valid, pointing to the correct position.
§Unsafe Bits
However, this method still leaks out a reference, which can cause data races as follows:
- The value at the position can be concurrently mutated by another thread through replace, set or update.
- If at the same instant we attempt to read through this reference, we end up with a data race.
§Safe Usage
This method can be safely used as long as the caller is able to guarantee that the position is not being mutated while the reference is used to directly access the data.
A common use case for this is grow-only scenarios where added elements are not mutated:
- elements can be added to the vector by multiple threads,
- while already pushed elements can safely be accessed by other threads using get.
§Examples
As explained above, the following is a safe usage example of the unsafe get_ref method.
use orx_concurrent_vec::*;
use std::time::Duration;
#[derive(Debug, Default)]
struct Metric {
sum: i32,
count: i32,
}
impl Metric {
fn aggregate(self, value: &i32) -> Self {
Self {
sum: self.sum + value,
count: self.count + 1,
}
}
}
// record measurements in random intervals, roughly every 2ms
let measurements = ConcurrentVec::new();
// collect metrics every 100 milliseconds
let metrics = ConcurrentVec::new();
std::thread::scope(|s| {
// thread to store measurements as they arrive
s.spawn(|| {
for i in 0..100 {
std::thread::sleep(Duration::from_millis(i % 5));
// collect measurements and push to measurements vec
measurements.push(i as i32);
}
});
// thread to collect metrics every 100 milliseconds
s.spawn(|| {
for _ in 0..10 {
// safely read from measurements vec to compute the metric
// since pushed elements are not being mutated
let len = measurements.len();
let mut metric = Metric::default();
for i in 0..len {
if let Some(value) = unsafe { measurements.get_ref(i) } {
metric = metric.aggregate(value);
}
}
// push result to metrics
metrics.push(metric);
std::thread::sleep(Duration::from_millis(100));
}
});
});
let measurements: Vec<_> = measurements.to_vec();
let averages: Vec<_> = metrics.to_vec();
assert_eq!(measurements.len(), 100);
assert_eq!(averages.len(), 10);
pub unsafe fn iter_ref(&self) -> impl Iterator<Item = &T>
Returns an iterator to references of elements of the vec.
See also iter
and iter_cloned
for thread-safe alternatives of concurrent access to elements.
§Safety
All methods that leak out &T
or &mut T
references are marked as unsafe.
Please see the reason and possible scenarios to use it safely below.
§Safety Guarantees
References obtained by this method will remain valid:
- ConcurrentVec prevents access to elements which are not added yet.
- The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
- PinnedVec storage makes sure that memory locations of the elements never change.
Therefore, the caller can hold on to the obtained references throughout the lifetime of the vec. They are guaranteed to remain valid, pointing to the correct positions.
§Unsafe Bits
However, this method still leaks out references that can cause data races as follows:
- Values of elements in the vector can be concurrently mutated by other threads through methods such as replace or update.
- If at the same instant we attempt to read through these references, we end up with a data race.
§Safe Usage
This method can be safely used as long as the caller is able to guarantee that the elements are not being mutated while these references are used to directly access the data.
A common use case for this is grow-only scenarios where added elements are not mutated:
- elements can be added to the vector by multiple threads,
- while already pushed elements can safely be accessed by other threads using iter.
§Examples
As explained above, the following is a safe usage example of the unsafe iter_ref method.
use orx_concurrent_vec::*;
use std::time::Duration;
#[derive(Debug, Default)]
struct Metric {
sum: i32,
count: i32,
}
impl Metric {
fn aggregate(self, value: &i32) -> Self {
Self {
sum: self.sum + value,
count: self.count + 1,
}
}
}
// record measurements in random intervals, roughly every 2ms
let measurements = ConcurrentVec::new();
// collect metrics every 100 milliseconds
let metrics = ConcurrentVec::new();
std::thread::scope(|s| {
// thread to store measurements as they arrive
s.spawn(|| {
for i in 0..100 {
std::thread::sleep(Duration::from_millis(i % 5));
// collect measurements and push to measurements vec
measurements.push(i as i32);
}
});
// thread to collect metrics every 100 milliseconds
s.spawn(|| {
for _ in 0..10 {
// safely read from measurements vec to compute the metric
// since pushed elements are never mutated
let metric = unsafe {
measurements
.iter_ref()
.fold(Metric::default(), |x, value| x.aggregate(value))
};
// push result to metrics
metrics.push(metric);
std::thread::sleep(Duration::from_millis(100));
}
});
});
let measurements: Vec<_> = measurements.to_vec();
let averages: Vec<_> = metrics.to_vec();
assert_eq!(measurements.len(), 100);
assert_eq!(averages.len(), 10);
pub fn get_raw_mut(&self, i: usize) -> Option<*mut T>
Returns a raw *mut T pointer to the underlying data if the element at the i-th position is pushed, and None otherwise.
§Safety
Please see below the safety guarantees and potential safety risks of using the pointer obtained by this method.
§Safety Guarantees
The pointer obtained by this method will remain valid:
- ConcurrentVec prevents access to elements which are not added yet.
- The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
- PinnedVec storage makes sure that memory locations of the elements never change.
Therefore, the caller can hold on to the obtained pointer throughout the lifetime of the vec. It is guaranteed to remain valid, pointing to the correct position with initialized data.
§Unsafe Bits
However, this method still leaks out a pointer, the use of which can cause data races as follows:
- The value at the position can be concurrently mutated by another thread through replace, set or update.
- If at the same instant we attempt to read through this pointer, we end up with a data race.
§Safe Usage
This method can be safely used as long as the caller is able to guarantee that the position is not being read or written by another thread while the pointer is used to directly access the data.
pub unsafe fn get_mut(&self, i: usize) -> Option<&mut T>
Returns a mutable reference to the element at the i
-th position of the vec.
It returns None
if the index is out of bounds.
§Safety
All methods that return &T
or &mut T
references are marked as unsafe.
Please see the reason and possible scenarios to use it safely below.
§Safety Guarantees
The reference obtained by this method will remain valid:
- ConcurrentVec prevents access to elements which are not added yet.
- The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
- PinnedVec storage makes sure that memory locations of the elements never change.
Therefore, the caller can hold on to the obtained reference throughout the lifetime of the vec. It is guaranteed to remain valid, pointing to the correct position.
§Unsafe Bits
However, this method still leaks out a reference, which can cause data races as follows:
- The value at the position can be concurrently mutated by another thread through replace, set or update.
- It may also be read by safe access methods such as map or cloned.
- If at the same instant we attempt to read or write through this reference, we end up with a data race.
§Safe Usage
This method can be safely used as long as the caller is able to guarantee that the position is not being read or written by another thread while the reference is used to directly access the data.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::new();
vec.extend(['a', 'b', 'c', 'd']);
assert_eq!(unsafe { vec.get_mut(4) }, None);
*unsafe { vec.get_mut(1).unwrap() } = 'x';
assert_eq!(unsafe { vec.get_ref(1) }, Some(&'x'));
assert_eq!(&vec, &['a', 'x', 'c', 'd']);
pub unsafe fn iter_mut(&self) -> impl Iterator<Item = &mut T>
Returns an iterator to mutable references of elements of the vec.
See also iter
for a thread-safe alternative for concurrent mutation of elements.
§Safety
All methods that leak out &T
or &mut T
references are marked as unsafe.
Please see the reason and possible scenarios to use it safely below.
§Safety Guarantees
References obtained by this method will remain valid:
- ConcurrentVec prevents access to elements which are not added yet.
- The ConcurrentOption wrapper prevents access during initialization, and hence, prevents data races during initialization.
- PinnedVec storage makes sure that memory locations of the elements never change.
Therefore, the caller can hold on to the obtained references throughout the lifetime of the vec. They are guaranteed to remain valid, pointing to the correct positions.
§Unsafe Bits
However, this method still leaks out references, which can cause data races as follows:
- Values of elements can be concurrently read by other threads.
- Likewise, they can be concurrently mutated by thread-safe mutation methods.
- If at the same instant, we attempt to read or write using these references, we would end up with a data-race.
§Safe Usage
This method can be safely used as long as the caller is able to guarantee that the elements are not being read or written by another thread while these references are used to directly access the data.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
let iter = unsafe { vec.iter_mut() };
for x in iter {
*x *= 2;
}
assert_eq!(&vec, &[0, 2, 4, 6]);
impl<T, P> ConcurrentVec<T, P>
where
    P: IntoConcurrentPinnedVec<ConcurrentElement<T>>,
pub fn into_inner(self) -> P
Consumes the concurrent vec and returns the underlying pinned vector.
Any PinnedVec
implementation can be converted to a ConcurrentVec
using the From
trait.
Similarly, the underlying pinned vector can be obtained by calling the consuming into_inner
method.
pub fn len(&self) -> usize
Returns the number of elements which are pushed to the vec, excluding the elements which received their reserved locations and are currently being pushed.
§Examples
use orx_concurrent_vec::ConcurrentVec;
let vec = ConcurrentVec::new();
vec.push('a');
vec.push('b');
assert_eq!(2, vec.len());
pub fn is_empty(&self) -> bool
Returns whether or not the vec is empty.
§Examples
use orx_concurrent_vec::ConcurrentVec;
let mut vec = ConcurrentVec::new();
assert!(vec.is_empty());
vec.push('a');
vec.push('b');
assert!(!vec.is_empty());
vec.clear();
assert!(vec.is_empty());
pub fn maximum_capacity(&self) -> usize
Returns the maximum possible capacity that the collection can reach without calling ConcurrentVec::reserve_maximum_capacity.
Importantly, note that the maximum capacity does not correspond to the allocated memory.
pub fn slice<R: RangeBounds<usize>>(&self, range: R) -> ConcurrentSlice<'_, T, P>
Creates and returns a slice of a ConcurrentVec
or another ConcurrentSlice
.
It is the concurrent counterpart of a slice of a standard vec or an array.
A ConcurrentSlice
provides a focused / restricted view on a slice of the vector.
It provides all methods of the concurrent vector except for the ones which
grow the size of the vector.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3, 4]);
let slice = vec.slice(1..);
assert_eq!(&slice, &[1, 2, 3, 4]);
let slice = vec.slice(1..4);
assert_eq!(&slice, &[1, 2, 3]);
let slice = vec.slice(..3);
assert_eq!(&slice, &[0, 1, 2]);
let slice = vec.slice(3..10);
assert_eq!(&slice, &[3, 4]);
let slice = vec.slice(7..9);
assert_eq!(&slice, &[]);
// slices can also be sliced
let slice = vec.slice(1..=4);
assert_eq!(&slice, &[1, 2, 3, 4]);
let sub_slice = slice.slice(1..3);
assert_eq!(&sub_slice, &[2, 3]);
pub fn as_slice(&self) -> ConcurrentSlice<'_, T, P>
Creates and returns a slice of all elements of the vec.
Note that vec.as_slice()
is equivalent to vec.slice(..)
.
A ConcurrentSlice
provides a focused / restricted view on a slice of the vector.
It provides all methods of the concurrent vector except for the ones which
grow the size of the vector.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3, 4]);
let slice = vec.as_slice();
assert_eq!(&slice, &[0, 1, 2, 3, 4]);
pub fn get(&self, i: usize) -> Option<&ConcurrentElement<T>>
Returns the element at the i
-th position;
returns None if the index is out of bounds.
The safe api of the ConcurrentVec
never gives out &T
or &mut T
references.
Instead, returns a ConcurrentElement
which provides thread safe concurrent read and write
methods on the element.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
assert!(vec.get(4).is_none());
let cloned = vec.get(2).map(|elem| elem.cloned());
assert_eq!(cloned, Some(2));
let double = vec.get(2).map(|elem| elem.map(|x| x * 2));
assert_eq!(double, Some(4));
let elem = vec.get(2).unwrap();
assert_eq!(elem, &2);
elem.set(42);
assert_eq!(elem, &42);
elem.update(|x| *x = *x / 2);
assert_eq!(elem, &21);
let old = elem.replace(7);
assert_eq!(old, 21);
assert_eq!(elem, &7);
assert_eq!(&vec, &[0, 1, 7, 3]);
pub fn get_cloned(&self, i: usize) -> Option<T>
where
    T: Clone,
Returns the cloned value of the element at the i
-th position;
returns None if the index is out of bounds.
Note that vec.get_cloned(i)
is short-hand for vec.get(i).map(|elem| elem.cloned())
.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
assert_eq!(vec.get_cloned(2), Some(2));
assert_eq!(vec.get_cloned(4), None);
pub fn get_copied(&self, i: usize) -> Option<T>
where
    T: Copy,
Returns the copied value of the element at the i
-th position;
returns None if the index is out of bounds.
Note that vec.get_copied(i)
is short-hand for vec.get(i).map(|elem| elem.copied())
.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
assert_eq!(vec.get_copied(2), Some(2));
assert_eq!(vec.get_copied(4), None);
pub fn iter(&self) -> impl Iterator<Item = &ConcurrentElement<T>>
Returns an iterator to the elements of the vec.
The safe api of the ConcurrentVec
never gives out &T
or &mut T
references.
Instead, the iterator yields ConcurrentElement
which provides thread safe concurrent read and write
methods on the element.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::from_iter([0, 1, 2, 3]);
// read - map
let doubles: Vec<_> = vec.iter().map(|elem| elem.map(|x| x * 2)).collect();
assert_eq!(doubles, [0, 2, 4, 6]);
// read - reduce
let sum: i32 = vec.iter().map(|elem| elem.cloned()).sum();
assert_eq!(sum, 6);
// mutate
for (i, elem) in vec.iter().enumerate() {
match i {
2 => elem.set(42),
_ => elem.update(|x| *x *= 2),
}
}
assert_eq!(&vec, &[0, 2, 42, 6]);
let old_vals: Vec<_> = vec.iter().map(|elem| elem.replace(7)).collect();
assert_eq!(&old_vals, &[0, 2, 42, 6]);
assert_eq!(&vec, &[7, 7, 7, 7]);
pub fn iter_cloned(&self) -> impl Iterator<Item = T> + '_
where
    T: Clone,
Returns an iterator to cloned values of the elements of the vec.
Note that vec.iter_cloned()
is short-hand for vec.iter().map(|elem| elem.cloned())
.
§Examples
use orx_concurrent_vec::*;
let vec = ConcurrentVec::new();
vec.extend([42, 7]);
let mut iter = vec.iter_cloned();
assert_eq!(iter.next(), Some(42));
assert_eq!(iter.next(), Some(7));
assert_eq!(iter.next(), None);
let sum: i32 = vec.iter_cloned().sum();
assert_eq!(sum, 49);
Trait Implementations§
impl<T> Clone for ConcurrentVec<T>
where
    T: Clone,
fn clone(&self) -> Self
A thread-safe method to clone the concurrent vec.
§Example
use orx_concurrent_vec::*;
let vec: ConcurrentVec<_> = (0..4).into_iter().collect();
let clone = vec.clone();
assert_eq!(&clone, &[0, 1, 2, 3]);
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl<T, P> Debug for ConcurrentVec<T, P>
impl<T> Default for ConcurrentVec<T, SplitVec<ConcurrentElement<T>, Doubling>>
fn default() -> Self
Creates a new concurrent vec by creating and wrapping up a new SplitVec<ConcurrentElement<T>, Doubling>
as the underlying storage.