pub trait ParMap: Iterator + Sized {
    // Provided methods
    fn par_map<B, F>(self, f: F) -> Map<Self, B, F>
        where F: Sync + Send + 'static + Fn(Self::Item) -> B,
              B: Send + 'static,
              Self::Item: Send + 'static { ... }
    fn par_flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>
        where F: Sync + Send + 'static + Fn(Self::Item) -> U,
              U: IntoIterator,
              U::Item: Send + 'static,
              Self::Item: Send + 'static { ... }
    fn pack(self, nb: usize) -> Pack<Self> { ... }
    fn par_packed_map<'a, B, F>(self, nb: usize, f: F) -> PackedMap<'a, B>
        where F: Sync + Send + 'static + Fn(Self::Item) -> B,
              B: Send + 'static,
              Self::Item: Send + 'static,
              Self: 'a { ... }
    fn par_packed_flat_map<'a, U, F>(self, nb: usize, f: F) -> PackedFlatMap<'a, U::Item>
        where F: Sync + Send + 'static + Fn(Self::Item) -> U,
              U: IntoIterator + 'a,
              U::Item: Send + 'static,
              Self::Item: Send + 'static,
              Self: 'a { ... }
    fn with_nb_threads(self, nb: usize) -> ParMapBuilder<Self> { ... }
}
This trait extends std::iter::Iterator with parallel iterator adaptors. Just use it to get access to the methods:

use par_map::ParMap;

Each iterator adaptor has its own thread pool, sized by default to the number of CPUs. At most two times the number of configured threads jobs are launched in advance, guaranteeing that memory stays bounded even if the iterator is consumed more slowly than items are produced. To be effective, the given function should be costly to compute and each call should take about the same time. The packed variants behave the same way, but process items by batch instead of launching one job per item.

The 'static constraints are needed to keep the interface this simple. These adaptors are well suited for big iterators that cannot be collected into a Vec; otherwise, crates such as rayon are better suited for this kind of task.
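As a concrete illustration of that last point, here is a minimal sketch of mapping a costly computation over a sequence far too large to collect first. It assumes the usual blanket implementation of ParMap for any iterator; the expensive function, the input range, and the thread count are made up for the example.

use par_map::ParMap;

// Hypothetical stand-in for a costly, roughly constant-time computation.
fn expensive(n: u64) -> u64 {
    (0..10_000u64).fold(n, |acc, i| acc.wrapping_mul(31).wrapping_add(i))
}

// The source is never collected: only a bounded number of results
// (about two per worker thread) are in flight at any time.
let sum: u64 = (0..10_000_000u64)
    .with_nb_threads(4)
    .par_map(expensive)
    .fold(0u64, |acc, x| acc.wrapping_add(x));
println!("{}", sum);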
Provided Methods
fn par_map<B, F>(self, f: F) -> Map<Self, B, F>
Takes a closure and creates an iterator which calls that closure on each element, exactly as std::iter::Iterator::map.
The order of the elements is guaranteed to be unchanged, even though the closure calls may execute in parallel and out of order.
Example
use par_map::ParMap;
let a = [1, 2, 3];
let mut iter = a.iter().cloned().par_map(|x| 2 * x);
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), Some(4));
assert_eq!(iter.next(), Some(6));
assert_eq!(iter.next(), None);
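To see both halves of that guarantee at once, here is a small sketch where the sleep durations are arbitrary and only there to force the closures to finish out of order:

use par_map::ParMap;
use std::thread::sleep;
use std::time::Duration;

let delays = [30u64, 10, 20];
let out: Vec<u64> = delays.iter().cloned()
    .par_map(|ms| {
        // Later items may finish first, but results are yielded in input order.
        sleep(Duration::from_millis(ms));
        ms
    })
    .collect();
assert_eq!(out, vec![30, 10, 20]);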
fn par_flat_map<U, F>(self, f: F) -> FlatMap<Self, U, F>
Creates an iterator that works like map, but flattens nested structure, exactly as std::iter::Iterator::flat_map.
The order of the elements is guaranteed to be unchanged, even though the closure calls may execute in parallel and out of order.
Example
use par_map::ParMap;
let words = ["alpha", "beta", "gamma"];
let merged: String = words.iter()
.cloned() // as items must be 'static
.par_flat_map(|s| s.chars()) // exactly as std::iter::Iterator::flat_map
.collect();
assert_eq!(merged, "alphabetagamma");
fn pack(self, nb: usize) -> Pack<Self>
Creates an iterator that yields Vec<Self::Item> chunks of size nb (the last chunk may contain fewer items).
Example
use par_map::ParMap;
let nbs = [1, 2, 3, 4, 5, 6, 7];
let mut iter = nbs.iter().cloned().pack(3);
assert_eq!(Some(vec![1, 2, 3]), iter.next());
assert_eq!(Some(vec![4, 5, 6]), iter.next());
assert_eq!(Some(vec![7]), iter.next());
assert_eq!(None, iter.next());
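pack also works as a manual batching building block when the per-item work is cheap; the sketch below shows roughly the same idea that the packed adaptors further down automate, again assuming the usual blanket implementation for any iterator.

use par_map::ParMap;

let nbs = [1, 2, 3, 4, 5, 6, 7];
let doubled: Vec<i32> = nbs.iter().cloned()
    .pack(3)                       // group cheap items...
    .par_flat_map(|chunk: Vec<i32>| {
        // ...so each parallel job handles a whole chunk.
        chunk.into_iter().map(|x| 2 * x).collect::<Vec<_>>()
    })
    .collect();
assert_eq!(doubled, vec![2, 4, 6, 8, 10, 12, 14]);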
fn par_packed_map<'a, B, F>(self, nb: usize, f: F) -> PackedMap<'a, B>
Same as par_map, but the parallel work is batched in groups of nb items.
Example
use par_map::ParMap;
let a = [1, 2, 3];
let mut iter = a.iter().cloned().par_packed_map(2, |x| 2 * x);
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), Some(4));
assert_eq!(iter.next(), Some(6));
assert_eq!(iter.next(), None);
fn par_packed_flat_map<'a, U, F>(self, nb: usize, f: F) -> PackedFlatMap<'a, U::Item>
Same as par_flat_map, but the parallel work is batched in groups of nb items.
Example
use par_map::ParMap;
let words = ["alpha", "beta", "gamma"];
let merged: String = words.iter()
.cloned()
.par_packed_flat_map(2, |s| s.chars())
.collect();
assert_eq!(merged, "alphabetagamma");
fn with_nb_threads(self, nb: usize) -> ParMapBuilder<Self>
Configures the number of threads used. If not set, the default is the number of CPUs available.
Example
use par_map::ParMap;
let a = [1, 2, 3];
let mut iter = a.iter().cloned().with_nb_threads(2).par_map(|x| 2 * x);
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), Some(4));
assert_eq!(iter.next(), Some(6));
assert_eq!(iter.next(), None);
Dyn Compatibility
This trait is not dyn compatible.
In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.