pub trait ParallelIterator {
    // Provided methods
    fn map_parallel<B, F>(self, f: F) -> ParallelMap<Self, B>
       where Self: Iterator + Sized,
             Self::Item: Send + 'static,
             F: FnMut(Self::Item) -> B + Send + Clone + 'static,
             B: Send + 'static { ... }
    fn map_parallel_limit<B, F>(
        self,
        threads: usize,
        buffer_size: usize,
        f: F,
    ) -> ParallelMap<Self, B>
       where Self: Iterator + Sized,
             Self::Item: Send + 'static,
             F: FnMut(Self::Item) -> B + Send + Clone + 'static,
             B: Send + 'static { ... }
}
An extension trait adding parallel sequential mapping (parallel execution, sequential order of results) to the standard Iterator trait.
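For illustration, here is a minimal sketch (not part of the crate's reference documentation) showing that importing the trait makes its methods available on any standard iterator:

use parseq::ParallelIterator;

// Importing the trait adds map_parallel to every std Iterator.
let doubled: Vec<u64> = vec![1, 2, 3]
    .into_iter()
    .map_parallel(|x| x * 2)
    .collect();
assert_eq!(doubled, vec![2, 4, 6]);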
§Provided Methods
fn map_parallel<B, F>(self, f: F) -> ParallelMap<Self, B>
Creates an iterator which applies a given closure to each element in parallel.

This function is a multi-threaded equivalent of Iterator::map. It uses up to available_parallelism threads and buffers a finite number of items. Use map_parallel_limit if you want to set parallelism and space limits.
The returned iterator

- preserves the order of the original iterator
- is lazy in the sense that it doesn't consume from the original iterator before next is called for the first time
- doesn't fuse the original iterator
- uses constant space: linear in the number of threads and the buffer size, not in the length of the possibly infinite original iterator (the sketch after the example below illustrates this with an unbounded source)
- propagates panics from the given closure
§Example
use std::time::Duration;
use parseq::ParallelIterator;
let mut iter = [3, 2, 1]
    .into_iter()
    .map_parallel(|i| {
        // Insert heavy computation here ...
        std::thread::sleep(Duration::from_millis(100 * i));
        2 * i
    });
assert_eq!(iter.next(), Some(6));
assert_eq!(iter.next(), Some(4));
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), None);
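As a further illustration (my own sketch, not taken from the crate's documentation), the space bound means the source iterator may be unbounded; only a finite number of items are ever in flight:

use parseq::ParallelIterator;

// An infinite source: items are pulled and buffered only as needed.
let mut squares = (0u64..).map_parallel(|i| i * i);

assert_eq!(squares.next(), Some(0));
assert_eq!(squares.next(), Some(1));
assert_eq!(squares.next(), Some(4));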
fn map_parallel_limit<B, F>(
    self,
    threads: usize,
    buffer_size: usize,
    f: F,
) -> ParallelMap<Self, B>
Creates an iterator which applies a given closure to each element in parallel.

This function is a multi-threaded equivalent of Iterator::map. It uses up to the given number of threads and buffers up to buffer_size items. If threads is zero, up to available_parallelism threads are used instead. The buffer_size should be greater than the number of threads; a buffer_size < 2 effectively results in single-threaded processing.
The returned iterator

- preserves the order of the original iterator
- is lazy in the sense that it doesn't consume from the original iterator before next is called for the first time
- doesn't fuse the original iterator
- uses constant space: linear in threads and buffer_size, not in the length of the possibly infinite original iterator
- propagates panics from the given closure (a sketch after the example below demonstrates this)
§Example
use std::time::Duration;
use parseq::ParallelIterator;
let mut iter = [3, 2, 1]
    .into_iter()
    .map_parallel_limit(2, 16, |i| {
        std::thread::sleep(Duration::from_millis(100 * i));
        2 * i
    });
assert_eq!(iter.next(), Some(6));
assert_eq!(iter.next(), Some(4));
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), None);
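A sketch of the panic-propagation behaviour (my own illustration; it assumes the panic surfaces while the iterator is being drained rather than at one particular next call):

use std::panic::{catch_unwind, AssertUnwindSafe};
use parseq::ParallelIterator;

let iter = [1u64, 2, 3]
    .into_iter()
    .map_parallel_limit(2, 4, |i| {
        if i == 2 {
            panic!("worker failed on item {i}");
        }
        2 * i
    });

// Draining the iterator re-raises the worker's panic on the calling thread.
let result = catch_unwind(AssertUnwindSafe(|| iter.collect::<Vec<_>>()));
assert!(result.is_err());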