Trait parseq::ParallelIterator
pub trait ParallelIterator {
    fn map_parallel<B, F>(self, f: F) -> ParallelMap<Self, B>
    where
        Self: Iterator + Sized,
        Self::Item: Send + 'static,
        F: FnMut(Self::Item) -> B + Send + Clone + 'static,
        B: Send + 'static,
    { ... }

    fn map_parallel_limit<B, F>(
        self,
        threads: usize,
        buffer_size: usize,
        f: F
    ) -> ParallelMap<Self, B>
    where
        Self: Iterator + Sized,
        Self::Item: Send + 'static,
        F: FnMut(Self::Item) -> B + Send + Clone + 'static,
        B: Send + 'static,
    { ... }
}
An extension trait adding parallel sequential mapping to the standard Iterator trait.
Provided Methods
fn map_parallel<B, F>(self, f: F) -> ParallelMap<Self, B>
where
    Self: Iterator + Sized,
    Self::Item: Send + 'static,
    F: FnMut(Self::Item) -> B + Send + Clone + 'static,
    B: Send + 'static,
Creates an iterator which applies a given closure to each element in parallel.

This function is a multi-threaded equivalent of Iterator::map. It uses up to available_parallelism threads and buffers a finite number of items. Use map_parallel_limit if you want to set parallelism and space limits.
The returned iterator
- preserves the order of the original iterator
- is lazy in the sense that it doesn’t consume from the original iterator before next is called for the first time
- doesn’t fuse the original iterator
- uses constant space: linear in threads and buffer_size, not in the length of the possibly infinite original iterator
- propagates panics from the given closure (a sketch follows the example below)
Example
use std::time::Duration;
use parseq::ParallelIterator;

let mut iter = (0..3)
    .into_iter()
    .map_parallel(|i| {
        std::thread::sleep(Duration::from_millis((i % 3) * 10));
        i
    });

assert_eq!(iter.next(), Some(0));
assert_eq!(iter.next(), Some(1));
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), None);
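The panic propagation noted in the list above can be observed directly. A minimal sketch, assuming the worker's panic resurfaces on the consuming thread while next is being called (the docs only guarantee that panics are propagated):

use std::panic::{catch_unwind, AssertUnwindSafe};
use parseq::ParallelIterator;

let mut iter = (0..3)
    .into_iter()
    .map_parallel(|i| {
        if i == 1 {
            panic!("boom"); // panics on a worker thread
        }
        i
    });

// Assumption: the worker's panic is re-raised in the consumer,
// so it can be caught around the next() calls.
let result = catch_unwind(AssertUnwindSafe(|| {
    while iter.next().is_some() {}
}));
assert!(result.is_err());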
fn map_parallel_limit<B, F>(
    self,
    threads: usize,
    buffer_size: usize,
    f: F
) -> ParallelMap<Self, B>
where
    Self: Iterator + Sized,
    Self::Item: Send + 'static,
    F: FnMut(Self::Item) -> B + Send + Clone + 'static,
    B: Send + 'static,
Creates an iterator which applies a given closure to each element in parallel.

This function is a multi-threaded equivalent of Iterator::map. It uses up to the given number of threads and buffers up to buffer_size items. If threads is zero, up to available_parallelism threads are used instead. The buffer_size should be greater than the number of threads; a buffer_size < 2 effectively results in single-threaded processing.
The returned iterator
- preserves the order of the original iterator
- is lazy in the sense that it doesn’t consume from the original iterator before next is called for the first time
- doesn’t fuse the original iterator
- uses constant space: linear in threads and buffer_size, not in the length of the possibly infinite original iterator
- propagates panics from the given closure
Example
use std::time::Duration;
use parseq::ParallelIterator;

let mut iter = (0..3)
    .into_iter()
    .map_parallel_limit(2, 16, |i| {
        std::thread::sleep(Duration::from_millis((i % 3) * 10));
        i
    });

assert_eq!(iter.next(), Some(0));
assert_eq!(iter.next(), Some(1));
assert_eq!(iter.next(), Some(2));
assert_eq!(iter.next(), None);
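As noted above, passing threads = 0 defers to available_parallelism while still letting you cap the buffer. A minimal sketch of that usage, assuming only the behaviour documented here:

use parseq::ParallelIterator;

// threads = 0: let the library pick up to available_parallelism threads,
// while buffering at most 64 items at a time.
let squares: Vec<u64> = (0..100u64)
    .map_parallel_limit(0, 64, |i| i * i)
    .collect();

// Order is preserved, so the output matches a plain sequential map.
assert_eq!(squares, (0..100u64).map(|i| i * i).collect::<Vec<u64>>());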
Examples found in repository
The provided map_parallel implementation calls map_parallel_limit:
fn map_parallel<B, F>(self, f: F) -> ParallelMap<Self, B>
where
    Self: Iterator + Sized,
    Self::Item: Send + 'static,
    F: FnMut(Self::Item) -> B + Send + Clone + 'static,
    B: Send + 'static,
{
    // Fall back to a single thread if available_parallelism cannot be determined.
    let threads = std::thread::available_parallelism()
        .map(NonZeroUsize::get)
        .unwrap_or(1);
    // Buffer 16 items per thread.
    let buffer_size = threads.saturating_mul(16);
    self.map_parallel_limit(threads, buffer_size, f)
}
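In other words, map_parallel is roughly map_parallel_limit with the detected parallelism and a buffer of 16 items per thread. A short end-to-end sketch under the documented behaviour, using a hypothetical expensive function as a stand-in for real per-item work:

use parseq::ParallelIterator;

// Hypothetical stand-in for a CPU-heavy, Send + 'static computation.
fn expensive(n: u64) -> u64 {
    (0..10_000u64).fold(n, |acc, x| acc.wrapping_mul(31).wrapping_add(x))
}

// The calls to `expensive` run on worker threads, but results come back
// in the original order, so this behaves like a faster sequential map.
let results: Vec<u64> = (0..1_000u64).map_parallel(expensive).collect();

assert_eq!(results.len(), 1_000);
assert_eq!(results[0], expensive(0));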