Trait rayon::iter::ParallelIterator

pub trait ParallelIterator: Sized {
    type Item: Send;

    fn drive_unindexed<C>(self, consumer: C) -> C::Result
    where
        C: UnindexedConsumer<Self::Item>;

    fn weight(self, _scale: f64) -> Weight<Self> { ... }
    fn weight_max(self) -> Weight<Self> { ... }
    fn for_each<OP>(self, op: OP)
    where
        OP: Fn(Self::Item) + Sync,
    { ... }
    fn count(self) -> usize { ... }
    fn map<F, R>(self, map_op: F) -> Map<Self, MapFn<F>>
    where
        F: Fn(Self::Item) -> R + Sync,
        R: Send,
    { ... }
    fn cloned<'a, T>(self) -> Map<Self, MapCloned>
    where
        T: 'a + Clone + Send,
        Self: ParallelIterator<Item = &'a T>,
    { ... }
    fn inspect<OP>(self, inspect_op: OP) -> Map<Self, MapInspect<OP>>
    where
        OP: Fn(&Self::Item) + Sync,
    { ... }
    fn filter<P>(self, filter_op: P) -> Filter<Self, P>
    where
        P: Fn(&Self::Item) -> bool + Sync,
    { ... }
    fn filter_map<P, R>(self, filter_op: P) -> FilterMap<Self, P>
    where
        P: Fn(Self::Item) -> Option<R> + Sync,
        R: Send,
    { ... }
    fn flat_map<F, PI>(self, map_op: F) -> FlatMap<Self, F>
    where
        F: Fn(Self::Item) -> PI + Sync,
        PI: IntoParallelIterator,
    { ... }
    fn reduce<OP, ID>(self, identity: ID, op: OP) -> Self::Item
    where
        OP: Fn(Self::Item, Self::Item) -> Self::Item + Sync,
        ID: Fn() -> Self::Item + Sync,
    { ... }
    fn reduce_with<OP>(self, op: OP) -> Option<Self::Item>
    where
        OP: Fn(Self::Item, Self::Item) -> Self::Item + Sync,
    { ... }
    fn reduce_with_identity<OP>(self, identity: Self::Item, op: OP) -> Self::Item
    where
        OP: Fn(Self::Item, Self::Item) -> Self::Item + Sync,
        Self::Item: Clone + Sync,
    { ... }
    fn fold<T, ID, F>(self, identity: ID, fold_op: F) -> Fold<Self, ID, F>
    where
        F: Fn(T, Self::Item) -> T + Sync,
        ID: Fn() -> T + Sync,
        T: Send,
    { ... }
    fn sum<S>(self) -> S
    where
        S: Send + Sum<Self::Item> + Sum,
    { ... }
    fn product<P>(self) -> P
    where
        P: Send + Product<Self::Item> + Product,
    { ... }
    fn mul(self) -> Self::Item
    where
        Self::Item: Product,
    { ... }
    fn min(self) -> Option<Self::Item>
    where
        Self::Item: Ord,
    { ... }
    fn min_by_key<K, F>(self, f: F) -> Option<Self::Item>
    where
        K: Ord + Send,
        F: Sync + Fn(&Self::Item) -> K,
    { ... }
    fn max(self) -> Option<Self::Item>
    where
        Self::Item: Ord,
    { ... }
    fn max_by_key<K, F>(self, f: F) -> Option<Self::Item>
    where
        K: Ord + Send,
        F: Sync + Fn(&Self::Item) -> K,
    { ... }
    fn chain<C>(self, chain: C) -> Chain<Self, C::Iter>
    where
        C: IntoParallelIterator<Item = Self::Item>,
    { ... }
    fn find_any<P>(self, predicate: P) -> Option<Self::Item>
    where
        P: Fn(&Self::Item) -> bool + Sync,
    { ... }
    fn find_first<P>(self, predicate: P) -> Option<Self::Item>
    where
        P: Fn(&Self::Item) -> bool + Sync,
    { ... }
    fn find_last<P>(self, predicate: P) -> Option<Self::Item>
    where
        P: Fn(&Self::Item) -> bool + Sync,
    { ... }
    fn any<P>(self, predicate: P) -> bool
    where
        P: Fn(Self::Item) -> bool + Sync,
    { ... }
    fn all<P>(self, predicate: P) -> bool
    where
        P: Fn(Self::Item) -> bool + Sync,
    { ... }
    fn collect<C>(self) -> C
    where
        C: FromParallelIterator<Self::Item>,
    { ... }
    fn opt_len(&mut self) -> Option<usize> { ... }
}

The ParallelIterator interface.

Associated Types

Required Methods

Internal method used to define the behavior of this parallel iterator. You should not need to call this directly.

This method causes the iterator self to start producing items and to feed them to the given consumer one by one. It may split the consumer before doing so, to create the opportunity to produce in parallel.

See the README for more details on the internals of parallel iterators.

Provided Methods

Deprecated since v0.7.0: try with_min_len or with_max_len instead

Deprecated. If the adaptive algorithms don't split appropriately, try IndexedParallelIterator::with_min_len() or with_max_len() instead.

Deprecated since v0.7.0: try with_min_len or with_max_len instead

Deprecated. If the adaptive algorithms don't split appropriately, try IndexedParallelIterator::with_min_len() or with_max_len() instead.

Executes OP on each item produced by the iterator, in parallel.
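
Example (a minimal sketch; the range and the println! side effect are illustrative, and the output order is not deterministic):

use rayon::prelude::*;
(0..5).into_par_iter()
      .for_each(|x| println!("processed {}", x));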

Counts the number of items in this parallel iterator.

Applies map_op to each item of this iterator, producing a new iterator with the results.
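
Example (a minimal sketch; the input values are illustrative):

use rayon::prelude::*;
let doubled: Vec<i32> = [1, 2, 3].par_iter()
                                 .map(|&x| x * 2)
                                 .collect();
assert_eq!(doubled, vec![2, 4, 6]);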

Creates an iterator which clones all of its elements. This may be useful when you have an iterator over &T, but you need T.
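
Example (a minimal sketch; the vector contents are illustrative):

use rayon::prelude::*;
let v = vec![1, 2, 3];
let owned: Vec<i32> = v.par_iter()  // iterating over &i32
                       .cloned()    // iterating over i32
                       .collect();
assert_eq!(owned, v);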

Applies inspect_op to a reference to each item of this iterator, producing a new iterator passing through the original items. This is often useful for debugging to see what's happening in iterator stages.
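
Example (a minimal sketch; the println! is only there to observe items flowing through the pipeline, and its output order is not deterministic):

use rayon::prelude::*;
let sum: i32 = [1, 4, 2, 3].par_iter()
                           .cloned()
                           .inspect(|x| println!("about to sum: {}", x))
                           .sum();
assert_eq!(sum, 10);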

Applies filter_op to each item of this iterator, producing a new iterator with only the items that gave true results.
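
Example (a minimal sketch; the range is illustrative):

use rayon::prelude::*;
let evens: Vec<i32> = (0..10).into_par_iter()
                             .filter(|&x| x % 2 == 0)
                             .collect();
assert_eq!(evens, vec![0, 2, 4, 6, 8]);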

Applies filter_op to each item of this iterator to get an Option, producing a new iterator with only the items from Some results.
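
Example (a minimal sketch; the string values are illustrative):

use rayon::prelude::*;
// Keep only the entries that parse as integers.
let numbers: Vec<i32> = ["1", "two", "3"].par_iter()
                                         .filter_map(|s| s.parse().ok())
                                         .collect();
assert_eq!(numbers, vec![1, 3]);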

Applies map_op to each item of this iterator to get nested iterators, producing a new iterator that flattens these back into one.
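
Example (a minimal sketch; the nested vectors are illustrative):

use rayon::prelude::*;
let groups = vec![vec![1, 2], vec![3], vec![4, 5, 6]];
let flat: Vec<i32> = groups.into_par_iter()
                           .flat_map(|group| group)
                           .collect();
assert_eq!(flat, vec![1, 2, 3, 4, 5, 6]);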

Reduces the items in the iterator into one item using op. The argument identity should be a closure that can produce an "identity" value, which may be inserted into the sequence as needed to create opportunities for parallel execution. So, for example, if you are doing a summation, then identity() ought to produce something that represents the zero for your type (but consider just calling sum() in that case).

Example:

// Iterate over a sequence of pairs `(x0, y0), ..., (xN, yN)`
// and use reduce to compute one pair `(x0 + ... + xN, y0 + ... + yN)`
// where the first/second elements are summed separately.
use rayon::prelude::*;
let sums = [(0, 1), (5, 6), (16, 2), (8, 9)]
           .par_iter()        // iterating over &(i32, i32)
           .cloned()          // iterating over (i32, i32)
           .reduce(|| (0, 0), // the "identity" is 0 in both columns
                   |a, b| (a.0 + b.0, a.1 + b.1));
assert_eq!(sums, (0 + 5 + 16 + 8, 1 + 6 + 2 + 9));

Note: unlike a sequential fold operation, the order in which op will be applied to reduce the result is not fully specified. So op should be associative or else the results will be non-deterministic. And of course identity() should produce a true identity.

Reduces the items in the iterator into one item using op. If the iterator is empty, None is returned; otherwise, Some is returned.

This version of reduce is simple but somewhat less efficient. If possible, it is better to call reduce(), which requires an identity element.

Note: unlike a sequential fold operation, the order in which op will be applied to reduce the result is not fully specified. So op should be associative or else the results will be non-deterministic.
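
Example (a minimal sketch; the input values are illustrative):

use rayon::prelude::*;
let sum = [1, 2, 3, 4].par_iter()
                      .cloned()
                      .reduce_with(|a, b| a + b);
assert_eq!(sum, Some(10));

let empty: Vec<i32> = vec![];
assert_eq!(empty.par_iter().cloned().reduce_with(|a, b| a + b), None);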

Deprecated since v0.5.0: call reduce instead

Deprecated. Use reduce() instead.

Parallel fold is similar to sequential fold except that the sequence of items may be subdivided before it is folded. Consider a list of numbers like 22 3 77 89 46. If you used sequential fold to add them (fold(0, |a,b| a+b)), you would wind up first adding 0 + 22, then 22 + 3, then 25 + 77, and so forth. The parallel fold works similarly except that it first breaks up your list into sublists, and hence instead of yielding up a single sum at the end, it yields up multiple sums. The number of results is nondeterministic, as is the point where the breaks occur.

So if we did the same parallel fold (fold(0, |a,b| a+b)) on our example list, we might wind up with a sequence of two numbers, like so:

22 3 77 89 46
      |     |
    102   135

Or perhaps these three numbers:

22 3 77 89 46
      |  |  |
    102 89 46

In general, Rayon will attempt to find good breaking points that keep all of your cores busy.

Fold versus reduce

The fold() and reduce() methods each take an identity element and a combining function, but they operate rather differently.

reduce() requires that the identity function has the same type as the things you are iterating over, and it fully reduces the list of items into a single item. So, for example, imagine we are iterating over a list of bytes bytes: [128_u8, 64_u8, 64_u8]. If we used bytes.reduce(|| 0_u8, |a: u8, b: u8| a + b), we would get an overflow. This is because 0, a, and b here are all bytes, just like the numbers in the list (I wrote the types explicitly above, but those are the only types you can use). To avoid the overflow, we would need to do something like bytes.map(|b| b as u32).reduce(|| 0, |a, b| a + b), in which case our result would be 256.

In contrast, with fold(), the identity function does not have to have the same type as the things you are iterating over, and you potentially get back many results. So, if we continue with the bytes example from the previous paragraph, we could do bytes.fold(|| 0_u32, |a, b| a + (b as u32)) to convert our bytes into u32. And of course we might not get back a single sum.

There is a more subtle distinction as well, though it's actually implied by the above points. When you use reduce(), your reduction function is sometimes called with values that were never part of your original parallel iterator (for example, both the left and right might be a partial sum). With fold(), in contrast, the left value in the fold function is always the accumulator, and the right value is always from your original sequence.

Fold vs Map/Reduce

Fold makes sense if you have some operation where it is cheaper to process groups of elements at a time. For example, imagine collecting characters into a string. If you were going to use map/reduce, you might try this:

use rayon::prelude::*;
let s =
    ['a', 'b', 'c', 'd', 'e']
    .par_iter()
    .map(|c: &char| format!("{}", c))
    .reduce(|| String::new(),
            |mut a: String, b: String| { a.push_str(&b); a });
assert_eq!(s, "abcde");

Because reduce produces the same type of element as its input, you have to first map each character into a string, and then you can reduce them. This means we create one string per element in our iterator -- not so great. Using fold, we can do this instead:

use rayon::prelude::*;
let s =
    ['a', 'b', 'c', 'd', 'e']
    .par_iter()
    .fold(|| String::new(),
            |mut s: String, c: &char| { s.push(*c); s })
    .reduce(|| String::new(),
            |mut a: String, b: String| { a.push_str(&b); a });
assert_eq!(s, "abcde");

Now fold will process groups of our characters at a time, and we only make one string per group. We should wind up with some small-ish number of strings roughly proportional to the number of CPUs you have (it will ultimately depend on how busy your processors are). Note that we still need to do a reduce afterwards to combine those groups of strings into a single string.

You could use a similar trick to save partial results (e.g., a cache) or something similar.

Combining fold with other operations

You can combine fold with reduce if you want to produce a single value. This is then roughly equivalent to a map/reduce combination in effect:

use rayon::prelude::*;
let bytes = 0..22_u8; // series of u8 bytes
let sum = bytes.into_par_iter()
               .fold(|| 0_u32, |a: u32, b: u8| a + (b as u32))
               .sum::<u32>();
assert_eq!(sum, (0..22).sum()); // compare to sequential

Sums up the items in the iterator.

Note that the order in which the items will be reduced is not specified, so if the + operator is not truly associative, then the results are not fully deterministic.

Basically equivalent to self.reduce(|| 0, |a, b| a + b), except that the type of 0 and the + operation may vary depending on the type of value being produced.
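
Example (a minimal sketch; the range is illustrative):

use rayon::prelude::*;
let total: i32 = (0..100).into_par_iter().sum();
assert_eq!(total, 4950);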

Multiplies all the items in the iterator.

Note that the order in which the items will be reduced is not specified, so if the * operator is not truly associative, then the results are not fully deterministic.

Basically equivalent to self.reduce(|| 1, |a, b| a * b), except that the type of 1 and the * operation may vary depending on the type of value being produced.
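
Example (a minimal sketch; the range is illustrative):

use rayon::prelude::*;
let factorial: i32 = (1..6).into_par_iter().product();
assert_eq!(factorial, 120); // 1 * 2 * 3 * 4 * 5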

Deprecated since v0.6.0: name changed to product() to match sequential iterators

Deprecated. Use product() instead.

Computes the minimum of all the items in the iterator. If the iterator is empty, None is returned; otherwise, Some(min) is returned.

Note that the order in which the items will be reduced is not specified, so if the Ord impl is not truly associative, then the results are not deterministic.

Basically equivalent to self.reduce_with(|a, b| cmp::min(a, b)).
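
Example (a minimal sketch; the input values are illustrative):

use rayon::prelude::*;
let smallest = [45, 74, 32].par_iter().cloned().min();
assert_eq!(smallest, Some(32));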

Computes the item that yields the minimum value for the given function. If the iterator is empty, None is returned; otherwise, Some(item) is returned.

Note that the order in which the items will be reduced is not specified, so if the Ord impl is not truly associative, then the results are not deterministic.
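
Example (a minimal sketch; the string values are illustrative):

use rayon::prelude::*;
let shortest = ["hello", "hi", "hey"].par_iter()
                                     .cloned()
                                     .min_by_key(|s| s.len());
assert_eq!(shortest, Some("hi"));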

Computes the maximum of all the items in the iterator. If the iterator is empty, None is returned; otherwise, Some(max) is returned.

Note that the order in which the items will be reduced is not specified, so if the Ord impl is not truly associative, then the results are not deterministic.

Basically equivalent to self.reduce_with(|a, b| cmp::max(a, b)).

Computes the item that yields the maximum value for the given function. If the iterator is empty, None is returned; otherwise, Some(item) is returned.

Note that the order in which the items will be reduced is not specified, so if the Ord impl is not truly associative, then the results are not deterministic.

Takes two iterators and creates a new iterator over both.
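
Example (a minimal sketch; the two ranges are illustrative):

use rayon::prelude::*;
let both: Vec<i32> = (0..3).into_par_iter()
                           .chain(3..6)
                           .collect();
assert_eq!(both, vec![0, 1, 2, 3, 4, 5]);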

Searches for some item in the parallel iterator that matches the given predicate and returns it. This operation is similar to find on sequential iterators but the item returned may not be the first one in the parallel sequence which matches, since we search the entire sequence in parallel.

Once a match is found, we will attempt to stop processing the rest of the items in the iterator as soon as possible (just as find stops iterating once a match is found).
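
Example (a minimal sketch; which matching item comes back depends on how the parallel search races):

use rayon::prelude::*;
let found = (0..100).into_par_iter()
                    .find_any(|&x| x > 50 && x % 7 == 0);
// Some multiple of 7 greater than 50 is returned, but not necessarily the first one.
assert!(found.map_or(false, |x| x > 50 && x % 7 == 0));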

Searches for the first item in the parallel iterator that matches the given predicate and returns it.

Once a match is found, all attempts to the right of the match will be stopped, while attempts to the left must continue in case an earlier match is found.

Note that not all parallel iterators have a useful order, much like sequential HashMap iteration, so "first" may be nebulous.

Searches for the last item in the parallel iterator that matches the given predicate and returns it.

Once a match is found, all attempts to the left of the match will be stopped, while attempts to the right must continue in case a later match is found.

Note that not all parallel iterators have a useful order, much like sequential HashMap iteration, so "last" may be nebulous.

Searches for some item in the parallel iterator that matches the given predicate, and if so returns true. Once a match is found, we'll attempt to stop processing the rest of the items. Proving that there's no match, returning false, does require visiting every item.
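
Example (a minimal sketch; the input values are illustrative):

use rayon::prelude::*;
assert!([1, 2, 3].par_iter().any(|&x| x == 2));
assert!(![1, 2, 3].par_iter().any(|&x| x > 10));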

Tests that every item in the parallel iterator matches the given predicate, and if so returns true. If a counter-example is found, we'll attempt to stop processing more items, then return false.
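
Example (a minimal sketch; the input values are illustrative):

use rayon::prelude::*;
assert!([1, 2, 3].par_iter().all(|&x| x > 0));
assert!(![1, 2, 3].par_iter().all(|&x| x > 2));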

Creates a fresh collection containing all the elements produced by this parallel iterator.

You may prefer to use collect_into(), which allocates more efficiently with precise knowledge of how many elements the iterator contains, and even allows you to reuse an existing vector's backing store rather than allocating a fresh vector.
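
Example (a minimal sketch; the map step and values are illustrative):

use rayon::prelude::*;
let squares: Vec<i32> = (0..5).into_par_iter()
                              .map(|x| x * x)
                              .collect();
assert_eq!(squares, vec![0, 1, 4, 9, 16]);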

Internal method used to define the behavior of this parallel iterator. You should not need to call this directly.

Returns the number of items produced by this iterator, if known statically. This can be used by consumers to trigger special fast paths. Therefore, if Some(_) is returned, this iterator must only use the (indexed) Consumer methods when driving a consumer, such as split_at(). Calling UnindexedConsumer::split_off_left() or other UnindexedConsumer methods -- or returning an inaccurate value -- may result in panics.

This method is currently used to optimize collect for want of true Rust specialization; it may be removed when specialization is stable.

Implementors