// orx_parallel/parameters/chunk_size.rs
use std::num::NonZeroUsize;

/// `ChunkSize` represents the batch size of elements each thread will pull from the main iterator once it becomes idle again.
/// It is possible to define a minimum or an exact chunk size.
///
/// # Rules of Thumb / Guidelines
///
/// The objective of this parameter is to balance the overhead of parallelization and the cost of heterogeneity of tasks.
///
/// To illustrate, assume that there exist 8 elements to process, or 8 jobs to execute, and that 2 threads will be used for this computation.
/// Two extreme strategies can be defined as follows.
///
/// * **Perfect Sharing of Tasks**
///   * Setting the chunk size to 4 provides a perfect division of tasks in terms of quantity.
///     Each thread will retrieve 4 elements at once in one pull and process them.
///     This *one pull* per thread can be considered the parallelization overhead, and it is the best/minimum we can achieve.
///   * The drawback of this approach, on the other hand, is observed when the execution times of the jobs differ significantly; i.e., when the tasks are heterogeneous.
///     * Assume, for instance, that the first element requires 7 units of time while each of the remaining elements requires 1 unit of time.
///     * Roughly, the parallel execution with a chunk size of 4 would complete in 10 units of time, which is the execution time of the first thread (7 + 3*1).
///     * The second thread would complete its 4 tasks in 4 units of time and remain idle for 6 units of time.
/// * **Perfect Handling of Heterogeneity**
///   * Setting the chunk size to 1 provides a perfect way to deal with heterogeneous tasks, minimizing the idle time of threads.
///     Each thread will retrieve elements one by one whenever it becomes idle.
///   * Considering the heterogeneous example above, the parallel execution with a chunk size of 1 would complete in around 7 units of time.
///     * This is again the execution time of the first thread, which will execute only the first element.
///     * The second thread will execute the remaining 7 elements, again in 7 units of time.
///     * Neither thread will be idle, which is the best we can achieve.
///   * The drawback of this approach is the parallelization overhead due to the *pull*s.
///     * With a chunk size of 1, this setting leads to a total of 8 pull operations (1 pull by the first thread, 7 pulls by the second thread).
///     * This is the maximum/worst parallelization overhead in this scenario.
///
/// The objective then is to find a chunk size which is:
/// * large enough that the total time spent on pulls is insignificant, while
/// * small enough not to suffer from the impact of heterogeneity.
///
/// Note that this decision is data dependent and can hence be tuned for the input when the operation is extremely time-critical.
///
/// In such cases, the following rule of thumb helps to find a good chunk size:
/// set the chunk size to the smallest value which makes the overhead of pulls insignificant.
/// * The larger each individual task, the less significant the parallelization overhead; a small chunk size will do.
/// * The smaller each individual task, the more significant the parallelization overhead; a larger chunk size is required, while being careful not to suffer from idle threads due to heterogeneity.
///
/// In general, it is recommended to leave this parameter at its default value, `ChunkSize::Auto`.
/// The library will then try to solve the tradeoff explained above depending on the input data, in order to minimize execution time and idle thread time.
///
/// For more critical operations, the `ChunkSize::Exact` and `ChunkSize::Min` variants can be used to tune the execution for the relevant class of input data.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Default)]
pub enum ChunkSize {
    /// The objective of the `ChunkSize` parameter is to balance the overhead of parallelization and the cost of heterogeneity of tasks.
    ///
    /// When `ChunkSize::Auto` is used, the library will try to solve the tradeoff explained above depending on the input data, in order to minimize execution time and idle thread time.
    #[default]
    Auto,
    /// This variant defines a minimum chunk size, say `min_chunk_size`.
    /// Each time a thread completes a task and becomes idle, it will pull at least `min_chunk_size` elements from the input source.
    /// The parallel execution is allowed to, and might decide to, pull more elements depending on the characteristics of the input and the number of threads used.
    Min(NonZeroUsize),
    /// This variant defines an exact chunk size, say `exact_chunk_size`.
    /// Each time a thread completes a task and becomes idle, it will pull exactly `exact_chunk_size` elements from the input source.
    Exact(NonZeroUsize),
}
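The arithmetic in the 8-task, 2-thread example above can be checked with a small standalone simulation. The `makespan` helper below is purely illustrative and not part of this crate; it greedily lets the earliest-idle thread pull the next chunk of task durations:

```rust
// Simulates threads pulling fixed-size chunks from a queue of task
// durations and returns the overall completion time (makespan).
fn makespan(durations: &[u64], threads: usize, chunk: usize) -> u64 {
    let mut finish = vec![0u64; threads]; // time at which each thread becomes idle
    let mut next = 0; // index of the next unpulled task
    while next < durations.len() {
        // the earliest-idle thread performs the next pull
        let t = (0..threads).min_by_key(|&t| finish[t]).unwrap();
        let end = (next + chunk).min(durations.len());
        finish[t] += durations[next..end].iter().sum::<u64>();
        next = end;
    }
    finish.into_iter().max().unwrap()
}

fn main() {
    // one 7-unit task followed by seven 1-unit tasks, as in the doc example
    let tasks = [7, 1, 1, 1, 1, 1, 1, 1];
    println!("chunk 4 -> {} units", makespan(&tasks, 2, 4)); // 7 + 3*1 = 10
    println!("chunk 1 -> {} units", makespan(&tasks, 2, 1)); // 7
}
```

With a chunk size of 4 the first thread takes 10 units while the second idles, and with a chunk size of 1 both threads finish at 7 units, matching the figures stated in the documentation.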

impl From<usize> for ChunkSize {
    /// Converts the nonnegative integer to a chunk size as follows:
    ///
    /// * 0 is converted to `ChunkSize::Auto`,
    /// * `n` is converted to `ChunkSize::Exact(n)` where `n > 0`.
    fn from(value: usize) -> Self {
        match NonZeroUsize::new(value) {
            Some(n) => Self::Exact(n), // positive values define an exact chunk size
            None => Self::Auto,        // 0 falls back to the automatic setting
        }
    }
}
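As a quick illustration of the conversion above, the following self-contained sketch mirrors the `ChunkSize` enum and its `From<usize>` impl from this file (the `Min` variant is omitted for brevity) so that it compiles on its own:

```rust
use std::num::NonZeroUsize;

// Minimal mirror of the `ChunkSize` enum defined in this file,
// reduced to the variants exercised by the `From<usize>` conversion.
#[derive(Debug, PartialEq, Eq)]
enum ChunkSize {
    Auto,
    Exact(NonZeroUsize),
}

impl From<usize> for ChunkSize {
    fn from(value: usize) -> Self {
        match NonZeroUsize::new(value) {
            Some(n) => Self::Exact(n), // positive values define an exact chunk size
            None => Self::Auto,        // 0 falls back to the automatic setting
        }
    }
}

fn main() {
    // 0 maps to the automatic setting
    assert_eq!(ChunkSize::from(0), ChunkSize::Auto);
    // any positive value maps to an exact chunk size
    assert_eq!(
        ChunkSize::from(64),
        ChunkSize::Exact(NonZeroUsize::new(64).unwrap())
    );
    println!("ok");
}
```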