A fast data-parallelism library that uses atomics to share data across threads and pull unique values from collections such as Vec, Range, or HashMap. It follows a ‘push’ approach, with a scheduling algorithm based on work redistribution to speed up results. Benchmarks show performance within 5-10% of the popular Rayon library.
Add this crate using:
cargo add parallel_task
Code sample below:

use parallel_task::prelude::*;

let job = || {
    std::thread::sleep(std::time::Duration::from_nanos(10));
    (0..1_000).sum::<i32>()
};
let vec_jobs = (0..100_000).map(|_| job).collect::<Vec<_>>();

// Parallel iterator that borrows vec_jobs
let r1 = vec_jobs.parallel_iter().map(|func| func()).collect::<Vec<i32>>();

// Into parallel iterator that consumes vec_jobs
let r2 = vec_jobs.into_parallel_iter().map(|func| func()).collect::<Vec<i32>>();

// Print all values using for_each; the closure runs concurrently over a Vec or HashMap
r2.parallel_iter().for_each(|val| print!("{} ", *val));
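The sample above pulls jobs from a Vec. The overview also lists Range and HashMap as supported sources; below is a minimal sketch of the same pattern over a Range, assuming into_parallel_iter is implemented for ranges as the description suggests.

```rust
use parallel_task::prelude::*;

// Consume the range directly; each worker thread pulls unique values from it.
// Assumption: into_parallel_iter is provided for Range, per the crate overview.
let squares = (0u64..10_000)
    .into_parallel_iter()
    .map(|n| n * n)
    .collect::<Vec<u64>>();

// Only the count is asserted, since a parallel collect need not preserve input order.
assert_eq!(squares.len(), 10_000);
```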
Modules§
- accessors
- collector - Collector is a trait that can be implemented for collections and other types that implement the Extend trait; it is provided for Vec, HashMap, and similar types, so the end result can be collected into whichever form the type annotation requests (see the sketch after this list).
- errors
- for_each - ParallelForEach is implemented for AtomicIterator, and hence for types implementing the Fetch trait. It allows an FnMut closure to be run on each value of a collection that implements Fetch.
- iterators
- map - ParallelMap is a struct that captures the map object and the function needed to run the values within the AtomicIterator in parallel.
- prelude - Importing this module as parallel_task::prelude::* gives access to the desired functionality.
- push_workers
- task_queue - TaskQueue stores the AtomicIterator, allowing each enquiring thread to pop a unique value.
- utils
- worker_thread - WorkerThreads follow a push-based strategy: a WorkerController spawns the WorkerThreads and communicates with them over sync and async channels to send data for processing and to shut them down.
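As a sketch of the Collector point above, the snippet below collects parallel results into a HashMap. The (key, value) item shape is an assumption, mirroring how std's collect builds a HashMap; it is not spelled out in the crate docs.

```rust
use parallel_task::prelude::*;
use std::collections::HashMap;

let inputs: Vec<u64> = (0..1_000).collect();

// Map each input to a (key, value) pair and collect into a HashMap.
// Assumption: the Collector implementation for HashMap accepts (K, V) items,
// mirroring std::iter::Iterator::collect.
let squares = inputs
    .parallel_iter()
    .map(|n| (*n, *n * *n))
    .collect::<HashMap<u64, u64>>();

assert_eq!(squares.get(&3), Some(&9));
```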