#![feature(associated_type_defaults)]
#![feature(type_alias_impl_trait)]
#![feature(const_trait_impl)]
#![feature(let_chains)]
#![feature(stmt_expr_attributes)]
#![feature(extend_one)]
#![feature(impl_trait_in_assoc_type)]

//! # Deluge is (not) a Stream
//!
//! Deluge implements parallel and concurrent stream operations while driving the underlying futures concurrently.
//! This is in contrast to standard streams, which evaluate each future sequentially, leading to large delays in highly concurrent operations.
//!
//! ```toml
//! deluge = "0.1"
//! ```
//!
//! **This library is still experimental, use at your own risk**
//!
//! ### Available features
//!
//! By default the library does not build the parallel collectors and folds.
//! In order to enable them, please enable either the `tokio` or the `async-std` feature.
//!
//! ```toml
//! deluge = { version = "0.1", features = ["tokio"] }
//! ```
//!
//! or
//!
//! ```toml
//! deluge = { version = "0.1", features = ["async-std"] }
//! ```
//!
//! ### Design decisions
//!
//! This is an opinionated library that puts ease of use and external simplicity at the forefront.
//! Operations that apply to individual elements, like maps and filters, **do not** allocate.
//! They simply wrap each element in another future and do not control how these processed elements are evaluated.
//! It is the collector that controls the evaluation strategy.
//! At the moment two basic collectors are supplied: a concurrent one and a parallel one.
//!
//! The concurrent collector accepts an optional concurrency limit.
//! If it is specified, at most that many futures are evaluated at once.
//!
//! ```
//! use deluge::*;
//!
//! # futures::executor::block_on(async {
//! let result = [1, 2, 3, 4]
//!     .into_deluge()
//!     .map(|x| async move { x * 2 })
//!     .collect::<Vec<usize>>(None)
//!     .await;
//!
//! assert_eq!(vec![2, 4, 6, 8], result);
//! # });
//! ```
//!
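//! To cap how many futures are evaluated at once, pass a limit instead of `None`.
//! A small sketch reusing the same `collect` call as above, here with the
//! concurrency limit set to `Some(10)`:
//!
//! ```
//! use deluge::*;
//!
//! # futures::executor::block_on(async {
//! // At most 10 of the 100 mapped futures are in flight at any moment.
//! let result = (0..100)
//!     .into_deluge()
//!     .map(|x| async move { x + 1 })
//!     .collect::<Vec<usize>>(Some(10))
//!     .await;
//!
//! assert_eq!(100, result.len());
//! # });
//! ```
//!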
//! The parallel collector spawns a number of workers.
//! If the number of workers is not specified, it defaults to the number of CPUs.
//! If the concurrency limit is not specified, each worker defaults to evaluating `total_futures_to_evaluate / number_of_workers` futures at a time.
//! Note that you need to enable either the `tokio` or the `async-std` feature to use the parallel collectors.
//!
//! ```
//! use deluge::*;
//! # use std::time::Duration;
//!
//! # let rt = tokio::runtime::Runtime::new().unwrap();
//! # #[cfg(feature = "parallel")]
//! # rt.handle().block_on(async {
//! let result = (0..150)
//!    .into_deluge()
//!    .map(|idx| async move {
//!        tokio::time::sleep(Duration::from_millis(50)).await;
//!        idx
//!    })
//!    .collect_par::<Vec<usize>>(10, None)
//!    .await;
//!
//! assert_eq!(result.len(), 150);
//! # });
//! ```
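//!
//! A sketch of relying on the defaults described above; it assumes `collect_par`
//! also accepts `None` for the worker count, in which case the worker count and
//! per-worker concurrency fall back to their defaults:
//!
//! ```
//! use deluge::*;
//!
//! # let rt = tokio::runtime::Runtime::new().unwrap();
//! # #[cfg(feature = "parallel")]
//! # rt.handle().block_on(async {
//! // Workers default to the number of CPUs; each worker picks its own
//! // concurrency from total_futures_to_evaluate / number_of_workers.
//! let result = (0..20)
//!     .into_deluge()
//!     .map(|idx| async move { idx * 2 })
//!     .collect_par::<Vec<usize>>(None, None)
//!     .await;
//!
//! assert_eq!(result.len(), 20);
//! # });
//! ```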

mod deluge;
mod deluge_ext;
mod helpers;
mod into_deluge;
mod iter;
mod ops;

pub use self::deluge::*;
pub use deluge_ext::*;
pub use into_deluge::*;
pub use iter::*;