pawawwewism/lib.rs
//! A simple library providing modern concurrency primitives.
//!
//! The goal of this library is to explore the design space of easy-to-use higher-level concurrency
//! primitives that implement the principles of *structured concurrency*, and also allow bridging
//! thread-based concurrency and `async` concurrency (via primitives that feature both a blocking
//! and an `async` API).
//!
//! # Why structured concurrency?
//!
//! Similar to how `goto` performs unstructured control flow, mechanisms like Go's `go` statement,
//! or threads/tasks that detach from the code that spawned them, perform *unstructured
//! concurrency*.
//! As it turns out, both `goto` and unstructured concurrency share very similar issues, which have
//! been detailed at length in [this blog post][notes-on-structured-concurrency].
//!
//! Modern languages generally eschew `goto` due to its many issues, instead relying on structured
//! control flow primitives like `if`, loops, `break`, `continue`, and `try ... catch`. However,
//! they do *not* generally eschew unstructured concurrency, presumably because that problem is
//! usually considered out of scope, or because structured concurrency is unfamiliar to most
//! programmers.
//!
//! While Rust does provide some tools to make concurrency easier, it still does *not*
//! provide any tools for structured concurrency (beyond [`thread::scope`]).
//! The wider Rust ecosystem is, unfortunately, no exception here: both `async_std` and `tokio`
//! allow cheaply spawning *unstructured* async tasks, which will simply continue running in the
//! background when the corresponding handle is dropped.
//!
//! [notes-on-structured-concurrency]: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/
//! [`thread::scope`]: std::thread::scope
//!
//! # What is structured concurrency (in Rust)?
//!
//! In Rust specifically, my interpretation of *structured concurrency* means that:
//!
//! - Every background operation (whether thread or async task) is represented by an owned handle.
//! - No background operation outlasts its handle. If the handle is dropped, the operation is either
//!   canceled or joined (if it is a thread).
//!
//! This prevents resource leaks by joining or aborting the background operation when the value
//! representing it is dropped. We no longer have to remember to shut down background threads when
//! some component is shut down. The drawback: the automatic join can potentially hang forever if
//! the thread doesn't react to the shutdown request, but this is a lot less subtle than never
//! stopping a background thread.
//!
//! This also brings some immediate code clarity benefits: now every background computation is
//! *required* to be represented as an in-scope value (frequently a field of a `struct`), constantly
//! reminding us of its presence.
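//!
//! As a point of reference, [`thread::scope`] already provides the "no operation outlasts its
//! handle" property: every thread spawned inside the scope is joined before the scope returns, so
//! no thread can outlive the data it borrows. A minimal sketch using only the standard library,
//! nothing from this crate:
//!
//! ```
//! use std::thread;
//!
//! let mut results = vec![0u32; 2];
//! // All threads spawned on `s` are joined before `scope` returns, so they
//! // cannot outlive the `results` vector they mutably borrow.
//! thread::scope(|s| {
//!     let (a, b) = results.split_at_mut(1);
//!     s.spawn(move || a[0] = 1 + 1);
//!     s.spawn(move || b[0] = 2 + 2);
//! });
//! assert_eq!(results, [2, 4]);
//! ```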
//!
//! Due to Rust's ownership system, the background operations started by a program form a tree, just
//! like any other owned Rust value, and so have a unique owner (which can be sidestepped via
//! [`Arc`] and [`Rc`], but that's beside the point). This property actually allows us to add one
//! bonus feature with relative ease:
//!
//! - Panics occurring in background operations will be *propagated* to their owner, without causing
//!   *additional* panics.
//!
//! This is not normally the case when using [`std::thread`] or async tasks in most popular async
//! runtimes: those typically surface panics happening in the background thread or task as a
//! [`Result`].
//! If structured concurrency is implemented properly, the only way to catch a panic is to do so
//! explicitly with [`catch_unwind`].
//! All panics happening inside concurrent operations are handled in a reasonable way automatically,
//! and will (if the unwinding runtime is used) eventually unwind and reach the program's entry
//! point, just like panics that happen in sequential code. No additional panics will be raised, and
//! the pieces of code that are blamed for the panic, and that participate in its propagation, are
//! always predictable: it's the background code raising the original panic, and the code owning or
//! interfacing with the background operation, respectively.
//!
//! Of course, structured concurrency is not magic. As soon as code stops being sequential, there is
//! the possibility that *multiple* panics happen at once. Since panics are only propagated when
//! "interacting" with the background operation in some way (e.g. by dropping it, joining it, sending
//! it more work to do, or checking its status), panics will generally be forwarded to the owning
//! thread *opportunistically*, when they are noticed, rather than in the order they happened (and
//! regardless, Rust provides no reliable mechanism for determining this order).
//! This is why programs utilizing structured concurrency should generally avoid causing any
//! knock-on panics, like those caused by unwrapping a poisoned mutex, since they might be
//! propagated before the panic representing the actual root cause.
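//!
//! For contrast, this is the unstructured behavior described above: with plain [`std::thread`], a
//! background panic is not propagated automatically, but merely surfaced as a [`Result`] that the
//! caller has to remember to check:
//!
//! ```
//! use std::thread;
//!
//! let handle = thread::spawn(|| panic!("boom"));
//! // Nothing propagates the panic on its own; joining returns an `Err`
//! // carrying the panic payload, which is easy to forget to inspect.
//! let result = handle.join();
//! assert!(result.is_err());
//! ```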
//!
//! [`catch_unwind`]: std::panic::catch_unwind
//! [`Arc`]: std::sync::Arc
//! [`Rc`]: std::rc::Rc
//!
//! # Overview
//!
//! This library features several thread-based structured concurrency primitives:
//!
//! - [`background`][background()], a simple function that runs a closure to completion on a
//!   [`Background`] thread.
//! - [`Worker`]/[`WorkerSet`], a background thread (or set of threads) that processes packets of
//!   work fed to it from the owning thread.
//! - [`reader::Reader`], a background thread that reads from a cancelable stream and processes or
//!   forwards the results.
//!
//! Additionally, this library features communication primitives that can be used to exchange data
//! between background and foreground threads or tasks:
//!
//! - [`Promise`] and [`PromiseHandle`] provide a mechanism for communicating the result of
//!   computations (like those performed by a [`Worker`]).
//! - [`reactive::Value`] is a value that can be changed from one place, and notifies every
//!   associated [`reactive::Reader`] of that change, so that consumers can react to those changes.
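//!
//! To give a feel for the [`Promise`]/[`PromiseHandle`] style of communication without relying on
//! this crate's exact API, here is a rough stand-in built from a standard library channel (the
//! names below are illustrative only, not this crate's types):
//!
//! ```
//! use std::sync::mpsc;
//! use std::thread;
//!
//! // A one-shot channel plays the role of a promise: the sending half is
//! // fulfilled by the background thread, and the receiving half is the
//! // handle the owning thread blocks on.
//! let (promise, handle) = mpsc::sync_channel::<u64>(1);
//! let result = thread::scope(|s| {
//!     s.spawn(move || promise.send(6 * 7).unwrap());
//!     // The owning thread blocks until the result is available.
//!     handle.recv().unwrap()
//! });
//! assert_eq!(result, 42);
//! ```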

#[cfg(test)]
mod test;

mod background;
mod drop;
mod promise;
mod worker;

pub mod isochronous;
pub mod reactive;
pub mod reader;
pub mod sync;

pub use background::*;
pub use promise::*;
pub use worker::*;
118pub use worker::*;