//! Oneshot spsc (single producer, single consumer) channel. This means each channel instance
//! can only transport a single message. That constraint has a few nice outcomes. One is that
//! the implementation can be very efficient, utilizing the knowledge that there will
//! only be one message. But more importantly, it allows the API to be expressed in such
//! a way that certain edge cases, which you don't want to care about when only sending a
//! single message on a channel, do not exist. For example: the sender can't be copied
//! or cloned, and the send method takes ownership of, and consumes, the sender.
//! So you are guaranteed, at the type level, that there can only be one message sent.
//!
//! The sender's send method is non-blocking, and potentially lock- and wait-free.
//! See documentation on [Sender::send] for situations where it might not be fully wait-free.
//! The receiver supports both lock- and wait-free `try_recv` as well as indefinite and
//! time-limited thread blocking receive operations. The receiver also implements `IntoFuture`
//! and supports asynchronously awaiting the message.
//!
//! # Examples
//!
//! This example sets up a background worker that processes requests coming in on a standard
//! mpsc channel and replies on a oneshot channel provided with each request. The worker can
//! be interacted with from both sync and async contexts, since the oneshot receiver
//! supports both blocking and async receive operations.
//!
//! ```rust
//! # #[cfg(not(feature = "loom"))] {
//! use std::sync::mpsc;
//! use std::thread;
//! use std::time::Duration;
//!
//! type Request = String;
//!
//! // Starts a background thread performing some computation on requests sent to it.
//! // Delivers the response back over a oneshot channel.
//! fn spawn_processing_thread() -> mpsc::Sender<(Request, oneshot::Sender<usize>)> {
//!     let (request_sender, request_receiver) = mpsc::channel::<(Request, oneshot::Sender<usize>)>();
//!     thread::spawn(move || {
//!         for (request_data, response_sender) in request_receiver.iter() {
//!             let compute_operation = || request_data.len();
//!             let _ = response_sender.send(compute_operation()); // <- Send on the oneshot channel
//!         }
//!     });
//!     request_sender
//! }
//!
//! let processor = spawn_processing_thread();
//!
//! // If compiled with `std`, the library can receive messages with a timeout on regular threads
//! #[cfg(feature = "std")] {
//!     let (response_sender, response_receiver) = oneshot::channel();
//!     let request = Request::from("data from sync thread");
//!
//!     processor.send((request, response_sender)).expect("Processor down");
//!     match response_receiver.recv_timeout(Duration::from_secs(1)) { // <- Receive on the oneshot channel
//!         Ok(result) => println!("Processor returned {}", result),
//!         Err(oneshot::RecvTimeoutError::Timeout) => eprintln!("Processor was too slow"),
//!         Err(oneshot::RecvTimeoutError::Disconnected) => panic!("Processor exited"),
//!     }
//! }
//!
//! // If compiled with the `async` feature, the `Receiver` can be awaited in an async context
//! #[cfg(feature = "async")] {
//!     tokio::runtime::Runtime::new()
//!         .unwrap()
//!         .block_on(async move {
//!             let (response_sender, response_receiver) = oneshot::channel();
//!             let request = Request::from("data from async task");
//!
//!             processor.send((request, response_sender)).expect("Processor down");
//!             match response_receiver.await { // <- Receive on the oneshot channel asynchronously
//!                 Ok(result) => println!("Processor returned {}", result),
//!                 Err(_e) => panic!("Processor exited"),
//!             }
//!         });
//! }
//! # }
//! ```
//!
//! # Send has happens-before relationship with receive
//!
//! All the various ways the `Receiver` can obtain the message out of the channel are synchronized
//! with the `Sender`'s `send` method. This means any operations and memory modifications done in
//! the sender thread before the call to `Sender::send` are guaranteed to happen before any code
//! that runs after the message has been received in the receiver thread.
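//!
//! For example, a store performed with relaxed ordering before `send` is still guaranteed to be
//! observed after the receive. The sketch below demonstrates the same release/acquire pairing
//! with a `std::sync::mpsc` channel standing in for the oneshot channel, so the snippet has no
//! dependencies; with this crate, only the constructor and error types would differ:
//!
//! ```rust
//! use std::sync::atomic::{AtomicUsize, Ordering};
//! use std::sync::{mpsc, Arc};
//! use std::thread;
//!
//! // Stand-in for a oneshot channel: a std channel used for exactly one message.
//! let (sender, receiver) = mpsc::sync_channel::<()>(1);
//! let side_data = Arc::new(AtomicUsize::new(0));
//!
//! let writer = Arc::clone(&side_data);
//! let handle = thread::spawn(move || {
//!     // Relaxed suffices here: the channel send provides the release barrier
//!     // that orders this store before the receive on the other side.
//!     writer.store(42, Ordering::Relaxed);
//!     sender.send(()).unwrap();
//! });
//!
//! receiver.recv().unwrap();
//! // The happens-before edge guarantees this observes 42.
//! assert_eq!(side_data.load(Ordering::Relaxed), 42);
//! handle.join().unwrap();
//! ```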
//!
//! # Sync vs async
//!
//! The main motivation for writing this library was that there was no channel implementation
//! (known to me) allowing you to seamlessly send messages between a normal thread and an async
//! task, or the other way around. If message passing is how you are communicating, it should of
//! course work smoothly between the sync and async parts of the program!
//!
//! This library achieves that by having a fast and cheap send operation that can
//! be used in both regular threads and async tasks. The receiver has thread blocking
//! receive methods for synchronous usage, and implements `IntoFuture` so it can be awaited
//! in an asynchronous task. The implementation is completely executor/runtime agnostic: it should
//! be possible to use this library with any executor, or even to pass messages between tasks
//! running in different executors.
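//!
//! To illustrate why this works with any runtime: a `Future` only needs something to `poll` it
//! and a `Waker` to request a re-poll. The sketch below awaits a toy oneshot-like future with a
//! minimal hand-rolled executor built only on `std`. The `ToyReceiver` and `block_on` names are
//! illustrative and not part of this crate's API:
//!
//! ```rust
//! use std::future::Future;
//! use std::pin::{pin, Pin};
//! use std::sync::{Arc, Mutex};
//! use std::task::{Context, Poll, Wake, Waker};
//! use std::thread;
//!
//! // Toy stand-in for `oneshot::Receiver`: resolves once a value has been stored.
//! struct ToyReceiver<T>(Arc<Mutex<(Option<T>, Option<Waker>)>>);
//!
//! impl<T> Future for ToyReceiver<T> {
//!     type Output = T;
//!     fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
//!         let mut slot = self.0.lock().unwrap();
//!         match slot.0.take() {
//!             Some(value) => Poll::Ready(value),
//!             None => {
//!                 // Register interest so the sending side can wake us up.
//!                 slot.1 = Some(cx.waker().clone());
//!                 Poll::Pending
//!             }
//!         }
//!     }
//! }
//!
//! // Minimal executor: park the current thread until the waker unparks it.
//! struct Unpark(thread::Thread);
//! impl Wake for Unpark {
//!     fn wake(self: Arc<Self>) {
//!         self.0.unpark();
//!     }
//! }
//!
//! fn block_on<F: Future>(fut: F) -> F::Output {
//!     let mut fut = pin!(fut);
//!     let waker = Waker::from(Arc::new(Unpark(thread::current())));
//!     let mut cx = Context::from_waker(&waker);
//!     loop {
//!         match fut.as_mut().poll(&mut cx) {
//!             Poll::Ready(output) => break output,
//!             Poll::Pending => thread::park(),
//!         }
//!     }
//! }
//!
//! let slot = Arc::new(Mutex::new((None, None)));
//! let receiver = ToyReceiver(Arc::clone(&slot));
//!
//! thread::spawn(move || {
//!     let mut slot = slot.lock().unwrap();
//!     slot.0 = Some("hello from a plain thread");
//!     if let Some(waker) = slot.1.take() {
//!         waker.wake();
//!     }
//! });
//!
//! assert_eq!(block_on(receiver), "hello from a plain thread");
//! ```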
//!
// # Implementation description
//
// When a channel is created via the `channel` function, it creates a single heap allocation
// containing:
// * A one byte atomic integer that represents the current channel state,
// * Uninitialized memory to fit the message,
// * Uninitialized memory to fit the waker that can wake the receiving task or thread up.
//
// The size of the waker depends on which features are activated; it ranges from 0 to 24 bytes[1].
// So with all features enabled, each channel allocates 25 bytes plus the size of the
// message, plus any padding needed to get correct memory alignment.
//
// The Sender and Receiver only hold a raw pointer to the heap channel object. The last endpoint
// to be consumed or dropped is responsible for freeing the heap memory. The first endpoint to
// be consumed or dropped signals via the state that it is gone, and the second one sees this and
// frees the memory.
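//
// As a rough sketch, the layout described above corresponds to something like the following
// (illustrative only; these are not the actual field names or type definitions):
//
//     struct Channel<T> {
//         state: AtomicU8,                   // one byte channel state
//         message: MaybeUninit<T>,           // written once by the sender
//         waker: MaybeUninit<ReceiverWaker>, // 0 to 24 bytes depending on features
//     }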
//
// ## Footnotes
//
// [1]: Mind that the waker only takes zero bytes when all features are disabled, making it
// impossible to *wait* for the message. `try_recv` is the only available method in this scenario.
// Enables this nightly only feature for the documentation build on docs.rs.
// To test this locally, build the docs with:
// `RUSTDOCFLAGS="--cfg docsrs" cargo +nightly doc --all-features`
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
extern crate alloc;
use core::ptr::NonNull;
use channel::Channel;
pub use sender::Sender;
pub use receiver::AsyncReceiver;
pub use receiver::Receiver;
use alloc::boxed::Box;
// Wildcard imports are not nice. But since multiple errors have various conditional compilation,
// this is easier than doing three different imports.
pub use errors::*;
/// Creates a new oneshot channel and returns the two endpoints, [`Sender`] and [`Receiver`].

/// Ergonomic shorthand for creating a channel and immediately converting the [`Receiver`] into
/// a future.
///
/// This can be useful when you need to pass the receiver to a function that expects a
/// type implementing [`Future`] directly. Using this function is not necessary when
/// you are going to use `.await` on the receiver, as that will automatically call
/// [`IntoFuture::into_future`] in the background.

/// Deallocates the channel's heap allocation (created in `oneshot::channel()`).
///
/// # Safety
///
/// * `channel` must be a valid pointer to a `Channel<T>` originally coming from
///   `oneshot::channel()`.
/// * The thread calling this function must have properly synchronized with any other thread
///   that has used the channel (either the `Sender` or `Receiver`). This means having an
///   acquire memory barrier on, or after, the loading of `channel.state` that determined that
///   the other thread is fully done using the channel and we are responsible for freeing it.
pub unsafe fn dealloc<T>(channel: NonNull<Channel<T>>) {
    drop(Box::from_raw(channel.as_ptr()))
}