# Crossfire

High-performance lockless spsc/mpsc/mpmc channels.

It supports async contexts, and communication between async and blocking contexts. The low level is based on crossbeam-queue. For the concept, please refer to the wiki.
## Version history

- V1.0: Released in 2022.12 and used in production.
- V2.0: Released in 2025.6. Refactored the codebase and API, removing generic type parameters from ChannelShared, which makes the API easier to work with.
- v2.1: Released in 2025.9. Removed the dependency on crossbeam-channel and implemented a modified version of crossbeam-queue, bringing performance improvements in both async and blocking contexts.
## Performance

Being a lockless channel, crossfire outperforms other async-capable channels. Thanks to a lighter notification mechanism, some blocking-context cases are even faster than the original crossbeam-channel. More benchmark data is posted on the wiki.
Also, being a lockless channel, the algorithm relies on spinning and yielding. Spinning performs well on multi-core systems, but is not friendly to single-core systems (such as virtual machines). We therefore provide a function, detect_backoff_cfg(), to detect the running platform. Calling it in the initialization section of your code can yield a 2x performance boost on a VPS.
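A minimal sketch of where the detection call fits, assuming detect_backoff_cfg() is exported at the crate root as the text above suggests:

```rust
extern crate crossfire;
use crossfire::*;

fn main() {
    // Call once during startup: crossfire probes the platform and picks
    // a backoff strategy (spinning suits multi-core hosts; yielding is
    // friendlier to single-core guests such as a VPS).
    detect_backoff_cfg();

    // ... create channels and run the rest of the program ...
}
```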
The benchmark is written with the criterion framework. You can run it by:

```shell
cargo bench --bench crossfire
```
## APIs

### Modules and functions

There are 3 modules: spsc, mpsc, mpmc, providing functions to allocate different types of channels.

The SP or SC interfaces are only for non-concurrent operation. They are more memory-efficient than the MP or MC implementations, and sometimes slightly faster.
The return types in these 3 modules are different:

- mpmc::bounded_blocking(): (tx blocking, rx blocking)
- mpmc::bounded_async(): (tx async, rx async)
- mpmc::bounded_tx_async_rx_blocking(): (tx async, rx blocking)
- mpmc::bounded_tx_blocking_rx_async(): (tx blocking, rx async)
- mpmc::unbounded_blocking(): (tx non-blocking, rx blocking)
- mpmc::unbounded_async(): (tx non-blocking, rx async)

NOTE: For bounded channels, zero capacity is not supported yet (it is temporarily rewritten as capacity 1).
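As an illustration of the mixed-context constructors, here is a hedged sketch of a blocking producer thread feeding an async consumer, assuming the signatures listed above and a tokio runtime:

```rust
extern crate crossfire;
extern crate tokio;
use crossfire::*;

fn main() {
    // One side blocking, the other async: bounded_tx_blocking_rx_async
    // hands back a blocking sender and an async receiver.
    let (tx, rx) = mpmc::bounded_tx_blocking_rx_async::<u64>(16);
    let producer = std::thread::spawn(move || {
        for i in 0..4u64 {
            tx.send(i).unwrap(); // plain blocking send from a thread
        }
        // tx is dropped here, which closes the channel
    });
    tokio::runtime::Runtime::new().unwrap().block_on(async move {
        // recv() resolves Err once the channel is closed and drained
        while let Ok(i) = rx.recv().await {
            println!("got {}", i);
        }
    });
    producer.join().unwrap();
}
```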
### Types

For the SP / SC versions, AsyncTx, AsyncRx, Tx, and Rx are not Clone and not Sync. Although they can be moved to another thread, they cannot be used for send/recv while wrapped in an Arc. (Refer to the compile_fail examples in the type documentation.)

The benefit of using the SP / SC API is completely lockless waker registration, which yields a performance boost.
The sender/receiver can use the From trait to convert between blocking and async context counterparts.
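A sketch of such a conversion, assuming the From/Into implementations connect each async endpoint to its blocking counterpart (type names as used in the spsc section above):

```rust
extern crate crossfire;
use crossfire::*;

fn main() {
    let (tx, rx) = spsc::bounded_async::<u64>(8);
    // Assumed conversion via From/Into: async receiver -> blocking receiver.
    let blocking_rx: Rx<u64> = rx.into();
    drop(tx); // drop the sender to close the channel
    // recv() on a closed, empty channel returns an error
    assert!(blocking_rx.recv().is_err());
}
```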
### Error types

Error types are the same as in crossbeam-channel: TrySendError, SendError, TryRecvError, RecvError.
## Feature flags

- tokio: Enables the send_timeout and recv_timeout APIs in async contexts, based on tokio. Also detects the right backoff strategy for the type of runtime (multi-threaded / current-thread).
- async_std: Enables the send_timeout and recv_timeout APIs in async contexts, based on async-std.
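For reference, enabling one of these flags in Cargo.toml would look like the following fragment (standard cargo feature syntax; version number taken from the Usage section below):

```toml
[dependencies]
crossfire = { version = "2.1", features = ["tokio"] }
```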
## Async compatibility

Tested on tokio-1.x and async-std-1.x, crossfire is runtime-agnostic.
The following scenarios are considered:

- AsyncTx::send() and AsyncRx::recv() operations are cancellation-safe in an async context. You can safely use the select! macro and the timeout() function in tokio/futures in combination with recv(). On cancellation, SendFuture and RecvFuture trigger drop(), which cleans up the waker state, ensuring there is no memory leak or deadlock. However, you cannot know the true result of a SendFuture, since it is dropped upon cancellation; we therefore suggest using AsyncTx::send_timeout() instead.
- When the "tokio" or "async_std" feature is enabled, we also provide two additional functions:
  - AsyncTx::send_timeout(), which returns the message that failed to be sent in SendTimeoutError. We guarantee the result is atomic.
  - AsyncRx::recv_timeout(), whose result we also guarantee to be atomic.
- Communication works between blocking and async contexts, and between different async runtime instances.
- The async waker footprint: in multi-producer and multi-consumer scenarios, there is a small memory overhead for passing along a Weak reference to the waker. Because we aim to be lockless, when sending/receiving futures are canceled (e.g. by tokio::time::timeout()), cleanup happens immediately if the try-lock succeeds; otherwise it relies on lazy cleanup. (This is not an issue in practice, because weak wakers are consumed by actual message send and recv.) In an idle-select scenario, such as waiting for a close notification, the waker is reused as much as possible when poll() returns Pending.
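The timeout variants described above can be sketched as follows, assuming the "tokio" feature is enabled and recv_timeout() takes a std Duration:

```rust
extern crate crossfire;
extern crate tokio;
use crossfire::*;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::bounded_async::<u64>(8);
    tx.send(42).await.unwrap();
    // recv_timeout is atomic: it yields either a message or a
    // timeout/disconnect error, never a half-consumed state.
    match rx.recv_timeout(Duration::from_millis(50)).await {
        Ok(v) => println!("got {}", v),
        Err(e) => println!("no message: {:?}", e),
    }
}
```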
## Usage

Cargo.toml:

```toml
[dependencies]
crossfire = "2.1"
```
### Example with tokio::select!
extern crate crossfire;
use crossfire::*;
extern crate tokio;
use tokio::time::*;
async