# gil

Get In Line - A collection of high-performance, lock-free concurrent queues with sync and async support.

> ⚠️ WIP: things WILL change a lot without warning, even in minor updates, until v1. Use at your own risk.
## Usage

### Single-Producer Single-Consumer (SPSC)
The most optimized queue for 1-to-1 thread communication.
```rust
use std::num::NonZeroUsize;
use std::thread;

// Import path assumed; see the crate docs for the exact location.
use gil::spsc::channel;

const COUNT: usize = 100_000;

fn main() {
    let (mut tx, mut rx) = channel(NonZeroUsize::new(1024).unwrap());

    // The consumer runs on its own thread.
    let handle = thread::spawn(move || {
        for _ in 0..COUNT {
            let _ = rx.recv();
        }
    });

    // The producer stays on the main thread.
    for i in 0..COUNT {
        tx.send(i);
    }

    handle.join().unwrap();
}
```
### Multi-Producer Single-Consumer (MPSC)
Useful when multiple threads need to send data to a single worker thread.
```rust
use std::num::NonZeroUsize;
use std::thread;

// Import path assumed; see the crate docs for the exact location.
use gil::mpsc::channel;

fn main() {
    let (tx, mut rx) = channel(NonZeroUsize::new(1024).unwrap());

    let mut handles = vec![];

    // Each producer thread sends its id to the shared worker.
    for i in 0..10 {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            tx.send(i);
        }));
    }

    // The single consumer drains one message per producer.
    for _ in 0..10 {
        let _value = rx.recv();
    }

    for handle in handles {
        handle.join().unwrap();
    }
}
```
### Multi-Producer Multi-Consumer (MPMC)
The most flexible queue, allowing multiple senders and multiple receivers.
```rust
use std::num::NonZeroUsize;
use std::thread;

// Import path assumed; see the crate docs for the exact location.
use gil::mpmc::channel;

fn main() {
    let (tx, rx) = channel(NonZeroUsize::new(1024).unwrap());

    let mut handles = vec![];

    // Spawn multiple producers
    for i in 0..5 {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            tx.send(i);
        }));
    }

    // Spawn multiple consumers
    for _ in 0..5 {
        let rx = rx.clone();
        handles.push(thread::spawn(move || {
            let _value = rx.recv();
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }
}
```
### Sharded MPMC/MPSC
For high-throughput scenarios where multiple threads access the queue concurrently, sharded versions can significantly reduce contention. These use multiple SPSC queues internally and distribute load across them.
Note: The sharded channels use a "bounded" number of shards. This means the number of concurrent senders (and receivers for MPMC) is limited to the number of shards. Cloning a sender/receiver will fail (return `None`) if all shards are occupied.
```rust
use std::num::NonZeroUsize;

// Import path assumed; see the crate docs for the exact location.
use gil::sharded_mpmc::channel;

fn main() {
    let max_shards = NonZeroUsize::new(4).unwrap();
    let capacity_per_shard = NonZeroUsize::new(1024).unwrap();

    let (tx, rx) = channel(max_shards, capacity_per_shard);

    // Clone the sender to use a different shard.
    // Note: This returns Option<Sender>, returning None if all shards are busy.
    if let Some(tx2) = tx.try_clone() {
        tx2.send(42);
    }

    let value = rx.recv();
    assert_eq!(value, 42);
}
```
### Async Example

To use async features, enable the `async` feature in your `Cargo.toml`:

```toml
[dependencies]
gil = { version = "0.3", features = ["async"] }
```
```rust
use std::num::NonZeroUsize;

// Import path assumed; see the crate docs for the exact location.
use gil::spsc::channel;

const COUNT: usize = 100_000;

// tokio is used here for illustration; any async runtime should work.
#[tokio::main]
async fn main() {
    let (mut tx, mut rx) = channel(NonZeroUsize::new(1024).unwrap());

    let handle = tokio::spawn(async move {
        for _ in 0..COUNT {
            // Method names assumed; check the crate docs for the async API.
            rx.recv_async().await;
        }
    });

    for i in 0..COUNT {
        tx.send_async(i).await;
    }

    handle.await.unwrap();
}
```
### Non-blocking Operations
```rust
use std::num::NonZeroUsize;

// Import path assumed; see the crate docs for the exact location.
use gil::spsc::channel;

fn main() {
    let (mut tx, mut rx) = channel(NonZeroUsize::new(16).unwrap());

    // Try to send without blocking
    match tx.try_send(42) {
        Ok(()) => println!("sent"),
        Err(_) => println!("queue is full"), // error type assumed
    }

    // Try to receive without blocking
    match rx.try_recv() {
        Some(value) => println!("received {value}"),
        None => println!("queue is empty"), // return type assumed
    }
}
```
### Batch Operations (Zero-copy)
For maximum performance, you can directly access the internal buffer. This allows you to write or read multiple items at once, bypassing the per-item synchronization overhead.
```rust
use std::num::NonZeroUsize;
use std::ptr;

// Import path and the exact batch-API method names below are assumed;
// see the crate docs for the real signatures.
use gil::spsc::channel;

fn main() {
    let (mut tx, mut rx) = channel::<u32>(NonZeroUsize::new(1024).unwrap());

    // Zero-copy write: copy several items straight into the ring buffer.
    let data = [1u32, 2, 3, 4];
    let slice = tx.write_buffer();
    let count = data.len().min(slice.len());
    unsafe {
        ptr::copy_nonoverlapping(data.as_ptr(), slice.as_mut_ptr().cast(), count);
        // Advance the producer tail to publish the items (method name assumed).
        tx.commit(count);
    }

    // Zero-copy read: inspect the readable region directly.
    let slice = rx.read_buffer();
    let len = slice.len();
    // Advance the consumer head to mark items as processed (method name assumed).
    unsafe {
        rx.advance(len);
    }
}
```
## Performance
The queue achieves high throughput through several optimizations:
- Cache-line alignment: Head and tail pointers are on separate cache lines to prevent false sharing
- Local caching: Each side caches the other side's position to reduce atomic operations
- Batch operations: Amortize atomic operation costs across multiple items
- Zero-copy API: Direct buffer access eliminates memory copies
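The first two points can be illustrated with a minimal sketch. This is not gil's actual internal layout; `CachePadded` and `RingState` here are hypothetical stand-ins showing how head and tail indices can be forced onto separate cache lines so the producer and consumer never contend on the same line:

```rust
use std::mem::{align_of, offset_of};
use std::sync::atomic::AtomicUsize;

// A 64-byte aligned wrapper, similar in spirit to crossbeam's CachePadded.
#[repr(align(64))]
struct CachePadded<T>(T);

// Hypothetical layout of a ring buffer's shared indices: the consumer-owned
// head and the producer-owned tail sit on separate cache lines, so a store
// to one does not invalidate the line the other thread is reading.
#[allow(dead_code)]
struct RingState {
    head: CachePadded<AtomicUsize>, // written by the consumer
    tail: CachePadded<AtomicUsize>, // written by the producer
}

fn main() {
    // The wrapper forces each index onto its own 64-byte cache line.
    assert_eq!(align_of::<CachePadded<AtomicUsize>>(), 64);
    let head = offset_of!(RingState, head);
    let tail = offset_of!(RingState, tail);
    assert_ne!(head / 64, tail / 64, "head and tail share a cache line");
    println!("head at byte {head}, tail at byte {tail}");
}
```

The "local caching" optimization builds on this layout: each side keeps a plain (non-atomic) copy of the other side's index and only reloads the atomic when the cached value suggests the queue is full or empty.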
### Large Objects

For large objects, consider using `Box<T>` to avoid the cost of copying the entire object into the queue. This way, only the pointer (8 bytes on 64-bit targets) is copied:
```rust
use std::num::NonZeroUsize;

// Import path assumed; see the crate docs for the exact location.
use gil::spsc::channel;

fn main() {
    let (mut tx, mut rx) = channel::<Box<[u8; 1024]>>(NonZeroUsize::new(16).unwrap());

    // Only the Box pointer is copied, not the 1024 bytes
    tx.send(Box::new([0u8; 1024]));
    let value = rx.recv();
}
```
## Safety
The code has been verified using:
## License
MIT License - see LICENSE file for details.
## Acknowledgements
- SPSC was inspired by the `ProducerConsumerQueue` in the Facebook Folly library.
- MPMC/MPSC are based on the bounded queue algorithm developed by Dmitry Vyukov.
For more details on third-party licenses, see the LICENSE-THIRD-PARTY file.
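For readers curious about the acknowledged algorithm: the core of Vyukov's bounded MPMC queue is a per-slot sequence number that acts as a ticket, telling producers and consumers whose turn a slot is. The sketch below is a simplified single-file illustration of that idea, not gil's actual implementation:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};

// Each slot carries a sequence number so no slot is ever written
// and read concurrently: seq == pos means "free for ticket pos",
// seq == pos + 1 means "holds the value published for ticket pos".
struct Slot<T> {
    seq: AtomicUsize,
    val: UnsafeCell<Option<T>>,
}

struct Queue<T> {
    buf: Box<[Slot<T>]>,
    mask: usize,
    enqueue_pos: AtomicUsize,
    dequeue_pos: AtomicUsize,
}

unsafe impl<T: Send> Send for Queue<T> {}
unsafe impl<T: Send> Sync for Queue<T> {}

impl<T> Queue<T> {
    fn new(capacity: usize) -> Self {
        assert!(capacity.is_power_of_two());
        let buf: Box<[Slot<T>]> = (0..capacity)
            .map(|i| Slot { seq: AtomicUsize::new(i), val: UnsafeCell::new(None) })
            .collect();
        Queue {
            buf,
            mask: capacity - 1,
            enqueue_pos: AtomicUsize::new(0),
            dequeue_pos: AtomicUsize::new(0),
        }
    }

    fn try_push(&self, value: T) -> Result<(), T> {
        let mut pos = self.enqueue_pos.load(Ordering::Relaxed);
        loop {
            let slot = &self.buf[pos & self.mask];
            let seq = slot.seq.load(Ordering::Acquire);
            if seq == pos {
                // The slot is free and it is this ticket's turn: claim it.
                match self.enqueue_pos.compare_exchange_weak(
                    pos, pos + 1, Ordering::Relaxed, Ordering::Relaxed,
                ) {
                    Ok(_) => {
                        unsafe { *slot.val.get() = Some(value) };
                        // Publish: consumers wait for seq == pos + 1.
                        slot.seq.store(pos + 1, Ordering::Release);
                        return Ok(());
                    }
                    Err(current) => pos = current,
                }
            } else if seq < pos {
                return Err(value); // the queue is full
            } else {
                pos = self.enqueue_pos.load(Ordering::Relaxed);
            }
        }
    }

    fn try_pop(&self) -> Option<T> {
        let mut pos = self.dequeue_pos.load(Ordering::Relaxed);
        loop {
            let slot = &self.buf[pos & self.mask];
            let seq = slot.seq.load(Ordering::Acquire);
            if seq == pos + 1 {
                // The slot holds a value published for this ticket: claim it.
                match self.dequeue_pos.compare_exchange_weak(
                    pos, pos + 1, Ordering::Relaxed, Ordering::Relaxed,
                ) {
                    Ok(_) => {
                        let value = unsafe { (*slot.val.get()).take() };
                        // Mark the slot free for the producer one lap ahead.
                        slot.seq.store(pos + self.mask + 1, Ordering::Release);
                        return value;
                    }
                    Err(current) => pos = current,
                }
            } else if seq < pos + 1 {
                return None; // the queue is empty
            } else {
                pos = self.dequeue_pos.load(Ordering::Relaxed);
            }
        }
    }
}

fn main() {
    let q = Queue::new(4);
    for i in 0..4 {
        assert!(q.try_push(i).is_ok());
    }
    assert!(q.try_push(99).is_err()); // full
    for i in 0..4 {
        assert_eq!(q.try_pop(), Some(i));
    }
    assert_eq!(q.try_pop(), None); // empty
    println!("ok");
}
```

The sequence numbers are what make the queue bounded and lock-free: claiming a ticket is a single CAS, and the Release store on `seq` is the only publication step, so producers and consumers never block each other on a mutex.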