# gil

Get In Line - a fast single-producer single-consumer (SPSC) queue with sync and async support.

Inspired by Facebook folly's `ProducerConsumerQueue`.

⚠️ WIP: things WILL change a lot without warning, even in minor updates, until v1. Use at your own risk.
## Features
- Lock-free: Uses atomic operations for synchronization
- Single-producer, single-consumer: Optimized for this specific use case
- Thread-safe: Producer and consumer can run on different threads
- Blocking and non-blocking operations: Both sync and async APIs
- Batch operations: Send and receive multiple items efficiently
- Zero-copy operations: Direct buffer access for maximum performance
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
gil = "0.3"
```
## Usage
The producer and consumer can run on different threads, but there can only be one producer and only one consumer. The producer (or consumer) can be moved between threads, but cannot be shared between threads. The queue has a fixed capacity that must be specified when creating the channel.
The consumer blocks until there is a value in the queue; use `Receiver::try_recv` for a non-blocking version. Similarly, the producer blocks until there is a free slot in the queue; use `Sender::try_send` for a non-blocking version.
### Basic Example (Synchronous)

```rust
use std::num::NonZeroUsize;
use std::thread;

use gil::channel;

const COUNT: usize = 100_000;

// The capacity is fixed at creation time and must be non-zero.
let (tx, rx) = channel(NonZeroUsize::new(1024).unwrap());

// The consumer can be moved to another thread, but not shared.
let handle = thread::spawn(move || {
    for _ in 0..COUNT {
        // Blocks until a value is available.
        let _value = rx.recv();
    }
});

// Blocks whenever the queue is full.
for i in 0..COUNT {
    tx.send(i);
}

handle.join().unwrap();
```
### Async Example

To use async features, enable the `async` feature in your `Cargo.toml`:

```toml
[dependencies]
gil = { version = "0.3", features = ["async"] }
```

```rust
use std::num::NonZeroUsize;

use gil::channel;

async fn run() {
    let (tx, rx) = channel(NonZeroUsize::new(1024).unwrap());

    // With the `async` feature enabled, sending and receiving can be
    // awaited instead of blocking the thread. (Exact method names may
    // differ; see the crate docs.)
    tx.send(42).await;
    let _value = rx.recv().await;
}
```
### Non-blocking Operations

```rust
use std::num::NonZeroUsize;

use gil::channel;

let (tx, rx) = channel(NonZeroUsize::new(16).unwrap());

// Try to send without blocking; fails when the queue is full.
match tx.try_send(1) {
    Ok(()) => println!("sent"),
    Err(_) => println!("queue is full"),
}

// Try to receive without blocking; yields nothing when the queue is empty.
match rx.try_recv() {
    Some(value) => println!("received {value}"),
    None => println!("queue is empty"),
}
```
### Batch Operations (Zero-copy)

For maximum performance, you can directly access the internal buffer. This allows you to write or read multiple items at once, bypassing the per-item synchronization overhead.

```rust
use std::num::NonZeroUsize;
use std::ptr;

use gil::channel;

// NOTE: apart from `write_buffer`, the method names below are
// illustrative reconstructions; check the crate docs for exact names.
let (tx, rx) = channel(NonZeroUsize::new(1024).unwrap());

// Zero-copy write: copy as many items as fit into the free region.
let data = [1usize, 2, 3, 4];
let slice = tx.write_buffer();
let count = data.len().min(slice.len());
unsafe {
    ptr::copy_nonoverlapping(data.as_ptr(), slice.as_mut_ptr(), count);
    // Advance the producer tail to publish the written items.
    tx.advance(count);
}

// Zero-copy read: process items in place, straight from the buffer.
let slice = rx.read_buffer();
let len = slice.len();
// ... use `slice[..len]` ...

// Advance the consumer head to mark items as processed.
unsafe {
    rx.advance(len);
}
```
## Performance
The queue achieves high throughput through several optimizations:
- Cache-line alignment: Head and tail pointers are on separate cache lines to prevent false sharing
- Local caching: Each side caches the other side's position to reduce atomic operations
- Batch operations: Amortize atomic operation costs across multiple items
- Zero-copy API: Direct buffer access eliminates memory copies
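The false-sharing point above can be illustrated with a minimal sketch (an illustration of the technique, not this crate's actual layout; the local position caching is omitted): each index is padded to its own 64-byte cache line, so the producer's writes to `tail` never invalidate the line the consumer reads `head` from.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Each counter occupies its own 64-byte cache line, so producer
// writes to `tail` never invalidate the line holding `head`.
#[repr(align(64))]
struct CachePadded(AtomicUsize);

struct Ring {
    head: CachePadded, // advanced by the consumer
    tail: CachePadded, // advanced by the producer
}

fn main() {
    let ring = Ring {
        head: CachePadded(AtomicUsize::new(0)),
        tail: CachePadded(AtomicUsize::new(0)),
    };

    // Producer publishes three items with a Release store ...
    ring.tail.0.store(3, Ordering::Release);

    // ... and the consumer pairs it with Acquire loads to count them.
    let occupied =
        ring.tail.0.load(Ordering::Acquire) - ring.head.0.load(Ordering::Acquire);
    assert_eq!(occupied, 3);

    // The alignment guarantees the two counters never share a cache line.
    assert_eq!(std::mem::align_of::<CachePadded>(), 64);
    println!("occupied = {occupied}");
}
```

Without the padding, both counters would typically land on the same cache line, and every `send` would force the consumer's core to re-fetch the line on its next `recv`.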
## Type Constraints

The current implementation is optimized for `usize`.
## Safety
The code has been verified using:
## License
MIT License - see LICENSE file for details.