Overview
A low-latency, single-producer many-consumer (SPMC) ring buffer that works with shared memory. bcast natively supports variable message sizes (&[u8]) and offers two read styles:
- lazy message access via read_batch()/receive_next()
- raw bulk copy via read_bulk() for lower reader-side overhead
Supported Platforms
The crate has been developed and tested exclusively on x86_64-linux. It should also work (though this is by
no means guaranteed) on CPU architectures with weaker memory-ordering semantics. If you want a particular platform
to be properly supported, feel free to contribute and submit a pull request.
Example
Create a Writer by attaching it to the provided byte slice. It does not matter where the underlying bytes are stored:
they can live on the heap, on the stack, or in a file that the process has memory mapped.
let bytes: &mut [u8] = ...;
let writer = RingBuffer::new(bytes).into_writer();
Writing takes place via the claim operation, which returns a Claim object. Through it we have access to the underlying
buffer into which we can write our variable-length message.
let mut claim = writer.claim(msg.len());
claim.get_buffer_mut().copy_from_slice(&msg);
claim.commit();
The commit operation is optional: the new producer position (advanced as a result of our write) is made
visible to other processes and threads the moment the Claim is dropped.

The Reader is constructed in a similar way, by attaching it to some 'shared' memory.
let bytes: &[u8] = ...;
let reader = RingBuffer::new(bytes).into_reader();
The Reader is batch aware (it knows how far behind the producer it is) and provides an iterator over pending messages.
if let Some(batch) = reader.read_batch() {
    for msg in batch { /* handle each Message */ }
}
If you want to copy a bounded raw window out of the ring first and parse it off-ring, use the bulk API:
if let Some(bulk) = reader.read_bulk() {
    // parse the copied window off-ring
}
When the mmap feature is enabled, MappedWriter and MappedReader provide file-backed wrappers over the same API for IPC-style usage.
Backpressure (and the lack of it)
bcast is designed to let the producer process and publish messages at full line rate and deliver the same latency regardless
of the number of consumers (in practice there is a tiny penalty for each additional consumer). With the Message
API, a consumer can detect when it has been overrun by the producer and take appropriate action (such as crashing
the application).
match msg.read(...) {
    Ok(_) => { /* payload copied successfully */ }
    Err(_) => { /* overrun by the producer */ }
}
The message API is intentionally lazy: payload bytes are only copied when Message::read(...) is called, and a Message
can be cloned if you need to defer consumption. If you prefer eager copying with a single overrun check at the end of
the copy, use read_bulk() instead.