xchannel
xchannel is a tiny, zero-copy, mmap-backed IPC channel with automatic region and file rolling. It lets a single writer append messages to a persistent stream that multiple readers can replay from the beginning (LateJoin) or tail in real time (Live)—without a broker or background service.
Features
- Shared‑memory / IPC logs without a broker.
- Constant-time tailing (readers only track a byte offset).
- Works on Linux and macOS, on both typical 4 KiB pages and 16 KiB page systems (e.g. Apple Silicon).
- Zero‑copy access: messages are written directly into a memory‑mapped region and read back without additional copying.
- Rolling regions and files: large channels are segmented into fixed‑size regions. When a region fills up the writer rolls over to the next region; when the end of a file is reached a new file with an incremented sequence number is created automatically.
- Two reader modes:
  - LateJoin – start from the beginning of the earliest existing channel file.
  - Live – join the channel at the current write position and only observe new messages.
- MTU enforcement: optional maximum message size to defend against unbounded memory usage or corrupted input.
- Atomic state management: the shared write position and message count are tracked with atomic variables using proper memory ordering.
- Very simple, low maintenance: the system relies on a minimal set of concepts. There are no background services, no complex synchronization mechanisms, and no external dependencies.
- No back pressure: readers cannot slow down the writer; retention is governed by the rolling policy rather than by consumer speed.
- Clear non-aliasing contract (single writer): readers never observe bytes while they’re being written. This is a language-agnostic safety property (C/C++/Rust/…) and fits Rust’s &mut/& guarantees naturally.
Minimum example

The snippet below lost its generic parameters and call arguments during extraction; what follows is a hedged reconstruction from the builder and method names that survive elsewhere in this document (exact signatures and argument values are assumptions):

    use xchannel::{WriterBuilder, ReaderBuilder};

    let region = page_size(); // ensure page-aligned regions
    let mut w = WriterBuilder::new("demo.xch")
        .region_size(region)
        .file_roll_size(region * 1024) // assumed value
        .build()?;

    // write a message
    let payload = b"hello world";
    if let Some(slot) = w.try_reserve(payload.len()) {
        slot.copy_from_slice(payload); // hypothetical slot API
    }

    // read it back
    let mut r = ReaderBuilder::new("demo.xch")
        .late_join()
        .batch_limit(64) // assumed value
        .build()?;
    if let Some(msg) = r.try_read() {
        assert_eq!(msg, payload);
    }
Batch read example

This snippet also lost its arguments in extraction; a hedged reconstruction (exact API may differ):

    use xchannel::ReaderBuilder;

    let mut r = ReaderBuilder::new("demo.xch").late_join().build()?;
    if let Some(batch) = r.try_read_batch() {
        for msg in batch {
            // process each zero-copy message slice
        }
    }
xchannel
mmap-backed IPC channels for Rust
Zero-copy • Append-only • Multi-reader
Why another channel?
Typical Rust channels:
- std::sync::mpsc
- tokio::mpsc
- crossbeam
These are great, but they:
- work only inside one process
- keep messages only in memory
- cannot replay history
- cannot easily be tailed from another process
Motivation
Some systems need:
- cross-process communication
- persistent message streams
- very low overhead
- simple debugging
Typical examples:
- trading systems
- logging pipelines
- market data distribution
- real-time analytics
Core idea
Instead of a queue:
Use an append-only log stored in a memory-mapped file
Writer ---> mmap file ---> Readers
Properties:
- writer appends messages
- readers scan sequentially
- messages remain persistent
Readers can start:
- from beginning
- from current tail
Architecture overview
Writer
│
│ append messages
▼
┌─────────────────────────────────────────────┐
│ mmap file │
│ │
│ msg1 msg2 msg3 msg4 msg5 msg6 msg7 msg8 ... │
│ │
└─────────────────────────────────────────────┘
▲ ▲
│ │
Reader A Reader B
(LateJoin) (Live)
Key property:
Readers never block the writer.
File structure
Channel files are divided into regions.
File
┌─────────────────────────────┐
│ Region 0 │
│ ChannelHeader + messages │
├─────────────────────────────┤
│ Region 1 │
│ messages │
├─────────────────────────────┤
│ Region 2 │
│ messages │
└─────────────────────────────┘
Regions provide:
- predictable memory layout
- simple boundary handling
- easier file rolling
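Locating a write position within this layout is simple arithmetic; a minimal sketch with hypothetical helper names (not the xchannel API):

```rust
/// Map an absolute byte position in a channel file to its region
/// index and the offset within that region. Illustrative helpers.
fn region_of(pos: usize, region_size: usize) -> usize {
    pos / region_size
}

fn offset_in_region(pos: usize, region_size: usize) -> usize {
    pos % region_size
}

fn main() {
    let region_size = 4096; // one 4 KiB page per region
    // position 10_000 lands in region 2, 1808 bytes in
    println!("region {}", region_of(10_000, region_size));
    println!("offset {}", offset_in_region(10_000, region_size));
}
```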
Record layout
Each record looks like:
[ MessageHeader ][ payload ][ padding ]
Header fields include:
- committed flag
- message type
- payload length
- timestamp
Readers check:
header.committed
Record memory layout
┌──────────────────────────────┐
│ MessageHeader │
│ │
│ committed │
│ message_type │
│ payload_length │
│ timestamp_ns │
└──────────────────────────────┘
│
▼
┌──────────────────────────────┐
│ Payload bytes │
│ user message │
└──────────────────────────────┘
│
▼
┌──────────────────────────────┐
│ Padding (optional) │
└──────────────────────────────┘
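The layout above can be sketched as a `#[repr(C)]` struct; the field names follow the diagram, but the field widths are assumptions rather than xchannel's actual on-disk format:

```rust
use std::mem::{align_of, size_of};

/// Illustrative record header matching the fields in the diagram.
/// Field widths are assumptions, not xchannel's real layout.
#[repr(C)]
struct MessageHeader {
    committed: u8,       // publish flag, written last
    message_type: u8,    // user-defined tag
    payload_length: u32, // bytes of payload that follow
    timestamp_ns: u64,   // capture time in nanoseconds
}

fn main() {
    // With repr(C) the layout is deterministic:
    // 1 + 1 + (2 padding) + 4 + 8 = 16 bytes, 8-byte aligned.
    println!("size = {}", size_of::<MessageHeader>());
    println!("align = {}", align_of::<MessageHeader>());
}
```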
Writer workflow
reserve → write payload → publish
Steps:
- reserve memory
- write payload
- prepare next header
- commit current message
The commit flag is written last.
Publish protocol
Writer publishes a message in this order:
1. write payload(i)
2. prepare header(i+1) // write-ahead header
3. commit header(i) = true (Release)
Meaning:
- the next header slot exists before publication
- the message becomes visible only after commit
Why prepare the next header first?
When a reader sees:
header(i).committed == true
then:
- payload(i) is fully written
- header(i+1) already exists
So the reader can continue scanning safely:
header(i) → payload(i) → header(i+1)
No global metadata required.
Writer pipeline visualization
header(i) ready
│
▼
write payload(i)
│
▼
prepare header(i+1)
│
▼
commit header(i)
Key property:
The commit flag is the only synchronization point.
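The pipeline can be modeled in miniature with a shared slot and an AtomicBool commit flag; `Slot` and `publish_and_read` are illustrative names, and an in-process buffer stands in for the mmap region:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;

// One record slot: commit flag plus a payload buffer standing in for
// the mapped region. Illustrative model, not the xchannel internals.
struct Slot {
    committed: AtomicBool,
    payload: UnsafeCell<[u8; 5]>,
}
// Sound here because the protocol writes the payload only before the
// Release store and reads it only after the Acquire load observes it.
unsafe impl Sync for Slot {}

fn publish_and_read() -> String {
    let slot = Arc::new(Slot {
        committed: AtomicBool::new(false),
        payload: UnsafeCell::new([0; 5]),
    });

    let w = Arc::clone(&slot);
    let writer = thread::spawn(move || {
        // 1. write payload(i)
        unsafe { (*w.payload.get()).copy_from_slice(b"hello") };
        // 2. prepare header(i+1) would happen here (write-ahead header)
        // 3. commit header(i): the Release store publishes the payload
        w.committed.store(true, Ordering::Release);
    });

    // Reader: the Acquire load pairs with the writer's Release store,
    // so once `committed` is true the payload bytes are fully visible.
    while !slot.committed.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    writer.join().unwrap();
    let bytes = unsafe { &*slot.payload.get() };
    String::from_utf8(bytes.to_vec()).unwrap()
}

fn main() {
    println!("{}", publish_and_read()); // prints "hello"
}
```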
Cache contention problem
Naive design:
writer updates global head pointer
readers poll the same pointer
Result:
CPU1 (writer) <----> CPU2 (reader)
cache invalidations
This causes unnecessary cache coherence traffic.
Commit flag solution
Each message has its own commit flag.
Readers check different cache lines as they scan.
msg1.header.committed
msg2.header.committed
msg3.header.committed
Benefits:
- minimal contention
- scalable readers
- avoids cache bouncing
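The contrast can be made concrete with a little arithmetic: with per-message flags, each flag lives at the start of its own record, so consecutive flags fall on distinct cache lines, unlike a single shared head pointer that every party hammers. A sketch assuming 64-byte cache lines and fixed 64-byte records (illustrative numbers, not xchannel's real layout):

```rust
/// Cache line (64 B assumed) holding the commit flag of message i,
/// assuming fixed-size records. Illustrative, not xchannel's layout.
fn flag_cache_line(msg_index: usize, record_size: usize) -> usize {
    (msg_index * record_size) / 64
}

fn main() {
    // A naive design polls one shared head pointer, so writer and
    // readers bounce the same line. With per-record flags, a scanning
    // reader touches a fresh line for each message:
    for i in 0..4 {
        println!("msg{} flag on cache line {}", i, flag_cache_line(i, 64));
    }
}
```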
Rust example: writer

The slide's code lost its arguments in extraction; a hedged, compact reconstruction (exact API may differ):

    use xchannel::WriterBuilder;

    let mut w = WriterBuilder::new("demo.xch").build()?;
    let payload = b"hello";
    if let Some(slot) = w.try_reserve(payload.len()) {
        slot.copy_from_slice(payload); // hypothetical slot API
    }

Rust example: reader

    use xchannel::ReaderBuilder;

    let mut r = ReaderBuilder::new("demo.xch").late_join().build()?;
    while let Some(msg) = r.try_read() {
        // msg is a zero-copy view into the mapped region
    }
Rust aliasing requirement
Rust enforces strict aliasing rules:
&mut T → exclusive access
&T → shared access
Simultaneous read/write would violate:
&mut [u8] vs &[u8]
This is especially important with mmap memory.
The aliasing challenge
Writer and readers operate on the same mapped memory.
Naively this could allow:
writer writing payload
reader reading same payload
This would break Rust’s non-aliasing contract.
xchannel solution
The algorithm guarantees:
Writer and readers never access the same memory region simultaneously
Except one field:
AtomicBool committed
Access separation
Writer accesses:
payload(i)
header(i)
Readers access only:
payload(j)
header(j)
Where
j < committed_index
Meaning the message is already published.
Publish ordering
Writer:
write payload
prepare next header
commit = true (Release)
Reader:
if committed.load(Acquire) {
read payload
}
Guarantees:
- no partial reads
- correct memory ordering
Why this satisfies Rust aliasing rules
After commit:
writer never touches payload again
Then readers access:
&[u8]
Timeline:
writer (&mut) → finished
reader (&) → begins
No overlapping access.
Rust’s non-aliasing guarantee is preserved.
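The same timeline can be shown in safe-Rust miniature: the exclusive &mut borrow ends before the shared borrow begins, which is exactly the shape the commit protocol enforces across processes:

```rust
// Writer-then-reader access in miniature: the &mut borrow ends
// before any shared & borrow begins, so accesses never overlap.
fn write_then_read() -> String {
    let mut buf = [0u8; 5];
    {
        let slot: &mut [u8] = &mut buf; // writer: exclusive access
        slot.copy_from_slice(b"hello");
    } // exclusive borrow ends here (the "commit" point)
    let view: &[u8] = &buf; // readers: shared, read-only access
    String::from_utf8(view.to_vec()).unwrap()
}

fn main() {
    println!("{}", write_then_read()); // prints "hello"
}
```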
The only shared memory location
Both sides access only:
AtomicBool committed
Safe because:
- atomic operations
- Acquire / Release ordering
- tiny memory footprint
Reader modes
LateJoin
start from beginning
Useful for:
- replay
- debugging
- analytics
Live
start from tail
Useful for:
- real-time consumers
- monitoring
- streaming pipelines
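The two modes reduce to a choice of starting offset; a hypothetical sketch (names are illustrative, not the xchannel API):

```rust
/// Illustrative reader modes and the start offset each implies.
enum ReaderMode {
    LateJoin, // replay from the earliest file
    Live,     // only observe new messages
}

fn start_offset(mode: ReaderMode, tail: u64) -> u64 {
    match mode {
        ReaderMode::LateJoin => 0,  // beginning of the stream
        ReaderMode::Live => tail,   // current write position
    }
}

fn main() {
    println!("{}", start_offset(ReaderMode::LateJoin, 4096));
    println!("{}", start_offset(ReaderMode::Live, 4096));
}
```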
Rolling files
Channels can run indefinitely.
Files roll when necessary:
demo.xch
demo.xch.1
demo.xch.2
Process:
- writer writes Roll marker
- creates next file
- readers follow automatically
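The naming pattern can be sketched as a small helper (illustrative, not the xchannel API):

```rust
/// File name for the n-th rolled segment, following the pattern
/// shown above (demo.xch, demo.xch.1, demo.xch.2, ...).
fn segment_name(base: &str, seq: u32) -> String {
    if seq == 0 {
        base.to_string()
    } else {
        format!("{}.{}", base, seq)
    }
}

fn main() {
    for seq in 0..3 {
        println!("{}", segment_name("demo.xch", seq));
    }
}
```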
Why mmap?
Benefits:
- zero-copy payload access
- OS page cache handles IO
- sequential reads are extremely fast
- minimal syscalls
Readers simply scan memory:
header → header → header
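This walk can be modeled on a flat byte buffer with a simplified [committed][len][payload] record (real headers carry more fields):

```rust
/// Scan a buffer of [committed: 1 byte][len: 1 byte][payload]
/// records, stopping at the first uncommitted header. A simplified
/// model of the reader's header-to-header walk.
fn scan(buf: &[u8]) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    let mut pos = 0;
    while pos + 2 <= buf.len() && buf[pos] == 1 {
        let len = buf[pos + 1] as usize;
        if pos + 2 + len > buf.len() {
            break; // truncated record
        }
        out.push(buf[pos + 2..pos + 2 + len].to_vec());
        pos += 2 + len;
    }
    out
}

fn main() {
    // two committed records ("hi", "yo"), then an uncommitted header
    let buf = [1, 2, b'h', b'i', 1, 2, b'y', b'o', 0, 0];
    for msg in scan(&buf) {
        println!("{}", String::from_utf8(msg).unwrap());
    }
}
```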
Why this design fits Rust well
The design aligns with Rust principles:
Ownership transfer:
writer owns payload → commit → reader observes
Concurrency:
atomic publication
Memory layout:
simple + predictable
Key design principles
xchannel relies on:
- append-only log
- commit-flag publication
- write-ahead headers
- sequential scanning
- strict memory ownership transfer
Result:
safe + zero-copy + low latency + scalable
When to use xchannel
Good fit:
- market data distribution
- logging pipelines
- inter-process messaging
- persistent event streams
- historical replay (simulation)
Summary
xchannel provides:
- mmap-based IPC channels
- zero-copy message access
- append-only persistent log
- minimal contention design
- Rust-safe memory access model