# rusted-ring
A high-performance, cache-optimized ring buffer library for Rust, designed for lock-free, zero-copy event processing with smart pointer semantics for slot-based memory management.
## Features
- Cache-line aligned ring buffers for optimal CPU cache performance
- Lock-free operations using atomic memory ordering
- T-shirt sized pools for different event categories (XS, S, M, L, XL)
- Zero-copy operations with `Pod`/`Zeroable` support
- Smart pointer semantics with `RingPtr<T>` for automatic memory management
- Reference counting for safe slot reuse across multiple consumers
- Mobile-optimized for ARM and x86 architectures
- Single-writer, multi-reader design inspired by LMAX Disruptor
## Core Architecture

### `RingPtr<T>` - Smart Pointer to Ring Buffer Slots

The key innovation is `RingPtr<T>`, which acts like `Arc<T>` but points to ring buffer memory instead of the heap:
```rust
use rusted_ring::{EventAllocator, RingPtr};

// Allocate an event in the ring buffer
// (allocator type and method signatures here are illustrative)
let allocator = EventAllocator::new();
let ring_ptr: RingPtr<MyEvent> = allocator.allocate_from_pool(&my_event)?;

// Clone for sharing (increments the ref count in slot metadata)
let shared_ptr = ring_ptr.clone();

// Access data (zero-copy deref straight into the ring buffer slot)
let event_data = &*ring_ptr;

// When all RingPtrs drop, the slot automatically becomes reusable
```
### Dual Memory Architecture

- Ring Buffer Slots: store the actual event data (`PooledEvent<SIZE>`)
- Slot Metadata: stores reference counts and generation info (`SlotMetadata`)
## Quick Start

### Basic Ring Buffer Operations
```rust
use rusted_ring::{PooledEvent, Reader, RingBuffer, Writer};
use std::sync::Arc;

// Create a ring buffer for 256-byte events with 1000-slot capacity
// (constructor shapes here are illustrative)
let ring = Arc::new(RingBuffer::<256, 1000>::new());

// Create writer and readers
let mut writer = Writer::new(Arc::clone(&ring));
let mut reader = Reader::new(Arc::clone(&ring));

// Write events (zero-initialized via bytemuck's Zeroable)
let event: PooledEvent<256> = bytemuck::Zeroable::zeroed();
writer.add(event);

// Read events
while let Some(event) = reader.next() {
    // handle event
}
```
### Smart Pointer Usage (Recommended)
```rust
use rusted_ring::EventAllocator;
use std::sync::LazyLock;

// Global allocator (typically one per application)
static ALLOCATOR: LazyLock<EventAllocator> = LazyLock::new(EventAllocator::new);

// Allocate an event from the appropriately sized pool
// (allocation API shown is illustrative)
let ring_ptr = ALLOCATOR.allocate_from_pool(&my_event)?;

// Share across threads/actors
let shared = ring_ptr.clone();
```
## Core Types

### `PooledEvent<SIZE>`

Fixed-size event structure that implements `Copy`, `Clone`, `Pod`, and `Zeroable` for zero-copy operations:
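The slot payload can be sketched roughly as follows; field names here are illustrative, and the crate's actual definition derives `Pod`/`Zeroable` via `bytemuck` rather than hand-rolling `zeroed`:

```rust
/// Sketch of a fixed-size, cache-line aligned event slot.
/// Field names are illustrative; the real type also derives
/// bytemuck's Pod and Zeroable for safe zero-copy casting.
#[repr(C, align(64))]
#[derive(Clone, Copy)]
pub struct PooledEvent<const SIZE: usize> {
    pub data: [u8; SIZE], // raw event bytes
    pub len: u32,         // number of valid bytes in `data`
    pub event_type: u32,  // application-defined discriminant
}

impl<const SIZE: usize> PooledEvent<SIZE> {
    /// All-zero event, mirroring Zeroable::zeroed().
    pub fn zeroed() -> Self {
        PooledEvent { data: [0u8; SIZE], len: 0, event_type: 0 }
    }
}
```

Because the struct is `Copy` and has a fixed layout, writing it into a slot is a plain memcpy with no pointer chasing.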
### EventPools
Manage multiple ring buffers by size category with automatic size estimation:
```rust
use rusted_ring::EventPools;

let pools = EventPools::new();

// Automatic size estimation
// (`PoolId` and exact signatures here are illustrative)
let size = EventPools::estimate_size(&payload);
let pool_id = PoolId::from_size(size);

// Access specific pools
pools.get_slot_data(pool_id, slot_index);
pools.inc_ref_count(pool_id, slot_index);
pools.dec_ref_count(pool_id, slot_index);
```
### T-Shirt Sizing

Pre-defined event sizes for optimal memory usage: XS = 64, S = 256, M = 1024, L = 4096, and XL = 16384 bytes.

```rust
// Automatic size selection (signatures illustrative)
let size = EventPools::estimate_size(&payload); // e.g. 300 bytes of payload
let pool_id = PoolId::from_size(size);          // -> the M (1024-byte) pool
```
## Reference Counting & Memory Management
Automatic slot lifecycle management with atomic reference counting:
```rust
// 1. Allocation
let ring_ptr = allocator.create_event(&payload)?; // ref_count = 1

// 2. Sharing
let ptr1 = ring_ptr.clone(); // ref_count = 2
let ptr2 = ring_ptr.clone(); // ref_count = 3

// 3. Distribution
send_to_consumer1(ptr1);
send_to_consumer2(ptr2);
drop(ring_ptr); // ref_count = 2

// 4. Automatic cleanup
// When consumers finish: ref_count = 1, then 0.
// The slot is automatically marked reusable with a new generation.
```
## Memory Ordering & Safety

Careful memory ordering ensures data visibility without locks:

- Writers: use `Release` ordering when updating cursors
- Readers: use `Acquire` ordering when reading cursor positions
- Reference counting: uses `AcqRel` ordering for atomicity
- Slot reuse: protected by generation numbers (ABA prevention)
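The writer/reader handshake above can be sketched with plain `std` atomics; this is a standalone illustration of the ordering rules, not the crate's internals:

```rust
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

/// Single-writer / single-reader publication via Release/Acquire.
/// The slot write becomes visible to the reader because the Release
/// store on the cursor synchronizes-with the reader's Acquire load.
fn publish_and_read() -> u64 {
    let slot = Arc::new(AtomicU64::new(0));
    let cursor = Arc::new(AtomicUsize::new(0));

    let (s, c) = (Arc::clone(&slot), Arc::clone(&cursor));
    let writer = thread::spawn(move || {
        s.store(42, Ordering::Relaxed); // write event data into the slot
        c.store(1, Ordering::Release);  // publish: advance the cursor
    });

    // Reader: spin until the cursor advances, then read the slot.
    while cursor.load(Ordering::Acquire) == 0 {
        std::hint::spin_loop();
    }
    let value = slot.load(Ordering::Relaxed); // guaranteed to observe 42
    writer.join().unwrap();
    value
}
```

The Relaxed slot access is safe only because it is ordered by the Acquire/Release pair on the cursor; that pairing is exactly what lets the ring avoid locks.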
## Cache Optimization
All structures are cache-line aligned (64 bytes) to prevent false sharing:
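A minimal illustration of the 64-byte alignment trick (type names here are illustrative, not the crate's actual layout):

```rust
use std::sync::atomic::AtomicUsize;

/// Force a value onto its own 64-byte cache line so concurrent
/// updates to neighboring fields never invalidate each other's
/// line (false sharing).
#[repr(align(64))]
pub struct CachePadded<T>(pub T);

/// Writer and reader cursors, one full cache line apart.
pub struct Cursors {
    pub write: CachePadded<AtomicUsize>,
    pub read: CachePadded<AtomicUsize>,
}
```

Without the padding, both cursors could share one line and every write-side update would stall the reader's cache, even though the two threads never touch the same field.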
## Performance Characteristics

- Allocation: ~10-50ns per event (pool allocation vs. ~100-500ns for malloc)
- Reference counting: ~1-3 CPU cycles per atomic increment/decrement
- Access latency: direct pointer dereference into ring buffer memory
- Memory usage: predictable ~1.2MB total across all pools (see Memory Requirements)
- Cache performance: optimized for temporal locality in slot access
## Use Cases
Perfect for:
- Event-driven architectures with actor-based processing
- Real-time systems requiring predictable latency and memory usage
- High-frequency event processing (trading, gaming, IoT)
- P2P systems needing efficient local buffering
- CRDT operations and collaborative editing systems
- Mobile applications with strict memory constraints
## Example: Multi-Actor Event Processing

A sketch of the moving parts; names and signatures beyond those shown elsewhere in this README are illustrative:

```rust
use rusted_ring::{EventAllocator, PooledEvent, RingPtr};
use std::sync::LazyLock;

// Global allocator
static ALLOCATOR: LazyLock<EventAllocator> = LazyLock::new(EventAllocator::new);

// Event wrapper for your application
pub struct AppEvent {
    ptr: RingPtr<PooledEvent<256>>,
}

// FFI boundary - incoming events are copied once into a pool slot
#[no_mangle]
pub extern "C" fn on_raw_event(data: *const u8, len: usize) {
    // allocate from the right pool, wrap in AppEvent, dispatch to actors
}

// Actor processing: each actor receives a cheap RingPtr clone.
// Batch processing with automatic cleanup: when the last actor drops
// its RingPtr, the slot is recycled with a new generation.
```
## Memory Requirements
Approximate memory usage for default pool configurations:
- XS Pool: 64 × 2000 = 128KB (+ 14KB metadata)
- S Pool: 256 × 1000 = 256KB (+ 7KB metadata)
- M Pool: 1024 × 300 = 307KB (+ 2KB metadata)
- L Pool: 4096 × 60 = 245KB (+ 420 bytes metadata)
- XL Pool: 16384 × 15 = 245KB (+ 105 bytes metadata)
Total: ~1.2MB of predictable, pre-allocated memory
## Compile-time Safety

Built-in guards prevent stack overflow from oversized ring buffers:

```rust
const MAX_STACK_BYTES: usize = 1_048_576; // 1MB limit

// Compile-time check (const expression is illustrative): the build
// fails if a buffer of SIZE * CAPACITY bytes would exceed the limit
const _STACK_GUARD: () = assert!(SIZE * CAPACITY <= MAX_STACK_BYTES);
```
## License
MPL-2.0