# nexus-slab

Manual memory management with SLUB-style slab allocation. 1-cycle churn (alloc + free) at 32B, sub-cycle free. 15x faster than `Box`. Placement new confirmed in assembly.

## Why
When you churn same-type objects (orders, connections, timers, nodes), the
global allocator is your bottleneck. malloc/free contend with every other
allocation in the process. A slab gives you:
- LIFO cache locality — free a slot, allocate again, get the same cache-hot memory back
- Zero fragmentation — every slot is the same size
- Allocator isolation — your hot path doesn't compete with logging or serialization
- Placement new — values written directly into slot memory, no copy
## Quick Start

```rust
use nexus_slab::Slab;

// SAFETY: caller accepts the manual memory management contract
// (constructor signature shown here is illustrative)
let slab: Slab<Order> = unsafe { Slab::new(1024) };

let mut ptr = slab.alloc(Order { price: 100.0 });
ptr.price = 105.0; // safe Deref/DerefMut
slab.free(ptr);    // consumes handle, can't use after
```
Construction is `unsafe` — you're opting into:

- **Free everything you allocate.** Unfreed slots leak.
- **Free from the same slab.** Cross-slab free corrupts the freelist.
- **Don't share across threads.** The slab is `!Send`/`!Sync`.

Everything after construction is safe. `Slot<T>` is move-only (`!Copy`, `!Clone`) — the compiler prevents double-free.
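The move-only handle is what turns double-free into a compile error rather than a runtime bug. A minimal sketch with toy types (not the crate's internals):

```rust
// Toy sketch: a handle without Copy/Clone is consumed by `free`,
// so a second free of the same handle cannot compile.
struct Slot(usize); // move-only: no #[derive(Copy, Clone)]

fn free(slot: Slot) -> usize {
    slot.0 // handle consumed by value here
}

fn main() {
    let ptr = Slot(7);
    let idx = free(ptr); // `ptr` moved into `free`
    assert_eq!(idx, 7);
    // free(ptr); // error[E0382]: use of moved value: `ptr`
}
```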
## API

### Typed Slabs

```rust
// NOTE: module paths and signatures below are illustrative reconstructions
use nexus_slab::bounded;   // fixed capacity
use nexus_slab::unbounded; // grows via chunks

// Bounded
let slab = unsafe { bounded::Slab::<Order>::new(1024) };
let ptr = slab.alloc(order);       // panics if full
let ptr = slab.try_alloc(order)?;  // returns Err(Full(..)) if full
let value = slab.take(ptr);        // extract without drop, free slot
slab.free(ptr);                    // drop value, free slot

// Unbounded
let slab = unsafe { unbounded::Slab::<Order>::new() };
let ptr = slab.alloc(order);       // never fails, grows if needed

// Placement new (two-phase): claim a vacant slot, then write in place
if let Some(vacant) = slab.claim() {
    vacant.write(order); // value constructed directly in slot memory
}
```
### Byte Slabs (Type-Erased)

Store heterogeneous types in one slab. Any `T` fitting in `N` bytes works.

```rust
// NOTE: module path and signature are illustrative reconstructions
use nexus_slab::bytes::Slab;

let slab: Slab<64> = unsafe { Slab::new(1024) }; // 64-byte slots

let p1 = slab.alloc(42u64);     // 8 bytes
let p2 = slab.alloc([0u8; 64]); // 64 bytes
let p3 = slab.alloc(1.5f32);    // different types, same slab

slab.free(p1);
slab.free(p2);
slab.free(p3);
```
### Slot

8-byte move-only handle. Safe `Deref`/`DerefMut` access.

```rust
let mut ptr = slab.alloc(Order { price: 100.0 });

// Safe access
ptr.price = 105.0;
let p: Pin<&mut Order> = ptr.pin_mut(); // stable address, no Unpin needed

// Raw pointer escape hatch
let raw = ptr.into_raw();                 // disarms debug leak detector
let ptr = unsafe { Slot::from_raw(raw) }; // reconstruct
slab.free(ptr);
```

Debug mode: dropping a `Slot` without calling `free()` or `take()` panics (leak detection). Release mode: silent leak.
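The leak detector can be pictured as a `Drop` impl gated on `debug_assertions`. A self-contained sketch with toy types (the crate's internals may differ):

```rust
// Toy leak detector: an armed handle panics if dropped without being
// explicitly disarmed (as free()/take()/into_raw() would do).
struct Slot {
    armed: bool,
}

impl Slot {
    fn free(mut self) {
        self.armed = false; // disarm; Drop then runs harmlessly
    }
}

impl Drop for Slot {
    fn drop(&mut self) {
        // Only panic in debug builds, and never while already unwinding.
        if cfg!(debug_assertions) && self.armed && !std::thread::panicking() {
            panic!("Slot leaked: call free() or take()");
        }
    }
}

fn main() {
    let ok = Slot { armed: true };
    ok.free(); // no panic

    let leaked = std::panic::catch_unwind(|| {
        let _leak = Slot { armed: true }; // dropped without free()
    });
    // Panics only in debug builds; release builds leak silently.
    assert_eq!(leaked.is_err(), cfg!(debug_assertions));
}
```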
### Rc Slabs (Shared Ownership)

When multiple owners need access to the same slot — e.g., a collection holds a node and the user holds a handle for cancellation.

```rust
use nexus_slab::rc::bounded::Slab;

// SAFETY: caller accepts manual memory management contract
let slab: Slab<Order> = unsafe { Slab::new(1024) };

// Alloc returns RcSlot<Order> with refcount 1
let h1 = slab.alloc(Order { price: 100.0 });

// Clone is safe — increments refcount
let h2 = h1.clone(); // refcount 2

// Access through borrow guards — one borrow at a time
h1.borrow_mut().price = 106.0;

// Every handle must be freed — slot deallocated on last free
slab.free(h1); // refcount 2 → 1, slot stays alive
slab.free(h2); // refcount 1 → 0, value dropped, slot freed
```
**Borrow rules:** More conservative than `RefCell`. Only one borrow (shared OR exclusive) is allowed at a time. Any attempt to borrow while another borrow is active panics. This is intentional — shared mutable state in a low-latency system should be tightly controlled.

```rust
let h1 = slab.alloc(Order { price: 100.0 });
let h2 = h1.clone();
let _g1 = h1.borrow();
let _g2 = h2.borrow(); // PANICS — already borrowed (even though both are shared)
```

**Pin support:** Slab memory never moves, so `Pin` is sound without `T: Unpin`:

```rust
let mut pinned = handle.pin_mut(); // Pin<RefMut<'_, T>>
```
## Performance
See BENCHMARKS.md for full methodology and numbers.
Pinned to core 0. Batched timing (64 ops per rdtsc pair), 10K samples.
### Churn — alloc + deref + free (cycles p50)
| Size | Slab | Box | Speedup |
|---|---|---|---|
| 32B | 1 | 15 | 15x |
| 64B | 2 | 17 | 8.5x |
| 128B | 4 | 23 | 5.8x |
| 256B | 7 | 29 | 4.1x |
| 512B | 14 | 44 | 3.1x |
| 1024B | 25 | 103 | 4.1x |
| 4096B | 78 | 249 | 3.2x |
### Free (cycles p50)
| Size | Slab | Box |
|---|---|---|
| 32B-4096B | 0-1 | 23-26 |
Slab free is sub-cycle regardless of size — a single freelist pointer write. Box free is constant at ~24 cycles (allocator bookkeeping).
**Assembly-verified placement new:** `alloc()` compiles to freelist pop + SIMD store directly into slot memory. No intermediate copy.
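In-place construction can be pictured as writing through a pointer into already-reserved slot memory. A generic sketch using `MaybeUninit` (not the crate's internals):

```rust
use std::mem::MaybeUninit;

// Sketch: reserve slot memory first, then write the value directly into it.
// The compiler lowers large writes like this to wide (SIMD) stores.
fn main() {
    let mut slot: MaybeUninit<[u64; 8]> = MaybeUninit::uninit(); // 64B slot
    let p = slot.as_mut_ptr();
    unsafe { p.write([42u64; 8]) }; // direct store into slot memory
    let value = unsafe { slot.assume_init() };
    assert_eq!(value[0], 42);
}
```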
## Bounded vs Unbounded

| | Bounded | Unbounded |
|---|---|---|
| Capacity | Fixed at init | Grows via chunks |
| Full behavior | `Err(Full)` | Always succeeds |
| Alloc latency | ~2 cycles | ~2 cycles (LIFO from current chunk) |
| Growth | Never | New chunk (~40 cycle p999) |
Use bounded when you know your capacity. Use unbounded when you need overflow headroom without crashing.
## Architecture

### Pointer Provenance
Freelist pointers are derived from `UnsafeCell` for correct write provenance under stacked borrows. This ensures that Miri accepts the freelist manipulation as valid — pointers written into vacant slots carry the correct provenance tag from the `UnsafeCell` wrapping the union, not from a stale read-only reference.
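The provenance rule can be shown in miniature: a `*mut` obtained via `UnsafeCell::get` carries write provenance, whereas a pointer cast from a shared reference does not, and Miri flags writes through it. A generic sketch (not the crate's code):

```rust
use std::cell::UnsafeCell;

// Sketch: the freelist link lives behind an UnsafeCell so that raw
// writes into a vacant slot keep valid provenance under stacked borrows.
struct Cell {
    link: UnsafeCell<usize>,
}

fn main() {
    let slot = Cell { link: UnsafeCell::new(0) };

    // Pointer derived through UnsafeCell: write provenance intact.
    let p: *mut usize = slot.link.get();
    unsafe { p.write(7) }; // Miri-clean freelist-style write

    assert_eq!(unsafe { *slot.link.get() }, 7);
}
```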
### SlotCell (SLUB-style union)

```rust
// Field names illustrative; the real layout lives in the crate source.
union SlotCell<T> {
    value: ManuallyDrop<T>,      // occupied: the live value
    next_free: *mut SlotCell<T>, // vacant: intrusive freelist link
}
```

No tag, no metadata. Writing a value overwrites the freelist pointer. The `Slot` handle is the proof of occupancy.
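The whole allocation scheme fits in a few lines. A self-contained toy model (indices instead of pointers, names illustrative) showing the freelist pop on alloc, the single-write free, and the LIFO reuse that keeps memory cache-hot:

```rust
use std::mem::ManuallyDrop;

// Toy model of the SLUB-style slab: vacant slots store the index of the
// next free slot; writing a value overwrites that link. No tag, no
// per-slot metadata.
union SlotCell<T> {
    value: ManuallyDrop<T>,
    next_free: usize, // index of next vacant slot; usize::MAX = end of list
}

struct Slab<T> {
    cells: Vec<SlotCell<T>>,
    head: usize, // freelist head
}

impl<T> Slab<T> {
    fn new(cap: usize) -> Self {
        // Thread every slot onto the freelist at init.
        let cells = (0..cap)
            .map(|i| SlotCell { next_free: if i + 1 < cap { i + 1 } else { usize::MAX } })
            .collect();
        Slab { cells, head: 0 }
    }

    fn alloc(&mut self, v: T) -> usize {
        let idx = self.head;
        assert!(idx != usize::MAX, "slab full");
        self.head = unsafe { self.cells[idx].next_free }; // freelist pop
        self.cells[idx].value = ManuallyDrop::new(v);     // placement write
        idx
    }

    fn free(&mut self, idx: usize) -> T {
        let v = unsafe { ManuallyDrop::take(&mut self.cells[idx].value) };
        self.cells[idx].next_free = self.head; // free: one link write
        self.head = idx;
        v
    }
}

fn main() {
    let mut slab: Slab<u64> = Slab::new(4);
    let a = slab.alloc(10);
    slab.free(a);
    let b = slab.alloc(20); // LIFO: reuses the slot just freed, cache-hot
    assert_eq!(a, b);
    assert_eq!(slab.free(b), 20);
}
```

The real crate stores raw pointers rather than indices and hands back a move-only `Slot` handle instead of a bare `usize`, but the freelist discipline is the same.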
### Unbounded Builder

```rust
use nexus_slab::unbounded::Builder;

// SAFETY: caller guarantees slab contract
// (builder options not shown; see crate docs)
let slab = unsafe { Builder::new().build::<Order>() };
```
## Features

| Feature | Default | Description |
|---|---|---|
| `std` | yes | Enables `alloc` + `thread::panicking()` for debug leak detection |
| `alloc` | with `std` | `Vec`-backed storage. Required for slab operation. |
| `rc` | no | Reference-counted handles (`rc::bounded::Slab`, `rc::unbounded::Slab`) with borrow guards |

`no_std` with `alloc` is supported for embedded systems.
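A possible dependency declaration for a `no_std` target, assuming the usual default-features pattern (version number illustrative):

```toml
[dependencies]
nexus-slab = { version = "0.1", default-features = false, features = ["alloc"] }
```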
## License
MIT OR Apache-2.0