# nexus-collections

High-performance, slab-backed collections for latency-critical systems.
## Why This Crate?

Node-based data structures (linked lists, heaps, skip lists) offer operations that contiguous structures can't — O(1) unlink/re-link, stable handles to interior elements, and movement between collections without copying. The usual trade-off is heap fragmentation and allocator overhead on every node allocation.
This crate eliminates that trade-off by using `nexus-slab` — a SLUB-style slab allocator — as dedicated backing storage for all nodes. Nodes live in contiguous, type-homogeneous slabs rather than scattered across the global heap, giving you:
- Global allocator isolation — your hot path doesn't compete with logging, serialization, or background tasks for allocator resources
- LIFO cache locality — recently freed nodes are reused first, staying hot in L1
- Zero fragmentation — every slot is the same size, freed slots are immediately reusable
- Stable handles — `RcSlot`-based references that survive unlink, re-link, and movement between collections
- Bounded — fixed capacity, zero allocation after init, returns `Full` at capacity
- Unbounded — grows via chunks without copying
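The LIFO-reuse and zero-fragmentation properties can be illustrated with a toy slab: fixed-size slots plus a free-list stack, where the most recently freed index is handed out next. This is a minimal sketch of the idea, not the `nexus-slab` implementation; all names here (`Slab`, `insert`, `remove`) are illustrative.

```rust
// Toy fixed-capacity slab: same-size slots plus a LIFO free list.
// Freed slots are reused most-recently-first (so the reused slot is
// likely still cache-hot), and because every slot is the same size
// there is no fragmentation — a freed slot is immediately reusable.
struct Slab<T> {
    slots: Vec<Option<T>>,
    free: Vec<usize>, // stack of free slot indices (LIFO)
}

impl<T> Slab<T> {
    fn with_capacity(cap: usize) -> Self {
        Self {
            slots: (0..cap).map(|_| None).collect(),
            free: (0..cap).rev().collect(), // pop order: 0, 1, 2, ...
        }
    }

    // Returns the slot index, or gives the value back when full
    // (the bounded "returns Full at capacity" behavior).
    fn insert(&mut self, value: T) -> Result<usize, T> {
        match self.free.pop() {
            Some(idx) => {
                self.slots[idx] = Some(value);
                Ok(idx)
            }
            None => Err(value),
        }
    }

    fn remove(&mut self, idx: usize) -> Option<T> {
        let value = self.slots[idx].take()?;
        self.free.push(idx); // LIFO: this index is handed out next
        Some(value)
    }
}
```

Note that removal never shifts other slots, which is what makes stable handles to interior elements possible in the first place.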
## Collections

### List — Doubly-Linked List

O(1) push/pop/unlink anywhere. `RcSlot` handles enable O(1) access by identity. Elements can move between lists without deallocation.
```rust
// `Allocator` and `List` are generated by `list_allocator!` (see
// Allocator Macros below); capacity and values are illustrative.
Allocator::builder().capacity(1024).build().unwrap();

let mut list = List::new();
let handle = list.try_push_back(1).unwrap();

// Access via handle
assert_eq!(list.get(&handle), Some(&1));

// O(1) unlink and re-link
list.unlink(&handle);
list.link_back(&handle);
```
### Heap — Pairing Heap

O(1) push, O(log n) pop, O(1) peek. `RcSlot` handles enable O(log n) removal of arbitrary elements by handle.
```rust
// `Allocator` and `Heap` are generated by `heap_allocator!` (see
// Allocator Macros below); capacity and values are illustrative.
Allocator::builder().capacity(1024).build().unwrap();

let mut heap = Heap::new();
let handle = heap.try_push(1).unwrap();

// O(1) peek
assert_eq!(heap.peek(), Some(&1));

// O(log n) unlink by handle
heap.unlink(&handle);
```
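The complexity figures above follow from the pairing-heap structure itself: push is a single meld of two roots, while pop rebuilds the root from its children with a pairwise meld, which is O(log n) amortized. A minimal boxed-node sketch of that algorithm (not this crate's slab-backed, handle-aware implementation):

```rust
// Minimal min-ordered pairing heap. `meld` makes the larger root a
// child of the smaller one in O(1); `pop` melds the orphaned children
// in pairs, then folds the pairs together.
struct Node {
    key: u64,
    children: Vec<Node>,
}

fn meld(mut a: Node, mut b: Node) -> Node {
    if a.key <= b.key {
        a.children.push(b);
        a
    } else {
        b.children.push(a);
        b
    }
}

struct PairingHeap {
    root: Option<Node>,
}

impl PairingHeap {
    fn new() -> Self {
        Self { root: None }
    }

    // O(1): meld a one-node heap with the current root.
    fn push(&mut self, key: u64) {
        let node = Node { key, children: Vec::new() };
        self.root = Some(match self.root.take() {
            Some(root) => meld(root, node),
            None => node,
        });
    }

    // O(1): the minimum is always at the root.
    fn peek(&self) -> Option<u64> {
        self.root.as_ref().map(|n| n.key)
    }

    // O(log n) amortized: pairwise meld the children, then fold.
    fn pop(&mut self) -> Option<u64> {
        let root = self.root.take()?;
        let mut pairs = Vec::new();
        let mut iter = root.children.into_iter();
        while let Some(first) = iter.next() {
            match iter.next() {
                Some(second) => pairs.push(meld(first, second)),
                None => pairs.push(first),
            }
        }
        self.root = pairs.into_iter().reduce(meld);
        Some(root.key)
    }
}
```

The same shape explains the benchmark table below: push and peek touch only the root, while pop pays for the restructuring.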
### SkipList — Sorted Map

Probabilistic sorted map with O(log n) insert/lookup/remove. Node allocation is fully internal — the user sees only keys and values.
```rust
// `Allocator` and `SkipList` are generated by `skip_allocator!` (see
// Allocator Macros below); keys, values, and capacity are illustrative.
Allocator::builder().capacity(1024).build().unwrap();

let mut map = SkipList::new();
map.try_insert(1, "one").unwrap();
assert_eq!(map.get(&1), Some(&"one"));

// Sorted iteration
for (key, value) in map.iter() {
    println!("{key}: {value}");
}

// Entry API
map.entry(2).or_try_insert("two").unwrap();
```
## Allocator Macros

Each collection has a macro that generates a typed, thread-local slab allocator and its companion types. Invoke it inside a module:
| Macro | Collection | Generated Types |
|---|---|---|
| `list_allocator!(T, bounded\|unbounded)` | `List` | `Allocator`, `Handle`, `List`, `Cursor` |
| `heap_allocator!(T, bounded\|unbounded)` | `Heap` | `Allocator`, `Handle`, `Heap` |
| `skip_allocator!(K, V, bounded\|unbounded)` | `SkipList` | `Allocator`, `SkipList`, `Cursor`, `Entry` |
Bounded allocators have a fixed capacity; insert operations return `Result<_, Full<T>>` when full. Unbounded allocators grow as needed via chunk allocation.
## Initialization
Allocators must be initialized before use:
```rust
// Bounded — set capacity
Allocator::builder().capacity(1024).build().unwrap();

// Unbounded — optionally set chunk size (default 4096)
Allocator::builder().chunk_size(8192).build().unwrap();
```
## Performance

Cycle-accurate latency measurements on an Intel Core Ultra 7 155H, pinned to a physical core with turbo boost disabled. See BENCHMARKS.md for full results.

### List (p50 cycles)
| Operation | Cycles |
|---|---|
| link_back | 20 |
| pop_front | 22 |
| unlink | 22 |
| move_to_front | 22 |
### Heap (p50 cycles)
| Operation | Cycles |
|---|---|
| push | 24 |
| pop | 312 |
| peek | 20 |
| unlink | 30 |
### SkipList (p50 cycles, 10k-element population)
| Operation | Cycles |
|---|---|
| get (hit) | 171 |
| insert | 510 |
| remove | 544 |
| pop_first | 26 |
## License
Licensed under either of Apache License, Version 2.0 or MIT License at your option.