Arena allocator for building `bytes::Bytes` without copying the final payload.

Write into a `Buffer`, call `freeze()`, and hand off `Bytes` backed by arena memory. The slot or block returns to the arena when the last clone or slice drops.

`FixedArena` is the recommended high-throughput path when one slot size can cover the workload: uniform slots, a bitmap claim/release path, and predictable allocation cost.

`BuddyArena` is the variable-size allocator. Use it when request sizes swing enough that one fixed slot size would waste memory or spill too often. It rounds requests up to powers of two, splits larger blocks on demand, and coalesces neighbors on release.
## Quick start

```rust
use std::num::NonZeroUsize;
use bytes::BufMut;
// Import path and builder-argument shape are assumed; adjust to the published crate.
use arena::FixedArena;

let arena = FixedArena::with_slot_capacity(NonZeroUsize::new(4096).unwrap()) // 4 KiB slots (illustrative)
    .build()?;
let mut buf = arena.allocate()?;
buf.put_slice(b"hello arena");
let _bytes = buf.freeze();
# Ok::<(), Box<dyn std::error::Error>>(())
```
## Which allocator should I use?

| Arena | Start here when | Why |
|---|---|---|
| `FixedArena` | Most allocations fit one chosen slot size, or a small set of predictable slot sizes across separate arenas | Fastest path, simplest capacity planning, lowest allocator overhead |
| `BuddyArena` | Request sizes vary enough that fixed slots would waste memory or spill too often | One shared region, power-of-two block reuse, split/coalesce behavior for variable-size workloads |

Use `FixedArena` by default. Use `BuddyArena` only when variable-size allocation is a hard requirement.
## Buddy allocator example

```rust
use std::num::NonZeroUsize;
use bytes::BufMut;
// Import path and exact builder shape are assumed; adjust to the published crate.
use arena::{BuddyArena, BuddyGeometry};

// Illustrative geometry: 1 MiB region with 256-byte minimum blocks.
let geometry = BuddyGeometry::exact(
    NonZeroUsize::new(1 << 20).unwrap(),
    NonZeroUsize::new(256).unwrap(),
)?;
let arena = BuddyArena::builder(geometry).build()?;
let mut buf = arena.allocate(NonZeroUsize::new(1500).unwrap())?; // rounds up to 2048
buf.put_slice(b"variable-size payload");
let _bytes = buf.freeze();
# Ok::<(), Box<dyn std::error::Error>>(())
```
## Geometry choice

- `BuddyGeometry::exact(...)` validates a geometry that is already chosen. Invalid inputs fail immediately.
- `BuddyGeometry::nearest(...)` snaps the requested total size and minimum block size up to the nearest valid buddy geometry. Use it when the target shape is approximate and automatic adjustment is acceptable.
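A side-by-side sketch of the two constructors, assuming the `(total size, minimum block size)` argument order implied above; the values are illustrative:

```rust
use std::num::NonZeroUsize;
use arena::BuddyGeometry; // import path assumed

// exact(): the shape must already be a valid buddy geometry, or it errors.
let exact = BuddyGeometry::exact(
    NonZeroUsize::new(1 << 20).unwrap(), // 1 MiB total, a power of two
    NonZeroUsize::new(256).unwrap(),     // 256 B minimum block
)?;

// nearest(): both values snap up: 1_000_000 B becomes 1 MiB, 200 B becomes 256 B.
let snapped = BuddyGeometry::nearest(
    NonZeroUsize::new(1_000_000).unwrap(),
    NonZeroUsize::new(200).unwrap(),
)?;
# Ok::<(), Box<dyn std::error::Error>>(())
```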
## Auto-spill

By default, writing past capacity panics. That follows the contract of a fixed-capacity `BufMut`: once capacity is exhausted, additional writes are an error unless a different behavior is selected explicitly.

With `auto_spill()`, the buffer copies its current contents to heap-backed storage, releases the arena allocation immediately, and keeps writing on the heap. `freeze()` still returns `Bytes`.
```rust
# use std::num::NonZeroUsize;
# use arena::FixedArena; // import path assumed
# use bytes::BufMut;
let arena = FixedArena::with_slot_capacity(NonZeroUsize::new(64).unwrap()) // argument shape assumed
    .auto_spill()
    .build()?;
let mut buf = arena.allocate()?;
buf.put_slice(&[0u8; 256]); // larger than the 64-byte slot: contents move to the heap
let bytes = buf.freeze(); // freeze() still returns Bytes
assert_eq!(bytes.len(), 256);
# Ok::<(), Box<dyn std::error::Error>>(())
```
## Initialization policy

By default, arena allocations use `InitPolicy::Uninit`. That follows Rust's usual high-performance model for writable capacity: newly allocated bytes are not zeroed, and only written bytes become part of the frozen `Bytes`.

For security-sensitive or data-hygiene-sensitive workloads, `InitPolicy::Zero` clears reused arena memory before handing it to a writer. That trades throughput for stronger "new allocation starts zeroed" behavior.
```rust
# use std::num::NonZeroUsize;
# use arena::{FixedArena, InitPolicy}; // import path assumed
let arena = FixedArena::with_slot_capacity(NonZeroUsize::new(4096).unwrap()) // argument shape assumed
    .init_policy(InitPolicy::Zero)
    .build()?;
# Ok::<(), Box<dyn std::error::Error>>(())
```
## Async allocation

With the `async-alloc` feature, both arena types support `allocate_async()`. The task waits until capacity becomes available instead of busy-looping or falling back to the heap.

```toml
[dependencies]
# Crate name was elided here; substitute the published name.
arena = { version = "0.6", features = ["async-alloc"] }
```

```rust
let arena = FixedArena::with_slot_capacity(NonZeroUsize::new(4096).unwrap()).build()?; // builder shape assumed
let buf = arena.allocate_async().await;
```
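A fuller sketch of the waiting behavior, assuming a tokio runtime and the builder shape used above (both are assumptions, not confirmed API):

```rust
use std::num::NonZeroUsize;
use arena::FixedArena; // import path assumed
use bytes::BufMut;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let arena = FixedArena::with_slot_capacity(NonZeroUsize::new(4096).unwrap()).build()?;

    // If every slot is claimed, this await parks the task until another
    // buffer drops and returns its slot, rather than erroring or heap-allocating.
    let mut buf = arena.allocate_async().await;
    buf.put_slice(b"written once capacity is available");
    drop(buf.freeze()); // dropping the frozen Bytes returns the slot to the arena
    Ok(())
}
```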
## Metrics

`metrics()` snapshots allocator state. Fixed reports allocation, failure, spill, and live-capacity counters. Buddy adds split, coalesce, and largest-free-block data so fragmentation pressure is visible directly.

In load tests, watch `spill_count` and `largest_free_block` over time to catch pressure and fragmentation early.
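A minimal monitoring sketch: `metrics()` and the two counter names come from the text above, but treating them as plain struct fields is an assumption.

```rust
// Field access is assumed; check the snapshot type that metrics() returns.
let snapshot = arena.metrics();
eprintln!(
    "spills={} largest_free_block={}",
    snapshot.spill_count, snapshot.largest_free_block,
);
```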
## Owned mutable bytes

`BytesExt::into_owned()` is the explicit handoff from arena-backed frozen bytes to owned mutable heap storage. Unlike `auto_spill()`, which changes storage implicitly on write overflow, `into_owned()` makes the copy at the point where the caller chooses to leave arena-backed storage:
```rust
use std::num::NonZeroUsize;
use bytes::BufMut;
// Import path assumed; BytesExt provides into_owned() on the frozen Bytes.
use arena::{BytesExt, FixedArena};

let arena = FixedArena::with_slot_capacity(NonZeroUsize::new(4096).unwrap()) // argument shape assumed
    .build()?;
let mut buf = arena.allocate()?;
buf.put_slice(b"header");
let frozen = buf.freeze();
let mut owned = frozen.into_owned();
// Arena slot is freed; owned is heap-backed and mutable
owned.put_slice(b" body");
assert_eq!(&owned[..], b"header body");
# Ok::<(), Box<dyn std::error::Error>>(())
```
## Examples

Start with `fixed_buffer`, then run `buddy_buffer` for the variable-size path.

| Example | What it shows |
|---|---|
| `fixed_buffer` | Allocate, write, freeze, and send across threads |
| `buddy_buffer` | Variable-size allocations with split and coalesce |
| `spill_buffer` | Auto-spill to heap when a buffer outgrows slot capacity |
| `async_alloc` | Wait for capacity with `allocate_async()` |
| `treiber_waker` | Custom `Waiter` impl using a lock-free Treiber stack |
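Each entry is a standard Cargo example, so the usual runner applies (the feature flag for `async_alloc` is an assumption):

```sh
cargo run --example fixed_buffer
# async_alloc presumably requires the feature:
cargo run --example async_alloc --features async-alloc
```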
## Benchmarks

Benchmark summary tables and local Criterion HTML report links are in docs/benchmarks.md. That page includes both the Apple M4 Max baseline and a real-hardware k8s run summary.

Run benchmarks via the repository's mise tasks; the task names shown below are assumed from that setup:
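```sh
# Task names assumed from the mise setup under "Development tasks".
mise run bench
mise run bench:extreme
```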
`bench:extreme` enables an additional high-thread contention point (40 threads by default). Override via `ARENA_BENCH_EXTREME_THREADS=<n>`.
## Development tasks

This repository uses `mise` as its task runner. Install it from the official guide: https://mise.jdx.dev/installing-mise.html.

Common commands in this repository are defined as `mise` tasks. List all available tasks with:
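```sh
mise tasks
```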
## Validation

The crate is exercised under standard tests, doctests, examples, and targeted concurrency validation:

- `miri` checks unsafe code paths for undefined behavior regressions.
- `loom` models the sync and async coordination paths under many thread interleavings.
- CI also runs formatting, clippy, docs, examples, and MSRV coverage.
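Typical local invocations, assuming stock miri and loom setups (the loom test target name is illustrative):

```sh
cargo +nightly miri test
RUSTFLAGS="--cfg loom" cargo test --release --test loom
```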
## Deployment guides

- NUMA-aware deployment pattern: per-node arenas, thread pinning, and bounded cross-node fallback.
## Changelog

Release notes for 0.3.0, 0.4.0, 0.5.x, and 0.6.0 are in CHANGELOG.md.
## Status

As of 0.6.0, the API is stable. Any future API changes will ship with adapters rather than break the contract directly.
## Contributing

See CONTRIBUTING.md for development workflow and PR expectations.
## License

MIT