Sliding Ring
sliding-ring provides a cache-friendly, allocation-free ring buffer that keeps its slots indexed relative to a moving anchor index. It was extracted from our high-performance orderbook engine and generalized so any `Copy` slot type can participate in consistent sliding-window semantics. A const-generic `DEPTH` defines how many offsets past the anchor stay live at any time; newly entered slots are proactively zeroed on each shift so the hot window never carries stale data. Store any domain-specific coordinate (price, timestamp, etc.) outside the ring and combine it with the offsets it exposes.
Installation
The crate's `cargo doc` output is driven by this README, so `cargo doc --open -p sliding-ring` gives you rendered API docs locally.
Why this exists
Traditional ring buffers optimize for FIFO semantics, but exchange and telemetry workloads often care about the best coordinate (highest bid, current time bucket, etc.) and need random access around that best. `SlidingRing` keeps a 256-slot ring that re-anchors in O(1) time, clearing the newly entered slots while preserving the rest. This enables predictable performance, avoids per-slot allocations, and keeps the CPU cache warm even under heavy churn.
When to reach for it
- Aggregated orderbooks keyed by discrete price ticks.
- Sliding time buckets for counters, rate-limiters, or observability windows.
- Any moving reference window where only a bounded number of offsets from the anchor matter.
When it is the wrong tool
- You need more than 256 simultaneous offsets (use a wider structure or a map).
- The anchor jumps sparsely across a huge domain, making the clearing work wasteful.
- Slots own large heap allocations that would be expensive to recreate on every clear (store handles or IDs instead; see the sketch below).
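For the last point, a common pattern is to keep the heavy data in an external pool and store only a small `Copy` handle in each slot. A minimal sketch, assuming the constructor shape shown later in this README; `OrderListHandle` and `OrderPool` are hypothetical types, not part of the crate:

```rust
use sliding_ring::SlidingRing;

// Hypothetical handle-style slot: the ring stores only a cheap Copy index,
// while an external pool owns the heap-allocated per-level data.
#[derive(Clone, Copy, Default)]
struct OrderListHandle(u32); // 0 = empty sentinel, n = pool index n - 1

struct OrderPool {
    lists: Vec<Vec<u64>>, // heap-owned payloads live outside the ring
}

impl OrderPool {
    fn get(&self, handle: OrderListHandle) -> Option<&Vec<u64>> {
        handle.0.checked_sub(1).and_then(|i| self.lists.get(i as usize))
    }
}

const DEPTH: u8 = 64;
// The default (empty) handle doubles as the ring's cleared-slot value.
let _ring: SlidingRing<OrderListHandle, DEPTH> = SlidingRing::new(OrderListHandle::default());
```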
Model
- The ring stores an anchor index (0-255). Track any absolute coordinate outside the structure.
- A const `DEPTH` describes the contiguous window of offsets `[0, DEPTH)` that stays live; everything else is reset to the caller-supplied “zero” value.
- Shifting by `k` steps rewires indices modulo 256, then clears the `|k|` offsets that newly enter the window (or resets the whole ring when `|k| >= DEPTH`). `shift_by_keep_anchor` skips clearing the newly entered anchor slot on backward moves, which is useful when promoting the next best level without losing its quantity. A short sketch of these shift semantics follows this list.
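A minimal sketch of the shift semantics above. The method names come from this section, but the accessor names, step parameter types, and the sign convention for backward moves are assumptions rather than confirmed API:

```rust
use sliding_ring::SlidingRing;

const DEPTH: u8 = 8;
let mut ring: SlidingRing<u64, DEPTH> = SlidingRing::new(0);
*ring.slot_mut(3) = 7;

// Forward shift by 3: the slot that was at offset 3 is now the anchor (offset 0),
// and the 3 offsets that newly enter the far end of the window are cleared to 0.
ring.shift_by(3);
assert_eq!(*ring.slot_mut(0), 7);

// Backward move that keeps (rather than zeroes) the slot becoming the new anchor,
// e.g. when promoting the next best level without losing its quantity.
ring.shift_by_keep_anchor(-1);
```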
Performance guardrails
- No dynamic dispatch: `SlidingRing<T, DEPTH>` stores `[T; 256]`, so every slot type is monomorphized at compile time and the compiler inlines the small helper methods. `T` must be `Copy` and `DEPTH` controls the number of contiguous offsets that stay hot beyond the anchor.
- Anchor slices for SIMD: `slices_from_anchor*` return the two contiguous spans covering exactly `DEPTH` slots (tail/head), so you can run vectorized scans without modulo math. This is the primary primitive for ultra-low-latency users.
- Stable SIMD workflow: pair `slices_from_anchor*` with the `wide` crate (see the `simd_scan` example below) to stay on stable Rust while still using vector instructions.
- Raw-pointer iterators: `iter()`/`iter_mut()` walk storage order, while `iter_from_anchor*` emit `(offset, slot)` pairs relative to the current anchor (see the sketch after this list). Both rely on pointer arithmetic, so they stay zero-cost abstractions.
- Inline hot paths: ring methods stay tiny and are annotated with `#[inline(always)]` so LLVM can flatten the helpers.
- Memory over CPU: we willingly dedicate 256 slots per ring to keep operations branch-predictable. If memory becomes tighter than latency, this crate is not the right fit.
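To make the iterator bullet concrete, here is a small sketch contrasting the two traversal orders; the iterator item shapes (`&T` from `iter()`, `(offset, &T)` from `iter_from_anchor()`) are inferred from this README rather than confirmed:

```rust
use sliding_ring::SlidingRing;

const DEPTH: u8 = 16;
let mut ring: SlidingRing<u32, DEPTH> = SlidingRing::new(0);
*ring.slot_mut(2) = 42;

// Storage order: every one of the 256 backing slots, in memory order.
let total: u32 = ring.iter().sum();

// Anchor order: (offset, slot) pairs starting at the anchor (offset 0).
let first_live = ring
    .iter_from_anchor()
    .find(|(_, slot)| **slot != 0)
    .map(|(offset, _)| offset);

assert_eq!(total, 42);
assert_eq!(first_live, Some(2));
```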
Ultra-low-latency workflow
- Call `slices_from_anchor*()` to obtain the tail/head spans.
- Run SIMD loads/stores or hand-unrolled loops directly over those spans (tail first, then head) to scan for non-zero slots, update counters, etc.
- If you also need absolute coordinates, pair the SIMD scan with `iter_from_anchor*()`; they share the same traversal order and hand you offsets you can add to your externally tracked anchor (sketched below).
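A sketch of that last step, translating a live offset back into an absolute coordinate tracked outside the ring (method names and item shapes are assumptions, as above):

```rust
use sliding_ring::SlidingRing;

const DEPTH: u8 = 64;
let mut ring: SlidingRing<u64, DEPTH> = SlidingRing::new(0);
let anchor_price = 10_000i64; // absolute coordinate tracked by the caller
*ring.slot_mut(7) = 250;      // 250 units resting 7 ticks past the anchor

// Same traversal order as slices_from_anchor*: anchor first, then increasing offsets.
for (offset, qty) in ring.iter_from_anchor() {
    if *qty != 0 {
        let absolute_price = anchor_price + offset as i64;
        println!("{qty} units at {absolute_price}");
    }
}
```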
Slot requirements
The ring stores `[T; 256]`, so `T` must be `Copy`. Construction takes the value that represents a cleared slot:
```rust
use sliding_ring::SlidingRing;

const DEPTH: u8 = 48;
// `new` takes the cleared-slot value; the exact constructor shape is reconstructed from the prose.
let mut ring: SlidingRing<u64, DEPTH> = SlidingRing::new(0u64);
```
That “zero” gets copied into every slot initially and whenever a slot needs to be cleared because the anchor moved. It can be anything cheap to copy: numeric zeroes, `None`, or lightweight structs. Only the `DEPTH` slots starting at the anchor remain significant; everything else is cleared automatically.
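For example, a sketch of non-numeric cleared values, assuming the constructor shown above; `Level` here is a hypothetical slot type:

```rust
use sliding_ring::SlidingRing;

// A lightweight Copy slot with an explicit "empty" representation.
#[derive(Clone, Copy)]
struct Level { qty: u64, orders: u16 }

const EMPTY: Level = Level { qty: 0, orders: 0 };
const DEPTH: u8 = 48;

// Any cheap-to-copy zero works: a struct sentinel, a numeric zero, or `None`.
let _levels: SlidingRing<Level, DEPTH> = SlidingRing::new(EMPTY);
let _stamps: SlidingRing<Option<u32>, DEPTH> = SlidingRing::new(None);
```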
Example
```rust
use sliding_ring::SlidingRing;

// Hypothetical `Copy` slot type; accessor and shift signatures below are
// reconstructed from the prose in this README.
#[derive(Clone, Copy, Default)]
struct Slot { hits: u64 }

const DEPTH: u8 = 64;
let mut ring: SlidingRing<Slot, DEPTH> = SlidingRing::new(Slot::default());
let mut anchor = 10_000i128; // absolute coordinate tracked outside the ring
let offset = 5u8;
ring.slot_mut(offset).hits += 1;

// Re-anchor by `offset` steps and advance the external coordinate in lockstep.
ring.shift_by(i16::from(offset));
anchor += offset as i128;
assert_eq!(anchor, 10_005);
```
Telemetry counter
```rust
use sliding_ring::SlidingRing;

// Hypothetical counter slot; the offsets below are illustrative.
#[derive(Clone, Copy, Default)]
struct Bucket { hits: u64 }

const DEPTH: u8 = 32;
let mut ring: SlidingRing<Bucket, DEPTH> = SlidingRing::new(Bucket::default());
ring.slot_mut(0).hits += 1;
ring.slot_mut(3).hits += 3;

// Anchor-ordered reduction over (offset, slot) pairs.
let sum: u64 = ring
    .iter_from_anchor()
    .map(|(_offset, bucket)| bucket.hits)
    .sum();
assert_eq!(sum, 4);
```
SIMD scanning
```rust
use sliding_ring::SlidingRing;
use wide::u64x4;

const DEPTH: u8 = 64;
// Constructor and accessor shapes reconstructed from the surrounding sections.
let mut ring: SlidingRing<u64, DEPTH> = SlidingRing::new(0);
for i in 0..DEPTH {
    *ring.slot_mut(i) = u64::from(i);
}

// Tail/head spans cover exactly DEPTH live slots; scan tail first, then head.
let (tail, head) = ring.slices_from_anchor();
let needle = u64x4::splat(42);
for chunk in tail.chunks_exact(4).chain(head.chunks_exact(4)) {
    let lanes = u64x4::new([chunk[0], chunk[1], chunk[2], chunk[3]]);
    let eq_mask = lanes.cmp_eq(needle); // non-zero lanes mark slots equal to the needle
    let _hit = eq_mask.to_array().iter().any(|&m| m != 0);
}
```
Run `cargo run --example telemetry` or `cargo run --example simd_scan` for the full samples. The SIMD example relies on the `wide` crate, so it stays on stable Rust while still demonstrating contiguous-slice scanning.
Documentation
- API reference on docs.rs.
- Run `cargo doc --open -p sliding-ring` for local browsing.
Status
This crate is extracted from production systems but is still evolving. Expect additions such as configurable ring widths, additional iterators, and more slot utilities as we work toward a polished open-source release.
Examples
- `cargo run --example telemetry` – basic per-slot counter with anchor-ordered reduction.
- `cargo run --example simd_scan` – demonstrates splitting the ring into contiguous spans and scanning with SIMD on stable via the `wide` crate.
Benchmarks
- `cargo bench -p sliding-ring` runs the Criterion suite measuring anchor shifts and slot updates.
- Bazel users can run `bazel run //lib/collections/sliding-ring:sliding_ring_bench` (tagged `benchmark`) to launch the same binary.
License
Licensed under either of
- Apache License, Version 2.0, (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in sliding-ring by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.