
Stack Queue


A heapless auto-batching queue featuring deferrable batching by way of negotiating exclusive access over task ranges on thread-owned circular buffers. Because tasks continue to be enqueued until a batch is bounded, bounding can be deferred until, for instance, a database connection has been acquired, allowing for opportunistic batching. This delivers optimal batching at all workload levels without batch-collection overhead, superfluous timeouts, or unnecessary allocations.

Usage

Implement one of the following traits while using the local_queue macro (a sketch follows this list):

  • TaskQueue, for batching with per-task receivers
  • BackgroundQueue, for processing task batches in the background without receivers
  • BatchReducer, for using closures to reduce over batched data
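As a minimal sketch, an echo-style TaskQueue could look like the following. The trait shape shown here, including PendingAssignment, CompletionReceipt, into_assignment, auto_batch, and the buffer_size attribute argument, is recalled from the crate's documentation and is an assumption to verify against the version in use; the essential point is that the batch keeps accepting tasks until into_assignment() bounds it, so expensive setup such as acquiring a database connection can happen first.

```rust
use stack_queue::{
  assignment::{CompletionReceipt, PendingAssignment},
  local_queue, TaskQueue,
};

struct EchoQueue;

#[local_queue(buffer_size = 64)]
impl TaskQueue for EchoQueue {
  type Task = u64;
  type Value = u64;

  async fn batch_process<const N: usize>(
    batch: PendingAssignment<'_, Self, N>,
  ) -> CompletionReceipt<Self> {
    // The batch stays unbounded while this assignment is pending; deferring
    // `into_assignment()` (e.g. until a connection is checked out) is what
    // enables opportunistic batching.
    let assignment = batch.into_assignment();

    // Resolve each task in the batch; here every task simply echoes itself.
    assignment.map(|task| task)
  }
}
```

Each enqueued task then awaits its value through a per-task receiver, e.g. `let nine = EchoQueue::auto_batch(9).await;` (method name likewise assumed from the crate documentation).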

Optimal Runtime Configuration

For best performance, use the Tokio runtime exclusively, configured via the tokio::main or tokio::test macro with the crate attribute set to async_local, and with the barrier-protected-runtime feature enabled on async-local. Doing so configures the Tokio runtime with a barrier that rendezvouses runtime worker threads during shutdown, ensuring tasks never outlive the thread-local data owned by those threads and obviating the need for Box::leak as a means of lifetime extension.
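As a sketch, assuming async-local is added as a dependency with the barrier-protected-runtime feature, a binary entry point would look like this (the Cargo.toml line is illustrative):

```rust
// Cargo.toml (illustrative):
// async-local = { version = "*", features = ["barrier-protected-runtime"] }

// Pointing the macro's `crate` attribute at async_local builds the runtime
// through async-local's barrier-protected configuration of Tokio.
#[tokio::main(crate = "async_local")]
async fn main() {
  // Worker threads rendezvous at a barrier during shutdown, so spawned tasks
  // cannot outlive thread-local data owned by those threads.
}
```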

Benchmark results // batching 16 tasks

| crossbeam | flume | stack-queue::TaskQueue | stack-queue::BackgroundQueue | tokio::mpsc |
|---|---|---|---|---|
| 1.74 us (✅ 1.00x) | 2.01 us (❌ 1.16x slower) | 974.99 ns (✅ 1.78x faster) | 644.55 ns (🚀 2.69x faster) | 1.96 us (❌ 1.13x slower) |

Stable Usage

This crate conditionally makes use of the nightly-only feature type_alias_impl_trait to allow async fns in traits to be unboxed. To compile on stable, the boxed feature flag can be used to downgrade async_t::async_trait to async_trait::async_trait.
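For example, a stable-toolchain build might enable the flag in Cargo.toml (the version shown is illustrative):

```toml
[dependencies]
stack-queue = { version = "0.5", features = ["boxed"] }
```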