Stack Queue

A (mostly) heapless auto-batching queue featuring deferrable batching by way of negotiating exclusive access over task ranges on thread-owned circular buffers. Because tasks continue to be enqueued until a batch is bounded, batch collection can be deferred until after a database connection has been acquired, allowing for opportunistic batching. This approach delivers optimal batching at all workload levels without batch-collection overhead, superfluous timeouts, or unnecessary allocations.

Usage

Implement one of the queue traits provided by this crate, such as TaskQueue, for a type annotated with the local_queue macro, as in the sketch below:
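
As a rough sketch only: the attribute options, trait bounds, and method signatures below are approximations of the stack-queue API and may differ between versions, so defer to the crate documentation for the exact shapes.

```rust
use stack_queue::{
  assignment::{CompletionReceipt, PendingAssignment},
  local_queue, TaskQueue,
};

// A queue backed by a thread-owned circular buffer. The buffer_size option
// shown here is an assumed macro attribute.
#[local_queue(buffer_size = 256)]
struct EchoQueue;

impl TaskQueue for EchoQueue {
  type Task = u64;
  type Value = u64;

  // Invoked once per negotiated batch; the assignment covers an exclusive
  // range of tasks enqueued on the current worker thread's buffer.
  async fn batch_process<const N: usize>(
    batch: PendingAssignment<'_, Self, N>,
  ) -> CompletionReceipt<Self> {
    // Bounding the batch can be deferred (for example, until a database
    // connection has been acquired); here each task simply resolves to its
    // own value.
    batch.into_assignment().map(|task| task)
  }
}
```

Tasks are then enqueued through the queue type itself (for example, something along the lines of EchoQueue::auto_batch(value).await); see the crate documentation for the exact entry point.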

Optimal Runtime Configuration

For best performance, exclusively use the Tokio runtime as configured via the tokio::main or tokio::test macro with the crate attribute set to async_local, with the barrier-protected-runtime feature enabled on async-local. Doing so configures the Tokio runtime with a barrier that rendezvouses runtime worker threads during shutdown, ensuring tasks never outlive the thread-local data owned by runtime worker threads and obviating the need for Box::leak as a fallback means of lifetime extension.
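
For illustration, a minimal sketch of that wiring, assuming an otherwise default setup (the feature name and the crate attribute come from the paragraph above; versions and other Cargo details are placeholders):

```rust
// Cargo.toml (sketch):
//   [dependencies]
//   async-local = { version = "*", features = ["barrier-protected-runtime"] }
//   tokio = { version = "*", features = ["rt-multi-thread", "macros"] }

// Pointing the macro at async_local builds the barrier-protected runtime:
// worker threads rendezvous at a barrier during shutdown, so tasks never
// outlive the thread-local data those threads own.
#[tokio::main(crate = "async_local")]
async fn main() {
  // enqueue work onto stack-queue queues here
}
```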

Benchmark results // batching 16 tasks

| crossbeam | flume | TaskQueue | tokio::mpsc |
|---|---|---|---|
| 576.33 ns (✅ 1.00x) | 656.54 ns (❌ 1.14x slower) | 255.33 ns (🚀 2.26x faster) | 551.48 ns (✅ 1.05x faster) |