nexus_pool/
lib.rs

//! # nexus-pool
//!
//! High-performance object pools for latency-sensitive applications.
//!
//! `nexus-pool` provides two pool implementations optimized for different
//! threading models, both designed to eliminate allocation on hot paths.
//!
//! ## Design Philosophy
//!
//! This crate follows the principle of **predictability over generality**:
//!
//! - **SPSC over MPMC**: Single-writer patterns avoid lock contention entirely
//! - **Pre-allocation over dynamic growth**: Bounded pools have deterministic behavior
//! - **Specialized over general**: Each pool type is optimized for its specific access pattern
//!
//! ## Modules
//!
//! ### [`local`] - Single-threaded pools
//!
//! For use within a single thread. Zero synchronization overhead.
//!
//! - [`local::BoundedPool`] - Fixed capacity, pre-allocated objects
//! - [`local::Pool`] - Growable, creates objects on demand via factory
//!
//! **Performance**: ~26 cycles acquire, ~26 cycles release (p50)
//!
//! ### [`sync`] - Thread-safe pools
//!
//! For cross-thread object transfer with single-acquirer semantics.
//!
//! - [`sync::Pool`] - One thread acquires, any thread can return
//!
//! **Performance**: ~42 cycles acquire, ~68 cycles release (p50)
//!
//! ## Why Single-Acquirer?
//!
//! You might ask: why not allow any thread to both acquire and return?
//!
//! **The short answer**: MPMC pools require solving the ABA problem, which adds
//! significant overhead (generation counters, hazard pointers, or epoch-based
//! reclamation). For most high-performance use cases, MPMC is also a design smell.
//!
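//! To make that overhead concrete: the classic mitigation is to tag the freelist
//! head with a generation counter, so a compare-and-swap armed with a stale head
//! fails instead of succeeding on a recycled index. A minimal sketch of the
//! technique (illustrative only, not this crate's internals; the 32/32 bit
//! split, `NIL` sentinel, and `pack`/`unpack` helpers are assumptions):
//!
//! ```rust
//! use std::sync::atomic::{AtomicU64, Ordering};
//!
//! // Head of a lock-free freelist: low 32 bits = slot index,
//! // high 32 bits = generation, bumped on every successful pop.
//! // The generation makes an "A -> B -> A" head change visible:
//! // a CAS carrying the old generation fails even if the index matches.
//! struct TaggedHead(AtomicU64);
//!
//! const NIL: u32 = u32::MAX; // empty-list sentinel
//!
//! fn pack(index: u32, generation: u32) -> u64 {
//!     ((generation as u64) << 32) | index as u64
//! }
//!
//! fn unpack(tagged: u64) -> (u32, u32) {
//!     (tagged as u32, (tagged >> 32) as u32)
//! }
//!
//! impl TaggedHead {
//!     fn pop(&self, next_of: impl Fn(u32) -> u32) -> Option<u32> {
//!         let mut current = self.0.load(Ordering::Acquire);
//!         loop {
//!             let (index, generation) = unpack(current);
//!             if index == NIL {
//!                 return None;
//!             }
//!             // Proposed new head: successor slot, generation + 1.
//!             let next = pack(next_of(index), generation.wrapping_add(1));
//!             match self.0.compare_exchange_weak(
//!                 current, next, Ordering::AcqRel, Ordering::Acquire,
//!             ) {
//!                 Ok(_) => return Some(index),
//!                 // Lost a race (or spurious failure): retry with the fresh value.
//!                 Err(seen) => current = seen,
//!             }
//!         }
//!     }
//! }
//! ```
//!
//! Every pop pays the wider CAS and the retry loop; that cost is exactly what
//! the single-acquirer design avoids.
//!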
//! **The architectural answer**: If multiple threads need to acquire from the same
//! pool, you're violating the single-writer principle. This creates contention
//! and unpredictable latency—exactly what you're trying to avoid by using a pool.
//!
//! Better alternatives:
//! - **Per-thread pools**: Each thread owns its own `local::Pool`
//! - **Sharded pools**: Hash to a pool based on thread ID
//! - **Message passing**: Send pre-allocated buffers via channels
//!
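//! The per-thread variant can be as small as a `thread_local!` freelist. A
//! standalone sketch using a plain `Vec` in place of `local::Pool` (the
//! `acquire`/`release` names and the 4096-byte capacity are illustrative):
//!
//! ```rust
//! use std::cell::RefCell;
//!
//! thread_local! {
//!     // Each thread owns a private freelist of reusable buffers:
//!     // no sharing, no atomics, no contention.
//!     static BUFFERS: RefCell<Vec<Vec<u8>>> = RefCell::new(Vec::new());
//! }
//!
//! fn acquire() -> Vec<u8> {
//!     BUFFERS.with(|pool| {
//!         pool.borrow_mut()
//!             .pop()
//!             .unwrap_or_else(|| Vec::with_capacity(4096))
//!     })
//! }
//!
//! fn release(mut buf: Vec<u8>) {
//!     buf.clear(); // reset contents; allocated capacity is retained
//!     BUFFERS.with(|pool| pool.borrow_mut().push(buf));
//! }
//! ```
//!
//! Because every thread touches only its own freelist, the single-writer
//! property holds with zero synchronization; a sharded design applies the same
//! idea, indexing into an array of pools by a hash of the thread ID.
//!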
//! If you truly need MPMC semantics, consider `crossbeam::queue::ArrayQueue` or
//! `crossbeam::queue::SegQueue`, which are well-optimized for that use case.
//!
//! ## Use Cases
//!
//! ### Trading Systems / Market Data
//!
//! ```rust
//! use nexus_pool::sync::Pool;
//!
//! // Hot path thread owns the pool
//! let pool = Pool::new(
//!     1000,
//!     || Vec::<u8>::with_capacity(4096),  // Pre-sized for typical message
//!     |v| v.clear(),                       // Reset for reuse
//! );
//!
//! // Acquire buffer, fill with market data
//! let mut buf = pool.try_acquire().expect("pool exhausted");
//! buf.extend_from_slice(b"market data...");
//!
//! // Send to worker thread for processing
//! // Buffer automatically returns to pool when worker drops it
//! std::thread::spawn(move || {
//!     process(&buf);
//!     // buf drops here, returns to pool
//! });
//! # fn process(_: &[u8]) {}
//! ```
//!
//! ### Single-Threaded Event Loops
//!
//! ```rust
//! use nexus_pool::local::BoundedPool;
//!
//! let pool = BoundedPool::new(
//!     100,
//!     || Box::new([0u8; 1024]),  // Fixed-size buffers
//!     |b| b.fill(0),             // Zero on return
//! );
//!
//! // Event loop - no allocation after startup
//! for _ in 0..1000 {
//!     if let Some(mut buf) = pool.try_acquire() {
//!         // Use buffer...
//!         buf[0] = 42;
//!         // buf returns to the pool automatically at the end of this block
//!     }
//! }
//! ```
//!
//! ## Performance Characteristics
//!
//! Measured on Intel Core i9 @ 3.1 GHz (cycles, lower is better):
//!
//! | Pool | Acquire p50 | Release p50 | Release p99 |
//! |------|-------------|-------------|-------------|
//! | `local::BoundedPool` | 26 | 26 | 58 |
//! | `local::Pool` (reuse) | 26 | 26 | 58 |
//! | `local::Pool` (factory) | 32 | 26 | 58 |
//! | `sync::Pool` | 42 | 68 | 86 |
//!
//! The sync pool is ~1.6x slower on acquire due to atomic operations, but
//! still sub-100 cycles for both operations. Release p99 remains stable
//! even under concurrent returns from multiple threads.
//!
//! ## Safety
//!
//! Both pool types use RAII guards ([`local::Pooled`], [`sync::Pooled`]) that
//! automatically return objects to the pool on drop. If the pool is dropped
//! before all guards, the guards will drop their values directly instead of
//! returning them—no panic, no leak, no use-after-free.
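//!
//! The drop path can be pictured with a guard that holds a `Weak` reference
//! back to its pool. A simplified single-threaded sketch of the idea using
//! `Rc`/`RefCell` (not the actual `Pooled` internals):
//!
//! ```rust
//! use std::cell::RefCell;
//! use std::rc::{Rc, Weak};
//!
//! struct PoolInner<T> {
//!     free: RefCell<Vec<T>>,
//! }
//!
//! struct Pooled<T> {
//!     value: Option<T>,          // Some until drop
//!     pool: Weak<PoolInner<T>>,  // does not keep the pool alive
//! }
//!
//! impl<T> Drop for Pooled<T> {
//!     fn drop(&mut self) {
//!         let value = self.value.take().expect("value present until drop");
//!         match self.pool.upgrade() {
//!             // Pool still alive: hand the object back for reuse.
//!             Some(pool) => pool.free.borrow_mut().push(value),
//!             // Pool already dropped: just drop the value in place.
//!             None => drop(value),
//!         }
//!     }
//! }
//! ```
//!
//! The `Weak` upgrade is what makes the pool-dropped-first case safe: the guard
//! never holds a dangling reference, it simply falls back to dropping its value.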
pub mod local;
pub mod sync;