// nexus_pool/lib.rs

//! # nexus-pool
//!
//! High-performance object pools for latency-sensitive applications.
//!
//! `nexus-pool` provides two pool implementations optimized for different
//! threading models, both designed to eliminate allocation on hot paths.
//!
//! ## Design Philosophy
//!
//! This crate follows the principle of **predictability over generality**:
//!
//! - **SPSC over MPMC**: Single-writer patterns avoid lock contention entirely
//! - **Pre-allocation over dynamic growth**: Bounded pools have deterministic behavior
//! - **Specialized over general**: Each pool type is optimized for its specific access pattern
//!
//! ## Modules
//!
//! ### [`local`] - Single-threaded pools
//!
//! For use within a single thread. Zero synchronization overhead.
//!
//! - [`local::BoundedPool`] - Fixed capacity, pre-allocated objects (RAII only)
//! - [`local::Pool`] - Growable, creates objects on demand via factory
//!   - RAII: [`acquire()`](local::Pool::acquire) / [`try_acquire()`](local::Pool::try_acquire) → auto-return on drop
//!   - Manual: [`take()`](local::Pool::take) / [`try_take()`](local::Pool::try_take) → [`put()`](local::Pool::put) to return
//!
//! **Performance**: ~26 cycles acquire, ~26-28 cycles release (p50)
//!
//! ### [`sync`] - Thread-safe pools
//!
//! For cross-thread object transfer with single-acquirer semantics.
//!
//! - [`sync::Pool`] - One thread acquires, any thread can return
//!
//! **Performance**: ~42 cycles acquire, ~68 cycles release (p50)
//!
//! ## Why Single-Acquirer?
//!
//! You might ask: why not allow any thread to both acquire and return?
//!
//! **The short answer**: MPMC pools require solving the ABA problem, which adds
//! significant overhead (generation counters, hazard pointers, or epoch-based
//! reclamation). For most high-performance use cases, MPMC is also a design smell.
//!
//! **The architectural answer**: If multiple threads need to acquire from the same
//! pool, you're violating the single-writer principle. This creates contention
//! and unpredictable latency—exactly what you're trying to avoid by using a pool.
//!
//! Better alternatives:
//! - **Per-thread pools**: Each thread owns its own `local::Pool`
//! - **Sharded pools**: Hash to a pool based on thread ID
//! - **Message passing**: Send pre-allocated buffers via channels
//!
//! If you truly need MPMC semantics, consider [`crossbeam::queue::ArrayQueue`] or
//! [`crossbeam::queue::SegQueue`], which are well-optimized for that use case.
//!
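//! The per-thread alternative is easy to sketch with `thread_local!`. The
//! `FreeList` type below is a hypothetical stand-in used only for illustration
//! (in practice `local::Pool` would play this role); its `take`/`put` methods
//! are not part of this crate's API:
//!
//! ```rust
//! use std::cell::RefCell;
//!
//! // Hypothetical stand-in for a per-thread free list.
//! struct FreeList {
//!     items: RefCell<Vec<Vec<u8>>>,
//! }
//!
//! impl FreeList {
//!     fn take(&self) -> Vec<u8> {
//!         // Reuse a returned buffer, or allocate once on a cold path.
//!         self.items
//!             .borrow_mut()
//!             .pop()
//!             .unwrap_or_else(|| Vec::with_capacity(4096))
//!     }
//!     fn put(&self, mut buf: Vec<u8>) {
//!         buf.clear(); // reset before reuse
//!         self.items.borrow_mut().push(buf);
//!     }
//! }
//!
//! thread_local! {
//!     // Each thread owns its own pool: no sharing, no contention.
//!     static POOL: FreeList = FreeList { items: RefCell::new(Vec::new()) };
//! }
//!
//! let handles: Vec<_> = (0..4)
//!     .map(|_| {
//!         std::thread::spawn(|| {
//!             POOL.with(|pool| {
//!                 let mut buf = pool.take();
//!                 buf.extend_from_slice(b"per-thread work");
//!                 pool.put(buf); // returns to this thread's pool only
//!             })
//!         })
//!     })
//!     .collect();
//! for h in handles {
//!     h.join().unwrap();
//! }
//! ```
//!
//! Because each `FreeList` is reached only through `thread_local!`, no
//! synchronization is needed at all, which is exactly the single-writer
//! property the sections above argue for.
//!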
//! ## Use Cases
//!
//! ### Trading Systems / Market Data
//!
//! ```rust
//! use nexus_pool::sync::Pool;
//!
//! // Hot path thread owns the pool
//! let pool = Pool::new(
//!     1000,
//!     || Vec::<u8>::with_capacity(4096),  // Pre-sized for typical message
//!     |v| v.clear(),                       // Reset for reuse
//! );
//!
//! // Acquire buffer, fill with market data
//! let mut buf = pool.try_acquire().expect("pool exhausted");
//! buf.extend_from_slice(b"market data...");
//!
//! // Send to worker thread for processing
//! // Buffer automatically returns to pool when worker drops it
//! std::thread::spawn(move || {
//!     process(&buf);
//!     // buf drops here, returns to pool
//! });
//! # fn process(_: &[u8]) {}
//! ```
//!
//! ### Single-Threaded Event Loops
//!
//! ```rust
//! use nexus_pool::local::BoundedPool;
//!
//! let pool = BoundedPool::new(
//!     100,
//!     || Box::new([0u8; 1024]),  // Fixed-size buffers
//!     |b| b.fill(0),             // Zero on return
//! );
//!
//! // Event loop - no allocation after startup
//! for _ in 0..1000 {
//!     if let Some(mut buf) = pool.try_acquire() {
//!         // Use buffer...
//!         buf[0] = 42;
//!     } // buf drops here, returns to pool automatically
//! }
//! ```
//!
//! ## Performance Characteristics
//!
//! Measured on Intel Core i9 @ 3.1 GHz (cycles, lower is better):
//!
//! | Pool | Acquire p50 | Release p50 | Release p99 |
//! |------|-------------|-------------|-------------|
//! | `local::BoundedPool` | 26 | 26 | 58 |
//! | `local::Pool` (reuse) | 26 | 26 | 58 |
//! | `local::Pool` (factory) | 32 | 26 | 58 |
//! | `sync::Pool` | 42 | 68 | 86 |
//!
//! The sync pool is ~1.6x slower on acquire due to atomic operations, but
//! still sub-100 cycles for both operations. Release p99 remains stable
//! even under concurrent return from multiple threads.
//!
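//! Percentile figures like those in the table can be collected with a simple
//! harness: record one sample per operation, sort the samples, and index into
//! them. A minimal sketch of that method, timing a plain `Vec`-based free list
//! (a stand-in, not this crate's pools) with `std::time::Instant` in
//! nanoseconds rather than the cycle counter used for the table:
//!
//! ```rust
//! use std::time::Instant;
//!
//! /// Index into an already-sorted sample set; `p` is in `[0.0, 1.0]`.
//! fn percentile(sorted: &[u128], p: f64) -> u128 {
//!     let idx = ((sorted.len() - 1) as f64 * p).round() as usize;
//!     sorted[idx]
//! }
//!
//! // Stand-in pool: a pre-allocated free list of buffers.
//! let mut free: Vec<Vec<u8>> = (0..1024).map(|_| Vec::with_capacity(4096)).collect();
//!
//! let mut samples = Vec::with_capacity(100_000);
//! for _ in 0..100_000 {
//!     let t0 = Instant::now();
//!     let buf = free.pop().expect("pool exhausted"); // timed: acquire
//!     samples.push(t0.elapsed().as_nanos());
//!     free.push(buf); // untimed: release
//! }
//! samples.sort_unstable();
//! println!(
//!     "acquire p50 = {} ns, p99 = {} ns",
//!     percentile(&samples, 0.50),
//!     percentile(&samples, 0.99),
//! );
//! ```
//!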
//! ## Safety
//!
//! The RAII pool types use guards ([`local::Pooled`], [`sync::Pooled`]) that
//! automatically return objects to the pool on drop. If the pool is dropped
//! before all guards, the guards will drop their values directly instead of
//! returning them—no panic, no leak, no use-after-free.
//!
//! [`local::Pool`] also supports manual [`take()`](local::Pool::take) /
//! [`put()`](local::Pool::put) for cases where RAII lifetime doesn't fit
//! (e.g., storing values in structs, passing through pipelines).
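//!
//! The drop behavior described above can be sketched in miniature: the guard
//! holds a `Weak` handle to the pool's shared free list, so if the pool is
//! already gone the upgrade fails and the value simply drops in place. The
//! `MiniPool`/`Guard` types below illustrate the pattern only; they are not
//! this crate's implementation:
//!
//! ```rust
//! use std::cell::RefCell;
//! use std::rc::{Rc, Weak};
//!
//! struct MiniPool {
//!     free: Rc<RefCell<Vec<String>>>,
//! }
//!
//! struct Guard {
//!     value: Option<String>,
//!     pool: Weak<RefCell<Vec<String>>>,
//! }
//!
//! impl MiniPool {
//!     fn acquire(&self) -> Guard {
//!         let value = self.free.borrow_mut().pop().unwrap_or_default();
//!         Guard { value: Some(value), pool: Rc::downgrade(&self.free) }
//!     }
//! }
//!
//! impl Drop for Guard {
//!     fn drop(&mut self) {
//!         if let Some(value) = self.value.take() {
//!             if let Some(free) = self.pool.upgrade() {
//!                 free.borrow_mut().push(value); // pool alive: return it
//!             }
//!             // Pool already dropped: `value` drops here. No panic, no leak.
//!         }
//!     }
//! }
//!
//! let pool = MiniPool { free: Rc::new(RefCell::new(Vec::new())) };
//! {
//!     let _g = pool.acquire();
//! } // guard drops, value returns to the pool
//! assert_eq!(pool.free.borrow().len(), 1);
//!
//! let g = pool.acquire();
//! drop(pool); // pool dropped first...
//! drop(g);    // ...guard's value is dropped directly, safely
//! ```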

pub mod local;
pub mod sync;