// safer_ring/buffer/mod.rs

//! Pinned buffer management for educational and benchmarking purposes.
//!
//! # ⚠️ Important: PinnedBuffer is NOT for I/O Operations
//!
//! **This module provides [`PinnedBuffer<T>`] primarily for educational purposes and allocation benchmarking.**
//! The `PinnedBuffer` type is fundamentally limited by Rust's lifetime system and **cannot be used for
//! practical I/O** in patterns such as loops or concurrent operations.
//!
//! **For all I/O operations, use [`OwnedBuffer`](crate::OwnedBuffer) with the `*_owned` methods on [`Ring`](crate::Ring).**
//!
//! # Key Features
//!
//! - **Memory Pinning**: Guarantees stable memory addresses using [`Pin<Box<T>>`]
//! - **Generation Tracking**: Atomic counters for buffer lifecycle debugging
//! - **NUMA Awareness**: Platform-specific NUMA-aware allocation (Linux), useful for benchmarking
//! - **DMA Optimization**: Page-aligned allocation for optimal hardware performance, useful for benchmarking
//! - **Thread Safety**: Safe sharing and transfer between threads
//!
//! # Valid Usage Examples
//!
//! ```rust
//! use safer_ring::buffer::PinnedBuffer;
//!
//! // ✅ VALID: Allocation benchmarking
//! let standard_buffer = PinnedBuffer::with_capacity(4096);
//! let aligned_buffer = PinnedBuffer::with_capacity_aligned(4096);
//! let numa_buffer = PinnedBuffer::with_capacity_numa(4096, Some(0));
//!
//! // ✅ VALID: Single operation, then immediate drop
//! let data = b"Hello, io_uring!".to_vec();
//! let buffer = PinnedBuffer::from_vec(data);
//! assert_eq!(buffer.as_slice(), b"Hello, io_uring!");
//! ```
//!
//! # Invalid Usage (Will Not Compile)
//!
//! ```rust,compile_fail
//! use safer_ring::{Ring, buffer::PinnedBuffer};
//!
//! # async fn example() -> Result<(), Box<dyn std::error::Error>> {
//! let mut ring = Ring::new(32)?;
//! let mut buffer = PinnedBuffer::with_capacity(4096);
//!
//! // ❌ BROKEN: This will not compile due to lifetime constraints
//! for _ in 0..2 {
//!     let (_, buf) = ring.read(0, buffer.as_mut_slice())?.await?;
//!     buffer = buf;  // Error: ring is still borrowed
//! }
//! # Ok(())
//! # }
//! ```
//!
//! # The Technical Problem
//!
//! Methods like [`Ring::read()`](crate::Ring::read) return futures that hold mutable borrows of both the
//! [`Ring`](crate::Ring) and the buffer for their entire lifetime. Rust's borrow checker prevents
//! subsequent operations until the borrow is released, making loops and concurrent operations impossible.
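//!
//! # The Working Alternative
//!
//! The ownership-transfer model avoids the problem entirely: the buffer is moved
//! into the operation and handed back on completion, so no borrow of the ring
//! outlives the `await`. A rough sketch (illustrative only; the method name
//! `read_owned` stands in for whichever `*_owned` method applies — see
//! [`Ring`](crate::Ring) for the exact API):
//!
//! ```rust,ignore
//! use safer_ring::{Ring, OwnedBuffer};
//!
//! # async fn example() -> Result<(), Box<dyn std::error::Error>> {
//! let ring = Ring::new(32)?;
//! let mut buffer = OwnedBuffer::new(4096);
//!
//! // ✅ WORKS: ownership moves into the operation and back out,
//! // so the loop compiles.
//! for _ in 0..2 {
//!     let (_, buf) = ring.read_owned(0, buffer).await?;
//!     buffer = buf;
//! }
//! # Ok(())
//! # }
//! ```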

/// Memory allocation utilities for creating aligned and optimized buffers.
///
/// This module provides functions for allocating buffers with specific alignment
/// requirements, particularly page-aligned buffers for optimal DMA performance
/// with io_uring operations.
pub mod allocation;

/// Generation tracking utilities for buffer lifecycle management.
///
/// This module provides atomic counters for tracking buffer state changes,
/// helping with debugging buffer lifecycle issues and detecting potential
/// use-after-free scenarios in development builds.
pub mod generation;

/// NUMA-aware buffer allocation for multi-socket systems.
///
/// This module provides NUMA-aware memory allocation functions that attempt
/// to allocate buffers on specific NUMA nodes for optimal performance on
/// multi-socket systems. On Linux, it uses CPU affinity and sysfs to
/// determine the NUMA topology and allocate memory locally.
pub mod numa;

pub use allocation::*;
pub use generation::*;
pub use numa::*;

use std::pin::Pin;

/// A buffer that is pinned in memory, primarily for educational purposes.
///
/// # ⚠️ FUNDAMENTALLY LIMITED - DO NOT USE FOR I/O OPERATIONS
///
/// **This API is educational and not suitable for practical applications involving I/O.**
/// It suffers from fundamental lifetime constraints in Rust that make it impossible to use in loops
/// or for multiple concurrent operations on the same [`Ring`](crate::Ring) instance. It exists to
/// demonstrate the complexities that the [`OwnedBuffer`](crate::OwnedBuffer) model solves.
///
/// **For all applications, use [`OwnedBuffer`](crate::OwnedBuffer) with the `*_owned` methods on [`Ring`](crate::Ring).**
///
/// ## The Core Problem
///
/// The [`Ring`](crate::Ring) methods that accept `PinnedBuffer` (e.g., [`ring.read()`](crate::Ring::read)) return a `Future` that
/// holds a mutable borrow on both the [`Ring`](crate::Ring) and the buffer for its entire lifetime. This
/// makes it impossible for the borrow checker to allow a second operation in a loop or
/// concurrently, because the first borrow is never released.
///
/// ```rust,compile_fail
/// use safer_ring::{Ring, PinnedBuffer};
///
/// # async fn example() -> Result<(), Box<dyn std::error::Error>> {
/// let mut ring = Ring::new(32)?;
/// let mut buffer = PinnedBuffer::with_capacity(1024);
///
/// // This fails to compile due to lifetime constraints:
/// for _ in 0..2 {
///     let (_, buf) = ring.read(0, buffer.as_mut_slice())?.await?;
///     buffer = buf;  // Error: cannot use ring again while borrowed
/// }
/// # Ok(())
/// # }
/// ```
///
/// ## When is `PinnedBuffer` useful?
///
/// - Benchmarking allocation strategies (e.g., [`with_capacity_aligned`](Self::with_capacity_aligned), [`with_capacity_numa`](Self::with_capacity_numa)).
/// - Single, one-shot I/O operations where the buffer and ring are dropped
///   immediately afterwards.
/// - As a building block for more complex, `unsafe` abstractions.
///
/// For all other cases, and especially for application-level code, **use [`OwnedBuffer`](crate::OwnedBuffer)**.
///
/// # Memory Layout
///
/// The buffer uses heap allocation via [`Pin<Box<T>>`], which guarantees:
/// - Stable memory addresses (required for io_uring)
/// - Automatic cleanup when dropped
/// - Zero-copy semantics for I/O operations
///
/// # Generation Tracking
///
/// Each buffer includes a [`GenerationCounter`] for lifecycle tracking and debugging.
/// This helps identify buffer reuse patterns and can assist in detecting potential
/// use-after-free scenarios during development.
///
/// # Examples
///
/// Valid use cases (allocation benchmarking and one-shot setup):
///
/// ```rust
/// use safer_ring::buffer::PinnedBuffer;
/// use std::pin::Pin;
///
/// // Benchmarking different allocation strategies
/// let standard_buffer = PinnedBuffer::with_capacity(4096);
/// let aligned_buffer = PinnedBuffer::with_capacity_aligned(4096);
/// let numa_buffer = PinnedBuffer::with_capacity_numa(4096, Some(0));
///
/// // Single, one-shot operation (not practical for real apps)
/// let buffer = PinnedBuffer::new([1u8, 2, 3, 4]);
/// let pinned_ref: Pin<&[u8; 4]> = buffer.as_pin();
/// ```
pub struct PinnedBuffer<T: ?Sized> {
    /// Heap-allocated, pinned buffer data; guarantees a stable memory address.
    inner: Pin<Box<T>>,
    /// Generation counter for tracking buffer lifecycle and reuse.
    generation: GenerationCounter,
}

impl<T: ?Sized> PinnedBuffer<T> {
    /// Returns a pinned reference to the buffer data.
    ///
    /// This method provides safe access to the pinned data while maintaining
    /// the pinning guarantees required for io_uring operations.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    /// use std::pin::Pin;
    ///
    /// let buffer = PinnedBuffer::new([1u8, 2, 3, 4]);
    /// let pinned_ref: Pin<&[u8; 4]> = buffer.as_pin();
    /// assert_eq!(&*pinned_ref, &[1, 2, 3, 4]);
    /// ```
    #[inline]
    pub fn as_pin(&self) -> Pin<&T> {
        self.inner.as_ref()
    }

    /// Returns a mutable pinned reference to the buffer data.
    ///
    /// This method provides safe mutable access to the pinned data while
    /// maintaining the pinning guarantees. Essential for io_uring operations
    /// that write into the buffer.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    /// use std::pin::Pin;
    ///
    /// let mut buffer = PinnedBuffer::new([0u8; 4]);
    /// let mut pinned_ref: Pin<&mut [u8; 4]> = buffer.as_pin_mut();
    /// // `[u8; 4]` is `Unpin`, so it is safe to modify through the pinned reference.
    /// pinned_ref[0] = 42;
    /// assert_eq!(buffer.as_pin()[0], 42);
    /// ```
    #[inline]
    pub fn as_pin_mut(&mut self) -> Pin<&mut T> {
        self.inner.as_mut()
    }

    /// Returns the current generation of this buffer.
    ///
    /// The generation counter tracks buffer lifecycle events and can be used
    /// for debugging buffer reuse patterns and detecting potential issues.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// let mut buffer = PinnedBuffer::with_capacity(1024);
    /// let initial_gen = buffer.generation();
    ///
    /// buffer.mark_in_use();
    /// assert!(buffer.generation() > initial_gen);
    /// ```
    #[inline]
    pub fn generation(&self) -> u64 {
        self.generation.get()
    }

    /// Marks this buffer as in use and increments the generation.
    ///
    /// This method should be called when the buffer is being used for I/O
    /// operations. It helps track the buffer lifecycle for debugging purposes.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// let mut buffer = PinnedBuffer::with_capacity(1024);
    /// let gen_before = buffer.generation();
    ///
    /// buffer.mark_in_use();
    /// assert_eq!(buffer.generation(), gen_before + 1);
    /// ```
    pub fn mark_in_use(&mut self) {
        self.generation.increment();
    }

    /// Marks this buffer as available and increments the generation.
    ///
    /// This method should be called when the buffer is no longer being used
    /// for I/O operations and is available for reuse.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// let mut buffer = PinnedBuffer::with_capacity(1024);
    /// buffer.mark_in_use();
    /// let gen_after_use = buffer.generation();
    ///
    /// buffer.mark_available();
    /// assert_eq!(buffer.generation(), gen_after_use + 1);
    /// ```
    pub fn mark_available(&mut self) {
        self.generation.increment();
    }

    /// Checks if this buffer is available for use.
    ///
    /// Note: this is a placeholder implementation; a more sophisticated
    /// version might track actual usage state.
    pub fn is_available(&self) -> bool {
        // For now, the buffer is always considered available.
        true
    }

    /// Returns a raw pointer to the buffer data.
    ///
    /// # Safety
    ///
    /// The pointer is valid only while the buffer exists; it must not be
    /// dereferenced after the buffer is dropped.
    #[inline]
    pub fn as_ptr(&self) -> *const T {
        Pin::as_ref(&self.inner).get_ref() as *const T
    }

    /// Returns a mutable raw pointer to the buffer data.
    ///
    /// # Safety
    ///
    /// The pointer is valid only while the buffer exists; it must not be
    /// dereferenced after the buffer is dropped.
    #[inline]
    pub fn as_mut_ptr(&mut self) -> *mut T {
        // SAFETY: we only expose the address; the pointee is never moved out
        // of the pinned allocation through this method.
        unsafe { Pin::as_mut(&mut self.inner).get_unchecked_mut() as *mut T }
    }
}

impl<T> PinnedBuffer<T> {
    /// Creates a new pinned buffer from the given data.
    ///
    /// This constructor takes ownership of the provided data and pins it in memory,
    /// making it suitable for io_uring operations. The data is moved to the heap
    /// and its address becomes stable for the lifetime of the buffer.
    ///
    /// # Parameters
    ///
    /// * `data` - The data to pin in memory. Can be any type `T`.
    ///
    /// # Returns
    ///
    /// Returns a new `PinnedBuffer<T>` with the data pinned and the generation
    /// counter initialized to 0.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// // Pin an array
    /// let buffer = PinnedBuffer::new([1u8, 2, 3, 4]);
    /// assert_eq!(buffer.len(), 4);
    ///
    /// // Pin a custom struct
    /// #[derive(Debug, PartialEq)]
    /// struct Data { value: u32 }
    ///
    /// let buffer = PinnedBuffer::new(Data { value: 42 });
    /// assert_eq!(buffer.as_pin().value, 42);
    /// ```
    #[inline]
    pub fn new(data: T) -> Self {
        Self {
            inner: Box::pin(data),
            generation: GenerationCounter::new(),
        }
    }
}

impl PinnedBuffer<[u8]> {
    /// Creates a new zero-initialized pinned buffer with the specified size.
    ///
    /// This is the primary method for creating buffers for I/O operations.
    /// The buffer is heap-allocated, zero-initialized, and pinned for the stable
    /// memory addresses required by io_uring.
    ///
    /// # Parameters
    ///
    /// * `size` - The size of the buffer in bytes. Must be greater than 0 for meaningful use.
    ///
    /// # Returns
    ///
    /// Returns a `PinnedBuffer<[u8]>` containing a zero-initialized buffer of the
    /// specified size, ready for I/O operations.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// // Create a 4KB buffer for file I/O
    /// let buffer = PinnedBuffer::with_capacity(4096);
    /// assert_eq!(buffer.len(), 4096);
    /// assert!(buffer.as_slice().iter().all(|&b| b == 0)); // All zeros
    ///
    /// // Create a buffer for network I/O
    /// let net_buffer = PinnedBuffer::with_capacity(1500); // MTU size
    /// assert_eq!(net_buffer.len(), 1500);
    /// ```
    pub fn with_capacity(size: usize) -> Self {
        let data = vec![0u8; size].into_boxed_slice();
        Self {
            inner: Pin::from(data),
            generation: GenerationCounter::new(),
        }
    }

    /// Creates a new pinned buffer from a vector.
    ///
    /// This method takes ownership of a vector and converts it into a pinned
    /// buffer. The vector's data is preserved and the buffer can be used
    /// immediately for I/O operations.
    ///
    /// # Parameters
    ///
    /// * `vec` - The vector to convert into a pinned buffer.
    ///
    /// # Returns
    ///
    /// Returns a `PinnedBuffer<[u8]>` containing the vector's data, pinned
    /// and ready for I/O operations.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// let data = vec![1, 2, 3, 4, 5];
    /// let buffer = PinnedBuffer::from_vec(data);
    /// assert_eq!(buffer.as_slice(), &[1, 2, 3, 4, 5]);
    /// assert_eq!(buffer.len(), 5);
    /// ```
    #[inline]
    pub fn from_vec(vec: Vec<u8>) -> Self {
        Self::from_boxed_slice(vec.into_boxed_slice())
    }

    /// Creates a new pinned buffer from a boxed slice.
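    ///
    /// This is a zero-copy constructor: the boxed slice is pinned in place
    /// without reallocating or copying its contents.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// let boxed: Box<[u8]> = vec![9, 8, 7].into_boxed_slice();
    /// let buffer = PinnedBuffer::from_boxed_slice(boxed);
    /// assert_eq!(buffer.as_slice(), &[9, 8, 7]);
    /// ```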
    #[inline]
    pub fn from_boxed_slice(slice: Box<[u8]>) -> Self {
        Self {
            inner: Pin::from(slice),
            generation: GenerationCounter::new(),
        }
    }

    /// Creates a new pinned buffer by copying from a slice.
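    ///
    /// Unlike [`from_boxed_slice`](Self::from_boxed_slice), this copies the
    /// slice contents into a fresh heap allocation.
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// let buffer = PinnedBuffer::from_slice(b"abc");
    /// assert_eq!(buffer.as_slice(), b"abc");
    /// assert_eq!(buffer.len(), 3);
    /// ```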
    #[inline]
    pub fn from_slice(slice: &[u8]) -> Self {
        Self::from_vec(slice.to_vec())
    }

    /// Creates a new aligned pinned buffer with the specified size.
    ///
    /// This method creates a pinned buffer using page-aligned allocation (4096 bytes)
    /// for optimal DMA performance with io_uring operations. The alignment helps
    /// reduce memory copy overhead in the kernel.
    ///
    /// # Parameters
    ///
    /// * `size` - The size of the buffer in bytes. The buffer will be page-aligned
    ///   regardless of the size specified.
    ///
    /// # Returns
    ///
    /// Returns a `PinnedBuffer<[u8]>` with page-aligned, zero-initialized memory
    /// optimized for high-performance I/O operations.
    ///
    /// # Performance Notes
    ///
    /// Page-aligned buffers can provide significant performance benefits for:
    /// - Large sequential I/O operations
    /// - Direct memory access (DMA) operations
    /// - Kernel bypass operations with io_uring
    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// // Create an aligned buffer for high-performance I/O
    /// let buffer = PinnedBuffer::with_capacity_aligned(8192);
    /// assert_eq!(buffer.len(), 8192);
    /// assert!(buffer.as_slice().iter().all(|&b| b == 0)); // Zero-initialized
    ///
    /// // Even small sizes get page alignment benefits
    /// let small_aligned = PinnedBuffer::with_capacity_aligned(64);
    /// assert_eq!(small_aligned.len(), 64);
    /// ```
    pub fn with_capacity_aligned(size: usize) -> Self {
        let data = allocate_aligned_buffer(size);
        Self {
            inner: Pin::from(data),
            generation: GenerationCounter::new(),
        }
    }

    /// Creates a new NUMA-aware pinned buffer with the specified size.
    /// On Linux, attempts to allocate memory on the specified NUMA node.
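    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// // Request placement on NUMA node 0; `None` requests no specific node.
    /// let buffer = PinnedBuffer::with_capacity_numa(4096, Some(0));
    /// assert_eq!(buffer.len(), 4096);
    /// ```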
    #[cfg(target_os = "linux")]
    pub fn with_capacity_numa(size: usize, numa_node: Option<usize>) -> Self {
        let data = allocate_numa_buffer(size, numa_node);
        Self {
            inner: Pin::from(data),
            generation: GenerationCounter::new(),
        }
    }

    /// Creates a new NUMA-aware pinned buffer (stub implementation for non-Linux platforms).
    #[cfg(not(target_os = "linux"))]
    pub fn with_capacity_numa(size: usize, _numa_node: Option<usize>) -> Self {
        // On non-Linux platforms, fall back to regular aligned allocation.
        Self::with_capacity_aligned(size)
    }

    /// Returns a mutable slice reference with pinning guarantees.
    #[inline]
    pub fn as_mut_slice(&mut self) -> Pin<&mut [u8]> {
        self.inner.as_mut()
    }

    /// Returns an immutable slice reference.
    #[inline]
    pub fn as_slice(&self) -> &[u8] {
        &self.inner
    }

    /// Returns the length of the buffer in bytes.
    #[inline]
    pub fn len(&self) -> usize {
        self.inner.len()
    }

    /// Checks if the buffer is empty.
    #[inline]
    pub fn is_empty(&self) -> bool {
        self.inner.is_empty()
    }
}

impl<const N: usize> PinnedBuffer<[u8; N]> {
    /// Creates a new pinned buffer from a fixed-size array.
    #[inline]
    pub fn from_array(array: [u8; N]) -> Self {
        Self::new(array)
    }

    /// Creates a new zero-initialized pinned buffer.
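    ///
    /// # Examples
    ///
    /// ```rust
    /// use safer_ring::buffer::PinnedBuffer;
    ///
    /// let buffer: PinnedBuffer<[u8; 16]> = PinnedBuffer::zeroed();
    /// assert_eq!(buffer.len(), 16);
    /// assert!(buffer.as_slice().iter().all(|&b| b == 0));
    /// ```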
    #[inline]
    pub fn zeroed() -> Self {
        Self::new([0u8; N])
    }

    /// Returns an immutable slice reference to the array.
    #[inline]
    pub fn as_slice(&self) -> &[u8] {
        &*self.inner
    }

    /// Returns a mutable slice reference with pinning guarantees.
    #[inline]
    pub fn as_mut_slice(&mut self) -> Pin<&mut [u8]> {
        // SAFETY: the slice points into the pinned array and inherits its
        // pinning guarantee; the array is never moved out of its allocation.
        unsafe {
            let array_ptr = self.inner.as_mut().get_unchecked_mut().as_mut_ptr();
            let slice = std::slice::from_raw_parts_mut(array_ptr, N);
            Pin::new_unchecked(slice)
        }
    }

    /// Returns the length of the buffer.
    #[inline]
    pub const fn len(&self) -> usize {
        N
    }

    /// Checks if the buffer is empty.
    #[inline]
    pub const fn is_empty(&self) -> bool {
        N == 0
    }
}

// SAFETY: PinnedBuffer can be sent between threads when T is Send;
// the generation counter is atomic.
unsafe impl<T: Send + ?Sized> Send for PinnedBuffer<T> {}

// SAFETY: PinnedBuffer can be shared between threads when T is Sync;
// the generation counter is atomic.
unsafe impl<T: Sync + ?Sized> Sync for PinnedBuffer<T> {}