thread_share/
atomic.rs

//! # Atomic Module - ArcThreadShare<T>
//!
//! This module provides `ArcThreadShare<T>`, a high-performance structure for
//! zero-copy data sharing between threads using atomic operations.
//!
//! ## ⚠️ Important Warning
//!
//! **`ArcThreadShare<T>` has significant limitations and should be used with caution!**
//!
//! ## Overview
//!
//! `ArcThreadShare<T>` uses `Arc<AtomicPtr<T>>` internally to provide zero-copy
//! data sharing without locks. While this can offer high performance, it comes
//! with important trade-offs.
//!
//! ## Key Features
//!
//! - **Zero-Copy Operations**: No data cloning during access
//! - **Atomic Updates**: Uses atomic pointer operations
//! - **High Performance**: Potentially faster than lock-based approaches
//! - **Memory Efficiency**: Single copy of data shared across threads
//!
//! ## ⚠️ Critical Limitations
//!
//! ### 1. **Non-Atomic Complex Operations**
//! ```rust
//! use thread_share::ArcThreadShare;
//!
//! let arc_share = ArcThreadShare::new(0);
//!
//! // ❌ This is NOT atomic and can cause race conditions
//! arc_share.update(|x| *x = *x + 1);
//!
//! // ✅ Use the atomic increment method instead
//! arc_share.increment();
//! ```
//!
//! **Problem**: The `update` method with complex operations like `+=` is not atomic.
//! Between reading the value, modifying it, and writing it back, other threads can interfere.
//!
//! ### 2. **High Contention Performance Issues**
//! ```rust
//! use thread_share::ArcThreadShare;
//!
//! let arc_share = ArcThreadShare::new(0);
//!
//! // ❌ High contention can cause significant performance degradation
//! for _ in 0..10000 {
//!     arc_share.increment(); // May lose many operations under high contention
//! }
//! ```
//!
//! **Problem**: Under high contention (many threads updating simultaneously), `AtomicPtr`
//! operations can lose updates due to:
//! - Box allocation/deallocation overhead
//! - CAS (Compare-And-Swap) failures requiring retries
//! - Memory pressure from frequent allocations
//!
//! **Expected Behavior**: In high-contention scenarios, you may see only 20-30% of
//! expected operations complete successfully.
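//!
//! A minimal sketch (illustrative only, not part of the public API) of how lost
//! updates might be observed under contention; the actual loss rate depends on
//! thread count and hardware:
//!
//! ```rust,no_run
//! use std::thread;
//! use thread_share::ArcThreadShare;
//!
//! let counter = ArcThreadShare::new(0u64);
//! let handles: Vec<_> = (0..8)
//!     .map(|_| {
//!         let c = counter.clone();
//!         thread::spawn(move || {
//!             for _ in 0..10_000 {
//!                 c.increment();
//!             }
//!         })
//!     })
//!     .collect();
//! for h in handles {
//!     h.join().unwrap();
//! }
//! // Under heavy contention the final value may fall short of 80_000,
//! // per the limitations described above.
//! println!("completed: {} / 80000", counter.get());
//! ```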
//!
//! ### 3. **Memory Allocation Overhead**
//! ```rust
//! use thread_share::ArcThreadShare;
//!
//! let arc_share = ArcThreadShare::new(0);
//!
//! // Each increment operation involves:
//! // 1. Allocating new Box<T>
//! // 2. Converting to raw pointer
//! // 3. Atomic pointer swap
//! // 4. Deallocating old Box<T>
//! arc_share.increment();
//! ```
//!
//! **Problem**: Every update operation creates a new `Box<T>` and deallocates the old one,
//! which can be expensive for large data types.
//!
//! ## When to Use ArcThreadShare<T>
//!
//! ### ✅ Good Use Cases
//! - **Low-contention scenarios** (few threads, infrequent updates)
//! - **Performance-critical applications** where you understand the limitations
//! - **Simple atomic operations** using built-in methods (`increment()`, `add()`)
//! - **Read-heavy workloads** with occasional writes
//!
//! ### ❌ Avoid When
//! - **High-frequency updates** (>1000 ops/second per thread)
//! - **Critical data integrity** requirements
//! - **Predictable performance** needs
//! - **Large data structures** (due to allocation overhead)
//! - **Multi-threaded counters** with strict accuracy requirements
//!
//! ## Example Usage
//!
//! ### Basic Operations
//! ```rust
//! use thread_share::ArcThreadShare;
//!
//! let counter = ArcThreadShare::new(0);
//!
//! // Use atomic methods for safety
//! counter.increment();
//! counter.add(5);
//!
//! assert_eq!(counter.get(), 6);
//! ```
//!
//! ### From ThreadShare
//! ```rust
//! use thread_share::{share, ArcThreadShare};
//!
//! let data = share!(String::from("Hello"));
//! let arc_data = data.as_arc();
//! let arc_share = ArcThreadShare::from_arc(arc_data);
//!
//! // Safe atomic operations
//! arc_share.update(|s| s.push_str(" World"));
//! ```
//!
//! ## Performance Characteristics
//!
//! - **Low Contention**: Excellent performance, minimal overhead
//! - **Medium Contention**: Good performance with some lost operations
//! - **High Contention**: Poor performance, many lost operations
//! - **Memory Usage**: Higher due to Box allocation/deallocation
//!
//! ## Best Practices
//!
//! 1. **Always use atomic methods** (`increment()`, `add()`) instead of complex `update()` operations
//! 2. **Test with realistic contention levels** before production use
//! 3. **Consider `ThreadShare<T>`** for critical applications
//! 4. **Monitor performance** under expected load conditions
//! 5. **Use for simple operations** only (increment, add, simple updates)
//!
//! ## Alternatives
//!
//! ### For High-Frequency Updates
//! ```rust
//! use thread_share::share;
//!
//! // Use ThreadShare with batching
//! let share = share!(0);
//! let clone = share.clone();
//!
//! clone.update(|x| {
//!     for _ in 0..100 {
//!         *x = *x + 1;
//!     }
//! });
//! ```
//!
//! ### For Critical Data Integrity
//! ```rust
//! use thread_share::share;
//!
//! // Use ThreadShare for guaranteed safety
//! let share = share!(vec![1, 2, 3]);
//! let clone = share.clone();
//!
//! // All operations are guaranteed to succeed
//! clone.update(|data| {
//!     // Critical modifications
//! });
//! ```
//!
//! ### For Safe Zero-Copy
//! ```rust
//! use thread_share::{share, ArcThreadShareLocked};
//!
//! // Use ArcThreadShareLocked for safe zero-copy
//! let share = share!(vec![1, 2, 3]);
//! let arc_data = share.as_arc_locked();
//! let locked_share = ArcThreadShareLocked::from_arc(arc_data);
//!
//! // Safe zero-copy with guaranteed thread safety
//! locked_share.update(|data| {
//!     // Safe modifications
//! });
//! ```

use std::sync::{
    atomic::{AtomicPtr, Ordering},
    Arc,
};

/// Helper structure for working with Arc<AtomicPtr<T>> directly (without locks!)
///
/// **⚠️ WARNING: This structure has significant limitations and should be used with caution!**
///
/// ## Overview
///
/// `ArcThreadShare<T>` provides zero-copy data sharing between threads using atomic
/// pointer operations. While this can offer high performance, it comes with important
/// trade-offs that developers must understand.
///
/// ## Key Features
///
/// - **Zero-Copy Operations**: No data cloning during access
/// - **Atomic Updates**: Uses atomic pointer operations
/// - **High Performance**: Potentially faster than lock-based approaches
/// - **Memory Efficiency**: Single copy of data shared across threads
///
/// ## ⚠️ Critical Limitations
///
/// ### 1. **Non-Atomic Complex Operations**
/// The `update` method is not atomic for read-modify-write operations such as `+=`;
/// use the built-in atomic methods (`increment()`, `add()`) instead.
///
/// ### 2. **High Contention Performance Issues**
/// Under high contention, many operations may be lost due to:
/// - Box allocation/deallocation overhead
/// - CAS failures requiring retries
/// - Memory pressure from frequent allocations
///
/// ### 3. **Memory Allocation Overhead**
/// Every update operation involves Box allocation and deallocation.
///
/// ## When to Use
///
/// - **Low-contention scenarios** (few threads, infrequent updates)
/// - **Performance-critical applications** where you understand the limitations
/// - **Simple atomic operations** using built-in methods
/// - **Read-heavy workloads** with occasional writes
///
/// ## When to Avoid
///
/// - **High-frequency updates** (>1000 ops/second per thread)
/// - **Critical data integrity** requirements
/// - **Predictable performance** needs
/// - **Large data structures**
///
/// ## Example
///
/// ```rust
/// use thread_share::ArcThreadShare;
///
/// let counter = ArcThreadShare::new(0);
///
/// // Use atomic methods for safety
/// counter.increment();
/// counter.add(5);
///
/// assert_eq!(counter.get(), 6);
/// ```
pub struct ArcThreadShare<T> {
    pub data: Arc<AtomicPtr<T>>,
}

// Manually implement Send and Sync for ArcThreadShare; the caller is responsible
// for ensuring that `T` itself is safe to send and share across threads.
unsafe impl<T> Send for ArcThreadShare<T> {}
unsafe impl<T> Sync for ArcThreadShare<T> {}

impl<T> Clone for ArcThreadShare<T> {
    fn clone(&self) -> Self {
        Self {
            data: Arc::clone(&self.data),
        }
    }
}

impl<T> ArcThreadShare<T> {
    /// Creates from Arc<AtomicPtr<T>>
    ///
    /// This method creates an `ArcThreadShare<T>` from an existing `Arc<AtomicPtr<T>>`.
    /// Useful when you already have atomic pointer data from other sources.
    ///
    /// ## Arguments
    ///
    /// * `arc` - An `Arc<AtomicPtr<T>>` containing the data to share
    ///
    /// ## Returns
    ///
    /// A new `ArcThreadShare<T>` instance sharing the same data.
    ///
    /// ## Example
    ///
    /// ```rust
    /// use thread_share::{share, ArcThreadShare};
    ///
    /// let data = share!(String::from("Hello"));
    /// let arc_data = data.as_arc();
    /// let arc_share = ArcThreadShare::from_arc(arc_data);
    ///
    /// // Now you can use atomic operations
    /// arc_share.update(|s| s.push_str(" World"));
    /// ```
    pub fn from_arc(arc: Arc<AtomicPtr<T>>) -> Self {
        Self { data: arc }
    }

    /// Creates a new ArcThreadShare with data
    ///
    /// This method creates a new `ArcThreadShare<T>` instance with the provided data.
    /// The data is boxed and converted to an atomic pointer for thread-safe sharing.
    ///
    /// ## Arguments
    ///
    /// * `data` - The initial data to share between threads
    ///
    /// ## Requirements
    ///
    /// The type `T` must implement the `Clone` trait.
    ///
    /// ## Returns
    ///
    /// A new `ArcThreadShare<T>` instance containing the data.
    ///
    /// ## Example
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let counter = ArcThreadShare::new(0);
    /// let message = ArcThreadShare::new(String::from("Hello"));
    /// let data = ArcThreadShare::new(vec![1, 2, 3]);
    /// ```
    pub fn new(data: T) -> Self
    where
        T: Clone,
    {
        let boxed = Box::new(data);
        let ptr = Box::into_raw(boxed);
        let atomic = Arc::new(AtomicPtr::new(ptr));
        Self { data: atomic }
    }

    /// Gets a copy of data
    ///
    /// This method retrieves a copy of the current data. The operation is safe
    /// but involves cloning the data.
    ///
    /// ## Requirements
    ///
    /// The type `T` must implement the `Clone` trait.
    ///
    /// ## Returns
    ///
    /// A copy of the current data.
    ///
    /// ## Example
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let counter = ArcThreadShare::new(42);
    /// let value = counter.get();
    /// assert_eq!(value, 42);
    /// ```
    pub fn get(&self) -> T
    where
        T: Clone,
    {
        let ptr = self.data.load(Ordering::Acquire);
        if ptr.is_null() {
            panic!("Attempted to read from null pointer");
        }
        unsafe { (*ptr).clone() }
    }

    /// Sets data atomically
    ///
    /// This method atomically replaces the current data with new data.
    /// The old data is automatically deallocated.
    ///
    /// ## Arguments
    ///
    /// * `new_data` - The new data to set
    ///
    /// ## Example
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let counter = ArcThreadShare::new(0);
    /// counter.set(100);
    /// assert_eq!(counter.get(), 100);
    /// ```
    pub fn set(&self, new_data: T) {
        let new_boxed = Box::new(new_data);
        let new_ptr = Box::into_raw(new_boxed);

        let old_ptr = self.data.swap(new_ptr, Ordering::AcqRel);

        // Free old data
        if !old_ptr.is_null() {
            unsafe {
                drop(Box::from_raw(old_ptr));
            }
        }
    }

    /// Updates data (⚠️ NOT atomic for complex operations!)
    ///
    /// **⚠️ WARNING: This method is NOT atomic for complex operations!**
    ///
    /// For simple operations like `+= 1`, use the atomic methods `increment()` or `add()`
    /// instead. This method can cause race conditions under high contention.
    ///
    /// ## Arguments
    ///
    /// * `f` - Closure that receives a mutable reference to the data
    ///
    /// ## Example
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let counter = ArcThreadShare::new(0);
    ///
    /// // ❌ NOT atomic - can cause race conditions
    /// counter.update(|x| *x += 1);
    ///
    /// // ✅ Use atomic methods instead
    /// counter.increment();
    /// ```
    pub fn update<F>(&self, f: F)
    where
        F: FnOnce(&mut T),
    {
        let ptr = self.data.load(Ordering::Acquire);
        if !ptr.is_null() {
            unsafe {
                f(&mut *ptr);
            }
        }
    }

    /// Atomically increments numeric values (for types that support it)
    ///
    /// This method provides atomic increment operations for numeric types.
    /// It uses a compare-exchange loop to ensure atomicity.
    ///
    /// ## Requirements
    ///
    /// The type `T` must implement:
    /// - `Copy` - for efficient copying
    /// - `std::ops::Add<Output = T>` - for addition operations
    /// - `std::ops::AddAssign` - for compound assignment
    /// - `From<u8>` - for creating the value 1
    /// - `'static` - for lifetime requirements
    ///
    /// ## Example
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let counter = ArcThreadShare::new(0);
    ///
    /// // Atomic increment
    /// counter.increment();
    /// assert_eq!(counter.get(), 1);
    ///
    /// counter.increment();
    /// assert_eq!(counter.get(), 2);
    /// ```
    pub fn increment(&self)
    where
        T: Copy + std::ops::Add<Output = T> + std::ops::AddAssign + From<u8> + 'static,
    {
        loop {
            let ptr = self.data.load(Ordering::Acquire);
            if ptr.is_null() {
                break;
            }

            let current_value = unsafe { *ptr };
            let new_value = current_value + T::from(1u8);

            // Try to atomically update the pointer with new data
            let new_boxed = Box::new(new_value);
            let new_ptr = Box::into_raw(new_boxed);

            if self
                .data
                .compare_exchange(ptr, new_ptr, Ordering::AcqRel, Ordering::Acquire)
                .is_ok()
            {
                // Successfully updated, free old data
                unsafe {
                    drop(Box::from_raw(ptr));
                }
                break;
            } else {
                // Failed to update, free new data and retry
                unsafe {
                    drop(Box::from_raw(new_ptr));
                }
            }
        }
    }

    /// Atomically adds a value (for types that support it)
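    ///
    /// This method uses the same compare-exchange loop as `increment()` to add
    /// `value` to the current data.
    ///
    /// ## Example
    ///
    /// A minimal usage sketch:
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let counter = ArcThreadShare::new(10);
    /// counter.add(5);
    /// assert_eq!(counter.get(), 15);
    /// ```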
    pub fn add(&self, value: T)
    where
        T: Copy + std::ops::Add<Output = T> + std::ops::AddAssign + 'static,
    {
        loop {
            let ptr = self.data.load(Ordering::Acquire);
            if ptr.is_null() {
                break;
            }

            let current_value = unsafe { *ptr };
            let new_value = current_value + value;

            // Try to atomically update the pointer with new data
            let new_boxed = Box::new(new_value);
            let new_ptr = Box::into_raw(new_boxed);

            if self
                .data
                .compare_exchange(ptr, new_ptr, Ordering::AcqRel, Ordering::Acquire)
                .is_ok()
            {
                // Successfully updated, free old data
                unsafe {
                    drop(Box::from_raw(ptr));
                }
                break;
            } else {
                // Failed to update, free new data and retry
                unsafe {
                    drop(Box::from_raw(new_ptr));
                }
            }
        }
    }

    /// Reads data
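    ///
    /// The closure receives a shared reference to the current data and its return
    /// value is passed back to the caller. Panics if the internal pointer is null.
    ///
    /// ## Example
    ///
    /// A minimal usage sketch:
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let counter = ArcThreadShare::new(21);
    /// let doubled = counter.read(|x| *x * 2);
    /// assert_eq!(doubled, 42);
    /// ```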
    pub fn read<F, R>(&self, f: F) -> R
    where
        F: FnOnce(&T) -> R,
    {
        let ptr = self.data.load(Ordering::Acquire);
        if !ptr.is_null() {
            unsafe { f(&*ptr) }
        } else {
            panic!("Attempted to read from null pointer");
        }
    }

    /// Writes data
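    ///
    /// The closure receives a mutable reference to the current data and its return
    /// value is passed back to the caller. Panics if the internal pointer is null.
    ///
    /// ## Example
    ///
    /// A minimal usage sketch:
    ///
    /// ```rust
    /// use thread_share::ArcThreadShare;
    ///
    /// let message = ArcThreadShare::new(String::from("Hello"));
    /// let len = message.write(|s| {
    ///     s.push_str(" World");
    ///     s.len()
    /// });
    /// assert_eq!(len, 11);
    /// assert_eq!(message.get(), "Hello World");
    /// ```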
    pub fn write<F, R>(&self, f: F) -> R
    where
        F: FnOnce(&mut T) -> R,
    {
        let ptr = self.data.load(Ordering::Acquire);
        if !ptr.is_null() {
            unsafe { f(&mut *ptr) }
        } else {
            panic!("Attempted to write to null pointer");
        }
    }
}

/// Helper structure for working with Arc<Mutex<T>> directly
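///
/// Unlike `ArcThreadShare<T>`, every operation goes through a `Mutex`, so updates
/// are never lost, at the cost of locking.
///
/// ## Example
///
/// A minimal usage sketch, assuming `ArcSimpleShare` is re-exported at the crate
/// root like `ArcThreadShare`:
///
/// ```rust,ignore
/// use std::sync::{Arc, Mutex};
/// use thread_share::ArcSimpleShare;
///
/// let shared = Arc::new(Mutex::new(0));
/// let simple = ArcSimpleShare::from_arc(shared);
///
/// simple.set(10);
/// simple.update(|x| *x += 5);
/// assert_eq!(simple.get(), 15);
/// ```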
pub struct ArcSimpleShare<T> {
    pub data: Arc<std::sync::Mutex<T>>,
}

// Manually implement Send and Sync for ArcSimpleShare; the caller is responsible
// for ensuring that `T` itself is safe to send across threads.
unsafe impl<T> Send for ArcSimpleShare<T> {}
unsafe impl<T> Sync for ArcSimpleShare<T> {}

impl<T> ArcSimpleShare<T> {
    /// Creates from Arc<Mutex<T>>
    pub fn from_arc(arc: Arc<std::sync::Mutex<T>>) -> Self {
        Self { data: arc }
    }

    /// Gets data
    pub fn get(&self) -> T
    where
        T: Clone,
    {
        self.data.lock().unwrap().clone()
    }

    /// Sets data
    pub fn set(&self, new_data: T) {
        let mut data = self.data.lock().unwrap();
        *data = new_data;
    }

    /// Updates data
    pub fn update<F>(&self, f: F)
    where
        F: FnOnce(&mut T),
    {
        let mut data = self.data.lock().unwrap();
        f(&mut data);
    }
}