§Atomic Module - ArcThreadShare
This module provides ArcThreadShare<T>, a high-performance structure for
zero-copy data sharing between threads using atomic operations.
§⚠️ Important Warning
ArcThreadShare<T> has significant limitations and should be used with caution!
§Overview
ArcThreadShare<T> uses Arc<AtomicPtr<T>> internally to provide zero-copy
data sharing without locks. While this can offer high performance, it comes
with important trade-offs.
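Conceptually, the internal layout looks like the sketch below. This is illustrative only; the crate's actual definition may contain additional fields, but it conveys the idea of a shared atomic pointer to heap-allocated data.
use std::sync::Arc;
use std::sync::atomic::AtomicPtr;
// Illustrative shape only, not the crate's actual definition:
// a shared atomic pointer to heap-allocated data of type T.
struct ConceptualArcThreadShare<T> {
    inner: Arc<AtomicPtr<T>>,
}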
§Key Features
- Zero-Copy Operations: No data cloning during access
- Atomic Updates: Uses atomic pointer operations
- High Performance: Potentially faster than lock-based approaches
- Memory Efficiency: Single copy of data shared across threads
§⚠️ Critical Limitations
§1. Non-Atomic Complex Operations
use thread_share::ArcThreadShare;
let arc_share = ArcThreadShare::new(0);
// ❌ This is NOT atomic and can cause race conditions
arc_share.update(|x| *x = *x + 1);
// ✅ Use the atomic increment method instead
arc_share.increment();
Problem: The update method with complex operations like += is not atomic.
Between reading the value, modifying it, and writing it back, other threads can interfere.
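To see why, the sketch below mimics the load/modify/store pattern on a plain std::sync::atomic::AtomicPtr. It is illustrative only, not the crate's actual implementation: the window between the load and the store is where a concurrent update can be silently overwritten.
use std::sync::atomic::{AtomicPtr, Ordering};
// Illustrative only: a read-modify-write done as separate load and store.
fn non_atomic_plus_one(shared: &AtomicPtr<u64>) {
    let old = shared.load(Ordering::Acquire);       // read current pointer
    let current = unsafe { *old };                  // read current value
    // <-- another thread can publish a new value here; it will be lost
    let new = Box::into_raw(Box::new(current + 1)); // allocate updated value
    shared.store(new, Ordering::Release);           // overwrite, discarding the concurrent update
    // (the old allocation also still needs to be reclaimed safely)
}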
§2. High Contention Performance Issues
use thread_share::ArcThreadShare;
let arc_share = ArcThreadShare::new(0);
// ❌ High contention can cause significant performance degradation
for _ in 0..10000 {
arc_share.increment(); // May lose many operations under high contention
}
Problem: Under high contention (many threads updating simultaneously), AtomicPtr
operations can lose updates due to:
- Box allocation/deallocation overhead
- CAS (Compare-And-Swap) failures requiring retries
- Memory pressure from frequent allocations
Expected Behavior: In high-contention scenarios, you may see only 20-30% of expected operations complete successfully.
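The sketch below, again written against a plain AtomicPtr rather than the crate's code, shows why a compare-and-swap update pays an allocation on every attempt and must retry whenever another thread wins the race; an implementation that gives up instead of looping drops the update entirely.
use std::sync::atomic::{AtomicPtr, Ordering};
// Illustrative CAS-based increment: every failed attempt costs an
// allocation, a deallocation, and another trip around the loop.
fn cas_plus_one(shared: &AtomicPtr<u64>) {
    loop {
        let old = shared.load(Ordering::Acquire);
        let new = Box::into_raw(Box::new(unsafe { *old } + 1));
        match shared.compare_exchange(old, new, Ordering::AcqRel, Ordering::Acquire) {
            Ok(_prev) => break, // success (_prev still needs safe reclamation)
            Err(_) => unsafe { drop(Box::from_raw(new)) }, // lost the race: free and retry
        }
    }
}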
§3. Memory Allocation Overhead
use thread_share::ArcThreadShare;
let arc_share = ArcThreadShare::new(0);
// Each increment operation involves:
// 1. Allocating new Box<T>
// 2. Converting to raw pointer
// 3. Atomic pointer swap
// 4. Deallocating old Box<T>
arc_share.increment();
Problem: Every update operation creates a new Box<T> and deallocates the old one,
which can be expensive for large data types.
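Expressed with a plain AtomicPtr (illustrative only, not the crate's code), the four steps above look roughly like this; for a large value type, steps 1 and 4 dominate the cost of each update.
use std::sync::atomic::{AtomicPtr, Ordering};
// Illustrative cost breakdown of a single update for a large value type.
fn replace_value(shared: &AtomicPtr<Vec<u8>>, new_value: Vec<u8>) {
    let new = Box::into_raw(Box::new(new_value)); // 1-2: allocate Box<T>, convert to raw pointer
    let old = shared.swap(new, Ordering::AcqRel); // 3: atomic pointer swap
    unsafe { drop(Box::from_raw(old)) };          // 4: deallocate the old Box<T>
}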
§When to Use ArcThreadShare
§✅ Good Use Cases
- Low-contention scenarios (few threads, infrequent updates)
- Performance-critical applications where you understand the limitations
- Simple atomic operations using built-in methods (increment(), add())
- Read-heavy workloads with occasional writes
§❌ Avoid When
- High-frequency updates (>1000 ops/second per thread)
- Critical data integrity requirements
- Predictable performance needs
- Large data structures (due to allocation overhead)
- Multi-threaded counters with strict accuracy requirements
§Example Usage
§Basic Operations
use thread_share::ArcThreadShare;
let counter = ArcThreadShare::new(0);
// Use atomic methods for safety
counter.increment();
counter.add(5);
assert_eq!(counter.get(), 6);
§From ThreadShare
use thread_share::{share, ArcThreadShare};
let data = share!(String::from("Hello"));
let arc_data = data.as_arc();
let arc_share = ArcThreadShare::from_arc(arc_data);
// Safe atomic operations
arc_share.update(|s| s.push_str(" World"));
§Performance Characteristics
- Low Contention: Excellent performance, minimal overhead
- Medium Contention: Good performance with some lost operations
- High Contention: Poor performance, many lost operations
- Memory Usage: Higher due to Box allocation/deallocation
§Best Practices
- Always use atomic methods (increment(), add()) instead of complex update() operations
- Test with realistic contention levels before production use
- Consider ThreadShare<T> for critical applications
- Monitor performance under expected load conditions
- Use for simple operations only (increment, add, simple updates)
§Alternatives
§For High-Frequency Updates
use thread_share::share;
// Use ThreadShare with batching
let share = share!(0);
let clone = share.clone();
clone.update(|x| {
for _ in 0..100 {
*x = *x + 1;
}
});
§For Critical Data Integrity
use thread_share::share;
// Use ThreadShare for guaranteed safety
let share = share!(vec![1, 2, 3]);
let clone = share.clone();
// All operations are guaranteed to succeed
clone.update(|data| {
// Critical modifications
});
§For Safe Zero-Copy
use thread_share::{share, ArcThreadShareLocked};
// Use ArcThreadShareLocked for safe zero-copy
let share = share!(vec![1, 2, 3]);
let arc_data = share.as_arc_locked();
let locked_share = ArcThreadShareLocked::from_arc(arc_data);
// Safe zero-copy with guaranteed thread safety
locked_share.update(|data| {
// Safe modifications
});
Structs§
- ArcSimpleShare - Helper structure for working with Arc<Mutex<T>> directly
- ArcThreadShare - Helper structure for working with Arc<AtomicPtr<T>> directly (without locks!)