1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
//! Lock-free data structures for high-performance concurrent operations
//!
//! # Overview
//! This module provides lock-free implementations of common data structures
//! used in the event store's hot paths. By eliminating lock contention,
//! these structures provide:
//!
//! - **10-100x lower latency** compared to mutex-based alternatives
//! - **Predictable performance** under high concurrent load
//! - **Better scalability** with increasing thread count
//!
//! # Components
//!
//! ## LockFreeEventQueue
//! Multi-producer, multi-consumer queue for event ingestion pipeline.
//! - Eliminates RwLock contention in hot path
//! - Provides backpressure handling when full
//! - ~10-20ns push/pop operations
//!
//! ## ShardedEventQueue
//! Sharded queue for maximum throughput under high contention.
//! - 3-5x higher throughput than single queue
//! - Better cache locality through sharding
//! - Batch operations for reduced overhead
//!
//! ## LockFreeMetrics
//! Atomic metrics collector for monitoring.
//! - Zero contention on metric updates
//! - ~5-10ns per metric recording
//! - Suitable for high-frequency operations
//!
//! # When to Use
//!
//! Use lock-free structures when:
//! - Operation frequency > 100K/sec per thread
//! - Multiple threads accessing same structure
//! - Latency predictability is critical
//! - Lock contention is observed in profiling
//!
//! Use regular locks when:
//! - Operation frequency < 10K/sec
//! - Simple single-threaded access
//! - Complex state updates needed
//! - Atomic cross-field invariants required
//!
//! # Performance Notes
//!
//! Lock-free operations use atomic CPU instructions (e.g., CAS - Compare-And-Swap).
//! While fast, they can still cause cache-line bouncing under extreme contention.
//! For best performance:
//!
//! - Use separate instances per logical partition/shard
//! - Batch operations when possible
//! - Consider queue capacity vs. memory trade-off
//!
//! # Example
//!
//! ```ignore
//! use crate::infrastructure::persistence::lock_free::{LockFreeEventQueue, ShardedEventQueue, LockFreeMetrics};
//!
//! // Simple queue for moderate load
//! let queue = LockFreeEventQueue::new(10000);
//!
//! // Sharded queue for high contention scenarios
//! let sharded = ShardedEventQueue::new(100000);
//!
//! let metrics = LockFreeMetrics::new();
//!
//! // Producer thread
//! queue.try_push(event)?;
//! // Or use batch operations for higher throughput
//! sharded.try_push_batch(events);
//! metrics.record_ingest();
//!
//! // Consumer thread
//! if let Some(event) = queue.try_pop() {
//! process_event(event)?;
//! metrics.record_query(latency);
//! }
//! // Or batch pop for efficiency
//! let batch = sharded.try_pop_batch(100);
//! ```
pub use ;
pub use LockFreeEventQueue;
pub use ;