// allsource-core 0.19.1
// High-performance event store core built in Rust
//! Lock-free data structures for high-performance concurrent operations
//!
//! # Overview
//! This module provides lock-free implementations of common data structures
//! used in the event store's hot paths. By eliminating lock contention,
//! these structures provide:
//!
//! - **10-100x lower latency** compared to mutex-based alternatives
//! - **Predictable performance** under high concurrent load
//! - **Better scalability** with increasing thread count
//!
//! # Components
//!
//! ## LockFreeEventQueue
//! Multi-producer, multi-consumer queue for the event ingestion pipeline.
//! - Eliminates RwLock contention in hot path
//! - Provides backpressure handling when full
//! - ~10-20ns push/pop operations
//!
//! ## ShardedEventQueue
//! Sharded queue for maximum throughput under high contention.
//! - 3-5x higher throughput than single queue
//! - Better cache locality through sharding
//! - Batch operations for reduced overhead
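//!
//! The contention win comes from spreading producers across independent shards.
//! A minimal sketch of round-robin shard selection (the shard count and names
//! here are illustrative, not the crate's actual API):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! const NUM_SHARDS: usize = 8;
//! static NEXT_SHARD: AtomicUsize = AtomicUsize::new(0);
//!
//! // Each producer grabs a shard index with one relaxed atomic increment,
//! // so concurrent pushes mostly land on different shards (and different
//! // cache lines) instead of all hammering a single queue head.
//! fn pick_shard() -> usize {
//!     NEXT_SHARD.fetch_add(1, Ordering::Relaxed) % NUM_SHARDS
//! }
//!
//! let a = pick_shard();
//! let b = pick_shard();
//! assert_ne!(a, b); // consecutive picks rotate to a different shard
//! ```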
//!
//! ## LockFreeMetrics
//! Atomic metrics collector for monitoring.
//! - Zero contention on metric updates
//! - ~5-10ns per metric recording
//! - Suitable for high-frequency operations
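//!
//! Underneath, this is the plain atomic-counter pattern; a minimal sketch
//! (the field name is illustrative, not the actual `LockFreeMetrics` layout):
//!
//! ```
//! use std::sync::atomic::{AtomicU64, Ordering};
//!
//! struct Counters {
//!     events_ingested: AtomicU64,
//! }
//!
//! let c = Counters { events_ingested: AtomicU64::new(0) };
//!
//! // A relaxed fetch_add is a single atomic instruction: no lock is taken,
//! // so any number of threads can record concurrently without blocking.
//! c.events_ingested.fetch_add(1, Ordering::Relaxed);
//! assert_eq!(c.events_ingested.load(Ordering::Relaxed), 1);
//! ```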
//!
//! # When to Use
//!
//! Use lock-free structures when:
//! - Operation frequency > 100K/sec per thread
//! - Multiple threads accessing same structure
//! - Latency predictability is critical
//! - Lock contention is observed in profiling
//!
//! Use regular locks when:
//! - Operation frequency < 10K/sec
//! - Simple single-threaded access
//! - Complex state updates needed
//! - Atomic cross-field invariants required
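//!
//! The last point deserves an example: two independent atomics cannot keep a
//! cross-field invariant, because a reader may observe one update before the
//! other. A lock updates both fields as one unit. A sketch:
//!
//! ```
//! use std::sync::Mutex;
//!
//! // Invariant: `consumed <= produced` must hold at every observation point.
//! // With two separate atomics, a reader could see `consumed` bumped before
//! // `produced`; behind a Mutex, both changes become visible together.
//! struct Progress {
//!     produced: u64,
//!     consumed: u64,
//! }
//!
//! let p = Mutex::new(Progress { produced: 0, consumed: 0 });
//! {
//!     let mut guard = p.lock().unwrap();
//!     guard.produced += 1;
//!     guard.consumed += 1;
//! } // both fields changed atomically from any other thread's point of view
//! let g = p.lock().unwrap();
//! assert!(g.consumed <= g.produced);
//! ```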
//!
//! # Performance Notes
//!
//! Lock-free operations use atomic CPU instructions (e.g., CAS, compare-and-swap).
//! While fast, they can still cause cache-line bouncing under extreme contention.
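//!
//! A CAS update retries until it wins the race; in Rust this is the classic
//! `compare_exchange_weak` loop (shown on a bare `AtomicU64` for illustration):
//!
//! ```
//! use std::sync::atomic::{AtomicU64, Ordering};
//!
//! let v = AtomicU64::new(41);
//! let mut cur = v.load(Ordering::Relaxed);
//! loop {
//!     let next = cur + 1;
//!     // Succeeds only if `v` still equals `cur`; otherwise another thread
//!     // won the race and we retry with the value it left behind.
//!     match v.compare_exchange_weak(cur, next, Ordering::AcqRel, Ordering::Relaxed) {
//!         Ok(_) => break,
//!         Err(actual) => cur = actual,
//!     }
//! }
//! assert_eq!(v.load(Ordering::Relaxed), 42);
//! ```
//!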
//! For best performance:
//!
//! - Use separate instances per logical partition/shard
//! - Batch operations when possible
//! - Consider queue capacity vs. memory trade-off
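//!
//! "Separate instances per shard" only pays off if those instances do not
//! share a cache line. A sketch of padding per-shard counters to 64 bytes to
//! avoid false sharing (the 64-byte figure assumes x86-64 cache lines):
//!
//! ```
//! use std::sync::atomic::AtomicU64;
//!
//! // Force each per-shard counter onto its own 64-byte cache line so that
//! // updates from different shards never invalidate each other's line.
//! #[repr(align(64))]
//! struct PaddedCounter(AtomicU64);
//!
//! let shards: Vec<PaddedCounter> =
//!     (0..8).map(|_| PaddedCounter(AtomicU64::new(0))).collect();
//! assert_eq!(std::mem::size_of::<PaddedCounter>(), 64);
//! assert_eq!(shards.len(), 8);
//! ```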
//!
//! # Example
//!
//! ```ignore
//! use crate::infrastructure::persistence::lock_free::{LockFreeEventQueue, ShardedEventQueue, LockFreeMetrics};
//!
//! // Simple queue for moderate load
//! let queue = LockFreeEventQueue::new(10000);
//!
//! // Sharded queue for high contention scenarios
//! let sharded = ShardedEventQueue::new(100000);
//!
//! let metrics = LockFreeMetrics::new();
//!
//! // Producer thread
//! queue.try_push(event)?;
//! // Or use batch operations for higher throughput
//! sharded.try_push_batch(events);
//! metrics.record_ingest();
//!
//! // Consumer thread
//! if let Some(event) = queue.try_pop() {
//!     process_event(event)?;
//!     metrics.record_query(latency);
//! }
//! // Or batch pop for efficiency
//! let batch = sharded.try_pop_batch(100);
//! ```

pub mod metrics;
pub mod queue;
pub mod sharded_queue;

pub use metrics::{LockFreeMetrics, MetricsSnapshot};
pub use queue::LockFreeEventQueue;
pub use sharded_queue::{ShardedEventQueue, ShardedQueueStats};