//! Write-Ahead Log (WAL) implementation for crash recovery and durability.
//!
//! The WAL provides:
//! - Sequential logging of all mutations
//! - Crash recovery by replaying operations
//! - Point-in-time recovery capabilities
//! - Configurable durability modes for performance tuning
//!
//! # Architecture
//!
//! AletheiaDB uses a **Concurrent WAL with Striped Lock-Free Ring Buffers** for
//! high-throughput write operations while maintaining ACID compliance.
//!
//! ```text
//!                ┌─────────────────────┐
//!                │    LSN Allocator    │
//!                │ AtomicU64::fetch_add│
//!                └──────────┬──────────┘
//!                           │
//!        ┌──────────────────┼──────────────────┐
//!        ▼                  ▼                  ▼
//! ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
//! │  Stripe 0   │    │  Stripe 1   │    │  Stripe N   │
//! │ Ring Buffer │    │ Ring Buffer │    │ Ring Buffer │
//! │ (Lock-free) │    │ (Lock-free) │    │ (Lock-free) │
//! └──────┬──────┘    └──────┬──────┘    └──────┬──────┘
//!        └──────────────────┼──────────────────┘
//!                           ▼
//!                ┌─────────────────────┐
//!                │  Flush Coordinator  │
//!                │  - Sorts by LSN     │
//!                │  - Writes segment   │
//!                └─────────────────────┘
//! ```
//!
//! # Key Design Principles
//!
//! 1. **Lock-free append path**: Multiple threads can append concurrently without mutex contention
//! 2. **Global LSN ordering**: Single atomic counter ensures total ordering of all operations
//! 3. **Sorted flush**: Entries are sorted by LSN before writing to disk
//! 4. **ACID preserved**: Synchronous and GroupCommit modes remain fully ACID compliant
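//!
//! The lock-free append path (principles 1 and 2) can be sketched as follows;
//! the stripe count, routing rule, and helper names are illustrative
//! assumptions, not the exact implementation:
//!
//! ```ignore
//! use std::sync::atomic::{AtomicU64, Ordering};
//!
//! // Single global counter: one fetch_add yields a unique, totally
//! // ordered LSN without taking any lock.
//! static NEXT_LSN: AtomicU64 = AtomicU64::new(0);
//!
//! // Hypothetical sketch of the append path.
//! fn append(op: WalOperation, stripes: &[StripeRingBuffer]) -> u64 {
//!     let lsn = NEXT_LSN.fetch_add(1, Ordering::SeqCst);
//!     // Illustrative routing: spread writers across stripes by LSN so
//!     // concurrent appenders rarely touch the same buffer.
//!     let stripe = &stripes[(lsn as usize) % stripes.len()];
//!     stripe.push(lsn, op); // lock-free ring-buffer enqueue
//!     lsn
//! }
//! ```
//!
//! Because stripes are filled out of order, the flush coordinator must sort
//! drained entries by LSN (principle 3) before writing the segment.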
//!
//! # Durability Modes
//!
//! | Mode | Latency | Throughput | ACID |
//! |------|---------|------------|------|
//! | Synchronous | ~1.5ms | ~600/sec | ✅ Full |
//! | GroupCommit | ~10-50ms | ~100K+/sec | ✅ Full |
//! | Async | <100ns | ~500K+/sec | ❌ Eventual |
//!
//! See [`DurabilityMode`] for details.
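//!
//! As a sketch, selecting a mode might look like this (the
//! `with_durability` builder method shown here is illustrative, not
//! necessarily the exact API):
//!
//! ```ignore
//! use aletheiadb::storage::wal::DurabilityMode;
//!
//! // GroupCommit: amortize fsync cost across many concurrent commits
//! // while remaining fully ACID.
//! let config = ConcurrentWalSystemConfig::new("data/wal")
//!     .with_durability(DurabilityMode::GroupCommit);
//! ```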
//!
//! # Usage
//!
//! ## Single Operations
//!
//! ```ignore
//! use aletheiadb::storage::wal::concurrent_system::{ConcurrentWalSystem, ConcurrentWalSystemConfig};
//!
//! let config = ConcurrentWalSystemConfig::new("data/wal");
//! let wal = ConcurrentWalSystem::new(config)?;
//!
//! // Async append (returns immediately)
//! let lsn = wal.append_async(operation)?;
//!
//! // Commit with configured durability
//! wal.commit()?;
//!
//! // Shutdown gracefully
//! wal.shutdown();
//! ```
//!
//! ## Batch Operations (High-Throughput)
//!
//! For high-throughput workloads with multiple operations, use `append_batch()`,
//! which amortizes LSN allocation and serialization across the batch:
//!
//! ```ignore
//! use aletheiadb::storage::wal::{WalOperation, ConcurrentWalSystem};
//!
//! // Create multiple operations (e.g., from a transaction)
//! let operations = vec![
//! WalOperation::CreateNode { /* ... */ },
//! WalOperation::CreateEdge { /* ... */ },
//! WalOperation::UpdateNode { /* ... */ },
//! ];
//!
//! // Batch append - optimizes LSN allocation and serialization
//! let lsns = wal.append_batch(operations)?;
//! assert_eq!(lsns.len(), 3);
//!
//! // For GroupCommit mode, commit and wait for durability
//! if let Some(epoch) = wal.commit()? {
//! wal.group_commit_coordinator().unwrap().wait_for_flush(epoch)?;
//! }
//! ```
//!
//! **Performance Benefits:**
//! - Single atomic LSN allocation (vs N atomic operations)
//! - Better CPU cache locality during serialization
//! - Reduced stripe buffer contention
//! - **20-50% throughput improvement** for batch sizes > 10
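//!
//! The single-allocation optimization behind `append_batch()` can be
//! sketched as follows (an illustrative fragment, not the exact
//! implementation):
//!
//! ```ignore
//! // One fetch_add reserves a contiguous LSN range for the whole batch,
//! // replacing N separate atomic operations on the shared counter.
//! let n = operations.len() as u64;
//! let first = NEXT_LSN.fetch_add(n, Ordering::SeqCst);
//! let lsns: Vec<u64> = (first..first + n).collect();
//! ```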
// Durability mode support
// Concurrent WAL modules
// New modules for data structures and serialization
// Re-export key types
pub use ;
pub use GroupCommitCoordinator;
// Re-export types from entry
pub use ;
// Re-export serialization helpers (needed by concurrent.rs via super::)
pub use estimate_entry_capacity;