pub struct Wal { /* private fields */ }
Live, append-side WAL handle.
Construct via Wal::open. The returned tuple includes the list of
committed mutation events that need to be re-applied to the
in-memory store before any new traffic is accepted.
Wal::open returns Arc<Self> because the optional Group-mode
background flusher needs a Weak<Wal> to call back into without
taking a strong reference (which would prevent shutdown).
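The Arc/Weak shape described above is a standard pattern; a generic sketch (not this crate's actual flusher, and `Handle`/`spawn_bg` are illustrative names) looks like this:

```rust
use std::sync::{Arc, Weak};
use std::thread;

// Generic sketch of the Arc/Weak callback pattern: the background task
// holds only a Weak, so the owner dropping its last Arc is enough to
// shut the task down -- no explicit stop signal required.
struct Handle {
    name: String,
}

fn spawn_bg(weak: Weak<Handle>) -> thread::JoinHandle<u32> {
    thread::spawn(move || {
        let mut ticks = 0;
        // Each iteration upgrades to a strong reference only briefly.
        // Once the owner drops its Arc, upgrade() returns None and the
        // task exits on its own.
        while let Some(h) = weak.upgrade() {
            let _ = &h.name; // do some work against the live handle
            ticks += 1;
            if ticks >= 3 {
                break; // bounded so the example terminates
            }
        }
        ticks
    })
}

fn main() {
    let handle = Arc::new(Handle { name: "wal".into() });
    let bg = spawn_bg(Arc::downgrade(&handle));
    // main keeps the Arc alive until join, so all three ticks run.
    assert_eq!(bg.join().unwrap(), 3);
    drop(handle);
}
```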
Implementations
impl Wal
pub fn open(
    dir: impl Into<PathBuf>,
    sync_mode: SyncMode,
    segment_target_bytes: u64,
    checkpoint_lsn: Lsn,
) -> Result<(Arc<Self>, Vec<MutationEvent>), WalError>
Open or create the WAL directory at dir.
checkpoint_lsn is the LSN stamped into the most recent
snapshot the caller is restoring from (or Lsn::ZERO if
there is no snapshot). Replay skips records at or below this
fence — they are already represented in the loaded state.
Returns (wal, committed_events). The caller is expected to
apply every event in committed_events to its in-memory store
in order before issuing any new begin / append calls.
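The replay fence can be sketched with a toy model (`Event` and its `u64` LSN here are stand-ins, not the crate's actual `MutationEvent`/`Lsn` types):

```rust
// Toy model of the replay fence: records at or below the checkpoint
// LSN are already represented in the restored snapshot, so replay
// only re-applies what lies strictly above the fence, in log order.

#[derive(Debug, Clone, PartialEq)]
struct Event {
    lsn: u64, // stand-in for Lsn
    payload: &'static str,
}

/// Keep only events strictly above the fence, preserving order.
fn events_to_reapply(log: &[Event], checkpoint_lsn: u64) -> Vec<Event> {
    log.iter()
        .filter(|e| e.lsn > checkpoint_lsn)
        .cloned()
        .collect()
}

fn main() {
    let log = vec![
        Event { lsn: 1, payload: "a" },
        Event { lsn: 2, payload: "b" },
        Event { lsn: 3, payload: "c" },
    ];
    // Snapshot was stamped with LSN 2: only LSN 3 must be re-applied.
    let pending = events_to_reapply(&log, 2);
    assert_eq!(pending.len(), 1);
    assert_eq!(pending[0].lsn, 3);
    // With no snapshot (fence at zero), everything replays.
    assert_eq!(events_to_reapply(&log, 0).len(), 3);
}
```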
pub fn dir(&self) -> &Path
pub fn sync_mode(&self) -> SyncMode
pub fn durable_lsn(&self) -> Lsn
pub fn bg_failure(&self) -> Option<String>
Latched message from the background flusher, if it has ever
failed an fsync. None means the WAL is healthy. Once set,
every commit / flush / force_fsync starts returning
WalError::Poisoned and the WAL stops accepting new
transactions until the operator restarts from the last
consistent snapshot + WAL.
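One way such a latch could be modeled is with std::sync::OnceLock; this is a sketch of the latching behavior only, not the crate's actual implementation:

```rust
use std::sync::OnceLock;

// Sketch of a latched background-failure flag: the first fsync error
// is recorded exactly once and never reset, so every later durability
// operation observes the same poisoned state.
struct BgFailure(OnceLock<String>);

impl BgFailure {
    fn new() -> Self {
        Self(OnceLock::new())
    }

    /// Called from the background flusher when an fsync fails.
    /// Only the first message wins; the latch never resets.
    fn poison(&self, msg: String) {
        let _ = self.0.set(msg);
    }

    /// Mirrors the shape of `Wal::bg_failure`: `None` means healthy.
    fn get(&self) -> Option<&str> {
        self.0.get().map(String::as_str)
    }
}

fn main() {
    let latch = BgFailure::new();
    assert!(latch.get().is_none()); // healthy
    latch.poison("fsync: EIO".to_string());
    latch.poison("later error".to_string()); // ignored: already latched
    assert_eq!(latch.get(), Some("fsync: EIO"));
}
```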
pub fn next_lsn(&self) -> Lsn
LSN that the next begin / append call will allocate.
Exposed for tests and for sanity checks at boot; not part of
any durability contract.
pub fn oldest_segment_id(&self) -> u64
pub fn active_segment_id(&self) -> u64
pub fn begin(&self) -> Result<Lsn, WalError>
Begin a new transaction. Allocates a TxBegin record and
returns its LSN, which the caller must thread back through
append / commit / abort so replay can group the events.
If the active segment has crossed segment_target_bytes,
rotation happens here — TxBegin is the only record kind
guaranteed to be a transaction boundary, so rotating just
before its append keeps every transaction wholly in one
segment.
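The rotate-only-at-TxBegin rule can be modeled with a toy segment list (all names and the fixed 16-byte record size here are illustrative, not the crate's internals):

```rust
// Toy model of the rotation rule: a segment is rotated only just
// before a TxBegin record is appended, so a transaction can never
// straddle two segments.

#[derive(Debug, PartialEq)]
enum Record {
    TxBegin(u64),
    Mutation(u64),
    TxCommit(u64),
}

struct WalModel {
    target: u64,
    segments: Vec<Vec<Record>>, // last entry is the active segment
    active_bytes: u64,
}

impl WalModel {
    fn new(target: u64) -> Self {
        Self { target, segments: vec![Vec::new()], active_bytes: 0 }
    }

    /// Rotation happens only here, just before TxBegin is appended.
    fn begin(&mut self, tx: u64) {
        if self.active_bytes >= self.target {
            self.segments.push(Vec::new());
            self.active_bytes = 0;
        }
        self.push(Record::TxBegin(tx));
    }

    fn append(&mut self, tx: u64) { self.push(Record::Mutation(tx)); }
    fn commit(&mut self, tx: u64) { self.push(Record::TxCommit(tx)); }

    fn push(&mut self, r: Record) {
        self.active_bytes += 16; // pretend every record is 16 bytes
        self.segments.last_mut().unwrap().push(r);
    }
}

fn main() {
    let mut wal = WalModel::new(40); // tiny target so rotation triggers
    wal.begin(1); wal.append(1); wal.commit(1); // 48 bytes: over target
    wal.begin(2); wal.append(2); wal.commit(2); // rotates at TxBegin(2)
    assert_eq!(wal.segments.len(), 2);
    // Every record of tx 2 lives wholly in the new segment.
    assert!(wal.segments[1].iter().all(|r| matches!(
        *r,
        Record::TxBegin(2) | Record::Mutation(2) | Record::TxCommit(2)
    )));
}
```

Because `append` and `commit` never rotate, crossing the size target mid-transaction just means the active segment runs a little past `segment_target_bytes` until the next `begin`.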
pub fn append(
    &self,
    tx_begin_lsn: Lsn,
    event: &MutationEvent,
) -> Result<Lsn, WalError>
Append a single mutation to the in-memory pending buffer of
the active segment. Not durable until flush() runs.
pub fn append_batch(
    &self,
    tx_begin_lsn: Lsn,
    events: Vec<MutationEvent>,
) -> Result<Lsn, WalError>
Append many mutations as one framed record. This keeps the replay
contract identical to repeated append calls while avoiding per-event
length/CRC/framing overhead for write-heavy statements.
pub fn commit(&self, tx_begin_lsn: Lsn) -> Result<Lsn, WalError>
Append a TxCommit marker. Caller is expected to subsequently
call flush() (under SyncMode::PerCommit) to make the
commit durable before returning to its caller.
pub fn abort(&self, tx_begin_lsn: Lsn) -> Result<Lsn, WalError>
Append a TxAbort marker. Replay drops the events keyed by
tx_begin_lsn without re-applying them.
pub fn checkpoint_marker(&self, snapshot_lsn: Lsn) -> Result<Lsn, WalError>
Append a Checkpoint marker. snapshot_lsn should equal the
LSN written into the snapshot file’s header — replay uses
it to defend against the snapshot-rename-but-no-marker race.
pub fn flush(&self) -> Result<(), WalError>
Flush the active segment’s pending buffer.
What “flush” means depends on SyncMode:
- PerCommit: write the buffer to the OS, fsync, and advance durable_lsn. The strongest contract: every record up to next_lsn - 1 is on disk.
- Group: write the buffer to the OS, but let the background flusher fsync and advance durable_lsn on its cadence.
- None: write the buffer to the OS only, but advance durable_lsn anyway. The mode opts out of crash durability, so the checkpoint fence reports "what's been written" instead of "what's actually safe".
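The three contracts can be summarized in a small decision sketch (a model of the documented behavior, not the crate's code; the helper name and tuple encoding are invented for illustration):

```rust
// Toy model of what `flush` does in each SyncMode. The returned
// tuple is (fsync_now, advance_durable_lsn_now): both flags are about
// what happens synchronously inside this flush call.
#[derive(Debug, Clone, Copy)]
enum SyncMode {
    PerCommit,
    Group,
    None,
}

fn flush_effects(mode: SyncMode) -> (bool, bool) {
    match mode {
        // Write + fsync + advance: strongest contract.
        SyncMode::PerCommit => (true, true),
        // Write only; the background flusher fsyncs and advances later.
        SyncMode::Group => (false, false),
        // Write only, but advance anyway: "written", not "safe".
        SyncMode::None => (false, true),
    }
}

fn main() {
    assert_eq!(flush_effects(SyncMode::PerCommit), (true, true));
    assert_eq!(flush_effects(SyncMode::Group), (false, false));
    assert_eq!(flush_effects(SyncMode::None), (false, true));
}
```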
pub fn force_fsync(&self) -> Result<(), WalError>
Unconditionally write the buffer to the OS, fsync, and
advance durable_lsn. Used by callers that need a durability
point right now regardless of the configured cadence (e.g.
checkpoint). Returns WalError::Poisoned if the bg flusher
has already failed.
pub fn truncate_up_to(&self, fence_lsn: Lsn) -> Result<(), WalError>
Drop sealed segments whose entire LSN range is at or below
fence_lsn. Idempotent and safe to call repeatedly.
The active segment is never deleted — even if every record in it predates the fence, it is still the rotation target for new appends. The segment immediately before the active one is also kept as a tombstone so a subsequent crash before the next checkpoint still finds a self-describing log start.
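The retention rule can be sketched as a filter over segments. This is a model of the documented behavior only, with invented names, and it assumes consecutive segment ids so "the segment immediately before the active one" is `active_id - 1`:

```rust
// Toy model of `truncate_up_to`: drop sealed segments whose entire
// LSN range is at or below the fence, but always keep the active
// segment and its immediate predecessor (the "tombstone").

#[derive(Debug, Clone, PartialEq)]
struct Segment {
    id: u64,
    max_lsn: u64, // highest LSN contained in this segment
}

fn truncate_up_to(segments: &[Segment], active_id: u64, fence_lsn: u64) -> Vec<Segment> {
    segments
        .iter()
        .filter(|s| {
            s.id == active_id            // never delete the active segment
                || s.id + 1 == active_id // keep the tombstone predecessor
                || s.max_lsn > fence_lsn // keep anything above the fence
        })
        .cloned()
        .collect()
}

fn main() {
    let segs = vec![
        Segment { id: 1, max_lsn: 100 },
        Segment { id: 2, max_lsn: 200 },
        Segment { id: 3, max_lsn: 300 },
        Segment { id: 4, max_lsn: 350 }, // active
    ];
    let kept = truncate_up_to(&segs, 4, 300);
    // Segments 1 and 2 are entirely below the fence and unprotected;
    // 3 survives as the tombstone even though it is at the fence.
    assert_eq!(kept.iter().map(|s| s.id).collect::<Vec<_>>(), vec![3, 4]);
    // Idempotent: running it again changes nothing.
    assert_eq!(truncate_up_to(&kept, 4, 300), kept);
}
```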