
Struct WalWriter 

Source
pub struct WalWriter { /* private fields */ }

Writer for the Write-Ahead Log

Wraps the underlying file in a BufWriter so each append avoids a write syscall: bytes accumulate in a 64 KiB user-space buffer until sync() (or flush_until()) drains them and then calls sync_all() on the raw file. This is the same buffering trick PostgreSQL uses to cut per-record append cost from roughly 500 ns to roughly 5 ns; reddb previously called write_all directly on the file and paid the syscall on every record.

Critical contract: every code path that calls sync_all() on the underlying file must drain the BufWriter first via BufWriter::flush(). Otherwise the bytes in user-space never reach the kernel before fsync, and durability is silently broken.
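The contract can be sketched with plain std types (the function name `durable_append` is illustrative, not part of reddb):

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

/// Sketch of the flush-before-fsync contract: draining the BufWriter is
/// mandatory before sync_all(), or the buffered bytes never reach the
/// kernel and the fsync persists stale contents.
fn durable_append(writer: &mut BufWriter<File>, record: &[u8]) -> std::io::Result<()> {
    writer.write_all(record)?;      // lands in the user-space buffer only
    writer.flush()?;                // drain buffer into the kernel page cache
    writer.get_ref().sync_all()?;   // fsync the raw file: now durable
    Ok(())
}
```

Skipping the `flush()` line would make the `sync_all()` a no-op for the buffered bytes, which is exactly the silent durability break described above.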

Implementations§

Source§

impl WalWriter

Source

pub fn open<P: AsRef<Path>>(path: P) -> Result<Self>

Open a WAL file for writing. Creates it if it doesn’t exist.

Source

pub fn append(&mut self, record: &WalRecord) -> Result<u64>

Append a record to the WAL.

Bytes go into the BufWriter — they are NOT durable on disk after this call returns. Callers that need durability must follow up with WalWriter::sync or WalWriter::flush_until.

Returns the LSN (Log Sequence Number) of the record.
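The append/sync split can be modeled with a toy mock (not reddb's actual type; whether the returned LSN is the record's start or end offset is an assumption here, modeled as the end offset so that a later flush-to-that-LSN covers the record):

```rust
/// Toy model of the append/sync split: append buffers bytes and returns an
/// LSN, but nothing is durable until sync().
struct MockWal {
    buf: Vec<u8>,     // stands in for the BufWriter contents
    durable_lsn: u64, // everything below this offset has been fsynced
}

impl MockWal {
    fn new() -> Self {
        MockWal { buf: Vec::new(), durable_lsn: 0 }
    }

    fn append(&mut self, record: &[u8]) -> u64 {
        self.buf.extend_from_slice(record);
        self.buf.len() as u64 // assumed: LSN = end-of-record byte offset
    }

    fn sync(&mut self) {
        // The real writer flushes the BufWriter, then calls sync_all().
        self.durable_lsn = self.buf.len() as u64;
    }
}
```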

Source

pub fn append_bytes(&mut self, bytes: &[u8]) -> Result<u64>

Write already-encoded bytes and advance the LSN counter to match. Used by the lock-free append path: writers encode + atomically reserve an LSN range outside this writer, the group-commit coordinator drains the pending queue in LSN order, then calls append_bytes for each batch.

The bytes MUST be a valid WalRecord::encode() payload (or a concatenation of such) — no structural validation happens here. The caller is responsible for keeping the on-disk byte offset synchronised with the externally-tracked LSN counter; this method just appends and advances.
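The external LSN reservation this path relies on can be sketched with an atomic counter (the function name and the use of fetch_add are assumptions about the design, not reddb's code):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Sketch of the assumed reservation step in the lock-free append path:
/// a writer atomically claims [start, end) for its pre-encoded bytes, so
/// byte offsets and LSNs stay in lockstep without holding the writer mutex.
fn reserve_lsn_range(next_lsn: &AtomicU64, encoded_len: u64) -> (u64, u64) {
    let start = next_lsn.fetch_add(encoded_len, Ordering::SeqCst);
    (start, start + encoded_len)
}
```

Because reservation and the actual append_bytes call are decoupled, the coordinator must replay the pending queue in LSN order, as the docs above describe.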

Source

pub fn set_current_lsn(&mut self, lsn: u64)

Set the writer’s LSN counter to a specific value. Used by the lock-free append path to resync the writer with the externally-tracked next_lsn after a drain batch; the coordinator knows the exact byte offset it just wrote to and needs current_lsn to match so subsequent direct callers of append stay consistent.

Source

pub fn sync(&mut self) -> Result<()>

Force sync to disk.

Drains the user-space BufWriter first, then calls sync_all() on the underlying file so every byte appended since the last sync is durable. Updates durable_lsn so subsequent flush_until calls become no-ops up to current_lsn.

Source

pub fn flush_until(&mut self, target: u64) -> Result<()>

Ensure the WAL is durable on disk at least up to byte offset target. No-op when target <= durable_lsn.

This is the analogue of PostgreSQL’s XLogFlush(lsn). Pager flush paths call this with max(dirty.header.lsn) before writing any data page so the WAL record describing the change is guaranteed to be on disk before the page itself.
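The fast path and the WAL-before-data ordering can be modeled with a toy tracker (not reddb's code; the `fsync_count` field exists only to make the cost visible):

```rust
/// Toy model of flush_until's fast path: a call at or below durable_lsn
/// costs nothing; anything above pays one fsync and makes EVERYTHING
/// buffered so far durable, not just bytes up to `target`.
struct FlushTracker {
    current_lsn: u64, // end-of-file offset (bytes appended so far)
    durable_lsn: u64, // bytes guaranteed on disk
    fsync_count: u32, // counts the expensive syscalls, for illustration
}

impl FlushTracker {
    fn flush_until(&mut self, target: u64) {
        if target <= self.durable_lsn {
            return; // already durable: no flush, no fsync
        }
        // Real writer: BufWriter::flush() then sync_all().
        self.durable_lsn = self.current_lsn;
        self.fsync_count += 1;
    }
}
```

A pager would call `flush_until(page_lsn)` and only then write the data page, guaranteeing the describing WAL record hits disk first.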

Source

pub fn durable_lsn(&self) -> u64

Highest byte offset that is durable on disk. Used by the pager to decide whether a flush_until call would actually need a fsync.

Source

pub fn current_lsn(&self) -> u64

Get the current LSN (end-of-file byte offset).

Source

pub fn drain_for_group_sync(&mut self) -> Result<(u64, Arc<File>)>

Drain the BufWriter into the kernel and return the captured LSN plus a cloned file handle for the caller to fsync without holding the WAL writer mutex.

Used by the group-commit leader path. The flow is:

  1. Take the WAL writer mutex.
  2. Call this method — drains user-space buffer to the kernel and captures (target_lsn, sync_handle).
  3. Release the WAL writer mutex.
  4. Call sync_handle.sync_all() — this is the expensive ~100 µs syscall, and other writers can keep appending while it runs.
  5. Take the WAL writer mutex briefly and call WalWriter::mark_durable(target_lsn) to publish the new durable position.

The cloned sync_handle shares the same kernel inode with the writer’s file, so sync_all() on the clone flushes ALL bytes that have reached the kernel for that file — including bytes appended by other writers AFTER step 3. This is the coalescing window that makes group commit win.
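The shared-inode property that step 4 relies on can be demonstrated with std alone (the path and function name are illustrative; the real flow hands out an Arc<File>):

```rust
use std::fs::File;
use std::io::Write;

/// Demonstrates the coalescing property: a try_clone()'d handle shares the
/// underlying open file, so sync_all() on the clone also flushes bytes
/// written through the original handle, including bytes written AFTER the
/// clone was taken.
fn coalesced_sync(path: &str) -> std::io::Result<Vec<u8>> {
    let mut writer = File::create(path)?;
    let sync_handle = writer.try_clone()?; // captured under the mutex
    writer.write_all(b"batch-1|")?;        // drained before "mutex release"
    writer.write_all(b"batch-2")?;         // appended while the fsync is pending
    sync_handle.sync_all()?;               // one fsync covers both batches
    std::fs::read(path)
}
```

This is why a single leader fsync can make several writers' appends durable at once: any bytes that reach the kernel before the syscall completes ride along for free.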

Source

pub fn mark_durable(&mut self, lsn: u64)

Manually advance durable_lsn after a successful out-of-lock sync_all() performed via WalWriter::drain_for_group_sync.

Monotonic — never lowers durable_lsn. Safe to call with a stale lsn; just becomes a no-op.
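The monotonic publish amounts to a max (a one-line sketch, not reddb's code):

```rust
/// Sketch of the monotonic publish: a stale lsn from a slow leader can
/// never move the durable position backwards.
fn mark_durable(durable_lsn: &mut u64, lsn: u64) {
    *durable_lsn = (*durable_lsn).max(lsn);
}
```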

Source

pub fn truncate(&mut self) -> Result<()>

Truncate the WAL (usually after checkpoint).

Drains the BufWriter first so no pending bytes hit the file after the truncate. Then resets the underlying file, rewrites the header through the buffered writer (header is small; the followup flush + sync_all makes it durable), and resets LSN bookkeeping.

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T> Instrument for T

Source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<T> IntoRequest<T> for T

Source§

fn into_request(self) -> Request<T>

Wrap the input message T in a tonic::Request
Source§

impl<L> LayerExt<L> for L

Source§

fn named_layer<S>(&self, service: S) -> Layered<<L as Layer<S>>::Service, S>
where L: Layer<S>,

Applies the layer to a service and wraps it in Layered.
Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of the pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T> Same for T

Source§

type Output = T

Should always be Self
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<T> WithSubscriber for T

Source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more