Quorum-based commit coordination (Phase 2.6 multi-region PG parity).
The existing PrimaryReplication module streams WAL records to every
connected replica, but the primary acks the client as soon as the
record hits its own WAL; replicas are only eventually consistent. For
multi-region deployments that is not enough: a datacenter failure
after the ack but before replication would drop the write.
QuorumCoordinator sits between the write path and the client ack.
It watches ReplicaState::last_acked_lsn on the underlying primary
and blocks the caller until the configured quorum of replicas has
durably received the record. Three quorum shapes are supported:
- Async (default, backwards compatible) — ack immediately, don’t wait for replicas. Same semantics as pre-Phase-2.6 RedDB.
- Sync(n) — wait for N replicas (any region) before acking.
- Regions(set) — wait for at least one replica from each listed region. Survives full-region loss as long as the surviving regions were in the required set at write time.
Crash safety: the primary WAL is already durable before quorum wait begins, so a coordinator crash doesn’t lose the record — it just means the client never got an ack and must retry idempotently.
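The three quorum shapes and the acceptance check they imply can be sketched as follows. `QuorumMode` mirrors the enum documented below; `quorum_met` and its signature are illustrative assumptions, not the module's actual API:

```rust
use std::collections::HashSet;

// Sketch of the three documented quorum shapes.
#[derive(Debug, Clone)]
enum QuorumMode {
    /// Ack as soon as the primary WAL is durable (pre-Phase-2.6 behaviour).
    Async,
    /// Wait for N replica acks, from any region.
    Sync(usize),
    /// Wait for at least one ack from each listed region.
    Regions(HashSet<String>),
}

/// Hypothetical check: does this set of acked replica regions satisfy the mode?
fn quorum_met(mode: &QuorumMode, acked_regions: &[&str]) -> bool {
    match mode {
        QuorumMode::Async => true,
        QuorumMode::Sync(n) => acked_regions.len() >= *n,
        QuorumMode::Regions(required) => required
            .iter()
            .all(|r| acked_regions.iter().any(|a| *a == r.as_str())),
    }
}

fn main() {
    let mode = QuorumMode::Regions(
        ["us-east", "eu-west"].iter().map(|s| s.to_string()).collect(),
    );
    // One ack per required region: quorum reached.
    assert!(quorum_met(&mode, &["us-east", "eu-west"]));
    // eu-west missing: still waiting, client is not acked yet.
    assert!(!quorum_met(&mode, &["us-east", "us-east"]));
}
```

Note that `Regions` counts distinct regions, not replica count: two acks from the same region do not advance a `Regions` quorum, which is what makes it survive full-region loss.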
Structs§
- QuorumConfig - Quorum configuration stored alongside ReplicationConfig.
- QuorumCoordinator - Tracks per-replica region bindings and pairs them with the primary’s ack map. PrimaryReplication owns the WAL buffer + ReplicaState list; this coordinator adds the region dimension and the wait-for-quorum logic without duplicating the ack table.
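The pairing described above can be sketched as a pure function: given a commit LSN, the primary's ack map (one `last_acked_lsn` per replica), and the region bindings this module adds, compute which regions have durably received the record. All names and types here are illustrative assumptions:

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical sketch: which regions have a replica whose last_acked_lsn
/// has reached the commit LSN? The coordinator only adds the region map;
/// the ack map already lives on the primary.
fn regions_at_or_past(
    commit_lsn: u64,
    acked: &HashMap<u32, u64>,        // replica id -> last_acked_lsn
    region_of: &HashMap<u32, String>, // replica id -> region binding
) -> HashSet<String> {
    acked
        .iter()
        .filter(|(_, lsn)| **lsn >= commit_lsn)
        .filter_map(|(id, _)| region_of.get(id).cloned())
        .collect()
}

fn main() {
    let acked = HashMap::from([(1u32, 120u64), (2, 90), (3, 120)]);
    let region_of = HashMap::from([
        (1u32, "us-east".to_string()),
        (2, "eu-west".to_string()),
        (3, "eu-west".to_string()),
    ]);
    // Replica 2 lags (90 < 100), but replica 3 covers eu-west.
    let covered = regions_at_or_past(100, &acked, &region_of);
    assert!(covered.contains("us-east") && covered.contains("eu-west"));
}
```

A `Regions` quorum for a write is then just "every required region is in the returned set", re-evaluated as replica acks advance.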
Enums§
- QuorumError - Errors raised by the quorum coordinator. The write itself succeeded on the primary WAL — these errors signal that replica acknowledgement did not reach quorum and the caller must decide whether to surface the failure or continue anyway.
- QuorumMode - Quorum mode selected for a replication config.
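The caller-side contract for QuorumError (the write is already durable on the primary; only replica acknowledgement fell short) might look like the following sketch. The `Timeout` variant, `handle_commit`, and the `strict` flag are assumptions for illustration, not the module's actual API:

```rust
// Hypothetical error mirroring the documented contract: the primary WAL
// write already succeeded; only replica acknowledgement fell short.
#[derive(Debug)]
enum QuorumError {
    Timeout { acked: usize, needed: usize },
}

/// The caller decides whether to surface the shortfall or continue
/// anyway (the write is durable on the primary either way).
fn handle_commit(result: Result<(), QuorumError>, strict: bool) -> Result<(), String> {
    match result {
        Ok(()) => Ok(()),
        Err(QuorumError::Timeout { acked, needed }) if !strict => {
            // Degrade to async semantics: log the shortfall, ack the client.
            eprintln!("quorum shortfall: {acked}/{needed} replica acks");
            Ok(())
        }
        Err(e) => Err(format!("commit not quorum-durable: {e:?}")),
    }
}

fn main() {
    // A lagging cluster: non-strict callers still ack, strict callers fail.
    assert!(handle_commit(Err(QuorumError::Timeout { acked: 1, needed: 2 }), false).is_ok());
    assert!(handle_commit(Err(QuorumError::Timeout { acked: 1, needed: 2 }), true).is_err());
}
```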