pgwire-replication
A low-level, high-performance PostgreSQL logical replication client implemented directly on top of the PostgreSQL wire protocol (pgwire).
This crate is designed for CDC, change streaming, and WAL replay systems that require explicit control over replication state, deterministic restart behavior, and minimal runtime overhead.
pgwire-replication intentionally avoids libpq, tokio-postgres, and other higher-level PostgreSQL clients for the replication path. It interacts with a Postgres instance directly and relies on START_REPLICATION ... LOGICAL ... and the built-in pgoutput output plugin.
pgwire-replication exists to provide:
- a direct pgwire implementation for logical replication
- explicit, user-controlled LSN start and stop semantics
- predictable feedback and backpressure behavior
- clean integration into async systems and coordinators
This crate was originally extracted from the Deltaforge CDC project and is maintained independently.
Installation
Add to your Cargo.toml:
[dependencies]
pgwire-replication = "0.1"
Or with specific features:
[dependencies]
pgwire-replication = { version = "0.1", default-features = false, features = ["tls-rustls"] }
Requirements
- Rust 1.88 or later
- PostgreSQL 15+ with logical replication enabled (older versions are likely to work as well)
Features
- Logical replication using the PostgreSQL wire protocol
- `pgoutput` logical decoding support (transport-level)
- Explicit LSN seek (`start_lsn`)
- Bounded replay (`stop_at_lsn`)
- Periodic standby status updates
- Keepalive handling
- Tokio-based async client
- SCRAM-SHA-256 and MD5 authentication
- TLS/mTLS support (via rustls)
- Designed for checkpoint and replay-based systems
Non-goals
This crate intentionally does not provide:
- A general-purpose SQL client
- Automatic checkpoint persistence
- Exactly-once semantics
- Schema management or DDL interpretation
- Full `pgoutput` decoding into rows or events
These responsibilities belong in higher layers.
Basic usage
A minimal receive loop looks like the sketch below; it assumes the `connect` entry point and `recv()` method described in this README, with `config` standing in for a fully populated `ReplicationConfig` (see Quick start):
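```rust
use pgwire_replication::{connect, ReplicationConfig};

// `config` is a fully populated ReplicationConfig (see the Quick start below).
let mut repl = connect(config).await?;

while let Some(event) = repl.recv().await? {
    // Handle the replication event here.
    let _ = event;
}

// recv() returned Ok(None): clean end-of-stream.
```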
Check the Quick Start and Examples for more detailed use cases.
Seek and Replay Semantics
pgwire-replication is built around explicit WAL position control.
LSNs (Log Sequence Numbers) are treated as first-class inputs and outputs and are never hidden behind opaque offsets.
Starting from an LSN (Seek)
Every replication session begins at an explicit LSN, supplied on the `ReplicationConfig` (a minimal sketch, with illustrative field layout, is shown below):
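```rust
// Sketch: resume streaming from a checkpointed WAL position.
// `base_config` (connection, slot, and publication settings) and the exact
// field types are illustrative, not a verbatim API listing.
let config = ReplicationConfig {
    start_lsn: "0/16B6C50".parse()?,
    ..base_config
};
```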
This enables:
- resuming replication after a crash
- replaying WAL from a known checkpoint
- controlled historical backfills
The provided LSN is sent verbatim to PostgreSQL via START_REPLICATION.
Bounded Replay (Start -> Stop)
Replication can be bounded using `stop_at_lsn` on the `ReplicationConfig` (again, a sketch with illustrative field layout):
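```rust
// Sketch: bounded replay over a fixed WAL range. Field types (for example,
// whether stop_at_lsn is an Option) are illustrative.
let config = ReplicationConfig {
    start_lsn: "0/16B6C50".parse()?,
    stop_at_lsn: Some("0/17000000".parse()?),
    ..base_config
};
```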
When configured:
- replication starts at start_lsn
- WAL is streamed until the stop LSN is reached
- a ReplicationEvent::StoppedAt { reached } event is emitted
- once StoppedAt has been emitted, the stream ends cleanly and recv() returns Ok(None)
- the replication connection is terminated cleanly using CopyDone
This enables:
- deterministic WAL replay
- offline backfills
- "replay up to checkpoint" workflows
- controlled reprocessing in recovery scenarios
Progress Tracking and Feedback
Progress is not auto-committed. Instead, the consumer explicitly reports progress:
repl.update_applied_lsn(lsn);
Calling update_applied_lsn indicates that all WAL up to lsn has been durably persisted by the consumer (for example, flushed to disk or a message queue).
This allows callers to control:
- durability boundaries
- batching behavior
- exactly-once or at-least-once semantics (implemented externally)
Updates are monotonic: reporting an older LSN is a no-op.
Standby status updates are sent asynchronously by the worker using the
latest applied LSN, based on status_interval or server keepalive requests.
For CDC pipelines, progress should typically be reported at transaction commit boundaries, not for every message.
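For example, a consumer might acknowledge only after its own sink has flushed a complete transaction. In the sketch below, `sink` and `commit_boundary` are hypothetical consumer-side pieces (this crate does not decode pgoutput into commits):

```rust
// Sketch: report progress at transaction commit boundaries only.
// `sink` and `commit_boundary` are hypothetical consumer code.
while let Some(event) = repl.recv().await? {
    sink.write(&event).await?;
    if let Some(commit_lsn) = commit_boundary(&event) {
        sink.flush().await?;                 // everything up to commit_lsn is durable
        repl.update_applied_lsn(commit_lsn); // safe to acknowledge to PostgreSQL
    }
}
```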
Idle behavior
PostgreSQL logical replication may remain silent for extended periods when no WAL is generated. This is normal.
A wakeup driven by idle_wakeup_interval does not indicate failure; it bounds how long the client may block
waiting for server messages before waking up to send a standby status update and then continuing to wait.
While the system is idle, the effective feedback cadence is bounded by
idle_wakeup_interval, not status_interval.
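As a rough sketch (assuming both knobs are plain `Duration` fields on `ReplicationConfig`, which is an assumption about the exact types):

```rust
use std::time::Duration;

// Sketch: feedback cadence knobs. Exact field names and types are assumptions.
let config = ReplicationConfig {
    status_interval: Duration::from_secs(10),      // feedback cadence while streaming
    idle_wakeup_interval: Duration::from_secs(30), // upper bound on blocking while idle
    ..base_config
};
```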
Shutdown
- `stop()` requests a graceful stop (sends `CopyDone`). The client will continue to yield any buffered events, and then `recv()` returns `Ok(None)`.
- `shutdown().await` is a convenience method that calls `stop()`, drains remaining events, and awaits the worker task result.
- `abort()` cancels the worker task immediately (hard stop; does not send `CopyDone`).
Dropping ReplicationClient requests a best-effort graceful stop. When dropped inside a Tokio runtime, the worker is detached and allowed to finish cleanly; when dropped outside a runtime, the worker may be aborted to avoid leaking a task.
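A graceful teardown might look like the following sketch (assuming `shutdown()` surfaces the worker result as a `Result`):

```rust
// Sketch: graceful teardown. stop() asks the worker to send CopyDone; recv()
// then drains anything already buffered and finally yields Ok(None).
repl.stop();
while let Some(event) = repl.recv().await? {
    let _ = event; // process any buffered events
}

// Convenience equivalent: stop, drain, and await the worker in one call.
// repl.shutdown().await?;
```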
Important Notes on LSN Semantics
- PostgreSQL does not guarantee that every logical replication message advances the WAL end position.
- Small or fast transactions may share the same WAL position.
- LSNs should be treated as monotonic but not dense.
Today, bounded replay is evaluated using WAL positions observed during streaming. Future versions may expose commit-boundary LSNs derived from pgoutput decoding for stronger replay guarantees.

LSNs are formatted exactly as PostgreSQL displays them:
- uppercase hexadecimal
- `X/Y` format
- up to 8 hex digits per part
- leading zeros omitted
Examples: 0/0, 0/16B6C50, 16/B374D848
Parsing accepts both padded and unpadded forms for compatibility.
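For illustration only, the textual form is simply the upper and lower 32 bits of the 64-bit WAL position in uppercase hex; the helpers below are independent of this crate's own LSN type:

```rust
/// Format a 64-bit WAL position the way PostgreSQL prints it, e.g. "0/16B6C50".
fn format_lsn(lsn: u64) -> String {
    format!("{:X}/{:X}", lsn >> 32, lsn as u32)
}

/// Parse both padded ("0/016B6C50") and unpadded ("0/16B6C50") forms.
fn parse_lsn(s: &str) -> Option<u64> {
    let (hi, lo) = s.split_once('/')?;
    let hi = u32::from_str_radix(hi, 16).ok()? as u64;
    let lo = u32::from_str_radix(lo, 16).ok()? as u64;
    Some((hi << 32) | lo)
}
```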
TLS support
TLS is optional and uses rustls.
TLS configuration is provided explicitly via ReplicationConfig and does not rely on system OpenSSL.
Quick start
Control-plane work (publication and slot creation) is typically done with a regular PostgreSQL SQL client of your choice; this crate handles only the replication plane.
A complete streaming loop looks roughly like the sketch below. The configuration fields, the `Default` fill-in, and the shape of `connect` are assumptions drawn from this README rather than a verbatim API listing; see `examples/basic.rs` for a working version.
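```rust
use pgwire_replication::{connect, ReplicationConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Field names mirror the environment variables used by the examples;
    // consult the crate docs for the exact ReplicationConfig shape.
    let config = ReplicationConfig {
        host: "localhost".into(),
        port: 5432,
        user: "repl_user".into(),
        password: Some("secret".into()),
        database: "postgres".into(),
        slot: "example_slot".into(),
        publication: "example_pub".into(),
        start_lsn: "0/16B6C50".parse()?,
        ..Default::default()
    };

    let mut repl = connect(config).await?;

    while let Some(event) = repl.recv().await? {
        // Hand the raw pgoutput message to the consumer here. After everything
        // up to some LSN is durably persisted downstream, acknowledge it:
        // repl.update_applied_lsn(lsn);
        let _ = event;
    }

    // recv() returned Ok(None): clean end-of-stream.
    Ok(())
}
```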
Examples
Examples that use the control-plane SQL client (tokio-postgres) require the `examples` feature.
Replication plane only: examples/basic.rs
START_LSN="0/16B6C50"
Control-plane + streaming: examples/checkpointed.rs
Bounded replay: examples/bounded_replay.rs
With TLS enabled: examples/with_tls.rs
PGHOST=db.example.com \
PGPORT=5432 \
PGUSER=repl_user \
PGPASSWORD=secret \
PGDATABASE=postgres \
PGSLOT=example_slot_tls \
PGPUBLICATION=example_pub_tls \
PGTLS_CA=/path/to/ca.pem \
PGTLS_SNI=db.example.com \
Enabling mTLS: examples/with_mtls.rs
If needed, inject a fake DNS record (for example, an /etc/hosts entry mapping the certificate's hostname to your server's IP), and then:
PGHOST=db.example.com \
PGPORT=5432 \
PGUSER=repl_user \
PGPASSWORD=secret \
PGDATABASE=postgres \
PGSLOT=example_slot_mtls \
PGPUBLICATION=example_pub_mtls \
PGTLS_CA=/etc/ssl/ca.pem \
PGTLS_CLIENT_CERT=/etc/ssl/client.crt.pem \
PGTLS_CLIENT_KEY=/etc/ssl/client.key.pem \
PGTLS_SNI=db.example.com \
- `PGUSER`/`PGPASSWORD` are used for control-plane setup (publication/slot). `REPL_USER`/`REPL_PASSWORD` are used for the replication stream.
- If `PGHOST` is an IP address, you must set `PGTLS_SNI` to a DNS name on the cert.
- Client key should be PKCS#8 PEM for best compatibility.
- `VerifyCa` can be used instead of `VerifyFull` if hostname validation is not possible.
Testing
Integration tests use Docker via testcontainers and are gated behind a feature flag.
License
Licensed under either of:
- Apache License, Version 2.0
- MIT License
at your option.