# pg-queue
A PostgreSQL-based job queue, pub/sub, and cache library for Rust. One dependency instead of two — if you already run Postgres, you don't need Redis.
## What it does
Most applications that outgrow a single process need three things: a way to queue work for later, a way to broadcast events in real time, and a way to cache expensive lookups. The standard answer is to bolt Redis onto your stack. pg-queue provides all three using features already built into PostgreSQL — no additional infrastructure to deploy, monitor, or pay for.
In plain terms: instead of adding a separate Redis server to handle background jobs and live updates, pg-queue lets your existing Postgres database do it. Fewer moving parts, fewer things to break at 3am.
## How it works
| Feature | Mechanism | What it replaces |
|---|---|---|
| Job queue | `SELECT ... FOR UPDATE SKIP LOCKED` | Redis lists, Celery, Sidekiq |
| Pub/Sub | `LISTEN` / `NOTIFY` | Redis pub/sub, NATS |
| Cache | `UNLOGGED` table with TTL per row | Redis `GET`/`SET`/`EXPIRE` |
| Request-Response | Queue + `NOTIFY` with correlation ID | Redis streams, RPC frameworks |
`SKIP LOCKED` lets multiple workers grab jobs concurrently without blocking each other. `LISTEN`/`NOTIFY` pushes events to subscribers the instant a row is inserted — no polling. `UNLOGGED` tables skip write-ahead logging for cache-tier speed (data is not crash-safe, which is fine for a cache).
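The job-claiming pattern behind `SKIP LOCKED` can be sketched as a single atomic query. Table and column names here are illustrative, not the crate's actual schema:

```sql
-- Atomically claim one pending job. Rows already locked by another
-- worker are skipped rather than waited on, so workers never block.
UPDATE queue_emails
SET status = 'running'
WHERE id = (
    SELECT id
    FROM queue_emails
    WHERE status = 'pending'
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1
)
RETURNING id, payload;
```

Because the subquery takes a row lock before the update commits, two workers running this at the same moment always claim different jobs.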
## Setup

Run the migration against your database, then create as many queues as you need:

```sql
SELECT pg_queue_create_queue('emails');
SELECT pg_queue_create_queue('image_processing');
```
Each call creates a dedicated table (`queue_emails`, `queue_image_processing`) with a partial index on pending jobs and a trigger that fires `NOTIFY` on insert.
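Schematically, the generated table and trigger follow a pattern along these lines (illustrative DDL, not the migration's exact output):

```sql
CREATE TABLE queue_emails (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL,
    status  text  NOT NULL DEFAULT 'pending'
);

-- Partial index: only pending rows, so the claim query stays fast
-- no matter how many completed jobs accumulate.
CREATE INDEX queue_emails_pending_idx
    ON queue_emails (id) WHERE status = 'pending';

-- Wake up listeners the moment a job is inserted.
CREATE OR REPLACE FUNCTION queue_emails_notify() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('queue_emails', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER queue_emails_insert_notify
    AFTER INSERT ON queue_emails
    FOR EACH ROW EXECUTE FUNCTION queue_emails_notify();
```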
Add to your `Cargo.toml`:

```toml
[dependencies]
pg-queue = "0.1"
```
## Usage

### Job queue
```rust
// Illustrative sketch: type names, arguments, and the connection string
// are placeholders; see the crate docs for exact signatures.
use pg_queue::{Queue, QueueManager};

let pool = PgPool::connect("postgres://localhost/myapp").await?;
let mgr = QueueManager::new(pool);
let emails = Queue::new("emails")?;

// Producer: enqueue a job
mgr.queue.push(&emails, &payload).await?;

// Consumer: claim and process
if let Some(job) = mgr.queue.pop(&emails).await? {
    // process the claimed job here
}
```
Multiple consumers can call `pop` concurrently — `SKIP LOCKED` ensures each job is claimed by exactly one worker.
### Pub/Sub
```rust
// Illustrative sketch: channel names and method signatures are placeholders.
// Publisher
mgr.notify.notify("orders", &payload).await?;

// Subscriber
let mut listener = mgr.create_listener().await?;
listener.listen("orders").await?;
loop {
    let event = listener.recv().await?;
    // handle the event
}
```
### Cache
```rust
// Illustrative sketch: keys, values, and signatures are placeholders.
// Set with 5 minute TTL
mgr.cache.set("user:42", &profile, Duration::from_secs(300)).await?;

// Get (returns None if expired or missing)
let cached: Option<Profile> = mgr.cache.get("user:42").await?;

// Get-or-set with fallback
let profile = mgr.cache
    .get_or_set("user:42", Duration::from_secs(300), || load_profile(42))
    .await?;
```
### Request-Response (RPC)
```rust
// Illustrative sketch: names and signatures are placeholders.
let workers = Queue::new("workers")?;

// Caller: push request and block until response
let result: MyResponse = mgr.request_response
    .push_and_wait(&workers, &request)
    .await?;

// Worker: process and respond
if let Some(req) = mgr.queue.pop(&workers).await? {
    let response = handle(&req);
    mgr.request_response.respond(&req, &response).await?;
}
```
## Use cases
- Background job processing — email delivery, image resizing, PDF generation, webhook dispatch. Any work that should happen outside the request cycle.
- Event-driven microservices — publish domain events (order placed, user signed up) and let downstream services react via `LISTEN`/`NOTIFY` without polling.
- Caching hot data — store frequently-read, expensive-to-compute results (API responses, aggregations, session data) with automatic TTL expiry.
- Task orchestration — fan out work to a pool of workers, with `SKIP LOCKED` guaranteeing each job is claimed by exactly one worker.
- Reducing infrastructure — if your Redis is only doing job queues and caching, pg-queue lets you drop it entirely and run everything on the Postgres you already have.
## Tradeoffs
This is the right tool when your job volume fits comfortably within PostgreSQL's throughput (tens of thousands of jobs per second on modern hardware). If you need millions of messages per second or global pub/sub across data centres, dedicated message brokers like Kafka or NATS are better suited.
The cache uses `UNLOGGED` tables — faster writes, but data is lost on crash. That's the correct tradeoff for a cache (the source of truth lives elsewhere), but don't store anything you can't recompute.
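Concretely, a cache built this way needs nothing more than a table of roughly this shape (illustrative DDL):

```sql
CREATE UNLOGGED TABLE cache (
    key        text        PRIMARY KEY,
    value      jsonb       NOT NULL,
    expires_at timestamptz NOT NULL
);

-- Reads treat expired rows as missing; a periodic sweep reclaims space.
SELECT value FROM cache
WHERE key = 'user:42' AND expires_at > now();
```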
## License
MIT OR Apache-2.0