# Taquba
A durable, single-process task queue for Rust, backed by object storage. Built on SlateDB.
Taquba uses SlateDB's single-writer model: all producers and workers for a given store must run inside one process.
Taquba is the right fit when you want durable background jobs whose state survives node loss, ephemeral disks, and region failures - without operating a queue server.
## Features
- At-least-once delivery with lease-based claims and crash recovery.
- Multiple named queues per store with per-queue configuration.
- Priority levels (FIFO within each priority).
- Scheduled jobs, dedup keys, custom priority/attempts.
- Exponential retry backoff on `nack`.
- Bounded dead-letter retention with paginated inspection.
- Atomic batch enqueue.
- Worker loop with graceful shutdown and notify-based wakeups (no busy polling).
## Stability
Taquba is pre-1.0. The Rust API may evolve between minor versions per Cargo's
standard 0.x.y semantics (0.1 -> 0.2 may break source compatibility), and
the on-disk format on object storage is not guaranteed stable across minor
versions either. Treat a Taquba minor-version bump as a one-way migration:
drain your queue first, or be prepared to start the bucket fresh.
Patch releases (0.1.0 -> 0.1.1) preserve both the Rust API and the on-disk
format.
## Install
The in-memory and local-disk stores need no feature flags, which makes them handy for tests and the quick start below:
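For example, adding the crate with its default features is enough to follow the quick start (assuming the published crate name matches the project name):

```sh
cargo add taquba
```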
For production, opt in to exactly one cloud backend:
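The feature name below is only an illustration; check the crate documentation for the exact backend flags:

```sh
# hypothetical feature name, shown for illustration
cargo add taquba --features s3
```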
## Quick start
A minimal end-to-end sketch; the store constructor and the enqueue method names and signatures below are assumptions, not the crate's confirmed API:

```rust
use std::sync::Arc;
use std::time::Duration;
use taquba::Queue;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Open an in-memory store (no feature flags needed) and share the queue.
    // `open_in_memory` is an illustrative name, not the confirmed constructor.
    let queue = Arc::new(Queue::open_in_memory("emails").await?);

    // Enqueue a job now, and another on a delay (illustrative signatures).
    queue.enqueue(b"send-welcome".to_vec()).await?;
    queue.enqueue_after(Duration::from_secs(30), b"send-reminder".to_vec()).await?;

    Ok(())
}
```
## Worker loop
Implement `Worker` and let `run_worker` handle the claim / ack / nack loop,
retries, and graceful shutdown, as sketched below.
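A minimal worker sketch; the `Worker` trait's method shape, the payload type, and `run_worker`'s exact parameters are assumptions here rather than the crate's confirmed API:

```rust
use std::sync::Arc;
use taquba::{run_worker, Queue, Worker};

struct Printer;

impl Worker for Printer {
    type Error = std::io::Error;

    // Assumed contract: return Ok(()) to ack the job, Err(_) to nack it and
    // trigger the exponential retry backoff.
    async fn handle(&mut self, payload: Vec<u8>) -> Result<(), Self::Error> {
        println!("processing {} bytes", payload.len());
        Ok(())
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Queue setup as in the quick start (constructor name is illustrative).
    let queue = Arc::new(Queue::open_in_memory("emails").await?);

    // Runs until the shutdown future resolves; in-flight jobs finish first.
    run_worker(queue, Printer, tokio::signal::ctrl_c()).await?;
    Ok(())
}
```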
Pass any future as the shutdown signal: `tokio::signal::ctrl_c()`,
a oneshot, etc. Shutdown is honoured at safe points: between jobs and during
idle waits. In-flight jobs always finish, so leases are never abandoned to the
reaper. See `examples/worker.rs` for a full setup.
## Coordinating with caller state
`Queue::enqueue_with_kv` enqueues a job and applies a set of writes to a
caller-owned KV namespace in a single transaction, so a downstream crate can
keep its own durable coordination state (status markers, dedup records,
pointers to externally-stored blobs) consistent with the queue across crashes.
`Queue::kv_get` and `Queue::kv_delete` read and clean up those entries.
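A sketch of the enqueue-plus-KV pattern; the payload type, the `kv_writes` encoding, and the exact `enqueue_with_kv` / `kv_get` signatures are assumptions, not the crate's confirmed API:

```rust
use std::collections::HashMap;
use taquba::Queue;

// Hypothetical signatures; shown only to illustrate the transactional pairing.
async fn submit_order(queue: &Queue) -> Result<(), Box<dyn std::error::Error>> {
    let mut kv_writes: HashMap<Vec<u8>, Vec<u8>> = HashMap::new();
    // A status marker the caller can read back after a crash.
    kv_writes.insert(b"order:42:status".to_vec(), b"enqueued".to_vec());

    // Job payload and KV writes commit in a single transaction.
    queue.enqueue_with_kv(b"process-order-42".to_vec(), kv_writes).await?;

    // Later, read the marker back through the queue's KV accessors.
    let status = queue.kv_get(b"order:42:status").await?;
    assert_eq!(status.as_deref(), Some(b"enqueued".as_slice()));
    Ok(())
}
```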
Caller keys live under a reserved `usr:` prefix internally so they cannot
collide with Taquba's own layout. Per-value size is capped at
`MAX_KV_VALUE_SIZE` (256 KiB); the namespace is sized for coordination state,
not bulk payload. Store large blobs in the underlying object store under a
content-addressed key and put only the pointer in KV.
The namespace is mutated only as a side effect of queue operations, so there
is no standalone `kv_put`. To create or update an entry, include it in the
`kv_writes` map of an `enqueue_with_kv` call (which makes the write atomic
with the enqueue). `kv_delete` is the one standalone primitive, for terminal
cleanup of entries whose related queue op has already completed.
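And for the blob-pointer plus terminal-cleanup flow described above, a sketch along the same hypothetical lines:

```rust
use std::collections::HashMap;
use taquba::Queue;

// Store the blob in the object store yourself (e.g. under a content-addressed
// key) and keep only that key in KV; the method signatures remain illustrative.
async fn enqueue_blob_job(queue: &Queue, blob_key: &str) -> Result<(), Box<dyn std::error::Error>> {
    let mut kv_writes = HashMap::new();
    kv_writes.insert(b"job:42:blob".to_vec(), blob_key.as_bytes().to_vec());
    queue.enqueue_with_kv(b"process-blob-42".to_vec(), kv_writes).await?;
    Ok(())
}

// Terminal cleanup once the related queue operation has completed.
async fn cleanup(queue: &Queue) -> Result<(), Box<dyn std::error::Error>> {
    queue.kv_delete(b"job:42:blob").await?;
    Ok(())
}
```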
## License
Apache-2.0