taquba 0.4.0

Taquba

A durable, single-process task queue for Rust, backed by object storage. Built on SlateDB.

Taquba uses SlateDB's single-writer model: all producers and workers for a given store must run inside one process.

Taquba is the right fit when you want durable background jobs whose state survives node loss, ephemeral disks, and region failures, all without operating a queue server.

Features

  • At-least-once delivery with lease-based claims and crash recovery.
  • Multiple named queues per store with per-queue configuration.
  • Priority levels (FIFO within each priority).
  • Scheduled jobs, dedup keys, custom priority/attempts.
  • Exponential retry backoff on nack.
  • Bounded dead-letter retention with paginated inspection.
  • Atomic batch enqueue.
  • Worker loop with graceful shutdown and notify-based wakeups (no busy polling).
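As an illustration of the retry behaviour listed above, exponential backoff typically doubles the delay after each failed attempt up to a cap. The base delay, growth factor, and cap below are assumptions for the sketch; taquba's actual schedule may differ:

```rust
use std::time::Duration;

/// Illustrative backoff schedule: base * 2^(attempt - 1), capped.
/// The exact base, factor, and cap taquba uses are assumptions here.
fn backoff(attempt: u32, base: Duration, cap: Duration) -> Duration {
    // Clamp the exponent so the shift cannot overflow for large attempt counts.
    let exp = attempt.saturating_sub(1).min(16);
    base.saturating_mul(1u32 << exp).min(cap)
}

fn main() {
    // attempt 1 waits 1s, attempt 2 waits 2s, ... capped at 30s.
    for attempt in 1..=6 {
        let d = backoff(attempt, Duration::from_secs(1), Duration::from_secs(30));
        println!("attempt {attempt}: wait {d:?}");
    }
}
```

The cap matters in practice: without it, a job nacked a few dozen times would wait effectively forever instead of settling at a steady retry interval.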

Stability

Taquba is pre-1.0. The Rust API may evolve between minor versions per Cargo's standard 0.x.y semantics (0.1 -> 0.2 may break source compatibility), and the on-disk format on object storage is not guaranteed stable across minor versions either. Treat a Taquba minor-version bump as a one-way migration: drain your queue first, or be prepared to start the bucket fresh.

Patch releases (0.1.0 -> 0.1.1) preserve both the Rust API and the on-disk format.

Install

The in-memory and local-disk stores need no feature flags, which makes them handy for tests and for the quick start below:

cargo add taquba
cargo add tokio --features full

For production, opt in to exactly one cloud backend:

cargo add taquba --features aws    # S3 / MinIO
cargo add taquba --features gcp    # Google Cloud Storage
cargo add taquba --features azure  # Azure Blob
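Equivalently, the dependency can be declared in Cargo.toml by hand (the version shown matches this release; adjust as needed):

```toml
[dependencies]
taquba = { version = "0.4", features = ["aws"] }
tokio = { version = "1", features = ["full"] }
```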

Quick start

use std::sync::Arc;
use std::time::Duration;
use taquba::{Queue, object_store::memory::InMemory};

#[tokio::main]
async fn main() -> taquba::Result<()> {
    let q = Queue::open(Arc::new(InMemory::new()), "demo").await?;

    q.enqueue("email", b"alice@example.com".to_vec()).await?;

    if let Some(job) = q.claim("email", Duration::from_secs(30)).await? {
        // ... do the work ...
        q.ack(&job).await?;
    }

    q.close().await
}

Worker loop

Implement Worker and let run_worker handle the claim / ack / nack loop, retries, and graceful shutdown:

use taquba::{run_worker, JobRecord, Worker, WorkerError};

struct EmailWorker;

impl Worker for EmailWorker {
    async fn process(&self, job: &JobRecord) -> Result<(), WorkerError> {
        let to = std::str::from_utf8(&job.payload)?;
        send_email(to).await?;
        Ok(())
    }
}

Pass any future as the shutdown signal: tokio::signal::ctrl_c(), a oneshot, etc. Shutdown is honoured at safe points: between jobs and during idle waits. In-flight jobs always finish, so leases are never abandoned to the reaper. See examples/worker.rs for a full setup.
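The shutdown discipline described above (stop only at safe points between jobs, never abandon work that has started) can be modelled with plain std channels. This is an illustrative sketch of the pattern, not taquba's internals:

```rust
use std::sync::mpsc;

// Sketch of a worker loop that checks the stop signal only *between* jobs,
// so a job that has been claimed always runs to completion.
fn worker_loop(jobs: mpsc::Receiver<String>, shutdown: mpsc::Receiver<()>) -> Vec<String> {
    let mut done = Vec::new();
    loop {
        // Safe point: honour shutdown before claiming the next job.
        if shutdown.try_recv().is_ok() {
            break;
        }
        match jobs.try_recv() {
            Ok(job) => done.push(job), // "process" the claimed job; it always finishes
            Err(mpsc::TryRecvError::Empty) => continue, // idle; a real loop waits on a notify
            Err(mpsc::TryRecvError::Disconnected) => break, // no producers left
        }
    }
    done
}

fn main() {
    let (jtx, jrx) = mpsc::channel();
    let (_stx, srx) = mpsc::channel::<()>();
    jtx.send("a".to_string()).unwrap();
    jtx.send("b".to_string()).unwrap();
    drop(jtx); // no more jobs: the loop drains the queue, then exits
    let done = worker_loop(jrx, srx);
    println!("processed: {done:?}");
}
```

Because the stop check sits outside the processing step, a shutdown that arrives mid-job takes effect only at the next safe point, which is exactly why leases never need to be reclaimed by the reaper on a clean exit.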

License

Apache-2.0