# RunAt

A distributed job scheduler for Rust with PostgreSQL backend support.
## Features

- **Distributed Job Scheduling**: Run jobs across multiple workers with PostgreSQL-backed coordination
- **Application Context**: Pass database connections, config, and other state to job handlers
- **Cron Support**: Schedule recurring jobs using cron expressions
- **Failed Jobs Queue**: Failed jobs are moved to a separate queue
- **Retry Mechanisms**: Built-in exponential backoff and custom retry strategies
- **Type-Safe Jobs**: Leverage Rust's type system for job definitions
- **Async/Await**: Built on Tokio for efficient async job execution
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
runat = "0.2.2"
```

PostgreSQL support is enabled by default; to request it explicitly:

```toml
[dependencies]
runat = { version = "0.2.2", features = ["postgres"] }
```

Optional features:

- `postgres` - PostgreSQL backend (enabled by default)
- `tracing` - Tracing support for observability
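Features combine in the usual Cargo way; for example, to keep the default PostgreSQL backend and also enable tracing:

```toml
[dependencies]
runat = { version = "0.2.2", features = ["postgres", "tracing"] }
```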
## Quick Start

### Define Your Application Context

The context lets you pass shared state (database pools, config, HTTP clients, etc.) to your job handlers. It must implement `Clone`, since each worker receives its own copy:

```rust
use sqlx::PgPool;

#[derive(Clone)]
pub struct AppContext {
    pub db: PgPool,
    pub api_key: String,
}
```
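A note on the `Clone` bound: clones of the context should be cheap, so expensive shared state is usually wrapped in `Arc` (sqlx's `PgPool` is already reference-counted internally, so cloning it is cheap). A standalone sketch of the pattern, not runat API:

```rust
use std::sync::Arc;

// Standalone sketch: a context whose clones share one allocation.
// `shared_config` stands in for any expensive-to-copy state.
#[derive(Clone)]
struct AppContext {
    api_key: String,
    shared_config: Arc<Vec<String>>, // cloning bumps a refcount, not the data
}

fn main() {
    let ctx = AppContext {
        api_key: "secret".to_string(),
        shared_config: Arc::new(vec!["option-a".to_string()]),
    };
    let worker_copy = ctx.clone();
    // Both clones point at the same underlying allocation.
    assert!(Arc::ptr_eq(&ctx.shared_config, &worker_copy.shared_config));
    println!("clones share config: {}", worker_copy.shared_config[0]);
}
```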
### Define a Job

```rust
use runat::{BackgroundJob, Executable, JobResult};
use async_trait::async_trait;
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize, BackgroundJob)]
pub struct SendEmailJob {
    pub to: String,
    pub subject: String,
    pub body: String,
}

#[async_trait]
impl Executable<AppContext> for SendEmailJob {
    async fn execute(&mut self, _ctx: &AppContext) -> JobResult<()> {
        println!("Sending email to {}: {}", self.to, self.subject);
        Ok(())
    }
}
```
### Create the Queue and Run Workers

```rust
use runat::{IntoJob, JobQueue, JobQueueConfig, PostgresDatastore, JobResult};
use sqlx::postgres::PgPoolOptions;
use std::sync::Arc;

#[tokio::main]
async fn main() -> JobResult<()> {
    let pool = PgPoolOptions::new()
        .max_connections(10)
        .connect("postgres://user:pass@localhost/db")
        .await?;

    // Create the datastore and run its schema migrations.
    let datastore = PostgresDatastore::new(pool.clone()).await?;
    datastore.migrate().await?;

    let ctx = AppContext {
        db: pool,
        api_key: "secret".to_string(),
    };

    let queue = JobQueue::new(Arc::new(datastore), JobQueueConfig::default(), ctx);

    // Register every job type before workers start processing.
    queue.register::<SendEmailJob>()?;

    queue.enqueue(
        SendEmailJob {
            to: "user@example.com".to_string(),
            subject: "Welcome!".to_string(),
            body: "Thanks for signing up!".to_string(),
        }
        .job()?
    ).await?;

    let queue_clone = queue.clone();
    let worker = tokio::spawn(async move {
        queue_clone.start_worker().await
    });

    // Keep the process alive so the worker can process jobs; in a real
    // service you would await a shutdown signal instead.
    let _ = worker.await;

    Ok(())
}
```
## Scheduled Jobs with Cron

```rust
queue.enqueue(
    SendEmailJob {
        to: "admin@example.com".to_string(),
        subject: "Daily Report".to_string(),
        body: "Here's your daily report".to_string(),
    }
    .job()?
    // Seconds-resolution, 7-field expression: fires every 10 seconds.
    .cron("*/10 * * * * * *")?
).await?;
```
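The expression above appears to use a seconds-resolution, 7-field cron layout (an assumption based on the example; the authoritative grammar is whatever runat's cron parser accepts). A standalone, stdlib-only sketch labeling the fields:

```rust
// Standalone sketch: label the fields of the assumed 7-field cron layout.
fn main() {
    let expr = "*/10 * * * * * *";
    let names = ["second", "minute", "hour", "day-of-month", "month", "day-of-week", "year"];
    let fields: Vec<&str> = expr.split_whitespace().collect();
    assert_eq!(fields.len(), names.len());
    for (name, field) in names.iter().zip(&fields) {
        println!("{name:>12}: {field}");
    }
}
```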
## Using Context in Jobs

Jobs receive a reference to the context during execution, giving access to shared resources. The `pre_execute` and `post_execute` hooks run around the main handler:

```rust
use runat::{BackgroundJob, Executable, JobResult};
use serde::{Deserialize, Serialize};
use async_trait::async_trait;

#[derive(Debug, Clone, Serialize, Deserialize, BackgroundJob)]
pub struct ProcessPayment {
    pub user_id: String,
    pub amount: f64,
}

#[async_trait]
impl Executable<AppContext> for ProcessPayment {
    async fn execute(&mut self, ctx: &AppContext) -> JobResult<()> {
        sqlx::query("INSERT INTO payments (user_id, amount) VALUES ($1, $2)")
            .bind(&self.user_id)
            .bind(self.amount)
            .execute(&ctx.db)
            .await?;
        println!("Using API key: {}", ctx.api_key);
        Ok(())
    }

    async fn pre_execute(&mut self, _ctx: &AppContext) {
        println!("About to process payment using db pool");
    }

    async fn post_execute(&mut self, _ctx: &AppContext, result: JobResult<()>) -> JobResult<()> {
        if result.is_err() {
            // Inspect or report the failure here before handing it back.
        }
        result
    }
}
```
## Job Registration

**Important**: You must register job handlers with the queue before workers can process them.

```rust
queue.register::<SendEmailJob>()?;
queue.register::<ProcessPayment>()?;

queue.start_worker().await?;
```

If a worker encounters a job type that hasn't been registered, it will fail the job with:

```
No handler registered for job type: SendEmailJob. Call queue.register::<T>() before starting workers.
```
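Why registration is required can be pictured as a lookup table. A conceptual sketch (not runat's internals): a worker dispatches each job by looking up a handler keyed by type name, so a type that was never registered has no entry to dispatch to.

```rust
use std::collections::HashMap;

// Hypothetical handler signature for the sketch only.
type Handler = fn(payload: &str) -> Result<(), String>;

fn main() {
    let mut registry: HashMap<&str, Handler> = HashMap::new();
    registry.insert("SendEmailJob", |payload| {
        println!("handling SendEmailJob: {payload}");
        Ok(())
    });

    // Registered type: a handler is found and the job can run.
    assert!(registry.get("SendEmailJob").is_some());
    // Unregistered type: no handler, so the worker must fail the job.
    assert!(registry.get("ProcessPayment").is_none());
}
```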
## Running Workers

**Option 1**: Run a worker directly from the queue (recommended):

```rust
let queue_clone = queue.clone();
tokio::spawn(async move {
    queue_clone.start_worker().await
});
```

**Option 2**: Create a worker instance:

```rust
let worker = queue.worker();
tokio::spawn(async move {
    worker.run().await
});
```
## Retry Strategies

Set a per-job retry policy and attempt limit when enqueueing:

```rust
use runat::Retry;
use chrono::Duration;

queue.enqueue(
    SendEmailJob {
        to: "user@example.com".to_string(),
        subject: "Welcome!".to_string(),
        body: "Thanks for signing up!".to_string(),
    }
    .job()?
    .set_max_attempts(3)
    // Retry failed runs at a fixed 30-second interval.
    .retry(Retry::fixed(Duration::seconds(30)))
).await?;
```
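The feature list also mentions built-in exponential backoff. As a conceptual, stdlib-only sketch (runat's actual strategy may differ in details), an exponential policy typically doubles the delay per attempt up to a cap:

```rust
// Sketch: delay = base * 2^attempt, capped at a maximum.
fn backoff_secs(base: u64, attempt: u32, max: u64) -> u64 {
    base.saturating_mul(2u64.saturating_pow(attempt)).min(max)
}

fn main() {
    // With base 30s and a 300s cap: 30, 60, 120, 240, 300, ...
    for attempt in 0..5 {
        println!("attempt {attempt}: retry in {}s", backoff_secs(30, attempt, 300));
    }
}
```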
## Simple Jobs Without Context

If your jobs don't need shared state, use `()` as the context type:

```rust
use runat::{BackgroundJob, Executable, JobResult};
use async_trait::async_trait;
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize, BackgroundJob)]
pub struct SimpleJob {
    pub message: String,
}

#[async_trait]
impl Executable<()> for SimpleJob {
    async fn execute(&mut self, _ctx: &()) -> JobResult<()> {
        println!("{}", self.message);
        Ok(())
    }
}
```

Then build the queue with a unit context:

```rust
let queue = JobQueue::with_datastore(Arc::new(datastore), ());
```
## Running Tests

```shell
cargo test
```