# Hammerwork

A high-performance, database-driven job queue for Rust with comprehensive features for production workloads.

## Features
- Dynamic Job Spawning: Jobs can dynamically create child jobs during execution for fan-out processing patterns, with full parent-child relationship tracking and lineage management
- Web Dashboard: Modern real-time web interface for monitoring queues, managing jobs, and system administration with authentication and WebSocket updates
- TestQueue Framework: Complete in-memory testing implementation with MockClock for deterministic testing of time-dependent features, workflows, and job processing
- Job Tracing & Correlation: Comprehensive distributed tracing with OpenTelemetry integration, trace IDs, correlation IDs, and lifecycle event hooks
- Job Dependencies & Workflows: Create complex data processing pipelines with job dependencies, sequential chains, and parallel processing with synchronization barriers
- Job Archiving & Retention: Policy-driven archival with configurable retention periods, payload compression, and automated cleanup for compliance and performance
- Multi-database support: PostgreSQL and MySQL backends with optimized dependency queries
- Advanced retry strategies: Exponential backoff, linear, Fibonacci, and custom retry patterns with jitter
- Job prioritization: Five priority levels with weighted and strict scheduling algorithms
- Result storage: Database and in-memory result storage with TTL and automatic cleanup
- Worker autoscaling: Dynamic worker pool scaling based on queue depth and configurable thresholds
- Batch operations: High-performance bulk job enqueuing with optimized worker processing
- Cron scheduling: Full cron expression support with timezone awareness
- Rate limiting: Token bucket rate limiting with configurable burst limits
- Monitoring: Prometheus metrics and advanced alerting (enabled by default)
- Job timeouts: Per-job and worker-level timeout configuration
- Statistics: Comprehensive job statistics and dead job management
- Async/await: Built on Tokio for high concurrency
- Type-safe: Leverages Rust's type system for reliability
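The retry-strategy bullet above is mostly arithmetic. As a rough standalone illustration (not Hammerwork's API), an exponential-backoff delay with an optional jitter fraction can be computed like this:

```rust
use std::time::Duration;

/// Exponential backoff: base * 2^attempt, capped at `max`, then scaled by
/// an optional jitter fraction (0.0 = no jitter; shown deterministic for clarity).
fn backoff_delay(base: Duration, attempt: u32, max: Duration, jitter: f64) -> Duration {
    // Saturate the exponent so large attempt counts don't overflow.
    let exp = 2u64.saturating_pow(attempt.min(16));
    let capped = base.saturating_mul(exp as u32).min(max);
    // Apply jitter as a fraction of the capped delay.
    Duration::from_millis((capped.as_millis() as f64 * (1.0 + jitter)) as u64)
}

fn main() {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(30);
    assert_eq!(backoff_delay(base, 0, max, 0.0), Duration::from_millis(100));
    assert_eq!(backoff_delay(base, 3, max, 0.0), Duration::from_millis(800));
    // Large attempt counts hit the cap instead of growing unbounded.
    assert_eq!(backoff_delay(base, 10, max, 0.0), Duration::from_secs(30));
}
```

In production the jitter fraction would come from a random source so that many failing jobs don't retry in lockstep.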
## Installation

### Core Library

```toml
[dependencies]
# Default features include metrics and alerting
hammerwork = { version = "1.4", features = ["postgres"] }

# or
hammerwork = { version = "1.4", features = ["mysql"] }

# With distributed tracing
hammerwork = { version = "1.4", features = ["postgres", "tracing"] }

# Minimal installation
hammerwork = { version = "1.4", features = ["postgres"], default-features = false }
```
Feature Flags: `postgres`, `mysql`, `metrics` (default), `alerting` (default), `tracing` (optional), `test` (for TestQueue)
### Web Dashboard (Optional)

```bash
# Install the web dashboard
cargo install hammerwork-web

# Or add to your project
cargo add hammerwork-web
```

Start the dashboard:

```bash
# Dashboard available at http://localhost:8080
hammerwork-web
```
## Quick Start
See the Quick Start Guide for complete examples with PostgreSQL and MySQL.
## Documentation
- Quick Start Guide - Get started with PostgreSQL and MySQL
- TestQueue Framework - In-memory testing with MockClock for unit tests and time control
- Web Dashboard - Real-time web interface for queue monitoring and job management
- Job Tracing & Correlation - Distributed tracing, correlation IDs, and OpenTelemetry integration
- Job Dependencies & Workflows - Complex pipelines, job dependencies, and orchestration
- Dynamic Job Spawning - Fan-out processing, parent-child relationships, and spawn tree visualization
- Job Archiving & Retention - Policy-driven archival, compression, and compliance management
- Job Types & Configuration - Job creation, priorities, timeouts, cron jobs
- Worker Configuration - Worker setup, rate limiting, statistics
- Cron Scheduling - Recurring jobs with timezone support
- Priority System - Five-level priority system with weighted scheduling
- Batch Operations - High-performance bulk job processing
- Database Migrations - Progressive schema updates and database setup
- Monitoring & Alerting - Prometheus metrics and notification systems
## Basic Example

A minimal sketch assuming the PostgreSQL backend (see the Quick Start Guide for complete, tested examples):

```rust
use hammerwork::{Job, JobQueue};
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to PostgreSQL and wrap the pool in a job queue
    let pool = sqlx::postgres::PgPool::connect("postgres://localhost/hammerwork").await?;
    let queue = Arc::new(JobQueue::new(pool));

    // Enqueue a job with a JSON payload
    let job = Job::new("email_queue".to_string(), json!({ "to": "user@example.com" }));
    queue.enqueue(job).await?;
    Ok(())
}
```
## Workflow Example
Create complex data processing pipelines with job dependencies:
```rust
use hammerwork::{Job, workflow::{JobGroup, FailurePolicy}};
use serde_json::json;

// NOTE: workflow type names here (JobGroup, FailurePolicy) sketch the API;
// the method names come from the workflow guide.

// Sequential pipeline: job1 → job2 → job3
let job1 = Job::new("pipeline".to_string(), json!({"step": "extract"}));
let job2 = Job::new("pipeline".to_string(), json!({"step": "transform"}))
    .depends_on(&job1.id);
let job3 = Job::new("pipeline".to_string(), json!({"step": "load"}))
    .depends_on(&job2.id);

// Parallel processing with synchronization barrier
let parallel_jobs = vec![
    Job::new("pipeline".to_string(), json!({"shard": 1})),
    Job::new("pipeline".to_string(), json!({"shard": 2})),
];
let final_job = Job::new("pipeline".to_string(), json!({"step": "report"}));

let workflow = JobGroup::new("etl_pipeline")
    .add_parallel_jobs(parallel_jobs) // These run concurrently
    .then(final_job)                  // This waits for all parallel jobs
    .with_failure_policy(FailurePolicy::ContinueOnFailure);

// Enqueue the entire workflow
queue.enqueue_workflow(workflow).await?;
```
Jobs will only execute when their dependencies are satisfied, enabling sophisticated data processing pipelines and business workflows.
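The gating rule can be sketched in isolation. The miniature model below (a hypothetical `PendingJob` type, not Hammerwork's internals) runs a job only once every job it depends on has completed:

```rust
use std::collections::HashSet;

// A job id plus the ids it depends on (hypothetical miniature model).
struct PendingJob {
    id: &'static str,
    depends_on: Vec<&'static str>,
}

/// Return jobs in an order where every job runs after all its dependencies.
fn run_order(mut pending: Vec<PendingJob>) -> Vec<&'static str> {
    let mut done: HashSet<&str> = HashSet::new();
    let mut order = Vec::new();
    while !pending.is_empty() {
        // A job is runnable once all of its dependencies are done.
        let pos = pending
            .iter()
            .position(|j| j.depends_on.iter().all(|d| done.contains(d)))
            .expect("dependency cycle");
        let job = pending.remove(pos);
        done.insert(job.id);
        order.push(job.id);
    }
    order
}

fn main() {
    // Two parallel jobs feeding a synchronization barrier.
    let jobs = vec![
        PendingJob { id: "final", depends_on: vec!["a", "b"] },
        PendingJob { id: "a", depends_on: vec![] },
        PendingJob { id: "b", depends_on: vec![] },
    ];
    assert_eq!(run_order(jobs), vec!["a", "b", "final"]);
}
```

A real queue does the same check against the database when polling for the next runnable job, rather than in memory.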
## Tracing Example
Enable comprehensive distributed tracing with OpenTelemetry integration:
```rust
use hammerwork::Job;
use serde_json::json;

// Sketch: attach trace/correlation identifiers when enqueuing.
// Builder names are indicative; see the tracing guide for the full API.
let job = Job::new("payments".to_string(), json!({"order_id": 1234}))
    .with_trace_id("4bf92f3577b34da6a3ce929d0e0e4736")
    .with_correlation_id("order-1234");
queue.enqueue(job).await?;
```
This enables end-to-end tracing across your entire job processing pipeline with automatic span creation, correlation tracking, and integration with observability platforms like Jaeger, Zipkin, or DataDog.
## Testing Example
Test your job processing logic with the in-memory TestQueue framework:
```rust
use hammerwork::queue::test::{MockClock, TestQueue};
use hammerwork::Job;
use serde_json::json;
use std::time::Duration;

// Sketch: constructor and method names are indicative; see the TestQueue guide.
#[tokio::test]
async fn delayed_job_runs_after_time_advances() {
    let clock = MockClock::new();
    let queue = TestQueue::with_clock(clock.clone());

    let job = Job::new("reports".to_string(), json!({"kind": "daily"}));
    queue.enqueue(job).await.unwrap();

    // Advance virtual time instead of sleeping
    clock.advance(Duration::from_secs(60 * 60));
}
```
The TestQueue provides complete compatibility with the DatabaseQueue trait while offering deterministic time control through MockClock, making it perfect for testing complex workflows, retry logic, and time-dependent job processing.
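The general pattern behind deterministic time control can be shown standalone (a hypothetical `FakeClock`, not TestQueue's actual `MockClock`): time is a shared value the test advances explicitly instead of sleeping.

```rust
use std::sync::{Arc, Mutex};
use std::time::Duration;

/// A controllable clock: "now" is whatever the test says it is.
#[derive(Clone)]
struct FakeClock {
    now: Arc<Mutex<Duration>>, // time since an arbitrary epoch
}

impl FakeClock {
    fn new() -> Self {
        FakeClock { now: Arc::new(Mutex::new(Duration::ZERO)) }
    }
    fn now(&self) -> Duration {
        *self.now.lock().unwrap()
    }
    fn advance(&self, by: Duration) {
        *self.now.lock().unwrap() += by;
    }
}

fn main() {
    let clock = FakeClock::new();
    let scheduled_at = clock.now() + Duration::from_secs(60); // job due in 1 min
    assert!(clock.now() < scheduled_at); // not due yet

    // Advance time deterministically instead of sleeping.
    clock.advance(Duration::from_secs(61));
    assert!(clock.now() >= scheduled_at); // now due
}
```

Because clones share the same `Arc`, a queue holding one handle and a test holding another always agree on the current time.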
## Job Archiving Example
Configure automatic job archival for compliance and database performance:
```rust
use hammerwork::archive::{ArchivalConfig, ArchivalPolicy};
use std::time::Duration;

// Configure archival policy (type names and arguments sketch the archival API;
// the method names come from the archiving guide)
let policy = ArchivalPolicy::new()
    .archive_completed_after(Duration::from_secs(7 * 24 * 3600))  // Archive completed jobs after 7 days
    .archive_failed_after(Duration::from_secs(30 * 24 * 3600))    // Keep failed jobs for 30 days
    .archive_dead_after(Duration::from_secs(14 * 24 * 3600))      // Archive dead jobs after 14 days
    .archive_timed_out_after(Duration::from_secs(21 * 24 * 3600)) // Archive timed out jobs after 21 days
    .purge_archived_after(Duration::from_secs(365 * 24 * 3600))   // Purge archived jobs after 1 year
    .compress_archived_payloads(true)                             // Enable gzip compression
    .with_batch_size(1000)                                        // Process up to 1000 jobs per batch
    .enabled(true);

let config = ArchivalConfig::new()
    .with_compression_level(6)            // Balanced compression
    .with_compression_verification(true); // Verify compression integrity

// Run archival (typically scheduled as a cron job)
let stats = queue.archive_jobs(&policy, &config).await?;
println!("Archival stats: {stats:?}");

// Restore an archived job if needed
let job = queue.restore_archived_job(job_id).await?;

// List archived jobs with filtering
let archived_jobs = queue.list_archived_jobs(/* queue name / date filters */).await?;

// Purge old archived jobs for GDPR compliance
let purged = queue.purge_archived_jobs(/* cutoff date */).await?;
```
Archival moves completed/failed jobs to a separate table with compressed payloads, reducing the main table size while maintaining compliance requirements.
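A retention period like "archive completed jobs after 7 days" reduces to a cutoff comparison; a minimal sketch of that check (illustrative, not the crate's implementation):

```rust
use std::time::{Duration, SystemTime};

/// True if a job that finished at `completed_at` has outlived `retention`.
fn should_archive(completed_at: SystemTime, retention: Duration, now: SystemTime) -> bool {
    match now.duration_since(completed_at) {
        Ok(age) => age >= retention,
        Err(_) => false, // completed_at is in the future; don't archive
    }
}

fn main() {
    let retention = Duration::from_secs(7 * 24 * 3600); // 7 days
    let now = SystemTime::now();
    let eight_days_ago = now - Duration::from_secs(8 * 24 * 3600);
    let yesterday = now - Duration::from_secs(24 * 3600);
    assert!(should_archive(eight_days_ago, retention, now));   // past retention: archive
    assert!(!should_archive(yesterday, retention, now));       // still fresh: keep
}
```

In SQL terms this is the same as selecting rows whose completion timestamp is older than `now - retention`, which is why archival runs efficiently in batches.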
## Web Dashboard
Start the real-time web dashboard for monitoring and managing your job queues:
The dashboard server can be started against PostgreSQL, with authentication enabled, or with a custom configuration; see the Web Dashboard guide for the exact commands.
The dashboard provides:
- Real-time Monitoring: Live queue statistics, job counts, and throughput metrics
- Job Management: View, retry, cancel, and inspect jobs with detailed payload information
- Queue Administration: Clear queues, monitor performance, and manage priorities
- Interactive Charts: Throughput graphs and job status distributions
- WebSocket Updates: Real-time updates without page refresh
- REST API: Complete programmatic access to all dashboard features
- Authentication: Secure access with bcrypt password hashing and rate limiting
Access the dashboard at http://localhost:8080 after starting the server.
## Database Setup

### Using Migrations (Recommended)
Hammerwork provides a migration system for progressive schema updates:
Build the migration tool, run the migrations, and check their status; the Database Migrations guide lists each command. Once migrations complete, the web dashboard can be started against the same database.
### Application Usage
Once migrations are run, your application can use the queue directly:
```rust
// In your application - no setup needed, just use the queue
let pool = sqlx::postgres::PgPool::connect(&database_url).await?;
let queue = JobQueue::new(pool);

// Start enqueuing jobs immediately
let job = Job::new("email_queue".to_string(), json!({"to": "user@example.com"}));
queue.enqueue(job).await?;
```
### Database Schema
Hammerwork uses optimized tables with comprehensive indexing:
- `hammerwork_jobs` - Main job table with priorities, timeouts, cron scheduling, retry strategies, result storage, and distributed tracing fields
- `hammerwork_jobs_archive` - Archive table for completed/failed jobs with compressed payloads (v1.3.0+)
- `hammerwork_batches` - Batch metadata and tracking (v0.7.0+)
- `hammerwork_job_results` - Job result storage with TTL and expiration (v0.8.0+)
- `hammerwork_migrations` - Migration tracking for schema evolution
The schema supports all features including job prioritization, advanced retry strategies, timeouts, cron scheduling, batch processing, result storage with TTL, distributed tracing with trace/correlation IDs, worker autoscaling, job archival with compression, and comprehensive lifecycle tracking. See Database Migrations for details.
## Development
Comprehensive testing with Docker containers:
Makefile targets (e.g. `make integration-all`) start the databases in Docker and run all tests, or run tests against a single database.
See docs/integration-testing.md for complete development setup.
## Examples

Working examples in `examples/`:

- `postgres_example.rs` - PostgreSQL with timeouts and statistics
- `mysql_example.rs` - MySQL with workers and priorities
- `cron_example.rs` - Cron scheduling with timezones
- `priority_example.rs` - Priority system demonstration
- `batch_example.rs` - Bulk job enqueuing and processing
- `worker_batch_example.rs` - Worker batch processing features
- `retry_strategies.rs` - Advanced retry patterns with exponential backoff and jitter
- `result_storage_example.rs` - Job result storage and retrieval
- `autoscaling_example.rs` - Dynamic worker pool scaling based on queue depth
- `tracing_example.rs` - Distributed tracing with OpenTelemetry and event hooks
## Contributing

- Fork the repository and create a feature branch
- Run tests: `make integration-all`
- Ensure code follows Rust standards (`cargo fmt`, `cargo clippy`)
- Submit a pull request with tests and documentation
## License
This project is licensed under the MIT License - see the LICENSE-MIT file for details.