# Hammerwork

A high-performance, database-driven job queue for Rust with comprehensive features for production workloads.

## Features
- 📊 Web Dashboard: Modern real-time web interface for monitoring queues, managing jobs, and system administration with authentication and WebSocket updates
- 🔍 Job Tracing & Correlation: Comprehensive distributed tracing with OpenTelemetry integration, trace IDs, correlation IDs, and lifecycle event hooks
- 🔗 Job Dependencies & Workflows: Create complex data processing pipelines with job dependencies, sequential chains, and parallel processing with synchronization barriers
- Multi-database support: PostgreSQL and MySQL backends with optimized dependency queries
- Advanced retry strategies: Exponential backoff, linear, Fibonacci, and custom retry patterns with jitter
- Job prioritization: Five priority levels with weighted and strict scheduling algorithms
- Result storage: Database and in-memory result storage with TTL and automatic cleanup
- Worker autoscaling: Dynamic worker pool scaling based on queue depth and configurable thresholds
- Batch operations: High-performance bulk job enqueuing with optimized worker processing
- Cron scheduling: Full cron expression support with timezone awareness
- Rate limiting: Token bucket rate limiting with configurable burst limits
- Monitoring: Prometheus metrics and advanced alerting (enabled by default)
- Job timeouts: Per-job and worker-level timeout configuration
- Statistics: Comprehensive job statistics and dead job management
- Async/await: Built on Tokio for high concurrency
- Type-safe: Leverages Rust's type system for reliability
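The retry strategies listed above combine a growth curve with a cap (and, in practice, random jitter). A minimal sketch of the delay math only, not Hammerwork's implementation; the type and function names here are illustrative:

```rust
/// Growth curves for retry delays: a sketch of the general math behind
/// exponential, linear, and Fibonacci backoff. Real strategies also add
/// random jitter so many failing jobs don't retry in lockstep.
#[derive(Clone, Copy)]
enum RetryStrategy {
    Linear { step_ms: u64 },
    Exponential { base_ms: u64, max_ms: u64 },
    Fibonacci { base_ms: u64 },
}

/// Delay before the `attempt`-th retry (attempt numbering starts at 1).
fn retry_delay_ms(strategy: RetryStrategy, attempt: u32) -> u64 {
    match strategy {
        RetryStrategy::Linear { step_ms } => step_ms * attempt as u64,
        RetryStrategy::Exponential { base_ms, max_ms } => {
            // base * 2^(attempt - 1), capped at max_ms so delays stay bounded
            let exp = attempt.saturating_sub(1).min(32);
            base_ms.saturating_mul(1u64 << exp).min(max_ms)
        }
        RetryStrategy::Fibonacci { base_ms } => {
            // 1, 1, 2, 3, 5, ... times the base delay
            let (mut a, mut b) = (1u64, 1u64);
            for _ in 1..attempt {
                let next = a + b;
                a = b;
                b = next;
            }
            base_ms.saturating_mul(a)
        }
    }
}

fn main() {
    for attempt in 1..=5 {
        let d = retry_delay_ms(
            RetryStrategy::Exponential { base_ms: 100, max_ms: 30_000 },
            attempt,
        );
        println!("retry {attempt}: wait {d} ms");
    }
}
```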
## Installation

### Core Library

```toml
[dependencies]
# Default features include metrics and alerting
hammerwork = { version = "1.2", features = ["postgres"] }
# or
hammerwork = { version = "1.2", features = ["mysql"] }

# With distributed tracing
hammerwork = { version = "1.2", features = ["postgres", "tracing"] }

# Minimal installation
hammerwork = { version = "1.2", features = ["postgres"], default-features = false }
```

Feature flags: `postgres`, `mysql`, `metrics` (default), `alerting` (default), `tracing` (optional)
### Web Dashboard (Optional)

```bash
# Install the web dashboard
# Or add to your project
```

Start the dashboard:

```bash
# Dashboard available at http://localhost:8080
```
## Quick Start
See the Quick Start Guide for complete examples with PostgreSQL and MySQL.
## Documentation
- Quick Start Guide - Get started with PostgreSQL and MySQL
- Web Dashboard - Real-time web interface for queue monitoring and job management
- Job Tracing & Correlation - Distributed tracing, correlation IDs, and OpenTelemetry integration
- Job Dependencies & Workflows - Complex pipelines, job dependencies, and orchestration
- Job Types & Configuration - Job creation, priorities, timeouts, cron jobs
- Worker Configuration - Worker setup, rate limiting, statistics
- Cron Scheduling - Recurring jobs with timezone support
- Priority System - Five-level priority system with weighted scheduling
- Batch Operations - High-performance bulk job processing
- Database Migrations - Progressive schema updates and database setup
- Monitoring & Alerting - Prometheus metrics and notification systems
## Basic Example

A minimal end-to-end sketch (exact signatures may differ; see the Quick Start Guide for complete, runnable examples):

```rust
use hammerwork::{Job, JobQueue, Worker, WorkerPool};
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect and create the queue (run migrations first; see Database Setup)
    let pool = sqlx::PgPool::connect("postgresql://localhost/hammerwork").await?;
    let queue = Arc::new(JobQueue::new(pool));

    // Enqueue a job
    let job = Job::new("email_queue".to_string(), json!({"to": "user@example.com"}));
    queue.enqueue(job).await?;

    // Process jobs with a worker (handler signature is illustrative)
    let worker = Worker::new(
        queue.clone(),
        "email_queue".to_string(),
        Arc::new(|job: Job| {
            Box::pin(async move {
                println!("Processing {:?}", job.payload);
                Ok(())
            })
        }),
    );
    let mut workers = WorkerPool::new();
    workers.add_worker(worker);
    workers.start().await?;
    Ok(())
}
```
## Workflow Example

Create complex data processing pipelines with job dependencies (queue names, payloads, and the failure-policy variant below are illustrative; see the Job Dependencies & Workflows guide for the exact API):

```rust
use hammerwork::{Job, JobGroup, FailurePolicy};
use serde_json::json;

// Sequential pipeline: job1 → job2 → job3
let job1 = Job::new("pipeline".to_string(), json!({"step": "extract"}));
let job2 = Job::new("pipeline".to_string(), json!({"step": "transform"}))
    .depends_on(&job1.id);
let job3 = Job::new("pipeline".to_string(), json!({"step": "load"}))
    .depends_on(&job2.id);

// Parallel processing with synchronization barrier
let parallel_jobs = vec![
    Job::new("pipeline".to_string(), json!({"shard": 1})),
    Job::new("pipeline".to_string(), json!({"shard": 2})),
];
let final_job = Job::new("pipeline".to_string(), json!({"step": "report"}));

let workflow = JobGroup::new("data_pipeline")
    .add_parallel_jobs(parallel_jobs) // These run concurrently
    .then(final_job)                  // This waits for all parallel jobs
    .with_failure_policy(FailurePolicy::ContinueOnFailure);

// Enqueue the entire workflow
queue.enqueue_workflow(workflow).await?;
```
Jobs will only execute when their dependencies are satisfied, enabling sophisticated data processing pipelines and business workflows.
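The "run only when dependencies are satisfied" rule above is, in essence, a topological ordering of the job graph. A self-contained sketch of that idea (Kahn's algorithm); Hammerwork itself tracks readiness in the database rather than in memory, so `execution_order` here is purely illustrative:

```rust
use std::collections::{HashMap, VecDeque};

/// Order jobs so each runs only after its dependencies complete.
/// `deps` maps each job name to the jobs it depends on.
fn execution_order(deps: &HashMap<&str, Vec<&str>>) -> Option<Vec<String>> {
    // in-degree = number of unfinished dependencies per job
    let mut indegree: HashMap<&str, usize> =
        deps.iter().map(|(&j, ds)| (j, ds.len())).collect();
    // reverse edges: dependency -> jobs that wait on it
    let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
    for (&job, ds) in deps {
        for &d in ds {
            dependents.entry(d).or_default().push(job);
        }
    }
    // Jobs with no dependencies are immediately runnable
    let mut ready: VecDeque<&str> = indegree
        .iter()
        .filter(|&(_, &d)| d == 0)
        .map(|(&j, _)| j)
        .collect();
    let mut order = Vec::new();
    while let Some(job) = ready.pop_front() {
        order.push(job.to_string());
        // "Completing" a job unblocks its dependents
        for &dep in dependents.get(job).map(Vec::as_slice).unwrap_or(&[]) {
            let count = indegree.get_mut(dep).unwrap();
            *count -= 1;
            if *count == 0 {
                ready.push_back(dep);
            }
        }
    }
    // A dependency cycle leaves some jobs permanently blocked
    (order.len() == deps.len()).then_some(order)
}

fn main() {
    let deps: HashMap<&str, Vec<&str>> = HashMap::from([
        ("extract", vec![]),
        ("transform", vec!["extract"]),
        ("load", vec!["transform"]),
    ]);
    println!("{:?}", execution_order(&deps));
}
```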
## Tracing Example

Enable comprehensive distributed tracing with OpenTelemetry integration (builder method names below are illustrative; see the Job Tracing & Correlation guide for the exact API):

```rust
use hammerwork::{Job, JobQueue};
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pool = sqlx::PgPool::connect("postgresql://localhost/hammerwork").await?;
    let queue = Arc::new(JobQueue::new(pool));

    // Attach trace and correlation IDs so the job's lifecycle events
    // can be stitched into a distributed trace
    let job = Job::new("order_processing".to_string(), json!({"order_id": 12345}))
        .with_trace_id("trace-abc-123")
        .with_correlation_id("order-12345");

    queue.enqueue(job).await?;
    Ok(())
}
```
This enables end-to-end tracing across your entire job processing pipeline with automatic span creation, correlation tracking, and integration with observability platforms like Jaeger, Zipkin, or DataDog.
## Web Dashboard
Start the real-time web dashboard for monitoring and managing your job queues:
```bash
# Start with PostgreSQL
# Start with authentication
# Start with custom configuration
```
The dashboard provides:
- Real-time Monitoring: Live queue statistics, job counts, and throughput metrics
- Job Management: View, retry, cancel, and inspect jobs with detailed payload information
- Queue Administration: Clear queues, monitor performance, and manage priorities
- Interactive Charts: Throughput graphs and job status distributions
- WebSocket Updates: Real-time updates without page refresh
- REST API: Complete programmatic access to all dashboard features
- Authentication: Secure access with bcrypt password hashing and rate limiting
Access the dashboard at http://localhost:8080 after starting the server.
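The token-bucket approach behind the rate limiting mentioned above can be sketched as follows. This is a generic illustration of the technique; the `TokenBucket` type is hypothetical, not part of Hammerwork's API:

```rust
/// A minimal token bucket: requests spend tokens, tokens refill at a
/// steady rate, and the capacity bounds how large a burst is allowed.
struct TokenBucket {
    capacity: f64,       // burst limit: max tokens held at once
    tokens: f64,         // tokens currently available
    refill_per_sec: f64, // steady-state request rate
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        // Start full so an initial burst up to `capacity` is allowed
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    /// Advance time by `elapsed_secs`, refilling up to capacity.
    fn refill(&mut self, elapsed_secs: f64) {
        self.tokens =
            (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
    }

    /// Try to take one token; false means the caller must wait.
    fn try_acquire(&mut self) -> bool {
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Burst of 2 allowed, then 1 request/second sustained
    let mut bucket = TokenBucket::new(2.0, 1.0);
    println!("first: {}", bucket.try_acquire());
    println!("second: {}", bucket.try_acquire());
    println!("third (bucket empty): {}", bucket.try_acquire());
}
```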
## Database Setup

### Using Migrations (Recommended)

Hammerwork provides a migration system for progressive schema updates:

```bash
# Build the migration tool
# Run migrations
# Check migration status
# Start the web dashboard after migrations
```
### Application Usage

Once migrations have been run, your application can use the queue directly (connection string is illustrative):

```rust
// In your application - no setup needed, just use the queue
let pool = sqlx::PgPool::connect("postgresql://localhost/hammerwork").await?;
let queue = JobQueue::new(pool);

// Start enqueuing jobs immediately
let job = Job::new("email_queue".to_string(), json!({"to": "user@example.com"}));
queue.enqueue(job).await?;
```
### Database Schema

Hammerwork uses optimized tables with comprehensive indexing:

- `hammerwork_jobs` - Main job table with priorities, timeouts, cron scheduling, retry strategies, result storage, and distributed tracing fields
- `hammerwork_batches` - Batch metadata and tracking (v0.7.0+)
- `hammerwork_job_results` - Job result storage with TTL and expiration (v0.8.0+)
- `hammerwork_migrations` - Migration tracking for schema evolution
The schema supports all features including job prioritization, advanced retry strategies, timeouts, cron scheduling, batch processing, result storage with TTL, distributed tracing with trace/correlation IDs, worker autoscaling, and comprehensive lifecycle tracking. See Database Migrations for details.
## Development

Comprehensive testing with Docker containers:

```bash
# Start databases and run all tests
# Run specific database tests
```
See docs/integration-testing.md for complete development setup.
## Examples

Working examples in `examples/`:

- `postgres_example.rs` - PostgreSQL with timeouts and statistics
- `mysql_example.rs` - MySQL with workers and priorities
- `cron_example.rs` - Cron scheduling with timezones
- `priority_example.rs` - Priority system demonstration
- `batch_example.rs` - Bulk job enqueuing and processing
- `worker_batch_example.rs` - Worker batch processing features
- `retry_strategies.rs` - Advanced retry patterns with exponential backoff and jitter
- `result_storage_example.rs` - Job result storage and retrieval
- `autoscaling_example.rs` - Dynamic worker pool scaling based on queue depth
- `tracing_example.rs` - Distributed tracing with OpenTelemetry and event hooks
## Contributing

- Fork the repository and create a feature branch
- Run tests: `make integration-all`
- Ensure code follows Rust standards (`cargo fmt`, `cargo clippy`)
- Submit a pull request with tests and documentation
## License
This project is licensed under the MIT License - see the LICENSE-MIT file for details.