§Hammerwork
A high-performance, database-driven job queue for Rust with comprehensive features for production workloads.
§Features
- Multi-database support: PostgreSQL and MySQL backends with feature flags
- Job prioritization: Five priority levels with weighted and strict scheduling algorithms
- Job result storage: Store and retrieve job execution results with TTL support
- Cron scheduling: Full cron expression support with timezone awareness
- Rate limiting: Token bucket rate limiting with configurable burst limits
- Monitoring: Prometheus metrics and advanced alerting (enabled by default)
- Job timeouts: Per-job and worker-level timeout configuration
- Statistics: Comprehensive job statistics and dead job management
- Async/await: Built on Tokio for high concurrency
- Type-safe: Leverages Rust’s type system for reliability
§Quick Start
use hammerwork::{
    Job, JobQueue, Result, Worker, WorkerPool,
    queue::DatabaseQueue, worker::JobHandler,
};
use serde_json::json;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    // Set up the database connection (requires PostgreSQL or MySQL).
    let pool = sqlx::PgPool::connect("postgresql://localhost/hammerwork").await?;
    let queue = Arc::new(JobQueue::new(pool));

    // Note: run database migrations first using `cargo hammerwork migrate`,
    // or use the migration manager programmatically.

    // Create a job handler.
    let handler: JobHandler = Arc::new(|job: Job| {
        Box::pin(async move {
            println!("Processing job: {:?}", job.payload);
            // Your job processing logic here.
            Ok(())
        })
    });

    // Create a worker and add it to a pool.
    let worker = Worker::new(queue.clone(), "default".to_string(), handler);
    let mut worker_pool = WorkerPool::new();
    worker_pool.add_worker(worker);

    // Enqueue a job.
    let job = Job::new("default".to_string(), json!({"task": "send_email"}));
    queue.enqueue(job).await?;

    // Start processing jobs.
    worker_pool.start().await?;
    Ok(())
}
§Core Concepts
§Jobs
Jobs are the fundamental unit of work in Hammerwork. Each job has:
- A unique UUID identifier
- A queue name for routing
- A JSON payload containing work data
- A priority level (Background, Low, Normal, High, Critical)
- Optional scheduling and timeout configuration
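As a minimal sketch of putting these together (the `with_priority` and `with_timeout` builder names are assumptions here, not confirmed API; check the job and priority module docs for the exact methods):

use hammerwork::{Job, JobPriority};
use serde_json::json;
use std::time::Duration;

// Construct a job for the "email" queue; the UUID identifier is assigned on creation.
let job = Job::new("email".to_string(), json!({"to": "user@example.com"}))
    // Hypothetical builder calls for priority and timeout configuration.
    .with_priority(JobPriority::High)
    .with_timeout(Duration::from_secs(30));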
§Workers
Workers poll queues for pending jobs and execute them using provided handlers. Workers support:
- Configurable polling intervals and retry logic
- Priority-aware job selection with weighted or strict algorithms
- Rate limiting and throttling
- Automatic timeout detection and handling
- Statistics collection and metrics reporting
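A hedged sketch of configuring a worker, reusing `queue` and `handler` from the Quick Start. The builder names `with_poll_interval`, `with_max_retries`, and `with_rate_limit`, as well as `RateLimit::per_second`, are assumptions; see the worker and rate_limit modules for the real API:

use hammerwork::{RateLimit, Worker};
use std::time::Duration;

// Hypothetical builder-style worker configuration.
let worker = Worker::new(queue.clone(), "email".to_string(), handler)
    .with_poll_interval(Duration::from_millis(500)) // how often to poll for jobs
    .with_max_retries(5)                            // retry failed jobs up to 5 times
    .with_rate_limit(RateLimit::per_second(10));    // token-bucket throttle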
§Queues
The job queue provides a database-backed persistent store for jobs with:
- ACID transactions for reliable job state management
- Optimized indexes for high-performance job polling
- Support for delayed jobs and cron-based recurring jobs
- Dead job management and bulk operations
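For instance, a delayed job could be enqueued as in the sketch below; `enqueue` appears in the Quick Start, while the `with_delay` builder and its `chrono::Duration` argument are assumptions used to illustrate delayed scheduling:

use hammerwork::{Job, queue::DatabaseQueue};
use serde_json::json;

// Build a job that should become available to workers in five minutes
// (hypothetical `with_delay` builder), then persist it with `enqueue`.
let job = Job::new("reports".to_string(), json!({"report": "weekly"}))
    .with_delay(chrono::Duration::minutes(5));
queue.enqueue(job).await?;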
§Event System
The event system provides real-time job lifecycle tracking with:
- Publish/subscribe pattern for job lifecycle events
- Webhook delivery with authentication and retry policies
- Streaming integration with Kafka, Kinesis, and Pub/Sub
- Flexible event filtering and routing
- Multiple serialization formats and partitioning strategies
§Configuration & Operations
Comprehensive configuration and operational tooling:
- TOML-based configuration with environment variable overrides
- CLI tooling for database migrations, job management, and monitoring
- Development and production configuration presets
- Health checks and graceful shutdown support
§Event System Integration
Hammerwork provides a comprehensive event system for integrating with external systems:
use hammerwork::{
    config::HammerworkConfig,
    events::{EventFilter, EventManager, JobLifecycleEventType},
    streaming::{PartitioningStrategy, StreamBackend, StreamConfig, StreamManager},
    webhooks::{WebhookAuth, WebhookConfig, WebhookManager},
};
use std::collections::HashMap;
use std::sync::Arc;

#[tokio::main]
async fn main() -> hammerwork::Result<()> {
    // Create the event manager.
    let event_manager = Arc::new(EventManager::new_default());

    // Set up webhooks for job completion notifications.
    let webhook_manager = WebhookManager::new(event_manager.clone(), Default::default());

    let webhook = WebhookConfig::new(
        "completion_hook".to_string(),
        "https://api.example.com/job-completed".to_string(),
    )
    .with_auth(WebhookAuth::Bearer {
        token: "your-api-token".to_string(),
    })
    .with_filter(
        EventFilter::new().with_event_types(vec![JobLifecycleEventType::Completed]),
    );

    webhook_manager.add_webhook(webhook).await?;

    // Set up Kafka streaming for analytics.
    let stream_manager = StreamManager::new_default(event_manager.clone());

    let kafka_stream = StreamConfig {
        id: uuid::Uuid::new_v4(),
        name: "analytics_stream".to_string(),
        backend: StreamBackend::Kafka {
            brokers: vec!["localhost:9092".to_string()],
            topic: "job-events".to_string(),
            config: HashMap::new(),
        },
        filter: EventFilter::new().include_payload(),
        partitioning: PartitioningStrategy::QueueName,
        enabled: true,
        ..Default::default()
    };

    stream_manager.add_stream(kafka_stream).await?;

    Ok(())
}
§Configuration Management
Hammerwork can be configured through TOML files, environment variables, or a builder API:
use hammerwork::config::HammerworkConfig;

// Load from a TOML file.
let config = HammerworkConfig::from_file("hammerwork.toml")?;

// Load from environment variables.
let config = HammerworkConfig::from_env()?;

// Or build programmatically with the builder pattern.
let config = HammerworkConfig::new()
    .with_database_url("postgresql://localhost/hammerwork")
    .with_worker_pool_size(8)
    .with_events_enabled(true);
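A rough sketch of what such a hammerwork.toml could contain; the table and key names below are inferred from the builder methods above, not a verbatim schema (see the config module for the authoritative layout):

# Hypothetical hammerwork.toml layout.
[database]
url = "postgresql://localhost/hammerwork"

[worker]
pool_size = 8

[events]
enabled = true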
§CLI Integration
Hammerwork ships with a cargo subcommand CLI (`cargo hammerwork`) for operational tasks:
# Database operations
cargo hammerwork migrate --database-url postgresql://localhost/hammerwork
# Job management
cargo hammerwork job list --queue=email
cargo hammerwork job retry --job-id=abc123
cargo hammerwork job enqueue --queue=email --payload='{"to":"user@example.com"}'
# Queue operations
cargo hammerwork queue stats --queue=email
cargo hammerwork queue clear --queue=test
# Webhook management
cargo hammerwork webhook list
cargo hammerwork webhook test --webhook-id=abc123
# Monitoring
cargo hammerwork monitor --tail --queue=email
§Feature Flags
- postgres: Enable PostgreSQL database support
- mysql: Enable MySQL database support
- metrics: Enable Prometheus metrics collection (default)
- alerting: Enable webhook/Slack/email alerting (default)
- webhooks: Enable webhook and event system features (default)
- encryption: Enable job payload encryption and PII protection
- tracing: Enable OpenTelemetry distributed tracing
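For example, a Cargo.toml dependency selecting the PostgreSQL backend on top of the default monitoring features might look like this (version number illustrative):

[dependencies]
# `metrics`, `alerting`, and `webhooks` are enabled by default.
hammerwork = { version = "1", features = ["postgres"] }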
Re-exports§
pub use archive::ArchivalConfig;
pub use archive::ArchivalPolicy;
pub use archive::ArchivalReason;
pub use archive::ArchivalStats;
pub use archive::ArchiveEvent;
pub use archive::ArchivedJob;
pub use archive::JobArchiver;
pub use batch::BatchId;
pub use batch::BatchResult;
pub use batch::BatchStatus;
pub use batch::JobBatch;
pub use batch::PartialFailureMode;
pub use config::ArchiveConfig;
pub use config::DatabaseConfig;
pub use config::HammerworkConfig;
pub use config::LoggingConfig;
pub use config::RateLimitingConfig;
pub use config::StreamConfig;
pub use config::StreamingConfigs;
pub use config::StreamingGlobalSettings;
pub use config::WorkerConfig;
pub use config::WebhookConfig;
pub use config::WebhookConfigs;
pub use config::WebhookGlobalSettings;
pub use cron::CronError;
pub use cron::CronSchedule;
pub use encryption::EncryptedPayload;
pub use encryption::EncryptionAlgorithm;
pub use encryption::EncryptionConfig;
pub use encryption::EncryptionEngine;
pub use encryption::EncryptionError;
pub use encryption::EncryptionKey;
pub use encryption::EncryptionMetadata;
pub use encryption::EncryptionStats;
pub use encryption::ExternalKmsConfig;
pub use encryption::KeyAuditRecord;
pub use encryption::KeyDerivationConfig;
pub use encryption::KeyManager;
pub use encryption::KeyManagerConfig;
pub use encryption::KeyManagerStats;
pub use encryption::KeyOperation;
pub use encryption::KeyPurpose;
pub use encryption::KeySource;
pub use encryption::KeyStatus;
pub use encryption::RetentionPolicy;
pub use error::HammerworkError;
pub use job::Job;
pub use job::JobId;
pub use job::JobStatus;
pub use job::ResultConfig;
pub use job::ResultStorage;
pub use priority::JobPriority;
pub use priority::PriorityError;
pub use priority::PrioritySelectionStrategy;
pub use priority::PriorityStats;
pub use priority::PriorityWeights;
pub use queue::JobQueue;
pub use rate_limit::RateLimit;
pub use rate_limit::RateLimiter;
pub use rate_limit::ThrottleConfig;
pub use retry::JitterType;
pub use retry::RetryStrategy;
pub use retry::fibonacci;
pub use spawn::JobSpawnExt;
pub use spawn::SpawnConfig;
pub use spawn::SpawnHandler;
pub use spawn::SpawnManager;
pub use spawn::SpawnResult;
pub use stats::DeadJobSummary;
pub use stats::InMemoryStatsCollector;
pub use stats::JobStatistics;
pub use stats::QueueStats;
pub use stats::StatisticsCollector;
pub use worker::AutoscaleConfig;
pub use worker::AutoscaleMetrics;
pub use worker::BatchProcessingStats;
pub use worker::JobEventHooks;
pub use worker::JobHandler;
pub use worker::JobHandlerWithResult;
pub use worker::JobHookEvent;
pub use worker::JobResult;
pub use worker::Worker;
pub use worker::WorkerPool;
pub use workflow::DependencyStatus;
pub use workflow::FailurePolicy;
pub use workflow::JobGroup;
pub use workflow::WorkflowId;
pub use workflow::WorkflowStatus;
pub use metrics::MetricsConfig;
pub use metrics::PrometheusMetricsCollector;
pub use alerting::Alert;
pub use alerting::AlertManager;
pub use alerting::AlertSeverity;
pub use alerting::AlertTarget;
pub use alerting::AlertType;
pub use alerting::AlertingConfig;
pub use events::EventConfig;
pub use events::EventFilter;
pub use events::EventManager;
pub use events::EventManagerStats;
pub use events::EventSubscription;
pub use events::JobError;
pub use events::JobLifecycleEvent;
pub use events::JobLifecycleEventBuilder;
pub use events::JobLifecycleEventType;
pub use webhooks::HttpMethod;
pub use webhooks::RetryPolicy;
pub use webhooks::WebhookAuth;
pub use webhooks::WebhookConfig as WebhookSettings;
pub use webhooks::WebhookDelivery;
pub use webhooks::WebhookManager;
pub use webhooks::WebhookManagerConfig;
pub use webhooks::WebhookManagerStats;
pub use webhooks::WebhookStats;
pub use streaming::BufferConfig;
pub use streaming::PartitionField;
pub use streaming::PartitioningStrategy;
pub use streaming::SerializationFormat;
pub use streaming::StreamBackend;
pub use streaming::StreamConfig as StreamSettings;
pub use streaming::StreamDelivery;
pub use streaming::StreamManager;
pub use streaming::StreamManagerConfig;
pub use streaming::StreamManagerGlobalStats;
pub use streaming::StreamProcessor;
pub use streaming::StreamRetryPolicy;
pub use streaming::StreamStats;
pub use streaming::StreamedEvent;
pub use tracing::CorrelationId;
pub use tracing::TraceId;
Modules§
- alerting
- archive: Job archival and retention system for Hammerwork.
- batch: Job batching and bulk operations for high-throughput scenarios.
- config: Configuration management for the Hammerwork job queue.
- cron
- encryption: Job payload encryption and PII protection for Hammerwork.
- error
- events: Event system for job lifecycle tracking and external integrations.
- job: Job types and utilities for representing work units in the job queue.
- metrics
- migrations: Database migration system for Hammerwork.
- priority: Job prioritization system for controlling execution order in the job queue.
- queue: Job queue implementation with database-specific backends.
- rate_limit
- retry: Advanced retry strategies for job scheduling and backoff patterns.
- spawn: Dynamic job spawning functionality for creating child jobs from parent jobs.
- stats
- streaming: Event streaming integration for external message systems.
- tracing: Distributed tracing and correlation support for Hammerwork jobs.
- webhooks: Webhook system for delivering job lifecycle events to external systems.
- worker: Worker types for processing jobs from the job queue.
- workflow: Workflow and job dependency management.
Type Aliases§
- Result: Convenient type alias for Results with HammerworkError as the error type.
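For example:

use hammerwork::{Job, Result};

// Handler logic can return the crate's Result alias directly.
async fn process(job: Job) -> Result<()> {
    println!("Processing {:?}", job.payload);
    Ok(())
}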