Rust Task Queue
A high-performance, Redis-backed task queue framework with enhanced auto-scaling, intelligent async task spawning, multidimensional scaling triggers, and advanced backpressure management for async Rust applications.
Features
- Redis-backed broker with connection pooling and optimized operations
- Enhanced Multi-dimensional Auto-scaling with 5-metric analysis and adaptive learning
- Task scheduling with delay support and persistent scheduling
- Multiple queue priorities with predefined queue constants
- Retry logic with exponential backoff and configurable attempts
- Task timeouts and comprehensive failure handling
- Advanced metrics and monitoring with SLA tracking and performance insights
- Production-grade observability with comprehensive structured logging and tracing
- Actix Web integration (optional) with built-in endpoints
- Axum integration (optional) with built-in endpoints
- CLI tools for standalone workers with process separation and logging configuration
- Automatic task registration with procedural macros
- High performance with MessagePack serialization and connection pooling
- Advanced async task spawning with intelligent backpressure and resource management
- Graceful shutdown with active task tracking and cleanup
- Smart resource allocation with semaphore-based concurrency control
- Comprehensive testing with unit, integration, performance, and security tests
- Enterprise-grade tracing with lifecycle tracking, performance monitoring, and error context
- Production-ready with robust error handling and safety improvements
Performance Highlights
Throughput
- Serialization: 23M+ ops/sec (42ns per task) with MessagePack
- Deserialization: 31M+ ops/sec (32ns per task)
- Queue Operations: 25M+ ops/sec for config lookups (40ns per operation)
- Connection Management: 476K+ ops/sec with pooling
- Overall throughput: Thousands of tasks per second in production
Memory Usage
- Minimal overhead: MessagePack serialization is compact
- Connection pooling: Configurable Redis connections
- Worker memory: Isolated task execution with proper cleanup
- Queue constants: Zero-cost abstractions
Scaling Characteristics
- Horizontal scaling: Add more workers or worker processes
- Auto-scaling: Based on queue depth with validation
- Redis scaling: Single Redis instance or cluster support
- Monitoring: Real-time metrics without performance impact
Optimization Tips
- Connection Pool Size: Match to worker count
- Batch Operations: Group related tasks when possible
- Queue Priorities: Use appropriate queue constants
- Monitoring: Regular health checks without overhead
- Error Handling: Proper retry strategies without panics
- Configuration: Use validation to catch issues early
- Concurrency Limits: Set `max_concurrent_tasks` based on resource capacity
- Backpressure Delays: Configure appropriate delays to prevent tight loops
- Active Task Monitoring: Use `active_task_count()` for real-time insights
- Graceful Shutdown: Allow sufficient time for task completion (30s default)
- Context Reuse: Leverage TaskExecutionContext for efficient resource management
- Semaphore Configuration: Match semaphore size to system capacity
Production Ready
- 192+ comprehensive tests (unit, integration, performance, security)
- Memory safe - no unsafe code
- Performance benchmarked - <50ns serialization
- Enterprise logging with structured tracing
- Graceful shutdown and error recovery
- Redis cluster support with connection pooling
Quick Start
Available Features
- `default`: `tracing` + `auto-register` + `config` + `cli` (recommended)
- `full`: All features enabled for maximum functionality
- `tracing`: Enterprise-grade structured logging and observability
  - Complete task lifecycle tracking with distributed spans
  - Performance monitoring and execution timing
  - Error chain analysis with deep context
  - Worker activity and resource utilization monitoring
  - Production-ready logging configuration (JSON/compact/pretty formats)
  - Environment-based configuration support
- `actix-integration`: Actix Web framework integration with built-in endpoints
- `axum-integration`: Axum framework integration with comprehensive metrics and CORS support
- `cli`: Standalone worker binaries with logging configuration support
- `auto-register`: Automatic task discovery via procedural macros
- `config`: External TOML/YAML configuration files
Feature Combinations for Common Use Cases
```toml
# Web application with Actix Web (recommended)
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "actix-integration", "config", "cli"] }

# Web application with Axum framework
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "axum-integration", "config", "cli"] }

# Standalone worker processes
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "cli", "config"] }

# Minimal embedded systems
rust-task-queue = { version = "0.1", default-features = false, features = ["tracing"] }

# Development/testing
rust-task-queue = { version = "0.1", features = ["full"] }

# Library integration (no CLI tools)
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "config"] }
```
Integration Patterns
Actix Web with Separate Workers
This pattern provides the best separation of concerns and scalability.
Recommended: Use external configuration files (`task-queue.toml` or `task-queue.yaml`) for production deployments:
1. Create the configuration file:
Create a `task-queue.toml` file at the root of your project and copy into it the contents of the provided `task-queue.toml` template. The configuration file is the easiest way to configure your workers: adjust it to your needs, or keep the default values if you are unsure.
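For orientation only, a configuration might look roughly like the sketch below; every section and key name here is an illustrative assumption, so always start from the shipped template rather than this snippet:

```toml
# Illustrative sketch - section and key names are assumptions, not the
# crate's exact schema. Copy the shipped task-queue.toml template instead.
[redis]
url = "redis://127.0.0.1:6379"
pool_size = 10

[workers]
initial_count = 4
max_concurrent_tasks = 20

[autoscaler]
enabled = true
min_workers = 2
max_workers = 16
```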
2. Worker configuration:
Copy `task-worker.rs` into your `src/bin/` folder and update it to import your tasks so that they are discoverable by auto-registration.
The important detail is the task import: without importing your task types into the worker binary, the `AutoRegisterTask` derive macros are never compiled into it and the tasks are never submitted to the inventory for auto-discovery. A minimal worker binary is sketched below.
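A minimal sketch, assuming `start_worker` is re-exported at the path shown and takes no arguments (both are assumptions; the shipped `task-worker.rs` template is the authoritative version):

```rust
// Assumed import path for the CLI entry point; copy the real one from the template.
use rust_task_queue::cli::start_worker;

// Import tasks so they are compiled into this binary.
// This is ESSENTIAL for auto-registration with the inventory pattern:
// without this import the AutoRegisterTask derive macros never run and
// the tasks are never submitted to the inventory for auto-discovery.
#[allow(unused_imports)]
use my_app::tasks::*; // replace with your own task module

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Parses CLI flags / configuration and runs the worker loop
    // (return type and signature assumed for this sketch).
    start_worker().await
}
```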
Update your `Cargo.toml` to declare the worker binary:

```toml
# Bin declaration required to launch the worker.
[[bin]]
name = "task-worker"
path = "src/bin/task-worker.rs"
```
3. Actix Web Application:
The web application builds (or connects to) the task queue, registers the routes provided by the Actix integration, and enqueues tasks using the `queue_names` constants; a sketch follows.
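A minimal sketch; the builder (`TaskQueueBuilder`) and the route-mounting helper are assumed names standing in for the crate's Actix integration API, so check the Actix integration example in the repository for the real identifiers:

```rust
use actix_web::{web, App, HttpServer};
use rust_task_queue::prelude::*; // illustrative import path

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Assumed builder-style construction; adjust to the crate's actual API.
    let task_queue = TaskQueueBuilder::new("redis://127.0.0.1:6379")
        .auto_register_tasks()
        .build()
        .await
        .expect("failed to connect to Redis");

    let task_queue = web::Data::new(task_queue);

    HttpServer::new(move || {
        App::new()
            .app_data(task_queue.clone())
            // Assumed helper that mounts the built-in /tasks/* endpoints;
            // your own handlers can enqueue onto queue_names::* queues.
            .configure(rust_task_queue::actix::configure_task_queue_routes)
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
```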
Now you can start the Actix web server.
4. Start Workers in a Separate Terminal:
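For example, using the `task-worker` binary declared above (the exact invocation depends on your project; `--config` is the documented CLI flag):

```bash
# Run the worker against your root task-queue.toml (recommended)
cargo run --release --bin task-worker -- --config task-queue.toml
```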
Axum Web with Separate Workers
The same pattern works with Axum, providing a modern async web framework option:
1. Create the configuration file the same way as in the Actix example above.
2. Set up the worker binary the same way as in the Actix example above.
3. Axum Web Application:
As with Actix, the application builds the task queue, shares it as application state (typically behind an `Arc`), and mounts the routes provided by the Axum integration; a sketch follows.
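A minimal sketch under the same assumptions as the Actix example (`TaskQueueBuilder` and the router helper are assumed names; see the Axum integration example in the repository for the real API):

```rust
use std::sync::Arc;
use axum::Router;
use rust_task_queue::prelude::*; // illustrative import path

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed builder-style construction; adjust to the crate's actual API.
    let task_queue = Arc::new(
        TaskQueueBuilder::new("redis://127.0.0.1:6379")
            .auto_register_tasks()
            .build()
            .await?,
    );

    // Assumed helper that returns a Router exposing the built-in /tasks/* endpoints;
    // your own handlers can clone the Arc and enqueue onto queue_names::* queues.
    let app: Router = rust_task_queue::axum::task_queue_routes(task_queue.clone());

    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```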
Now you can start the Axum web server.
4. Start workers using the same commands as in the Actix example above.
Examples
The repository includes comprehensive examples:
- Actix Integration - complete Actix Web integration examples
  - Full-featured task queue endpoints
  - Automatic task registration
  - Production-ready patterns
- Axum Integration - complete Axum framework integration examples
  - Full-featured task queue endpoints
  - Automatic task registration
  - Production-ready patterns
All-in-One Process
For simpler deployments, you can run everything in one process:
The web server, workers, scheduler, and auto-scaler all run inside a single binary and share one task queue instance; a sketch follows.
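A minimal sketch, assuming builder methods for enabling in-process workers, the scheduler, and the auto-scaler (all method names here are assumptions; consult the crate docs for the real builder API):

```rust
use rust_task_queue::prelude::*; // illustrative import path
use rust_task_queue::queue_names;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed builder API: start workers, scheduler, and auto-scaler in-process.
    let task_queue = TaskQueueBuilder::new("redis://127.0.0.1:6379")
        .auto_register_tasks()
        .initial_workers(4)
        .with_scheduler()
        .with_autoscaler()
        .build()
        .await?;

    // ... start your web framework here, sharing `task_queue`, and enqueue
    // tasks onto queue_names::DEFAULT / HIGH_PRIORITY / LOW_PRIORITY ...
    let _ = queue_names::DEFAULT;

    task_queue.shutdown().await?; // assumed graceful-shutdown call
    Ok(())
}
```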
Basic Usage
At its core, usage consists of defining a serializable task type, connecting to Redis, and enqueueing the task onto one of the predefined queues; a sketch follows.
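A minimal sketch, assuming a builder and an `enqueue`-style method (both names are assumptions; the crate's `Task` trait implementation is omitted here, so see the crate docs for the exact requirements):

```rust
use rust_task_queue::prelude::*; // illustrative import path
use rust_task_queue::queue_names;
use serde::{Deserialize, Serialize};

// Task payloads are serializable structs. The type must also implement the
// crate's Task trait (omitted in this sketch).
#[derive(Debug, Serialize, Deserialize)]
struct SendEmailTask {
    to: String,
    subject: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed builder-style construction; adjust to the crate's actual API.
    let task_queue = TaskQueueBuilder::new("redis://127.0.0.1:6379")
        .build()
        .await?;

    // Enqueue onto one of the predefined queues (`enqueue` is an assumed name).
    let task = SendEmailTask {
        to: "user@example.com".into(),
        subject: "Welcome!".into(),
    };
    task_queue.enqueue(task, queue_names::DEFAULT).await?;

    Ok(())
}
```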
Enhanced Auto-scaling with Manual Configuration
For programmatic control, use the enhanced configuration API:
Rather than relying on `task-queue.toml`, the auto-scaler limits and thresholds can be set in code when the queue is built; a sketch follows.
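A sketch of what programmatic configuration could look like; the type and field names (`AutoScalerConfig`, `with_autoscaler_config`, ...) are assumptions mirroring the documented behaviour (min/max workers, consecutive signals), so consult the crate docs for the real configuration API:

```rust
use rust_task_queue::prelude::*; // illustrative import path

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed configuration type; field names mirror the documented CLI flags.
    let autoscaler = AutoScalerConfig {
        min_workers: 2,
        max_workers: 20,
        consecutive_signals_required: 3,
        ..Default::default()
    };

    let task_queue = TaskQueueBuilder::new("redis://127.0.0.1:6379")
        .auto_register_tasks()
        .with_autoscaler_config(autoscaler) // assumed builder method
        .build()
        .await?;

    // ... enqueue tasks / run workers ...
    let _ = task_queue;
    Ok(())
}
```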
Queue Constants
The framework provides predefined queue constants for type safety and consistency:
- `queue_names::DEFAULT` ("default") - standard priority tasks
- `queue_names::HIGH_PRIORITY` ("high_priority") - high priority tasks
- `queue_names::LOW_PRIORITY` ("low_priority") - background tasks
Automatic Task Registration
With the `auto-register` feature, tasks can be automatically discovered and registered:
Deriving `AutoRegisterTask` on a task type submits it to the inventory at compile time, so workers discover and register it without any manual registration code; a sketch follows.
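A minimal sketch, assuming the import path shown and that the payload only needs Serde derives alongside `AutoRegisterTask` (the crate's `Task` trait implementation is omitted; see the crate docs for the exact requirements):

```rust
use rust_task_queue::prelude::*; // illustrative import path
use serde::{Deserialize, Serialize};

// Deriving AutoRegisterTask submits this type to the inventory at compile
// time, so any binary that imports it can auto-discover the task.
#[derive(Debug, Serialize, Deserialize, AutoRegisterTask)]
struct ResizeImageTask {
    image_id: u64,
    width: u32,
    height: u32,
}
```

Importing the module that defines `ResizeImageTask` into the worker binary is then enough for the task to be discovered at startup.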
You can also use the crate's attribute macro to register a task under a custom name.
CLI Worker Tool
The framework includes a powerful CLI tool for running workers in separate processes, now with enhanced auto-scaling support.
Enhanced CLI Usage with Auto-scaling
Typical invocations start workers from the root `task-queue.toml` configuration file (recommended), override auto-scaling thresholds on the command line, or watch auto-scaling activity in real time.
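The sketch below assumes the `task-worker` binary declared earlier; every flag used is one of the documented CLI options listed in the next section.

```bash
# Start workers based on your root task-queue.toml configuration file (recommended)
cargo run --release --bin task-worker -- --config task-queue.toml

# Workers with custom auto-scaling thresholds
cargo run --release --bin task-worker -- \
    --enable-autoscaler \
    --autoscaler-min-workers 2 \
    --autoscaler-max-workers 20 \
    --autoscaler-consecutive-signals 3

# Monitor auto-scaling in real time through verbose structured logs
cargo run --release --bin task-worker -- --config task-queue.toml \
    --log-level debug --log-format pretty
```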
CLI Options
- `--redis-url, -r`: Redis connection URL (default: `redis://127.0.0.1:6379`)
- `--workers, -w`: Number of initial workers (default: `4`)
- `--enable-autoscaler, -a`: Enable enhanced multi-dimensional auto-scaling
- `--enable-scheduler, -s`: Enable task scheduler for delayed tasks
- `--queues, -q`: Comma-separated list of queue names to process
- `--worker-prefix`: Custom prefix for worker names
- `--config, -c`: Path to enhanced configuration file (recommended)
- `--log-level`: Logging level (trace, debug, info, warn, error)
- `--log-format`: Log output format (json, compact, pretty)
- `--autoscaler-min-workers`: Minimum workers for auto-scaling
- `--autoscaler-max-workers`: Maximum workers for auto-scaling
- `--autoscaler-consecutive-signals`: Required consecutive signals for scaling
Auto-scaling
Multi-dimensional Scaling Intelligence
Our enhanced auto-scaling system analyzes 5 key metrics simultaneously for intelligent scaling decisions:
- Queue Pressure Score: Weighted queue depth accounting for priority levels
- Worker Utilization: Real-time busy/idle ratio analysis
- Task Complexity Factor: Dynamic execution pattern recognition
- Error Rate Monitoring: System health and stability tracking
- Memory Pressure: Per-worker resource utilization analysis
Adaptive Threshold Learning
The system automatically adjusts scaling triggers based on actual performance vs. your SLA targets:
The SLA targets are set in the auto-scaler section of `task-queue.toml`, for example:
- Target P95 task latency: 3000 ms (3 seconds)
- Target success rate: 0.99 (99%)
- Maximum queue wait time: 5000 ms (5 seconds)
- Optimal worker utilization: 0.75 (75%)
Stability Controls
Advanced hysteresis and cooldown mechanisms prevent scaling oscillations:
- Consecutive Signal Requirements: Configurable signal thresholds (2-5 signals)
- Independent Cooldowns: Separate scale-up (3 min) and scale-down (15 min) periods
- Performance History: Learning from past scaling decisions
Test Coverage
The project maintains comprehensive test coverage across multiple dimensions:
- Unit Tests: 124 tests covering all core functionality
- Integration Tests: 9 tests for end-to-end workflows
- Actix Integration Tests: 22 tests for web endpoints and metrics API
- Axum Integration Tests: 11 tests for web framework integration
- Error Scenario Tests: 9 tests for edge cases and failure modes
- Performance Tests: 6 tests for throughput and load handling
- Security Tests: 11 tests for injection attacks and safety
- Benchmarks: 7 performance benchmarks for optimization
Total: 192 tests ensuring reliability and performance
Enterprise-Grade Observability
The framework includes comprehensive structured logging and tracing capabilities for production systems:
Tracing Features
- Complete Task Lifecycle Tracking: From enqueue to completion with detailed spans
- Performance Monitoring: Execution timing, queue metrics, and throughput analysis
- Error Chain Analysis: Deep context and source tracking for debugging
- Worker Activity Monitoring: Real-time status and resource utilization
- Distributed Tracing: Async instrumentation with span correlation
- Production Logging Configuration: Multiple output formats (JSON/compact/pretty)
Logging Configuration
Logging is initialized in code when the process starts; a sketch follows.
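A minimal sketch using `tracing-subscriber` directly; the crate's `tracing` feature may provide its own initialization helpers, so treat this as one conventional setup rather than the framework's API:

```rust
use tracing_subscriber::{fmt, EnvFilter};

#[tokio::main]
async fn main() {
    // Honour RUST_LOG if set, otherwise default to `info`.
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info"));

    // Structured JSON output for production; use `.pretty()` or the default
    // compact formatter during development.
    fmt()
        .with_env_filter(filter)
        .json()
        .init();

    // ... build the task queue / start workers here ...
}
```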
Environment-Based Configuration
Logging can also be configured through environment variables before the worker starts: set the log level (trace, debug, info, warn, error) and the output format (json, compact, pretty), then start the worker with production logging.
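A sketch of one common setup, assuming the conventional `RUST_LOG` filter variable is honoured (the exact environment variables the crate reads are an assumption; the documented `--log-level` and `--log-format` CLI flags are always available as a fallback):

```bash
# Configure logging via environment variables (variable name assumed)
export RUST_LOG=info                      # trace, debug, info, warn, error

# Start the worker with production logging
cargo run --release --bin task-worker -- \
    --config task-queue.toml \
    --log-level info \
    --log-format json                     # json, compact, pretty
```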
Performance Monitoring
The tracing system provides detailed performance insights:
- Task Execution Timing: Individual task performance tracking
- Queue Depth Monitoring: Real-time queue status and trends
- Worker Utilization: Capacity analysis and efficiency metrics
- Error Rate Tracking: System health and failure analysis
- Throughput Analysis: Tasks per second and bottleneck identification
Comprehensive Metrics API
The framework includes a production-ready metrics API with 15+ endpoints for monitoring and diagnostics:
Health & Status Endpoints
- `/tasks/health` - Detailed health check with component status (Redis, workers, scheduler)
- `/tasks/status` - System status with health metrics and worker information
Core Metrics Endpoints
- `/tasks/metrics` - Comprehensive metrics combining all available data
- `/tasks/metrics/system` - Enhanced system metrics with memory and performance data
- `/tasks/metrics/performance` - Performance report with task execution metrics and SLA data
- `/tasks/metrics/autoscaler` - AutoScaler metrics and scaling recommendations
- `/tasks/metrics/queues` - Individual queue metrics for all queues
- `/tasks/metrics/workers` - Worker-specific metrics and status
- `/tasks/metrics/memory` - Memory usage metrics and tracking
- `/tasks/metrics/summary` - Quick metrics summary for debugging
Task Registry Endpoints
- `/tasks/registered` - Detailed task registry information and features
Administrative Endpoints
- `/tasks/alerts` - Active alerts from the metrics system
- `/tasks/sla` - SLA status and violations with performance percentages
- `/tasks/diagnostics` - Comprehensive diagnostics with queue health analysis
- `/tasks/uptime` - System uptime and runtime information
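For example, with the web integration mounted on a local server (host and port are assumptions for this sketch):

```bash
# Quick health probe
curl -s http://localhost:8080/tasks/health

# Auto-scaler metrics and scaling recommendations
curl -s http://localhost:8080/tasks/metrics/autoscaler

# Quick metrics summary for debugging
curl -s http://localhost:8080/tasks/metrics/summary
```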
Best Practices
Task Design
- Keep tasks idempotent when possible
- Use meaningful task names for monitoring
- Handle errors gracefully with proper logging
- Keep task payloads reasonably small
- Use appropriate queue constants (`queue_names::*`)
Deployment
- Use separate worker processes for better isolation
- Scale workers based on queue metrics
- Monitor Redis memory usage and performance
- Set up proper logging and alerting
- Configure appropriate `max_concurrent_tasks` based on workload characteristics
- Monitor active task counts to optimize worker capacity
- Use graceful shutdown patterns to prevent task loss during deployments
- Use auto-registration for rapid development
- Test with different worker configurations
- Monitor queue sizes during load testing
- Use the built-in monitoring endpoints
- Take advantage of the CLI tools for testing
- Use configuration files (`task-queue.toml`) instead of hardcoded values
Troubleshooting
Common Issues
- Workers not processing tasks: Check Redis connectivity and task registration
- High memory usage: Monitor task payload sizes and Redis memory
- Slow processing: Consider increasing worker count or optimizing task logic
- Connection issues: Verify Redis URL and network connectivity
- Tasks getting re-queued frequently: Increase `max_concurrent_tasks` or optimize task execution time
- Workers not shutting down gracefully: Check for long-running tasks and adjust the shutdown timeout
- High active task count: Monitor task execution patterns and consider load balancing
Documentation
- API Documentation
- Development Guide - Comprehensive development documentation
Maintenance Status
Actively Developed - Regular releases, responsive to issues, feature requests welcome.
Compatibility:
- Rust 1.70.0+
- Redis 6.0+
- Tokio 1.0+
Contributing
We welcome contributions! Please see our Development Guide for:
- Development setup instructions
- Code style guidelines
- Testing requirements
- Performance benchmarking
- Documentation standards
License
Licensed under either of Apache License, Version 2.0 or MIT License at your option.