# Rust Task Queue

[![Crates.io](https://img.shields.io/crates/v/rust-task-queue.svg)](https://crates.io/crates/rust-task-queue)
[![Documentation](https://docs.rs/rust-task-queue/badge.svg)](https://docs.rs/rust-task-queue)
[![License](https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg)](https://github.com/longbowou/rust-task-queue)
[![Downloads](https://img.shields.io/crates/d/rust-task-queue.svg)](https://crates.io/crates/rust-task-queue)
[![Rust](https://img.shields.io/badge/rust-1.70%2B-blue.svg)](https://www.rust-lang.org)
[![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg)]()
[![Test Coverage](https://img.shields.io/badge/tests-192+%20passing-brightgreen.svg)]()

A high-performance, Redis-backed task queue framework with enhanced auto-scaling, intelligent async task spawning,
multidimensional scaling triggers, and advanced backpressure management for async Rust applications.

## Features

- **Redis-backed broker** with connection pooling and optimized operations
- **Enhanced Multi-dimensional Auto-scaling** with 5-metric analysis and adaptive learning
- **Task scheduling** with delay support and persistent scheduling
- **Multiple queue priorities** with predefined queue constants
- **Retry logic** with exponential backoff and configurable attempts
- **Task timeouts** and comprehensive failure handling
- **Advanced metrics and monitoring** with SLA tracking and performance insights
- **Production-grade observability** with comprehensive structured logging and tracing
- **Actix Web integration** (optional) with built-in endpoints
- **Axum integration** (optional) with built-in endpoints
- **CLI tools** for standalone workers with process separation and logging configuration
- **Automatic task registration** with procedural macros
- **High performance** with MessagePack serialization and connection pooling
- **Advanced async task spawning** with intelligent backpressure and resource management
- **Graceful shutdown** with active task tracking and cleanup
- **Smart resource allocation** with semaphore-based concurrency control
- **Comprehensive testing** with unit, integration, performance, and security tests
- **Enterprise-grade tracing** with lifecycle tracking, performance monitoring, and error context
- **Production-ready** with robust error handling and safety improvements

## Performance Highlights

### Throughput

- **Serialization**: 23M+ ops/sec (42ns per task) with MessagePack
- **Deserialization**: 31M+ ops/sec (32ns per task)
- **Queue Operations**: 25M+ ops/sec for config lookups (40ns per operation)
- **Connection Management**: 476K+ ops/sec with pooling
- **Overall throughput**: Thousands of tasks per second in production

### Memory Usage

- **Minimal overhead**: MessagePack serialization is compact
- **Connection pooling**: Configurable Redis connections
- **Worker memory**: Isolated task execution with proper cleanup
- **Queue constants**: Zero-cost abstractions

### Scaling Characteristics

- **Horizontal scaling**: Add more workers or worker processes
- **Auto-scaling**: Based on queue depth with validation
- **Redis scaling**: Single Redis instance or cluster support
- **Monitoring**: Real-time metrics without performance impact

### Optimization Tips

1. **Connection Pool Size**: Match to worker count
2. **Batch Operations**: Group related tasks when possible
3. **Queue Priorities**: Use appropriate queue constants
4. **Monitoring**: Regular health checks without overhead
5. **Error Handling**: Proper retry strategies without panics
6. **Configuration**: Use validation to catch issues early
7. **Concurrency Limits**: Set `max_concurrent_tasks` based on resource capacity
8. **Backpressure Delays**: Configure appropriate delays to prevent tight loops
9. **Active Task Monitoring**: Use `active_task_count()` for real-time insights
10. **Graceful Shutdown**: Allow sufficient time for task completion (30s default)
11. **Context Reuse**: Leverage TaskExecutionContext for efficient resource management
12. **Semaphore Configuration**: Match semaphore size to system capacity (see the pattern sketch below)
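
Several of these tips (7, 8, and 12) revolve around semaphore-based concurrency control. As a point of reference, here is a minimal standalone sketch of that pattern using `tokio::sync::Semaphore`; it illustrates the general technique rather than this crate's internal implementation:

```rust
use std::sync::Arc;
use std::time::Duration;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // Cap in-flight work at 4, mirroring a max_concurrent_tasks-style limit.
    let semaphore = Arc::new(Semaphore::new(4));
    let mut handles = Vec::new();

    for i in 0..16 {
        // acquire_owned ties the permit's lifetime to the spawned task;
        // when all permits are held, this await applies backpressure.
        let permit = semaphore.clone().acquire_owned().await.unwrap();
        handles.push(tokio::spawn(async move {
            let _permit = permit; // released automatically when the task finishes
            println!("running task {i}");
            tokio::time::sleep(Duration::from_millis(100)).await;
        }));
    }

    for handle in handles {
        handle.await.unwrap();
    }
}
```

Capping in-flight tasks this way makes backpressure explicit: when all permits are held, new work waits instead of piling onto the executor.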

## Production Ready

- **192+ comprehensive tests** (unit, integration, performance, security)
- **Memory safe** - no unsafe code
- **Performance benchmarked** - <50ns serialization
- **Enterprise logging** with structured tracing
- **Graceful shutdown** and error recovery
- **Redis cluster support** with connection pooling

## Quick Start

### Available Features

- `default`: `tracing` + `auto-register` + `config` + `cli` (recommended)
- `full`: All features enabled for maximum functionality
- `tracing`: Enterprise-grade structured logging and observability
    - Complete task lifecycle tracking with distributed spans
    - Performance monitoring and execution timing
    - Error chain analysis with deep context
    - Worker activity and resource utilization monitoring
    - Production-ready logging configuration (JSON/compact/pretty formats)
    - Environment-based configuration support
- `actix-integration`: Actix Web framework integration with built-in endpoints
- `axum-integration`: Axum framework integration with comprehensive metrics and CORS support
- `cli`: Standalone worker binaries with logging configuration support
- `auto-register`: Automatic task discovery via procedural macros
- `config`: External TOML/YAML configuration files

### Feature Combinations for Common Use Cases

```toml
# Web application with Actix Web (recommended)
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "actix-integration", "config", "cli"] }

# Web application with Axum framework
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "axum-integration", "config", "cli"] }

# Standalone worker processes
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "cli", "config"] }

# Minimal embedded systems
rust-task-queue = { version = "0.1", default-features = false, features = ["tracing"] }

# Development/testing
rust-task-queue = { version = "0.1", features = ["full"] }

# Library integration (no CLI tools)
rust-task-queue = { version = "0.1", features = ["tracing", "auto-register", "config"] }
```

## Integration Patterns

### Actix Web with Separate Workers

This pattern provides the best separation of concerns and scalability.

**Recommended**: Use external configuration files (`task-queue.toml` or `task-queue.yaml`) for production deployments:

**1. Create the configuration file:**
Create a configuration file **task-queue.toml** at the **root of your project**, copying the content from this
template: [`task-queue.toml`](task-queue.toml). The configuration file is the easiest way to configure your workers;
adjust it to your needs, or keep the default values if you are unsure.

**2. Worker configuration:**

Copy [`task-worker.rs`](src/bin/task-worker.rs) into your **src/bin/** folder, and update it to import your tasks so
they are discoverable by auto-registration.

```rust
use rust_task_queue::cli::start_worker;

// Import tasks to ensure they're compiled into this binary
// This is ESSENTIAL for auto-registration to work with the inventory pattern
// Without this import, the AutoRegisterTask derive macros won't be executed
// and the tasks won't be submitted to the inventory for auto-discovery

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("Starting Task Worker with Auto-Configuration");
    println!("Looking for task-queue.toml, task-queue.yaml, or environment variables...");

    // Use the simplified consumer helper which handles all the configuration automatically
    start_worker().await
}
```

Update your **Cargo.toml** to include the worker binary:

```toml
# Bin declaration required to launch the worker.
[[bin]]
name = "task-worker"
path = "src/bin/task-worker.rs"
```

**3. Actix Web Application:**

```rust
use rust_task_queue::prelude::*;
use rust_task_queue::queue::queue_names;
use actix_web::{web, App, HttpServer, HttpResponse, Result as ActixResult};
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, Default, AutoRegisterTask)]
struct ProcessOrderTask {
    order_id: String,
    customer_email: String,
    amount: f64,
}

#[async_trait]
impl Task for ProcessOrderTask {
    async fn execute(&self) -> Result<serde_json::Value, Box<dyn std::error::Error + Send + Sync>> {
        println!("Processing order: {}", self.order_id);
        // Your processing logic here
        Ok(serde_json::json!({"status": "completed", "order_id": self.order_id}))
    }

    fn name(&self) -> &str {
        "process_order"
    }
}

async fn create_order(
    task_queue: web::Data<Arc<TaskQueue>>,
    order_data: web::Json<ProcessOrderTask>,
) -> ActixResult<HttpResponse> {
    match task_queue.enqueue(order_data.into_inner(), queue_names::DEFAULT).await {
        Ok(task_id) => Ok(HttpResponse::Ok().json(serde_json::json!({
            "task_id": task_id,
            "status": "queued"
        }))),
        Err(e) => Ok(HttpResponse::InternalServerError().json(serde_json::json!({
            "error": e.to_string()
        })))
    }
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Create the task_queue automatically (based on your environment variables or your task-queue.toml)
    let task_queue = TaskQueueBuilder::auto()
        .build()
        .await
        .expect("failed to build task queue"); // the builder's error type does not convert into std::io::Error, so fail fast here

    println!("Starting web server at http://localhost:3000");
    println!("Start workers separately with: cargo run --bin task-worker");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(task_queue.clone()))
            .route("/order", web::post().to(create_order))
            .configure(rust_task_queue::actix::configure_task_queue_routes)
    })
        .bind("0.0.0.0:3000")?
        .run()
        .await
}
```

Now you can start the Actix web server:

```bash
cargo run
```

**4. Start Workers in Separate Terminal:**

```bash
cargo run --bin task-worker
```

### Axum Web with Separate Workers

The same pattern works with Axum, providing a modern async web framework option:

**1. Create the configuration file as in the [Actix example](#actix-web-with-separate-workers) above**

**2. Set up the worker binary as in the [Actix example](#actix-web-with-separate-workers) above**

**3. Axum Web Application:**

```rust
use rust_task_queue::prelude::*;
use rust_task_queue::queue::queue_names;
use axum::{extract::State, response::Json, Router};
use serde::{Deserialize, Serialize};
use std::sync::Arc;

#[derive(Debug, Serialize, Deserialize, Default, AutoRegisterTask)]
struct ProcessOrderTask {
    order_id: String,
    customer_email: String,
    amount: f64,
}

#[async_trait]
impl Task for ProcessOrderTask {
    async fn execute(&self) -> Result<Vec<u8>, Box<dyn std::error::Error + Send + Sync>> {
        println!("Processing order: {}", self.order_id);
        // Your processing logic here
        let response = serde_json::json!({"status": "completed", "order_id": self.order_id});
        Ok(rmp_serde::to_vec(&response)?)
    }

    fn name(&self) -> &str {
        "process_order"
    }
}

async fn create_order(
    State(task_queue): State<Arc<TaskQueue>>,
    Json(order_data): Json<ProcessOrderTask>,
) -> Json<serde_json::Value> {
    match task_queue.enqueue(order_data, queue_names::DEFAULT).await {
        Ok(task_id) => Json(serde_json::json!({
            "task_id": task_id,
            "status": "queued"
        })),
        Err(e) => Json(serde_json::json!({
            "error": e.to_string()
        }))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create the task_queue automatically (based on your environment variables or your task-queue.toml)
    let task_queue = TaskQueueBuilder::auto().build().await?;

    println!("Starting Axum web server at http://localhost:3000");
    println!("Start workers separately with: cargo run --bin task-worker");

    // Build our application with routes
    let app = Router::new()
        .route("/order", axum::routing::post(create_order))
        .merge(rust_task_queue::axum::configure_task_queue_routes())
        .with_state(task_queue);

    // Run the server
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;

    Ok(())
}
```

Now you can start the Axum web server:

```bash
cargo run
```

**4. Start workers with the same command as in the [Actix example](#actix-web-with-separate-workers) above**

### Examples

The repository includes comprehensive examples:

- [**Actix Integration**](examples/actix-integration/) - Complete Actix Web integration examples
    - Full-featured task queue endpoints
    - Automatic task registration
    - Production-ready patterns
- [**Axum Integration**](examples/axum-integration/) - Complete Axum framework integration examples
    - Full-featured task queue endpoints
    - Automatic task registration
    - Production-ready patterns

### All-in-One Process

For simpler deployments, you can run everything in one process:

```rust
use rust_task_queue::prelude::*;
use rust_task_queue::queue::queue_names;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let task_queue = TaskQueueBuilder::new("redis://localhost:6379")
        .auto_register_tasks()
        .initial_workers(4)  // Workers start automatically
        .with_scheduler()
        .with_autoscaler()
        .build()
        .await?;

    // Your application logic here
    Ok(())
}
```

### Basic Usage

```rust
use rust_task_queue::prelude::*;
use rust_task_queue::queue::queue_names;
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct MyTask {
    data: String,
}

#[async_trait]
impl Task for MyTask {
    async fn execute(&self) -> Result<serde_json::Value, Box<dyn std::error::Error + Send + Sync>> {
        println!("Processing: {}", self.data);
        Ok(serde_json::json!({"status": "completed"}))
    }

    fn name(&self) -> &str {
        "my_task"
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create and configure task queue with enhanced auto-scaling
    let task_queue = TaskQueueBuilder::new("redis://localhost:6379")
        .initial_workers(4)
        .with_scheduler()
        .with_autoscaler()  // Uses enhanced multi-dimensional auto-scaling
        .build()
        .await?;

    // Enqueue a task using predefined queue constants
    let task = MyTask { data: "Hello, World!".to_string() };
    let task_id = task_queue.enqueue(task, queue_names::DEFAULT).await?;

    println!("Enqueued task: {}", task_id);
    Ok(())
}
```

### Enhanced Auto-scaling with Manual Configuration

For programmatic control, use the enhanced configuration API:

```rust
use rust_task_queue::prelude::*;
use rust_task_queue::{ScalingTriggers, SLATargets, AutoScalerConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Enhanced auto-scaling configuration
    let autoscaler_config = AutoScalerConfig {
        min_workers: 2,
        max_workers: 50,
        scale_up_count: 3,
        scale_down_count: 1,

        // Multi-dimensional scaling triggers
        scaling_triggers: ScalingTriggers {
            queue_pressure_threshold: 1.5,
            worker_utilization_threshold: 0.80,
            task_complexity_threshold: 2.0,
            error_rate_threshold: 0.05,
            memory_pressure_threshold: 512.0,
        },

        // Adaptive learning settings
        enable_adaptive_thresholds: true,
        learning_rate: 0.1,
        adaptation_window_minutes: 30,

        // Stability controls
        scale_up_cooldown_seconds: 120,
        scale_down_cooldown_seconds: 600,
        consecutive_signals_required: 2,

        // SLA targets for optimization
        target_sla: SLATargets {
            max_p95_latency_ms: 5000.0,
            min_success_rate: 0.95,
            max_queue_wait_time_ms: 10000.0,
            target_worker_utilization: 0.70,
        },
    };

    let task_queue = TaskQueueBuilder::new("redis://localhost:6379")
        .auto_register_tasks()
        .initial_workers(4)
        .with_scheduler()
        .with_autoscaler_config(autoscaler_config)
        .build()
        .await?;

    // Monitor enhanced auto-scaling
    let recommendations = task_queue.autoscaler()
        .get_scaling_recommendations()
        .await?;
    println!("Auto-scaling recommendations:\n{}", recommendations);

    Ok(())
}
```

### Queue Constants

The framework provides predefined queue constants for type safety and consistency:

```rust
use rust_task_queue::queue::queue_names;

// Available queue constants
queue_names::DEFAULT        // "default" - Standard priority tasks
queue_names::HIGH_PRIORITY  // "high_priority" - High priority tasks
queue_names::LOW_PRIORITY   // "low_priority" - Background tasks
```
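
For example, you can route work to different queues by passing the appropriate constant to `enqueue`. This sketch assumes the `MyTask` type from the Basic Usage example above is in scope and registered:

```rust
use rust_task_queue::prelude::*;
use rust_task_queue::queue::queue_names;

async fn route_by_priority(task_queue: &TaskQueue) -> Result<(), Box<dyn std::error::Error>> {
    // Latency-sensitive work goes to the high-priority queue...
    let urgent_id = task_queue
        .enqueue(MyTask { data: "urgent".into() }, queue_names::HIGH_PRIORITY)
        .await?;

    // ...while background housekeeping goes to the low-priority queue.
    let cleanup_id = task_queue
        .enqueue(MyTask { data: "cleanup".into() }, queue_names::LOW_PRIORITY)
        .await?;

    println!("Enqueued {} and {}", urgent_id, cleanup_id);
    Ok(())
}
```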

### Automatic Task Registration

With the `auto-register` feature, tasks can be automatically discovered and registered:

```rust
use rust_task_queue::prelude::*;
use rust_task_queue::queue::queue_names;
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, Default, AutoRegisterTask)]
struct MyTask {
    data: String,
}

#[async_trait]
impl Task for MyTask {
    async fn execute(&self) -> Result<serde_json::Value, Box<dyn std::error::Error + Send + Sync>> {
        println!("Processing: {}", self.data);
        Ok(serde_json::json!({"status": "completed"}))
    }

    fn name(&self) -> &str {
        "my_task"
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Tasks with AutoRegisterTask are automatically discovered!
    let task_queue = TaskQueueBuilder::new("redis://localhost:6379")
        .auto_register_tasks()
        .initial_workers(4)
        .with_scheduler()
        .with_autoscaler()
        .build()
        .await?;

    // No manual registration needed!
    let task = MyTask { data: "Hello, World!".to_string() };
    let task_id = task_queue.enqueue(task, queue_names::DEFAULT).await?;

    println!("Enqueued task: {}", task_id);
    Ok(())
}
```

You can also use the attribute macro for custom task names:

```rust
#[register_task("custom_name")]
#[derive(Debug, Serialize, Deserialize)]
struct MyTask {
    data: String,
}

impl Default for MyTask {
    fn default() -> Self {
        Self { data: String::new() }
    }
}
```

## CLI Worker Tool

The framework includes a powerful CLI tool for running workers in separate processes, with full support for the
enhanced auto-scaling system.

### Enhanced CLI Usage with Auto-scaling

```bash
# Start workers based on your root task-queue.toml configuration file (recommended)
cargo run --bin task-worker

# Workers with custom auto-scaling thresholds
cargo run --bin task-worker -- \
  --workers 4 \
  --enable-autoscaler \
  --autoscaler-min-workers 2 \
  --autoscaler-max-workers 20 \
  --autoscaler-scale-up-threshold 1.5 \
  --autoscaler-consecutive-signals 3

# Monitor auto-scaling in real-time
cargo run --bin task-worker -- \
  --workers 6 \
  --enable-autoscaler \
  --enable-scheduler \
  --log-level debug  # See detailed auto-scaling decisions
```

### CLI Options

- `--redis-url, -r`: Redis connection URL (default: `redis://127.0.0.1:6379`)
- `--workers, -w`: Number of initial workers (default: `4`)
- `--enable-autoscaler, -a`: Enable enhanced multi-dimensional auto-scaling
- `--enable-scheduler, -s`: Enable task scheduler for delayed tasks
- `--queues, -q`: Comma-separated list of queue names to process
- `--worker-prefix`: Custom prefix for worker names
- `--config, -c`: Path to enhanced configuration file (recommended)
- `--log-level`: Logging level (trace, debug, info, warn, error)
- `--log-format`: Log output format (json, compact, pretty)
- `--autoscaler-min-workers`: Minimum workers for auto-scaling
- `--autoscaler-max-workers`: Maximum workers for auto-scaling
- `--autoscaler-consecutive-signals`: Required consecutive signals for scaling

## Auto-scaling

### **Multi-dimensional Scaling Intelligence**

Our enhanced auto-scaling system analyzes 5 key metrics simultaneously for intelligent scaling decisions:

- **Queue Pressure Score**: Weighted queue depth accounting for priority levels
- **Worker Utilization**: Real-time busy/idle ratio analysis
- **Task Complexity Factor**: Dynamic execution pattern recognition
- **Error Rate Monitoring**: System health and stability tracking
- **Memory Pressure**: Per-worker resource utilization analysis

### **Adaptive Threshold Learning**

The system automatically adjusts scaling triggers based on actual performance vs. your SLA targets:

```toml
[autoscaler.target_sla]
max_p95_latency_ms = 3000.0           # 3 second P95 latency target
min_success_rate = 0.99               # 99% success rate target
max_queue_wait_time_ms = 5000.0       # 5-second max queue wait
target_worker_utilization = 0.75      # optimal 75% worker utilization
```

### **Stability Controls**

Advanced hysteresis and cooldown mechanisms prevent scaling oscillations:

- **Consecutive Signal Requirements**: Configurable signal thresholds (2-5 signals)
- **Independent Cooldowns**: Separate scale-up (3 min) and scale-down (15 min) periods
- **Performance History**: Learning from past scaling decisions

## Test Coverage

The project maintains comprehensive test coverage across multiple dimensions:

- **Unit Tests**: 124 tests covering all core functionality
- **Integration Tests**: 9 tests for end-to-end workflows
- **Actix Integration Tests**: 22 tests for web endpoints and metrics API
- **Axum Integration Tests**: 11 tests for web framework integration
- **Error Scenario Tests**: 9 tests for edge cases and failure modes
- **Performance Tests**: 6 tests for throughput and load handling
- **Security Tests**: 11 tests for injection attacks and safety
- **Benchmarks**: 7 performance benchmarks for optimization

**Total**: 192 tests ensuring reliability and performance

## Enterprise-Grade Observability

The framework includes comprehensive structured logging and tracing capabilities for production systems:

### Tracing Features

- **Complete Task Lifecycle Tracking**: From enqueue to completion with detailed spans
- **Performance Monitoring**: Execution timing, queue metrics, and throughput analysis
- **Error Chain Analysis**: Deep context and source tracking for debugging
- **Worker Activity Monitoring**: Real-time status and resource utilization
- **Distributed Tracing**: Async instrumentation with span correlation
- **Production Logging Configuration**: Multiple output formats (JSON/compact/pretty)

### Logging Configuration

```rust
use rust_task_queue::prelude::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure structured logging for production
    configure_production_logging(
        rust_task_queue::LogLevel::Info,
        rust_task_queue::LogFormat::Json
    );

    let task_queue = TaskQueueBuilder::new("redis://localhost:6379")
        .auto_register_tasks()
        .initial_workers(4)
        .build()
        .await?;

    Ok(())
}
```

### Environment-Based Configuration

```bash
# Configure logging via environment variables
export LOG_LEVEL=info        # trace, debug, info, warn, error
export LOG_FORMAT=json       # json, compact, pretty

# Start worker with production logging
cargo run --bin task-worker
```

### Performance Monitoring

The tracing system provides detailed performance insights:

- **Task Execution Timing**: Individual task performance tracking
- **Queue Depth Monitoring**: Real-time queue status and trends
- **Worker Utilization**: Capacity analysis and efficiency metrics
- **Error Rate Tracking**: System health and failure analysis
- **Throughput Analysis**: Tasks per second and bottleneck identification

## Comprehensive Metrics API

The framework includes a production-ready metrics API with 15+ endpoints for monitoring and diagnostics:

### Health & Status Endpoints

- **`/tasks/health`** - Detailed health check with component status (Redis, workers, scheduler)
- **`/tasks/status`** - System status with health metrics and worker information

### Core Metrics Endpoints

- **`/tasks/metrics`** - Comprehensive metrics combining all available data
- **`/tasks/metrics/system`** - Enhanced system metrics with memory and performance data
- **`/tasks/metrics/performance`** - Performance report with task execution metrics and SLA data
- **`/tasks/metrics/autoscaler`** - AutoScaler metrics and scaling recommendations
- **`/tasks/metrics/queues`** - Individual queue metrics for all queues
- **`/tasks/metrics/workers`** - Worker-specific metrics and status
- **`/tasks/metrics/memory`** - Memory usage metrics and tracking
- **`/tasks/metrics/summary`** - Quick metrics summary for debugging

### Task Registry Endpoints

- **`/tasks/registered`** - Detailed task registry information and features

### Administrative Endpoints

- **`/tasks/alerts`** - Active alerts from the metrics system
- **`/tasks/sla`** - SLA status and violations with performance percentages
- **`/tasks/diagnostics`** - Comprehensive diagnostics with queue health analysis
- **`/tasks/uptime`** - System uptime and runtime information
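
As a quick smoke test, you can probe a few of these endpoints from a small client. The sketch below assumes the integration server from the earlier examples is listening on `localhost:3000`, and uses the `reqwest` and `serde_json` crates (which are not dependencies of this framework):

```rust
// Cargo.toml (for this probe only):
//   reqwest = { version = "0.12", features = ["json"] }
//   serde_json = "1"
//   tokio = { version = "1", features = ["full"] }

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let base = "http://localhost:3000";

    // Hit a few of the endpoints listed above and pretty-print the JSON bodies.
    for path in ["/tasks/health", "/tasks/status", "/tasks/metrics/summary"] {
        let body: serde_json::Value = reqwest::get(format!("{base}{path}")).await?.json().await?;
        println!("{path}:\n{body:#}");
    }

    Ok(())
}
```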

## Best Practices

### Task Design

- Keep tasks idempotent when possible (see the sketch after this list)
- Use meaningful task names for monitoring
- Handle errors gracefully with proper logging
- Keep task payloads reasonably small
- Use appropriate queue constants (`queue_names::*`)
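
To make the idempotency guideline concrete, here is a minimal sketch of a task that short-circuits duplicate deliveries. `already_processed` is a hypothetical helper standing in for a lookup against your own datastore:

```rust
use rust_task_queue::prelude::*;
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize, Default)]
struct SendReceiptTask {
    order_id: String,
}

// Hypothetical helper; wire this up to your database or cache.
async fn already_processed(_order_id: &str) -> Result<bool, Box<dyn std::error::Error + Send + Sync>> {
    Ok(false)
}

#[async_trait]
impl Task for SendReceiptTask {
    async fn execute(&self) -> Result<serde_json::Value, Box<dyn std::error::Error + Send + Sync>> {
        // Idempotency guard: a retried or duplicated delivery becomes a no-op.
        if already_processed(&self.order_id).await? {
            return Ok(serde_json::json!({"status": "skipped", "order_id": self.order_id}));
        }

        // ... send the receipt, then record completion in the datastore ...
        Ok(serde_json::json!({"status": "sent", "order_id": self.order_id}))
    }

    fn name(&self) -> &str {
        "send_receipt"
    }
}
```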

### Deployment

- Use separate worker processes for better isolation
- Scale workers based on queue metrics
- Monitor Redis memory usage and performance
- Set up proper logging and alerting
- Configure appropriate `max_concurrent_tasks` based on workload characteristics
- Monitor active task counts to optimize worker capacity
- Use graceful shutdown patterns to prevent task loss during deployments

### Development & Testing

- Use auto-registration for rapid development
- Test with different worker configurations
- Monitor queue sizes during load testing
- Use the built-in monitoring endpoints
- Take advantage of the CLI tools for testing
- Use configuration files (`task-queue.toml`) instead of hardcoded values

## Troubleshooting

### Common Issues

1. **Workers not processing tasks**: Check Redis connectivity and task registration
2. **High memory usage**: Monitor task payload sizes and Redis memory
3. **Slow processing**: Consider increasing worker count or optimizing task logic
4. **Connection issues**: Verify the Redis URL and network connectivity (see the probe sketch below)
5. **Tasks getting re-queued frequently**: Increase `max_concurrent_tasks` or optimize task execution time
6. **Workers not shutting down gracefully**: Check for long-running tasks and adjust shutdown timeout
7. **High active task count**: Monitor task execution patterns and consider load balancing
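
For issues 1 and 4, a minimal connectivity probe using the `redis` crate (which this framework builds on) can quickly rule out networking problems:

```rust
// Quick Redis connectivity check, independent of the task queue itself.
fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1:6379")?;
    let mut con = client.get_connection()?;

    // A healthy server answers PING with "PONG".
    let pong: String = redis::cmd("PING").query(&mut con)?;
    println!("Redis replied: {pong}");
    Ok(())
}
```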

## Documentation

- [API Documentation](https://docs.rs/rust-task-queue)
- [Development Guide](DEVELOPMENT.md) - Comprehensive development documentation

## Maintenance Status

**Actively Developed** - Regular releases, responsive to issues, feature requests welcome.

**Compatibility:**

- Rust 1.70.0+
- Redis 6.0+
- Tokio 1.0+

## Contributing

We welcome contributions! Please see our [Development Guide](DEVELOPMENT.md) for:

- Development setup instructions
- Code style guidelines
- Testing requirements
- Performance benchmarking
- Documentation standards

## License

Licensed under either of Apache License, Version 2.0 or MIT License at your option.