qrush 0.6.0

Lightweight Job Queue and Task Scheduler for Rust (Actix + Redis + Cron)
# Running QRush as a Separate Process

## Current Architecture Analysis

Your `qrush` crate supports **two modes**:

1. **Integrated Mode** (current in test app): Workers run inside the same process as your web server
2. **Separate Process Mode**: Workers run in a dedicated `qrush-engine` process

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────┐
│  Current Setup (Integrated Mode)                            │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌──────────────────┐                                       │
│  │   Web Server     │                                       │
│  │  (Actix Web)     │                                       │
│  │                  │                                       │
│  │  • Enqueue Jobs  │                                       │
│  │  • Process Jobs  │  ← Workers run here                  │
│  │  • Serve Routes  │                                       │
│  └──────────────────┘                                       │
│                                                              │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│  Separate Process Setup (Recommended for Production)         │
├─────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌──────────────────┐         ┌──────────────────┐          │
│  │   Web Server     │         │  qrush-engine    │          │
│  │  (Actix Web)     │         │  (Separate       │          │
│  │                  │         │   Process)       │          │
│  │  • Enqueue Jobs  │────────▶│                  │          │
│  │  • Serve Routes  │  Redis  │  • Process Jobs  │          │
│  │                  │         │  • Cron Jobs     │          │
│  └──────────────────┘         └──────────────────┘          │
│                                                              │
└─────────────────────────────────────────────────────────────┘
```

## Key Differences

| Aspect | Integrated Mode | Separate Process Mode |
|--------|----------------|----------------------|
| **Process** | Single process | Two processes |
| **Worker Location** | Inside web server | Separate `qrush-engine` binary |
| **Scalability** | Limited (shared resources) | Better (isolated resources) |
| **Deployment** | Simpler | More complex |
| **Production Ready** | Good for small apps | Better for production |

## How to Run Separate Process Mode

### Step 1: Create a Shared Job Registry Module

Create a shared module that both your app and `qrush-engine` can use to register jobs:

```rust
// test/src/shared_jobs/mod.rs
pub mod notify_user_job;
pub mod daily_report_job;

use qrush_engine::registry::register_job;
use crate::shared_jobs::notify_user_job::NotifyUserJob;
use crate::shared_jobs::daily_report_job::DailyReportJob;

/// Register all jobs - call this from both app and engine
pub fn register_all_jobs() {
    register_job(NotifyUserJob::name(), NotifyUserJob::handler);
    register_job(DailyReportJob::name(), DailyReportJob::handler);
}
```
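
qrush's registry internals aren't shown in this guide, but the mental model is a global map from job name to handler function, which is why *both* processes must call `register_all_jobs()`: the engine resolves handlers by name at dequeue time. A minimal sketch of that idea (names and the `Handler` signature are illustrative, not qrush's actual API):

```rust
use std::collections::HashMap;

// Illustrative handler signature: takes the raw job payload.
type Handler = fn(&str);

fn notify_user_handler(payload: &str) {
    println!("NotifyUserJob: {payload}");
}

fn daily_report_handler(payload: &str) {
    println!("DailyReportJob: {payload}");
}

/// Build a name -> handler map, mirroring what register_all_jobs() does.
fn build_registry() -> HashMap<&'static str, Handler> {
    let mut reg: HashMap<&'static str, Handler> = HashMap::new();
    reg.insert("NotifyUserJob", notify_user_handler);
    reg.insert("DailyReportJob", daily_report_handler);
    reg
}

fn main() {
    let registry = build_registry();
    // The engine pops a (name, payload) pair from Redis and dispatches by name:
    let (name, payload) = ("NotifyUserJob", r#"{"user_id":42}"#);
    match registry.get(name) {
        Some(handler) => handler(payload),
        // An unregistered name is the classic "jobs enqueued but not found" failure.
        None => eprintln!("unregistered job: {name}"),
    }
}
```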

### Step 2: Modify Your Test App (Remove Worker Initialization)

**Before (Integrated Mode):**
```rust
// test/src/main.rs
QrushIntegrated::initialize(None).await;  // ❌ Remove this
QrushEngine::initialize(None).await;       // ❌ Remove this
```

**After (Separate Process Mode):**
```rust
// test/src/main.rs
use crate::shared_jobs::register_all_jobs;

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let _ = dotenvy::dotenv();
    
    // ✅ Register jobs (needed for enqueue to work)
    register_all_jobs();
    
    // ❌ DO NOT initialize workers - they run in separate process
    // QrushIntegrated::initialize(None).await;  // REMOVE
    // QrushEngine::initialize(None).await;      // REMOVE
    
    // ... rest of your app setup
    HttpServer::new(move || {
        App::new()
            // Remove worker configs - not needed
            // .app_data(web::Data::new(qrush_worker_config))
            // .app_data(web::Data::new(qrush_engine_worker_config))
            // Remove qrush routes - engine has its own web UI
            // .service(web::scope("/qrush").configure(...))
            // .service(web::scope("/qrush-engine").configure(...))
            .service(web::scope("/api/v1").configure(...))
            .route("/", web::get().to(health_check))
    })
    .bind(server_address)?
    .run()
    .await
}
```

### Step 3: Create a Standalone qrush-engine Binary

Create a new binary that registers jobs and starts the engine. Note that the `use test::shared_jobs::...` import below resolves through the package's library target, so `shared_jobs` must also be declared in `src/lib.rs`; modules declared only in `src/main.rs` are not visible to other binaries:

```rust
// test/src/bin/qrush_engine.rs
use qrush_engine::config::QueueConfig;
use qrush_engine::cron::cron_scheduler::CronScheduler;
use test::shared_jobs::{register_all_jobs, daily_report_job::DailyReportJob};
use std::time::Duration;
use tokio::signal;
use tracing::{info, warn};

#[tokio::main(flavor = "multi_thread")]
async fn main() -> anyhow::Result<()> {
    // Load .env
    dotenvy::dotenv().ok();
    
    // Setup tracing
    if std::env::var_os("RUST_LOG").is_none() {
        std::env::set_var("RUST_LOG", "info,qrush_engine=info");
    }
    tracing_subscriber::fmt::init();
    
    // ✅ Register all jobs BEFORE starting workers
    info!("Registering jobs...");
    register_all_jobs();
    
    // Get Redis URL
    let redis_url = std::env::var("REDIS_URL")
        .unwrap_or_else(|_| "redis://127.0.0.1:6379".to_string());
    
    // Configure queues (format: "name:concurrency:priority")
    let queues = vec![
        QueueConfig::new("default", 5, 1),
        QueueConfig::new("critical", 10, 0),
    ];
    
    // ✅ Initialize workers
    info!("Starting qrush-engine workers...");
    QueueConfig::initialize(redis_url, queues).await?;
    
    // Register cron jobs
    let daily_report = DailyReportJob {
        report_type: "engine_report".to_string(),
    };
    CronScheduler::register_cron_job(daily_report).await?;
    
    info!("qrush-engine started. Press Ctrl+C to stop.");
    
    // Wait for shutdown signal
    tokio::select! {
        _ = signal::ctrl_c() => {
            warn!("Received Ctrl+C, shutting down...");
        }
        _ = async {
            #[cfg(unix)]
            {
                use tokio::signal::unix::{signal as unix_signal, SignalKind};
                if let Ok(mut sigterm) = unix_signal(SignalKind::terminate()) {
                    sigterm.recv().await;
                }
            }
            #[cfg(not(unix))]
            {
                futures_util::future::pending::<()>().await;
            }
        } => {
            warn!("Received SIGTERM, shutting down...");
        }
    }
    
    // Graceful shutdown: signal workers to stop, then give in-flight jobs time to drain
    qrush_engine::config::trigger_shutdown();
    tokio::time::sleep(Duration::from_secs(5)).await;
    
    info!("qrush-engine exited");
    Ok(())
}
```

### Step 4: Update Cargo.toml

Add the binary to your test app's `Cargo.toml`:

```toml
# test/Cargo.toml
[[bin]]
name = "qrush_engine"
path = "src/bin/qrush_engine.rs"
```

### Step 5: Run Both Processes

**Terminal 1 - Web Server (enqueues jobs only):**
```bash
cd /Users/snm/ws/xsnm/ws/crates/test
cargo run
```

**Terminal 2 - QRush Engine (processes jobs):**
```bash
cd /Users/snm/ws/xsnm/ws/crates/test
cargo run --bin qrush_engine
```

Or use the standalone `qrush-engine` binary (note that the stock binary has no knowledge of your app's jobs; see the Alternative section below):
```bash
# Build and install qrush-engine
cd /Users/snm/ws/xsnm/ws/crates/qrush-engine
cargo build --release --bin qrush-engine

# Run it
./target/release/qrush-engine \
  --redis redis://127.0.0.1:6379 \
  --queues default:5,critical:10
```
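
The `--queues` value packs one `name:concurrency` pair per comma-separated entry. The actual flag parsing lives inside the `qrush-engine` binary; this is just a sketch of the format:

```rust
/// Parse a spec like "default:5,critical:10" into (name, concurrency) pairs.
/// Entries that don't match the name:concurrency shape are skipped.
fn parse_queues(spec: &str) -> Vec<(String, usize)> {
    spec.split(',')
        .filter_map(|entry| {
            let (name, conc) = entry.split_once(':')?;
            Some((name.trim().to_string(), conc.trim().parse().ok()?))
        })
        .collect()
}

fn main() {
    for (name, concurrency) in parse_queues("default:5,critical:10") {
        println!("queue {name}: {concurrency} workers");
    }
}
```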

## Alternative: Use Built-in qrush-engine Binary

If you want to use the built-in `qrush-engine` binary from the `qrush-engine` crate, you need to:

1. **Create a job registration plugin/script** that the engine can load, or
2. **Modify the engine binary** to register your jobs

The cleanest approach is to create your own binary (as shown in Step 3) that wraps the engine with your job registrations.

## Environment Variables

Both processes need:

```bash
# Shared by both processes
REDIS_URL=redis://127.0.0.1:6379

# Optional: For qrush-engine web UI
QRUSH_ENGINE_BASIC_AUTH=admin:password
```

## Monitoring

- **Web Server**: Your app routes (e.g., `http://localhost:8083/health`)
- **QRush Engine**: the engine's web UI (if you add one), or the CLI:
  ```bash
  cargo run --bin qrush -- status
  cargo run --bin qrush -- stats -w
  ```

## Production Deployment

### Using systemd

**`/etc/systemd/system/test-app.service`:**
```ini
[Unit]
Description=Test App Web Server
After=network.target redis.service

[Service]
Type=simple
User=youruser
WorkingDirectory=/path/to/test
ExecStart=/path/to/test/target/release/test
Environment="REDIS_URL=redis://127.0.0.1:6379"
Restart=always

[Install]
WantedBy=multi-user.target
```

**`/etc/systemd/system/qrush-engine.service`:**
```ini
[Unit]
Description=QRush Engine Worker
After=network.target redis.service

[Service]
Type=simple
User=youruser
WorkingDirectory=/path/to/test
ExecStart=/path/to/test/target/release/qrush_engine
Environment="REDIS_URL=redis://127.0.0.1:6379"
Environment="RUST_LOG=info"
Restart=always

[Install]
WantedBy=multi-user.target
```
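
Since the engine drains in-flight jobs for about five seconds after a termination signal, it helps to give systemd a stop timeout comfortably above that. The value below is an assumption; size it to your longest-running job:

```ini
# Add to the [Service] section of qrush-engine.service
KillSignal=SIGTERM
TimeoutStopSec=30
```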

### Using Docker Compose

```yaml
version: '3.8'

services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  
  web:
    build: .
    command: cargo run --release
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis
  
  qrush-engine:
    build: .
    command: cargo run --release --bin qrush_engine
    environment:
      - REDIS_URL=redis://redis:6379
      - RUST_LOG=info
    depends_on:
      - redis
    deploy:
      replicas: 2  # Scale workers horizontally
```
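
`cargo run` inside the containers recompiles on every start, which is fine for development; for production images you would typically build both binaries once and run them directly. A minimal multi-stage Dockerfile sketch (the binary names assume the package is called `test`, as elsewhere in this guide; the base image tags are illustrative):

```dockerfile
# Build stage: compile the web server and the engine binary in one pass
FROM rust:1.79 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release --bin test --bin qrush_engine

# Runtime stage: ship only the compiled binaries
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/test /usr/local/bin/test
COPY --from=builder /app/target/release/qrush_engine /usr/local/bin/qrush_engine
# docker-compose then picks the command per service:
#   web:          command: ["test"]
#   qrush-engine: command: ["qrush_engine"]
CMD ["test"]
```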

## Benefits of Separate Process Mode

1. **Resource Isolation**: Web server and workers don't compete for resources
2. **Independent Scaling**: Scale workers independently of web servers
3. **Better Fault Tolerance**: Worker crashes don't affect web server
4. **Easier Monitoring**: Separate metrics and logs
5. **Production Best Practice**: Matches common worker process patterns

## Migration Checklist

- [ ] Create shared job registry module
- [ ] Remove `QrushIntegrated::initialize()` from main.rs
- [ ] Remove `QrushEngine::initialize()` from main.rs  
- [ ] Remove worker configs from HttpServer setup
- [ ] Remove qrush routes from web server (or keep for monitoring)
- [ ] Create `qrush_engine` binary with job registrations
- [ ] Update Cargo.toml with new binary
- [ ] Test: Run web server and engine separately
- [ ] Verify: Jobs enqueue from web, process in engine
- [ ] Update deployment scripts/configs

## Troubleshooting

**Jobs not processing?**
- Ensure jobs are registered BEFORE `QueueConfig::initialize()`
- Check Redis connection in both processes
- Verify queue names match between enqueue and worker config

**Jobs enqueued but not found?**
- Check Redis keys: `snm:queue:pending:{queue_name}`
- Verify job registration matches job name

**Engine won't start?**
- Check Redis is running: `redis-cli ping`
- Verify `REDIS_URL` environment variable
- Check logs for registration errors