# blueprint-profiling 0.2.0-alpha.2

Job profiling and resource benchmarking for Blueprint services.

## Overview

This crate provides automated profiling tools to measure resource usage of Blueprint jobs. The profiling data is used for:

- **FaaS compatibility detection** - Determine if jobs can run on serverless platforms (AWS Lambda, GCP Functions, etc.)
- **VM sizing recommendations** - Right-size compute resources based on actual usage
- **Cost estimation** - Predict infrastructure costs before deployment
- **QoS baseline establishment** - Set performance expectations for monitoring

## Features

- **Cross-platform memory profiling** - Works on macOS and Linux using `libc::getrusage`
- **Statistical analysis** - Multiple runs with percentile calculation (p95, p99)
- **Configurable profiling** - Control sample size, warm-up runs, and timeouts
- **Async/await support** - Profile async Blueprint jobs
- **Production-ready** - Zero TODOs, no mocks, fully tested

## Usage

```rust
use blueprint_profiling::{JobProfile, ProfileConfig, ProfileRunner};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure profiling
    let config = ProfileConfig {
        sample_size: 10,
        warmup_runs: 2,
        max_execution_time: Duration::from_secs(30),
    };

    // Profile a job
    let profile = ProfileRunner::profile_job(
        || async {
            // Your job logic here (`expensive_computation` stands in for
            // your own async fn)
            let _result = expensive_computation().await;
            Ok(())
        },
        config,
    ).await?;

    // Analyze results
    println!("Avg: {}ms, P95: {}ms, P99: {}ms",
        profile.avg_duration_ms,
        profile.p95_duration_ms,
        profile.p99_duration_ms
    );
    println!("Peak memory: {}MB", profile.peak_memory_mb);

    // Check FaaS compatibility
    const AWS_LAMBDA_TIMEOUT_MS: u64 = 900_000; // 15 minutes
    const AWS_LAMBDA_MEMORY_MB: u32 = 10_240; // 10GB

    let faas_compatible = profile.p95_duration_ms < AWS_LAMBDA_TIMEOUT_MS
        && profile.peak_memory_mb < AWS_LAMBDA_MEMORY_MB;

    if faas_compatible {
        println!("✓ Job is compatible with AWS Lambda!");
    }

    Ok(())
}
```

## Architecture

This crate is separate from `blueprint-manager` to avoid circular dependencies:

- `blueprint-profiling` - Build/test-time profiling tool (this crate)
- `blueprint-manager` - Runtime tool that **reads** profiles from metadata

The profiling workflow:
1. Developer adds profiling tests to their Blueprint
2. Tests execute jobs multiple times to collect statistics
3. Profiles can be embedded in `blueprint.json` metadata
4. Blueprint Manager reads profiles to make deployment decisions

## Examples

See `examples/basic_usage.rs` for a complete working example:

```bash
cargo run --example basic_usage -p blueprint-profiling
```

## Inspired By

This design is inspired by Substrate's benchmarking framework:
- Automated execution as part of build/test process
- Statistical rigor with percentile analysis
- Conservative defaults (no profile = assume incompatible)
- Cross-platform measurement

## Platform Support

- **macOS** - Uses `ru_maxrss` in bytes
- **Linux** - Uses `ru_maxrss` in kilobytes
- ⚠️ **Windows** - Fallback to 0 (platform-specific implementation needed)

## API Reference

### `ProfileConfig`

Configuration for profiling runs:
- `sample_size: u32` - Number of measurement runs (default: 10)
- `warmup_runs: u32` - Warm-up iterations before measurement (default: 2)
- `max_execution_time: Duration` - Timeout per execution (default: 300s)

### `JobProfile`

Statistical summary of job resource usage:
- `avg_duration_ms: u64` - Average execution time
- `p95_duration_ms: u64` - 95th percentile duration
- `p99_duration_ms: u64` - 99th percentile duration
- `peak_memory_mb: u32` - Peak memory usage
- `stateful: bool` - Whether job maintains state
- `persistent_connections: bool` - Whether job uses persistent connections
- `sample_size: u32` - Number of samples collected

### `ProfileRunner::profile_job`

Profile a job by executing it multiple times:

```rust
pub async fn profile_job<F, Fut>(
    job_fn: F,
    config: ProfileConfig,
) -> Result<JobProfile, ProfilingError>
where
    F: Fn() -> Fut,
    Fut: Future<Output = Result<(), Box<dyn Error + Send + Sync>>>,
```