optirs-bench 0.3.1

OptiRS benchmarking, profiling, and performance analysis tools
# OptiRS Bench

Benchmarking, profiling, and performance analysis tools for the OptiRS machine learning optimization library.

## Overview

OptiRS-Bench provides comprehensive benchmarking and performance analysis capabilities for the OptiRS ecosystem. This crate includes tools for measuring optimization performance, detecting performance regressions, monitoring system resources, and ensuring the reliability and security of optimization workloads.

## Features

- **Performance Benchmarking**: Comprehensive optimization performance measurement
- **Regression Detection**: Automated detection of performance regressions
- **Memory Profiling**: Memory usage analysis and leak detection
- **System Monitoring**: Real-time system resource monitoring
- **Security Auditing**: Security analysis of optimization pipelines
- **Cross-Platform Support**: Benchmarking across different platforms and hardware
- **CI/CD Integration**: Automated performance testing in CI/CD pipelines
- **Comparative Analysis**: Side-by-side comparison of optimization strategies

## Benchmarking Tools

### Performance Measurement
- **Throughput Analysis**: Operations per second measurement
- **Latency Profiling**: Step-by-step timing analysis
- **Convergence Tracking**: Optimization convergence rate measurement
- **Resource Utilization**: CPU, memory, and GPU usage monitoring
- **Scalability Testing**: Performance across different problem sizes
- **Hardware-Specific Benchmarks**: Platform-optimized performance tests
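
To make the throughput and latency metrics above concrete, here is a minimal, self-contained sketch using only `std::time`. The `measure` helper is hypothetical, written for illustration — it is not part of the optirs-bench API — but it shows the core idea: warm up, time a fixed number of iterations, and derive ops/sec and mean per-step latency from the same wall-clock total.

```rust
use std::time::Instant;

/// Hypothetical helper illustrating throughput/latency measurement:
/// run `warmup` untimed iterations, then time `iterations` runs of `op`.
/// Returns (ops per second, mean seconds per op).
fn measure<F: FnMut()>(iterations: u32, warmup: u32, mut op: F) -> (f64, f64) {
    for _ in 0..warmup {
        op(); // warm caches and branch predictors before timing
    }
    let start = Instant::now();
    for _ in 0..iterations {
        op();
    }
    let total = start.elapsed().as_secs_f64();
    let throughput = iterations as f64 / total;   // ops/sec
    let mean_latency = total / iterations as f64; // sec/op
    (throughput, mean_latency)
}

fn main() {
    // Stand-in workload for an optimizer step.
    let mut acc = 0u64;
    let (ops_per_sec, latency) = measure(1_000, 100, || {
        for i in 0..1_000u64 {
            acc = acc.wrapping_add(i);
        }
    });
    println!("{:.0} ops/s, {:.9} s/op (acc = {})", ops_per_sec, latency, acc);
}
```

The warmup phase matters: without it, first-iteration effects (cold caches, lazy allocation) skew both numbers, which is why the benchmark config below exposes `warmup_iterations` separately.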

### Regression Detection
- **Automated Testing**: Continuous performance regression detection
- **Statistical Analysis**: Statistical significance testing for performance changes
- **Threshold Monitoring**: Configurable performance degradation alerts
- **Historical Tracking**: Long-term performance trend analysis
- **Bisection Analysis**: Automated identification of regression-causing changes
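
The statistical-analysis step above hinges on a two-sample test such as Welch's t-test (the test the regression detector is configured with later in this README). As a rough, self-contained sketch — `welch_t` and its callers are illustrative, not the crate's API — the t statistic compares baseline and current timing samples without assuming equal variances:

```rust
/// Sample mean.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// Unbiased sample variance (divides by n - 1).
fn var(xs: &[f64]) -> f64 {
    let m = mean(xs);
    xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() as f64 - 1.0)
}

/// Welch's t statistic for two independent samples; positive when the
/// current samples are larger (i.e., slower) than the baseline.
fn welch_t(baseline: &[f64], current: &[f64]) -> f64 {
    let (m1, m2) = (mean(baseline), mean(current));
    let (v1, v2) = (var(baseline), var(current));
    let se = (v1 / baseline.len() as f64 + v2 / current.len() as f64).sqrt();
    (m2 - m1) / se
}

fn main() {
    // Step times (seconds) from a baseline build vs. the current build.
    let baseline = [0.010, 0.011, 0.009, 0.010, 0.010];
    let current = [0.013, 0.012, 0.013, 0.014, 0.013];
    let t = welch_t(&baseline, &current);
    // A large positive t suggests a slowdown; a full detector would
    // convert t to a p-value (via the Welch–Satterthwaite degrees of
    // freedom) before flagging a regression.
    println!("Welch t = {:.2}", t);
}
```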

## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
optirs-bench = "0.3.1"
scirs2-core = "0.4.0"  # Required foundation
```

### Feature Selection

Enable specific benchmarking features:

```toml
[dependencies]
optirs-bench = { version = "0.3.1", features = ["profiling", "regression_detection", "security_auditing"] }
```

Available features:
- `profiling`: Memory and performance profiling tools (enabled by default)
- `regression_detection`: Automated regression detection
- `security_auditing`: Security analysis tools
- `ci_integration`: Continuous integration support

## Command-Line Tools

OptiRS-Bench includes several command-line utilities:

### Main Benchmarking Tool
```bash
# Run comprehensive benchmark suite
optirs-bench --optimizer adam --dataset cifar10 --iterations 1000

# Compare multiple optimizers
optirs-bench compare --optimizers adam,sgd,adamw --dataset imagenet

# Hardware-specific benchmarking
optirs-bench --hardware gpu --device nvidia-rtx-4090
```

### Performance Regression Detector
```bash
# Detect performance regressions
performance-regression-detector --baseline v1.0.0 --current HEAD

# Continuous monitoring
performance-regression-detector --monitor --threshold 5% --alert email
```

### Memory Leak Reporter
```bash
# Memory leak detection
memory-leak-reporter --duration 1h --sample-rate 1s

# Generate memory usage report
memory-leak-reporter --report --output memory_report.html
```

### Security Audit Scanner
```bash
# Security vulnerability scanning
security-audit-scanner --scan-dependencies --check-versions

# Plugin security analysis
security-audit-scanner --verify-plugins --sandbox-test
```

## Usage

### Basic Performance Benchmarking

```rust
use optirs_bench::{BenchmarkSuite, OptimizerBenchmark, BenchmarkConfig};
use optirs_core::optimizers::{Adam, SGD};

// Create benchmark configuration
let config = BenchmarkConfig::new()
    .with_iterations(1000)
    .with_warmup_iterations(100)
    .with_dataset_size(10000)
    .with_batch_size(32)
    .build();

// Setup benchmark suite
let mut benchmark_suite = BenchmarkSuite::new()
    .with_config(config)
    .add_optimizer("Adam", Adam::new(0.001))
    .add_optimizer("SGD", SGD::new(0.01))
    .build()?;

// Run benchmarks
let results = benchmark_suite.run().await?;

// Generate report
results.generate_report("benchmark_results.html")?;
results.print_summary();
```

### Memory Profiling

```rust
use optirs_bench::{MemoryProfiler, AllocationTracker};
use optirs_core::optimizers::Adam;
use std::time::Duration;

// Setup memory profiling
let mut profiler = MemoryProfiler::new()
    .with_sampling_rate(Duration::from_millis(100))
    .with_stack_trace_depth(10)
    .build()?;

// Start profiling
profiler.start_profiling()?;

// Your optimization code here
let mut optimizer = Adam::new(0.001);
for epoch in 0..100 {
    // Training loop
    optimizer.step(&mut params, &grads).await?;
    
    // Record memory usage
    profiler.record_memory_snapshot(&format!("epoch_{}", epoch))?;
}

// Stop profiling and generate report
let memory_report = profiler.stop_and_generate_report()?;
memory_report.save_to_file("memory_profile.json")?;
```

### Regression Detection

```rust
use optirs_bench::{RegressionDetector, PerformanceBaseline, StatisticalTest};

// Load performance baseline
let baseline = PerformanceBaseline::from_file("baseline_v1.0.0.json")?;

// Setup regression detector
let detector = RegressionDetector::new()
    .with_baseline(baseline)
    .with_significance_threshold(0.05)
    .with_effect_size_threshold(0.1)
    .with_statistical_test(StatisticalTest::WelchTTest)
    .build()?;

// Run current performance tests
let current_results = run_performance_tests().await?;

// Check for regressions
let regression_analysis = detector.analyze(&current_results)?;

if regression_analysis.has_regressions() {
    println!("Performance regressions detected:");
    for regression in regression_analysis.regressions() {
        println!("  - {}: {:.2}% slower (p-value: {:.4})", 
                 regression.metric_name, 
                 regression.performance_delta * 100.0,
                 regression.p_value);
    }
}
```

### System Resource Monitoring

```rust
use optirs_bench::{SystemMonitor, ResourceAlert};
use std::time::Duration;

// Setup system monitoring
let monitor = SystemMonitor::new()
    .with_sampling_interval(Duration::from_secs(1))
    .monitor_cpu(true)
    .monitor_memory(true)
    .monitor_gpu(true)
    .monitor_disk_io(true)
    .monitor_network_io(true)
    .build()?;

// Configure alerts
let alerts = ResourceAlert::new()
    .cpu_threshold(90.0)  // Alert if CPU usage > 90%
    .memory_threshold(8_000_000_000)  // Alert if memory usage > 8GB
    .gpu_memory_threshold(0.95)  // Alert if GPU memory > 95%
    .build();

monitor.set_alerts(alerts);

// Start monitoring
let monitoring_handle = monitor.start_monitoring().await?;

// Your optimization workload
run_training_workload().await?;

// Stop monitoring and get report
let resource_report = monitor.stop_and_report().await?;
resource_report.save_to_file("resource_usage.json")?;
```

### Comparative Analysis

```rust
use optirs_bench::{ComparativeAnalysis, OptimizerComparison, StatisticalComparison};

// Compare multiple optimizers
let comparison = ComparativeAnalysis::new()
    .add_optimizer("Adam", adam_results)
    .add_optimizer("SGD", sgd_results)
    .add_optimizer("AdamW", adamw_results)
    .with_metrics(&["convergence_speed", "final_accuracy", "memory_usage"])
    .build()?;

// Perform statistical comparison
let statistical_analysis = comparison.statistical_comparison()?;

// Generate visualization
comparison.generate_comparison_plots("optimizer_comparison.html")?;

// Print summary
for result in statistical_analysis.significant_differences() {
    println!("{} vs {}: {} is significantly better (p < 0.05)", 
             result.optimizer_a, 
             result.optimizer_b, 
             result.better_performer);
}
```

## Security Auditing

### Dependency Scanning

```rust
use optirs_bench::security::{SecurityAuditor, VulnerabilityScanner, VulnerabilityDB, Severity};

// Setup security auditor
let auditor = SecurityAuditor::new()
    .with_vulnerability_database(VulnerabilityDB::latest())
    .with_severity_threshold(Severity::Medium)
    .build()?;

// Scan dependencies
let scan_results = auditor.scan_dependencies().await?;

if scan_results.has_vulnerabilities() {
    println!("Security vulnerabilities found:");
    for vuln in scan_results.vulnerabilities() {
        println!("  - {}: {} ({})", 
                 vuln.crate_name, 
                 vuln.description, 
                 vuln.severity);
    }
}

// Generate security report
scan_results.generate_security_report("security_audit.html")?;
```

## Continuous Integration

### GitHub Actions

```yaml
name: Performance Benchmarks

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      
      - name: Run benchmarks
        run: |
          cargo run --bin optirs-bench -- --output benchmark_results.json
          
      - name: Check for regressions
        run: |
          cargo run --bin performance-regression-detector -- \
            --baseline benchmark_baseline.json \
            --current benchmark_results.json \
            --fail-on-regression
```

### Jenkins Pipeline

```groovy
pipeline {
    agent any
    
    stages {
        stage('Build') {
            steps {
                sh 'cargo build --release'
            }
        }
        
        stage('Benchmark') {
            steps {
                sh 'cargo run --bin optirs-bench -- --ci-mode'
                archiveArtifacts 'benchmark_results.json'
            }
        }
        
        stage('Regression Check') {
            steps {
                script {
                    def regressionCheck = sh(
                        script: 'cargo run --bin performance-regression-detector',
                        returnStatus: true
                    )
                    if (regressionCheck != 0) {
                        error "Performance regression detected!"
                    }
                }
            }
        }
    }
}
```

## Configuration

### Benchmark Configuration File

```yaml
# bench_config.yaml
benchmark:
  iterations: 1000
  warmup_iterations: 100
  timeout: 300  # seconds
  
optimizers:
  - name: "Adam"
    learning_rate: 0.001
    beta1: 0.9
    beta2: 0.999
  - name: "SGD"
    learning_rate: 0.01
    momentum: 0.9
    
datasets:
  - name: "CIFAR-10"
    size: 50000
    batch_size: 32
  - name: "ImageNet"
    size: 1281167
    batch_size: 64
    
monitoring:
  sample_rate: 1000  # milliseconds
  metrics:
    - cpu_usage
    - memory_usage
    - gpu_utilization
    - disk_io
    
regression_detection:
  significance_threshold: 0.05
  effect_size_threshold: 0.1
  baseline_file: "baseline.json"
```
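
As a hedged sketch of how this configuration might map onto Rust types: the struct and field names below are assumptions mirroring the YAML keys, not the crate's actual config API, and a real loader would typically deserialize the file with a crate such as `serde_yaml` rather than hand-coding defaults.

```rust
// Hypothetical types mirroring bench_config.yaml; names are illustrative.
#[derive(Debug, Clone, PartialEq)]
struct BenchmarkSettings {
    iterations: u32,
    warmup_iterations: u32,
    timeout_secs: u64, // "timeout: 300  # seconds"
}

#[derive(Debug, Clone, PartialEq)]
struct RegressionSettings {
    significance_threshold: f64,
    effect_size_threshold: f64,
    baseline_file: String,
}

impl Default for BenchmarkSettings {
    fn default() -> Self {
        // Defaults taken from the YAML example above.
        Self {
            iterations: 1000,
            warmup_iterations: 100,
            timeout_secs: 300,
        }
    }
}

impl Default for RegressionSettings {
    fn default() -> Self {
        Self {
            significance_threshold: 0.05,
            effect_size_threshold: 0.1,
            baseline_file: "baseline.json".to_string(),
        }
    }
}

fn main() {
    let bench = BenchmarkSettings::default();
    let regression = RegressionSettings::default();
    println!("{:?}\n{:?}", bench, regression);
}
```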

## Platform Support

| Platform | CPU Profiling | GPU Profiling  | Memory Profiling | Security Scanning |
|----------|---------------|----------------|------------------|-------------------|
| Linux    | ✅            | ✅ (CUDA/ROCm) | ✅               | ✅                |
| macOS    | ✅            | ✅ (Metal)     | ✅               | ✅                |
| Windows  | ✅            | ✅ (CUDA/DX)   | ✅               | ✅                |
| Web      | ⚠️ (Limited)  | ❌             | ⚠️ (Limited)     | ⚠️ (Limited)      |

## Contributing

OptiRS follows the Cool Japan organization's development standards. See the main OptiRS repository for contribution guidelines.

## License

This project is licensed under the Apache License, Version 2.0.