# OptiRS Bench
Benchmarking, profiling, and performance analysis tools for the OptiRS machine learning optimization library.
## Overview
OptiRS-Bench provides comprehensive benchmarking and performance analysis capabilities for the OptiRS ecosystem. This crate includes tools for measuring optimization performance, detecting performance regressions, monitoring system resources, and ensuring the reliability and security of optimization workloads.
## Features

- **Performance Benchmarking**: Comprehensive optimization performance measurement
- **Regression Detection**: Automated detection of performance regressions
- **Memory Profiling**: Memory usage analysis and leak detection
- **System Monitoring**: Real-time system resource monitoring
- **Security Auditing**: Security analysis of optimization pipelines
- **Cross-Platform Support**: Benchmarking across different platforms and hardware
- **Continuous Integration**: Integration with CI/CD pipelines for automated testing
- **Comparative Analysis**: Side-by-side comparison of optimization strategies
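To make the leak-detection idea above concrete, here is a minimal std-only sketch (illustrative only, not the OptiRS-Bench implementation): a counting `#[global_allocator]` tracks bytes allocated versus freed, so "outstanding" bytes that never get released show up as potential leaks.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Counting allocator: records bytes allocated and freed so that
// outstanding (possibly leaked) heap bytes can be reported.
struct CountingAllocator;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);
static DEALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        System.alloc(layout)
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        DEALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static GLOBAL: CountingAllocator = CountingAllocator;

// Heap bytes allocated but not yet freed.
fn outstanding_bytes() -> usize {
    ALLOCATED.load(Ordering::Relaxed) - DEALLOCATED.load(Ordering::Relaxed)
}

fn main() {
    let before = outstanding_bytes();
    // Deliberately leak a 1 KiB buffer.
    std::mem::forget(vec![0u8; 1024]);
    let after = outstanding_bytes();
    println!("outstanding heap bytes grew by {}", after - before);
}
```

A real profiler additionally samples call stacks at allocation sites, which is what makes leak reports actionable.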
## Benchmarking Tools

### Performance Measurement

- **Throughput Analysis**: Operations per second measurement
- **Latency Profiling**: Step-by-step timing analysis
- **Convergence Tracking**: Optimization convergence rate measurement
- **Resource Utilization**: CPU, memory, and GPU usage monitoring
- **Scalability Testing**: Performance across different problem sizes
- **Hardware-Specific Benchmarks**: Platform-optimized performance tests
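To make "operations per second" and "mean latency" concrete, here is a minimal std-only harness sketch (the `benchmark` helper and the toy dot-product workload are illustrative, not part of the crate API): warm-up iterations run untimed, then the timed loop yields both throughput and per-operation latency.

```rust
use std::time::Instant;

// Run `op` for `warmup` untimed iterations, then time `iters` iterations
// and return (operations per second, mean latency in microseconds).
fn benchmark<F: FnMut()>(mut op: F, warmup: usize, iters: usize) -> (f64, f64) {
    for _ in 0..warmup {
        op();
    }
    let start = Instant::now();
    for _ in 0..iters {
        op();
    }
    let elapsed = start.elapsed().as_secs_f64();
    (iters as f64 / elapsed, elapsed / iters as f64 * 1e6)
}

fn main() {
    // Toy workload standing in for a single optimizer step.
    let a: Vec<f64> = (0..10_000).map(|i| i as f64).collect();
    let b = a.clone();
    let mut acc = 0.0;
    let (ops_per_sec, mean_us) = benchmark(
        || acc += a.iter().zip(&b).map(|(x, y)| x * y).sum::<f64>(),
        100,
        1_000,
    );
    println!("throughput: {:.0} ops/s, mean latency: {:.2} us", ops_per_sec, mean_us);
    assert!(acc > 0.0); // keep the workload from being optimized away
}
```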
### Regression Detection

- **Automated Testing**: Continuous performance regression detection
- **Statistical Analysis**: Statistical significance testing for performance changes
- **Threshold Monitoring**: Configurable performance degradation alerts
- **Historical Tracking**: Long-term performance trend analysis
- **Bisection Analysis**: Automated identification of regression-causing changes
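The statistical side of regression detection can be sketched in a few lines of std-only Rust (a simplified illustration, not the detector's actual algorithm): combine a relative effect-size threshold with a rough significance check based on the standard error of the difference between baseline and current step times.

```rust
// Mean and unbiased sample variance of a sample of step times.
fn mean_and_var(xs: &[f64]) -> (f64, f64) {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0);
    (mean, var)
}

// Flag a regression when the current mean is slower than the baseline by
// more than `effect_size` (relative) AND the gap exceeds ~2 standard
// errors of the difference (a rough 95% significance proxy).
fn is_regression(baseline: &[f64], current: &[f64], effect_size: f64) -> bool {
    let (mb, vb) = mean_and_var(baseline);
    let (mc, vc) = mean_and_var(current);
    let se = (vb / baseline.len() as f64 + vc / current.len() as f64).sqrt();
    (mc - mb) / mb > effect_size && (mc - mb) > 2.0 * se
}

fn main() {
    let baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.3, 10.0]; // ms/step
    let current = [11.5, 11.8, 11.6, 11.4, 11.7, 11.5, 11.9, 11.6];
    println!("regression detected: {}", is_regression(&baseline, &current, 0.1));
}
```

Requiring both conditions avoids alerting on changes that are statistically detectable but practically negligible, and vice versa.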
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
optirs-bench = "0.1.0"
optirs-core = "0.1.1"  # Required foundation
```
### Feature Selection

Enable specific benchmarking features:

```toml
[dependencies]
optirs-bench = { version = "0.1.0", features = ["profiling", "regression_detection", "security_auditing"] }
```

Available features:

- `profiling`: Memory and performance profiling tools (enabled by default)
- `regression_detection`: Automated regression detection
- `security_auditing`: Security analysis tools
- `ci_integration`: Continuous integration support
## Command-Line Tools

OptiRS-Bench includes several command-line utilities:

### Main Benchmarking Tool

```bash
# Run comprehensive benchmark suite
cargo run --bin optirs-bench -- --output benchmark_results.json

# Compare multiple optimizers
cargo run --bin optirs-bench -- --optimizers adam,sgd

# Hardware-specific benchmarking
cargo run --bin optirs-bench -- --target-hardware gpu
```
### Performance Regression Detector

```bash
# Detect performance regressions
cargo run --bin performance-regression-detector -- \
    --baseline benchmark_baseline.json \
    --current benchmark_results.json

# Continuous monitoring
cargo run --bin performance-regression-detector -- --watch
```
### Memory Leak Reporter

```bash
# Memory leak detection
cargo run --bin memory-leak-reporter

# Generate memory usage report
cargo run --bin memory-leak-reporter -- --output memory_report.html
```
### Security Audit Scanner

```bash
# Security vulnerability scanning
cargo run --bin security-audit-scanner

# Plugin security analysis
cargo run --bin security-audit-scanner -- --scan-plugins
```
## Usage

### Basic Performance Benchmarking

```rust
use optirs_bench::{BenchmarkConfig, BenchmarkSuite};
use optirs_core::optimizers::{Adam, SGD};

// Create benchmark configuration
let config = BenchmarkConfig::new()
    .with_iterations(1000)
    .with_warmup_iterations(100)
    .with_dataset_size(50_000)
    .with_batch_size(32)
    .build();

// Setup benchmark suite
let mut benchmark_suite = BenchmarkSuite::new()
    .with_config(config)
    .add_optimizer("Adam", Adam::new(0.001))
    .add_optimizer("SGD", SGD::new(0.01))
    .build()?;

// Run benchmarks
let results = benchmark_suite.run().await?;

// Generate report
results.generate_report("benchmark_report.html")?;
results.print_summary();
```
### Memory Profiling

```rust
use optirs_bench::MemoryProfiler;

// Setup memory profiling
let mut profiler = MemoryProfiler::new()
    .with_sampling_rate(100)
    .with_stack_trace_depth(16)
    .build()?;

// Start profiling
profiler.start_profiling()?;

// Your optimization code here
let mut optimizer = Adam::new(0.001);
for epoch in 0..100 {
    // ... training steps ...
}

// Stop profiling and generate report
let memory_report = profiler.stop_and_generate_report()?;
memory_report.save_to_file("memory_profile.json")?;
```
### Regression Detection

```rust
use optirs_bench::{PerformanceBaseline, RegressionDetector, StatisticalTest};

// Load performance baseline
let baseline = PerformanceBaseline::from_file("baseline.json")?;

// Setup regression detector
let detector = RegressionDetector::new()
    .with_baseline(baseline)
    .with_significance_threshold(0.05)
    .with_effect_size_threshold(0.1)
    .with_statistical_test(StatisticalTest::MannWhitneyU)
    .build()?;

// Run current performance tests
let current_results = run_performance_tests().await?;

// Check for regressions
let regression_analysis = detector.analyze(&current_results)?;
if regression_analysis.has_regressions() {
    // Report the regressions, or fail the build in CI
}
```
### System Resource Monitoring

```rust
use optirs_bench::{AlertConfig, SystemMonitor};
use std::time::Duration;

// Setup system monitoring
let monitor = SystemMonitor::new()
    .with_sampling_interval(Duration::from_millis(1000))
    .monitor_cpu(true)
    .monitor_memory(true)
    .monitor_gpu(true)
    .monitor_disk_io(true)
    .monitor_network_io(true)
    .build()?;

// Configure alerts
let alerts = AlertConfig::new()
    .cpu_threshold(90.0)        // Alert if CPU usage > 90%
    .memory_threshold(8 * 1024) // Alert if memory usage > 8 GB (in MB)
    .gpu_memory_threshold(95.0) // Alert if GPU memory > 95%
    .build();

monitor.set_alerts(alerts);

// Start monitoring
let monitoring_handle = monitor.start_monitoring().await?;

// Your optimization workload
run_training_workload().await?;

// Stop monitoring and get report
let resource_report = monitor.stop_and_report().await?;
resource_report.save_to_file("resource_report.json")?;
```
### Comparative Analysis

```rust
use optirs_bench::OptimizerComparison;

// Compare multiple optimizers
let comparison = OptimizerComparison::new()
    .add_optimizer("Adam", Adam::new(0.001))
    .add_optimizer("SGD", SGD::new(0.01))
    .add_optimizer("LAMB", LAMB::new(0.001))
    .with_metrics(&["convergence_rate", "throughput", "memory_usage"])
    .build()?;

// Perform statistical comparison
let statistical_analysis = comparison.statistical_comparison()?;

// Generate visualization
comparison.generate_comparison_plots("comparison_plots/")?;

// Print summary
for result in statistical_analysis.significant_differences() {
    println!("{}", result);
}
```
## Security Auditing

### Dependency Scanning

```rust
use optirs_bench::{SecurityAuditor, Severity};

// Setup security auditor
let auditor = SecurityAuditor::new()
    .with_vulnerability_database("rustsec")
    .with_severity_threshold(Severity::Medium)
    .build()?;

// Scan dependencies
let scan_results = auditor.scan_dependencies().await?;
if scan_results.has_vulnerabilities() {
    // Handle findings (fail the build, open an issue, etc.)
}

// Generate security report
scan_results.generate_security_report("security_report.html")?;
```
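The core of dependency scanning is matching locked dependency versions against an advisory database. A std-only sketch of that idea (illustrative; a real scanner parses `Cargo.lock` and queries the RustSec advisory database rather than a hardcoded table):

```rust
// Match each (crate, version) dependency against a table of known
// advisories and collect human-readable findings.
fn vulnerable(deps: &[(&str, &str)], advisories: &[((&str, &str), &str)]) -> Vec<String> {
    deps.iter()
        .filter_map(|dep| {
            advisories
                .iter()
                .find(|(key, _)| key == dep)
                .map(|(_, id)| format!("{} {}: {}", dep.0, dep.1, id))
        })
        .collect()
}

fn main() {
    let deps = [("smallvec", "1.6.0"), ("serde", "1.0.200")];
    // RUSTSEC-2021-0003 is a real advisory against smallvec < 1.6.1.
    let advisories = [(("smallvec", "1.6.0"), "RUSTSEC-2021-0003")];
    for finding in vulnerable(&deps, &advisories) {
        println!("vulnerable dependency: {}", finding);
    }
}
```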
## Continuous Integration

### GitHub Actions

```yaml
name: Performance Benchmarks

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - name: Run benchmarks
        run: |
          cargo run --bin optirs-bench -- --output benchmark_results.json
      - name: Check for regressions
        run: |
          cargo run --bin performance-regression-detector -- \
            --baseline benchmark_baseline.json \
            --current benchmark_results.json \
            --fail-on-regression
```
### Jenkins Pipeline

A minimal `Jenkinsfile` sketch mirroring the GitHub Actions workflow above (stage names are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Benchmark') {
            steps {
                sh 'cargo run --bin optirs-bench -- --output benchmark_results.json'
            }
        }
        stage('Regression Check') {
            steps {
                sh '''cargo run --bin performance-regression-detector -- \
                        --baseline benchmark_baseline.json \
                        --current benchmark_results.json \
                        --fail-on-regression'''
            }
        }
    }
}
```
## Configuration

### Benchmark Configuration File

```yaml
# bench_config.yaml
benchmark:
  iterations: 1000
  warmup_iterations: 100
  timeout: 300  # seconds

optimizers:
  - name: "Adam"
    learning_rate: 0.001
    beta1: 0.9
    beta2: 0.999
  - name: "SGD"
    learning_rate: 0.01
    momentum: 0.9

datasets:
  - name: "CIFAR-10"
    size: 50000
    batch_size: 32
  - name: "ImageNet"
    size: 1281167
    batch_size: 64

monitoring:
  sample_rate: 1000  # milliseconds
  metrics:
    - cpu_usage
    - memory_usage
    - gpu_utilization
    - disk_io

regression_detection:
  significance_threshold: 0.05
  effect_size_threshold: 0.1
  baseline_file: "baseline.json"
```
## Platform Support
| Platform | CPU Profiling | GPU Profiling | Memory Profiling | Security Scanning |
|---|---|---|---|---|
| Linux | ✅ | ✅ (CUDA/ROCm) | ✅ | ✅ |
| macOS | ✅ | ✅ (Metal) | ✅ | ✅ |
| Windows | ✅ | ✅ (CUDA/DX) | ✅ | ✅ |
| Web | ⚠️ (Limited) | ❌ | ⚠️ (Limited) | ⚠️ (Limited) |
## Contributing
OptiRS follows the Cool Japan organization's development standards. See the main OptiRS repository for contribution guidelines.
## License
This project is licensed under either of:
- Apache License, Version 2.0
- MIT License
at your option.