Profiling-guided optimization for adaptive performance tuning.
This module provides runtime profiling and adaptive optimization:
- Profile collection: Gather execution statistics during runtime
- Hotspot detection: Identify performance bottlenecks (see the sketch after this list)
- Adaptive optimization: Adjust strategy based on observed behavior
- A/B testing: Compare optimization strategies
- Auto-tuning: Automatically select best configurations
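Profile collection and hotspot detection boil down to aggregating per-node timings and flagging the nodes that dominate total observed time. The sketch below illustrates that idea only; `NodeStats`, `record`, and `hotspots` are made-up names for this example and are not part of this crate's `ExecutionProfile` or `Hotspot` API.

```rust
use std::collections::HashMap;
use std::time::Duration;

// Accumulated timing for one graph node. Illustrative layout only,
// not the crate's `ExecutionProfile` or `Hotspot` type.
#[derive(Default)]
struct NodeStats {
    total: Duration,
    runs: u32,
}

// Record one observed execution of a node.
fn record(profile: &mut HashMap<String, NodeStats>, node: &str, elapsed: Duration) {
    let stats = profile.entry(node.to_string()).or_default();
    stats.total += elapsed;
    stats.runs += 1;
}

// Flag nodes that account for more than `threshold` (a fraction, e.g. 0.1)
// of all observed time.
fn hotspots(profile: &HashMap<String, NodeStats>, threshold: f64) -> Vec<String> {
    let total: f64 = profile.values().map(|s| s.total.as_secs_f64()).sum();
    if total == 0.0 {
        return Vec::new();
    }
    profile
        .iter()
        .filter(|(_, s)| s.total.as_secs_f64() / total > threshold)
        .map(|(name, _)| name.clone())
        .collect()
}
```

Using a relative threshold (a fraction of total time rather than an absolute duration) keeps the hotspot list stable as batch sizes and absolute runtimes vary.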
§Example
use tensorlogic_infer::{ProfilingOptimizer, OptimizationGoal, TuningConfig};

// Create a profiling-guided optimizer that targets latency and tunes itself.
let mut optimizer = ProfilingOptimizer::new()
    .with_goal(OptimizationGoal::MinimizeLatency)
    .with_tuning_enabled(true);

// Execute with profiling. `graph` and `dataset` are assumed to be defined
// elsewhere (a compiled computation graph and an iterator of input batches).
for batch in dataset {
    let result = optimizer.execute_and_profile(&graph, &batch)?;

    // The optimizer adapts automatically based on observed performance.
    if optimizer.should_reoptimize() {
        optimizer.apply_optimizations(&graph)?;
    }
}

// Get the optimization report.
let report = optimizer.generate_report();
println!("Speedup: {:.2}x", report.speedup);
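A/B testing and auto-tuning amount to running the same workload under competing configurations and keeping whichever one measures best. Below is a minimal, self-contained sketch of that selection loop; `Strategy`, `measure`, and `pick_best` are illustrative stand-ins, not this crate's `OptimizationStrategy`, `TuningConfig`, or `ProfilingOptimizer` API.

```rust
use std::time::{Duration, Instant};

// Candidate configurations to compare. Illustrative names only.
#[derive(Clone, Debug)]
enum Strategy {
    Baseline,
    FusedKernels,
}

// Time one run of a workload under a given strategy.
fn measure(strategy: &Strategy, workload: impl Fn(&Strategy)) -> Duration {
    let start = Instant::now();
    workload(strategy);
    start.elapsed()
}

// Pick the strategy with the lowest total latency over `trials` runs:
// the essence of the A/B-testing / auto-tuning loop described above.
fn pick_best(candidates: &[Strategy], trials: u32, workload: impl Fn(&Strategy)) -> Strategy {
    candidates
        .iter()
        .min_by_key(|s| {
            (0..trials)
                .map(|_| measure(s, &workload))
                .sum::<Duration>()
        })
        .cloned()
        .expect("at least one candidate strategy")
}
```

Repeated trials matter here: a single noisy measurement can easily select the wrong strategy, which is why the comparison sums latency over several runs per candidate.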
Structs§
- ExecutionProfile - Execution profile for a single run.
- Hotspot - Hotspot in the computation graph.
- OptimizationReport - Optimization report.
- OptimizationStrategy - Optimization strategy configuration.
- ProfilingOptimizer - Profiling-guided optimizer.
- TuningConfig - Tuning configuration.
Enums§
- OptimizationGoal - Optimization goal.
- ProfilingOptimizerError - Profiling-guided optimization errors.