//! # Performance Benchmarking Utilities
//!
//! This module provides utilities for measuring and comparing build times
//! for performance optimization and validation. It's designed to help
//! developers understand the performance impact of different build
//! configurations and optimizations.
//!
//! ## Key Features
//!
//! - **Precise timing**: High-resolution timing with `std::time::Instant`
//! - **Error handling**: Measures time even when operations fail
//! - **Comparison utilities**: Easy A/B testing of different approaches
//! - **Low overhead**: Minimal performance impact on measured operations
//!
//! ## Usage Examples
//!
//! ### Basic Timing
//!
//! ```rust
//! use frozen_duckdb::benchmark;
//!
//! // Measure a single operation
//! let duration = benchmark::measure_build_time(|| {
//!     // Your build operation here
//!     std::thread::sleep(std::time::Duration::from_millis(100));
//!     Ok(())
//! });
//!
//! println!("Operation took: {:?}", duration);
//! ```
//!
//! ### Comparing Two Approaches
//!
//! ```rust
//! use frozen_duckdb::benchmark;
//!
//! let (time1, time2) = benchmark::compare_build_times(
//!     || {
//!         // First approach (e.g., with pre-built binaries)
//!         std::thread::sleep(std::time::Duration::from_millis(50));
//!         Ok(())
//!     },
//!     || {
//!         // Second approach (e.g., compiling from source)
//!         std::thread::sleep(std::time::Duration::from_millis(200));
//!         Ok(())
//!     },
//! ).unwrap();
//!
//! println!("Approach 1: {:?}", time1);
//! println!("Approach 2: {:?}", time2);
//! println!("Improvement: {:.1}%",
//!     ((time2.as_millis() - time1.as_millis()) as f64 / time2.as_millis() as f64) * 100.0);
//! ```
//!
//! ## Performance Characteristics
//!
//! - **Timing precision**: Microsecond-level accuracy
//! - **Overhead**: <1μs per measurement
//! - **Memory usage**: Minimal, no allocations during timing
//! - **Thread safety**: Safe for concurrent use
//!
//! ## Best Practices
//!
//! 1. **Warm up**: Run operations multiple times to account for JIT/caching
//! 2. **Multiple runs**: Average results over several iterations
//! 3. **Consistent environment**: Run benchmarks in controlled conditions
//! 4. **Statistical significance**: Use proper statistical analysis for comparisons

use anyhow::Result;
use std::time::Instant;

/// Measures the execution time of a build operation with high precision.
///
/// This function provides a simple way to measure how long a build operation
/// takes to complete. It uses `std::time::Instant` for high-resolution timing
/// and measures the time even if the operation fails.
///
/// # Arguments
///
/// * `operation` - A closure that performs the build operation to measure
///
/// # Returns
///
/// A `std::time::Duration` representing the time taken to execute the operation.
/// The duration includes the full execution time, regardless of whether the
/// operation succeeded or failed.
///
/// # Examples
///
/// ```rust
/// use frozen_duckdb::benchmark;
/// use std::time::Duration;
///
/// // Measure a simple operation
/// let duration = benchmark::measure_build_time(|| {
///     std::thread::sleep(Duration::from_millis(100));
///     Ok(())
/// });
///
/// assert!(duration >= Duration::from_millis(100));
/// assert!(duration < Duration::from_millis(200));
///
/// // Measure an operation that fails
/// let duration = benchmark::measure_build_time(|| {
///     std::thread::sleep(Duration::from_millis(50));
///     Err(anyhow::anyhow!("Test error"))
/// });
///
/// // Still measures the time, even though it failed
/// assert!(duration >= Duration::from_millis(50));
/// ```
///
/// # Performance Characteristics
///
/// - **Timing precision**: Microsecond-level accuracy on most platforms
/// - **Overhead**: <1μs per measurement
/// - **Memory usage**: No allocations during timing
/// - **Thread safety**: Safe for concurrent use
///
/// # Best Practices
///
/// 1. **Warm up**: Run the operation once before measuring to account for JIT compilation
/// 2. **Multiple runs**: Average results over several iterations for accuracy
/// 3. **Consistent environment**: Run benchmarks in controlled conditions
/// 4. **Statistical analysis**: Use proper statistical methods for comparisons
pub fn measure_build_time<F>(operation: F) -> std::time::Duration
where
    F: FnOnce() -> Result<()>,
{
    let start = Instant::now();
    // The operation's result is intentionally discarded: the elapsed time
    // is measured whether or not the operation succeeds.
    let _ = operation();
    start.elapsed()
}

/// Compares the execution times of two different build operations.
///
/// This function is useful for A/B testing different build approaches,
/// such as comparing pre-built binaries vs. compiling from source.
/// It measures both operations and returns their execution times.
///
/// # Arguments
///
/// * `operation1` - The first build operation to measure
/// * `operation2` - The second build operation to measure
///
/// # Returns
///
/// A `Result` containing a tuple of `(Duration, Duration)`: the execution
/// times of the first and second operations, respectively. Although the
/// return type is `Result`, the function currently always returns `Ok`,
/// even if the operations themselves fail.
///
/// # Examples
///
/// ```rust
/// use frozen_duckdb::benchmark;
/// use std::time::Duration;
///
/// let (time1, time2) = benchmark::compare_build_times(
///     || {
///         // Fast operation (e.g., using pre-built binaries)
///         std::thread::sleep(Duration::from_millis(50));
///         Ok(())
///     },
///     || {
///         // Slow operation (e.g., compiling from source)
///         std::thread::sleep(Duration::from_millis(200));
///         Ok(())
///     },
/// ).unwrap();
///
/// assert!(time2 > time1);
/// assert!(time1 >= Duration::from_millis(50));
/// assert!(time2 >= Duration::from_millis(200));
///
/// // Calculate improvement percentage
/// let improvement = ((time2.as_millis() - time1.as_millis()) as f64 / time2.as_millis() as f64) * 100.0;
/// println!("Improvement: {:.1}%", improvement);
/// ```
///
/// # Performance Analysis
///
/// This function is designed for performance comparison scenarios:
///
/// - **Build optimization**: Compare different build configurations
/// - **Binary vs source**: Measure impact of pre-built vs compiled binaries
/// - **Architecture comparison**: Test performance across different architectures
/// - **Dependency analysis**: Understand impact of different dependencies
///
/// # Statistical Considerations
///
/// For meaningful comparisons, consider:
///
/// 1. **Multiple runs**: Run comparisons multiple times and average results
/// 2. **Warm-up**: Allow system to stabilize before measurements
/// 3. **Consistent conditions**: Ensure same system load and environment
/// 4. **Outlier detection**: Remove anomalous measurements from analysis
/// 5. **Confidence intervals**: Calculate statistical significance of differences
pub fn compare_build_times<F1, F2>(
    operation1: F1,
    operation2: F2,
) -> Result<(std::time::Duration, std::time::Duration)>
where
    F1: FnOnce() -> Result<()>,
    F2: FnOnce() -> Result<()>,
{
    // Each operation is timed independently; failures do not abort the
    // comparison, so this always returns `Ok`. The `Result` wrapper leaves
    // room for future validation.
    let duration1 = measure_build_time(operation1);
    let duration2 = measure_build_time(operation2);
    Ok((duration1, duration2))
}