# Quantum Pulse
A lightweight, customizable profiling library for Rust applications with support for custom categories and percentile statistics.
## Features

- **True Zero-Cost Abstraction** - Stub implementation compiles to nothing when disabled
- **Derive Macro Support** - Automatic implementation with `#[derive(ProfileOp)]`
- **Percentile Statistics** - Automatic calculation of p50, p95, p99, and p99.9 percentiles using HDR histograms
- **Type-Safe Categories** - Define your own operation categories with compile-time guarantees
- **Multiple Output Formats** - Console and CSV export options
- **Pausable Timers** - Exclude specific periods from measurements
- **Clean API** - Same interface whether profiling is enabled or disabled
- **Async Support** - Full support for async/await patterns
- **No Conditionals Required** - Use the same code for both production and development
## Installation

Add this to your `Cargo.toml`:
```toml
[dependencies]

# For production builds (zero overhead)
quantum-pulse = { version = "0.1.5", default-features = false }

# For development builds (with profiling and macros)
quantum-pulse = { version = "0.1.5", features = ["full"] }
```
Or use feature flags in your application:

```toml
[dependencies]
quantum-pulse = { version = "0.1.5", default-features = false }

[features]
# The feature name below is illustrative; it forwards to quantum-pulse's "full" feature.
profiling = ["quantum-pulse/full"]
```
## Quick Start

### Recommended: Using the Derive Macro
The easiest and most maintainable way to use `quantum-pulse` is with the `ProfileOp` derive macro: simply derive `ProfileOp` on your operation enum and add category attributes.
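A minimal sketch of what this can look like (the enum, the attribute syntax, and the `profile!` invocation below are assumptions based on this README, not verbatim crate API):

```rust
use quantum_pulse::{profile, ProfileOp};

// Hypothetical operation enum; the attribute syntax is illustrative.
#[derive(ProfileOp)]
enum AppOp {
    #[profile(category = "Database")]
    FetchUser,
    #[profile(category = "Api")]
    HandleRequest,
}

fn handle_request() -> u64 {
    // Wrap the code to be timed; the macro form (operation + block) is assumed.
    profile!(AppOp::HandleRequest, {
        expensive_operation()
    })
}

fn expensive_operation() -> u64 {
    42
}
```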
### Category Management

The `ProfileOp` macro intelligently manages categories: variants that share a category attribute are grouped together in reports.
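For example (again a sketch; the attribute syntax is assumed), several variants can share one category while others report under their own:

```rust
use quantum_pulse::ProfileOp;

// Hypothetical enum: both row operations report under "Database",
// while the cache lookup reports under "Cache".
#[derive(ProfileOp)]
enum StorageOp {
    #[profile(category = "Database")]
    ReadRow,
    #[profile(category = "Database")]
    WriteRow,
    #[profile(category = "Cache")]
    CacheLookup,
}
```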
### Alternative: Manual Implementation

For advanced use cases, or when you prefer explicit control, you can define custom categories and implement the `Operation` trait manually.
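A rough sketch of the idea; the `Operation` trait's actual items may differ, so treat the method names here as placeholders:

```rust
use quantum_pulse::Operation; // assumed import path

// Operations to be profiled (illustrative).
enum TradingOp {
    ParseTick,
    SubmitOrder,
}

// Hypothetical trait shape: an explicit name and category per operation.
impl Operation for TradingOp {
    fn name(&self) -> &'static str {
        match self {
            TradingOp::ParseTick => "parse_tick",
            TradingOp::SubmitOrder => "submit_order",
        }
    }

    fn category(&self) -> &'static str {
        match self {
            TradingOp::ParseTick => "market_data",
            TradingOp::SubmitOrder => "order_entry",
        }
    }
}
```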
## Advanced Features

### Async Support

The same `profile!` API works inside `async` functions:
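A sketch of async usage (the `profile!` form shown here, a name plus a block, is an assumption based on this README):

```rust
use quantum_pulse::profile;

// Illustrative async workload.
async fn process_batch(items: Vec<u64>) -> u64 {
    profile!("process_batch", {
        let mut total = 0;
        for item in items {
            total += lookup(item).await;
        }
        total
    })
}

async fn lookup(x: u64) -> u64 {
    // Placeholder for real async work (e.g. a database call).
    x * 2
}
```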
### Complex Enum Variants

The `ProfileOp` macro supports all enum variant types: unit, tuple, and struct variants alike.
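A sketch under the same assumptions as above (attribute syntax is illustrative):

```rust
use quantum_pulse::ProfileOp;

// Illustrative enum mixing unit, tuple, and struct variants.
#[derive(ProfileOp)]
enum RequestOp {
    // Unit variant
    #[profile(category = "Api")]
    HealthCheck,
    // Tuple variant
    #[profile(category = "Api")]
    FetchById(u64),
    // Struct variant
    #[profile(category = "Api")]
    Search { query: String, limit: usize },
}
```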
### Report Generation

```rust
// NOTE: several identifiers below (builder/type names, struct fields, format
// strings) are reconstructions; the crate's exact API may differ.
use quantum_pulse::{get_all_stats, get_summary, ReportBuilder, TimeFormat};
use std::fmt::Write; // for write! into a String

// Quick summary
let summary = get_summary();
println!("Total operations: {}", summary.total_operations);
println!("Total time: {:?}", summary.total_time);

// Detailed report with configuration
let report = ReportBuilder::new()
    .include_percentiles(true)
    .group_by_category(true)
    .time_format(TimeFormat::Micros)
    .build();
println!("{report}");

// Export to CSV
let stats = get_all_stats();
let mut csv = String::from("operation,count,p50,p95,p99\n");
for (op, stat) in stats {
    write!(csv, "{:?},{},{},{},{}\n", op, stat.count, stat.p50, stat.p95, stat.p99).unwrap();
}
```
### Pausable Timers

For operations where you need to exclude certain periods from the measurement:
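One way this can look, using the `pause!`/`unpause!` macros described below (macro arguments and helpers here are illustrative):

```rust
use quantum_pulse::{pause, profile, unpause};

fn import_file(path: &str) -> usize {
    profile!("import_file", {
        let data = std::fs::read_to_string(path).unwrap_or_default();

        // Exclude the interactive confirmation from the measurement.
        pause!();
        wait_for_user_confirmation();
        unpause!();

        data.lines().count()
    })
}

fn wait_for_user_confirmation() {
    // Placeholder for an excluded period (user input, a sleep, etc.).
}
```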
## Zero-Cost Abstractions

Quantum Pulse implements true zero-cost abstractions through compile-time feature selection.

### How It Works
```rust
// Your code always looks the same (the operation name here is a placeholder)
let result = profile!(MyOp::Expensive, {
    expensive_operation()
});

// With default features (stub mode):
// - profile! macro expands to just the code block
// - No timing, no allocations, no overhead
// - Compiler optimizes it to: let result = expensive_operation();

// With the "full" feature enabled:
// - Full profiling with timing and statistics
// - HDR histograms for accurate percentiles
// - Comprehensive reporting
```
### Performance Characteristics

| Configuration | Overhead | Use Case |
|---|---|---|
| Stub (default) | Zero - methods are empty and inlined away | Production |
| Full | ~200-300ns per operation | Development, debugging |
## Pause/Unpause Profiling

Control profiling dynamically with the `pause!()` and `unpause!()` macros:
```rust
// NOTE: the profile! arguments below are placeholders; the exact calls may differ.
use quantum_pulse::{pause, profile, unpause};

// Normal profiling - operations are recorded
profile!("step_one", { do_work() });

// Pause all profiling
pause!();

// This won't be recorded
profile!("step_two", { do_work() });

// Resume profiling
unpause!();

// This will be recorded again
profile!("step_three", { do_work() });
```
### Stack-Based Pause/Unpause

For fine-grained control, pause only the timers currently on the call stack with `pause_stack!()` and `unpause_stack!()`. For example, you can profile data processing while excluding I/O wait time, as sketched below.
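The pattern might look like this (macro arguments and helpers are illustrative):

```rust
use quantum_pulse::{pause_stack, profile, unpause_stack};

fn process_records(paths: &[String]) -> usize {
    // Profile data processing, but exclude I/O wait time.
    profile!("process_records", {
        let mut processed = 0;
        for path in paths {
            // Suspend only the timers currently on this call stack
            // while we wait on the filesystem.
            pause_stack!();
            let data = std::fs::read_to_string(path).unwrap_or_default();
            unpause_stack!();

            // CPU-bound work is measured.
            processed += data.lines().count();
        }
        processed
    })
}
```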
Key differences:

- `pause!()` / `unpause!()` - Affects all profiling globally
- `pause_stack!()` / `unpause_stack!()` - Affects only timers currently on the call stack
### Use Cases

**Global Pause/Unpause:**
- Exclude initialization/cleanup from performance measurements
- Focus profiling on specific sections during debugging
- Reduce overhead during non-critical operations
- Selective measurement in loops or batch operations
**Stack-Based Pause/Unpause:**
- Exclude I/O wait time from algorithm profiling
- Measure only CPU-bound work in mixed operations
- Exclude network latency from processing metrics
- Fine-grained control without affecting concurrent operations
- Conditional profiling based on runtime conditions
## Migration Guide

### From String-based Profiling

If you're currently using string-based operation names, migrate to type-safe enums:
```rust
// NOTE: the argument forms shown here are illustrative.

// Before: String-based (error-prone, no compile-time checks)
profile!("fetch_user", { fetch_user(id) });

// After: Type-safe with ProfileOp (recommended)
profile!(DbOp::FetchUser, { fetch_user(id) });
```
### From Manual Implementation

If you have existing manual `Operation` implementations, you can gradually migrate: keep the manual implementations where they still serve you, and simply add the `ProfileOp` derive to new operation enums.
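A before/after sketch (trait items and attribute syntax assumed, as in the earlier sketches):

```rust
use quantum_pulse::ProfileOp;

// Before: manual implementation (hypothetical trait shape).
//
// impl Operation for CacheOp {
//     fn name(&self) -> &'static str { /* ... */ }
//     fn category(&self) -> &'static str { /* ... */ }
// }

// After: simply add the ProfileOp derive and category attributes.
#[derive(ProfileOp)]
enum CacheOp {
    #[profile(category = "Cache")]
    Lookup,
    #[profile(category = "Cache")]
    Insert,
}
```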
## Examples

Check out the `examples/` directory for comprehensive examples:

- `macro_derive.rs` - Recommended: using the ProfileOp derive macro
- `basic.rs` - Simple profiling example
- `custom_categories.rs` - Manual category implementation
- `async_profiling.rs` - Profiling async code
- `trading_system.rs` - Real-world trading system example
Run examples with:

```bash
# Recommended: See the derive macro in action
cargo run --example macro_derive --features full

# Other examples
cargo run --example basic --features full
```
Feature Flags
full
: Enable full profiling functionality with HDR histograms and derive macrosmacros
: Enable only the derive macros (included infull
)- Default (no features): Stub implementation with zero overhead
## Best Practices

- **Use ProfileOp Derive**: Start with the derive macro for cleaner, more maintainable code
- **Organize by Category**: Group related operations under the same category name
- **Descriptive Names**: Use clear, descriptive names for both categories and operations
- **Profile Boundaries**: Profile at meaningful boundaries (API calls, database queries, etc.)
- **Avoid Over-Profiling**: Don't profile every function; focus on potential bottlenecks
## Performance Considerations

The library is designed with performance in mind:

- **True Zero-Cost**: Stub implementations are completely removed by the compiler
- **Efficient Percentiles**: Using HDR histograms for O(1) percentile calculations
- **Lock-Free Operations**: Using atomic operations and thread-local storage
- **Smart Inlining**: Critical paths marked with `#[inline(always)]` in stub mode
- **No Runtime Checks**: Feature selection happens at compile time
## Benchmarks

Run benchmarks with:
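For example:

```bash
# Benchmarks typically require the full profiling feature (flag shown is an assumption)
cargo bench --features full
```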
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under either of

- Apache License, Version 2.0 (LICENSE-APACHE or https://apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or https://opensource.org/licenses/MIT)

at your option.
## Acknowledgments
This library was designed for high-performance applications requiring microsecond-precision profiling with minimal overhead and maximum ergonomics.