hotpath - real-time Rust performance, memory and data flow profiler
hotpath-rs instruments functions, channels, futures, and streams to quickly find bottlenecks and focus optimizations where they matter most. It provides actionable insights into time, memory, and data flow with minimal setup.
Try the TUI demo via SSH - no installation required:
ssh demo.hotpath.rs
Explore the full documentation at hotpath.rs.
You can use it to produce one-off performance (timing or memory) reports:

or use the live TUI dashboard to monitor real-time performance metrics with debug info:
https://github.com/user-attachments/assets/2e890417-2b43-4b1b-8657-a5ef3b458153
Features
- Zero-cost when disabled - fully gated by a feature flag.
- Low-overhead profiling for both sync and async code.
- Live TUI dashboard - real-time monitoring of performance and data-flow metrics (built with ratatui.rs).
- Static reports for one-off programs - alternatively print profiling summaries without running the TUI.
- Memory allocation tracking - track bytes allocated and allocation counts per function.
- Channel and stream monitoring - instrument channels and streams to track message flow and throughput.
- Futures instrumentation - instrument any future to track poll counts, lifecycle, and resolved values.
- Detailed stats: avg, total time, call count, % of total runtime, and configurable percentiles (p95, p99, etc.).
- Background processing for minimal profiling impact.
- GitHub Actions integration - configure CI to automatically benchmark your program against a base branch for each PR.
Roadmap
- latency, memory method calls tracking
- channels/streams profiling
- process threads monitoring
- futures monitoring
- improved docs on hotpath.rs
- interactive SSH demo
- MCP/LLM interface
- runtime metrics
- hosted backend integration
Quick Demo
Besides the SSH demo, an easy way to quickly try the TUI is to run it in auto-instrumentation mode: the TUI process profiles itself and displays its own performance metrics in real time.
First, install the hotpath CLI with auto-instrumentation enabled, then launch the console, and you'll see timing, memory, and channel usage metrics.
Make sure to reinstall it without the auto-profiling features afterwards so that you can also observe metrics from other programs!
Quick Start
⚠️ Note
This README reflects the latest development on the main branch. For documentation matching the current release, see crates.io - it stays in sync with the published crate.
Add to your Cargo.toml:
[dependencies]
hotpath = "0.9"

[features]
hotpath = ["hotpath/hotpath"]
hotpath-alloc = ["hotpath/hotpath-alloc"]
This configuration ensures the library adds no compile-time or runtime overhead unless explicitly enabled via the hotpath feature. All of the library's dependencies are optional (i.e. not compiled) and all macros are no-ops unless profiling is enabled.
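For example, both build modes work from the same source (the second command matches the Usage section below):

cargo run                        # optional dependencies not compiled, macros are no-ops
cargo run --features=hotpath     # profiling enabled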
Usage
use std::time::Duration;

#[hotpath::measure]
fn sync_function() {
    std::thread::sleep(Duration::from_micros(10));
}

#[hotpath::measure]
async fn async_function() {
    tokio::time::sleep(Duration::from_millis(1)).await;
}

// When using with tokio, place the #[tokio::main] first
// You can configure any percentile between 0 and 100
#[tokio::main]
#[hotpath::main(percentiles = [99])]
async fn main() {
    for _ in 0..100 {
        sync_function();
        async_function().await;
        hotpath::measure_block!("custom_block", std::thread::sleep(Duration::from_micros(10)));
    }
}
Run your program with the hotpath feature:
cargo run --features=hotpath
Output:
[hotpath] Performance summary from basic::main (Total time: 122.13ms):
+-----------------------+-------+---------+---------+----------+---------+
| Function | Calls | Avg | P99 | Total | % Total |
+-----------------------+-------+---------+---------+----------+---------+
| basic::async_function | 100 | 1.16ms | 1.20ms | 116.03ms | 95.01% |
+-----------------------+-------+---------+---------+----------+---------+
| custom_block | 100 | 17.09µs | 39.55µs | 1.71ms | 1.40% |
+-----------------------+-------+---------+---------+----------+---------+
| basic::sync_function | 100 | 16.99µs | 35.42µs | 1.70ms | 1.39% |
+-----------------------+-------+---------+---------+----------+---------+
Live Performance Metrics TUI
hotpath includes a live terminal-based dashboard for real-time monitoring of profiling metrics, including function performance, channel statistics, and stream throughput. This is particularly useful for long-running applications like web servers, where you want to observe performance characteristics while the application is running.
Getting Started with TUI
1. Install the hotpath binary with TUI support (see hotpath.rs for the install command).
2. Start your application with --features=hotpath.
3. In a separate terminal, launch the TUI console.
The TUI will connect to your running application and display real-time profiling metrics with automatic refresh.
HTTP Metrics Server: When profiling is enabled, an HTTP server automatically starts on 127.0.0.1:6770 to expose metrics for the TUI. This server binds to localhost only and requires no authentication.
- HOTPATH_METRICS_PORT - Customize the port (default: 6770)
- HOTPATH_METRICS_SERVER_OFF=true - Disable the server entirely
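For a quick check that the server is up, you can query the /debug endpoint described later in the Debug Helpers section (assuming the default port; other endpoint paths are not listed in this README):

curl http://127.0.0.1:6770/debug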
MCP Server for LLMs
hotpath includes an MCP (Model Context Protocol) server that enables AI agents like Claude to query profiling data in real-time. This allows you to ask questions about your application's performance directly in your AI-assisted development workflow.
Configuration:
Enable the MCP server by adding the hotpath-mcp feature:
[features]
hotpath = ["hotpath/hotpath"]
hotpath-mcp = ["hotpath/hotpath-mcp"]
Run with:
cargo run --features='hotpath,hotpath-mcp'
- Default port: 6771
- Endpoint: http://localhost:6771/mcp
Environment variables:
- HOTPATH_MCP_PORT - Customize the port
- HOTPATH_MCP_AUTH_TOKEN - Optional authentication token
Authentication:
When HOTPATH_MCP_AUTH_TOKEN is set, clients must include the token in the Authorization header. When not set, no authentication is required.
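Clients then attach the token to every request, for example assuming the common Bearer scheme (verify the exact scheme in the docs at hotpath.rs):

curl -H "Authorization: Bearer $HOTPATH_MCP_AUTH_TOKEN" http://localhost:6771/mcp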
Available Tools:
| Tool | Description |
|---|---|
| functions_timing | Execution timing metrics (call count, avg, p50/p95/p99, total) |
| functions_alloc | Memory allocation metrics per function (requires hotpath-alloc) |
| channels | Channel metrics (sent/received counts, queue size, state) |
| streams | Stream metrics (items yielded, state) |
| futures | Future lifecycle metrics (poll counts, state) |
| threads | Thread CPU usage metrics |
| gauges | Gauge metrics (current/min/max values, update count) |
| function_timing_logs(function_name) | Detailed timing logs for a specific function |
| function_alloc_logs(function_name) | Detailed allocation logs for a specific function |
| channel_logs(channel_id) | Message logs for a specific channel |
| stream_logs(stream_id) | Item logs for a specific stream |
| future_logs(future_id) | Poll/completion logs for a specific future |
| gauge_logs(gauge_id) | Value update logs for a specific gauge |
Claude Code Configuration:
Add the server to your MCP configuration (a typical Claude Code entry; the exact shape may vary between Claude Code versions):
{
  "mcpServers": {
    "hotpath": {
      "type": "http",
      "url": "http://localhost:6771/mcp"
    }
  }
}
With authentication:
{
  "mcpServers": {
    "hotpath": {
      "type": "http",
      "url": "http://localhost:6771/mcp",
      "headers": {
        "Authorization": "Bearer <token>"
      }
    }
  }
}
Usage Workflow:
1. Start your application with the MCP feature enabled: cargo run --features='hotpath,hotpath-mcp'
2. The MCP server starts on port 6771
3. Configure Claude Code to connect (see above)
4. Query profiling data via your AI assistant:
- "What are the slowest functions?"
- "Show me the p99 latencies"
- "Are there any channels with growing queues?"
Allocation Tracking
In addition to time-based profiling, hotpath can track memory allocations. This feature uses a custom global allocator from the allocation-counter crate to intercept all memory allocations and provide detailed statistics about memory usage per function.
By default, allocation tracking is cumulative, meaning that a function's allocation count includes all allocations made by functions it calls (nested calls). Notably, it produces invalid results for recursive functions. To track only exclusive allocations (direct allocations made by each function, excluding nested calls), set the HOTPATH_ALLOC_SELF=true environment variable when running your program.
Run your program with the allocation tracking feature to print a similar report:
cargo run --features='hotpath,hotpath-alloc'
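To track only exclusive (self) allocations instead of cumulative ones, combine the run with the environment variable described above:

HOTPATH_ALLOC_SELF=true cargo run --features='hotpath,hotpath-alloc'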

Profiling memory allocations for async functions
To profile memory usage of async functions, use a configuration along these lines - it ensures tokio runs a current_thread runtime when the allocation-profiling feature is enabled (a sketch; check hotpath.rs for the exact recommended attributes):

// Force a single-threaded runtime when allocation profiling is on
#[cfg_attr(feature = "hotpath-alloc", tokio::main(flavor = "current_thread"))]
#[cfg_attr(not(feature = "hotpath-alloc"), tokio::main)]
#[cfg_attr(feature = "hotpath", hotpath::main)]
async fn main() {
    // ...
}
Why this limitation exists: The allocation tracking uses thread-local storage to track memory usage. In multi-threaded runtimes, async tasks can migrate between threads, making it impossible to accurately attribute allocations to specific function calls.
Channels, Futures, and Streams Monitoring
In addition to function profiling, hotpath can instrument async channels, futures and streams to track message throughput, queue sizes, and data flow. This is particularly useful for debugging async applications and identifying bottlenecks in concurrent message-passing systems.
Channel Monitoring
The channel! macro wraps channel creation to automatically track statistics:
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Wrap channel creation; sent/received counts and queue size are tracked
    let (tx, mut rx) = hotpath::channel!(mpsc::channel::<String>(10));

    tx.send("hello".to_string()).await.unwrap();
    rx.recv().await;
}
std::sync channels can be instrumented out of the box. Enable the tokio, futures, or crossbeam features for Tokio, futures-rs, and crossbeam channels, respectively.
Supported channel types:
- tokio::sync::mpsc::channel
- tokio::sync::mpsc::unbounded_channel
- tokio::sync::oneshot::channel
- futures_channel::mpsc::channel
- futures_channel::mpsc::unbounded
- futures_channel::oneshot::channel
- crossbeam_channel::bounded
- crossbeam_channel::unbounded
Optional features:
// Custom label for easier identification in TUI
let (tx, rx) = hotpath::channel!(mpsc::channel::<String>(10), label = "task-queue");

// Enable message logging (requires Debug trait on message type)
let (tx, rx) = hotpath::channel!(mpsc::channel::<String>(10), log = true);
Capacity parameter requirement:
⚠️ Important: For futures::channel::mpsc bounded channels, you must specify the capacity parameter because their API doesn't expose the capacity after creation:
use futures_channel::mpsc;

// futures bounded channel - MUST specify capacity
// (the named capacity option shown here follows the macro's label/log option style;
// see hotpath.rs for the exact syntax)
let (tx, rx) = hotpath::channel!(mpsc::channel::<String>(10), capacity = 10);
Tokio and crossbeam channels don't require this parameter because their capacity is accessible from the channel handles.
Futures Monitoring
The future! macro and #[future_fn] attribute instrument async futures to track poll counts and lifecycle:
// Instrument an async function with the attribute
#[hotpath::future_fn]
async fn fetch_data() -> u64 {
    42
}

// Or wrap any future inline with the macro
async fn run() {
    let sum = hotpath::future!(async { 1 + 2 }).await;
    let data = fetch_data().await;
}
Optional features:
// Log the result value (requires Debug on return type)
let result = hotpath::future!(async { 1 + 2 }, log = true).await;
Stream Monitoring
The stream! macro instruments async streams to track items yielded:
use futures::{stream, StreamExt};

#[tokio::main]
async fn main() {
    // Track items as they are yielded
    let mut s = hotpath::stream!(stream::iter(1..=100));
    while let Some(item) = s.next().await {
        let _ = item;
    }
}
Optional features:
// Custom label
let s = hotpath::stream!(stream::iter(1..=100), label = "numbers");

// Enable item logging (requires Debug trait on item type)
let s = hotpath::stream!(stream::iter(1..=100), log = true);
Viewing Performance Metrics in TUI
When using the live TUI dashboard, channel and stream statistics are displayed alongside function metrics. The TUI shows:
- Real-time sent/received counts for channels
- Queue sizes and queued bytes
- Items yielded for streams
- State changes (active → full → closed)
- Recent message/item logs (when logging is enabled)
See the Live Performance Metrics TUI section for setup instructions.
Environment variables:
- HOTPATH_LOGS_LIMIT - Maximum number of log entries to keep per channel/stream (default: 50)
- HOTPATH_METRICS_PORT - Port for the HTTP metrics server (default: 6770)
- HOTPATH_METRICS_SERVER_OFF - Set to true or 1 to disable the HTTP metrics server entirely
How Channel and Stream Monitoring Works
The channel! macro wraps channels with lightweight proxies that transparently forward all messages while collecting real-time statistics. Each send and recv operation passes through a monitored proxy that emits updates to a background metrics collection thread.
The stream! macro wraps streams and tracks items as they are yielded, collecting statistics about throughput and completion.
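To make the proxy idea concrete, here is a simplified sketch of the receive-side pattern described above (not hotpath's actual implementation, just the shape of it):

use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use tokio::sync::mpsc;

fn monitored<T: Send + 'static>(
    mut inner_rx: mpsc::Receiver<T>,
    received: Arc<AtomicU64>,
) -> mpsc::Receiver<T> {
    // Capacity-1 proxy: messages flow from the original channel, through this
    // forwarding task, to the consumer - adding one slot of extra buffering.
    let (proxy_tx, proxy_rx) = mpsc::channel(1);
    tokio::spawn(async move {
        while let Some(msg) = inner_rx.recv().await {
            // Emit a stats update for every forwarded message
            received.fetch_add(1, Ordering::Relaxed);
            if proxy_tx.send(msg).await.is_err() {
                break; // consumer receiver was dropped
            }
        }
    });
    proxy_rx
}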
Background processing: The first invocation of channel! or stream! automatically starts:
- A background thread for metrics collection
- An HTTP server exposing metrics in JSON format for the TUI (see Getting Started with TUI)
A note on accuracy
hotpath instruments channels by adding a proxy with a capacity of 1 on the receive side. Messages flow directly into your original channel, then through the proxy before reaching the consumer. This design adds one slot of extra buffering for bounded channels.
Please note that enabling monitoring can subtly affect channel behavior in some cases. For example, try_send may behave slightly differently since the proxy adds one slot of extra capacity. Also, some wrappers currently do not propagate the information that the receiver has been dropped.
I'm actively improving the library, so feedback, issues, and bug reports are appreciated.
ChannelsGuard - Printing Statistics on Drop
In addition to the TUI, you can use ChannelsGuard to automatically print channel and stream statistics when your program ends (similar to function profiling output):
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Prints channel statistics when dropped at the end of main
    let _guard = hotpath::ChannelsGuard::new();

    let (tx, mut rx) = hotpath::channel!(mpsc::channel::<String>(10), label = "task-queue");
    tx.send("job-1".to_string()).await.unwrap();
    rx.recv().await;
}
Output example:
=== Channel Statistics (runtime: 5.23s) ===
+------------------+-------------+--------+------+----------+--------+------------+
| Channel | Type | State | Sent | Received | Queued | Queued Mem |
+------------------+-------------+--------+------+----------+--------+------------+
| task-queue | bounded[10] | active | 1543 | 1543 | 0 | 0 B |
| http-responses | unbounded | active | 892 | 890 | 2 | 200 B |
| shutdown-signal | oneshot | closed | 1 | 1 | 0 | 0 B |
+------------------+-------------+--------+------+----------+--------+------------+
Customize output format:
let _guard = hotpath::ChannelsGuardBuilder::new()
    .format(hotpath::Format::Json)
    .build();
How It Works
- #[hotpath::main] - Macro that initializes the background measurement processing
- #[hotpath::measure] - Macro that wraps functions with profiling code
- Background thread - Measurements are sent to a dedicated worker thread via a bounded channel
- Statistics aggregation - Worker thread maintains running statistics for each function/code block
- Automatic reporting - Performance summary displayed when the program exits
Debug Helpers
hotpath provides macros for tracking values and logging debug info that can be viewed in the TUI's "Data Flow" tab.
hotpath::dbg!
Works like std::dbg! but sends debug output to the profiler. Logs are grouped by source location and viewable in the TUI.
// Debug a single value - logs "3"
hotpath::dbg!(1 + 2);

// Debug multiple values
hotpath::dbg!(x, y);
hotpath::val!
Tracks key-value pairs. Unlike dbg!, values are grouped by key name, making it useful for tracking named metrics across different code locations.
// Track a counter value (key names here are illustrative)
hotpath::val!("processed_jobs").set(42);

// Track state changes
hotpath::val!("connection_state").set("connected");
hotpath::gauge!
Tracks numeric values with set/increment/decrement operations. Gauges display current value, min/max, and update history.
// Set an absolute value
hotpath::gauge!("queue_depth").set(10);

// Increment/decrement
hotpath::gauge!("queue_depth").inc();
hotpath::gauge!("queue_depth").dec();

// Chain operations
hotpath::gauge!("queue_depth").set(5).inc().dec();
All debug macros require the hotpath feature and are no-ops otherwise. Values can be inspected in the TUI under the "Data Flow" tab or via the HTTP API at /debug.
API
Macros
#[hotpath::main]
Attribute macro that initializes the background measurement processing when applied. Supports parameters:
- percentiles = [50, 95, 99] - Custom percentiles to display
- format = "json" - Output format ("table", "json", "json-pretty")
- limit = 20 - Maximum number of functions to display (default: 15, 0 = show all)
- timeout = 5000 - Optional timeout in milliseconds. If specified, the program will print the report and exit after the timeout (useful for profiling long-running programs like HTTP servers)
- output_path = "path/to/report.json" - Write report to file instead of stdout. The HOTPATH_OUTPUT_PATH env var takes precedence over this setting.
#[hotpath::measure]
An opt-in attribute macro that instruments functions to send timing measurements to the background processor.
#[hotpath::measure_all]
An attribute macro that applies #[measure] to all functions in a mod or impl block. Useful for bulk instrumentation without annotating each function individually. Can be used on:
- Inline module declarations - Instruments all functions within the module
- Impl blocks - Instruments all methods in the implementation
Example:
// Measure all methods in an impl block
#[hotpath::measure_all]
impl Calculator {
    fn add(&self, a: u64, b: u64) -> u64 { a + b }
    fn sub(&self, a: u64, b: u64) -> u64 { a - b }
}

// Measure all functions in a module
#[hotpath::measure_all]
mod math_operations {
    pub fn multiply(a: u64, b: u64) -> u64 { a * b }
}
Note: Once Rust stabilizes #![feature(proc_macro_hygiene)] and #![feature(custom_inner_attributes)], it will be possible to use #![measure_all] as an inner attribute directly inside module files (e.g., at the top of math_operations.rs) to automatically instrument all functions in that module.
#[hotpath::skip]
A marker attribute that excludes specific functions from instrumentation when used within a module or impl block annotated with #[measure_all]. The function executes normally but doesn't send measurements to the profiling system.
Example:
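A minimal sketch (the Worker type and method names are illustrative):

#[hotpath::measure_all]
impl Worker {
    fn handle_request(&self) {
        // measured
    }

    #[hotpath::skip]
    fn internal_helper(&self) {
        // runs normally, but sends no measurements
    }
}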
hotpath::measure_block!(label, expr)
Macro that measures the execution time of a code block with a static string label.
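For example, matching the custom_block entry in the summary output above (expensive_computation is a placeholder):

let result = hotpath::measure_block!("custom_block", expensive_computation());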
hotpath::channel!(expr)
Macro that instruments channels to track message flow statistics. Wraps channel creation with monitoring code that tracks sent/received counts, queue size, and channel state.
Supported patterns:
- hotpath::channel!(mpsc::channel::<T>(size)) - Basic instrumentation
- hotpath::channel!(mpsc::channel::<T>(size), label = "name") - With custom label
- hotpath::channel!(mpsc::channel::<T>(size), log = true) - With message logging (requires Debug trait)
- hotpath::channel!(mpsc::channel::<T>(size), label = "name", log = true) - Both options combined
Supported channel types: tokio::sync::mpsc, tokio::sync::oneshot, futures_channel::mpsc, crossbeam_channel
hotpath::stream!(expr)
Macro that instruments streams to track items yielded. Wraps stream creation with monitoring code that tracks yield count and stream state.
Supported patterns:
- hotpath::stream!(stream::iter(1..=100)) - Basic instrumentation
- hotpath::stream!(stream::iter(1..=100), label = "name") - With custom label
- hotpath::stream!(stream::iter(1..=100), log = true) - With item logging (requires Debug trait)
- hotpath::stream!(stream::iter(1..=100), label = "name", log = true) - Both options combined
FunctionsGuardBuilder API (Function Profiling)
hotpath::FunctionsGuardBuilder::new(caller_name) - Create a new builder with the specified caller name
Configuration methods:
- .percentiles(&[u8]) - Set custom percentiles to display (default: [95])
- .format(Format) - Set output format (Table, Json, JsonPretty)
- .limit(usize) - Set maximum number of functions to display (default: 15, 0 = show all)
- .output_path(path) - Write report to file instead of stdout (HOTPATH_OUTPUT_PATH env var takes precedence)
- .reporter(Box<dyn Reporter>) - Set custom reporter (overrides format)
- .build() - Build and return the FunctionsGuard
- .build_with_timeout(Duration) - Build a guard that automatically drops after the duration and exits the program (useful for profiling long-running programs like HTTP servers)
ChannelsGuard API (Channel Monitoring)
hotpath::ChannelsGuard::new() - Create a guard that prints channel statistics when dropped
hotpath::ChannelsGuardBuilder::new() - Create a builder for customizing channel statistics output
Configuration methods:
- .format(Format) - Set output format (Table, Json, JsonPretty)
- .output_path(path) - Write report to file instead of stdout (HOTPATH_OUTPUT_PATH env var takes precedence)
- .build() - Build and return the ChannelsGuard
Example:
let _guard = hotpath::ChannelsGuardBuilder::new()
    .format(hotpath::Format::JsonPretty)
    .build();
StreamsGuard API (Stream Monitoring)
hotpath::StreamsGuard::new() - Create a guard that prints stream statistics when dropped
hotpath::StreamsGuardBuilder::new() - Create a builder for customizing stream statistics output
Configuration methods:
- .format(Format) - Set output format (Table, Json, JsonPretty)
- .output_path(path) - Write report to file instead of stdout (HOTPATH_OUTPUT_PATH env var takes precedence)
- .build() - Build and return the StreamsGuard
Example:
let _guard = hotpath::StreamsGuardBuilder::new()
    .format(hotpath::Format::Json)
    .build();
FunctionsGuardBuilder example:
let _guard = hotpath::FunctionsGuardBuilder::new("my_app")
    .percentiles(&[50, 95, 99])
    .limit(20)
    .format(hotpath::Format::Table)
    .build();
Timed profiling example
use std::time::Duration;

fn main() {
    // Prints the report and exits after 30 seconds ("server" is an arbitrary caller name)
    let _guard = hotpath::FunctionsGuardBuilder::new("server")
        .build_with_timeout(Duration::from_secs(30));

    run_server(); // placeholder for a long-running workload
}
Usage Patterns
Using hotpath::main macro vs FunctionsGuardBuilder API
The #[hotpath::main] macro is convenient for most use cases, but the FunctionsGuardBuilder API provides more control over when profiling starts and stops.
Key differences:
- #[hotpath::main] - Automatic initialization and cleanup; report printed at program exit
- let _guard = FunctionsGuardBuilder::new("name").build() - Manual control; report printed when the guard is dropped, so you can fine-tune the measured scope
Only one hotpath guard may be alive at a time, regardless of whether it was created by the main macro or by the builder API. If a second guard is created, the library will panic.
Using FunctionsGuardBuilder for more control
fn main() {
    // Code before the guard is created is not measured (setup/workload are placeholders)
    setup();

    let _guard = hotpath::FunctionsGuardBuilder::new("main")
        .percentiles(&[95, 99])
        .build();

    // Only code inside the guard's lifetime is profiled; the report prints when it drops
    workload();
}
Using in unit tests
In unit tests you can profile each individual test case:
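A minimal sketch (my_function is a placeholder):

#[test]
fn profiles_my_function() {
    // One guard per test: the report for this test prints when _guard drops
    let _guard = hotpath::FunctionsGuardBuilder::new("profiles_my_function").build();
    my_function();
}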
Run tests with profiling enabled:
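cargo test --features=hotpath -- --test-threads=1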
Note: Use --test-threads=1 to ensure tests run sequentially, as only one hotpath guard can be active at a time.
Percentiles Support
By default, hotpath displays P95 percentile in the performance summary. You can customize which percentiles to display using the percentiles parameter:
#[tokio::main]
#[hotpath::main(percentiles = [50, 95, 99])]
async fn main() {
    // ...
}
For multiple measurements of the same function or code block, percentiles help identify performance distribution patterns. You can use percentile 0 to display min value and 100 to display max.
Output Formats
By default, hotpath displays results in a human-readable table format. You can also output results in JSON format for programmatic processing:
#[tokio::main]
#[hotpath::main(format = "json")]
async fn main() {
    // ...
}
Supported format options:
"table"(default) - Human-readable table format"json"- Compact, oneline JSON format"json-pretty"- Pretty-printed JSON format"none"- Suppress all output (profiling still active, metrics server and MCP server still function)
Environment variable override: Set HOTPATH_OUTPUT_FORMAT to override the format for all guards (functions, channels, streams, futures). This takes precedence over programmatic .format() configuration. Invalid values will cause a panic. Use none to suppress all output while keeping profiling active (useful when only using the metrics server or MCP server).
HOTPATH_OUTPUT_FORMAT=json cargo run --features=hotpath
HOTPATH_OUTPUT_FORMAT=none cargo run --features=hotpath
Example JSON output:
You can combine multiple parameters:
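A sketch combining the parameters documented in the API section below:

#[tokio::main]
#[hotpath::main(percentiles = [50, 95, 99], format = "json-pretty", limit = 20)]
async fn main() {
    // ...
}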
Custom Reporters
You can implement your own reporting to control how profiling results are handled. This allows you to plug hotpath into existing tools like loggers, CI pipelines, or monitoring systems.
For complete working examples, see:
- examples/csv_file_reporter.rs - Save metrics to a CSV file
- examples/json_file_reporter.rs - Save metrics to a JSON file
- examples/tracing_reporter.rs - Log metrics using the tracing crate
Benchmarking
Measure overhead of profiling 10k method calls with hyperfine:
Timing:
cargo build --example benchmark --features hotpath --release
hyperfine --warmup 3 './target/release/examples/benchmark'
Allocations:
cargo build --example benchmark --features='hotpath,hotpath-alloc' --release
hyperfine --warmup 3 './target/release/examples/benchmark'