async-inspect 🔍
X-ray vision for async Rust
async-inspect is a debugging tool that visualizes and inspects async state machines in Rust. See exactly what your futures are doing, where they're stuck, and why.
Documentation | Crates.io | API Docs
😰 The Problem
Debugging async Rust is frustrating. Attach a regular debugger to a hung async program and the backtrace points into runtime internals rather than your code. You can't tell:
- Which `.await` is blocked
- What the future is waiting for
- How long it's been waiting
- What state the async state machine is in
Common async debugging nightmares:
- 🐌 Tests hang forever (where?)
- 🔄 Deadlocks with no stack trace
- ⏰ Timeouts that shouldn't happen
- 🎲 Flaky tests (race conditions)
- 📉 Performance issues (lock contention? slow I/O?)
Current "solutions":
- Solution 1: Add prints everywhere 😭
- Solution 2: Use tokio-console (limited visibility)
- Solution 3: Give up and add timeouts everywhere 🤷
💡 The Solution
async-inspect gives you complete visibility into async execution:
$ async-inspect run ./my-app
┌─────────────────────────────────────────────────────────────┐
│ async-inspect - Task Inspector │
├─────────────────────────────────────────────────────────────┤
│ │
│ Task #42: fetch_user_data(user_id=12345) │
│ Status: BLOCKED (2.3s) │
│ State: WaitingForPosts │
│ │
│ Progress: ▓▓▓▓▓░░░ 2/4 steps │
│ │
│ ✅ fetch_user() - Completed (145ms) │
│ ⏳ fetch_posts() - IN PROGRESS (2.3s) ◄─── STUCK HERE │
│ └─> http::get("api.example.com/posts/12345") │
│ └─> TCP: ESTABLISHED, waiting for response │
│ └─> Timeout in: 27.7s │
│ ⏸️ fetch_friends() - Not started │
│ ⏸️ build_response() - Not started │
│ │
│ State Machine Polls: 156 (avg: 14.7ms between polls) │
│ │
│ Press 'd' for details | 't' for timeline | 'g' for graph │
└─────────────────────────────────────────────────────────────┘
Now you know EXACTLY:
- ✅ Which step is stuck (`fetch_posts`)
- ✅ What it's waiting for (HTTP response)
- ✅ How long it's been waiting (2.3s)
- ✅ What will happen next (timeout in 27.7s)
- ✅ Complete execution history
🎯 Why async-inspect?
Motivation
Async Rust is powerful but opaque. When you write an `async fn`, the compiler transforms its body into a state machine: each `.await` becomes a state, and every local variable that lives across an `.await` becomes a field of a compiler-generated type. The problem: this state machine is invisible to debuggers!
Traditional debuggers show you:
- ❌ Stack frames (useless - points to runtime internals)
- ❌ Variable values (many are "moved" or "uninitialized")
- ❌ Current line (incorrect - shows scheduler code)
async-inspect understands async state machines and shows you:
- ✅ Current state name and position
- ✅ All captured variables and their values
- ✅ Which `.await` you're blocked on
- ✅ Why you're blocked (I/O, lock, sleep, etc.)
- ✅ Complete execution timeline
- ✅ Dependencies between tasks
🆚 Comparison with Existing Tools
tokio-console
tokio-console is excellent but limited:
What tokio-console shows:
```text
Task   Duration   Polls   State
#42    2.3s       156     Running
#43    0.1s       5       Idle
#44    5.2s       892     Running
```
What it DOESN'T show:
- ❌ Which `.await` is blocked
- ❌ Internal state machine state
- ❌ What the task is waiting for
- ❌ Variable values
- ❌ Deadlock detection
- ❌ Timeline visualization
Comparison Table
| Feature | async-inspect | tokio-console | gdb/lldb | println! |
|---|---|---|---|---|
| See current `.await` | ✅ | ❌ | ❌ | ⚠️ Manual |
| State machine state | ✅ | ❌ | ❌ | ❌ |
| Variable inspection | ✅ | ❌ | ⚠️ Limited | ❌ |
| Waiting reason | ✅ | ❌ | ❌ | ❌ |
| Timeline view | ✅ | ⚠️ Basic | ❌ | ❌ |
| Deadlock detection | ✅ | ❌ | ❌ | ❌ |
| Dependency graph | ✅ | ⚠️ Basic | ❌ | ❌ |
| Runtime agnostic | ✅ | ❌ Tokio only | ✅ | ✅ |
| Zero code changes | ✅ | ⚠️ Requires tracing | ✅ | ❌ |
async-inspect is complementary to tokio-console:
- tokio-console: High-level task monitoring
- async-inspect: Deep state machine inspection
Use both together for complete visibility!
Runtime Support
async-inspect works with multiple async runtimes:
- ✅ Tokio - Full support with the `tokio` feature
- ✅ async-std - Full support with the `async-std-runtime` feature
- ✅ smol - Full support with the `smol-runtime` feature
Usage is identical across runtimes; you choose the runtime with a Cargo feature flag.
See the examples/ directory for complete working examples.
✨ Features (Planned)
Core Features
- 🔍 State Machine Inspection - See current state and variables
- ⏱️ Execution Timeline - Visualize async execution over time
- 🎯 Breakpoints - Pause at specific states or `.await` points
- 🔗 Dependency Tracking - See which tasks are waiting on others
- 💀 Deadlock Detection - Automatically find circular dependencies
- 📊 Performance Analysis - Identify slow operations and contention
- 🎮 Interactive Debugging - Step through async state transitions
- 📸 Snapshot & Replay - Record execution and replay later
Advanced Features
- 🌐 Distributed Tracing - Track async across services
- 🔥 Flamegraphs - Visualize where time is spent
- 🎛️ Live Inspection - Attach to running processes
- 📝 Export & Share - Save traces for collaboration
- 🤖 CI Integration - Detect hangs in test suites
- 🎨 Custom Views - Plugin system for specialized visualization
🚧 Status
Work in Progress - Early development
Current version: 0.1.0-alpha
🚀 Quick Start (Planned API)
Installation
Not yet published on crates.io; until the first release, build from source.
Basic Usage
- Run your app with inspection enabled
- Attach to a running process
- Run tests with inspection
- Start the web dashboard
In Code (Optional Instrumentation)
Add the dependency to `Cargo.toml`:

```toml
[dependencies]
async-inspect = "0.1"
```

Then instrument specific functions, or drop in manual inspection points:

```rust
// Instrument specific functions (attribute name is part of the planned API)
#[async_inspect::instrument]
async fn fetch_data() { /* ... */ }

// Or use manual inspection points
use async_inspect::prelude::*;

async fn process() {
    /* ... */
}
```
📖 Use Cases
1. Find Where a Test Is Stuck
A test that awaits a future which never resolves simply hangs with no output. Run it under async-inspect and the inspector reports which task is blocked, on which `.await`, and for how long.
2. Debug a Deadlock
Two tasks each hold a mutex the other one is waiting for.
With async-inspect:
💀 DEADLOCK DETECTED!
Task #42: waiting for Mutex<i32> @ 0x7f8a9c0
└─> Held by: Task #89
Task #89: waiting for Mutex<i32> @ 0x7f8a9d0
└─> Held by: Task #42
Circular dependency:
Task #42 → Mutex A → Task #89 → Mutex B → Task #42
Suggestion:
• Acquire locks in consistent order (A before B)
• Use try_lock() with timeout
• Consider lock-free alternatives
3. Performance Investigation
Per-task timelines and poll statistics show where time actually goes: slow I/O, lock contention, or a task that is polled far more often than it should be.
4. CI/CD Integration
```yaml
# .github/workflows/test.yml
- name: Run tests with async inspection
  run: async-inspect test --timeout 30s --fail-on-hang

- name: Upload trace on failure
  if: failure()
  uses: actions/upload-artifact@v3
  with:
    name: async-trace
    path: async-inspect-trace.json
```
🛠️ How It Works
Compiler Instrumentation
A proc macro rewrites each `async fn` so that every state transition and `.await` point reports to the inspector. Conceptually, the instrumented version wraps each poll in a recording span; your code is otherwise unchanged.
Runtime Integration
- Tokio: Hooks into task spawning and polling
- async-std: Custom executor wrapper
- smol: Runtime instrumentation
- Generic: Works with any runtime via proc macros
Zero Overhead When Disabled
Instrumentation is gated behind a Cargo feature: release builds compile it out entirely, so production binaries pay no cost, while debug builds get full instrumentation.
🌐 Ecosystem Integration
async-inspect works seamlessly with your existing Rust async ecosystem tools:
Prometheus Metrics
Export metrics for monitoring dashboards:
```rust
use async_inspect::export::PrometheusExporter; // module path illustrative

let exporter = PrometheusExporter::new()?;
exporter.update(/* ... */);

// In your /metrics endpoint:
let metrics = exporter.gather();
```
Available metrics:
- `async_inspect_tasks_total` - Total tasks created
- `async_inspect_active_tasks` - Currently active tasks
- `async_inspect_blocked_tasks` - Tasks waiting on I/O
- `async_inspect_task_duration_seconds` - Task execution times
- `async_inspect_tasks_failed_total` - Failed task count
OpenTelemetry Export
Send traces to Jaeger, Zipkin, or any OTLP backend:
```rust
use async_inspect::export::OtelExporter; // module path illustrative

let exporter = OtelExporter::new(/* ... */);
exporter.export_tasks(/* ... */);
```
Tracing Integration
Automatic capture via tracing-subscriber:
```rust
use tracing_subscriber::prelude::*;
use async_inspect::AsyncInspectLayer;

tracing_subscriber::registry()
    .with(AsyncInspectLayer::new()) // constructor name illustrative
    .init();
```
Tokio Console Compatibility
Use alongside tokio-console for complementary insights:
```sh
# Terminal 1: Run your app with tokio-console support enabled
RUSTFLAGS="--cfg tokio_unstable" cargo run

# Terminal 2: Monitor live with tokio-console
tokio-console

# async-inspect exports provide historical analysis alongside the live view
```
Grafana Dashboards
Import async-inspect metrics into Grafana:
- Configure Prometheus scraping
- Import the dashboard template (coming soon)
- Monitor key metrics:
  - Task creation rate
  - Active/blocked task ratio
  - Task duration percentiles
  - Error rates
Feature Flags:

```toml
[dependencies]
async-inspect = { version = "0.0.1", features = [
    "prometheus-export",    # Prometheus metrics
    "opentelemetry-export", # OTLP traces
    "tracing-sub",          # Tracing integration
] }
```
📤 Export Formats
async-inspect supports multiple industry-standard export formats for visualization and analysis:
JSON Export
Export complete task and event data as structured JSON:
```rust
use async_inspect::export::JsonExporter; // module path illustrative

// Export to file
JsonExporter::export_to_file(/* ... */)?;

// Or get as string
let json = JsonExporter::export_to_string(/* ... */)?;
```
Use with: jq, Python pandas, JavaScript tools, data pipelines
CSV Export
Export tasks and events in spreadsheet-compatible format:
```rust
use async_inspect::export::CsvExporter; // module path illustrative

// Export tasks (id, name, duration, poll_count, etc.)
CsvExporter::export_tasks_to_file(/* ... */)?;

// Export events (event_id, task_id, timestamp, kind, details)
CsvExporter::export_events_to_file(/* ... */)?;
```
Use with: Excel, Google Sheets, pandas, data analysis
Chrome Trace Event Format
Export for visualization in chrome://tracing or Perfetto UI:
```rust
use async_inspect::export::ChromeTraceExporter; // module path illustrative

ChromeTraceExporter::export_to_file(/* ... */)?;
```
How to visualize:
- Chrome DevTools (built-in):
  - Open Chrome/Chromium
  - Navigate to `chrome://tracing`
  - Click "Load" and select `trace.json`
  - Explore the interactive timeline!
- Perfetto UI (recommended):
  - Go to https://ui.perfetto.dev/
  - Click "Open trace file"
  - Select `trace.json`
  - Get advanced analysis features:
    - Thread-level view
    - SQL-based queries
    - Statistical summaries
    - Custom tracks
What you see:
- Task spawning and completion as events
- Poll operations with precise durations
- Await points showing blocking time
- Complete async execution timeline
- Task relationships and dependencies
Flamegraph Export
Generate flamegraphs for performance analysis:
```rust
use async_inspect::export::FlamegraphExporter; // module path illustrative

// Basic export (folded stack format)
FlamegraphExporter::export_to_file(/* ... */)?;

// Customized export
FlamegraphExporter::new()
    .include_polls(false)    // Exclude poll events
    .include_awaits(true)    // Include await points
    .min_duration_ms(10)     // Filter out operations shorter than 10ms
    .export_to_file(/* ... */)?;

// Generate SVG directly (requires the 'flamegraph' feature)
FlamegraphExporter::generate_svg(/* ... */)?;
```
How to visualize:
- Speedscope (easiest, online):
  - Go to https://www.speedscope.app/
  - Drop `flamegraph.txt` onto the page
  - Explore the interactive flamegraph
- inferno (local SVG generation)
- flamegraph.pl (classic)
What you see:
- Call stacks showing task hierarchies
- Time spent in each async operation
- Hotspots and bottlenecks
- Parent-child task relationships
Comprehensive Example
See `examples/export_formats.rs` for a complete example.
This demonstrates:
- All export formats in one workflow
- Realistic async operations
- Multiple concurrent tasks
- Export to JSON, CSV, Chrome Trace, and Flamegraph
- Usage instructions for each format
Output files:
async_inspect_exports/
├── data.json # Complete JSON export
├── tasks.csv # Task metrics
├── events.csv # Event timeline
├── trace.json # Chrome Trace Event Format
├── flamegraph.txt # Basic flamegraph
└── flamegraph_filtered.txt # Filtered flamegraph
🗺️ Roadmap
Phase 1: Core Inspector (Current)
- Basic state machine inspection
- Task listing and status
- Simple TUI interface
- Tokio runtime integration
Phase 2: Advanced Debugging
- Variable inspection
- Breakpoints on states
- Step-by-step execution
- Timeline visualization
Phase 3: Analysis Tools
- Deadlock detection
- Performance profiling
- Lock contention analysis
- Flamegraphs
Phase 4: Production Ready
- Web dashboard
- Live process attachment
- Distributed tracing
- CI/CD integration
- Plugin system
Phase 5: Ecosystem
- async-std support
- smol support
- IDE integration (VS Code, IntelliJ)
- Cloud deployment monitoring
🎨 Interface Preview (Planned)
TUI (Terminal)
┌─ async-inspect ─────────────────────────────────────────┐
│ [Tasks] [Timeline] [Graph] [Profile] [?] Help │
├──────────────────────────────────────────────────────────┤
│ │
│ Active Tasks: 23 CPU: ████░░ 45% │
│ Blocked: 8 Mem: ██░░░░ 20% │
│ Running: 15 │
│ │
│ Task State Duration Details │
│ ─────────────────────────────────────────────────────── │
│ #42 ⏳ WaitingPosts 2.3s http::get() │
│ #43 ✅ Done 0.1s Completed │
│ #44 💀 Deadlock 5.2s Mutex wait │
│ #45 🏃 Running 0.03s Computing │
│ │
│ [←→] Navigate [Enter] Details [g] Graph [q] Quit │
└──────────────────────────────────────────────────────────┘
Web Dashboard
http://localhost:8080
┌────────────────────────────────────────────────┐
│ async-inspect [Settings] │
├────────────────────────────────────────────────┤
│ │
│ 📊 Overview 🕒 Last updated: 2s ago │
│ │
│ ● 23 Tasks Active ▁▃▅▇█▇▅▃▁ Activity │
│ ⏸️ 8 Blocked │
│ 💀 1 Deadlock [View Details →] │
│ │
│ 📈 Performance │
│ ├─ Avg Response: 145ms │
│ ├─ 99th percentile: 2.3s │
│ └─ Slowest: fetch_posts() - 5.2s │
│ │
│ [View Timeline] [Export Trace] [Filter...] │
└────────────────────────────────────────────────┘
🤝 Contributing
Contributions welcome! This is a challenging project that needs expertise in:
- 🦀 Rust compiler internals
- 🔧 Async runtime implementation
- 🎨 UI/UX design
- 📊 Data visualization
- 🐛 Debugger implementation
Priority areas:
- State machine introspection
- Runtime hooks (Tokio, async-std)
- TUI implementation
- Deadlock detection algorithms
- Documentation and examples
See CONTRIBUTING.md for details.
📊 Telemetry
This project uses telemetry-kit to collect anonymous usage analytics. This helps us understand how async-inspect is used in the real world, enabling data-driven decisions instead of relying solely on GitHub issues.
What we collect:
- Commands executed (e.g., `monitor`, `export`, `stats`)
- Feature usage patterns
- Command execution times
- Opt-out rates (anonymous)
What we DON'T collect:
- No personal information
- No code or file contents
- No IP addresses or location data
- No identifying information
Why Telemetry Matters
Open source projects often make decisions based on a vocal minority. Telemetry gives us visibility into:
- Which features are actually used vs. which are requested
- Real-world performance characteristics
- Usage patterns across different environments
- Where to focus development effort
We will publish a public dashboard showing aggregated, anonymous usage data at: Coming soon
Disabling Telemetry
You can disable telemetry in several ways:
1. Environment variable (recommended)
2. Standard `DO_NOT_TRACK`:

```sh
export DO_NOT_TRACK=1
```

3. Compile-time (excludes telemetry code entirely):

```toml
[dependencies]
# "default-features = false" reconstructed from context
async-inspect = { version = "0.1", default-features = false, features = ["cli", "tokio"] }
```
Even when telemetry is disabled, we send a single anonymous opt-out signal. This helps us understand the opt-out rate without collecting any identifying information.
Learn more:
- telemetry-kit.dev - Project homepage
- docs.telemetry-kit.dev - Documentation
🔒 Security
async-inspect is designed to be used in development and CI/CD environments for analyzing async code. We take security seriously:
Supply Chain Security
- SLSA Level 3 Provenance: All release binaries include SLSA provenance attestations for verifiable builds
- Dependency Scanning: Automated dependency review on all pull requests
- License Compliance: Only permissive licenses (MIT, Apache-2.0, BSD) - GPL/AGPL excluded
- Security Audits: Continuous monitoring via `cargo-audit` and `cargo-deny`
Build Verification
You can verify the provenance of any release binary:
```sh
# Install the GitHub CLI, then verify a release binary:
gh attestation verify <binary> --repo <owner>/<repo>
```
Reporting Security Issues
If you discover a security vulnerability, please email security@ibrahimcesar.com instead of using the issue tracker.
📝 License
MIT OR Apache-2.0
🙏 Acknowledgments
Inspired by:
- tokio-console - Task monitoring for Tokio
- async-backtrace - Async stack traces
- tracing - Instrumentation framework
- Chrome DevTools - JavaScript async debugging
- Go's runtime tracer - Goroutine visualization
- rr - Time-travel debugging
async-inspect - Because async shouldn't be a black box 🔍
Status: 🚧 Pre-alpha - Architecture design phase
Star ⭐ this repo to follow development!
💬 Discussion
Have ideas or feedback? Open an issue or discussion!
Key questions we're exploring:
- How to minimize runtime overhead?
- Best UI for visualizing state machines?
- How to support multiple runtimes?
- What features would help you most?