# debtmap
Stop guessing where bugs hide. Start fixing what matters.
debtmap finds the Rust functions that are complex, untested, and frequently changed—the places bugs actually live.
## The Problem
Static analysis tools cry wolf. You get hundreds of warnings, most are noise, and you waste time on code that works fine.
debtmap is different. It combines five signals to find actual risk:
| Signal | What it catches |
|---|---|
| Cognitive complexity | Code that's hard to understand |
| Test coverage gaps | Untested critical paths |
| Git history | Code that breaks repeatedly |
| Pattern recognition | Ignores simple match statements |
| Entropy analysis | Filters repetitive false positives |
The result: a prioritized list of what to fix, with quantified impact.
## Install
## Usage

```bash
# Analyze your project

# With test coverage (recommended)

# Generate HTML report
```
## What You Get

```text
#1 SCORE: 8.9 [CRITICAL]
├─ TEST GAP: ./src/parser.rs:38 parse_complex_input()
├─ ACTION: Add 6 unit tests for full coverage
├─ IMPACT: -3.7 risk reduction
├─ DEPENDENCIES:
│  ├─ Called by: validate_input, process_request, handle_api_call
│  └─ Calls: tokenize, validate_syntax
└─ WHY: Complex logic (cyclomatic=6, cognitive=12) with 0% test coverage

STEPS:
1. Add 8 tests for 70% coverage gap [Easy]
   Commands: cargo test parse_complex_input::
2. Extract complex branches into focused functions [Medium]
   Commands: cargo clippy -- -W clippy::cognitive_complexity
3. Verify improvements [Easy]
   Commands: cargo test --all
```
Every item tells you:
- What to fix (exact file and line)
- Why it matters (the risk signals that triggered it)
- How to fix it (concrete steps with commands)
- Impact (quantified risk reduction)
## Why debtmap?

### Fewer False Positives
A 100-line match statement converting enums to strings? Other tools flag it as complex. debtmap recognizes it as a simple mapping and moves on.
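To make that concrete, here is the kind of pure enum-to-string mapping in question. The `Status` enum is hypothetical, not from debtmap itself: every arm is a trivial, side-effect-free return, so the function scores high on naive branch counting while carrying almost no cognitive load.

```rust
// Hypothetical example of a "pure mapping" match: many branches,
// but each arm is a constant return with no logic or side effects.
enum Status {
    Pending,
    Active,
    Suspended,
    Closed,
}

fn status_label(s: &Status) -> &'static str {
    match s {
        Status::Pending => "pending",
        Status::Active => "active",
        Status::Suspended => "suspended",
        Status::Closed => "closed",
    }
}

fn main() {
    // Reads like a lookup table, not like complex control flow.
    println!("{}", status_label(&Status::Active));
}
```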
Five pattern systems eliminate noise:
- Pure mapping detection (40% complexity reduction for simple matches)
- Entropy analysis (repetitive validation chains aren't complex)
- Framework patterns (Axum handlers, Tokio async, Clap CLI)
- Recursive match detection (context-aware nesting analysis)
- Complexity classification (state machines vs god objects)
### Actually Prioritized

Not alphabetical. Not by file. By actual risk:

```text
Risk = Complexity × (1 - Coverage) × Change Frequency × Bug History
```
Complex + untested + frequently changed = fix first.
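The formula can be read as a simple product; the sketch below is a direct, illustrative translation, where the field names, scales, and normalization are assumptions rather than debtmap's internal API:

```rust
// Illustrative only: a literal reading of the risk formula.
// Field names and units are assumptions, not debtmap internals.
struct FunctionStats {
    complexity: f64,       // e.g. cognitive complexity score
    coverage: f64,         // fraction of lines covered, 0.0..=1.0
    change_frequency: f64, // normalized rate of commits touching it
    bug_history: f64,      // normalized rate of bug-fix commits
}

fn risk(f: &FunctionStats) -> f64 {
    f.complexity * (1.0 - f.coverage) * f.change_frequency * f.bug_history
}

fn main() {
    let hot = FunctionStats {
        complexity: 12.0, coverage: 0.0, change_frequency: 2.0, bug_history: 1.5,
    };
    let cold = FunctionStats {
        complexity: 12.0, coverage: 0.9, change_frequency: 2.0, bug_history: 1.5,
    };
    // Identical complexity and churn, but high coverage drives risk
    // toward zero: only the untested hotspot surfaces at the top.
    println!("hot={:.1} cold={:.1}", risk(&hot), risk(&cold));
}
```

Note how any factor near zero suppresses the whole score, which is why well-tested or rarely-touched code drops off the list even when it is complex.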
### Fast

10-100x faster than comparable JVM- and Python-based tools: parallel processing, lock-free caching, written in Rust.
190K lines analyzed in 3.2 seconds (8 cores)
## CI/CD Integration

```yaml
# .github/workflows/quality.yml
name: Code Quality
on: [push, pull_request]  # adjust triggers to taste
jobs:
  debtmap:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: iepathos/debtmap-action@v1
        with:
          max-complexity-density: '10.0'
          fail-on-violation: 'true'
```
Density-based thresholds work for any codebase size—no adjustment needed as your code grows.
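As a sketch of why density scales where absolute thresholds do not, the check below assumes one plausible definition, complexity per 1,000 lines of code; debtmap's exact metric is not specified here:

```rust
// Sketch of a density-based gate. The definition of "density" as
// total complexity per 1,000 lines is an assumption for illustration.
fn complexity_density(total_complexity: f64, lines_of_code: f64) -> f64 {
    total_complexity / (lines_of_code / 1000.0)
}

fn violates(total_complexity: f64, lines_of_code: f64, max_density: f64) -> bool {
    complexity_density(total_complexity, lines_of_code) > max_density
}

fn main() {
    // A codebase that grows 10x while keeping the same complexity per
    // KLOC still passes the same threshold; no retuning required.
    println!("{}", violates(95.0, 10_000.0, 10.0));    // 9.5 per KLOC: ok
    println!("{}", violates(950.0, 100_000.0, 10.0));  // 9.5 per KLOC: ok
    println!("{}", violates(1200.0, 100_000.0, 10.0)); // 12.0 per KLOC: fails
}
```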
## Documentation
Full Documentation — guides, examples, configuration reference
## Roadmap
Current focus: Rust analysis excellence
- Cognitive + cyclomatic complexity
- Test coverage correlation
- Git history analysis
- Pattern-based false positive reduction
- Framework detection (Axum, Actix, Tokio, Diesel, Clap)
- Interactive TUI and HTML dashboards
- Unsafe code analysis
- Performance pattern detection
- Multi-language support (Go, Python, TypeScript)
## Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Good first issues:
- Improve Rust-specific analysis
- Add new complexity metrics
- Expand test coverage
- Documentation improvements
## License
MIT — see LICENSE
Questions? Open an issue or check the documentation.