# testgap

AI-powered test gap finder. Scans your codebase with tree-sitter, identifies untested functions, and uses Claude to suggest what tests to write.
## Install

Install from crates.io (once published) or build from source:
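Assuming the standard Cargo workflow (the repository URL is not shown in this README):

```sh
# From crates.io, once the crate is published
cargo install testgap

# From a local checkout of the source
cargo install --path .
```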
## Quick Demo

```
$ testgap analyze --no-ai

testgap — Test Gap Analysis
────────────────────────────────────────
Project:   /home/user/my-project
Languages: rust, typescript
Coverage:  42/68 functions (61.8%)
AI:        disabled

CRITICAL (3) ─────────────────────────

  process_payment  src/billing.rs:47
    Public function with high complexity (12) and no test coverage
    Signature: pub fn process_payment(order: &Order, method: PaymentMethod) -> Result<Receipt>
    Complexity: 12

  validate_schema  src/api/handlers.rs:112
    Public function with high complexity (8) and no test coverage
    Signature: pub fn validate_schema(input: &Value, schema: &Schema) -> Result<()>
    Complexity: 8

WARNING (5) ──────────────────────────
...

Summary: 3 critical, 5 warning, 18 info
```
## Usage

```sh
# Analyze current directory
testgap analyze

# Static analysis only (no AI, no API key needed)
testgap analyze --no-ai

# Analyze a specific project
testgap analyze path/to/project

# JSON output for CI
testgap analyze --format json

# Markdown report
testgap analyze --format markdown

# Filter by language
testgap analyze --lang rust

# Only show critical gaps
testgap analyze --severity critical

# Reduce AI cost — only analyze critical gaps
testgap analyze --ai-severity critical

# Create a config file
testgap init
```
## Best Practices

**Best use cases:**

- CI gate for untested public APIs
- Pre-release audit of test coverage
- Onboarding into an unfamiliar codebase
- Prioritizing which tests to write first

**When NOT to use testgap:**

- Not a replacement for runtime coverage tools (lcov, tarpaulin, istanbul) — testgap uses static analysis
- Skip generated or vendored code via exclude patterns in `.testgap.toml`
- Don't chase 100% — Info-level gaps on private helpers are usually fine

**Tips:**

- Start with `--no-ai` to get a fast baseline without API costs
- Use `--fail-on-critical` in CI to catch regressions
- Use `--ai-severity critical` to send only critical gaps to the AI, dramatically reducing API cost
- Pipe JSON output to `jq` for custom filtering: `testgap analyze --format json | jq '.gaps[] | select(.severity == "critical")'`
- Add a `.testgap.toml` config file early so the whole team shares the same settings
## Supported Languages

| Language | Extensions | Test Detection |
|---|---|---|
| Rust | `.rs` | `#[test]`, `#[cfg(test)]`, `test_` prefix |
| JavaScript | `.js`, `.jsx`, `.mjs`, `.cjs` | `test()`, `it()`, `describe()`, test dirs |
| TypeScript | `.ts`, `.tsx`, `.mts`, `.cts` | `test()`, `it()`, `describe()`, test dirs |
| Python | `.py` | `test_` prefix, test dirs |
| Go | `.go` | `Test` prefix, `*testing.T` parameter |
## How It Works

1. **Scan** — walks your project and classifies files as source or test
2. **Extract** — uses tree-sitter to parse functions from source files
3. **Map** — matches test functions to source functions by name, file convention, and body references
4. **Detect** — identifies untested functions and classifies severity:
   - **Critical**: public + complex + untested
   - **Warning**: public + untested
   - **Info**: private + untested
5. **Analyze** (optional) — sends gaps to Claude API for risk assessment and test suggestions
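The severity rules in the Detect step can be sketched as a small Rust function. The `Severity` enum, the `classify` helper, and the complexity threshold of 8 (the lowest value flagged CRITICAL in the demo output) are illustrative assumptions, not testgap's actual internals:

```rust
// Sketch of the Detect step's severity classification.
#[derive(Debug, PartialEq)]
enum Severity {
    Critical, // public + complex + untested
    Warning,  // public + untested
    Info,     // private + untested
}

/// Classify an untested function by visibility and cyclomatic complexity.
/// The threshold of 8 is an assumption inferred from the demo output.
fn classify(is_public: bool, complexity: u32) -> Severity {
    match (is_public, complexity >= 8) {
        (true, true) => Severity::Critical,
        (true, false) => Severity::Warning,
        (false, _) => Severity::Info,
    }
}

fn main() {
    // process_payment from the demo: public, complexity 12
    assert_eq!(classify(true, 12), Severity::Critical);
    assert_eq!(classify(true, 2), Severity::Warning);
    assert_eq!(classify(false, 20), Severity::Info);
}
```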
## Configuration

Create a `.testgap.toml` in your project root:
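A minimal sketch — the key names below are illustrative and may not match testgap's actual schema:

```toml
# Illustrative keys — check .testgap.toml.example for the real option names
exclude = ["target/**", "vendor/**", "**/generated/**"]

[ai]
severity = "critical"   # only send critical gaps to the AI
```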
See `.testgap.toml.example` for all options.
## CI Integration

```yaml
- name: Check test gaps
  run: testgap analyze --format json --fail-on-critical --no-ai
```
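In a full GitHub Actions workflow, that step might sit in context like this (the workflow name, trigger, and install step are illustrative):

```yaml
name: test-gaps
on: [pull_request]

jobs:
  test-gaps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install testgap
        run: cargo install testgap
      - name: Check test gaps
        run: testgap analyze --format json --fail-on-critical --no-ai
```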
Exit codes:

- `0` — no critical gaps (or `--fail-on-critical` not set)
- `1` — critical gaps found (with `--fail-on-critical`)
- `2` — runtime error
## Environment Variables

- `ANTHROPIC_API_KEY` — required for AI analysis (use `--no-ai` to skip)
## Architecture

See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for the data flow diagram, module responsibilities, and key design decisions.
## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for development setup, running tests, and how to add a new language.
## License
MIT