rustqual
Comprehensive Rust code quality analyzer — six dimensions: Complexity, Coupling, DRY, IOSP, SRP, Test Quality — plus seven structural binary checks integrated into SRP and Coupling. Particularly useful as a structural quality guardrail for AI-generated code, catching the god-functions, mixed concerns, duplicated patterns, and weak tests that AI coding agents commonly produce.
Quality Dimensions
rustqual analyzes your Rust code across six quality dimensions, each contributing to an overall quality score:
| Dimension | Weight | What it checks |
|---|---|---|
| IOSP | 25% | Function separation (Integration vs Operation) |
| Complexity | 20% | Cognitive/cyclomatic complexity, magic numbers, nesting depth, function length, unsafe blocks, error handling |
| DRY | 20% | Duplicate functions, fragments, dead code, boilerplate |
| SRP | 15% | Struct cohesion (LCOM4), module length, function clusters, structural checks (BTC, SLM, NMS) |
| Test Quality | 10% | Assertion density (TQ-001), test function length (TQ-002), mock-heavy tests (TQ-003), assertion-free tests (TQ-004), coverage gaps (TQ-005) |
| Coupling | 10% | Module instability, circular dependencies, SDP, structural checks (OI, SIT, DEH, IET) |
What is IOSP?
The Integration Operation Segregation Principle (from Ralf Westphal's Flow Design) states that every function should be either:
- Integration — orchestrates other functions, contains no logic of its own
- Operation — contains logic (control flow, computation), but does not call other "own" functions
A function that does both is a violation. A function too small to matter (empty body, single expression without logic or own calls) is classified as Trivial.
┌─────────────┐ ┌─────────────┐ ┌────────────────────┐
│ Integration │ │ Operation │ │ ✗ Violation │
│ │ │ │ │ │
│ calls A() │ │ if x > 0 │ │ if x > 0 │
│ calls B() │ │ y = x*2 │ │ result = calc() │ ← mixes both
│ calls C() │ │ return y │ │ return result + 1 │
└─────────────┘ └─────────────┘ └────────────────────┘
Installation
```sh
# From crates.io
cargo install rustqual

# From source (inside a checkout of the repository)
cargo install --path .

# Then use either:
rustqual --help
cargo qual --help
```
Quick Start
```sh
# Analyze current directory
rustqual

# Analyze a specific file or directory
rustqual src/order.rs

# Show all functions, not just findings
rustqual --verbose

# Do not exit with code 1 on findings (for local exploration)
rustqual --no-fail

# Generate a default config file
rustqual --init

# Watch mode: re-analyze on file changes
rustqual --watch
```
Using AI coding agents? See Using with AI Coding Agents for integration patterns with Claude Code, Cursor, Copilot, and other tools.
Output Formats
Text (default)
── src/order.rs
✓ INTEGRATION process_order (line 12)
✓ OPERATION calculate_discount (line 28)
Complexity: logic=2, calls=0, nesting=1, cognitive=2, cyclomatic=3
✗ VIOLATION process_payment (line 48) [MEDIUM]
Logic: if (line 50), comparison (line 50), if (line 56)
Calls: determine_payment_method (line 55), charge_credit_card (line 59)
Complexity: logic=3, calls=2, nesting=1, cognitive=5, cyclomatic=4
· TRIVIAL get_name (line 72)
~ SUPPRESSED legacy_handler (line 85)
═══ Summary ═══
Functions: 24 Quality Score: 82.3%
IOSP: 85.7% (4I, 8O, 10T, 2 violations)
Complexity: 90.0% (3 complexity, 1 magic numbers)
DRY: 95.0% (1 duplicates, 2 dead code)
SRP: 100.0%
Test Quality: 100.0%
Coupling: 100.0%
~ Suppressed: 1
4 quality findings. Run with --verbose for details.
JSON
```sh
rustqual --json
# or
rustqual --format json
```
Produces machine-readable output with summary, functions, coupling, duplicates, dead_code, fragments, boilerplate, and srp sections.
GitHub Actions Annotations
Produces ::warning, ::error, and ::notice annotations that GitHub Actions renders inline on PRs:
::warning file=src/order.rs,line=48::IOSP violation in process_payment: logic=[if (line 50)], calls=[determine_payment_method (line 55)]
::error::Quality analysis: 2 violation(s), 82.3% quality score
DOT (Graphviz)
Generates a call-graph visualization with color-coded nodes:
- Green: Integration
- Blue: Operation
- Red: Violation
- Gray: Trivial
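For example, to render the graph (assumes Graphviz's `dot` binary is installed; filenames are illustrative):

```sh
rustqual --format dot > callgraph.dot
dot -Tsvg callgraph.dot -o callgraph.svg
```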
SARIF
Produces SARIF v2.1.0 output for integration with GitHub Code Scanning, VS Code SARIF Viewer, and other static analysis platforms. Includes rules for all dimensions (IOSP, complexity, coupling, DRY, SRP, test quality).
HTML
Generates a self-contained HTML report with:
- Dashboard showing overall quality score and 6 dimension scores
- Collapsible detail sections for IOSP, Complexity, DRY, SRP, Test Quality, and Coupling findings
- Color-coded severity indicators and inline CSS (no external dependencies)
CLI Reference
rustqual [OPTIONS] [PATH]
| Argument / Flag | Description |
|---|---|
| `PATH` | File or directory to analyze. Defaults to `.` |
| `-v, --verbose` | Show all functions, not just findings |
| `--json` | Output as JSON (shorthand for `--format json`) |
| `--format <FORMAT>` | Output format: `text`, `json`, `github`, `dot`, `sarif`, `html` |
| `-c, --config <PATH>` | Path to config file. Defaults to auto-discovered `rustqual.toml` |
| `--strict-closures` | Treat closures as logic (stricter analysis) |
| `--strict-iterators` | Treat iterator chains (`.map`, `.filter`, ...) as logic |
| `--allow-recursion` | Don't count recursive calls as violations |
| `--strict-error-propagation` | Count the `?` operator as logic (implicit control flow) |
| `--no-fail` | Do not exit with code 1 on quality findings (local exploration) |
| `--fail-on-warnings` | Treat warnings (e.g. suppression ratio exceeded) as errors (exit 1) |
| `--init` | Generate a tailored `rustqual.toml` based on current codebase metrics |
| `--completions <SHELL>` | Generate shell completions (bash, zsh, fish, elvish, powershell) |
| `--save-baseline <FILE>` | Save current results as a JSON baseline |
| `--compare <FILE>` | Compare current results against a saved baseline |
| `--fail-on-regression` | Exit with code 1 if quality score regressed vs baseline |
| `--watch` | Watch for file changes and re-analyze continuously |
| `--suggestions` | Show refactoring suggestions for IOSP violations |
| `--sort-by-effort` | Sort violations by refactoring effort score (descending) |
| `--findings` | Show only findings with `file:line` locations (one per line) |
| `--min-quality-score <SCORE>` | Exit with code 1 if quality score is below threshold (0–100) |
| `--diff [REF]` | Only analyze files changed vs a git ref (default: `HEAD`) |
| `--coverage <LCOV_FILE>` | Path to LCOV coverage file for test quality analysis (TQ-005) |
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success (no findings, or `--no-fail` set) |
| 1 | Quality findings found (default), regression detected (`--fail-on-regression`), quality gate breached (`--min-quality-score`), or warnings present with `--fail-on-warnings` |
| 2 | Configuration error (invalid or unreadable config file) |
Configuration
The analyzer auto-discovers rustqual.toml by searching from the analysis path upward through parent directories. You can also specify a config explicitly with --config. Generate a commented default config with --init.
If a rustqual.toml exists but cannot be parsed (syntax errors, unknown fields), the analyzer exits with code 2 and an error message instead of silently falling back to defaults.
Full rustqual.toml Reference
# ────────────────────────────────────────────────────────────────
# External Prefixes
# ────────────────────────────────────────────────────────────────
# Calls to these crate/module prefixes are NOT counted as "own" calls.
external_prefixes = [
"std", "core", "alloc", "log", "tracing", "anyhow", "thiserror",
"serde", "tokio", "println", "eprintln", "format", "vec", "dbg",
"todo", "unimplemented", "panic", "assert", "assert_eq", "assert_ne",
"debug_assert",
]
# ────────────────────────────────────────────────────────────────
# Ignore Functions
# ────────────────────────────────────────────────────────────────
# Functions matching these patterns are completely excluded from analysis.
# Supports full glob syntax: *, ?, [abc], [!abc]
ignore_functions = [
"main", # entry point, always mixes logic + calls
"test_*", # test functions
"visit_*", # syn::Visit trait implementations
]
# ────────────────────────────────────────────────────────────────
# Exclude Files
# ────────────────────────────────────────────────────────────────
# Glob patterns for files to exclude from analysis entirely.
exclude_files = []
# ────────────────────────────────────────────────────────────────
# Strictness
# ────────────────────────────────────────────────────────────────
strict_closures = false          # If true, closures count as logic
strict_iterators = false         # If true, iterator chains count as own calls
allow_recursion = false          # If true, recursive calls don't violate IOSP
strict_error_propagation = false # If true, ? operator counts as logic
# ────────────────────────────────────────────────────────────────
# Suppression Ratio
# ────────────────────────────────────────────────────────────────
# Maximum fraction of functions that may be suppressed (0.0–1.0).
# Exceeding this ratio produces a warning.
max_suppression_ratio = 0.05
# If true, exit with code 1 when warnings are present (e.g. suppression ratio exceeded).
# Default: false. Use --fail-on-warnings CLI flag to enable.
fail_on_warnings = false
# ────────────────────────────────────────────────────────────────
# Complexity Analysis
# ────────────────────────────────────────────────────────────────
[complexity]
= true
= 15 # Cognitive complexity threshold
= 10 # Cyclomatic complexity threshold
= 4 # Maximum nesting depth before warning
= 60 # Maximum function body lines before warning
= true # Flag numeric literals not in allowed list
= ["0", "1", "-1", "2", "0.0", "1.0"]
= true # Flag functions containing unsafe blocks
= true # Flag unwrap/expect/panic/todo usage
= false # If true, .expect() calls don't trigger warnings
# ────────────────────────────────────────────────────────────────
# Coupling Analysis
# ────────────────────────────────────────────────────────────────
[coupling]
= true
= 0.8 # Instability threshold (Ce / (Ca + Ce))
= 15 # Maximum afferent coupling
= 12 # Maximum efferent coupling
= true # Check Stable Dependencies Principle
# ────────────────────────────────────────────────────────────────
# DRY / Duplicate Detection
# ────────────────────────────────────────────────────────────────
[dry]
= true
= 50 # Minimum token count for duplicate detection
= 5 # Minimum line count
= 3 # Minimum statements for fragment detection
= 0.85 # Jaccard similarity for near-duplicates
= true # Skip test functions
= true # Enable dead code detection
= true # Flag use foo::* imports
= true # Flag repeated match blocks (DRY-005)
# ────────────────────────────────────────────────────────────────
# Boilerplate Detection
# ────────────────────────────────────────────────────────────────
[boilerplate]
= true
= true # Suggest derive macros / crates
= [ # Which patterns to check (BP-001 through BP-010)
"BP-001", "BP-002", "BP-003", "BP-004", "BP-005",
"BP-006", "BP-007", "BP-008", "BP-009", "BP-010",
]
# ────────────────────────────────────────────────────────────────
# SRP Analysis
# ────────────────────────────────────────────────────────────────
[srp]
= true
= 0.6 # Composite score threshold for warnings
= 12 # Maximum struct fields
= 15 # Maximum impl methods
= 10 # Maximum external call targets
= 5 # Maximum function parameters (AST-based)
= 3 # LCOM4 component threshold
= [0.4, 0.25, 0.15, 0.2] # [lcom4, fields, methods, fan_out]
= 300 # Production lines before penalty starts
= 800 # Production lines at maximum penalty
max_independent_clusters = 3 # Max independent function groups before warning
= 5 # Min statements for a function to count in clusters
# ────────────────────────────────────────────────────────────────
# Structural Binary Checks
# ────────────────────────────────────────────────────────────────
[structural]
= true
= true # Broken Trait Contract (SRP)
= true # Self-less Methods (SRP)
= true # Needless &mut self (SRP)
= true # Orphaned Impl (Coupling)
= true # Single-Impl Trait (Coupling)
= true # Downcast Escape Hatch (Coupling)
= true # Inconsistent Error Types (Coupling)
# ────────────────────────────────────────────────────────────────
# Test Quality Analysis
# ────────────────────────────────────────────────────────────────
[test_quality]
= true
= "" # Path to LCOV file (or use --coverage CLI flag)
# ────────────────────────────────────────────────────────────────
# Quality Weights
# ────────────────────────────────────────────────────────────────
[weights]
iosp = 0.25         # Weight for IOSP dimension
complexity = 0.20   # Weight for Complexity dimension
dry = 0.20          # Weight for DRY dimension
srp = 0.15          # Weight for SRP dimension
test_quality = 0.10 # Weight for Test Quality dimension
coupling = 0.10     # Weight for Coupling dimension
# Weights must sum to 1.0
Inline Suppression
To suppress specific findings, add a // qual:allow comment on or immediately before the function definition:
// qual:allow
// qual:allow(iosp) reason: "legacy code, scheduled for refactoring"
// qual:allow(complexity)
// qual:allow(srp)
// #[derive(Debug, Clone)]
Supported dimensions: iosp, complexity, coupling, srp, dry, test_quality.
The legacy // iosp:allow syntax is still supported as an alias for // qual:allow(iosp).
Suppressed functions appear as SUPPRESSED in the output and do not count toward findings. If more than max_suppression_ratio (default 5%) of functions are suppressed, a warning is displayed.
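For instance (hypothetical function; marker placement as described above):

```rust
// qual:allow(iosp) reason: "legacy code, scheduled for refactoring"
fn legacy_handler(x: i32) -> i32 {
    // Mixes an `if` (logic) with an own call, which the marker above suppresses.
    if x > 0 { double(x) } else { 0 }
}

fn double(x: i32) -> i32 {
    x * 2
}
```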
API Annotation
Mark public API functions with // qual:api to exclude them from dead code (DRY-003) and untested function (TQ-005) detection:
// qual:api
// qual:api
Unlike // qual:allow, API markers do not count against the suppression ratio. Use // qual:api for functions that are part of your library's public interface — they have no callers within the project because they're meant to be called by external consumers.
Inverse Annotation
Mark inverse method pairs with // qual:inverse(fn_name) to suppress near-duplicate DRY findings between them:
// qual:inverse(parse)
// qual:inverse(as_str)
Common use cases: serialize/deserialize, encode/decode, to_bytes/from_bytes. Like // qual:api, inverse markers do not count against the suppression ratio — they document intentional structural similarity.
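A sketch with a hypothetical encode/decode pair — the two bodies are structurally near-identical, which is exactly the similarity the marker documents:

```rust
// qual:inverse(decode)
fn encode(s: &str) -> Vec<u8> {
    // XOR each byte with a fixed mask (toy transformation).
    s.bytes().map(|b| b ^ 0x5A).collect()
}

// qual:inverse(encode)
fn decode(bytes: &[u8]) -> String {
    // Inverse of `encode`: XOR with the same mask.
    bytes.iter().map(|&b| (b ^ 0x5A) as char).collect()
}
```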
Lenient vs. Strict Mode
By default the analyzer runs in lenient mode. This makes it practical for idiomatic Rust code:
| Construct | Lenient (default) | `--strict-closures` | `--strict-iterators` |
|---|---|---|---|
| `items.iter().map(\|x\| x + 1)` | ignored entirely | closure logic counted | `.map()` as own call |
| `\|\| { if cond { a } }` | closure logic ignored | `if` counted as logic | — |
| `self.do_work()` in closure | call ignored | call counted as own | — |
| `x?` | not logic | — | — |
| `async { if x { } }` | ignored (like closures) | — | — |
Use --strict-error-propagation to count ? as logic.
Features
Quality Score
The overall quality score is a weighted average of six dimension scores (weights are configurable via [weights] in rustqual.toml):
| Dimension | Default Weight | Metric |
|---|---|---|
| IOSP | 25% | Compliance ratio (non-trivial functions) |
| Complexity | 20% | 1 - (complexity + magic numbers + nesting + length + unsafe + error handling) / total |
| DRY | 20% | 1 - (duplicates + fragments + dead code + boilerplate + wildcards + repeated matches) / total |
| SRP | 15% | 1 - (struct warnings + module warnings + param warnings + structural BTC/SLM/NMS) / total |
| Test Quality | 10% | 1 - (assertion density + test length + mock-heavy + assertion-free + coverage gap) / total |
| Coupling | 10% | 1 - (coupling warnings + 2×cycles + SDP violations + structural OI/SIT/DEH/IET) / total |
Quality score ranges from 0% (all findings) to 100% (no findings). Weights must sum to 1.0.
Quality Gates
By default, the analyzer exits with code 1 on any findings — no extra flags needed for CI. Use --no-fail for local exploration.
```sh
# Fail if quality score is below 90%
rustqual --min-quality-score 90

# Local exploration (never fail)
rustqual --no-fail
```
Violation Severity
Violations are categorized by severity based on the number of findings:
| Severity | Condition |
|---|---|
| Low | ≤2 total findings |
| Medium | 3–5 total findings |
| High | >5 total findings |
Severity is shown as [LOW], [MEDIUM], [HIGH] in text output and as a severity field in JSON/SARIF.
Complexity Metrics
Each analyzed function gets complexity metrics (shown with --verbose):
- cognitive_complexity: Cognitive complexity score (increments for nesting depth)
- cyclomatic_complexity: Cyclomatic complexity score (decision points + 1)
- magic_numbers: Numeric literals not in the configured allowed list
- logic_count: Number of logic occurrences (if, match, operators, etc.)
- call_count: Number of own-function calls
- max_nesting: Maximum nesting depth of control flow
- function_lines: Number of lines in the function body
- unsafe_blocks: Count of `unsafe` blocks
- unwrap/expect/panic/todo: Error handling pattern counts
Coupling Analysis
Detects module-level coupling issues:
- Afferent coupling (Ca): Modules depending on this one (fan-in)
- Efferent coupling (Ce): Modules this one depends on (fan-out)
- Instability: Ce / (Ca + Ce), ranging from 0.0 (stable) to 1.0 (unstable). Leaf modules (Ca=0) are excluded from instability warnings since I=1.0 is natural for them.
- Circular dependencies: Detected via Kosaraju's iterative SCC algorithm
- Stable Dependencies Principle (SDP): Flags when a stable module (low instability) depends on a more unstable module. This violates the principle that dependencies should flow toward stability.
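The instability formula as a sketch (illustrative helper, not rustqual's API):

```rust
// Instability I = Ce / (Ca + Ce).
// 0.0 = maximally stable (only depended upon), 1.0 = maximally unstable.
fn instability(ca: u32, ce: u32) -> f64 {
    if ca + ce == 0 {
        0.0 // isolated module: treat as stable
    } else {
        ce as f64 / (ca + ce) as f64
    }
}
```

A module with Ca=2 and Ce=6 has I = 0.75, just under the default 0.8 warning threshold.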
DRY Analysis
Detects six categories of repetition:
- Duplicate functions: Exact and near-duplicate functions (via AST normalization + Jaccard similarity)
- Duplicate fragments: Repeated statement sequences across functions (sliding window + merge)
- Dead code: Functions never called from production code, or only called from tests. Detects both direct calls and function references passed as arguments (e.g., `.for_each(some_fn)`).
- Boilerplate patterns: 10 common Rust boilerplate patterns (BP-001 through BP-010) including trivial `From`/`Display` impls, manual getters/setters, builder patterns, manual `Default`, repetitive match arms, error enum boilerplate, and clone-heavy conversions
- Wildcard imports: Flags `use foo::*` glob imports (excludes `prelude::*` paths and `use super::*` in test modules)
- Repeated match patterns (DRY-005): Detects identical `match` blocks (≥3 arms) duplicated across ≥3 instances in ≥2 functions, via AST normalization and structural hashing
SRP Analysis
Detects Single Responsibility Principle violations at three levels:
- Struct-level: LCOM4 cohesion analysis using Union-Find on method→field access graph. Composite score combines normalized LCOM4, field count, method count, and fan-out with configurable weights.
- Module-level (length): Production line counting (before `#[cfg(test)]`) with linear penalty between configurable baseline and ceiling.
- Module-level (cohesion): Detects files with too many independent function clusters. Uses Union-Find on private substantive functions, leveraging IOSP own-call data. Functions that call each other or share a common caller are united into the same cluster. A file with ≥ `max_independent_clusters` (default 3) independent groups indicates multiple responsibilities that should be split into separate modules.
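For illustration, a hypothetical struct whose method→field access graph splits into two disconnected components (LCOM4 = 2) — a hint that it should be two structs:

```rust
struct ReportBuilder {
    // Component 1: only touched by input-related methods.
    input: String,
    // Component 2: only touched by theme-related methods.
    theme: String,
}

impl ReportBuilder {
    // Uses `input` only.
    fn input_len(&self) -> usize {
        self.input.len()
    }

    // Uses `theme` only; shares no field with `input_len`.
    fn theme_header(&self) -> String {
        format!("# {}", self.theme)
    }
}
```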
Structural Binary Checks
Seven binary (pass/fail) checks for common Rust structural issues, integrated into existing dimensions:
| Rule | Name | Dimension | What it checks |
|---|---|---|---|
| BTC | Broken Trait Contract | SRP | Impl blocks missing required trait methods |
| SLM | Self-less Methods | SRP | Methods in impl blocks that don't use self (could be free functions) |
| NMS | Needless `&mut self` | SRP | Methods taking `&mut self` that only read from `self` |
| OI | Orphaned Impl | Coupling | Impl blocks in files that don't define the implemented type |
| SIT | Single-Impl Trait | Coupling | Traits with exactly one implementation (unnecessary abstraction) |
| DEH | Downcast Escape Hatch | Coupling | .downcast_ref() / .downcast_mut() / .downcast() usage (broken abstraction) |
| IET | Inconsistent Error Types | Coupling | Modules returning 3+ different error types (missing unified error type) |
Each rule can be individually toggled via [structural] config. Suppress with // qual:allow(srp) or // qual:allow(coupling) depending on the dimension.
Baseline Comparison
Track quality over time:
```sh
# Save current state as baseline
rustqual --save-baseline baseline.json

# ... make changes ...

# Compare against baseline (shows new/fixed findings, score delta)
rustqual --compare baseline.json

# Fail CI only on regression
rustqual --compare baseline.json --fail-on-regression
```
The baseline format (v2) includes quality score, all dimension counts, and total findings. V1 baselines (IOSP-only) are still supported for backward compatibility.
Refactoring Suggestions
Provides pattern-based refactoring hints for violations, such as extracting conditions, splitting dispatch logic, or converting loops to iterator chains.
Watch Mode
Monitors the filesystem for .rs file changes and re-runs analysis automatically. Useful during refactoring sessions.
Shell Completions
```sh
# Generate completions for your shell
rustqual --completions bash
```
Using with AI Coding Agents
Why AI-Generated Code Needs Structural Analysis
AI coding agents (Claude Code, Cursor, Copilot, etc.) are excellent at producing working code quickly, but they consistently exhibit structural problems that rustqual is designed to catch:
- IOSP violations: AI agents routinely generate functions that mix orchestration with logic — calling helper functions inside `if` blocks, combining validation with dispatch. These "god-functions" are hard to test and hard to maintain.
- Complexity creep: Generated functions tend to be long, deeply nested, and full of inline logic rather than composed from small, focused operations.
- Duplication: When asked to implement similar features, AI agents often copy-paste patterns rather than extracting shared abstractions, leading to DRY violations.
- Weak tests: AI-generated tests frequently lack meaningful assertions, contain overly long test functions, or rely heavily on mocks without verifying real behavior. The Test Quality dimension catches assertion-free tests, low assertion density, and coverage gaps.
IOSP is particularly valuable for AI-generated code because it enforces a strict decomposition: every function is either an Integration (orchestrates, no logic) or an Operation (logic, no own calls). This constraint forces the kind of small, testable, single-purpose functions that AI agents tend not to produce on their own.
CLAUDE.md / Cursor Rules Integration
Project-level instruction files (.claude/CLAUDE.md, .cursorrules, etc.) can teach AI agents to follow IOSP principles. Add rules like these to your project:
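A sketch of such rules (illustrative wording; adapt to your project):

```markdown
## Rust code quality (rustqual)

- Keep every function either an Integration (calls own functions, no logic)
  or an Operation (logic, no calls to own functions). Never mix both.
- After generating or editing Rust code, run `rustqual --no-fail --verbose`
  and fix any reported findings before declaring the task done.
- Do not add `// qual:allow` suppressions unless explicitly asked to.
```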
This works with any AI tool that reads project-level instruction files. The key insight is that the agent gets actionable feedback: rustqual tells it exactly which function violated which principle, so it can self-correct.
CI Quality Gate for AI-Generated Code
Add rustqual to your CI pipeline so that AI-generated PRs are automatically checked:
```yaml
name: Quality Check
on: pull_request
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo install rustqual cargo-llvm-cov
      - name: Generate coverage data
        run: cargo llvm-cov --lcov --output-path lcov.info
      - name: Check quality (changed files only)
        run: rustqual --diff HEAD~1 --coverage lcov.info --fail-on-warnings --format github
```
Key flags for AI workflows:
- `--diff HEAD~1` — only analyze files changed in the PR, not the entire codebase
- `--coverage lcov.info` — include test quality coverage analysis (TQ-005)
- `--fail-on-warnings` — treat suppression ratio violations as errors
- `--min-quality-score 90` — reject PRs that drop quality below a threshold
- `--format github` — produces inline annotations on the PR diff
See CI Integration for more workflow examples including baseline comparison.
Pre-commit Hook
Catch violations before they enter version control — especially useful when AI agents generate code locally:
```sh
#!/bin/bash
# .git/hooks/pre-commit
if ! rustqual; then
    echo "rustqual: quality findings detected; commit aborted."
    exit 1
fi
```
This gives the AI agent (or developer) immediate feedback before the code is committed. See Pre-commit Hook for the basic setup.
Recommended Workflow
The full quality loop for AI-assisted development:
- Agent instructions — CLAUDE.md / Cursor rules teach the agent IOSP principles and rustqual usage
- Pre-commit hook — catches violations locally before they enter version control
- Coverage verification — generate LCOV data with `cargo llvm-cov` and pass it via `--coverage` to detect weak or missing tests
- CI quality gate — prevents merges below quality threshold using `--min-quality-score` or `--fail-on-regression`
- Baseline tracking — `--save-baseline` and `--compare` track quality score over time, ensuring AI-generated code does not erode structural quality
Architecture
The analyzer uses a two-pass pipeline:
┌──────────────────────────────────┐
│ Pass 1: Collect │
.rs files ──read──► │ Read + Parse all files (rayon) │
│ Build ProjectScope (all names) │
│ Scan for // qual:allow markers │
└────────────────┬─────────────────┘
│
┌────────────────▼─────────────────┐
│ Pass 2: Analyze │
│ For each function: │
│ BodyVisitor walks AST │
│ → logic + call occurrences │
│ → complexity metrics │
│ → classify: I / O / V / T │
│ Coupling analysis (use-graph) │
│ DRY detection (normalize+hash) │
│ SRP analysis (LCOM4+composite) │
│ Compute quality score │
└────────────────┬─────────────────┘
│
┌────────────────▼─────────────────┐
│ Output │
│ Text / JSON / GitHub / DOT / │
│ SARIF / HTML / Suggestions / │
│ Baseline comparison │
└──────────────────────────────────┘
Source Files
~80 source files in src/, ~23,000 lines total (including tests):
src/
├── lib.rs Crate root: CLI, config, quality gates, run() (~710 lines)
├── cli.rs Clap CLI struct and argument definitions (~125 lines)
├── main.rs Thin binary wrapper (rustqual) (~5 lines)
├── bin/
│ └── cargo-qual/
│ └── main.rs Thin binary wrapper (cargo qual) (~5 lines)
├── pipeline/ Analysis orchestration (split into submodules)
│ ├── mod.rs run_analysis, output_results (~750 lines)
│ ├── discovery.rs File collection, parsing, git diff (~245 lines)
│ ├── metrics.rs Coupling + SRP + DRY computation (~400 lines)
│ └── warnings.rs Complexity/ext warnings, suppression ratio (~385 lines)
├── analyzer/
│ ├── mod.rs Core analysis engine, Analyzer struct (~908 lines)
│ ├── types.rs Classification, FunctionAnalysis, metrics (~290 lines)
│ ├── visitor/ BodyVisitor (AST walking, trivial match)
│ │ ├── mod.rs Struct, helpers, is_trivial_match_arm (~630 lines)
│ │ └── visit.rs Visit trait implementation (~290 lines)
│ └── classify.rs classify_function (3-tuple w/ own_calls) (~330 lines)
├── config/
│ ├── mod.rs Config loading, glob compilation (~475 lines)
│ ├── init.rs Tailored config generation, ProjectMetrics (~410 lines)
│ └── sections.rs Sub-configs, DEFAULT_* constants, WeightsConfig (~380 lines)
├── dry/
│ ├── mod.rs FileVisitor trait, function collectors (~600 lines)
│ ├── functions.rs Duplicate function detection (~433 lines)
│ ├── fragments.rs Fragment-level duplicate detection (~809 lines)
│ ├── dead_code.rs Dead code detection (~470 lines)
│ ├── wildcards.rs Wildcard import detection (~265 lines)
│ └── boilerplate/ Boilerplate pattern detection (BP-001–010)
│ ├── mod.rs Types, helpers, detect_boilerplate() (~140 lines)
│ └── ... 10 per-pattern files (BP-001–BP-010)
├── report/
│ ├── mod.rs AnalysisResult, Summary, quality score (~620 lines)
│ ├── text/ Text format output (split into submodules)
│ │ ├── mod.rs print_report, file/function entries (~300 lines)
│ │ ├── summary.rs Summary section printers (~125 lines)
│ │ ├── dry.rs DRY section printer (~100 lines)
│ │ ├── coupling.rs Coupling section printer (~200 lines)
│ │ └── srp.rs SRP section printer (~80 lines)
│ ├── json.rs JSON format output (~450 lines)
│ ├── json_types.rs Serializable JSON struct definitions (~200 lines)
│ ├── github.rs GitHub Actions annotations (~375 lines)
│ ├── dot.rs DOT/Graphviz output (~155 lines)
│ ├── sarif/ SARIF v2.1.0 output (split into submodule)
│ │ ├── mod.rs print_sarif Integration + envelope (~455 lines)
│ │ └── collectors.rs collect_*_findings() Operations (~300 lines)
│ ├── html/ Self-contained HTML report (split into submodules)
│ │ ├── mod.rs print_html, build_html_string, dashboard (~240 lines)
│ │ ├── sections.rs IOSP, complexity, coupling sections (~240 lines)
│ │ └── tables.rs DRY + SRP sections, generic table builder (~300 lines)
│ ├── suggestions.rs Refactoring suggestions (~192 lines)
│ └── baseline.rs Baseline v2 save/compare (~456 lines)
├── srp/
│ ├── mod.rs SRP types, visitors, constructor detection (~535 lines)
│ ├── cohesion.rs LCOM4 (with constructor support), composite (~420 lines)
│ └── module.rs Production lines, function cohesion clusters (~580 lines)
├── coupling/ Module coupling analysis (split into submodules)
│ ├── mod.rs Types, analyze_coupling Integration, tests (~430 lines)
│ ├── graph.rs build_module_graph (use-tree walking) (~100 lines)
│ ├── metrics.rs compute_coupling_metrics (Ca/Ce/I) (~45 lines)
│ ├── cycles.rs detect_cycles (Kosaraju iterative SCC) (~80 lines)
│ └── sdp.rs Stable Dependencies Principle check (~210 lines)
├── normalize.rs AST normalization for DRY (~784 lines)
├── findings.rs Dimension enum, suppression parsing (~240 lines)
├── scope.rs ProjectScope, two-pass name resolution (~264 lines)
└── watch.rs File watcher for --watch mode (~126 lines)
How Classification Works
- Trivial check: Empty bodies are immediately Trivial. Single-statement bodies are analyzed — only classified as Trivial if they contain neither logic nor own calls.
- AST walking: `BodyVisitor` implements `syn::visit::Visit` to walk the function body, recording:
  - Logic: `if`, `match`, `for`, `while`, `loop`, binary operators (`+`, `&&`, `>`, etc.), optionally the `?` operator
  - Own calls: function/method calls that match names defined in the project (via `ProjectScope`)
  - Nesting depth: tracks control-flow nesting for complexity metrics
- Classification:
  - Logic only → Operation
  - Own calls only → Integration
  - Both → Violation (with severity based on finding count)
  - Neither → Trivial
- Recursion exception: If `allow_recursion` is enabled and the only own call is to the function itself, it's classified as Operation instead of Violation.
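The classification decision can be sketched as a truth table over the two recorded counts (illustrative, not rustqual's internal types):

```rust
#[derive(Debug, PartialEq)]
enum Class {
    Integration,
    Operation,
    Violation,
    Trivial,
}

// Classify a function from its recorded logic and own-call counts.
fn classify(logic_count: usize, own_call_count: usize) -> Class {
    match (logic_count > 0, own_call_count > 0) {
        (true, true) => Class::Violation,     // mixes logic and own calls
        (true, false) => Class::Operation,    // logic only
        (false, true) => Class::Integration,  // orchestration only
        (false, false) => Class::Trivial,     // neither
    }
}
```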
ProjectScope: Solving the Method Call Problem
Without type information, the analyzer cannot distinguish self.push(x) (Vec method, external) from self.analyze(x) (own method). The ProjectScope solves this with a two-pass approach:
- First pass: Scan all `.rs` files and collect every declared function, method, struct, enum, and trait name.
- Second pass: During analysis, a call is only counted as "own" if the name exists in the project scope.
This means v.push(1) is never counted as own (since push is not defined in your project), while self.analyze_file(f) is (because analyze_file is defined in your project).
Universal methods (~26 entries like new, default, fmt, clone, eq, ...) are always treated as external, even if your project implements them via trait impls. This prevents false positives from standard trait implementations.
IOSP Score
IOSP Score = (Integrations + Operations) / (Integrations + Operations + Violations) × 100%
Trivial and suppressed functions are excluded because they are too small or explicitly allowed. For example, 4 Integrations, 8 Operations, and 2 Violations give 12 / 14 ≈ 85.7%.
CI Integration
GitHub Actions
```yaml
name: Quality Check
on: pull_request
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - name: Install rustqual
        run: cargo install --path .
      - name: Check code quality
        run: rustqual src/ --min-quality-score 90 --format github
```
GitHub Actions with Baseline
```yaml
- name: Check quality regression
  run: |
    rustqual src/ --compare baseline.json --fail-on-regression --format github
```
Generic CI (JSON)
```yaml
- name: Quality Check
  run: |
    cargo run --release -- src/ --json > quality-report.json
    cat quality-report.json
```
Pre-commit Hook
```sh
#!/bin/bash
# .git/hooks/pre-commit
if ! rustqual; then
    echo "rustqual: quality findings detected; commit aborted."
    exit 1
fi
```
How to Fix Violations
When a function is flagged as a violation, refactor by splitting it into pure integrations and operations:
Before (violation): a single function that mixes control flow with calls to its own helpers.
After (IOSP compliant): an Integration that orchestrates with no logic, Operations that hold the logic with no own calls, and an Integration that delegates to a transform or a default.
Common refactoring patterns:
| Pattern | Approach |
|---|---|
| `if` + call in branch | Extract the condition into an Operation, use `.then()` or pass the result to an Integration |
| `for` loop with calls | Use iterator chains (`.iter().map(\|x\| process(x)).collect()`) — closures are lenient |
| Match + calls | Extract match logic into an Operation that returns an enum/value, dispatch in Integration |
Use --suggestions to get automated refactoring hints.
Self-Compliance
rustqual analyzes itself with zero findings.
This is verified by the integration test suite and CI.
Testing
```sh
RUSTFLAGS="-Dwarnings" cargo test
```
The test suite covers:
- analyzer/ (tests across 4 modules): classification, closures, iterators, scope integration, recursion, `?` operator, async/await, severity, complexity metrics, suppression
- config/ (tests across 2 modules): external call matching, ignore patterns, config loading, validation, glob compilation, default generation
- report/ (tests across 8 modules): summary statistics, JSON structure, suppression counting, baseline roundtrip, complexity, HTML generation, SARIF structure, GitHub annotations
- dry/ (tests across 5 modules): duplicate detection, fragment detection, dead code detection, boilerplate patterns, normalization
- srp/ (tests across 3 modules): LCOM4 computation, composite scoring, module line counting, function cohesion clusters (shared-caller unification)
- pipeline (25+ tests): file collection, suppression lines, coupling suppression, SRP suppression, suppression ratio
- scope (16 tests): scope collection, `is_own_function`, `is_own_method`
- integration (4 tests): self-analysis, sample expectations, JSON validity, verbose output
- showcase (3 tests): before/after IOSP refactoring examples
Known Limitations
- Syntactic analysis only: Uses `syn` for AST parsing without type resolution. Cannot determine the receiver type of method calls — relies on `ProjectScope` heuristics and `external_prefixes` config as fallbacks.
- Macros: Macro invocations are not expanded. `println!` etc. are handled as special cases via `external_prefixes`, but custom macros producing logic or calls may be misclassified.
- External file modules: `mod foo;` declarations pointing to separate files are not followed. Only inline modules (`mod foo { ... }`) are analyzed recursively.
- Parallelization: The analysis pass is sequential because `proc_macro2::Span` (with `span-locations` enabled for line numbers) is not `Sync`. File I/O is parallelized via `rayon`.
Dependencies
| Crate | Purpose |
|---|---|
| `syn` | Rust AST parsing (with `full`, `visit` features) |
| `proc-macro2` | Span locations for line numbers |
| `quote` | Token stream formatting (generic type display) |
| `derive_more` | `Display` derive for analysis types |
| `clap` | CLI argument parsing |
| `clap_complete` | Shell completion generation |
| `walkdir` | Recursive directory traversal |
| `colored` | Terminal color output |
| `serde` | Config deserialization |
| `toml` | TOML config file parsing |
| `serde_json` | JSON output serialization |
| `globset` | Glob pattern matching for ignore/exclude |
| `rayon` | Parallel file I/O |
| `notify` | File system watching for `--watch` mode |
License
MIT