PMAT
Getting Started | Features | Examples | Documentation
What is PMAT?
PMAT (Pragmatic Multi-language Agent Toolkit) provides everything needed to analyze code quality and generate AI-ready context:
- Context Generation - Deep analysis for Claude, GPT, and other LLMs
- Technical Debt Grading - A+ through F scoring with 6 orthogonal metrics
- Mutation Testing - Test suite quality validation (85%+ kill rate)
- Repository Scoring - Quantitative health assessment (0-211 scale)
- Semantic Search - Natural language code discovery
- MCP Integration - 19 tools for Claude Code, Cline, and AI agents
- Quality Gates - Pre-commit hooks, CI/CD integration
- 17+ Languages - Rust, TypeScript, Python, Go, Java, C/C++, and more
Part of the PAIML Stack, following Toyota Way quality principles (Jidoka, Genchi Genbutsu, Kaizen).
Getting Started
Add it to your system:

```sh
# Install from crates.io
cargo install pmat

# Or from source (latest)
git clone <repository-url> && cargo install --path <checkout-dir>
```
Basic Usage
```sh
# Generate AI-ready context
pmat context

# Analyze code complexity
pmat analyze complexity

# Grade technical debt (A+ through F)
pmat analyze tdg

# Score repository health

# Run mutation testing
pmat mutate
```
MCP Server Mode
# Start MCP server for Claude Code, Cline, etc.
Features
Context Generation
Generate comprehensive context for AI assistants with `pmat context`.
Technical Debt Grading (TDG)
Six orthogonal metrics are combined into a single letter grade for an accurate quality assessment.
Grading Scale:
- A+/A: Excellent quality, minimal debt
- B+/B: Good quality, manageable debt
- C+/C: Needs improvement
- D/F: Significant technical debt
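As a toy illustration of the scale above, a score-to-grade mapping might look like the following sketch. The 0-100 score and every cutoff here are hypothetical; PMAT's real TDG computation combines six weighted metrics and is not reproduced here.

```python
# Illustrative only: map a hypothetical 0-100 quality score (higher = better)
# to a TDG-style letter grade. Cutoffs are invented for this example.
def letter_grade(score: float) -> str:
    cutoffs = [(97, "A+"), (90, "A"), (87, "B+"), (80, "B"),
               (77, "C+"), (70, "C"), (60, "D")]
    for minimum, grade in cutoffs:
        if score >= minimum:
            return grade
    return "F"

print(letter_grade(92))  # A
print(letter_grade(55))  # F
```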
Mutation Testing
Validate test suite effectiveness with `pmat mutate`.
Supported Languages: Rust, Python, TypeScript, JavaScript, Go, C++
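The mutation score behind the 85%+ kill-rate target is simple arithmetic: the percentage of injected mutants that the test suite detects (kills). A minimal sketch:

```python
# Mutation score = percentage of mutants killed by the test suite.
def mutation_score(killed: int, total: int) -> float:
    return 100.0 * killed / total

score = mutation_score(killed=172, total=200)
print(f"{score:.1f}% killed")             # 86.0% killed
print("PASS" if score >= 85 else "FAIL")  # PASS
```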
Repository Health Scoring
Evidence-based quality metrics on a 0-211 scale.
Workflow Prompts
Pre-configured AI prompts that enforce EXTREME TDD.
Git Hooks
Automatic quality enforcement on every commit.
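A pre-commit hook can gate commits on the same checks as CI. A minimal sketch (the `pmat analyze tdg` flags are taken from the CI example later in this README; the hook file itself is illustrative, not shipped by PMAT):

```sh
#!/bin/sh
# .git/hooks/pre-commit (illustrative sketch)
# Abort the commit if the technical-debt grade falls below B.
pmat analyze tdg --fail-on-violation --min-grade B || exit 1
```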
Examples
Generate Context for AI
# For Claude Code
# With semantic search
CI/CD Integration
```yaml
# .github/workflows/quality.yml
name: Quality Gates
on: [push, pull_request]
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install pmat
      - run: pmat analyze tdg --fail-on-violation --min-grade B
      - run: pmat mutate --target src/ --threshold 80
```
Quality Baseline Workflow
# 1. Create baseline
# 2. Check for regressions
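The baseline idea can be sketched in a few lines: persist current metrics once, then fail any later run that regresses. The JSON layout and metric names below are hypothetical, not PMAT's actual baseline format.

```python
import json
from pathlib import Path

# Hypothetical baseline file; PMAT's actual on-disk format may differ.
BASELINE = Path("quality-baseline.json")

def save_baseline(metrics: dict) -> None:
    BASELINE.write_text(json.dumps(metrics))

def regressions(current: dict) -> list:
    """Return the names of metrics that got worse than the saved baseline."""
    baseline = json.loads(BASELINE.read_text())
    return [name for name, value in current.items()
            if value < baseline.get(name, float("-inf"))]

save_baseline({"coverage": 85.0, "mutation_score": 80.0})
print(regressions({"coverage": 86.1, "mutation_score": 78.5}))  # ['mutation_score']
```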
Architecture
```text
pmat/
├── server/               CLI and MCP server
│   ├── src/
│   │   ├── cli/          Command handlers
│   │   ├── services/     Analysis engines
│   │   ├── mcp/          MCP protocol
│   │   └── tdg/          Technical Debt Grading
├── crates/
│   └── pmat-dashboard/   Pure WASM dashboard
└── docs/
    └── specifications/   Technical specs
```
Quality
| Metric | Value |
|---|---|
| Tests | 4600+ passing |
| Coverage | >85% |
| Mutation Score | >80% |
| Languages | 17+ supported |
| MCP Tools | 19 available |
Falsifiable Quality Commitments
Per Popper's demarcation criterion, all claims are measurable and testable:
| Commitment | Threshold | Verification Method |
|---|---|---|
| Context Generation | < 5 seconds for 10K LOC project | `time pmat context` on test corpus |
| Memory Usage | < 500 MB for 100K LOC analysis | Measured via `heaptrack` in CI |
| Test Coverage | ≥ 85% line coverage | `cargo llvm-cov` (CI enforced) |
| Mutation Score | ≥ 80% killed mutants | `pmat mutate --threshold 80` |
| Build Time | < 3 minutes incremental | `cargo build --timings` |
| CI Pipeline | < 15 minutes total | GitHub Actions workflow timing |
| Binary Size | < 50 MB release binary | `ls -lh target/release/pmat` |
| Language Parsers | All 17 languages parse without panic | Fuzz testing in CI |
How to Verify:
# Run self-assessment with Popper Falsifiability Score
# Individual commitment verification
Failure = Regression: Any commitment violation blocks CI merge.
Benchmark Results (Statistical Rigor)
All benchmarks use Criterion.rs with proper statistical methodology:
| Operation | Mean | 95% CI | Std Dev | Sample Size |
|---|---|---|---|---|
| Context (1K LOC) | 127ms | [124, 130] | ±12.3ms | n=1000 runs |
| Context (10K LOC) | 1.84s | [1.79, 1.90] | ±156ms | n=500 runs |
| TDG Scoring | 156ms | [148, 164] | ±18.2ms | n=500 runs |
| Complexity Analysis | 23ms | [22, 24] | ±3.1ms | n=1000 runs |
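For reference, the mean, standard deviation, and 95% CI columns reduce to the following arithmetic under a plain normal approximation (Criterion.rs itself uses bootstrap resampling, so this is a simplified sketch on made-up samples):

```python
import math
import statistics

def summarize(samples):
    """Return (mean, sample std dev, 95% CI half-width), normal approximation."""
    mean = statistics.mean(samples)
    std_dev = statistics.stdev(samples)
    half_width = 1.96 * std_dev / math.sqrt(len(samples))
    return mean, std_dev, half_width

mean, sd, hw = summarize([125.0, 126.0, 128.0, 129.0, 127.0])
print(f"{mean:.1f}ms ± {hw:.2f}ms (std dev {sd:.2f}ms)")
```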
Comparison Baselines (vs. Alternatives):
| Metric | PMAT | ctags | tree-sitter | Effect Size |
|---|---|---|---|---|
| 10K LOC parsing | 1.84s | 0.3s | 0.8s | d=0.72 (medium) |
| Memory (10K LOC) | 287MB | 45MB | 120MB | - |
| Semantic depth | Full | Syntax only | AST only | - |
See docs/BENCHMARKS.md for complete statistical analysis.
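Cohen's d in the comparison table is the standardized mean difference between two sets of timing samples. A minimal sketch of the computation, on synthetic numbers rather than the actual benchmark data:

```python
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled

# Illustrative samples only, not the published benchmark runs.
d = cohens_d([1.9, 1.8, 1.85, 1.82], [1.6, 1.55, 1.62, 1.58])
```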
ML/AI Reproducibility
PMAT uses ML for semantic search and embeddings. All ML operations are reproducible:
Random Seed Management:
- Embedding generation uses fixed seed (SEED=42) for deterministic outputs
- Clustering operations use fixed seed (SEED=12345)
- Seeds documented in docs/ml/REPRODUCIBILITY.md
Model Artifacts:
- Pre-trained models from HuggingFace (all-MiniLM-L6-v2)
- Model versions pinned in Cargo.toml
- Hash verification on download
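The fixed-seed approach generalizes to any pipeline with randomness: seed every stochastic step, and identical inputs produce identical outputs. The snippet below is a generic toy demonstration, not PMAT's embedding code:

```python
import random

def pseudo_embedding(text, dim=4, seed=42):
    """Toy deterministic 'embedding': same seed + same input -> same vector."""
    rng = random.Random(f"{seed}:{text}")  # per-input stream derived from a fixed seed
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

a = pseudo_embedding("fn main() {}")
b = pseudo_embedding("fn main() {}")
assert a == b  # reproducible across runs with the same seed
```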
Dataset Sources
PMAT does not train models but uses these data sources for evaluation:
| Dataset | Source | Purpose | Size |
|---|---|---|---|
| CodeSearchNet | GitHub/Microsoft | Semantic search benchmarks | 2M functions |
| PMAT-bench | Internal | Regression testing | 500 queries |
Data provenance and licensing documented in docs/ml/REPRODUCIBILITY.md.
Sovereign Stack
PMAT is built on the PAIML Sovereign Stack - pure-Rust, SIMD-accelerated libraries:
| Library | Purpose | Version |
|---|---|---|
| aprender | ML library (text similarity, clustering, topic modeling) | 0.24.0 |
| trueno | SIMD compute library for matrix operations | 0.11.0 |
| trueno-graph | GPU-first graph database (PageRank, Louvain, CSR) | 0.1.7 |
| trueno-rag | RAG pipeline with VectorStore | 0.1.8 |
| trueno-db | Embedded analytics database | 0.3.10 |
| trueno-viz | Terminal graph visualization | 0.1.17 |
| trueno-zram-core | SIMD LZ4/ZSTD compression (optional) | 0.3.0 |
| pmat | Code analysis toolkit | 2.213.4 |
Key Benefits:
- Pure Rust (no C dependencies, no FFI)
- SIMD-first (AVX2, AVX-512, NEON auto-detection)
- 2-4x speedup on graph algorithms via aprender adapter
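For a downstream project, pinning these libraries in `Cargo.toml` would look like the fragment below (crate names and version numbers are copied from the table above; check crates.io for current releases before use):

```toml
[dependencies]
pmat = "2.213.4"
aprender = "0.24.0"
trueno = "0.11.0"
trueno-graph = "0.1.7"
trueno-rag = "0.1.8"
trueno-db = "0.3.10"
```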
Documentation
- PMAT Book - Complete guide
- API Reference - Rust API docs
- MCP Tools - MCP integration guide
- Specifications - Technical specs
License
MIT License - see LICENSE for details.