# PMAT Unified Quality System - Production Deployment Guide

## Executive Summary

This guide provides comprehensive instructions for deploying the PMAT Unified Quality Enforcement System in production environments. The system takes a dual-track approach: it delivers immediate value through monitoring and enforcement today while progressively building toward fully automated quality management.

**Key Features:**
- 🔄 **Real-time Quality Monitoring**: 5-10ms file analysis latency using incremental AST parsing
- ⚖️ **SRE-style Error Budgets**: Flexible enforcement with team-specific quality budgets
- 🧠 **Intelligent Assistance**: Context-aware refactoring suggestions with confidence scoring  
- 🤖 **Conservative Automation**: Safe transformations with Git rollback protection
- 📊 **Comprehensive Observability**: Prometheus metrics, Grafana dashboards, GitHub Actions integration

## Prerequisites

### System Requirements
- **OS**: Linux, macOS, or Windows with WSL2
- **Memory**: Minimum 4GB RAM, recommended 8GB for large codebases  
- **Storage**: 2GB free space for caches and logs
- **Network**: Outbound HTTPS for GitHub API, Prometheus, and dependency updates

### Software Dependencies
- **Rust**: 1.70.0 or later (`rustup install stable`)
- **Git**: 2.30.0 or later for safety net features
- **Optional**: Docker and docker-compose for containerized deployment

### Access Requirements
- **GitHub**: Personal access token with repo and workflow permissions
- **Prometheus**: Access to metrics collection endpoint (if using external Prometheus)
- **Team Permissions**: Admin access to repositories for webhook and Actions setup

## Phase 1: Foundation Setup (Week 1)

### 1.1 Installation

#### Option A: Direct Installation (Recommended)
```bash
# Install the latest stable version (omitting --version installs the newest release)
cargo install pmat --features unified-quality

# Verify installation
pmat --version
pmat unified-quality --help
```

#### Option B: Docker Deployment
```bash
# Clone the repository
git clone https://github.com/paiml/paiml-mcp-agent-toolkit
cd paiml-mcp-agent-toolkit

# Build Docker image
docker build -t pmat-unified-quality .

# Run container with mounted project
docker run -v $(pwd):/workspace pmat-unified-quality
```

### 1.2 Initial Configuration

Create the configuration file `pmat-quality.toml`:

```toml
[foundation]
# Real-time monitoring configuration
update_interval = "5s"
complexity_threshold = 20
watch_patterns = [
    "**/*.rs",
    "**/*.py", 
    "**/*.js",
    "**/*.ts"
]
debounce_interval = "500ms"
max_batch_size = 50
incremental_parsing = true
cache_ast = true

[enforcement]
# Error budget enforcement
enabled = true
default_complexity_budget = 100
default_satd_budget = 10
default_coverage_floor = 0.8
regeneration_rate = 10.0  # Points per day
grace_period_days = 14

[intelligence]
# AI-powered assistance
enabled = true
confidence_threshold = 0.75
max_suggestions = 5
include_examples = true
learning_mode = true

[automation] 
# Conservative automation
enabled = false  # Start disabled, enable after team training
safety_threshold = 0.95
git_safety_net = true
rollback_timeout = "10m"
max_automated_changes = 5

[performance]
# Performance monitoring
continuous_monitoring = true
benchmark_interval = "1h"
max_analysis_time_ms = 5000
max_memory_mb = 1024
regression_threshold_percent = 20.0

[observability]
# Metrics and monitoring
prometheus_enabled = true
prometheus_port = 9090
dashboard_enabled = true 
dashboard_port = 8080
github_actions_enabled = false  # Configure later
```
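The `[enforcement]` section above treats quality as an SRE-style error budget: violations consume points, and the budget regenerates at `regeneration_rate` points per day. The following sketch illustrates that arithmetic; the class and method names are illustrative, not PMAT's internal API:

```python
from dataclasses import dataclass

@dataclass
class ErrorBudget:
    """Illustrative model of a team's complexity error budget."""
    capacity: float        # e.g. default_complexity_budget = 100
    remaining: float
    regen_per_day: float   # e.g. regeneration_rate = 10.0

    def consume(self, points: float) -> bool:
        """Spend budget on a violation; return False if it would exhaust the budget."""
        if points > self.remaining:
            return False   # the gate should block the change
        self.remaining -= points
        return True

    def regenerate(self, days: float) -> None:
        """Budget recovers over time, capped at full capacity."""
        self.remaining = min(self.capacity,
                             self.remaining + self.regen_per_day * days)

budget = ErrorBudget(capacity=100, remaining=100, regen_per_day=10.0)
budget.consume(35)        # a complexity violation costs 35 points
budget.regenerate(2)      # two quiet days restore 20 points
print(budget.remaining)   # 85.0
```

This is why a team can absorb occasional violations without being blocked, as long as its average violation rate stays below the regeneration rate.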

### 1.3 Team Registration

Register your teams with appropriate error budgets:

```bash
# Register development team
pmat unified-quality team register \
    --team-id "dev-team" \
    --complexity-budget 150 \
    --satd-budget 15 \
    --coverage-floor 0.75

# Register critical systems team  
pmat unified-quality team register \
    --team-id "platform-team" \
    --complexity-budget 75 \
    --satd-budget 5 \
    --coverage-floor 0.90

# Register new team (higher budget during ramp-up)
pmat unified-quality team register \
    --team-id "new-team" \
    --complexity-budget 200 \
    --satd-budget 25 \
    --coverage-floor 0.65 \
    --grace-period-days 30
```

### 1.4 Initial Monitoring Setup

Start monitoring your primary codebase:

```bash
# Initialize monitoring for main repository
cd /path/to/your/project
pmat unified-quality init

# Start monitoring daemon
pmat unified-quality monitor start \
    --config pmat-quality.toml \
    --daemon \
    --log-level info

# Verify monitoring is working
pmat unified-quality status
pmat unified-quality metrics summary
```

## Phase 2: Quality Intelligence (Week 2-3)

### 2.1 Enable Intelligent Assistance

```bash
# Enable intelligence features
pmat unified-quality intelligence enable \
    --confidence-threshold 0.8 \
    --learning-mode true

# Analyze current codebase for improvement opportunities
pmat unified-quality analyze hotspots \
    --top-files 20 \
    --output-format detailed

# Generate initial refactoring recommendations
pmat unified-quality suggest refactoring \
    --target-complexity 15 \
    --min-confidence 0.75
```
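The `--min-confidence` and `max_suggestions` settings interact as a simple filter-then-truncate pipeline: suggestions below the threshold are dropped, and the best of the rest are kept. A sketch of that selection logic (the suggestion records here are invented for illustration):

```python
def select_suggestions(suggestions, min_confidence=0.75, max_count=5):
    """Keep only high-confidence suggestions, best first."""
    eligible = [s for s in suggestions if s["confidence"] >= min_confidence]
    eligible.sort(key=lambda s: s["confidence"], reverse=True)
    return eligible[:max_count]

candidates = [
    {"file": "parser.rs", "action": "extract function", "confidence": 0.91},
    {"file": "lexer.rs", "action": "flatten nesting", "confidence": 0.62},
    {"file": "ast.rs", "action": "split module", "confidence": 0.80},
]
for s in select_suggestions(candidates):
    print(f'{s["file"]}: {s["action"]} ({s["confidence"]:.2f})')
```

Raising the threshold trades recall for precision: fewer suggestions, but each one is more likely to be worth acting on.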

### 2.2 Set Up Web Dashboard

The dashboard provides real-time visibility into quality metrics:

```bash
# Start dashboard (runs on port 8080 by default)
pmat unified-quality dashboard start \
    --port 8080 \
    --update-interval 5s

# Access at http://localhost:8080
# Features available:
# - Real-time quality metrics
# - File-level analysis
# - Team budget status  
# - Refactoring suggestions
# - Performance insights
```

### 2.3 Configure Team Onboarding

Set up progressive onboarding for team adoption:

```bash
# Start onboarding for a team
pmat unified-quality onboarding start \
    --team-id "dev-team" \
    --learning-style practical \
    --experience-level intermediate

# Track onboarding progress
pmat unified-quality onboarding progress \
    --team-id "dev-team"

# Generate onboarding report
pmat unified-quality onboarding report \
    --team-id "dev-team" \
    --format markdown \
    --output team-progress.md
```

## Phase 3: Enforcement Integration (Week 3-4)

### 3.1 Pre-commit Hook Installation

Install quality gates in your Git workflow:

```bash
# Install pre-commit hook
pmat unified-quality hooks install \
    --pre-commit \
    --complexity-threshold 20 \
    --block-on-violation

# The hook will run:
# 1. Complexity analysis on changed files
# 2. SATD detection and limits
# 3. Error budget checks
# 4. Coverage impact analysis
```
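The checks the hook runs reduce to a single pass/block decision per commit. A hypothetical sketch of that gate logic, assuming per-file metrics have already been computed (the result structure is invented for illustration; the thresholds mirror the flags above):

```python
def quality_gate(changed_files, complexity_threshold=20, satd_budget=10):
    """Return (passed, reasons) for a set of analyzed files.

    changed_files maps a path to its measured complexity and SATD count.
    """
    reasons = []
    total_satd = 0
    for path, metrics in changed_files.items():
        if metrics["complexity"] > complexity_threshold:
            reasons.append(
                f"{path}: complexity {metrics['complexity']} > {complexity_threshold}")
        total_satd += metrics["satd"]
    if total_satd > satd_budget:
        reasons.append(f"SATD count {total_satd} exceeds budget {satd_budget}")
    return (len(reasons) == 0, reasons)

ok, why = quality_gate({
    "src/parser.rs": {"complexity": 24, "satd": 1},
    "src/lexer.rs": {"complexity": 12, "satd": 0},
})
print(ok)   # False: parser.rs exceeds the complexity threshold
```

With `--block-on-violation`, a `False` result corresponds to the hook exiting non-zero and aborting the commit.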

### 3.2 GitHub Actions Integration

Create `.github/workflows/quality-gate.yml`:

```yaml
name: Quality Gate
on:
  pull_request:
    branches: [main, master]
  push:
    branches: [main, master]

jobs:
  quality-gate:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      checks: write
    
    steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 0
    
    - name: Setup Rust
      uses: dtolnay/rust-toolchain@stable
    
    - name: Install PMAT
      run: cargo install pmat --features unified-quality
    
    - name: Run Quality Analysis
      id: quality
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      run: |
        pmat unified-quality analyze pull-request \
          --pr-number ${{ github.event.number }} \
          --base-ref ${{ github.base_ref }} \
          --head-ref ${{ github.head_ref }} \
          --output-format github-actions
    
    - name: Update PR Comment
      if: github.event_name == 'pull_request'
      uses: actions/github-script@v7
      with:
        script: |
          const fs = require('fs');
          const comment = fs.readFileSync('quality-report.md', 'utf8');
          
          // Find existing comment
          const comments = await github.rest.issues.listComments({
            owner: context.repo.owner,
            repo: context.repo.repo,
            issue_number: context.issue.number,
          });
          
          const existingComment = comments.data.find(
            (c) => c.body.includes('📊 Code Quality Report')
          );
          
          if (existingComment) {
            // Update existing comment
            await github.rest.issues.updateComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              comment_id: existingComment.id,
              body: comment,
            });
          } else {
            // Create new comment
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: comment,
            });
          }
```

### 3.3 Configure Error Budget Policies

Set up team-specific quality enforcement policies:

```bash
# Configure strict enforcement for critical components
pmat unified-quality policy create \
    --name "critical-systems" \
    --max-complexity 10 \
    --max-satd 0 \
    --min-coverage 0.95 \
    --block-on-violation \
    --applies-to "src/core/**" "src/security/**"

# Configure relaxed policy for experimental code
pmat unified-quality policy create \
    --name "experimental" \
    --max-complexity 30 \
    --max-satd 20 \
    --min-coverage 0.60 \
    --warn-only \
    --applies-to "experiments/**" "prototypes/**"
```
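Since a file can fall under several `--applies-to` globs, policy resolution is effectively a first-match lookup over an ordered policy list. A sketch using Python's `fnmatch` to show how path-to-policy resolution might work; the first-match-wins strategy is an assumption, and note that `fnmatch`'s `*` crosses directory separators, which approximates `**` closely enough for this sketch:

```python
from fnmatch import fnmatch

POLICIES = [
    ("critical-systems", ["src/core/**", "src/security/**"], {"max_complexity": 10}),
    ("experimental", ["experiments/**", "prototypes/**"], {"max_complexity": 30}),
]
DEFAULT = ("default", {"max_complexity": 20})

def policy_for(path):
    """Return the first policy whose glob matches the path, else the default."""
    for name, globs, limits in POLICIES:
        if any(fnmatch(path, g) for g in globs):
            return name, limits
    return DEFAULT

print(policy_for("src/core/auth.rs")[0])      # critical-systems
print(policy_for("src/utils/helpers.rs")[0])  # default
```

Ordering matters: putting the strictest policies first ensures a file under both a strict and a relaxed glob gets the strict limits.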

## Phase 4: Automation & Optimization (Week 4+)

### 4.1 Enable Conservative Automation

**⚠️ IMPORTANT**: Only enable after team training and testing in non-critical branches.

```bash
# Enable automation with safety limits
pmat unified-quality automation enable \
    --safety-threshold 0.95 \
    --max-changes-per-day 10 \
    --git-safety-net \
    --rollback-timeout 30m \
    --dry-run  # Test first

# After testing, enable for real
pmat unified-quality automation enable \
    --production-mode \
    --notify-changes \
    --require-approval-above-threshold 0.85
```
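One plausible reading of the two thresholds above is that they partition proposed transformations into three buckets: auto-apply at or above the safety threshold, human approval in between, and suggestion-only below. A sketch of that triage (the bucket names are illustrative):

```python
def triage(confidence, auto_threshold=0.95, approval_threshold=0.85):
    """Decide how a proposed automated transformation is handled."""
    if confidence >= auto_threshold:
        return "auto-apply"      # applied directly, Git safety net armed
    if confidence >= approval_threshold:
        return "needs-approval"  # queued for human review
    return "suggest-only"        # surfaced as a suggestion, never applied

for c in (0.97, 0.90, 0.70):
    print(c, triage(c))
```

Tightening `auto_threshold` shrinks the auto-apply bucket, which is the safest lever while the team is still building trust in the automation.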

### 4.2 Performance Optimization

Configure performance monitoring and optimization:

```bash
# Establish performance baseline
pmat unified-quality performance baseline \
    --name "production-v1" \
    --full-analysis

# Enable continuous performance monitoring
pmat unified-quality performance monitor \
    --continuous \
    --alert-on-regression 25% \
    --optimize-automatically false

# Run performance benchmarks
pmat unified-quality performance benchmark \
    --suite comprehensive \
    --output-format prometheus
```
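Regression alerting compares current timings against the stored baseline and fires when the slowdown exceeds the configured percentage. A sketch of that comparison (the benchmark names and numbers are invented for illustration):

```python
def regressions(baseline_ms, current_ms, threshold_pct=25.0):
    """Return benchmarks that slowed down by more than threshold_pct."""
    flagged = {}
    for name, base in baseline_ms.items():
        cur = current_ms.get(name)
        if cur is None:
            continue  # benchmark removed; nothing to compare
        change_pct = (cur - base) / base * 100.0
        if change_pct > threshold_pct:
            flagged[name] = round(change_pct, 1)
    return flagged

baseline = {"full_analysis": 4200.0, "incremental": 6.0}
current = {"full_analysis": 4300.0, "incremental": 9.0}
print(regressions(baseline, current))  # {'incremental': 50.0}
```

Because the comparison is relative, fast paths like incremental analysis are the most sensitive to regressions: a 3 ms slowdown is noise for a full analysis but a 50% regression here.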

### 4.3 Prometheus Integration

Configure metrics collection in `prometheus.yml`:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'pmat-quality'
    static_configs:
      - targets: ['localhost:9090']
    scrape_interval: 30s
    metrics_path: /metrics
```

Example Grafana dashboard queries:

```promql
# Average complexity per team
avg(quality_pmat_complexity) by (team)

# Error budget consumption rate  
rate(quality_pmat_violations_total[5m]) * 3600

# Analysis performance
histogram_quantile(0.95, quality_pmat_analysis_duration_seconds_bucket)
```

## Monitoring and Maintenance

### Daily Operations

```bash
# Check system health
pmat unified-quality health check

# Review quality metrics
pmat unified-quality metrics dashboard

# Check error budget status
pmat unified-quality budget status --all-teams

# Review automation results
pmat unified-quality automation report --last-24h
```

### Weekly Maintenance

```bash
# Clean old data and optimize
pmat unified-quality maintenance weekly

# Update quality baselines
pmat unified-quality baseline update --all-projects

# Generate team reports
pmat unified-quality report generate \
    --teams all \
    --format pdf \
    --include-trends \
    --email-to stakeholders@company.com
```

### Monthly Reviews

```bash
# Comprehensive system analysis
pmat unified-quality analyze comprehensive \
    --include-trends \
    --benchmark-comparison \
    --team-progress

# Update quality policies based on data
pmat unified-quality policy review \
    --suggest-adjustments \
    --data-driven

# Plan next phase improvements
pmat unified-quality roadmap generate \
    --based-on-usage \
    --team-feedback
```

## Troubleshooting

### Common Issues

**High Memory Usage**
```bash
# Check cache size
pmat unified-quality cache stats

# Clean caches if needed
pmat unified-quality cache clean --older-than 7d

```

Adjust the cache limits in `pmat-quality.toml` if needed:

```toml
[foundation]
max_cache_size_mb = 512
cache_cleanup_interval = "1h"
```

**Slow Analysis Performance**
```bash
# Check performance stats
pmat unified-quality performance stats

# Enable optimization
pmat unified-quality performance optimize \
    --strategy cache-optimization \
    --strategy parallel-processing

```

Reduce the analysis scope in `pmat-quality.toml` if needed:

```toml
[foundation]
watch_patterns = ["**/*.rs"]  # Focus on primary language
max_batch_size = 25           # Reduce batch size
```

**Error Budget Conflicts**
```bash
# Check budget status
pmat unified-quality budget status --team-id problematic-team

# Regenerate budgets if needed
pmat unified-quality budget regenerate --team-id problematic-team

# Adjust policies if too restrictive
pmat unified-quality policy adjust \
    --team-id problematic-team \
    --increase-budget 25%
```

### Log Analysis

Important log locations:
- **System logs**: `~/.pmat/logs/unified-quality.log`
- **Performance logs**: `~/.pmat/logs/performance.log`
- **Automation logs**: `~/.pmat/logs/automation.log`

```bash
# Monitor real-time logs
tail -f ~/.pmat/logs/unified-quality.log

# Check for errors in the last hour (GNU date syntax; on macOS use `date -v-1H`)
grep ERROR ~/.pmat/logs/*.log | grep "$(date -d '1 hour ago' '+%Y-%m-%d %H')"

# Export logs for analysis
pmat unified-quality logs export \
    --since "2024-01-01" \
    --level warn,error \
    --format json \
    --output quality-issues.json
```

## Security Considerations

### Access Control
- Use dedicated service accounts for CI/CD integration
- Rotate GitHub tokens monthly
- Limit network access to required endpoints only
- Enable audit logging for all configuration changes

### Data Protection
- Quality metrics may contain sensitive code patterns
- Configure data retention policies appropriately
- Use encrypted storage for cached AST data
- Implement data export controls for compliance

### Secrets Management
```bash
# Use environment variables for sensitive config
export PMAT_GITHUB_TOKEN="your-token-here"
export PMAT_PROMETHEUS_AUTH="user:password"

# Or use external secret management
pmat unified-quality config set \
    --secret github_token \
    --from-vault "secret/pmat/github-token"
```

## Scaling Considerations

### Horizontal Scaling
For large organizations (>1000 developers):

```bash
# Deploy multiple instances with load balancing
pmat unified-quality cluster init \
    --nodes 3 \
    --load-balancer nginx \
    --shared-storage redis

# Partition teams across instances
pmat unified-quality cluster assign-teams \
    --instance-1 "team-a,team-b,team-c" \
    --instance-2 "team-d,team-e,team-f"
```

### Performance Tuning
```toml
[performance]
# For large codebases (>1M LOC)
max_analysis_time_ms = 10000
max_memory_mb = 4096
parallel_analysis = true
worker_threads = 8

# Cache optimization for high-frequency changes
[foundation]
cache_size_mb = 2048
incremental_cache = true
persist_cache = true
```

## Success Metrics

Track these key metrics to measure deployment success:

### Technical Metrics
- **Analysis Latency**: < 5ms per file (95th percentile)
- **System Uptime**: > 99.5%
- **Memory Usage**: < 1GB peak
- **Cache Hit Ratio**: > 85%
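The latency target above is a 95th percentile, not a mean, so a handful of slow outliers can still pass. A quick sketch of computing p95 from raw per-file timings using the nearest-rank method (the sample data is invented):

```python
import math

def p95(samples_ms):
    """95th-percentile latency via the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered))  # 1-based rank
    return ordered[rank - 1]

# 100 samples: mostly fast, with a few slow outliers
timings = [1.2, 1.4, 1.1, 2.0, 1.3] * 19 + [4.8, 3.9, 1.0, 1.6, 9.5]
print(p95(timings))  # 2.0, under the 5 ms target despite the 9.5 ms outlier
```

If the p95 target is met but users still report slowness, check the tail beyond it (p99, max) before concluding the system is healthy.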

### Quality Metrics  
- **Average Complexity**: Trending down over 12 weeks
- **SATD Debt**: < 10 issues per team per sprint
- **Coverage**: Maintaining > 80% overall
- **Quality Violations**: < 5 high-severity per week

### Adoption Metrics
- **Team Onboarding**: 100% teams through Phase 2 within 8 weeks
- **Automation Usage**: > 50% teams using conservative automation
- **Developer Satisfaction**: > 8/10 in quarterly surveys
- **Time to Quality**: < 2 weeks from violation to resolution

## Next Steps

After successful deployment:

1. **Advanced Features**: Enable experimental ML-driven suggestions
2. **Custom Policies**: Develop organization-specific quality rules  
3. **Integration Expansion**: Connect to additional tools (JIRA, Slack)
4. **Research Track**: Evaluate formal verification capabilities
5. **Community Contribution**: Share quality patterns with PMAT community

## Support Resources

- **Documentation**: https://docs.pmat.dev/unified-quality
- **Community & Issue Tracking**: https://github.com/paiml/paiml-mcp-agent-toolkit/issues
- **Professional Support**: enterprise@paiml.com

---

**Remember**: The unified quality system implements a progressive adoption approach. Start with monitoring (Phase 1), add intelligence (Phase 2), enable enforcement (Phase 3), and finally activate automation (Phase 4). Each phase builds the foundation for the next while delivering immediate value.

For questions or deployment assistance, reach out to the PMAT community or professional services team.