//! Stress and Performance Test Suite for AGPM
//!
//! This test suite contains stress tests and performance benchmarks that validate
//! AGPM's behavior under high load and extreme conditions. These tests take significantly
//! longer to run than integration tests and are **not executed in CI**.
//!
//! # Purpose
//!
//! Stress tests serve several critical purposes:
//!
//! - **Validate parallelism**: Test concurrent operations with high --max-parallel values
//! - **Find performance regressions**: Catch slowdowns before releases
//! - **Verify resource limits**: Ensure the system handles edge cases (500+ dependencies)
//! - **Test cache efficiency**: Validate worktree reuse and fetch optimization
//! - **Measure throughput**: Track installation/update rates over time
//!
//! # When to Run Stress Tests
//!
//! Run stress tests:
//! - Before major releases (v0.x.0)
//! - After significant performance improvements
//! - When debugging performance issues
//! - After changes to parallelism/caching logic
//! - To establish performance baselines on new hardware
//!
//! # Running Stress Tests
//!
//! All stress tests are **parallel-safe** and run concurrently, which helps surface race
//! conditions and deadlocks. Performance is logged via `println!` for manual review rather
//! than asserted, relying on nextest's test timeout to catch hangs. Each test uses isolated
//! temp directories.
//!
//! **Note**: Stress tests are excluded from default nextest runs via `.config/nextest.toml`.
//! You must use the `-P all` profile to run them with nextest.
//!
//! ## Run All Stress Tests
//!
//! ```bash
//! # With cargo nextest (requires -P all profile)
//! cargo nextest run -P all -E 'binary(stress)'
//! cargo nextest run -P all --test stress
//!
//! # With standard cargo test
//! cargo test --test stress
//! ```
//!
//! ## Run with Verbose Output
//!
//! ```bash
//! cargo nextest run -P all -E 'binary(stress)' --no-capture
//! cargo test --test stress -- --nocapture
//! ```
//!
//! ## Run Specific Test
//!
//! ```bash
//! cargo nextest run -P all -E 'binary(stress) and test(test_heavy_stress_500_dependencies)'
//! cargo nextest run -P all -E 'binary(stress) and test(parallelism::)'
//! cargo test --test stress test_heavy_stress_500_dependencies
//! ```
//!
//! ## Run with Release Optimizations
//!
//! ```bash
//! cargo nextest run -P all -E 'binary(stress)' --release
//! cargo test --test stress --release
//! ```
//!
//! # Performance Baselines
//!
//! Recorded on M1 MacBook Pro (2024-10-10, AGPM v0.4.3):
//!
//! ## Large Scale Tests (`large_scale.rs`)
//!
//! | Test | Dependencies | Duration | Rate | Notes |
//! |------|--------------|----------|------|-------|
//! | `test_heavy_stress_500_dependencies` | 500 agents | <60s | ~8.3/s | 5 repos, worktree reuse |
//! | `test_heavy_stress_500_updates` | 500 updates | <45s | ~11/s | Update existing installations |
//! | `test_community_repo_500_dependencies` | 500 agents | <90s | ~5.5/s | Real agpm-community repo |
//! | `test_mixed_repos_file_and_https` | 200 mixed | <30s | ~6.7/s | file:// + https:// |
//! | `test_community_repo_parallel_checkout_performance` | Varies | <60s | - | Checkout performance |
//!
//! ## Parallelism Tests (`parallelism.rs`)
//!
//! | Test | Load | Duration | Notes |
//! |------|------|----------|-------|
//! | `test_extreme_parallelism` | 100 agents, --max-parallel 100 | ~5s | System throttling |
//! | `test_rapid_sequential_operations` | 3 agents, repeated | ~3s | Cache reuse |
//! | `test_mixed_parallelism_levels` | 50 agents, varying | ~10s | Different --max-parallel |
//! | `test_parallelism_resource_contention` | 30 agents, parallel | ~8s | Lock contention |
//! | `test_parallelism_graceful_limits` | 20 agents, limits | ~6s | Graceful degradation |
//!
//! # Test Organization
//!
//! - **large_scale.rs**: Tests with hundreds of dependencies (500+)
//! - **parallelism.rs**: Concurrency and --max-parallel flag behavior
//!
//! # Interpreting Results
//!
//! ## Expected Behavior
//!
//! - Installation rate: 5-15 agents/second (depends on size, parallelism)
//! - Update rate: 10-20 updates/second (faster due to cache)
//! - Memory usage: Linear growth with --max-parallel value
//! - No deadlocks or race conditions
//!
//! ## Warning Signs
//!
//! - Installation rate drops below 3/second → investigate cache efficiency
//! - Tests time out (>120s) → check for deadlocks or resource exhaustion
//! - High variance between runs → potential race conditions
//! - Memory usage grows exponentially → memory leak
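//!
//! The throughput figures above are simply installed-count divided by wall-clock
//! seconds, which is useful as a sanity check when reading test logs. A minimal
//! sketch (the `install_rate` helper is hypothetical, for illustration only):
//!
//! ```rust
//! fn install_rate(count: usize, elapsed: std::time::Duration) -> f64 {
//!     count as f64 / elapsed.as_secs_f64()
//! }
//!
//! fn main() {
//!     // 500 agents in 60 seconds is ~8.3 agents/second, matching the
//!     // `test_heavy_stress_500_dependencies` baseline.
//!     let rate = install_rate(500, std::time::Duration::from_secs(60));
//!     println!("{rate:.1} agents/s");
//! }
//! ```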
//!
//! # Contributing
//!
//! When adding new stress tests:
//!
//! 1. Document expected duration and performance baseline
//! 2. Use `#[tokio::test]` (no `#[ignore]` needed by default; the suite is already excluded from default runs)
//! 3. Include test description explaining what it stresses
//! 4. Update performance baselines table in this file
//! 5. Consider adding `#[ignore]` if test is extremely slow (>5 min)
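//!
//! A sketch of the shape a new stress test typically takes (the test name and
//! elided setup are hypothetical; real tests use the shared test utilities):
//!
//! ```rust,ignore
//! #[tokio::test]
//! async fn test_stress_example() {
//!     let start = std::time::Instant::now();
//!
//!     // ... set up an isolated temp project and run the operation under load ...
//!
//!     // Performance is logged, not asserted; nextest's timeout catches hangs.
//!     // Record the observed duration in the baselines table above.
//!     println!("completed in {:?}", start.elapsed());
//! }
//! ```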
// Shared test utilities (from parent tests/ directory)
// Stress test modules
mod large_scale;
mod parallelism;