# strest

⚠️ Warning: Only use strest for testing infrastructure you own or have explicit permission to test. Unauthorized use may be illegal.

strest is a command-line tool for stress testing web servers by sending a large number of HTTP requests. It provides insight into server performance by measuring average response times, observed requests per minute (RPM), and other relevant metrics.

## Screenshot Overview
These screenshots showcase key metrics and real-time statistics from strest’s stress testing, including response time, error rate, request count, latency percentiles (all vs ok), timeouts, status distribution, and throughput.

<div style="text-align: center;">
  <img src="docs/screenshot.png" alt="CLI Screenshot" width="1000" />
</div>

### Latency

<table>
  <tr>
    <td align="center">
      <a href="docs/average_response_time.png" target="_blank">
        <img src="docs/average_response_time.png" alt="Average Response Time" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/latency_percentiles_P50.png" target="_blank">
        <img src="docs/latency_percentiles_P50.png" alt="Latency Percentiles P50" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/latency_percentiles_P90.png" target="_blank">
        <img src="docs/latency_percentiles_P90.png" alt="Latency Percentiles P90" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/latency_percentiles_P99.png" target="_blank">
        <img src="docs/latency_percentiles_P99.png" alt="Latency Percentiles P99" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
  </tr>
</table>

### Throughput

<table>
  <tr>
    <td align="center">
      <a href="docs/requests_per_second.png" target="_blank">
        <img src="docs/requests_per_second.png" alt="Requests Per Second" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/cumulative_total_requests.png" target="_blank">
        <img src="docs/cumulative_total_requests.png" alt="Cumulative Total Requests" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/cumulative_successful_requests.png" target="_blank">
        <img src="docs/cumulative_successful_requests.png" alt="Cumulative Successful Requests" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/inflight_requests.png" target="_blank">
        <img src="docs/inflight_requests.png" alt="In-Flight Requests" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
  </tr>
</table>

### Errors

<table>
  <tr>
    <td align="center">
      <a href="docs/cumulative_error_rate.png" target="_blank">
        <img src="docs/cumulative_error_rate.png" alt="Cumulative Error Rate" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/error_rate_breakdown.png" target="_blank">
        <img src="docs/error_rate_breakdown.png" alt="Error Rate Breakdown" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/timeouts_per_second.png" target="_blank">
        <img src="docs/timeouts_per_second.png" alt="Timeouts Per Second" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
    <td align="center">
      <a href="docs/status_code_distribution.png" target="_blank">
        <img src="docs/status_code_distribution.png" alt="Status Code Distribution" width="220" style="border: 1px solid #ddd; border-radius: 4px;" />
      </a>
    </td>
  </tr>
</table>

## Features

- Send HTTP requests to a specified URL for a specified duration.
- Customize the HTTP method, headers, and request payload data.
- Measure the average response time of successful requests.
- Report the observed requests per minute (RPM) metric.
- Display real-time statistics and progress in the terminal.
- UI shows timeouts, transport errors, non-expected status, and ok vs all percentiles.
- UI chart window length is configurable via `--ui-window-ms` (default: 10000).
- Optional non-interactive summary output for long-running tests.
- Streams run metrics to disk while aggregating summary and chart data during the run.
- Optional rate limiting for controlled load generation.
- Optional CSV/JSON exports for pipeline integration.
- Scenario scripts with multi-step flows, dynamic templates, and per-step asserts.
- Experimental WASM scripting to generate scenarios programmatically.
- Warm-up period support to exclude early metrics from summaries and charts.
- TLS/HTTP/2 controls (TLS min/max, HTTP/2 toggle, ALPN selection).
- HDR histogram percentiles for accurate end-of-run latency stats.
- Pluggable output sinks (Prometheus textfile, OTel JSON, Influx line protocol).
- Distributed mode with controller/agent coordination and weighted load splits.
- Distributed streaming summaries for live aggregation and sink updates.
- Manual controller mode with HTTP start/stop control and scenario registry.
- Agent standby mode with automatic reconnects between runs.
- Experimental HTTP/3 support (build flag required).
- Linux systemd install/uninstall helpers for controller/agent services.

## Who It's For

- Engineers who want a config-first, CLI-driven load test tool.
- Teams who need multi-step scenarios with assertions, not full JS runtimes.
- CI and lab users who want reproducible runs and exportable metrics.
- Distributed testing setups with controller/agent coordination.

## Not a Fit For

- k6 users looking for JavaScript scripting or k6-compatible workflows.
- GUI-first users who want a hosted dashboard-first experience today.

## Prerequisites

- Make sure you have Rust and Cargo installed on your system. You can install Rust from [rustup.rs](https://rustup.rs/).

## Installation

### From crates.io (recommended)

```bash
cargo install strest
```

### Prebuilt binaries

Prebuilt binaries are attached to GitHub Releases for tagged versions (Linux, macOS, Windows).

### From source

To use strest from source, follow these installation instructions:

1. Clone the repository to your local machine:

    ```bash
    git clone https://github.com/Lythaeon/strest.git
    ```

2. Change to the project directory:

    ```bash
    cd strest
    ```

3. Build the project:

    ```bash
    cargo build --release --locked
    ```

4. Once the build is complete, you can find the executable binary in the `target/release/` directory.

5. Copy the binary to a directory in your system's PATH to make it globally accessible:

    ```bash
    sudo cp ./target/release/strest /usr/local/bin/
    ```

Alternatively, install from the local path using Cargo:

```bash
cargo install --path . --locked
```

## Getting Started

Quick smoke test:

```bash
strest -u http://localhost:3000 -t 30
```

Scenario quickstart:

```toml
# strest.toml
[scenario]
base_url = "http://localhost:3000"

[[scenario.steps]]
method = "get"
path = "/health"
assert_status = 200
```

```bash
strest --config strest.toml -t 30 --no-ui --summary --no-charts
```

## Usage

strest is used via the command line. Here's a basic example of how to use it:

```bash
strest -u http://localhost:3000 -t 60 --no-charts
```

This command sends GET requests to `http://localhost:3000` for 60 seconds.

For long-running or CI runs, disable the UI and print a summary:

```bash
strest -u http://localhost:3000 -t 600 --no-ui --summary --no-charts
```

For more options and customization, use the `--help` flag to see the available command-line options and their descriptions.

```bash
strest --help
```

### Logging

Use `--verbose` to enable debug logging (useful for distributed controller/agent handshakes). You can also override the log level via `STREST_LOG` or `RUST_LOG`.
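
For example (assuming standard `RUST_LOG`-style level names such as `debug`):

```bash
# Enable debug logging via the CLI flag
strest -u http://localhost:3000 -t 30 --no-charts --verbose

# Or override the level via environment variable
STREST_LOG=debug strest -u http://localhost:3000 -t 30 --no-charts
```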

### Presets

Smoke (quick validation, low load):

```bash
strest -u http://localhost:3000 -t 15 --rate 50 --max-tasks 50 --spawn-rate 10 --spawn-interval 100 --no-charts
```

Steady (sustained load, CI-friendly):

```bash
strest -u http://localhost:3000 -t 300 --rate 500 --max-tasks 500 --spawn-rate 20 --spawn-interval 100 --no-ui --summary --no-charts
```

Ramp (gradual increase):

```toml
# ramp.toml
url = "http://localhost:3000"
duration = 300

[load]
rate = 100

[[load.stages]]
duration = "60s"
target = 300

[[load.stages]]
duration = "120s"
target = 800
```

```bash
strest --config ramp.toml --no-ui --summary --no-charts
```

### Charts

By default charts are stored in `~/.strest/charts` (or `%USERPROFILE%\.strest\charts` on Windows). You can change the location via `--charts-path` (`-c`).

To disable charts use the `--no-charts` flag.
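
For example, writing charts to a project-local directory instead of the default:

```bash
strest -u http://localhost:3000 -t 60 --charts-path ./charts
```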

Charts produced:
- `average_response_time.png`
- `cumulative_successful_requests.png`
- `cumulative_error_rate.png`
- `cumulative_total_requests.png`
- `requests_per_second.png`
- `latency_percentiles_P50.png` (all vs ok overlay)
- `latency_percentiles_P90.png` (all vs ok overlay)
- `latency_percentiles_P99.png` (all vs ok overlay)
- `timeouts_per_second.png`
- `error_rate_breakdown.png` (timeouts vs transport vs non-expected)
- `status_code_distribution.png`
- `inflight_requests.png`

### UI Metrics

The UI highlights:
- Total requests, success count, and error breakdown (timeouts, transport errors, non-expected status).
- All vs ok latency percentiles (P50/P90/P99).
- Live RPS and RPM.

### Temp Data

Run data is streamed to a temporary file while summary and chart data are aggregated during the run; this keeps the request pipeline from blocking on metrics in long runs. By default the temporary file lives in `~/.strest/tmp` (or `%USERPROFILE%\.strest\tmp` on Windows). You can change the location via `--tmp-path`. Temporary data is deleted after the run unless `--keep-tmp` is set.

Charts collection can be bounded for long runs:

- `--metrics-range` limits chart collection to a time window (e.g., `10-30` seconds).
- `--metrics-max` caps the total number of metrics kept for charts (default: `1000000`).
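
A sketch combining the temp-data and chart-bounding options (paths and values are illustrative):

```bash
strest -u http://localhost:3000 -t 60 \
  --tmp-path ./strest-tmp --keep-tmp \
  --metrics-range 10-30 --metrics-max 500000 \
  --no-ui --summary
```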

### Common Options

- `--method` (`-X`) sets the HTTP method.
- `--url` (`-u`) sets the target URL.
- `--headers` (`-H`) adds request headers (repeatable, `Key: Value`).
- `--data` (`-d`) sets the request body data (POST/PUT/PATCH).
- `--duration` (`-t`) sets the test duration in seconds.
- `--no-ui` disables the interactive UI and shows a progress bar in the terminal (summary output is printed automatically).
- `--ui-window-ms` sets the UI chart window length in milliseconds (default: `10000`).
- `--summary` prints an end-of-run summary.
- `--status` (`-s`) sets the expected HTTP status code.
- `--timeout` sets the request timeout (supports `ms`, `s`, `m`, `h`).
- `--warmup` ignores the first N seconds for summary/charts/exports (supports `ms`, `s`, `m`, `h`).
- `--proxy` (`-p`) sets a proxy URL.
- `--max-tasks` (`-m`) limits concurrent request tasks (`--concurrency` alias).
- `--spawn-rate` (`-r`) and `--spawn-interval` (`-i`) control how quickly tasks are spawned.
- `--rate` sets a global requests-per-second limit.
- `--controller-listen` starts a distributed controller (e.g., `0.0.0.0:9009`).
- `--controller-mode` selects controller mode (`auto` or `manual`).
- `--control-listen` sets the manual control-plane HTTP listen address.
- `--control-auth-token` sets the control-plane Bearer token.
- `--agent-join` joins a distributed controller as an agent.
- `--auth-token` sets a shared token for controller/agent authentication.
- `--agent-weight` sets an agent weight for load distribution.
- `--agent-id` sets an explicit agent id.
- `--min-agents` sets how many agents the controller waits for before starting.
- `--agent-wait-timeout-ms` sets a max wait time for min agents (auto mode; manual start honors this too).
- `--agent-standby` keeps agents connected between distributed runs.
- `--agent-reconnect-ms` sets the standby reconnect interval.
- `--agent-heartbeat-interval-ms` sets the agent heartbeat interval.
- `--agent-heartbeat-timeout-ms` sets the controller heartbeat timeout.
- `--stream-interval-ms` sets the stream snapshot interval for distributed mode.
- `--script` runs a WASM script that produces a scenario (requires `--features wasm` build).
- `--tls-min` and `--tls-max` set the TLS version floor/ceiling.
- `--http2` enables HTTP/2 (adaptive).
- `--http3` enables HTTP/3 (requires `--features http3` and `RUSTFLAGS=--cfg reqwest_unstable`).
- `--alpn` sets the advertised protocols (repeatable, e.g. `--alpn h2`).
- `--tmp-path` sets where temporary run data is written.
- `--keep-tmp` keeps temporary run data after completion.
- `--log-shards` controls the number of log writers (default `1`).
- `--export-csv` writes metrics to a CSV file (bounded by `--metrics-range` and `--metrics-max`).
- `--export-json` writes summary and metrics to a JSON file (bounded by `--metrics-range` and `--metrics-max`).
- `--install-service` installs a Linux systemd service for controller/agent.
- `--uninstall-service` removes a Linux systemd service for controller/agent.
- `--service-name` overrides the systemd service name.
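
For example, a run combining several of the request-shaping options above (endpoint and payload are illustrative):

```bash
strest -u http://localhost:3000/api/items -X post \
  -H "Content-Type: application/json" \
  -d '{"name":"demo"}' \
  -t 120 --timeout 5s --warmup 10s -s 201 \
  --max-tasks 200 --rate 300 \
  --no-ui --summary --no-charts
```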

HTTP/3 is experimental and requires building with `--features http3` plus
`RUSTFLAGS="--cfg reqwest_unstable"` (reqwest requirement):

```bash
RUSTFLAGS="--cfg reqwest_unstable" cargo build --release --features http3
```
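
A hedged run sketch once that build succeeds (assuming an HTTPS endpoint that can negotiate HTTP/3):

```bash
RUSTFLAGS="--cfg reqwest_unstable" cargo run --release --features http3 -- \
  -u https://localhost:8443 -t 30 --http3 --no-ui --summary --no-charts
```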

### Configuration File

You can provide a config file with `--config path`. If no config is specified, `strest` will look for `./strest.toml` or `./strest.json` (TOML is preferred if both exist). CLI flags override config values.
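
For example, overriding a config value from the CLI:

```bash
# Uses ./strest.toml (or ./strest.json) if present; the CLI flag overrides the file's duration
strest --config strest.toml --duration 120
```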

Example `strest.toml`:

```toml
url = "http://localhost:3000"
method = "get"
duration = 60
timeout = "10s"
warmup = "5s"
status = 200
no_ui = true
ui_window_ms = 10000
summary = true
no_charts = true

tls_min = "1.2"
tls_max = "1.3"
http2 = true
alpn = ["h2"]

headers = [
  "Content-Type: application/json",
  "X-Env: local",
]

metrics_range = "10-30"
metrics_max = 1000000

[load]
rate = 1000

[[load.stages]]
duration = "10s"
target = 500

[[load.stages]]
duration = "20s"
target = 1500
```

Load profiles are optional. `load.rate` is the initial RPS, and each stage linearly ramps to its `target` RPS over the stage `duration`. You can use `rpm` instead of `rate/target` for RPM-based control.

Example `strest.json`:

```json
{
  "url": "http://localhost:3000",
  "method": "get",
  "duration": 60,
  "warmup": "5s",
  "status": 200,
  "no_ui": true,
  "ui_window_ms": 10000,
  "summary": true,
  "no_charts": true,
  "tls_min": "1.2",
  "tls_max": "1.3",
  "http2": true,
  "alpn": ["h2"],
  "headers": [
    "Content-Type: application/json",
    "X-Env: local"
  ],
  "metrics_range": "10-30",
  "metrics_max": 1000000,
  "load": {
    "rate": 1000,
    "stages": [
      { "duration": "10s", "target": 500 },
      { "duration": "20s", "target": 1500 }
    ]
  }
}
```

### Scenario Scripts

Scenario scripts model multi-step flows with per-step asserts and templated payloads. If `scenario.base_url` is set you can omit the top-level `url`. Templates use `{{var}}` placeholders from `scenario.vars`, `step.vars`, and built-ins: `seq`, `step`, `timestamp_ms`, `timestamp_s`.
`think_time` adds a delay after a step completes before the next step starts (supports `ms`, `s`, `m`, `h`).

Example `strest.toml`:

```toml
[scenario]
base_url = "http://localhost:3000"
vars = { user = "demo" }

[[scenario.steps]]
name = "login"
method = "post"
path = "/login"
headers = ["Content-Type: application/json"]
data = "{\"user\":\"{{user}}\",\"seq\":\"{{seq}}\"}"
assert_status = 200
assert_body_contains = "token"
think_time = "500ms"

[[scenario.steps]]
name = "profile"
method = "get"
path = "/profile"
headers = ["Authorization: Bearer {{seq}}"]
```

Example `strest.json`:

```json
{
  "scenario": {
    "base_url": "http://localhost:3000",
    "vars": { "user": "demo" },
    "steps": [
      {
        "name": "login",
        "method": "post",
        "path": "/login",
        "headers": ["Content-Type: application/json"],
        "data": "{\"user\":\"{{user}}\",\"seq\":\"{{seq}}\"}",
        "assert_status": 200,
        "assert_body_contains": "token",
        "think_time": "500ms"
      },
      {
        "name": "profile",
        "method": "get",
        "path": "/profile",
        "headers": ["Authorization: Bearer {{seq}}"]
      }
    ]
  }
}
```

### WASM Scripts (Experimental)

You can generate scenarios from a WASM module and run them with `--script`. This is useful when you want programmable test setup while still using strest’s scenario engine.

Build with the optional feature:

```bash
cargo build --release --features wasm
```

Run with a WASM script:

```bash
strest --script ./script.wasm --no-ui --summary --no-charts
```

Example WASM script (prebuilt in this repo):

```bash
# Optional: regenerate from WAT
wasm-tools parse examples/wasm/interesting.wat -o examples/wasm/interesting.wasm

# Run the example scenario
cargo run --features wasm -- --script examples/wasm/interesting.wasm -t 20 --no-ui --summary --no-charts
```

Note: the example scenario targets `http://localhost:8887` and expects `/health`, `/login`,
`/search`, `/items/{id}`, and `/checkout` endpoints to exist. Update the WAT if your server differs.

**WASM contract**

Your module must export:

- `memory`
- `scenario_ptr() -> i32` (pointer to a UTF-8 JSON buffer)
- `scenario_len() -> i32` (length of that buffer in bytes)

The JSON payload must match the `scenario` config schema (same as `strest.toml` / `strest.json`). It **must** include `schema_version: 1`. Size is capped at 1MB.

Sandboxing policy (enforced):

- No imports are allowed.
- The exported `memory` must declare a maximum and be <= 128 pages.
- The module size is capped at 4MB.
- The scenario payload is capped at 1MB.
- `scenario_ptr` and `scenario_len` must return constant `i32` values.
- `memory64` and shared memory are not allowed.

Minimal Rust example (`wasm32-unknown-unknown`):

```rust
#[no_mangle]
pub extern "C" fn scenario_ptr() -> i32 {
    SCENARIO.as_ptr() as i32
}

#[no_mangle]
pub extern "C" fn scenario_len() -> i32 {
    SCENARIO.len() as i32
}

static SCENARIO: &str = r#"{
  "schema_version": 1,
  "base_url": "http://localhost:3000",
  "steps": [
    { "method": "get", "path": "/health", "assert_status": 200 }
  ]
}"#;
```
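
A build sketch for that example, assuming it lives in its own crate configured as a `cdylib` (the crate name `my_scenario` is hypothetical):

```bash
# One-time: add the wasm target
rustup target add wasm32-unknown-unknown

# Build the scenario module
cargo build --release --target wasm32-unknown-unknown

# Run strest with the produced module
strest --script target/wasm32-unknown-unknown/release/my_scenario.wasm \
  --no-ui --summary --no-charts
```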

Notes:

- If you pass `--url`, it becomes the default base URL when the scenario omits `base_url`.
- `--script` cannot be combined with an explicit `scenario` config section.

### Output Sinks

Configure output sinks in the config file to emit summary metrics periodically during the run
and once after the run completes. The default update interval is 1000ms.

Example `strest.toml`:

```toml
[sinks]
# Optional. Controls periodic updates (defaults to 1000ms).
update_interval_ms = 1000

[sinks.prometheus]
path = "./out/strest.prom"

[sinks.otel]
path = "./out/strest.otel.json"

[sinks.influx]
path = "./out/strest.influx"
```
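
With that config in place, a run sketch plus a quick look at the Prometheus textfile output (paths match the example above):

```bash
mkdir -p ./out
strest --config strest.toml -t 60 --no-ui --summary --no-charts

# Inspect the latest sink snapshot during or after the run
cat ./out/strest.prom
```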

### Distributed Mode

Run a controller and one or more agents. The controller does not generate load; it only
orchestrates and aggregates, while the agents are the ones that send requests.

- If you configure sinks on agents, they write per-agent files when stream summaries are off.
  If you configure sinks on the controller, it writes an aggregated sink report at the end of
  the run (or periodically when streaming).
- If `distributed.stream_summaries = true`, agents stream periodic summaries to the controller;
  the controller updates sinks during the run and agents skip local sink writes. CLI
  equivalents: `--stream-summaries` and `--stream-interval-ms 1000`.
- Stream cadence is controlled by `distributed.stream_interval_ms` (default 1000ms) and sink
  update cadence by `sinks.update_interval_ms`.
- When UI rendering is enabled, the controller aggregates streamed metrics into the UI.
- Use `distributed.agent_wait_timeout_ms` (or `--agent-wait-timeout-ms`) to bound how long the
  controller waits for `min_agents` before starting.
- Agents send periodic heartbeats; the controller marks an agent unhealthy if no heartbeat is
  seen within `--agent-heartbeat-timeout-ms` (default 3000ms).
- Aggregated charts are available when `--stream-summaries` is enabled and `--no-charts` is not
  set (charts are written by the controller). Per-agent exports are still disabled during
  distributed runs.
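
For example, a controller run with streaming summaries enabled (target URL and duration here are illustrative; agent commands are shown further below):

```bash
strest --controller-listen 0.0.0.0:9009 --min-agents 2 --auth-token secret \
  --stream-summaries --stream-interval-ms 1000 \
  -u http://localhost:3000 -t 120 --no-ui --summary
```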

### Kubernetes (Basic Manifests)

The `kubernetes/` folder ships minimal manifests for a controller and scalable agents.

Apply:

```bash
kubectl apply -f kubernetes/
```

Scale agents:

```bash
kubectl scale deployment strest-agent --replicas=100
```

Start a run (manual controller mode; port-forward the control plane):

```bash
kubectl port-forward service/strest-controller 9010:9010
curl -X POST http://127.0.0.1:9010/start -d '{"start_after_ms":2000}'
```

Agents discover the controller via the service DNS name
`strest-controller.<namespace>.svc.cluster.local` (the manifests use
`strest-controller:9009`).

Scaling notes:
- There is no hard agent limit; practical limits come from OS file descriptors, CPU, and memory.
- Streaming summaries add controller load (histogram decode + merge). Increase `--stream-interval-ms`
  to reduce overhead as agent counts grow.
- Wire messages are capped at 4MB; very large histograms can exceed this limit.
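
For example, a sketch of widening the streaming interval for a larger fleet (values are illustrative):

```bash
strest --controller-listen 0.0.0.0:9009 --min-agents 50 --auth-token secret \
  --stream-summaries --stream-interval-ms 5000
```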

Controller:

```bash
strest --controller-listen 0.0.0.0:9009 --min-agents 2 --auth-token secret
```

Agent:

```bash
strest --agent-join 10.0.0.5:9009 --auth-token secret --agent-weight 2
```

Agent standby (keeps the agent connected and auto-reconnects between runs):

```bash
strest --agent-join 10.0.0.5:9009 --auth-token secret --agent-standby --agent-reconnect-ms 1000
```

Example `strest.toml` controller config:

```toml
[distributed]
role = "controller"
listen = "0.0.0.0:9009"
auth_token = "secret"
min_agents = 2
agent_wait_timeout_ms = 30000
agent_heartbeat_timeout_ms = 3000
stream_summaries = true
stream_interval_ms = 1000
```

Example `strest.toml` agent config:

```toml
[distributed]
role = "agent"
join = "10.0.0.5:9009"
auth_token = "secret"
agent_id = "agent-1"
weight = 2
agent_heartbeat_interval_ms = 1000
```

Manual controller mode (HTTP control plane):

```bash
strest --controller-listen 0.0.0.0:9009 --controller-mode manual --control-listen 127.0.0.1:9010 --auth-token secret --control-auth-token control-secret
```

Manual controller config example:

```toml
[distributed]
role = "controller"
controller_mode = "manual"
listen = "0.0.0.0:9009"
control_listen = "127.0.0.1:9010"
control_auth_token = "control-secret"
```

Start and stop via HTTP:

```bash
curl -X POST http://127.0.0.1:9010/start -H "Authorization: Bearer control-secret"
curl -X POST http://127.0.0.1:9010/start -H "Authorization: Bearer control-secret" -d '{"scenario_name":"login"}'
curl -X POST http://127.0.0.1:9010/stop -H "Authorization: Bearer control-secret"
```

The `/start` payload can include `scenario_name` (from the registry) and/or an inline
`scenario` (same schema as the config file). If you pass a `scenario` without a name,
it runs once and is not stored. If you pass both `scenario` and `scenario_name`,
the controller stores/updates that named scenario and runs it. You can also pass
`start_after_ms` to delay the run and `agent_wait_timeout_ms` to wait for enough
agents before starting. If omitted, the controller runs the default scenario or
`--url` configured on startup.
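
A sketch of a `/start` request exercising those fields (scenario content and values are illustrative):

```bash
curl -X POST http://127.0.0.1:9010/start \
  -H "Authorization: Bearer control-secret" \
  -H "Content-Type: application/json" \
  -d '{
    "scenario_name": "health",
    "scenario": {
      "base_url": "http://localhost:3000",
      "steps": [ { "method": "get", "path": "/health", "assert_status": 200 } ]
    },
    "start_after_ms": 2000,
    "agent_wait_timeout_ms": 30000
  }'
```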

Scenario registry (preload multiple named scenarios):

```toml
[scenario]
base_url = "http://localhost:3000"

[[scenario.steps]]
method = "get"
path = "/health"
assert_status = 200

[scenarios.login]
base_url = "http://localhost:3000"

[[scenarios.login.steps]]
method = "post"
path = "/login"
data = "{\"user\":\"demo\"}"
assert_status = 200
```

### Linux Systemd Service

Install a controller service (requires sudo):

```bash
sudo strest --controller-listen 0.0.0.0:9009 --controller-mode manual --control-listen 127.0.0.1:9010 --install-service --service-name strest-controller
```

Install an agent service:

```bash
sudo strest --agent-join 10.0.0.5:9009 --agent-standby --install-service --service-name strest-agent
```

Uninstall a service:

```bash
sudo strest --controller-listen 0.0.0.0:9009 --uninstall-service --service-name strest-controller
```

Systemd install/uninstall writes to `/etc/systemd/system` and runs `systemctl`, so it must be executed with sudo.
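
Once installed, the unit can be managed with standard `systemctl` commands (using the service name from the examples above):

```bash
sudo systemctl status strest-controller
sudo journalctl -u strest-controller -f
```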

### Reproducible Builds

Use `--locked` to ensure the build uses the exact dependency versions in `Cargo.lock`:

```bash
cargo build --release --locked
```

### Testing

Run the full test suite with nextest:

```bash
cargo make test
```

Run the WASM end-to-end test:

```bash
cargo make test-wasm
```

### Formatting

Check formatting with:

```bash
cargo make format-check
```

Auto-format with:

```bash
cargo make format
```

## Contributions

If you'd like to contribute, please start with `CONTRIBUTING.md` for the exact workflow and checks.

I'm a solo maintainer, so response times may vary. I review contributions as time allows and will respond when I can.


## License

This project is licensed under the GNU AGPL v3.0 - see the [LICENSE](LICENSE) file for details.

## Motivation 

strest was born to provide performance insight for stexs and the infrastructure behind it.