# lmn 0.1.7

A load-testing CLI, not a library. Full documentation at [lmn.talek.cloud](https://lmn.talek.cloud).
## Why Lumen
Most load testers answer "how fast is my API?" Lumen also answers "did this release break performance?" — by letting you define pass/fail thresholds and wiring the exit code into CI.
```yaml
# lmn.yaml — exits 0 if thresholds pass, 2 if they fail
execution:
  request_count: 1000
  concurrency: 50
thresholds:
  - metric: error_rate
    operator: lt
    value: 0.01    # < 1% errors
  - metric: latency_p99
    operator: lt
    value: 500.0   # p99 < 500ms
```
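Because pass/fail is carried in the exit code, CI wiring needs no extra tooling: the job simply fails when thresholds do. A minimal GitHub Actions sketch (the `--config` flag and the install step are assumptions here; check the CLI reference for the actual names):

```yaml
jobs:
  perf-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install lmn here (Homebrew, pre-built binary, or Docker; see Installation)
      - name: Load test with threshold gate
        run: lmn --config lmn.yaml   # a threshold failure exits 2 and fails the job
```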
## Installation
- Docker (zero-install): see the Installation docs.
- Homebrew and pre-built binaries: see the Installation docs.
## Quick Start
- Send 100 GET requests and view the latency table
- POST with an inline JSON body
- Run from a YAML config file
See the Quickstart guide for a full walkthrough.
## Features
- Dynamic request bodies — per-request random data from typed JSON templates
- Threshold-gated CI — exit code `2` on p99/error-rate/throughput failures; wires into any pipeline
- Load curves — staged virtual-user ramp-up with linear or step profiles
- Auth & headers — `${ENV_VAR}` secret injection, `.env` auto-load, repeatable headers
- Response tracking — extract and aggregate fields from response bodies (e.g. API error codes)
- JSON output — machine-readable report for dashboards and CI artifacts
- Config files — full YAML config with CLI flag precedence
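The dynamic-body idea can be pictured with a small sketch. This is illustrative Python, not lmn's actual template syntax (the Template placeholders reference is authoritative): each typed leaf in a JSON template is expanded to fresh random data for every request.

```python
import json
import random
import string

# Hypothetical "typed JSON template": each leaf declares a type and range.
template = {"user_id": "int:1-10000", "name": "str:8", "active": "bool"}

def expand(spec: str):
    """Turn a type spec like 'int:1-10000' into a freshly generated value."""
    kind, _, arg = spec.partition(":")
    if kind == "int":
        lo, hi = map(int, arg.split("-"))
        return random.randint(lo, hi)
    if kind == "str":
        return "".join(random.choices(string.ascii_lowercase, k=int(arg)))
    if kind == "bool":
        return random.choice([True, False])
    raise ValueError(f"unknown type spec: {spec}")

def render(tmpl: dict) -> str:
    """Produce one request body; called once per request, so each body is unique."""
    return json.dumps({k: expand(v) for k, v in tmpl.items()})

body = render(template)
```

Each call to `render` yields a different body, which is what keeps caches and dedup layers from flattering your latency numbers.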
## Observability
Stream traces to any OpenTelemetry-compatible backend. To try it locally, start the Tempo + Grafana stack from `lmn-cli/`; Grafana is then at http://localhost:3000 → Explore → Tempo.
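If lmn-cli honors the standard OpenTelemetry SDK environment variables (an assumption; the CLI reference is authoritative), pointing traces at the local stack is just:

```shell
# Standard OTel exporter env vars; 4317 is the conventional OTLP gRPC port.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_SERVICE_NAME=lmn
# ...then run lmn as usual; spans stream to the configured backend
```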
## Reference
- CLI reference — full flag and config reference
- Template placeholders — request and response template reference
- JSON output schema — machine-readable report structure
## Project Structure
```text
lmn/
├── lmn-core/   # engine, templates, HTTP, thresholds (library crate)
└── lmn-cli/    # CLI entry point, OTel setup (binary crate)
```
## License
Apache-2.0 — see LICENSE.