## Why Lumen
Most load testers answer "how fast is my API?" Lumen also answers "did this release break performance?" — by letting you define pass/fail thresholds and wiring the exit code into CI.
```yaml
# lmn.yaml
# exits 0 if thresholds pass, 2 if they fail
execution:
  request_count: 1000
  concurrency: 50
thresholds:
  - metric: error_rate
    operator: lt
    value: 0.01   # < 1% errors
  - metric: latency_p99
    operator: lt
    value: 500.0  # p99 < 500ms
```
## Installation

Homebrew / pre-built binary: coming soon.

From crates.io:

From source:

Docker:
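Until pre-built binaries land, the crates.io route works anywhere a Rust toolchain is set up; it is the same install step the CI workflow later in this README uses:

```shell
# Install the CLI from crates.io (requires a Rust toolchain)
cargo install lmn
```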
## Quick Start

```shell
# 100 GET requests, see latency table

# POST with an inline JSON body

# 1000 requests, 50 concurrent

# Authenticated request using an env var

# Run from a YAML config file
```
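Of these, the config-file form appears verbatim in the CI workflow later in this README; a minimal invocation:

```shell
# Run a load test described by a YAML config file
lmn run -f lmn.yaml
```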
## Authentication and Headers

Attach headers to every request with the repeatable `--header` flag.
Use `${ENV_VAR}` in header values to avoid hardcoding secrets. A `.env` file in the working directory is loaded automatically at startup.

```
# .env
API_TOKEN=my-secret-token
```
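Putting the pieces together, a sketch of an authenticated run (assuming `--header` combines with the config-file form as shown; the header values mirror the config example below):

```shell
# Repeatable --header flags; ${API_TOKEN} is read from .env or the environment
lmn run -f lmn.yaml \
  --header "Authorization: Bearer ${API_TOKEN}" \
  --header "X-Tenant-ID: acme"
```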
Headers can also live in the config file:

```yaml
run:
  host: https://api.example.com
  headers:
    Authorization: "Bearer ${API_TOKEN}"
    X-Tenant-ID: "acme"
```
When the same header key appears in both, the CLI `--header` value takes precedence over the config file.
## Dynamic Request Bodies

Generate a unique request body per request from a JSON template:

- `{{placeholder}}` — generates a fresh value per request
- `{{placeholder:once}}` — generates once, reused across all requests
- `{{ENV:VAR_NAME}}` — resolved from the environment at startup, no definition needed
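Taken together, a template sketch using only the syntax above (the field names are illustrative, not prescribed by lmn):

```json
{
  "request_id": "{{placeholder}}",
  "run_id": "{{placeholder:once}}",
  "tenant": "{{ENV:TENANT_ID}}"
}
```

Here `request_id` would differ on every request, `run_id` would stay constant for the whole run, and `tenant` would be read once from the environment.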
Store a template as a reusable alias:
See `lmn-core/TEMPLATES.md` for the full placeholder reference.
## Load Curves

Scale virtual users over time with a staged load curve:

```yaml
# lmn.yaml
run:
  host: https://api.example.com
  method: post
execution:
  stages:
    - duration: 30s
      target_vus: 5
    - duration: 2m
      target_vus: 50
      ramp: linear
    - duration: 30s
      target_vus: 0
      ramp: linear
thresholds:
  - metric: latency_p95
    operator: lt
    value: 2000.0
```
## CI Integration

Use exit code 2 (threshold failure) to gate deployments:

```yaml
# .github/workflows/load-test.yml
name: Load Test
on:
  pull_request:
jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install lmn
        run: cargo install lmn
      - name: Run load test
        run: lmn run -f lmn.yaml
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}
```
Exit codes:
| Code | Meaning |
|------|---------|
| 0 | Run completed, all thresholds passed |
| 1 | Error — invalid config, unreachable host |
| 2 | Run completed, one or more thresholds failed |
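Outside GitHub Actions, the same gate can be written explicitly in a shell step. In this sketch, `lmn` is stubbed with a function returning 2 so the snippet runs without the binary installed; delete the stub to use the real CLI:

```shell
# Stand-in so this snippet runs anywhere; remove to use the real binary.
lmn() { return 2; }

rc=0
lmn run -f lmn.yaml || rc=$?
case "$rc" in
  0) echo "thresholds passed" ;;
  2) echo "thresholds failed, blocking deploy" ;;
  *) echo "run error" ;;
esac
```

The `|| rc=$?` pattern keeps the step alive under `set -e` long enough to branch on the exit code.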
## Observability

Stream traces to any OpenTelemetry-compatible backend:

Start a local Tempo + Grafana stack from `lmn-cli/`:

```shell
# Grafana at http://localhost:3000 → Explore → Tempo
```
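The exporter flags are elided above; if lmn's OTel setup follows the standard OTLP exporter configuration (an assumption, not confirmed by this README), the backend endpoint would be set via the usual environment variable:

```shell
# Assumption: lmn honors the standard OTLP exporter env vars
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 lmn run -f lmn.yaml
```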
## Output

```shell
# ASCII table (default)

# JSON to stdout

# ASCII table + JSON artifact
```
## Reference

- `lmn-cli/CLI.md` — full flag and config reference
- `lmn-core/TEMPLATES.md` — template placeholder reference
- `examples/` — ready-to-use configs, templates, and load curves
## Project Structure

```
lmn/
├── lmn-core/   # engine, templates, HTTP, thresholds (library crate)
└── lmn-cli/    # CLI entry point, OTel setup (binary crate)
```
## License
Apache-2.0 — see LICENSE.