§lmn-core
Core engine for lmn — a fast HTTP load testing tool.
This crate provides the building blocks for running load tests programmatically: HTTP execution, dynamic request templating, load curve definitions, result sampling, threshold evaluation, and report generation. The lmn CLI is built entirely on top of this crate.
Full documentation at https://lmn.talek.cloud
§Features
- Fixed and curve-based execution — run N requests at fixed concurrency, or drive VU counts dynamically over time with linear or step ramps
- Dynamic request templates — JSON templates with typed placeholder generators (strings, floats), `:once` placeholders, and `${ENV_VAR}` secret injection
- Response field tracking — extract and aggregate typed fields from response bodies across all requests
- Two-stage sampling — VU-threshold gate + Vitter’s Algorithm R reservoir to bound memory while preserving statistical accuracy
- Threshold evaluation — pass/fail rules on latency percentiles, error rate, and throughput
- Structured reports — serializable `RunReport` with percentiles, per-stage breakdowns, status code distribution, and sampling metadata
- OpenTelemetry tracing — all major operations are instrumented with named spans
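The reservoir half of the two-stage sampler is classic Vitter's Algorithm R. A minimal, self-contained sketch of that algorithm (not lmn-core's internals — the `reservoir_sample` function and the tiny LCG standing in for a real RNG are illustrative assumptions):

```rust
// A small linear congruential generator so the sketch has no external deps.
struct Lcg(u64);

impl Lcg {
    /// Returns a pseudo-random value in 0..bound (bound must be > 0).
    fn next_below(&mut self, bound: u64) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 33) % bound
    }
}

/// Algorithm R: keep a uniform sample of at most `k` items from a stream
/// of unknown length, using O(k) memory.
fn reservoir_sample<T>(stream: impl Iterator<Item = T>, k: usize) -> Vec<T> {
    let mut reservoir: Vec<T> = Vec::with_capacity(k);
    let mut rng = Lcg(0x9E3779B97F4A7C15);
    for (i, item) in stream.enumerate() {
        if reservoir.len() < k {
            // Fill phase: the first k items are always kept.
            reservoir.push(item);
        } else {
            // Replacement phase: item i survives with probability k/(i+1).
            let j = rng.next_below(i as u64 + 1) as usize;
            if j < k {
                reservoir[j] = item;
            }
        }
    }
    reservoir
}

fn main() {
    // e.g. bound 10_000 simulated latency samples to a 100-entry reservoir
    let sample = reservoir_sample(0..10_000u32, 100);
    assert_eq!(sample.len(), 100);
}
```

The key property is that memory stays bounded at `k` entries no matter how long the run is, while every observed item has an equal chance of ending up in the sample.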
§Usage
```toml
[dependencies]
lmn-core = "0.1"
tokio = { version = "1", features = ["full"] }
serde_json = "1"
```
§Key types
| Type | Module | Purpose |
|---|---|---|
| `RunCommand` | `command::run` | Entry point — owns request spec, execution mode, and sampling config |
| `ExecutionMode` | `execution` | `Fixed { request_count, concurrency }` or `Curve(LoadCurve)` |
| `RequestSpec` | `execution` | Host, method, body, template paths, headers |
| `SamplingConfig` | `execution` | VU threshold and reservoir size |
| `RunStats` | `execution` | Raw output of a completed run |
| `RunReport` | `output` | Serializable report built from `RunStats` |
| `LoadCurve` | `load_curve` | Staged VU ramp definition (parses from JSON) |
| `Threshold` | `threshold` | Single pass/fail rule on a metric |
§Minimal example — fixed load test
```rust
use lmn_core::command::{Command, Commands, HttpMethod};
use lmn_core::command::run::RunCommand;
use lmn_core::execution::{ExecutionMode, RequestSpec};
use lmn_core::output::{RunReport, RunReportParams};

#[tokio::main]
async fn main() {
    let cmd = RunCommand {
        request: RequestSpec {
            host: "https://example.com/api/ping".to_string(),
            method: HttpMethod::Get,
            body: None,
            template_path: None,
            response_template_path: None,
            headers: vec![],
        },
        execution: ExecutionMode::Fixed {
            request_count: 1000,
            concurrency: 50,
        },
    };

    if let Ok(Some(stats)) = Commands::Run(cmd).execute().await {
        let report = RunReport::from_params(RunReportParams { stats: &stats });
        println!("{}", serde_json::to_string_pretty(&report).unwrap());
    }
}
```
§Load curves
Define time-based VU ramps using LoadCurve, which parses from JSON:
```rust
use lmn_core::load_curve::LoadCurve;
use lmn_core::execution::ExecutionMode;

let curve: LoadCurve = r#"{
    "stages": [
        { "duration": "30s", "target_vus": 5 },
        { "duration": "2m", "target_vus": 50, "ramp": "linear" },
        { "duration": "30s", "target_vus": 0, "ramp": "linear" }
    ]
}"#.parse().unwrap();

let execution = ExecutionMode::Curve(curve);
```
A stage's ramp defaults to `"linear"` if omitted. Use `"step"` for an immediate jump.
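To make the linear-vs-step distinction concrete, here is a hypothetical sketch of how a staged ramp could resolve the VU count at a given instant. The `Stage` struct, `Ramp` enum, and `vus_at` function are illustrative assumptions, not LoadCurve's actual internals:

```rust
#[derive(Clone, Copy)]
enum Ramp {
    Linear,
    Step,
}

struct Stage {
    duration_s: f64,
    target_vus: u32,
    ramp: Ramp,
}

/// VU count `t` seconds into the run, starting from `start_vus`.
fn vus_at(stages: &[Stage], mut start_vus: u32, mut t: f64) -> u32 {
    for s in stages {
        if t <= s.duration_s {
            return match s.ramp {
                // "step": jump to the target immediately at stage start.
                Ramp::Step => s.target_vus,
                // "linear": interpolate from the previous level to the target.
                Ramp::Linear => {
                    let frac = t / s.duration_s;
                    let from = start_vus as f64;
                    let to = s.target_vus as f64;
                    (from + (to - from) * frac).round() as u32
                }
            };
        }
        t -= s.duration_s;
        // The next stage ramps from this stage's target.
        start_vus = s.target_vus;
    }
    start_vus // past the end of the curve: hold the final level
}

fn main() {
    let stages = [
        Stage { duration_s: 30.0, target_vus: 5, ramp: Ramp::Step },
        Stage { duration_s: 120.0, target_vus: 50, ramp: Ramp::Linear },
    ];
    assert_eq!(vus_at(&stages, 0, 10.0), 5); // step stage: already at target
    assert_eq!(vus_at(&stages, 0, 90.0), 28); // 60s into the 5→50 linear ramp
}
```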
§Request templates
Load a JSON template file with typed placeholder definitions:
```rust
use std::path::Path;
use lmn_core::request_template::Template;

let template = Template::parse(Path::new("request.json")).unwrap();

// Generate on demand — thread-safe, used by both fixed and curve executors.
let body = template.generate_one();

// Or pre-generate N bodies upfront (e.g. for deterministic replay).
let bodies = template.pre_generate(1000);
```
Template files embed placeholder definitions under `_loadtest_metadata_templates`:
```json
{
  "user_id": "{{user_id}}",
  "amount": "{{price}}",
  "_loadtest_metadata_templates": {
    "user_id": {
      "type": "string",
      "details": { "length": { "min": 8, "max": 16 } }
    },
    "price": {
      "type": "float",
      "min": 1.0,
      "max": 999.99,
      "details": { "decimals": 2 }
    }
  }
}
```
Environment variables are resolved at template load time:
```json
{ "token": "{{ENV:API_TOKEN}}" }
```
See Template Placeholders for the full reference.
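Conceptually, rendering a body boils down to substituting each `{{name}}` marker with a generated value. A std-only sketch of that substitution step (the `render` helper is an illustrative assumption, not lmn-core's API):

```rust
use std::collections::HashMap;

/// Replace every {{name}} marker in `template` with its value.
fn render(template: &str, values: &HashMap<&str, String>) -> String {
    let mut out = template.to_string();
    for (name, value) in values {
        // format! doubles braces to emit literal "{{name}}".
        out = out.replace(&format!("{{{{{}}}}}", name), value);
    }
    out
}

fn main() {
    let mut values = HashMap::new();
    values.insert("user_id", "ab12cd34".to_string());
    values.insert("price", "19.99".to_string());

    let body = render(
        r#"{"user_id": "{{user_id}}", "amount": "{{price}}"}"#,
        &values,
    );
    assert_eq!(body, r#"{"user_id": "ab12cd34", "amount": "19.99"}"#);
}
```

In the real crate the values come from the typed generators declared in `_loadtest_metadata_templates` (random strings of a given length, floats with fixed decimals, and so on) rather than a hand-built map.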
§Thresholds
```rust
use lmn_core::threshold::{evaluate, EvaluateParams, parse_thresholds};

let thresholds = parse_thresholds(r#"{
    "thresholds": [
        { "metric": "latency_p99", "operator": "lt", "value": 200.0 },
        { "metric": "error_rate", "operator": "lt", "value": 0.01 }
    ]
}"#).unwrap();

// `report` is a RunReport from a completed run.
let result = evaluate(EvaluateParams { report: &report, thresholds: &thresholds });
if !result.all_passed() {
    eprintln!("thresholds failed: {:?}", result);
}
```
`parse_thresholds` accepts both JSON and YAML strings. Available metrics: `latency_p50`, `latency_p75`, `latency_p90`, `latency_p95`, `latency_p99`, `error_rate`, `throughput_rps`. Operators: `lt`, `lte`, `gt`, `gte`, `eq`.
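Each rule is a single comparison of an observed metric against a limit. An illustrative sketch of that comparison (the `Operator` enum and `passes` function are assumptions for exposition, not lmn-core's actual types):

```rust
enum Operator {
    Lt,
    Lte,
    Gt,
    Gte,
    Eq,
}

/// Does the observed metric value satisfy the rule?
fn passes(observed: f64, op: &Operator, limit: f64) -> bool {
    match op {
        Operator::Lt => observed < limit,
        Operator::Lte => observed <= limit,
        Operator::Gt => observed > limit,
        Operator::Gte => observed >= limit,
        // Exact float equality is fragile; compare within a tolerance.
        Operator::Eq => (observed - limit).abs() < f64::EPSILON,
    }
}

fn main() {
    // latency_p99 lt 200.0 — an observed p99 of 180 ms passes.
    assert!(passes(180.0, &Operator::Lt, 200.0));
    // error_rate lt 0.01 — a 2% error rate fails.
    assert!(!passes(0.02, &Operator::Lt, 0.01));
}
```

A run passes overall only when every rule passes, which is what `all_passed()` reports.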
§Configuration
If you prefer YAML-based configuration, lmn-core exposes a full config parser:
```rust
use lmn_core::config::parse_config;

let config = parse_config(yaml_str).unwrap();
```
See the Config File Reference for the full YAML schema.
§License
Apache-2.0