Karga — a small, flexible load-testing framework for Rust.
Karga is inspired by the design of Serde (a slim core with pluggable extensions) and by tools such as K6, RLT, and Goose for practical load-testing concerns.
The library is intentionally minimal: you provide small building blocks (metrics,
aggregates, reporters, executors) and compose them into a Scenario. For convenience,
there are a few built-in implementations that cover common use cases.
§Architecture
The main building blocks are:
- Scenario: configuration object that defines the action to be executed.
- Executor: responsible for actually running the scenario. Executors control concurrency and scheduling, and are the primary place where performance matters. We provide a high-performance StageExecutor, but executors are replaceable.
- Metric: the smallest unit produced by an action. A scenario's action returns a Metric describing a single sample.
- Aggregate: a lightweight, specialized collector that knows how to process Metrics into a compact intermediate representation.
- Report: transforms an Aggregate into human- or machine-friendly output.
- Reporter: consumes Reports and sends them somewhere (stdout, file, database).
§Design goals
- Small, well-documented core that is easy to extend.
- High performance in the executor layer — low allocation overhead and efficient scheduling are the primary optimizations.
- Composability: users can supply their own metrics/aggregates/reporters or use the built-ins for convenience.
§Example
A simple HTTP example:
use std::time::{Duration, Instant};
use karga::{
aggregate::BasicAggregate,
executor::{Stage, StageExecutor},
metric::BasicMetric,
report::{BasicReport, StdoutReporter},
Executor, Reporter, Scenario,
};
use reqwest::Client;
#[tokio::main]
async fn main() {
tracing_subscriber::fmt().init();
// Instantiate heavyweight resources such as clients outside the action;
// constructing them on every iteration would dominate the measured cost.
let client = Client::new();
let results: BasicAggregate = StageExecutor::builder()
.stages(vec![
// Start with a ramp up from 0 to 10 over 3 seconds
Stage::new(Duration::from_secs(3), 10.0),
// Increase the rate of change to go from 10 to 100 over the
// next 3 seconds
Stage::new(Duration::from_secs(3), 100.0),
// Ramp down from 100 to 10 over the final 3 seconds
Stage::new(Duration::from_secs(3), 10.0),
])
.build()
.exec(
&Scenario::builder()
.name("Http scenario")
.action(move || {
let client = client.clone();
async move {
let start = Instant::now();
// The target URL is hardcoded for this example
let res = client.get("http://localhost:3000").send().await;
let success = match res {
Ok(r) => r.status() == 200,
Err(_) => false,
};
let elapsed = start.elapsed();
BasicMetric {
latency: elapsed,
success,
// Byte counts are not tracked in this example
bytes: 0,
}
}
})
.build(),
)
.await
.unwrap();
let report = BasicReport::from(results);
// StdoutReporter is a plain struct, so it can be constructed inline
StdoutReporter {}.report(&report).await.unwrap();
}
This example demonstrates how Karga combines a simple scenario, a configurable executor, and built-in reporting to form a full benchmark pipeline.
§Feature flags
- common traits and registration. (Enabled by default)
- builtins: provides basic implementations (BasicMetric, BasicAggregate, BasicReport, StdoutReporter) for quick experiments and demos. (Enabled by default)
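Assuming standard Cargo feature handling, opting out of the built-in implementations might look like this (the version number is a placeholder, not a real release):

```toml
[dependencies]
# `builtins` is enabled by default; disable default features to use
# only your own metrics, aggregates, and reporters.
karga = { version = "*", default-features = false }
```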
§Re-exports
pub use aggregate::Aggregate;
pub use executor::Executor;
pub use executor::Stage;
pub use executor::StageExecutor;
pub use metric::Metric;
pub use report::Report;
pub use report::Reporter;
pub use scenario::Scenario;
§Modules
- aggregate
- Metric aggregators
- executor
- Orchestrators of runtime execution and rate control; executors define how the scenario actually runs
- metric
- Single metrics
- report
- Reports and Reporters
- scenario
- Main module of the framework that glues everything together; the Scenario struct defines the workload definition layer of Karga