# ktstr
[CI](https://github.com/likewhatevs/ktstr/actions/workflows/ci.yml) ·
[Coverage](https://codecov.io/gh/likewhatevs/ktstr) ·
[Guide](https://likewhatevs.github.io/ktstr/guide/) ·
[API docs](https://likewhatevs.github.io/ktstr/api/ktstr/) ·
[Issues](https://github.com/likewhatevs/ktstr/issues)
> **Early stage.** APIs, CLI, and internals are actively evolving.
> Expect breaking changes between releases.
Test harness for Linux process schedulers, with a focus on
[sched_ext](https://github.com/sched-ext/scx).
## Why ktstr?
sched_ext lets you write Linux process schedulers as BPF programs.
A scheduler runs on every CPU and
affects every process -- bugs cause system-wide stalls or crashes.
Scheduler behavior depends on CPU topology, cgroup hierarchy, workload
mix, and kernel version. You cannot cover this with unit tests because
the relevant state only exists inside a running kernel. For comparison,
ktstr also runs the same scenarios under EEVDF (the kernel's built-in
scheduler) as a baseline.
Without ktstr, testing means manually booting a VM, setting up cgroups,
running workloads, and eyeballing whether things went wrong -- with no
reproducibility across machines because topology varies per host. ktstr
automates this:
- **Clean slate** -- each test boots its own kernel in a KVM VM. No
shared state between tests.
- **Topology as code** -- `topology(1, 2, 4, 2)` gives you 1 NUMA
node, 2 LLCs (last-level caches), 4 cores per LLC, and 2 threads
per core. Supported on x86_64 and aarch64. The same test produces
the same topology on any host.
- **Declarative scenarios** -- tests declare cgroups, cpusets, and
workloads as data (`CgroupDef`, `Step`, `Op`). The framework
handles the rest.
- **Automated assertions** -- checks for starvation, cgroup
isolation violations, and CPU time fairness. No manual inspection.
- **[Gauntlet](https://likewhatevs.github.io/ktstr/guide/running-tests/gauntlet.html)** --
one `#[ktstr_test]` expands across the cross-product of topology
presets (4-252 vCPUs, 1-15 LLCs, optional SMT and multi-NUMA) and
scheduler flag profiles, filtered by per-test constraints.
- **Host-side introspection** -- reads kernel state and BPF maps
from guest memory without guest-side instrumentation.
- **Per-thread profile diff** -- `ktstr ctprof capture` collects
every live thread's scheduling, memory, I/O, and taskstats
delay counters into a snapshot; `ktstr ctprof compare` diffs
two snapshots for thread-level scheduling/memory/I/O
regression hunting.
- **Auto-repro** -- on failure, reruns the scenario with BPF probes
on the crash call chain, capturing arguments and struct state at
each call site.
- **[Features](https://likewhatevs.github.io/ktstr/guide/features.html)** --
the full feature list across testing, observability, debugging, and
infrastructure.
## Installation
Add ktstr as a dev-dependency:
```toml
[dev-dependencies]
ktstr = { version = "0.4" }
```
This is all test authors need -- run with
`cargo ktstr test --kernel ../linux` (wraps
[cargo-nextest](https://nexte.st/) with kernel resolution).
The `anyhow::Result` referenced in examples below is re-exported
through `ktstr::prelude`; no separate `anyhow` dev-dependency needed.
**Optional CLI tools**:
```sh
cargo install --locked ktstr --bin ktstr --bin cargo-ktstr
```
This installs the two user-facing binaries:
- `ktstr` -- standalone CLI for kernel cache management,
interactive VM shells, host-wide per-thread profiling, and
lock introspection
- `cargo-ktstr` -- wraps `cargo nextest run` with kernel resolution,
coverage, verifier stats, shell access, and `cargo ktstr export`
for reproducing test scenarios as self-contained shell scripts
The workspace defines two additional `[[bin]]` targets —
`ktstr-jemalloc-probe` and `ktstr-jemalloc-alloc-worker` — but
these are test-fixture binaries spawned by integration tests
(`tests/jemalloc_probe_tests.rs`), not commands operators run
directly. The `--bin` flags above scope the install to just the
two user-facing entry points; without them, `cargo install`
would also place the test-fixture binaries on `$PATH`.
`scx-ktstr` (the test fixture scheduler) is built automatically
by the workspace and does not need a separate install.
## Setup
**Linux only (x86_64, aarch64).** ktstr boots KVM virtual machines;
it does not build or run on other platforms.
**Required:**
- Linux host with `/dev/kvm`
- Rust >= 1.94.1 (stable, pinned via `rust-toolchain.toml`)
- clang (BPF skeleton compilation)
- pkg-config, make, gcc
- autoconf, autopoint, flex, bison, gawk -- needed for the vendored
libbpf/libelf/zlib build
- BTF (`/sys/kernel/btf/vmlinux` -- present by default on most
distros; set `KTSTR_KERNEL` if missing)
- Internet access on first build (downloads busybox source)
**Optional:**
- [cargo-nextest](https://nexte.st/) -- nextest runs each test as a
separate process, letting a `#[ctor]` hook intercept its
`--list`/`--exact` protocol to expand `#[ktstr_test]` entries across
topology presets and flag profiles. `cargo test` uses an in-process
harness where the hook falls through, so only the base topology runs.
- Test kernel: Linux 6.12+ with sched_ext for scheduler tests;
`cargo ktstr kernel build` fetches and caches one. See
[Supported kernels](https://likewhatevs.github.io/ktstr/guide/features.html#supported-kernels).
```sh
# Ubuntu/Debian
sudo apt install clang pkg-config make gcc autoconf autopoint flex bison gawk
# Fedora
sudo dnf install clang pkgconf make gcc autoconf gettext-devel flex bison gawk
```
**liblzma note:** ktstr links `xz2` with the `static` feature — no
separate `liblzma-dev` / `xz-devel` package is needed. See
[CONTRIBUTING.md](CONTRIBUTING.md#liblzma-build-configuration)
for the dynamic-link path if you're modifying the workspace.
**Test files** go in `tests/` as standard Rust integration tests. Use `#[ktstr_test]` from `ktstr::prelude::*`.
See the [getting started guide](https://likewhatevs.github.io/ktstr/guide/getting-started.html) for kernel discovery and building a test kernel.
## Quick start
### Write a test
Declare cgroups and workers as data. No scheduler setup required:
```rust
use ktstr::prelude::*;
#[ktstr_test(llcs = 1, cores = 2, threads = 1)]
fn two_cgroups(ctx: &Ctx) -> Result<AssertResult> {
    execute_defs(ctx, vec![
        CgroupDef::named("cg_0").workers(2),
        CgroupDef::named("cg_1").workers(2),
    ])
}
```
Each test boots a KVM VM, creates the declared cgroups and workers,
runs the workload, and checks for starvation and fairness. For
canned scenarios, see `scenarios::steady` in the
[getting started guide](https://likewhatevs.github.io/ktstr/guide/getting-started.html).
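As a hedged sketch of what a canned scenario looks like in a test (the
exact path and signature of `scenarios::steady` are assumptions here --
the getting started guide is authoritative):

```rust
use ktstr::prelude::*;

// Hedged sketch only: a `Ctx`-in / `AssertResult`-out shape is assumed
// for `scenarios::steady`; see the getting started guide for the real API.
#[ktstr_test(llcs = 1, cores = 2, threads = 1)]
fn steady_baseline(ctx: &Ctx) -> Result<AssertResult> {
    ktstr::scenarios::steady(ctx)
}
```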
### Define a scheduler
To test a custom sched_ext scheduler, use `#[derive(Scheduler)]` to
declare the binary, default topology, and feature flags:
```rust
use ktstr::prelude::*;
#[derive(Scheduler)]
// topology(1, 2, 4, 1): 1 NUMA node, 2 LLCs, 4 cores/LLC, 1 thread/core
#[scheduler(name = "my_sched", binary = "scx_my_sched", topology(1, 2, 4, 1))]
enum MySchedFlag {
    #[flag(args = ["--enable-llc"])]
    Llc,
    #[flag(args = ["--enable-stealing"], requires = [Llc])]
    Steal,
}
```
`binary = "scx_my_sched"` tells ktstr to auto-discover the scheduler
binary in `target/{debug,release}/`, the directory containing the test
binary, or an explicit path set via the `KTSTR_SCHEDULER` env var. If the
scheduler is a `[[bin]]` target in the same workspace, `cargo build`
places it there and discovery is automatic. The resolved binary is
packed into the VM's initramfs. Tests without a `scheduler` attribute
run under EEVDF (the kernel's default scheduler).
`topology(numa_nodes, llcs, cores_per_llc, threads_per_core)` sets
the VM's CPU topology -- `topology(1, 2, 4, 1)` creates 1 NUMA node,
2 LLCs, 4 cores per LLC, 1 thread per core (8 vCPUs). Topologies
display as `NnNlNcNt` (e.g. `1n2l4c1t`). In `#[ktstr_test]`, use
named attributes instead: `llcs = 2, cores = 4, threads = 1,
numa_nodes = 1`. Unset dimensions inherit from the scheduler's
topology. For non-uniform NUMA, see `Topology::with_nodes()` in the
[topology guide](https://likewhatevs.github.io/ktstr/guide/concepts/topology.html).
This generates two consts and per-variant flag constants:
- `const MY_SCHED: Scheduler` — the scheduler definition itself,
for use in builder chains and library code that needs the bare
`Scheduler` type.
- `const MY_SCHED_PAYLOAD: Payload` — a `&'static Payload`
wrapper around `MY_SCHED` (kind: `PayloadKind::Scheduler`), used
wherever a `Payload` reference is expected. The `scheduler =`
slot on `#[ktstr_test]` is one such site; pass the
`_PAYLOAD` form, not the bare `Scheduler` form.
Tests referencing `MY_SCHED_PAYLOAD` inherit its topology and
flags. Add `scheduler = MY_SCHED_PAYLOAD` to `#[ktstr_test]` to use it:
```rust
#[ktstr_test(scheduler = MY_SCHED_PAYLOAD)]
fn sched_two_cgroups(ctx: &Ctx) -> Result<AssertResult> {
    execute_defs(ctx, vec![
        CgroupDef::named("cg_0").workers(2),
        CgroupDef::named("cg_1").workers(2),
    ])
}
```
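Named attributes on the test override individual dimensions while the
rest inherit from the scheduler. A minimal sketch (worker counts are
illustrative):

```rust
// MY_SCHED declares 1n2l4c1t; overriding only `threads` yields 1n2l4c2t
// (16 vCPUs), with the NUMA/LLC/core counts inherited from the scheduler.
#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, threads = 2)]
fn sched_two_cgroups_smt(ctx: &Ctx) -> Result<AssertResult> {
    execute_defs(ctx, vec![
        CgroupDef::named("cg_0").workers(4),
        CgroupDef::named("cg_1").workers(4),
    ])
}
```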
The topology macro argument requires `llcs` to be an exact multiple
of `numa_nodes`: `topology(1, 2, 4, 1)` (2 LLCs, 1 NUMA node) is
fine, while `topology(2, 3, ...)` is rejected at compile time.
### Multi-step scenarios
For dynamic topology changes, use `execute_steps` with `Step` and
`HoldSpec`:
```rust
use ktstr::prelude::*;
#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, llcs = 1, cores = 4, threads = 1)]
fn cpuset_split(ctx: &Ctx) -> Result<AssertResult> {
    let steps = vec![Step::with_defs(
        vec![
            CgroupDef::named("cg_0").with_cpuset(CpusetSpec::Disjoint { index: 0, of: 2 }),
            CgroupDef::named("cg_1").with_cpuset(CpusetSpec::Disjoint { index: 1, of: 2 }),
        ],
        HoldSpec::FULL,
    )];
    execute_steps(ctx, steps)
}
```
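The example above holds a single step; a hedged sketch of a two-step
sequence, reusing only the calls already shown (re-declaring a cgroup
name in a later step is assumed to reconfigure it):

```rust
use ktstr::prelude::*;

#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, llcs = 1, cores = 4, threads = 1)]
fn share_then_split(ctx: &Ctx) -> Result<AssertResult> {
    let steps = vec![
        // Step 1: both cgroups share every CPU.
        Step::with_defs(
            vec![
                CgroupDef::named("cg_0").workers(2),
                CgroupDef::named("cg_1").workers(2),
            ],
            HoldSpec::FULL,
        ),
        // Step 2: split them onto disjoint cpuset halves.
        Step::with_defs(
            vec![
                CgroupDef::named("cg_0").with_cpuset(CpusetSpec::Disjoint { index: 0, of: 2 }),
                CgroupDef::named("cg_1").with_cpuset(CpusetSpec::Disjoint { index: 1, of: 2 }),
            ],
            HoldSpec::FULL,
        ),
    ];
    execute_steps(ctx, steps)
}
```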
### Run a binary payload
To run a binary workload (`schbench`, `fio`, `stress-ng`, or anything
else) as part of a test, declare a `Payload` and
reference it via `payload = ...` (primary slot) or
`workloads = [...]` (additional slots):
```rust
// SCHBENCH is a `pub const SCHBENCH: Payload` declared in
// tests/common/fixtures.rs. Bring it into scope alongside the
// fixtures module's other re-exports (SCHBENCH_HINTED,
// SCHBENCH_JSON, etc.) before the test references it.
mod common; // requires tests/common/fixtures.rs setup -- see tests/common/ in the repo
use common::fixtures::*;
#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, payload = SCHBENCH)]
fn schbench_under_my_sched(ctx: &Ctx) -> Result<AssertResult> {
    let report = ctx.payload(&SCHBENCH).run()?;
    Ok(AssertResult::from(report))
}
```
See
[Payload Definitions](https://likewhatevs.github.io/ktstr/guide/writing-tests/scheduler-definitions.html#derive-payload)
for the `#[derive(Payload)]` macro and the full field surface
(`default_args`, `default_checks`, `metrics`, `include_files`).
`tests/common/fixtures.rs` carries reusable examples
(`SCHBENCH`, `SCHBENCH_HINTED`, `SCHBENCH_JSON`).
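Combining the two slots, a hedged sketch (the attribute surface and
fixture names come from above; filling both slots at once is an
assumption):

```rust
// Hedged sketch: `payload =` fills the primary slot, `workloads = [...]`
// adds extra slots; each payload is still run explicitly via ctx.payload().
#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, payload = SCHBENCH, workloads = [SCHBENCH_HINTED])]
fn schbench_with_hints(ctx: &Ctx) -> Result<AssertResult> {
    ctx.payload(&SCHBENCH).run()?;
    let hinted = ctx.payload(&SCHBENCH_HINTED).run()?;
    Ok(AssertResult::from(hinted))
}
```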
### Run
```sh
cargo ktstr test --kernel ../linux
```
`--kernel` accepts a kernel source tree path (e.g. `../linux`,
auto-built on first use), a version (`6.14.2`, or `6.14` for
latest patch), a cache key (see `kernel list`), a version
range (`6.12..6.14`), or a git source (`git+URL#REF`).
`cargo ktstr test` wraps `cargo nextest run` with kernel
resolution (source tree, version, or cache key), kconfig
fragment merging, and shell access. Bare `cargo nextest run`
works only when a kernel image is already cached under the key
the test binary expects.
Requires `/dev/kvm`.
Passing tests:
```
PASS [ 11.34s] my_crate::my_sched_tests ktstr/two_cgroups
PASS [ 14.02s] my_crate::my_sched_tests ktstr/sched_two_cgroups
PASS [ 13.87s] my_crate::my_sched_tests ktstr/cpuset_split
```
A failing test prints assertion details:
```
FAIL [ 12.05s] my_crate::my_sched_tests ktstr/two_cgroups
--- STDERR ---
ktstr_test 'two_cgroups' [topo=1n1l2c1t] failed:
stuck 3500ms on cpu1 at +1200ms
--- stats ---
4 workers, 2 cpus, 8 migrations, worst_spread=12.3%, worst_gap=3500ms
cg0: workers=2 cpus=2 spread=5.1% gap=3500ms migrations=4 iter=15230
cg1: workers=2 cpus=2 spread=12.3% gap=890ms migrations=4 iter=14870
```
### Dev workflow
These commands require `cargo install --locked ktstr` (see [Installation](#installation)).
The lowest-friction loop is to build a cached kernel once, then run
tests against the cache:
```sh
cargo ktstr kernel build # latest stable into XDG cache
cargo nextest run # tests find the cached kernel
```
`cargo ktstr` wraps the full workflow and has subcommands beyond
`test`:
```sh
cargo ktstr test # build/resolve kernel + run tests
cargo ktstr nextest # visible alias for `test`
cargo ktstr test --kernel ~/linux -- -E 'test(my_test)' # local source tree + nextest filter
cargo ktstr coverage # tests under cargo-llvm-cov nextest
cargo ktstr llvm-cov report --lcov --output-path lcov.info # raw llvm-cov passthrough (report/clean/show-env)
cargo ktstr kernel build 6.14.2 # cache a specific version
cargo ktstr kernel build --source ~/linux # build from local source tree
cargo ktstr kernel build --git URL --ref v6.14 # shallow-clone a git tree
cargo ktstr kernel list # list cached kernels (shows (EOL) tags)
cargo ktstr kernel clean --keep 3 # keep 3 most recent
cargo ktstr model fetch # prefetch the LlmExtract model
cargo ktstr model status # report whether a SHA-checked model is cached
cargo ktstr verifier --scheduler scx_my_sched # BPF verifier stats
cargo ktstr stats # aggregate gauntlet sidecars
cargo ktstr stats show-host --run <key> # print archived HostContext for a run
cargo ktstr show-host # print current host context
cargo ktstr show-thresholds my_test # print resolved Assert thresholds for a test
cargo ktstr export my_test # write a self-contained .run for bare-metal repro
cargo ktstr shell --kernel 6.14.2 # interactive VM shell
cargo ktstr shell --kernel 6.14.2 --no-perf-mode # shell on shared runners (skip flock/pinning/RT)
cargo ktstr completions bash # shell completions
```
### Standalone CLI
`ktstr` is the debugging companion to the `#[ktstr_test]`
test harness. It owns kernel cache management, interactive VM
shells, host-wide per-thread profiling, and lock introspection.
Every `ktstr kernel ...` subcommand is identical to the corresponding
`cargo ktstr kernel ...`.
```sh
ktstr topo # show host CPU topology
ktstr shell --kernel 6.14.2 # interactive VM shell (kernel optional)
ktstr kernel list # manage cached kernels
ktstr kernel build 6.14.2
ktstr kernel build --source ../linux
ktstr kernel build --git URL --ref v6.14
ktstr kernel clean --keep 3
ktstr ctprof capture --output baseline.ctprof.zst # snapshot every live thread's counters
# ctprof capture pulls per-thread jemalloc counters via ptrace; needs root,
# `sudo setcap cap_sys_ptrace+eip $(which ktstr)`, or `kernel.yama.ptrace_scope=0`
ktstr ctprof compare baseline.ctprof.zst candidate.ctprof.zst # diff two snapshots on the selected grouping axis
ktstr ctprof show baseline.ctprof.zst # render one snapshot, no diff math
ktstr ctprof metric-list # discover the metric vocabulary (--sort-by / --metrics)
ktstr locks # enumerate held flocks (read-only)
ktstr completions bash
```
To reproduce a test scenario as a bare-metal shell script
without the test harness, use `cargo ktstr export`.
## Release profile — `panic = "abort"`
ktstr's release profile sets `panic = "abort"`. Any panic on any
thread tears down the entire process without unwinding: `Drop`
impls do not run, `std::panic::catch_unwind` cannot observe the
failure, and `libc::abort` raises SIGABRT, so control never
returns to user code.
Contributors working on library or binary code should keep every
thread that runs in the release profile panic-free -- especially
the monitor loop, KVM vCPU threads, and anything spawned from
`WorkloadHandle`. Relying on `catch_unwind` as a soft failure
boundary is a bug; introduce explicit `Result` plumbing instead.
The only escape hatch is `panic_hook` (see `src/vmm/vcpu_panic.rs`),
which runs synchronously on the panicking thread before `libc::abort`
to flip the kill/exited signalling atomics; it does not recover, it
only classifies.
Tests run under the default `panic = "unwind"` profile, so
`catch_unwind` works as expected inside `#[test]` bodies — but
code paths that only execute under the release profile cannot be
tested for unwind-safety directly.
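A hypothetical illustration (not ktstr code) of the preferred shape:
failures surface as `Result`s instead of panics that `catch_unwind`
could never observe under the release profile:

```rust
// Hypothetical helper, not from the ktstr codebase: under panic = "abort"
// an out-of-bounds index would abort the whole process, so the bounds
// check is surfaced as an error the caller plumbs upward, not a panic.
fn vcpu_counter(counters: &[u64], vcpu: usize) -> Result<u64, String> {
    counters
        .get(vcpu)
        .copied()
        .ok_or_else(|| format!("no counter slot for vcpu {vcpu}"))
}
```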
## Documentation
**[Guide](https://likewhatevs.github.io/ktstr/guide/)** -- getting started, concepts,
writing tests, recipes, architecture.
**[ctprof reference](https://likewhatevs.github.io/ktstr/guide/reference/ctprof.html)** --
metric registry, aggregation rules, taskstats kconfig gating,
adding-a-metric guide.
**[API docs](https://likewhatevs.github.io/ktstr/api/ktstr/)** -- rustdoc for all workspace crates.
## Contributing
Pull requests welcome. See [CONTRIBUTING.md](CONTRIBUTING.md)
for the workflow, coding conventions, and how to run the test
suite locally.
## License
GPL-2.0-only