ktstr

Early stage. APIs, CLI, and internals are actively evolving. Expect breaking changes between releases.

Test harness for Linux process schedulers, with a focus on sched_ext.

Why ktstr?

sched_ext lets you write Linux process schedulers as BPF programs. A scheduler runs on every CPU and affects every process -- bugs cause system-wide stalls or crashes. Scheduler behavior depends on CPU topology, cgroup hierarchy, workload mix, and kernel version. You cannot test this with unit tests because the relevant state only exists inside a running kernel. ktstr also runs the same scenarios under EEVDF (the kernel's built-in scheduler) as a baseline for comparison.

Without ktstr, testing means manually booting a VM, setting up cgroups, running workloads, and eyeballing whether things went wrong -- with no reproducibility across machines because topology varies per host. ktstr automates this:

  • Clean slate -- each test boots its own kernel in a KVM VM. No shared state between tests.
  • Topology as code -- topology(1, 2, 4, 2) gives you 1 NUMA node, 2 LLCs (last-level caches), 4 cores per LLC, and 2 threads per core (16 vCPUs). Supported on x86_64 and aarch64. The same test produces the same topology on any host.
  • Declarative scenarios -- tests declare cgroups, cpusets, and workloads as data (CgroupDef, Step, Op). The framework handles the rest.
  • Automated assertions -- checks for starvation, cgroup isolation violations, and CPU time fairness. No manual inspection.
  • Gauntlet -- one #[ktstr_test] expands across the cross-product of topology presets (4-252 vCPUs, 1-15 LLCs, optional SMT and multi-NUMA) and scheduler flag profiles, filtered by per-test constraints.
  • Host-side introspection -- reads kernel state and BPF maps from guest memory without guest-side instrumentation.
  • Auto-repro -- on failure, reruns the scenario with BPF probes on the crash call chain, capturing arguments and struct state at each call site.
  • And more -- see Features for the full list of testing, observability, debugging, and infrastructure capabilities.

Installation

Add ktstr as a dev-dependency:

[dev-dependencies]
ktstr = { version = "0.4" }

This is all test authors need -- run with cargo-nextest or cargo test. The anyhow::Result referenced in examples below is re-exported through ktstr::prelude; no separate anyhow dev-dependency needed.

Optional CLI tools: cargo install --locked ktstr installs the two user-facing binaries:

  • ktstr -- host-side CLI for running scenarios outside VMs and managing cached kernel images
  • cargo-ktstr -- wraps cargo nextest run with kernel resolution, coverage, verifier stats, and shell access

The workspace defines two additional [[bin]] targets, ktstr-jemalloc-probe and ktstr-jemalloc-alloc-worker. These are test-fixture binaries spawned by integration tests (tests/jemalloc_probe_tests.rs), not commands operators run directly, and the cargo install flow does not surface them as user-facing entry points.

scx-ktstr (the test fixture scheduler) is built automatically by the workspace and does not need a separate install.

Setup

Linux only (x86_64, aarch64). ktstr boots KVM virtual machines; it does not build or run on other platforms.

Required:

  • Linux host with /dev/kvm
  • Rust >= 1.95 (stable, pinned via rust-toolchain.toml)
  • clang (BPF skeleton compilation)
  • pkg-config, make, gcc
  • autotools (autoconf, autopoint, flex, bison, gawk) -- vendored libbpf/libelf/zlib build
  • BTF (/sys/kernel/btf/vmlinux -- present by default on most distros; set KTSTR_KERNEL if missing)
  • Internet access on first build (downloads busybox source)

Optional:

  • cargo-nextest -- nextest runs each test as a separate process, letting a #[ctor] hook intercept its --list/--exact protocol to expand #[ktstr_test] entries across topology presets and flag profiles. cargo test uses an in-process harness where the hook falls through, so only the base topology runs.
  • Test kernel: Linux 6.12+ with sched_ext for scheduler tests; cargo ktstr kernel build fetches and caches one. See Supported kernels.

Install the build dependencies with your distribution's package manager:

# Ubuntu/Debian
sudo apt install clang pkg-config make gcc autoconf autopoint flex bison gawk

# Fedora
sudo dnf install clang pkgconf make gcc autoconf gettext-devel flex bison gawk

liblzma note: ktstr links xz2 with the static feature — no separate liblzma-dev / xz-devel package is needed. See CONTRIBUTING.md for the dynamic-link path if you're modifying the workspace.

Test files go in tests/ as standard Rust integration tests. Use #[ktstr_test] from ktstr::prelude::*.

See the getting started guide for kernel discovery and building a test kernel.

Quick start

Write a test

Declare cgroups and workers as data. No scheduler setup required:

use ktstr::prelude::*;

#[ktstr_test(llcs = 1, cores = 2, threads = 1)]
fn two_cgroups(ctx: &Ctx) -> Result<AssertResult> {
    execute_defs(ctx, vec![
        CgroupDef::named("cg_0").workers(2),
        CgroupDef::named("cg_1").workers(2),
    ])
}

Each test boots a KVM VM, creates the declared cgroups and workers, runs the workload, and checks for starvation and fairness. For canned scenarios, see scenarios::steady in the getting started guide.

Define a scheduler

To test a custom sched_ext scheduler, use #[derive(Scheduler)] to declare the binary, default topology, and feature flags:

use ktstr::prelude::*;

#[derive(Scheduler)]
#[scheduler(name = "my_sched", binary = "scx_my_sched", topology(1, 2, 4, 1))]
enum MySchedFlag {
    #[flag(args = ["--enable-llc"])]
    Llc,
    #[flag(args = ["--enable-stealing"], requires = [Llc])]
    Steal,
}

binary = "scx_my_sched" tells ktstr to auto-discover the scheduler binary in target/{debug,release}/, the directory containing the test binary, or an explicit path via KTSTR_SCHEDULER env var. If the scheduler is a [[bin]] target in the same workspace, cargo build places it there and discovery is automatic. The resolved binary is packed into the VM's initramfs. Tests without a scheduler attribute run under EEVDF (the kernel's default scheduler).

topology(numa_nodes, llcs, cores_per_llc, threads_per_core) sets the VM's CPU topology -- topology(1, 2, 4, 1) creates 1 NUMA node, 2 LLCs, 4 cores per LLC, 1 thread per core (8 vCPUs). Topologies display as NnNlNcNt (e.g. 1n2l4c1t). In #[ktstr_test], use named attributes instead: llcs = 2, cores = 4, threads = 1, numa_nodes = 1. Unset dimensions inherit from the scheduler's topology. For non-uniform NUMA, see Topology::with_nodes() in the topology guide.
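
For example, a test can pin the same 1n2l4c1t shape explicitly instead of inheriting it from a scheduler. This is a minimal sketch using only the named attributes and helpers shown in this README; the test name is hypothetical:

use ktstr::prelude::*;

// Equivalent to topology(1, 2, 4, 1): 1 NUMA node, 2 LLCs, 4 cores per LLC,
// 1 thread per core (8 vCPUs), spelled out with the named attributes.
#[ktstr_test(numa_nodes = 1, llcs = 2, cores = 4, threads = 1)]
fn explicit_topology(ctx: &Ctx) -> Result<AssertResult> {
    execute_defs(ctx, vec![CgroupDef::named("cg_0").workers(4)])
}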

The #[derive(Scheduler)] above generates two consts and per-variant flag constants:

  • const MY_SCHED: Scheduler — the scheduler definition itself, for use in builder chains and library code that needs the bare Scheduler type.
  • const MY_SCHED_PAYLOAD: Payload — a &'static Payload wrapper around MY_SCHED (kind: PayloadKind::Scheduler), used wherever a Payload reference is expected. The scheduler = slot on #[ktstr_test] is one such site; pass the _PAYLOAD form, not the bare Scheduler form.

Tests referencing MY_SCHED_PAYLOAD inherit its topology and flags. Add scheduler = MY_SCHED_PAYLOAD to #[ktstr_test] to use it:

#[ktstr_test(scheduler = MY_SCHED_PAYLOAD)]
fn sched_two_cgroups(ctx: &Ctx) -> Result<AssertResult> {
    execute_defs(ctx, vec![
        CgroupDef::named("cg_0").workers(2),
        CgroupDef::named("cg_1").workers(2),
    ])
}

The topology macro argument requires llcs to be an exact multiple of numa_nodes; topology(1, 2, 4, 1) (2 LLCs, 1 NUMA node) is fine, topology(2, 3, ...) is rejected at compile time.
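
For example, a two-node layout that satisfies the constraint could be declared like this. A sketch only: the scheduler name, binary, and flag are hypothetical, but the attribute shape matches MySchedFlag above:

use ktstr::prelude::*;

// 2 NUMA nodes, 4 LLCs (a multiple of 2), 4 cores per LLC, 2 threads per core.
#[derive(Scheduler)]
#[scheduler(name = "numa_sched", binary = "scx_numa_sched", topology(2, 4, 4, 2))]
enum NumaSchedFlag {
    // Hypothetical flag, mirroring the #[flag] syntax shown above.
    #[flag(args = ["--numa-aware"])]
    Numa,
}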

Multi-step scenarios

For dynamic topology changes, use execute_steps with Step and HoldSpec:

use ktstr::prelude::*;

#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, llcs = 1, cores = 4, threads = 1)]
fn cpuset_split(ctx: &Ctx) -> Result<AssertResult> {
    let steps = vec![Step::with_defs(
        vec![
            CgroupDef::named("cg_0").with_cpuset(CpusetSpec::Disjoint { index: 0, of: 2 }),
            CgroupDef::named("cg_1").with_cpuset(CpusetSpec::Disjoint { index: 1, of: 2 }),
        ],
        HoldSpec::FULL,
    )];
    execute_steps(ctx, steps)
}

Run a binary payload

To run a binary workload (schbench, fio, stress-ng, anything else) as part of a test, declare a Payload and reference it via payload = ... (primary slot) or workloads = [...] (additional slots):

// SCHBENCH is a `pub const SCHBENCH: Payload` declared in
// tests/common/fixtures.rs. Bring it into scope alongside the
// fixtures module's other re-exports (SCHBENCH_HINTED,
// SCHBENCH_JSON, etc.) before the test references it.
mod common;
use common::fixtures::*;

#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, payload = SCHBENCH)]
fn schbench_under_my_sched(ctx: &Ctx) -> Result<AssertResult> {
    let report = ctx.payload(&SCHBENCH).run()?;
    Ok(AssertResult::from(report))
}
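
The same attribute surface accepts additional slots. A hedged sketch, reusing the fixtures from tests/common/fixtures.rs; how secondary slots are consumed inside the test body is not shown here, so only the primary slot is run explicitly:

// Sketch: `workloads = [...]` declares secondary payload slots alongside the
// primary `payload = ...` slot. Only the primary payload is run explicitly.
#[ktstr_test(scheduler = MY_SCHED_PAYLOAD, payload = SCHBENCH, workloads = [SCHBENCH_HINTED])]
fn schbench_with_hinted_background(ctx: &Ctx) -> Result<AssertResult> {
    let report = ctx.payload(&SCHBENCH).run()?;
    Ok(AssertResult::from(report))
}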

See Payload Definitions for the #[derive(Payload)] macro and the full field surface (default_args, default_checks, metrics, include_files). tests/common/fixtures.rs carries reusable examples (SCHBENCH, SCHBENCH_HINTED, SCHBENCH_JSON).

Run

cargo nextest run

Requires /dev/kvm.

Passing tests:

    PASS [  11.34s] my_crate::my_sched_tests ktstr/two_cgroups
    PASS [  14.02s] my_crate::my_sched_tests ktstr/sched_two_cgroups
    PASS [  13.87s] my_crate::my_sched_tests ktstr/cpuset_split

A failing test prints assertion details:

    FAIL [  12.05s] my_crate::my_sched_tests ktstr/two_cgroups

--- STDERR ---
ktstr_test 'two_cgroups' [topo=1n1l2c1t] failed:
  stuck 3500ms on cpu1 at +1200ms

--- stats ---
4 workers, 2 cpus, 8 migrations, worst_spread=12.3%, worst_gap=3500ms
  cg0: workers=2 cpus=2 spread=5.1% gap=3500ms migrations=4 iter=15230
  cg1: workers=2 cpus=2 spread=12.3% gap=890ms migrations=4 iter=14870

Dev workflow

These commands require cargo install --locked ktstr (see Installation). The frictionless loop is to build a cached kernel once and then run tests against the cache:

cargo ktstr kernel build                                   # latest stable into XDG cache
cargo nextest run                                          # tests find the cached kernel

cargo ktstr wraps the full workflow and has subcommands beyond test:

cargo ktstr test                                           # build/resolve kernel + run tests
cargo ktstr nextest                                        # visible alias for `test`
cargo ktstr test --kernel ~/linux -- -E 'test(my_test)'    # local source tree + nextest filter
cargo ktstr coverage                                       # tests under cargo-llvm-cov nextest
cargo ktstr llvm-cov report --lcov --output-path lcov.info # raw llvm-cov passthrough (report/clean/show-env)
cargo ktstr kernel build 6.14.2                            # cache a specific version
cargo ktstr kernel build --source ~/linux                  # build from local source tree
cargo ktstr kernel build --git URL --ref v6.14             # shallow-clone a git tree
cargo ktstr kernel list                                    # list cached kernels (shows (EOL) tags)
cargo ktstr kernel clean --keep 3                          # keep 3 most recent
cargo ktstr model fetch                                    # prefetch the LlmExtract model
cargo ktstr model status                                   # report whether a SHA-checked model is cached
cargo ktstr verifier --scheduler scx_my_sched              # BPF verifier stats
cargo ktstr stats                                          # aggregate gauntlet sidecars
cargo ktstr stats show-host --run <key>                    # print archived HostContext for a run
cargo ktstr show-host                                      # print current host context
cargo ktstr show-thresholds my_test                        # print resolved Assert thresholds for a test
cargo ktstr cleanup                                        # remove leftover ktstr cgroups
cargo ktstr shell --kernel 6.14.2                          # interactive VM shell
cargo ktstr shell --kernel 6.14.2 --no-perf-mode           # shell on shared runners (skip flock/pinning/RT)
cargo ktstr completions bash                               # shell completions

Host-side CLI

ktstr runs scenarios directly on the host under whatever scheduler is already active -- no VM, just the real hardware. This complements #[ktstr_test] library tests, which boot a KVM VM per test with a controlled virtual topology for reproducible results.

The ktstr kernel ... subcommands behave identically to their cargo ktstr kernel ... counterparts.

ktstr list                                                 # list available scenarios
ktstr run                                                  # run all scenarios on the host
ktstr topo                                                 # show host CPU topology
ktstr cleanup                                              # remove leftover cgroups
ktstr shell --kernel 6.14.2                                # interactive VM shell (kernel optional)
ktstr kernel list                                          # manage cached kernels
ktstr kernel build 6.14.2
ktstr kernel build --source ../linux
ktstr kernel build --git URL --ref v6.14
ktstr kernel clean --keep 3
ktstr host-state capture --output baseline.hst.zst         # snapshot every live thread's counters
ktstr host-state compare baseline.hst.zst candidate.hst.zst # diff two snapshots on (pcomm, comm)
ktstr completions bash

Or via cargo run from the workspace:

cargo run --bin ktstr -- list
cargo run --bin ktstr -- run

Release profile — panic = "abort"

ktstr's release profile sets panic = "abort". Any panic on any thread tears down the entire process without unwinding: Drop impls do not run, std::panic::catch_unwind cannot observe the failure, and libc::abort delivers SIGABRT before the kernel returns control.

Contributors writing library or binary code should write panic-free code on every thread that runs in the release profile — especially the monitor loop, KVM vCPU threads, and anything spawned from WorkloadHandle. Relying on catch_unwind as a soft failure boundary is a bug; introduce explicit Result plumbing instead. The only escape hatch is panic_hook (see src/vmm/vcpu_panic.rs), which runs synchronously on the panicking thread before libc::abort to flip kill/exited signalling atomics; it does not recover, only classifies.
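
As a generic illustration of that Result plumbing (plain std, not ktstr API): a worker thread reports failures as values over a channel rather than panicking, because under panic = "abort" the parent thread never gets a chance to observe a panic.

use std::sync::mpsc;
use std::thread;

// Do one unit of work fallibly; errors propagate as values, never as panics.
fn do_step() -> Result<(), std::io::Error> {
    Ok(())
}

// Spawn a worker that sends its outcome back instead of relying on unwinding.
fn spawn_worker() -> mpsc::Receiver<Result<(), String>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let outcome = do_step().map_err(|e| format!("worker step failed: {e}"));
        let _ = tx.send(outcome);
    });
    rx
}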

Tests run under the default panic = "unwind" profile, so catch_unwind works as expected inside #[test] bodies — but code paths that only execute under the release profile cannot be tested for unwind-safety directly.

Documentation

Guide -- getting started, concepts, writing tests, recipes, architecture.

API docs -- rustdoc for all workspace crates.

License

GPL-2.0-only