ktstr
Early stage. APIs, CLI, and internals are actively evolving. Expect breaking changes between releases.
Test harness for Linux process schedulers, with a focus on sched_ext.
Why ktstr?
sched_ext lets you write Linux process schedulers as BPF programs. A scheduler runs on every CPU and affects every process -- bugs cause system-wide stalls or crashes. Scheduler behavior depends on CPU topology, cgroup hierarchy, workload mix, and kernel version. You cannot test this with unit tests because the relevant state only exists inside a running kernel. ktstr also tests under EEVDF (the kernel's built-in scheduler) as a baseline.
Without ktstr, testing means manually booting a VM, setting up cgroups, running workloads, and eyeballing whether things went wrong -- with no reproducibility across machines because topology varies per host. ktstr automates this:
- Clean slate -- each test boots its own kernel in a KVM VM. No shared state between tests.
- Topology as code -- `topology(1, 2, 4, 2)` gives you 1 NUMA node, 2 LLCs (last-level caches), 4 cores/LLC, 2 threads/core. Supports x86_64 and aarch64. The same test produces the same topology on any host.
- Declarative scenarios -- tests declare cgroups, cpusets, and workloads as data (`CgroupDef`, `Step`, `Op`). The framework handles the rest.
- Automated assertions -- checks for starvation, cgroup isolation violations, and CPU time fairness. No manual inspection.
- Gauntlet -- one `#[ktstr_test]` expands across the cross-product of topology presets (4-252 vCPUs, 1-15 LLCs, optional SMT and multi-NUMA) and scheduler flag profiles, filtered by per-test constraints.
- Host-side introspection -- reads kernel state and BPF maps from guest memory without guest-side instrumentation.
- Per-thread profile diff -- `ktstr ctprof capture` walks every live thread's scheduling, memory, I/O, and taskstats delay counters into a snapshot; `ktstr ctprof compare` diffs two snapshots for thread-level scheduling/memory/I/O regression hunting.
- Auto-repro -- on failure, reruns the scenario with BPF probes on the crash call chain, capturing arguments and struct state at each call site.
- Features -- testing, observability, debugging, and infrastructure.
Installation
Add ktstr as a dev-dependency:
```toml
[dev-dependencies]
ktstr = { version = "0.4" }
```
This is all test authors need -- run with `cargo ktstr test --kernel ../linux` (wraps `cargo-nextest` with kernel resolution).
The `anyhow::Result` referenced in examples below is re-exported through `ktstr::prelude`; no separate anyhow dev-dependency is needed.
Optional CLI tools:

```sh
cargo install --locked ktstr --bin ktstr --bin cargo-ktstr
```

This installs the two user-facing binaries:
- `ktstr` -- standalone CLI for kernel cache management, interactive VM shells, host-wide per-thread profiling, and lock introspection
- `cargo-ktstr` -- wraps `cargo nextest run` with kernel resolution, coverage, verifier stats, shell access, and `cargo ktstr export` for reproducing test scenarios as self-contained shell scripts
The workspace defines two additional [[bin]] targets —
ktstr-jemalloc-probe and ktstr-jemalloc-alloc-worker — but
these are test-fixture binaries spawned by integration tests
(tests/jemalloc_probe_tests.rs), not commands operators run
directly. The --bin flags above scope the install to just the
two user-facing entry points; without them, cargo install
would also place the test-fixture binaries on $PATH.
scx-ktstr (the test fixture scheduler) is built automatically
by the workspace and does not need a separate install.
Setup
Linux only (x86_64, aarch64). ktstr boots KVM virtual machines; it does not build or run on other platforms.
Required:
- Linux host with `/dev/kvm`
- Rust >= 1.94.1 (stable, pinned via `rust-toolchain.toml`)
- clang (BPF skeleton compilation)
- pkg-config, make, gcc
- autotools (autoconf, autopoint, flex, bison, gawk) -- vendored libbpf/libelf/zlib build
- BTF (`/sys/kernel/btf/vmlinux` -- present by default on most distros; set `KTSTR_KERNEL` if missing)
- Internet access on first build (downloads busybox source)
Optional:

- cargo-nextest -- nextest runs each test as a separate process, letting a `#[ctor]` hook intercept its `--list`/`--exact` protocol to expand `#[ktstr_test]` entries across topology presets and flag profiles. `cargo test` uses an in-process harness where the hook falls through, so only the base topology runs.
- Test kernel: Linux 6.12+ with sched_ext for scheduler tests; `cargo ktstr kernel build` fetches and caches one. See Supported kernels.
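To illustrate the mechanism behind that expansion (this is a sketch, not ktstr's actual hook), a harness intercepting nextest's `--list` protocol would print one entry per topology preset for each base test, so nextest schedules a separate process for each. The preset labels and the `@` naming format below are assumptions:

```rust
/// Hypothetical expansion of one base test name across topology presets,
/// mimicking how a `#[ctor]` hook could rewrite the `--list` output.
/// The preset strings here are illustrative, not ktstr's real preset table.
fn expand_test_names(base: &str, presets: &[&str]) -> Vec<String> {
    presets.iter().map(|p| format!("{base}@{p}")).collect()
}

fn main() {
    // A real hook would only do this when argv contains `--list`;
    // under plain `cargo test` it would fall through to the base entry.
    let presets = ["1n1l2c1t", "1n2l4c1t", "2n4l8c2t"]; // assumed labels
    for name in expand_test_names("ktstr/two_cgroups", &presets) {
        println!("{name}");
    }
}
```

Because each expanded entry is a distinct test name to nextest, every preset gets its own process and its own VM boot, which is why the in-process `cargo test` harness only runs the base topology.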
```sh
# Ubuntu/Debian (package names assumed from the dependency list above)
sudo apt-get install clang pkg-config make gcc autoconf autopoint flex bison gawk
# Fedora
sudo dnf install clang pkgconf-pkg-config make gcc autoconf gettext-devel flex bison gawk
```
liblzma note: ktstr links xz2 with the static feature — no
separate liblzma-dev / xz-devel package is needed. See
CONTRIBUTING.md
for the dynamic-link path if you're modifying the workspace.
Test files go in `tests/` as standard Rust integration tests. Use `#[ktstr_test]` from `ktstr::prelude::*`.
See the getting started guide for kernel discovery and building a test kernel.
Quick start
Write a test
Declare cgroups and workers as data. No scheduler setup required:
```rust
use ktstr::prelude::*;
```
Each test boots a KVM VM, creates the declared cgroups and workers,
runs the workload, and checks for starvation and fairness. For
canned scenarios, see scenarios::steady in the
getting started guide.
Define a scheduler
To test a custom sched_ext scheduler, use #[derive(Scheduler)] to
declare the binary, default topology, and feature flags:
```rust
use ktstr::prelude::*;

// topology(1, 2, 4, 1): 1 NUMA node, 2 LLCs, 4 cores/LLC, 1 thread/core
```
binary = "scx_my_sched" tells ktstr to auto-discover the scheduler
binary in target/{debug,release}/, the directory containing the test
binary, or an explicit path via KTSTR_SCHEDULER env var. If the
scheduler is a [[bin]] target in the same workspace, cargo build
places it there and discovery is automatic. The resolved binary is
packed into the VM's initramfs. Tests without a scheduler attribute
run under EEVDF (the kernel's default scheduler).
topology(numa_nodes, llcs, cores_per_llc, threads_per_core) sets
the VM's CPU topology -- topology(1, 2, 4, 1) creates 1 NUMA node,
2 LLCs, 4 cores per LLC, 1 thread per core (8 vCPUs). Topologies
display as NnNlNcNt (e.g. 1n2l4c1t). In #[ktstr_test], use
named attributes instead: llcs = 2, cores = 4, threads = 1, numa_nodes = 1. Unset dimensions inherit from the scheduler's
topology. For non-uniform NUMA, see Topology::with_nodes() in the
topology guide.
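As a sanity check on the arithmetic, here is a minimal sketch (a stand-in, not ktstr's real `Topology` type) that derives the vCPU count and the `NnNlNcNt` display string from the four macro arguments:

```rust
/// Stand-in for the four topology(numa_nodes, llcs, cores_per_llc,
/// threads_per_core) arguments; ktstr's actual Topology type is richer.
struct Topo {
    numa_nodes: u32,
    llcs: u32,
    cores_per_llc: u32,
    threads_per_core: u32,
}

impl Topo {
    /// Total vCPUs: llcs is the total LLC count across all NUMA nodes,
    /// each LLC holds cores_per_llc cores, each core threads_per_core threads.
    fn vcpus(&self) -> u32 {
        self.llcs * self.cores_per_llc * self.threads_per_core
    }

    /// Display form used in failure output, e.g. "1n2l4c1t".
    fn display(&self) -> String {
        format!(
            "{}n{}l{}c{}t",
            self.numa_nodes, self.llcs, self.cores_per_llc, self.threads_per_core
        )
    }
}

fn main() {
    let t = Topo { numa_nodes: 1, llcs: 2, cores_per_llc: 4, threads_per_core: 1 };
    println!("{} -> {} vCPUs", t.display(), t.vcpus()); // 1n2l4c1t -> 8 vCPUs
}
```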
This generates two consts and per-variant flag constants:
- `const MY_SCHED: Scheduler` -- the scheduler definition itself, for use in builder chains and library code that needs the bare `Scheduler` type.
- `const MY_SCHED_PAYLOAD: Payload` -- a `&'static Payload` wrapper around `MY_SCHED` (kind: `PayloadKind::Scheduler`), used wherever a `Payload` reference is expected. The `scheduler =` slot on `#[ktstr_test]` is one such site; pass the `_PAYLOAD` form, not the bare `Scheduler` form.
Tests referencing MY_SCHED_PAYLOAD inherit its topology and
flags. Add scheduler = MY_SCHED_PAYLOAD to #[ktstr_test] to use it:
The topology macro argument requires llcs to be an exact multiple
of numa_nodes; topology(1, 2, 4, 1) (2 LLCs, 1 NUMA node) is
fine, topology(2, 3, ...) is rejected at compile time.
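To make the shape of the generated consts concrete, here is a rough sketch with stand-in types; the field names and struct layout are illustrative assumptions, not ktstr's real definitions:

```rust
/// Stand-in types approximating what #[derive(Scheduler)] emits.
/// Field names here are hypothetical, not ktstr's real API.
#[derive(Debug, PartialEq)]
enum PayloadKind {
    Scheduler,
}

struct Scheduler {
    binary: &'static str,
}

struct Payload {
    kind: PayloadKind,
    scheduler: &'static Scheduler,
}

// Roughly what the derive generates for a scheduler named MySched:
const MY_SCHED: Scheduler = Scheduler { binary: "scx_my_sched" };
const MY_SCHED_PAYLOAD: Payload =
    Payload { kind: PayloadKind::Scheduler, scheduler: &MY_SCHED };

fn main() {
    // #[ktstr_test(scheduler = ...)] expects the _PAYLOAD form.
    assert_eq!(MY_SCHED_PAYLOAD.kind, PayloadKind::Scheduler);
    println!("{}", MY_SCHED_PAYLOAD.scheduler.binary);
}
```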
Multi-step scenarios
For dynamic topology changes, use execute_steps with Step and
HoldSpec:
```rust
use ktstr::prelude::*;
```
Run a binary payload
To run a binary workload (schbench, fio, stress-ng,
anything else) as part of a test, declare a Payload and
reference it via payload = ... (primary slot) or
workloads = [...] (additional slots):
```rust
// SCHBENCH is a `pub const SCHBENCH: Payload` declared in
// tests/common/fixtures.rs. Bring it into scope alongside the
// fixtures module's other re-exports (SCHBENCH_HINTED,
// SCHBENCH_JSON, etc.) before the test references it.
// Requires tests/common/fixtures.rs setup -- see tests/common/ in the repo.
use ktstr::prelude::*;
```
See Payload Definitions for the `#[derive(Payload)]` macro and the full field surface (`default_args`, `default_checks`, `metrics`, `include_files`). `tests/common/fixtures.rs` carries reusable examples (`SCHBENCH`, `SCHBENCH_HINTED`, `SCHBENCH_JSON`).
Run
`--kernel` accepts a kernel source tree path (e.g. `../linux`, auto-built on first use), a version (`6.14.2`, or `6.14` for the latest patch), a cache key (see `kernel list`), a version range (`6.12..6.14`), or a git source (`git+URL#REF`).
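The accepted spellings can be distinguished syntactically. The sketch below infers classification rules from the list above; it is not ktstr's actual parser, and the `"HEAD"` default ref is an assumption:

```rust
#[derive(Debug, PartialEq)]
enum KernelSpec<'a> {
    Git { url: &'a str, reference: &'a str },
    Range { from: &'a str, to: &'a str },
    Version(&'a str),
    /// Source tree path or cache key; telling those apart is ktstr's job.
    Path(&'a str),
}

/// A bare version looks like 6.14 or 6.14.2: digits and dots, no "..".
fn is_version(s: &str) -> bool {
    !s.is_empty()
        && s.chars().all(|c| c.is_ascii_digit() || c == '.')
        && !s.contains("..")
}

fn classify(spec: &str) -> KernelSpec<'_> {
    // git+URL#REF ("HEAD" fallback is this sketch's assumption).
    if let Some(rest) = spec.strip_prefix("git+") {
        let (url, reference) = rest.split_once('#').unwrap_or((rest, "HEAD"));
        return KernelSpec::Git { url, reference };
    }
    // A range only if both sides are version-like, so "../linux" stays a path.
    if let Some((from, to)) = spec.split_once("..") {
        if is_version(from) && is_version(to) {
            return KernelSpec::Range { from, to };
        }
    }
    if is_version(spec) {
        return KernelSpec::Version(spec);
    }
    KernelSpec::Path(spec)
}
```

Note the ordering: the `git+` prefix is checked before the `..` range split so URLs containing dots are never misread as ranges.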
`cargo ktstr test` wraps `cargo nextest run` with kernel resolution (source tree, version, or cache key), kconfig fragment merging, and shell access. Bare `cargo nextest run` works only when the kernel image is already on the cache key the test binary expects.
Requires /dev/kvm.
Passing tests:
PASS [ 11.34s] my_crate::my_sched_tests ktstr/two_cgroups
PASS [ 14.02s] my_crate::my_sched_tests ktstr/sched_two_cgroups
PASS [ 13.87s] my_crate::my_sched_tests ktstr/cpuset_split
A failing test prints assertion details:
FAIL [ 12.05s] my_crate::my_sched_tests ktstr/two_cgroups
ktstr_test 'two_cgroups' [topo=1n1l2c1t] failed:
stuck 3500ms on cpu1 at +1200ms
4 workers, 2 cpus, 8 migrations, worst_spread=12.3%, worst_gap=3500ms
cg0: workers=2 cpus=2 spread=5.1% gap=3500ms migrations=4 iter=15230
cg1: workers=2 cpus=2 spread=12.3% gap=890ms migrations=4 iter=14870
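To make the `spread` figures in that output concrete, here is one plausible reading: relative spread of per-worker CPU time within a cgroup, computed as (max − min) / mean. This formula is an assumption for illustration; ktstr's actual assertion may compute it differently:

```rust
/// Relative spread of per-worker CPU time (milliseconds) within one
/// cgroup, as a percentage: (max - min) / mean * 100. Assumed formula
/// for illustration, not necessarily ktstr's exact metric.
fn spread_pct(cpu_ms: &[u64]) -> f64 {
    let max = *cpu_ms.iter().max().expect("at least one worker") as f64;
    let min = *cpu_ms.iter().min().expect("at least one worker") as f64;
    let mean = cpu_ms.iter().sum::<u64>() as f64 / cpu_ms.len() as f64;
    (max - min) / mean * 100.0
}

fn main() {
    // Two workers with slightly uneven CPU time.
    let times = [1023, 972];
    println!("spread = {:.1}%", spread_pct(&times));
}
```

Under this reading, perfectly fair scheduling gives 0% spread, and the `gap` column is the longest interval a worker went without running.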
Dev workflow
These commands require cargo install --locked ktstr (see Installation).
The frictionless loop is to build a cached kernel once and then run
tests against the cache:
cargo ktstr wraps the full workflow and has subcommands beyond
test:
Standalone CLI
ktstr is the debugging companion to the #[ktstr_test]
test harness. It owns kernel cache management, interactive VM
shells, host-wide per-thread profiling, and lock introspection.
Every ktstr kernel ... subcommand is identical to the corresponding
cargo ktstr kernel ....
# ctprof capture pulls per-thread jemalloc counters via ptrace; needs root,
# `sudo setcap cap_sys_ptrace+eip $(which ktstr)`, or `kernel.yama.ptrace_scope=0`
To reproduce a test scenario as a bare-metal shell script
without the test harness, use cargo ktstr export.
Release profile — panic = "abort"
ktstr's release profile sets panic = "abort". Any panic on any
thread tears down the entire process without unwinding: Drop
impls do not run, std::panic::catch_unwind cannot observe the
failure, and libc::abort delivers SIGABRT before the kernel
returns control.
Contributors writing library or binary code should write
panic-free code on every thread that runs in the release profile
— especially the monitor loop, KVM vCPU threads, and anything
spawned from WorkloadHandle. Relying on catch_unwind as a
soft failure boundary is a bug; introduce explicit Result
plumbing instead. The only escape hatch is panic_hook (see
src/vmm/vcpu_panic.rs), which runs synchronously on the
panicking thread before libc::abort to flip kill/exited
signalling atomics; it does not recover, only classifies.
Tests run under the default panic = "unwind" profile, so
catch_unwind works as expected inside #[test] bodies — but
code paths that only execute under the release profile cannot be
tested for unwind-safety directly.
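The guidance above reduces to: make fallibility explicit in the type. A minimal sketch of the preferred pattern (illustrative, not ktstr code; the function name and error type are placeholders):

```rust
use std::thread;

/// With panic = "abort", a panicking worker would kill the whole process,
/// so the worker reports failure through its return type instead.
fn monitor_tick(sample: i64) -> Result<i64, String> {
    if sample < 0 {
        // Explicit Result plumbing instead of panic!().
        return Err(format!("negative sample: {sample}"));
    }
    Ok(sample * 2)
}

fn main() {
    let handle = thread::spawn(|| monitor_tick(-1));
    // join() surfaces the worker's Result; no unwinding is relied upon,
    // so the same path behaves identically under panic = "abort".
    match handle.join().expect("worker thread itself must not panic") {
        Ok(v) => println!("ok: {v}"),
        Err(e) => eprintln!("worker error: {e}"),
    }
}
```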
Documentation
Guide -- getting started, concepts, writing tests, recipes, architecture.
ctprof reference -- metric registry, aggregation rules, taskstats kconfig gating, adding-a-metric guide.
API docs -- rustdoc for all workspace crates.
Contributing
Pull requests welcome. See CONTRIBUTING.md for the workflow, coding conventions, and how to run the test suite locally.
License
GPL-2.0-only