//! Curated canned scenarios for common scheduler test patterns.
//!
//! Each function takes a [`Ctx`] and returns `Result<AssertResult>`.
//! These are thin wrappers over existing scenario implementations,
//! providing better names in a single discoverable namespace.
//!
//! # Categories
//!
//! - **Basic**: steady-state cgroups with no dynamic ops.
//! - **Cpuset**: cpuset assignment and mid-run mutation.
//! - **Dynamic**: cgroup add/remove during a running workload.
//! - **Affinity**: per-worker CPU affinity patterns.
//! - **Stress**: oversubscription, per-CPU pinning, diverse workloads.
//! - **Nested**: workers in nested sub-cgroups.
//!
//! # Example
//!
//! ```rust,no_run
//! use ktstr::prelude::*;
//!
//! #[ktstr_test(sockets = 2, cores = 4, threads = 1)]
//! fn test_steady(ctx: &Ctx) -> Result<AssertResult> {
//!     scenarios::steady(ctx)
//! }
//! ```
// Import paths assume these types are re-exported at the crate root.
use crate::AssertResult;
use crate::Ctx;
use crate::Result;
use crate::WorkType;

// ---------------------------------------------------------------------------
// Basic
// ---------------------------------------------------------------------------
/// Two cgroups, no cpusets, equal CPU-spin load.
///
/// Simplest possible scenario: tests that the scheduler can handle
/// two cgroups running simultaneously without starvation.
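// Minimal wrapper sketch: only the `steady` name is confirmed by the
// module-level example; the `super::basic::steady` target is assumed.
pub fn steady(ctx: &Ctx) -> Result<AssertResult> {
    super::basic::steady(ctx)
}
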
/// Two cgroups with LLC-aligned cpusets.
///
/// Each cgroup gets CPUs from a different LLC. Tests scheduler
/// behavior when cgroups are partitioned along cache boundaries.
/// Skips on single-LLC topologies.
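// Wrapper sketch; the `llc_split` name and `super::basic` target are assumed.
pub fn llc_split(ctx: &Ctx) -> Result<AssertResult> {
    super::basic::llc_split(ctx)
}
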
/// Two cgroups with 32 mixed workers each (oversubscribed).
///
/// Worker count far exceeds CPU count, testing dispatch under
/// heavy oversubscription with mixed workload types.
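// Wrapper sketch; the `oversubscribed` name and delegation target are assumed.
pub fn oversubscribed(ctx: &Ctx) -> Result<AssertResult> {
    super::basic::oversubscribed(ctx)
}
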
// ---------------------------------------------------------------------------
// Cpuset — delegates to super::cpuset
// ---------------------------------------------------------------------------
/// Two cgroups start without cpusets, then get disjoint cpusets mid-run.
///
/// Tests the scheduler's response to cpuset assignment on running
/// cgroups. Workers must migrate to their assigned CPUs.
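// Wrapper sketch; the name and the exact `super::cpuset` function are assumed.
pub fn cpuset_assign_midrun(ctx: &Ctx) -> Result<AssertResult> {
    super::cpuset::assign_midrun(ctx)
}
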
/// Two cgroups start with disjoint cpusets, then cpusets are cleared mid-run.
///
/// Tests the scheduler's response to cpuset removal. Workers that
/// were confined to a subset of CPUs become free to run anywhere.
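// Wrapper sketch; the name and the exact `super::cpuset` function are assumed.
pub fn cpuset_clear_midrun(ctx: &Ctx) -> Result<AssertResult> {
    super::cpuset::clear_midrun(ctx)
}
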
/// Two cgroups with cpusets that shrink then grow.
///
/// Three-phase scenario: even split, then shrink cg_0 / grow cg_1,
/// then reverse. Tests scheduler adaptation to cpuset resizing.
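// Wrapper sketch; the name and the exact `super::cpuset` function are assumed.
pub fn cpuset_shrink_grow(ctx: &Ctx) -> Result<AssertResult> {
    super::cpuset::shrink_grow(ctx)
}
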
// ---------------------------------------------------------------------------
// Dynamic — delegates to super::dynamic
// ---------------------------------------------------------------------------
/// Two cgroups initially, then one or two more added mid-run.
///
/// Tests the scheduler's response to new cgroups appearing while
/// workers are already running.
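// Wrapper sketch; the name and the exact `super::dynamic` function are assumed.
pub fn cgroups_added_midrun(ctx: &Ctx) -> Result<AssertResult> {
    super::dynamic::add_cgroups(ctx)
}
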
/// Four cgroups initially, then the second half removed mid-run.
///
/// Tests the scheduler's response to cgroup removal while workers
/// in surviving cgroups continue running.
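// Wrapper sketch; the name and the exact `super::dynamic` function are assumed.
pub fn cgroups_removed_midrun(ctx: &Ctx) -> Result<AssertResult> {
    super::dynamic::remove_cgroups(ctx)
}
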
// ---------------------------------------------------------------------------
// Affinity — delegates to super::affinity
// ---------------------------------------------------------------------------
/// Two cgroups with worker affinities randomized mid-run.
///
/// Workers start with no affinity, then get random CPU subsets
/// applied four times during the run.
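// Wrapper sketch; the name and the exact `super::affinity` function are assumed.
pub fn affinity_randomized(ctx: &Ctx) -> Result<AssertResult> {
    super::affinity::randomized(ctx)
}
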
/// Two cgroups with workers pinned to a 2-CPU subset.
///
/// All workers in both cgroups share the same narrow affinity mask.
/// Tests scheduler behavior under heavy contention on few CPUs.
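// Wrapper sketch; the name and the exact `super::affinity` function are assumed.
pub fn affinity_narrow_pin(ctx: &Ctx) -> Result<AssertResult> {
    super::affinity::narrow_pin(ctx)
}
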
// ---------------------------------------------------------------------------
// Stress — delegates to super::basic / super::interaction
// ---------------------------------------------------------------------------
/// Host workers competing with cgroup workers for CPU time.
///
/// Two cgroups plus unconstrained host workers (one per CPU).
/// Tests scheduler fairness between cgroup-managed and
/// non-cgroup-managed tasks.
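// Wrapper sketch; the name and delegation to `super::interaction` are assumed.
pub fn host_vs_cgroup(ctx: &Ctx) -> Result<AssertResult> {
    super::interaction::host_workers(ctx)
}
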
/// Heavy + bursty + IO cgroups.
///
/// Three cgroups with different workload types: CPU-heavy, bursty
/// wake/sleep, and synchronous IO. Tests fairness across mixed
/// workload patterns.
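// Wrapper sketch; the name and delegation target are assumed. The underlying
// scenario presumably selects the CPU-heavy, bursty, and IO `WorkType`s.
pub fn mixed_workloads(ctx: &Ctx) -> Result<AssertResult> {
    super::basic::mixed_workloads(ctx)
}
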
// ---------------------------------------------------------------------------
// Nested — delegates to super::nested
// ---------------------------------------------------------------------------
/// Workers in nested sub-cgroups.
///
/// Creates a multi-level cgroup hierarchy (cg_0/sub_a, cg_0/sub_b,
/// cg_1/sub_b, cg_1/sub_a/deep) with workers at the leaf level.
/// Tests scheduler handling of nested cgroup hierarchies.
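// Wrapper sketch; the name and the exact `super::nested` function are assumed.
pub fn nested_hierarchy(ctx: &Ctx) -> Result<AssertResult> {
    super::nested::hierarchy(ctx)
}
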
/// Move tasks between nested cgroups.
///
/// Creates nested cgroups, spawns workers in one, then moves them
/// through the hierarchy (sub -> parent -> sibling/sub -> sibling).
/// Tests task migration across nesting levels.
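// Wrapper sketch; the name and the exact `super::nested` function are assumed.
pub fn nested_migration(ctx: &Ctx) -> Result<AssertResult> {
    super::nested::migration(ctx)
}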