//! BTF offsets for `BPF_MAP_TYPE_RINGBUF` / `BPF_MAP_TYPE_USER_RINGBUF`
//! and `BPF_MAP_TYPE_STACK_TRACE` diagnostic-map rendering.
//!
//! Both groups exist to surface read-only state from kernel BPF
//! infrastructure that the standard `read_value` / `iter_hash_map`
//! paths can't decode. Their walkers live next to each other in
//! `dump/render_map.rs`; their offset definitions live next to each
//! other here.
//!
//! Verified against the kernel source:
//! - `kernel/bpf/ringbuf.c` for `bpf_ringbuf_map` (line 82) and
//! `bpf_ringbuf` (line 28). `rb->mask = data_sz - 1` set in
//! `bpf_ringbuf_alloc` (line 185) — capacity is `mask + 1`.
//! `bpf_ringbuf_map` is BTF-listed via `BTF_ID_LIST_SINGLE` at
//! `kernel/bpf/ringbuf.c:377`, which forces emission of `bpf_ringbuf`
//! as a referenced type into vmlinux BTF.
//! - `kernel/bpf/stackmap.c` for `bpf_stack_map` (line 26) and
//! `stack_map_bucket` (line 19). `n_buckets =
//! roundup_pow_of_two(max_entries)` per `stack_map_alloc:122`, so
//! the iteration bound differs from the user-declared `max_entries`.
// The original import paths were truncated to bare identifiers; an
// `anyhow::Result` alias and a crate-local `btf` module are assumptions.
use anyhow::Result;

use crate::btf::Btf;
/// Byte offsets within `struct bpf_ringbuf_map` and `struct bpf_ringbuf`
/// (`kernel/bpf/ringbuf.c`) needed to surface ringbuf occupancy from
/// guest memory without walking the records themselves.
///
/// The map type itself stores only a pointer to the heap-allocated
/// `bpf_ringbuf` (`bpf_ringbuf_map.rb`); the consumer/producer positions
/// and the data-region mask live on that secondary struct. The dump path
/// reads `rb` from the bpf_map base, then dereferences it via
/// `translate_any_kva` to read the four position/mask fields.
///
/// Capacity is derived from `mask + 1` — see
/// `bpf_ringbuf_alloc` in `kernel/bpf/ringbuf.c`, which sets
/// `rb->mask = data_sz - 1` for a power-of-two `data_sz`. Pending
/// bytes is `producer_pos - consumer_pos` (both monotonically advancing
/// 64-bit counters; the kernel uses unsigned wraparound subtraction
/// to compute occupancy in the dispatch path).
// Reconstructed from the field set the docs above describe; the original
// definition was truncated. The Rust-side names are this module's
// inventions; the kernel members they index are real. The docs count
// "four position/mask fields" on `bpf_ringbuf`, of which only three are
// named, so `pending_pos` (present in recent kernels) is an assumption.
#[derive(Clone, Copy, Debug)]
pub struct RingbufMapOffsets {
    /// `bpf_ringbuf_map.rb`: pointer to the heap-allocated ring.
    pub rb: u64,
    /// `bpf_ringbuf.consumer_pos`.
    pub consumer_pos: u64,
    /// `bpf_ringbuf.producer_pos`.
    pub producer_pos: u64,
    /// `bpf_ringbuf.pending_pos` (assumed fourth position/mask field).
    pub pending_pos: u64,
    /// `bpf_ringbuf.mask`: data-region size minus one.
    pub mask: u64,
}

/// Resolve BTF offsets for `bpf_ringbuf_map` + `bpf_ringbuf`.
/// Returns `Err` if either type or any required field is missing.
pub fn ringbuf_map_offsets(btf: &Btf) -> Result<RingbufMapOffsets> {
    // Sketch: the original body was lost. `member_offset` is an assumed
    // `Btf` helper that resolves a named struct member to its byte
    // offset and errors when the type or member is absent.
    Ok(RingbufMapOffsets {
        rb: btf.member_offset("bpf_ringbuf_map", "rb")?,
        consumer_pos: btf.member_offset("bpf_ringbuf", "consumer_pos")?,
        producer_pos: btf.member_offset("bpf_ringbuf", "producer_pos")?,
        pending_pos: btf.member_offset("bpf_ringbuf", "pending_pos")?,
        mask: btf.member_offset("bpf_ringbuf", "mask")?,
    })
}
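
// Worked example of the occupancy arithmetic documented above; an
// illustrative sketch, not part of the dump path (which first reads the
// `rb` pointer at the map base, translates that kva via
// `translate_any_kva`, and only then reads these fields). Values are
// hypothetical; kept as tests so the math stays honest.
#[cfg(test)]
mod ringbuf_math_tests {
    #[test]
    fn capacity_is_mask_plus_one() {
        // A 4 KiB data region: bpf_ringbuf_alloc sets mask = data_sz - 1.
        let mask: u64 = 4096 - 1;
        assert_eq!(mask + 1, 4096);
    }

    #[test]
    fn pending_bytes_survive_counter_wraparound() {
        // Both positions advance monotonically mod 2^64; `wrapping_sub`
        // mirrors the kernel's unsigned wraparound subtraction.
        let consumer_pos = u64::MAX - 15;
        let producer_pos = consumer_pos.wrapping_add(32);
        assert_eq!(producer_pos.wrapping_sub(consumer_pos), 32);
    }
}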
/// Byte offsets within `struct bpf_stack_map` and `struct stack_map_bucket`
/// (`kernel/bpf/stackmap.c`) needed to enumerate stored stack traces
/// from guest memory.
///
/// `bpf_stack_map.buckets[]` is a flex array of `struct stack_map_bucket *`
/// indexed 0..n_buckets (where n_buckets = roundup_pow_of_two(max_entries),
/// see `stack_map_alloc`). A non-null slot points to a bucket whose
/// `data[]` flex array holds `nr` u64 program counters (or
/// `bpf_stack_build_id` records when `BPF_F_STACK_BUILD_ID` is set on
/// the map; the dump path treats both as opaque trace bytes).
// Reconstructed from the field set the docs above describe; the original
// definition was truncated. The Rust-side names are this module's
// inventions; the kernel members they index are real.
#[derive(Clone, Copy, Debug)]
pub struct StackMapOffsets {
    /// `bpf_stack_map.n_buckets`: the power-of-two iteration bound.
    pub n_buckets: u64,
    /// `bpf_stack_map.buckets`: flex array of bucket pointers.
    pub buckets: u64,
    /// `stack_map_bucket.nr`: entry count for the stored trace.
    pub nr: u64,
    /// `stack_map_bucket.data`: start of the trace bytes.
    pub data: u64,
}

/// Resolve BTF offsets for `bpf_stack_map` + `stack_map_bucket`.
/// Returns `Err` if either type or any required field is missing.
pub fn stack_map_offsets(btf: &Btf) -> Result<StackMapOffsets> {
    // Sketch with the same assumed `member_offset` helper as above.
    Ok(StackMapOffsets {
        n_buckets: btf.member_offset("bpf_stack_map", "n_buckets")?,
        buckets: btf.member_offset("bpf_stack_map", "buckets")?,
        nr: btf.member_offset("stack_map_bucket", "nr")?,
        data: btf.member_offset("stack_map_bucket", "data")?,
    })
}
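
// Worked example of the bucket-count rule documented above; a sketch.
// Rust's `next_power_of_two` agrees with the kernel's
// roundup_pow_of_two for the nonzero `max_entries` values map creation
// permits, which is why the iteration bound can exceed the
// user-declared size.
#[cfg(test)]
mod stack_map_math_tests {
    #[test]
    fn n_buckets_rounds_up_to_a_power_of_two() {
        assert_eq!(1000_u32.next_power_of_two(), 1024); // bound > max_entries
        assert_eq!(1024_u32.next_power_of_two(), 1024); // exact powers stay put
    }
}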