Struct Group

pub struct Group { /* private fields */ }

A group of counters that can be managed as a unit.

A Group represents a group of Counters that can be enabled, disabled, reset, or read as a single atomic operation. This is necessary if you want to compare counter values, produce ratios, and so on, since those operations are only meaningful on counters that cover exactly the same period of execution.

A Counter is placed in a group when it is created, by calling the Builder’s group method. A Group’s read method returns values of all its member counters at once as a Counts value, which can be indexed by Counter to retrieve a specific value.

For example, the following program computes the average number of cycles used per instruction retired for a call to println!:

use perf_event::{Builder, Group};
use perf_event::events::Hardware;

let mut group = Group::new()?;
let cycles = Builder::new().group(&mut group).kind(Hardware::CPU_CYCLES).build()?;
let insns = Builder::new().group(&mut group).kind(Hardware::INSTRUCTIONS).build()?;

let vec = (0..=51).collect::<Vec<_>>();

group.enable()?;
println!("{:?}", vec);
group.disable()?;

let counts = group.read()?;
println!("cycles / instructions: {} / {} ({:.2} cpi)",
         counts[&cycles],
         counts[&insns],
         (counts[&cycles] as f64 / counts[&insns] as f64));

The lifetimes of Counters and Groups are independent: placing a Counter in a Group does not take ownership of the Counter, nor must the Counters in a group outlive the Group. If a Counter is dropped, it is simply removed from its Group, and omitted from future results. If a Group is dropped, its individual counters continue to count.
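
For example (a sketch, reusing the cycles and insns counters built above), dropping a member mid-measurement simply removes it from the group's results:

group.enable()?;
drop(insns);                // `insns` leaves the group; `cycles` keeps counting
group.disable()?;
let counts = group.read()?; // contains a value for `cycles` only
println!("cycles: {}", counts[&cycles]);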

Enabling or disabling a Group affects each Counter that belongs to it. Subsequent reads from the Counter will not reflect activity while the Group was disabled, unless the Counter is re-enabled individually.
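
For example (a sketch, assuming counters built as above), after disabling a group you could resume a single member on its own using Counter's own enable method:

group.disable()?;           // stops every member, including `insns`
insns.enable()?;            // re-enable just this one counter
// ... activity here is reflected in `insns` but not in `cycles` ...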

A Group and its members must all observe the same tasks and cpus; building a Counter whose task or cpu differs from its Group's returns an error. Unfortunately, there is at present no way to specify a Group's task and cpu, so a Group can only observe the calling task. If this is a problem, please file an issue.

Internally, a Group is just a wrapper around an event file descriptor.

Limits on group size

Hardware counters are implemented using special-purpose registers on the processor, of which there are only a fixed number. (For example, a high-end Intel laptop processor from 2015 has four such registers per virtual processor.) Without groups, if you request more hardware counters than the processor can actually support, a complete count isn't possible, but the kernel will rotate the processor's real registers amongst the measurements you've requested to at least produce a sample of each.

But since the point of a counter group is that its members all cover exactly the same period of time, this tactic can’t be applied to support large groups. If the kernel cannot schedule a group, its counters remain zero. I think you can detect this situation by comparing the group’s time_enabled and time_running values. It might also be useful to set the pinned bit, which puts the counter in an error state if it’s not able to be put on the CPU; see [#10].
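
Sketching that check with the timing accessors on Counts (the same ones used in examples/big-group.rs):

let counts = group.read()?;
if counts.time_running() == 0 {
    // The kernel never scheduled the group; every count is zero.
} else if counts.time_running() < counts.time_enabled() {
    // The group was only scheduled part of the time.
}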

According to the perf-list(1) man page, you may be able to free up a hardware counter by disabling the kernel's NMI watchdog, which reserves one for detecting kernel hangs:

$ echo 0 > /proc/sys/kernel/nmi_watchdog

You can re-enable the watchdog when you're done:

$ echo 1 > /proc/sys/kernel/nmi_watchdog

Implementations

impl Group

pub fn new() -> Result<Group>

Construct a new, empty Group.

Examples found in repository

examples/println-cpi.rs

fn main() -> std::io::Result<()> {
    use perf_event::events::Hardware;
    use perf_event::{Builder, Group};

    let mut group = Group::new()?;
    let cycles = Builder::new()
        .group(&mut group)
        .kind(Hardware::CPU_CYCLES)
        .build()?;
    let insns = Builder::new()
        .group(&mut group)
        .kind(Hardware::INSTRUCTIONS)
        .build()?;

    let vec = (0..=51).collect::<Vec<_>>();

    group.enable()?;
    println!("{:?}", vec);
    group.disable()?;

    let counts = group.read()?;
    println!(
        "cycles / instructions: {} / {} ({:.2} cpi)",
        counts[&cycles],
        counts[&insns],
        (counts[&cycles] as f64 / counts[&insns] as f64)
    );

    Ok(())
}
examples/group.rs

fn main() -> std::io::Result<()> {
    const ACCESS: Cache = Cache {
        which: WhichCache::L1D,
        operation: CacheOp::READ,
        result: CacheResult::ACCESS,
    };
    const MISS: Cache = Cache {
        result: CacheResult::MISS,
        ..ACCESS
    };

    let mut group = Group::new()?;
    let access_counter = Builder::new().group(&mut group).kind(ACCESS).build()?;
    let miss_counter = Builder::new().group(&mut group).kind(MISS).build()?;
    let branches = Builder::new()
        .group(&mut group)
        .kind(Hardware::BRANCH_INSTRUCTIONS)
        .build()?;
    let missed_branches = Builder::new()
        .group(&mut group)
        .kind(Hardware::BRANCH_MISSES)
        .build()?;

    // Note that if you add more counters than you actually have hardware for,
    // the kernel will time-slice them, which means you may get no coverage for
    // short measurements. See the documentation.

    let vec = (0..=51).collect::<Vec<_>>();

    group.enable()?;
    println!("{:?}", vec);
    group.disable()?;

    let counts = group.read()?;
    println!(
        "L1D cache misses/references: {} / {} ({:.0}%)",
        counts[&miss_counter],
        counts[&access_counter],
        (counts[&miss_counter] as f64 / counts[&access_counter] as f64) * 100.0
    );

    println!(
        "branch prediction misses/total: {} / {} ({:.0}%)",
        counts[&missed_branches],
        counts[&branches],
        (counts[&missed_branches] as f64 / counts[&branches] as f64) * 100.0
    );

    // You can iterate over a `Counts` value:
    for (id, value) in &counts {
        println!("Counter id {} has value {}", id, value);
    }

    Ok(())
}
examples/locality.rs

fn measure(label: &str, task: impl FnOnce()) {
    use perf_event::events::{Cache, CacheOp, CacheResult, WhichCache};
    use perf_event::{Builder, Group};

    let mut group = Group::new().expect("creating group is ok");
    let read_counter = Builder::new()
        .group(&mut group)
        .kind(Cache {
            which: WhichCache::L1D,
            operation: CacheOp::READ,
            result: CacheResult::ACCESS,
        })
        .build()
        .expect("building read_counter is ok");
    let read_miss_counter = Builder::new()
        .group(&mut group)
        .kind(Cache {
            which: WhichCache::L1D,
            operation: CacheOp::READ,
            result: CacheResult::MISS,
        })
        .build()
        .expect("building read_miss_counter is ok");
    let prefetch_counter = Builder::new()
        .group(&mut group)
        .kind(Cache {
            which: WhichCache::L1D,
            operation: CacheOp::PREFETCH,
            result: CacheResult::ACCESS,
        })
        .build()
        .expect("building prefetch_counter is ok");

    group.enable().expect("enabling group is ok");
    task();
    group.disable().expect("disabling group is ok");

    let counts = group.read().expect("reading group is ok");
    let reads = counts[&read_counter];
    let read_misses = counts[&read_miss_counter];
    let read_hits = reads - read_misses;
    let prefetches = counts[&prefetch_counter];

    println!(
        "{label}: hits / reads: {read_hits:8} / {reads:8} {:6.2}%, \
         prefetched {prefetches:8}",
        (read_hits as f64 / reads as f64) * 100.0,
    );

    if counts.time_enabled() != counts.time_running() {
        println!(
            "time enabled: {}  time running: {}",
            counts.time_enabled(),
            counts.time_running(),
        );
    }
}
examples/big-group.rs

fn main() -> std::io::Result<()> {
    const ACCESS: Cache = Cache {
        which: WhichCache::L1D,
        operation: CacheOp::READ,
        result: CacheResult::ACCESS,
    };
    const MISS: Cache = Cache {
        result: CacheResult::MISS,
        ..ACCESS
    };

    let mut group = Group::new()?;
    let access_counter = Builder::new().group(&mut group).kind(ACCESS).build()?;
    let miss_counter = Builder::new().group(&mut group).kind(MISS).build()?;
    let branches = Builder::new()
        .group(&mut group)
        .kind(Hardware::BRANCH_INSTRUCTIONS)
        .build()?;
    let missed_branches = Builder::new()
        .group(&mut group)
        .kind(Hardware::BRANCH_MISSES)
        .build()?;
    let insns = Builder::new()
        .group(&mut group)
        .kind(Hardware::INSTRUCTIONS)
        .build()?;
    let cycles = Builder::new()
        .group(&mut group)
        .kind(Hardware::CPU_CYCLES)
        .build()?;

    // Note that if you add more counters than you actually have hardware for,
    // the kernel will time-slice them, which means you may get no coverage for
    // short measurements. See the documentation.
    //
    // On my machine, this program won't collect any data unless I disable the
    // NMI watchdog, as described in the documentation for `Group`. My machine
    // has four counters, and this program tries to use all of them, but the NMI
    // watchdog uses one up.

    let mut vec = (0..=100000).collect::<Vec<_>>();

    group.enable()?;
    vec.sort();
    println!("{:?}", &vec[0..10]);
    group.disable()?;

    let counts = group.read()?;

    println!(
        "enabled for {}ns, actually running for {}ns",
        counts.time_enabled(),
        counts.time_running()
    );

    if counts.time_running() == 0 {
        println!("Group was never running; no results available.");
        return Ok(());
    }

    if counts.time_running() < counts.time_enabled() {
        println!("Counts cover only a portion of the execution.");
    }

    println!(
        "L1D cache misses/references: {} / {} ({:.0}%)",
        counts[&miss_counter],
        counts[&access_counter],
        (counts[&miss_counter] as f64 / counts[&access_counter] as f64) * 100.0
    );

    println!(
        "branch prediction misses/total: {} / {} ({:.0}%)",
        counts[&missed_branches],
        counts[&branches],
        (counts[&missed_branches] as f64 / counts[&branches] as f64) * 100.0
    );

    println!(
        "{} instructions, {} cycles ({:.2} cpi)",
        counts[&insns],
        counts[&cycles],
        counts[&cycles] as f64 / counts[&insns] as f64
    );

    // You can iterate over a `Counts` value:
    for (id, value) in &counts {
        println!("Counter id {} has value {}", id, value);
    }

    Ok(())
}

pub fn enable(&mut self) -> Result<()>

Allow all Counters in this Group to begin counting their designated events, as a single atomic operation.

This does not affect whatever values the Counters had previously; new events add to the current counts. To clear the Counters, use the reset method.


pub fn disable(&mut self) -> Result<()>

Make all Counters in this Group stop counting their designated events, as a single atomic operation. Their counts are unaffected.


pub fn reset(&mut self) -> Result<()>

Reset all Counters in this Group to zero, as a single atomic operation.
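
For example (a sketch; phase_one and phase_two are hypothetical workload functions, and the counters are assumed to be built as in the examples above), reset lets you reuse one group across measurement phases:

group.enable()?;
phase_one();
group.disable()?;
let first = group.read()?;

group.reset()?;             // all members back to zero
group.enable()?;
phase_two();
group.disable()?;
let second = group.read()?; // counts for phase two only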


pub fn read(&mut self) -> Result<Counts>

Return the values of all the Counters in this Group as a Counts value.

A Counts value is a map from specific Counters to their values. You can find a specific Counter’s value by indexing:

let mut group = Group::new()?;
let counter1 = Builder::new().group(&mut group).kind(...).build()?;
let counter2 = Builder::new().group(&mut group).kind(...).build()?;
...
let counts = group.read()?;
println!("Rhombus inclinations per taxi medallion: {} / {} ({:.0}%)",
         counts[&counter1],
         counts[&counter2],
         (counts[&counter1] as f64 / counts[&counter2] as f64) * 100.0);
Examples found in repository?
examples/println-cpi.rs (line 21)
1fn main() -> std::io::Result<()> {
2    use perf_event::events::Hardware;
3    use perf_event::{Builder, Group};
4
5    let mut group = Group::new()?;
6    let cycles = Builder::new()
7        .group(&mut group)
8        .kind(Hardware::CPU_CYCLES)
9        .build()?;
10    let insns = Builder::new()
11        .group(&mut group)
12        .kind(Hardware::INSTRUCTIONS)
13        .build()?;
14
15    let vec = (0..=51).collect::<Vec<_>>();
16
17    group.enable()?;
18    println!("{:?}", vec);
19    group.disable()?;
20
21    let counts = group.read()?;
22    println!(
23        "cycles / instructions: {} / {} ({:.2} cpi)",
24        counts[&cycles],
25        counts[&insns],
26        (counts[&cycles] as f64 / counts[&insns] as f64)
27    );
28
29    Ok(())
30}
More examples
Hide additional examples
examples/group.rs (line 37)
4fn main() -> std::io::Result<()> {
5    const ACCESS: Cache = Cache {
6        which: WhichCache::L1D,
7        operation: CacheOp::READ,
8        result: CacheResult::ACCESS,
9    };
10    const MISS: Cache = Cache {
11        result: CacheResult::MISS,
12        ..ACCESS
13    };
14
15    let mut group = Group::new()?;
16    let access_counter = Builder::new().group(&mut group).kind(ACCESS).build()?;
17    let miss_counter = Builder::new().group(&mut group).kind(MISS).build()?;
18    let branches = Builder::new()
19        .group(&mut group)
20        .kind(Hardware::BRANCH_INSTRUCTIONS)
21        .build()?;
22    let missed_branches = Builder::new()
23        .group(&mut group)
24        .kind(Hardware::BRANCH_MISSES)
25        .build()?;
26
27    // Note that if you add more counters than you actually have hardware for,
28    // the kernel will time-slice them, which means you may get no coverage for
29    // short measurements. See the documentation.
30
31    let vec = (0..=51).collect::<Vec<_>>();
32
33    group.enable()?;
34    println!("{:?}", vec);
35    group.disable()?;
36
37    let counts = group.read()?;
38    println!(
39        "L1D cache misses/references: {} / {} ({:.0}%)",
40        counts[&miss_counter],
41        counts[&access_counter],
42        (counts[&miss_counter] as f64 / counts[&access_counter] as f64) * 100.0
43    );
44
45    println!(
46        "branch prediction misses/total: {} / {} ({:.0}%)",
47        counts[&missed_branches],
48        counts[&branches],
49        (counts[&missed_branches] as f64 / counts[&branches] as f64) * 100.0
50    );
51
52    // You can iterate over a `Counts` value:
53    for (id, value) in &counts {
54        println!("Counter id {} has value {}", id, value);
55    }
56
57    Ok(())
58}
examples/locality.rs (line 121)
84fn measure(label: &str, task: impl FnOnce()) {
85    use perf_event::events::{Cache, CacheOp, CacheResult, WhichCache};
86    use perf_event::{Builder, Group};
87
88    let mut group = Group::new().expect("creating group is ok");
89    let read_counter = Builder::new()
90        .group(&mut group)
91        .kind(Cache {
92            which: WhichCache::L1D,
93            operation: CacheOp::READ,
94            result: CacheResult::ACCESS,
95        })
96        .build()
97        .expect("building read_counter is ok");
98    let read_miss_counter = Builder::new()
99        .group(&mut group)
100        .kind(Cache {
101            which: WhichCache::L1D,
102            operation: CacheOp::READ,
103            result: CacheResult::MISS,
104        })
105        .build()
106        .expect("building read_miss_counter is ok");
107    let prefetch_counter = Builder::new()
108        .group(&mut group)
109        .kind(Cache {
110            which: WhichCache::L1D,
111            operation: CacheOp::PREFETCH,
112            result: CacheResult::ACCESS,
113        })
114        .build()
115        .expect("building prefetch_counter is ok");
116
117    group.enable().expect("enabling group is ok");
118    task();
119    group.disable().expect("disabling group is ok");
120
121    let counts = group.read().expect("reading group is ok");
122    let reads = counts[&read_counter];
123    let read_misses = counts[&read_miss_counter];
124    let read_hits = reads - read_misses;
125    let prefetches = counts[&prefetch_counter];
126
127    println!(
128        "{label}: hits / reads: {read_hits:8} / {reads:8} {:6.2}%, \
129         prefetched {prefetches:8}",
130        (read_hits as f64 / reads as f64) * 100.0,
131    );
132
133    if counts.time_enabled() != counts.time_running() {
134        println!(
135            "time enabled: {}  time running: {}",
136            counts.time_enabled(),
137            counts.time_running(),
138        );
139    }
140}
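When `time_running` is less than `time_enabled`, the kernel multiplexed the group with other counter groups, so the raw counts cover only part of the measured region. A common way to estimate full-run values is to scale linearly by `time_enabled / time_running`. This helper is not part of the perf-event API; it is a minimal sketch of that standard extrapolation:

```rust
/// Estimate a counter's full-run value when the kernel has multiplexed
/// the group (`time_running < time_enabled`). Linear extrapolation:
/// `raw * time_enabled / time_running`. Returns `None` if the group
/// never ran, since there is no basis for an estimate.
fn scale_count(raw: u64, time_enabled: u64, time_running: u64) -> Option<u64> {
    if time_running == 0 {
        return None;
    }
    // Do the arithmetic in u128 so large nanosecond values cannot overflow.
    Some((raw as u128 * time_enabled as u128 / time_running as u128) as u64)
}
```

For example, a counter that reads 1,000 while running for only half of the enabled window would be estimated at 2,000. Keep in mind this is only an estimate: the workload may not behave uniformly across the enabled window.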
examples/big-group.rs (line 51)
4fn main() -> std::io::Result<()> {
5    const ACCESS: Cache = Cache {
6        which: WhichCache::L1D,
7        operation: CacheOp::READ,
8        result: CacheResult::ACCESS,
9    };
10    const MISS: Cache = Cache {
11        result: CacheResult::MISS,
12        ..ACCESS
13    };
14
15    let mut group = Group::new()?;
16    let access_counter = Builder::new().group(&mut group).kind(ACCESS).build()?;
17    let miss_counter = Builder::new().group(&mut group).kind(MISS).build()?;
18    let branches = Builder::new()
19        .group(&mut group)
20        .kind(Hardware::BRANCH_INSTRUCTIONS)
21        .build()?;
22    let missed_branches = Builder::new()
23        .group(&mut group)
24        .kind(Hardware::BRANCH_MISSES)
25        .build()?;
26    let insns = Builder::new()
27        .group(&mut group)
28        .kind(Hardware::INSTRUCTIONS)
29        .build()?;
30    let cycles = Builder::new()
31        .group(&mut group)
32        .kind(Hardware::CPU_CYCLES)
33        .build()?;
34
35    // Note that if you add more counters than you actually have hardware for,
36    // the kernel will time-slice them, which means you may get no coverage for
37    // short measurements. See the documentation.
38    //
39    // On my machine, this program won't collect any data unless I disable the
40    // NMI watchdog, as described in the documentation for `Group`. My machine
41    // has four counters, and this program tries to use all of them, but the NMI
42    // watchdog uses one up.
43
44    let mut vec = (0..=100000).collect::<Vec<_>>();
45
46    group.enable()?;
47    vec.sort();
48    println!("{:?}", &vec[0..10]);
49    group.disable()?;
50
51    let counts = group.read()?;
52
53    println!(
54        "enabled for {}ns, actually running for {}ns",
55        counts.time_enabled(),
56        counts.time_running()
57    );
58
59    if counts.time_running() == 0 {
60        println!("Group was never running; no results available.");
61        return Ok(());
62    }
63
64    if counts.time_running() < counts.time_enabled() {
65        println!("Counts cover only a portion of the execution.");
66    }
67
68    println!(
69        "L1D cache misses/references: {} / {} ({:.0}%)",
70        counts[&miss_counter],
71        counts[&access_counter],
72        (counts[&miss_counter] as f64 / counts[&access_counter] as f64) * 100.0
73    );
74
75    println!(
76        "branch prediction misses/total: {} / {} ({:.0}%)",
77        counts[&missed_branches],
78        counts[&branches],
79        (counts[&missed_branches] as f64 / counts[&branches] as f64) * 100.0
80    );
81
82    println!(
83        "{} instructions, {} cycles ({:.2} cpi)",
84        counts[&insns],
85        counts[&cycles],
86        counts[&cycles] as f64 / counts[&insns] as f64
87    );
88
89    // You can iterate over a `Counts` value:
90    for (id, value) in &counts {
91        println!("Counter id {} has value {}", id, value);
92    }
93
94    Ok(())
95}
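The derived metrics printed above all reduce to two pieces of arithmetic: a part/whole percentage (cache miss rate, branch miss rate) and a cycles-per-instruction ratio. A minimal sketch of that arithmetic as hypothetical helpers (these functions are not part of the crate):

```rust
/// `part` as a percentage of `whole`, e.g. misses as a percentage of accesses.
fn ratio_pct(part: u64, whole: u64) -> f64 {
    (part as f64 / whole as f64) * 100.0
}

/// Cycles per instruction: lower generally means better pipeline utilization.
fn cpi(cycles: u64, instructions: u64) -> f64 {
    cycles as f64 / instructions as f64
}
```

Note that, as in the example, these divide without checking for zero; a counter that never ran can legitimately read zero, so real code should guard against that (or check `time_running()` first, as the example does).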

Trait Implementations

impl AsRawFd for Group

fn as_raw_fd(&self) -> RawFd

Extracts the raw file descriptor.

impl Debug for Group

fn fmt(&self, fmt: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl IntoRawFd for Group

fn into_raw_fd(self) -> RawFd

Consumes this object, returning the raw underlying file descriptor.
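Because `Group` implements `AsRawFd` and `IntoRawFd`, the file descriptor returned by `perf_event_open` can be handed to lower-level code. A generic sketch of the borrowing side of that pattern, using any `AsRawFd` type (nothing here is perf-specific; a `Group` would be used the same way):

```rust
use std::os::fd::{AsRawFd, RawFd};

/// Borrow the raw descriptor from any `AsRawFd` type without taking
/// ownership; the caller must not close the returned fd.
fn inspect_fd(handle: &impl AsRawFd) -> RawFd {
    handle.as_raw_fd()
}
```

By contrast, `into_raw_fd` transfers ownership: after calling it, closing the descriptor becomes the caller's responsibility.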

Auto Trait Implementations

impl Freeze for Group
impl RefUnwindSafe for Group
impl Send for Group
impl Sync for Group
impl Unpin for Group
impl UnwindSafe for Group

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.