// canbench_rs/lib.rs
//! `canbench` is a tool for benchmarking canisters on the Internet Computer.
//!
//! ## Quickstart
//!
//! This example is also available to tinker with in the examples directory. See the [fibonacci example](https://github.com/dfinity/bench/tree/main/examples/fibonacci).
//!
//! ### 1. Install the `canbench` binary.
//!
//! The `canbench` binary is what runs your canister's benchmarks.
//!
//! ```bash
//! cargo install canbench
//! ```
//!
//! ### 2. Add optional dependency to `Cargo.toml`
//!
//! Typically you do not want your benchmarks to be part of your canister when deploying it to the Internet Computer.
//! Therefore, we include `canbench-rs` only as an optional dependency so that it's only included when running benchmarks.
//! For more information about optional dependencies, see the [Cargo documentation](https://doc.rust-lang.org/cargo/reference/features.html#optional-dependencies).
//!
//! ```toml
//! canbench-rs = { version = "x.y.z", optional = true }
//! ```
//!
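//! If your crate declares its features explicitly, a feature can forward to the optional dependency (a sketch; the feature name must match whatever your build command enables):
//!
//! ```toml
//! [features]
//! canbench-rs = ["dep:canbench-rs"]
//! ```
//!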
//! ### 3. Add a configuration to `canbench.yml`
//!
//! The `canbench.yml` configuration file tells `canbench` how to build and run your canister.
//! Below is a typical configuration.
//! Note that we're compiling the canister with the `canbench-rs` feature so that the benchmarking logic is included in the Wasm.
//!
//! ```yml
//! build_cmd:
//!   cargo build --release --target wasm32-unknown-unknown --features canbench-rs
//!
//! wasm_path:
//!   ./target/wasm32-unknown-unknown/release/<YOUR_CANISTER>.wasm
//! ```
//!
//! #### Init Args
//!
//! Init args can be specified using the `init_args` key in the configuration file:
//!
//! ```yml
//! init_args:
//!   hex: 4449444c0001710568656c6c6f
//! ```
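//!
//! The value is a hex-encoded Candid argument. One way to produce such a blob, assuming you have the `didc` tool installed, is:
//!
//! ```bash
//! didc encode '("hello")'
//! ```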
//!
//! #### Stable Memory
//!
//! A file can be specified to be loaded into the canister's stable memory _after_ initialization.
//!
//! ```yml
//! stable_memory:
//!   file: stable_memory.bin
//! ```
//!
//! <div class="warning">Contents of the stable memory file are loaded <i>after</i> the call to the canister's init method.
//! Therefore, changes made to stable memory in the init method would be overwritten.</div>
//!
//! ### 4. Start benching! 🏋🏽
//!
//! Let's say we have a canister that exposes a `query` computing the fibonacci sequence of a given number.
//! Here's what that query can look like:
//!
//! ```rust
//! #[ic_cdk::query]
//! fn fibonacci(n: u32) -> u32 {
//!     if n == 0 {
//!         return 0;
//!     } else if n == 1 {
//!         return 1;
//!     }
//!
//!     let mut a = 0;
//!     let mut b = 1;
//!     let mut result = 0;
//!
//!     for _ in 2..=n {
//!         result = a + b;
//!         a = b;
//!         b = result;
//!     }
//!
//!     result
//! }
//! ```
//!
//! Now, let's add some benchmarks to this query:
//!
//! ```rust
//! #[cfg(feature = "canbench-rs")]
//! mod benches {
//!     use super::*;
//!     use canbench_rs::bench;
//!
//!     # fn fibonacci(_: u32) -> u32 { 0 }
//!
//!     #[bench]
//!     fn fibonacci_20() {
//!         // Prevent the compiler from optimizing the call and propagating constants.
//!         std::hint::black_box(fibonacci(std::hint::black_box(20)));
//!     }
//!
//!     #[bench]
//!     fn fibonacci_45() {
//!         // Prevent the compiler from optimizing the call and propagating constants.
//!         std::hint::black_box(fibonacci(std::hint::black_box(45)));
//!     }
//! }
//! ```
//!
//! Run `canbench`. You'll see an output that looks similar to this:
//!
//! ```txt
//! $ canbench
//!
//! ---------------------------------------------------
//!
//! Benchmark: fibonacci_20 (new)
//!   total:
//!     instructions: 2301 (new)
//!     heap_increase: 0 pages (new)
//!     stable_memory_increase: 0 pages (new)
//!
//! ---------------------------------------------------
//!
//! Benchmark: fibonacci_45 (new)
//!   total:
//!     instructions: 3088 (new)
//!     heap_increase: 0 pages (new)
//!     stable_memory_increase: 0 pages (new)
//!
//! ---------------------------------------------------
//!
//! Executed 2 of 2 benchmarks.
//! ```
//!
//! ### 5. Track performance regressions
//!
//! Notice that `canbench` reported the above benchmarks as "new".
//! `canbench` allows you to persist the results of these benchmarks.
//! In subsequent runs, `canbench` reports the performance relative to the last persisted run.
//!
//! Let's first persist the results above by running `canbench` again, but with the `--persist` flag:
//!
//! ```txt
//! $ canbench --persist
//! # optionally add `--csv` to generate a CSV report
//! $ canbench --persist --csv
//! ...
//! ---------------------------------------------------
//!
//! Executed 2 of 2 benchmarks.
//! Successfully persisted results to canbench_results.yml
//! ```
//!
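//! The persisted file is plain YAML. Its shape mirrors the `BenchResult` type defined below; a sketch (not necessarily the exact schema) looks like:
//!
//! ```yml
//! fibonacci_20:
//!   total:
//!     instructions: 2301
//!     heap_increase: 0
//!     stable_memory_increase: 0
//! ```
//!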
//! Now, if we run `canbench` again, `canbench` will run the benchmarks, and will additionally report that there were no changes detected in performance.
//!
//! ```txt
//! $ canbench
//!     Finished release [optimized] target(s) in 0.34s
//!
//! ---------------------------------------------------
//!
//! Benchmark: fibonacci_20
//!   total:
//!     instructions: 2301 (no change)
//!     heap_increase: 0 pages (no change)
//!     stable_memory_increase: 0 pages (no change)
//!
//! ---------------------------------------------------
//!
//! Benchmark: fibonacci_45
//!   total:
//!     instructions: 3088 (no change)
//!     heap_increase: 0 pages (no change)
//!     stable_memory_increase: 0 pages (no change)
//!
//! ---------------------------------------------------
//!
//! Executed 2 of 2 benchmarks.
//! ```
//!
//! Let's try swapping out our implementation of `fibonacci` with an implementation that's miserably inefficient.
//! Replace the `fibonacci` function defined previously with the following:
//!
//! ```rust
//! #[ic_cdk::query]
//! fn fibonacci(n: u32) -> u32 {
//!     match n {
//!         0 => 0,
//!         1 => 1,
//!         _ => fibonacci(n - 1) + fibonacci(n - 2),
//!     }
//! }
//! ```
//!
//! Running `canbench` again, we see that it detects and reports the regression.
//!
//! ```txt
//! $ canbench
//!
//! ---------------------------------------------------
//!
//! Benchmark: fibonacci_20
//!   total:
//!     instructions: 337.93 K (regressed by 14586.14%)
//!     heap_increase: 0 pages (no change)
//!     stable_memory_increase: 0 pages (no change)
//!
//! ---------------------------------------------------
//!
//! Benchmark: fibonacci_45
//!   total:
//!     instructions: 56.39 B (regressed by 1826095830.76%)
//!     heap_increase: 0 pages (no change)
//!     stable_memory_increase: 0 pages (no change)
//!
//! ---------------------------------------------------
//!
//! Executed 2 of 2 benchmarks.
//! ```
//!
//! Apparently, the recursive implementation is many orders of magnitude more expensive than the iterative implementation 😱
//! Good thing we found out before deploying this implementation to production.
//!
//! Notice that `fibonacci_45` took > 50B instructions, which is substantially more than the instruction limit for a single message execution on the Internet Computer. `canbench` runs benchmarks in an environment that gives them up to 10T instructions.
//!
//! ## Additional Examples
//!
//! For the following examples, we'll be using the following canister code, which you can also find in the [examples](./examples/btreemap_vs_hashmap) directory.
//! This canister defines a simple state as well as a `pre_upgrade` function that stores that state into stable memory.
//!
//! ```rust
//! use candid::{CandidType, Encode};
//! use ic_cdk_macros::pre_upgrade;
//! use std::cell::RefCell;
//!
//! #[derive(CandidType)]
//! struct User {
//!     name: String,
//! }
//!
//! #[derive(Default, CandidType)]
//! struct State {
//!     users: std::collections::BTreeMap<u64, User>,
//! }
//!
//! thread_local! {
//!     static STATE: RefCell<State> = RefCell::new(State::default());
//! }
//!
//! #[pre_upgrade]
//! fn pre_upgrade() {
//!     // Serialize state.
//!     let bytes = STATE.with(|s| Encode!(s).unwrap());
//!
//!     // Write to stable memory.
//!     ic_cdk::api::stable::StableWriter::default()
//!         .write(&bytes)
//!         .unwrap();
//! }
//! ```
//!
//! ### Excluding setup code
//!
//! Let's say we want to benchmark how long it takes to run the `pre_upgrade` function. We can define the following benchmark:
//!
//! ```rust
//! #[cfg(feature = "canbench-rs")]
//! mod benches {
//!     use super::*;
//!     use canbench_rs::bench;
//!
//!     # fn initialize_state() {}
//!     # fn pre_upgrade() {}
//!
//!     #[bench]
//!     fn pre_upgrade_bench() {
//!         // Some function that fills the state with lots of data.
//!         initialize_state();
//!
//!         pre_upgrade();
//!     }
//! }
//! ```
//!
//! The problem with the above benchmark is that it's benchmarking both the `pre_upgrade` call _and_ the initialization of the state.
//! What if we're only interested in benchmarking the `pre_upgrade` call?
//! To address this, we can use the `#[bench(raw)]` attribute to specify exactly which code we'd like to benchmark.
//!
//! ```rust
//! #[cfg(feature = "canbench-rs")]
//! mod benches {
//!     use super::*;
//!     use canbench_rs::bench;
//!
//!     # fn initialize_state() {}
//!     # fn pre_upgrade() {}
//!
//!     #[bench(raw)]
//!     fn pre_upgrade_bench() -> canbench_rs::BenchResult {
//!         // Some function that fills the state with lots of data.
//!         initialize_state();
//!
//!         // Only benchmark the pre_upgrade. Initializing the state isn't
//!         // included in the results of our benchmark.
//!         canbench_rs::bench_fn(pre_upgrade)
//!     }
//! }
//! ```
//!
//! Running `canbench` on the example above will benchmark only the code wrapped in `canbench_rs::bench_fn`, which in this case is the call to `pre_upgrade`.
//!
//! ```txt
//! $ canbench pre_upgrade_bench
//!
//! ---------------------------------------------------
//!
//! Benchmark: pre_upgrade_bench (new)
//!   total:
//!     instructions: 717.10 M (new)
//!     heap_increase: 519 pages (new)
//!     stable_memory_increase: 184 pages (new)
//!
//! ---------------------------------------------------
//!
//! Executed 1 of 1 benchmarks.
//! ```
//!
//! ### Granular Benchmarking
//!
//! Building on the example above, the `pre_upgrade` function performs two steps:
//!
//! 1. Serialize the state
//! 2. Write to stable memory
//!
//! Suppose we're interested in understanding, within `pre_upgrade`, the resources spent in each of these steps.
//! `canbench` allows you to do more granular benchmarking using the `canbench_rs::bench_scope` function.
//! Here's how we can modify our `pre_upgrade` function:
//!
//! ```rust
//! # use candid::{Encode, CandidType};
//! # use ic_cdk_macros::pre_upgrade;
//! # use std::cell::RefCell;
//! #
//! # #[derive(CandidType)]
//! # struct User {
//! #     name: String,
//! # }
//! #
//! # #[derive(Default, CandidType)]
//! # struct State {
//! #     users: std::collections::BTreeMap<u64, User>,
//! # }
//! #
//! # thread_local! {
//! #     static STATE: RefCell<State> = RefCell::new(State::default());
//! # }
//!
//! #[pre_upgrade]
//! fn pre_upgrade() {
//!     // Serialize state.
//!     let bytes = {
//!         #[cfg(feature = "canbench-rs")]
//!         let _p = canbench_rs::bench_scope("serialize_state");
//!         STATE.with(|s| Encode!(s).unwrap())
//!     };
//!
//!     // Write to stable memory.
//!     #[cfg(feature = "canbench-rs")]
//!     let _p = canbench_rs::bench_scope("writing_to_stable_memory");
//!     ic_cdk::api::stable::StableWriter::default()
//!         .write(&bytes)
//!         .unwrap();
//! }
//! ```
//!
//! In the code above, we've asked `canbench` to profile each of these steps separately.
//! Running `canbench` now, each of these steps is reported.
//!
//! ```txt
//! $ canbench pre_upgrade_bench
//!
//! ---------------------------------------------------
//!
//! Benchmark: pre_upgrade_bench (new)
//!   total:
//!     instructions: 717.11 M (new)
//!     heap_increase: 519 pages (new)
//!     stable_memory_increase: 184 pages (new)
//!
//!   serialize_state (profiling):
//!     instructions: 717.10 M (new)
//!     heap_increase: 519 pages (new)
//!     stable_memory_increase: 0 pages (new)
//!
//!   writing_to_stable_memory (profiling):
//!     instructions: 502 (new)
//!     heap_increase: 0 pages (new)
//!     stable_memory_increase: 184 pages (new)
//!
//! ---------------------------------------------------
//!
//! Executed 1 of 1 benchmarks.
//! ```
//!
//! ### Debugging
//!
//! The `ic_cdk::eprintln!()` macro facilitates tracing canister and benchmark execution.
//! Output is displayed on the console when `canbench` is executed with
//! the `--show-canister-output` option.
//!
//! ```rust
//! # #[cfg(feature = "canbench-rs")]
//! # mod benches {
//! #     use super::*;
//! #     use canbench_rs::bench;
//! #
//! #[bench]
//! fn bench_with_debug_print() {
//!     // Run `canbench --show-canister-output` to see the output.
//!     ic_cdk::eprintln!("Hello from {}!", env!("CARGO_PKG_NAME"));
//! }
//! # }
//! ```
//!
//! Example output:
//!
//! ```bash
//! $ canbench bench_with_debug_print --show-canister-output
//! [...]
//! 2021-05-06 19:17:10.000000003 UTC: [Canister lxzze-o7777-77777-aaaaa-cai] Hello from example!
//! [...]
//! ```
//!
//! Refer to the [Internet Computer specification](https://internetcomputer.org/docs/references/ic-interface-spec#debugging-aids) for more details.
//!
//! ### Preventing Compiler Optimizations
//!
//! If benchmark results appear suspiciously low and remain consistent
//! despite increased benchmarked function complexity, the `std::hint::black_box`
//! function helps prevent compiler optimizations.
//!
//! ```rust
//! # #[cfg(feature = "canbench-rs")]
//! # mod benches {
//! #     use super::*;
//! #     use canbench_rs::bench;
//! #
//! #[bench]
//! fn fibonacci_20() {
//!     // Prevent the compiler from optimizing the call and propagating constants.
//!     std::hint::black_box(fibonacci(std::hint::black_box(20)));
//! }
//! # }
//! ```
//!
//! Note that passing constant values as function arguments can also
//! trigger compiler optimizations. If the actual code uses
//! variables (not constants), both the arguments and the result
//! of the benchmarked function must be wrapped in `black_box` calls.
//!
//! Refer to the [Rust documentation](https://doc.rust-lang.org/std/hint/fn.black_box.html)
//! for more details.
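//!
//! The pattern for variable inputs can be sketched as follows (`compute` is a hypothetical stand-in for the function being benchmarked):
//!
//! ```rust
//! fn compute(n: u64) -> u64 {
//!     // Stand-in for the real work being benchmarked.
//!     n * 2
//! }
//!
//! fn bench_compute() {
//!     // Hide the input from the optimizer, and keep the result alive.
//!     let n = std::hint::black_box(1_000);
//!     std::hint::black_box(compute(n));
//! }
//! ```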
//!
pub use canbench_rs_macros::bench;
use candid::CandidType;
use serde::{Deserialize, Serialize};
use std::{cell::RefCell, collections::BTreeMap, ops::Add};

thread_local! {
    static SCOPES: RefCell<BTreeMap<&'static str, Vec<Measurement>>> =
        const { RefCell::new(BTreeMap::new()) };
}

/// The results of a benchmark.
#[derive(Debug, PartialEq, Serialize, Deserialize, CandidType, Default)]
pub struct BenchResult {
    /// A measurement for the entire duration of the benchmark.
    pub total: Measurement,

    /// Measurements for scopes.
    #[serde(default)]
    pub scopes: BTreeMap<String, Measurement>,
}

/// A benchmark measurement containing various stats.
#[derive(Debug, PartialEq, Serialize, Deserialize, CandidType, Clone, Default)]
pub struct Measurement {
    /// The number of instructions.
    #[serde(default)]
    pub instructions: u64,

    /// The increase in heap (measured in pages).
    #[serde(default)]
    pub heap_increase: u64,

    /// The increase in stable memory (measured in pages).
    #[serde(default)]
    pub stable_memory_increase: u64,
}

impl Add for Measurement {
    type Output = Self;

    fn add(self, other: Self) -> Self::Output {
        Self {
            instructions: self.instructions + other.instructions,
            heap_increase: self.heap_increase + other.heap_increase,
            stable_memory_increase: self.stable_memory_increase + other.stable_memory_increase,
        }
    }
}

/// Benchmarks the given function.
pub fn bench_fn<R>(f: impl FnOnce() -> R) -> BenchResult {
    reset();

    let is_tracing_enabled = TRACING_BUFFER.with_borrow(|p| !p.is_empty());

    if !is_tracing_enabled {
        let start_heap = heap_size();
        let start_stable_memory = ic_cdk::api::stable::stable_size();
        let start_instructions = instruction_count();
        f();
        let instructions = instruction_count() - start_instructions;
        let stable_memory_increase = ic_cdk::api::stable::stable_size() - start_stable_memory;
        let heap_increase = heap_size() - start_heap;

        let total = Measurement {
            instructions,
            heap_increase,
            stable_memory_increase,
        };
        let scopes: std::collections::BTreeMap<_, _> = get_scopes_measurements()
            .into_iter()
            .map(|(k, v)| (k.to_string(), v))
            .collect();

        BenchResult { total, scopes }
    } else {
        // The first 4 bytes are a flag indicating whether tracing is enabled. They are read by
        // the tracing function (instrumented code) to decide whether to trace or not.
        let tracing_started_flag_address = TRACING_BUFFER.with_borrow_mut(|p| p.as_mut_ptr());
        unsafe {
            // Ideally, we'd like to reverse the following 2 statements, but it might be possible
            // for the compiler not to inline `ic_cdk::api::performance_counter`, which would be
            // problematic as `performance_counter` would be traced itself. Perhaps we can call
            // ic0.performance_counter directly.
            INSTRUCTIONS_START = ic_cdk::api::performance_counter(0) as i64;
            *tracing_started_flag_address = 1;
        }
        f();
        unsafe {
            *tracing_started_flag_address = 0;
            INSTRUCTIONS_END = ic_cdk::api::performance_counter(0) as i64;
        }

        // Only the traces are meaningful; they are written to `TRACING_BUFFER` and are
        // collected by the tracing query method.
        BenchResult::default()
    }
}

/// Benchmarks the scope this function is declared in.
///
/// NOTE: It's important to assign the result of this function to a variable.
/// Otherwise, the scope guard is dropped immediately and benchmarking won't work correctly.
///
/// # Correct Usage
///
/// ```
/// fn my_func() {
///     let _p = canbench_rs::bench_scope("my_scope");
///     // Do something.
/// }
/// ```
///
/// # Incorrect Usages
///
/// ```
/// fn my_func() {
///     let _ = canbench_rs::bench_scope("my_scope"); // Doesn't capture the scope.
///     // Do something.
/// }
/// ```
///
/// ```
/// fn my_func() {
///     canbench_rs::bench_scope("my_scope"); // Doesn't capture the scope.
///     // Do something.
/// }
/// ```
#[must_use]
pub fn bench_scope(name: &'static str) -> BenchScope {
    BenchScope::new(name)
}

/// An object used for benchmarking a specific scope.
pub struct BenchScope {
    name: &'static str,
    start_instructions: u64,
    start_stable_memory: u64,
    start_heap: u64,
}

impl BenchScope {
    fn new(name: &'static str) -> Self {
        let start_heap = heap_size();
        let start_stable_memory = ic_cdk::api::stable::stable_size();
        let start_instructions = instruction_count();

        Self {
            name,
            start_instructions,
            start_stable_memory,
            start_heap,
        }
    }
}

impl Drop for BenchScope {
    fn drop(&mut self) {
        let instructions = instruction_count() - self.start_instructions;
        let stable_memory_increase = ic_cdk::api::stable::stable_size() - self.start_stable_memory;
        let heap_increase = heap_size() - self.start_heap;

        SCOPES.with(|p| {
            let mut p = p.borrow_mut();
            p.entry(self.name).or_default().push(Measurement {
                instructions,
                heap_increase,
                stable_memory_increase,
            });
        });
    }
}

// Clears all scope data.
fn reset() {
    SCOPES.with(|p| p.borrow_mut().clear());
}

// Returns the measurements for any declared scopes,
// aggregated by the scope name.
fn get_scopes_measurements() -> std::collections::BTreeMap<&'static str, Measurement> {
    SCOPES
        .with(|p| p.borrow().clone())
        .into_iter()
        .map(|(scope, measurements)| {
            let mut total = Measurement::default();
            for measurement in measurements {
                total = total + measurement;
            }
            (scope, total)
        })
        .collect()
}

fn instruction_count() -> u64 {
    #[cfg(target_arch = "wasm32")]
    {
        ic_cdk::api::performance_counter(0)
    }

    #[cfg(not(target_arch = "wasm32"))]
    {
        // Consider using CPU time here.
        0
    }
}

fn heap_size() -> u64 {
    #[cfg(target_arch = "wasm32")]
    {
        core::arch::wasm32::memory_size(0) as u64
    }

    #[cfg(not(target_arch = "wasm32"))]
    {
        0
    }
}

thread_local! {
    static TRACING_BUFFER: RefCell<Vec<u8>> = const { RefCell::new(Vec::new()) };
}

static mut INSTRUCTIONS_START: i64 = 0;
static mut INSTRUCTIONS_END: i64 = 0;
const NUM_BYTES_ENABLED_FLAG: usize = 4;
const NUM_BYTES_NUM_ENTRIES: usize = 8;
const MAX_NUM_LOG_ENTRIES: usize = 100_000_000;
const NUM_BYTES_FUNC_ID: usize = 4;
const NUM_BYTES_INSTRUCTION_COUNTER: usize = 8;
const BUFFER_SIZE: usize = NUM_BYTES_ENABLED_FLAG
    + NUM_BYTES_NUM_ENTRIES
    + MAX_NUM_LOG_ENTRIES * (NUM_BYTES_FUNC_ID + NUM_BYTES_INSTRUCTION_COUNTER);
const LOGS_START_OFFSET: usize = NUM_BYTES_ENABLED_FLAG + NUM_BYTES_NUM_ENTRIES;
const MAX_NUM_LOG_ENTRIES_IN_RESPONSE: usize = 131_000;

#[export_name = "__prepare_tracing"]
fn prepare_tracing() -> i32 {
    TRACING_BUFFER.with_borrow_mut(|b| {
        *b = vec![0; BUFFER_SIZE];
        b.as_ptr() as i32
    })
}

/// Returns the traces recorded in the tracing buffer, adjusted for tracing overhead
/// and truncated to fit in a response.
pub fn get_traces(bench_instructions: u64) -> Result<Vec<(i32, i64)>, String> {
    TRACING_BUFFER.with_borrow(|b| {
        if b[0] == 1 {
            panic!("Tracing is still enabled.");
        }
        let num_entries = i64::from_le_bytes(
            b[NUM_BYTES_ENABLED_FLAG..(NUM_BYTES_ENABLED_FLAG + NUM_BYTES_NUM_ENTRIES)]
                .try_into()
                .unwrap(),
        );
        if num_entries > MAX_NUM_LOG_ENTRIES as i64 {
            return Err(format!(
                "There are {num_entries} log entries, which is more than the \
                 {MAX_NUM_LOG_ENTRIES} we can currently support",
            ));
        }
        let instructions_start = unsafe { INSTRUCTIONS_START };
        let mut traces = vec![(i32::MAX, 0)];
        for i in 0..num_entries {
            let log_start_address = i as usize
                * (NUM_BYTES_FUNC_ID + NUM_BYTES_INSTRUCTION_COUNTER)
                + LOGS_START_OFFSET;
            let func_id = i32::from_le_bytes(
                b[log_start_address..log_start_address + NUM_BYTES_FUNC_ID]
                    .try_into()
                    .unwrap(),
            );
            let instruction_counter = i64::from_le_bytes(
                b[log_start_address + NUM_BYTES_FUNC_ID
                    ..log_start_address + NUM_BYTES_FUNC_ID + NUM_BYTES_INSTRUCTION_COUNTER]
                    .try_into()
                    .unwrap(),
            );
            traces.push((func_id, instruction_counter - instructions_start));
        }
        traces.push((i32::MIN, unsafe { INSTRUCTIONS_END - instructions_start }));
        let traces = adjust_traces_for_overhead(traces, bench_instructions);
        // TODO(EXC-2020): consider using compression.
        let traces = truncate_traces(traces);
        Ok(traces)
    })
}

fn adjust_traces_for_overhead(traces: Vec<(i32, i64)>, bench_instructions: u64) -> Vec<(i32, i64)> {
    let num_logs = traces.len() - 2;
    let overhead = (traces[num_logs].1 as f64 - bench_instructions as f64) / (num_logs as f64);
    traces
        .into_iter()
        .enumerate()
        .map(|(i, (id, count))| {
            if i <= num_logs {
                (id, count - (overhead * i as f64) as i64)
            } else {
                (id, count - (overhead * num_logs as f64) as i64)
            }
        })
        .collect()
}

fn truncate_traces(traces: Vec<(i32, i64)>) -> Vec<(i32, i64)> {
    if traces.len() <= MAX_NUM_LOG_ENTRIES_IN_RESPONSE {
        return traces;
    }

    let mut num_traces_by_depth = BTreeMap::new();

    let mut depth = 0;
    for (func_id, _) in traces.iter() {
        if *func_id >= 0 {
            depth += 1;
            *num_traces_by_depth.entry(depth).or_insert(0) += 1;
        } else {
            depth -= 1;
        }
    }
    assert_eq!(depth, 0, "Traces are not balanced.");

    let mut depth_to_truncate = 0;
    let mut cumulative_traces = 0;
    for (depth, num_traces) in num_traces_by_depth.iter() {
        cumulative_traces += num_traces;
        if cumulative_traces <= MAX_NUM_LOG_ENTRIES_IN_RESPONSE {
            depth_to_truncate = *depth;
        } else {
            break;
        }
    }

    let truncated: Vec<_> = traces
        .into_iter()
        .scan(0, |depth, (func_id, instruction_counter)| {
            if func_id >= 0 {
                *depth += 1;
                Some((*depth, func_id, instruction_counter))
            } else {
                *depth -= 1;
                Some((*depth + 1, func_id, instruction_counter))
            }
        })
        .filter(|(depth, _, _)| *depth <= depth_to_truncate)
        .map(|(_, func_id, instruction_counter)| (func_id, instruction_counter))
        .collect();

    truncated
}