pub trait Benchmark {
type Input: Clone;
type Output;
// Required methods
fn prepare(&self) -> Self::Input;
fn execute(&self, input: Self::Input) -> Result<Self::Output, String>;
fn name(&self) -> String;
fn sync(&self);
// Provided methods
fn num_samples(&self) -> usize { ... }
fn options(&self) -> Option<String> { ... }
fn shapes(&self) -> Vec<Vec<usize>> { ... }
fn profile(&self, args: Self::Input) -> Result<ProfileDuration, String> { ... }
fn profile_full(&self, args: Self::Input) -> Result<ProfileDuration, String> { ... }
fn run(
&self,
timing_method: TimingMethod,
) -> Result<BenchmarkDurations, String> { ... }
}
Benchmark trait.
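A minimal sketch of implementing this trait for a CPU-only workload. The trait is redeclared here (required methods only) so the example is self-contained; `SumBenchmark` and its fields are hypothetical, not part of the crate.

```rust
// Minimal redeclaration of the required methods, for illustration only.
trait Benchmark {
    type Input: Clone;
    type Output;

    fn prepare(&self) -> Self::Input;
    fn execute(&self, input: Self::Input) -> Result<Self::Output, String>;
    fn name(&self) -> String;
    fn sync(&self);
}

/// Hypothetical benchmark: sums a vector of integers.
struct SumBenchmark {
    len: usize,
}

impl Benchmark for SumBenchmark {
    type Input = Vec<u64>;
    type Output = u64;

    // Allocation happens in `prepare`, so it is excluded from the timing.
    fn prepare(&self) -> Self::Input {
        (0..self.len as u64).collect()
    }

    // Return the sum so the work cannot be optimized away.
    fn execute(&self, input: Self::Input) -> Result<Self::Output, String> {
        Ok(input.iter().sum())
    }

    fn name(&self) -> String {
        "sum".to_string()
    }

    // Nothing asynchronous to wait on in this CPU-only sketch.
    fn sync(&self) {}
}

fn main() {
    let bench = SumBenchmark { len: 1000 };
    let input = bench.prepare();
    let output = bench.execute(input).unwrap();
    bench.sync();
    println!("{}: {}", bench.name(), output);
}
```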
Required Associated Types
Required Methods
fn prepare(&self) -> Self::Input
Prepare the benchmark: run anything that is essential to the benchmark but should not be included in the measured duration.
Notes
This does not need to include warmup; the benchmark will be run at least once without measuring the execution time.
fn execute(&self, input: Self::Input) -> Result<Self::Output, String>
Execute the benchmark and return the logical output of the task.
It is important to return the output; otherwise, dead-code elimination might optimize away the code that should be benchmarked.
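To illustrate why the output must escape the optimizer: the standard library's `std::hint::black_box` serves the same purpose as returning the value from `execute`. This is a generic sketch, not part of the trait.

```rust
use std::hint::black_box;

fn sum_to(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    // If the result were simply discarded, the optimizer could delete the
    // loop entirely. Passing it through `black_box` (or returning it, as
    // `execute` does) forces the computation to actually happen.
    let result = black_box(sum_to(black_box(1_000)));
    println!("{result}");
}
```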
Provided Methods
fn num_samples(&self) -> usize
Number of samples per run required to achieve statistical significance.
fn profile(&self, args: Self::Input) -> Result<ProfileDuration, String>
Start measuring the computation duration.
fn profile_full(&self, args: Self::Input) -> Result<ProfileDuration, String>
Start measuring the computation duration. Use the full duration regardless of whether a device duration is available.
fn run(&self, timing_method: TimingMethod) -> Result<BenchmarkDurations, String>
Run the benchmark a number of times.
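A plausible sketch of what such a run loop does, using `std::time::Instant`: one unmeasured warmup pass, then `num_samples` timed iterations with `prepare` outside the measured window and `sync` before stopping the clock. The `Bench` type, the free-standing `run` function, and the returned `Vec<Duration>` are stand-ins for illustration; the real crate's `run` takes a `TimingMethod` and returns `BenchmarkDurations`.

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in benchmark; the real crate uses the `Benchmark` trait.
struct Bench;

impl Bench {
    fn prepare(&self) -> Vec<u64> {
        (0..1_000).collect()
    }
    fn execute(&self, input: Vec<u64>) -> Result<u64, String> {
        Ok(input.iter().sum())
    }
    fn sync(&self) {}
    fn num_samples(&self) -> usize {
        10
    }
}

// Sketch of a run loop: one unmeasured warmup pass, then timed samples.
fn run(bench: &Bench) -> Result<Vec<Duration>, String> {
    // Warmup, excluded from the measurements.
    bench.execute(bench.prepare())?;
    bench.sync();

    let mut durations = Vec::with_capacity(bench.num_samples());
    for _ in 0..bench.num_samples() {
        let input = bench.prepare(); // not timed
        let start = Instant::now();
        let _output = bench.execute(input)?;
        bench.sync(); // wait for any pending work before stopping the clock
        durations.push(start.elapsed());
    }
    Ok(durations)
}

fn main() {
    let durations = run(&Bench).unwrap();
    println!("collected {} samples", durations.len());
}
```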