Struct trawler::WorkloadBuilder
pub struct WorkloadBuilder<'a> { /* private fields */ }
Set the parameters for a new Lobsters-like workload.
Implementations
impl<'a> WorkloadBuilder<'a>
pub fn scale(&mut self, mem_factor: f64, req_factor: f64) -> &mut Self
Set the memory and request scale factor for the workload.
A factor of 1 generates a workload commensurate with what the real lobste.rs sees. At memory scale 1, the site starts out with ~40k stories with a total of ~300k comments spread across 9k users. At request factor 1, the generated load is on average 44 requests/minute, with a request distribution set according to the one observed on lobste.rs (see data/ for details).
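To make the scale factors concrete, here is a small, self-contained sketch (the helper functions are hypothetical, not part of trawler) that computes the approximate seeded data sizes and target request rate implied by the baseline numbers above:

```rust
// Baseline sizes observed on lobste.rs, as quoted in the docs above,
// multiplied by the chosen scale factors. Helpers are illustrative only.
fn seeded_sizes(mem_factor: f64) -> (u64, u64, u64) {
    let stories = (40_000.0 * mem_factor) as u64;
    let comments = (300_000.0 * mem_factor) as u64;
    let users = (9_000.0 * mem_factor) as u64;
    (stories, comments, users)
}

fn requests_per_minute(req_factor: f64) -> f64 {
    44.0 * req_factor
}

fn main() {
    // At half memory scale, priming seeds roughly half the baseline data.
    assert_eq!(seeded_sizes(0.5), (20_000, 150_000, 4_500));
    // At request factor 2, the generator targets ~88 requests/minute.
    assert_eq!(requests_per_minute(2.0), 88.0);
}
```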
pub fn issuers(&mut self, n: usize) -> &mut Self
Set the number of threads used to issue requests to the backend.
Each thread can issue in_flight requests simultaneously.
pub fn time(&mut self, warmup: Duration, runtime: Duration) -> &mut Self
Set the warmup time and runtime for the benchmark.
pub fn in_flight(&mut self, max: usize) -> &mut Self
Set the maximum number of outstanding requests any single issuer is allowed to have to the backend. Defaults to 20.
pub fn with_histogram<'this>(&'this mut self, path: &'a str) -> &'this mut Self
Instruct the load generator to store raw histogram data of request latencies into the given file upon completion.
If the given file exists at the start of the benchmark, the existing histograms will be amended, not replaced.
impl<'a> WorkloadBuilder<'a>
pub fn run<C, I>(&self, factory: I, prime: bool)
where
    I: Send + 'static,
    C: LobstersClient<Factory = I> + 'static,
Run this workload with clients spawned from the given factory.
If prime is true, the database will be seeded with stories and comments according to the memory scale factor before the benchmark starts. If the site has already been primed, there is no need to prime again unless the backend is emptied or the memory scale factor is changed. Note that priming does not delete the database, nor detect the current scale, so always empty the backend before calling run with prime set.
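Putting the builder together, the methods above can be chained as in the following sketch. This is not compilable on its own: MyClient and my_factory are hypothetical user-supplied types (MyClient would have to implement LobstersClient with Factory = I), and it assumes WorkloadBuilder can be constructed via Default:

```rust
use std::time::Duration;

// Sketch only: `MyClient` and `my_factory` are hypothetical placeholders
// for a real backend client and its factory.
let mut wl = trawler::WorkloadBuilder::default();
wl.scale(1.0, 1.0)                 // real-lobste.rs memory and request scale
    .issuers(4)                    // four threads issuing requests
    .time(Duration::from_secs(30), Duration::from_secs(120))
    .in_flight(20)                 // the default, shown explicitly
    .with_histogram("lobsters.hist");
// Prime on the first run against an empty backend, then benchmark:
wl.run::<MyClient, _>(my_factory, true);
```

On subsequent runs against the same primed backend, pass false for prime so the existing seeded data is reused.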