Struct trawler::WorkloadBuilder
Set the parameters for a new Lobsters-like workload.
Methods
impl<'a> WorkloadBuilder<'a>
pub fn scale(&mut self, mem_factor: f64, req_factor: f64) -> &mut Self
Set the memory and request scale factor for the workload.
A factor of 1 generates a workload commensurate with what the real lobste.rs sees. At memory scale 1, the site starts out with ~40k stories with a total of ~300k comments spread across 9k users. At request factor 1, the generated load is on average 44 requests/minute, with a request distribution set according to the one observed on lobste.rs (see data/ for details).
pub fn warmup_scale(&mut self, req_factor: f64) -> &mut Self
Set the request scale factor used for the warmup part of the workload.
Defaults to the request scale factor set by [scale]. See [scale] for details.
pub fn issuers(&mut self, n: usize) -> &mut Self
Set the number of threads used to issue requests to the backend.
Each thread can issue in_flight requests simultaneously.
pub fn time(&mut self, warmup: Duration, runtime: Duration) -> &mut Self
Set the warmup time and measured runtime for the benchmark.
pub fn in_flight(&mut self, max: usize) -> &mut Self
The maximum number of outstanding requests any single issuer is allowed to have to the backend. Defaults to 20.
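Since each issuer thread may have up to in_flight requests outstanding at once, the overall concurrency bound is the product of the two settings. A small sketch of that arithmetic (the function name is made up for illustration):

```rust
// Upper bound on simultaneously outstanding requests across the whole
// workload: each of `issuers` threads may hold up to `in_flight` requests.
fn max_outstanding(issuers: usize, in_flight: usize) -> usize {
    issuers * in_flight
}

fn main() {
    // With 4 issuer threads and the default in_flight of 20:
    println!("{}", max_outstanding(4, 20)); // prints 80
}
```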
pub fn with_histogram<'this>(&'this mut self, path: &'a str) -> &'this mut Self
Instruct the load generator to store raw histogram data of request latencies into the given file upon completion.
If the given file exists at the start of the benchmark, the existing histograms will be amended, not replaced.
impl<'a> WorkloadBuilder<'a>
pub fn run<C>(&self, client: C, prime: bool) where
    C: LobstersClient + 'static,
Run this workload with the given client.
If prime is true, the database will be seeded with stories and comments according to the memory scale factor before the benchmark starts. If the site has already been primed, there is no need to prime again unless the backend is emptied or the memory scale factor is changed. Note that priming does not delete the database, nor detect the current scale, so always empty the backend before calling run with prime set.
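Tying the builder together, a minimal sketch of a full invocation. This assumes the trawler crate is in scope and that MyClient is a hypothetical type implementing LobstersClient for some backend; implementing that trait is out of scope here.

```rust
use std::time::Duration;

// Hypothetical client type; a real benchmark would implement
// LobstersClient against an actual backend.
let client = MyClient::new("mysql://localhost/lobsters");

WorkloadBuilder::default()
    .scale(1.0, 1.0)      // real-lobste.rs memory footprint and request load
    .issuers(4)           // four issuer threads
    .in_flight(20)        // up to 20 outstanding requests per issuer
    .time(Duration::from_secs(30), Duration::from_secs(120))
    .with_histogram("latencies.hist")
    .run(client, true);   // prime the (empty) database on the first run
```

On subsequent runs against the same primed backend, pass prime as false to skip re-seeding.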
Trait Implementations
impl<'a> Default for WorkloadBuilder<'a>
Auto Trait Implementations
impl<'a> Send for WorkloadBuilder<'a>
impl<'a> Unpin for WorkloadBuilder<'a>
impl<'a> Sync for WorkloadBuilder<'a>
impl<'a> UnwindSafe for WorkloadBuilder<'a>
impl<'a> RefUnwindSafe for WorkloadBuilder<'a>
Blanket Implementations
impl<T> From<T> for T
impl<T, U> Into<U> for T where
    U: From<T>,
impl<T, U> TryFrom<U> for T where
    U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,
type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Borrow<T> for T where
    T: ?Sized,
impl<T> Any for T where
    T: 'static + ?Sized,
impl<V, T> VZip<V> for T where
    V: MultiLane<T>,