Struct slottle::Throttle
Limits resource access speed by time interval and concurrency.
Implementations
impl Throttle
pub fn builder() -> ThrottleBuilder
Initialize a builder to create a throttle.
pub fn run<F, T>(&self, f: F) -> T where
F: FnOnce() -> T,
Run a function.

Calling run(...) may block the current thread, depending on the throttle’s internal state and configuration.
Example
use std::time::Duration;
use rayon::prelude::*;
use slottle::Throttle;

let throttle = Throttle::builder()
    .interval(Duration::from_millis(5))
    .build()
    .unwrap();

let ans: Vec<u32> = vec![3, 2, 1]
    .into_par_iter()
    .map(|x| {
        // parallel run here
        throttle.run(|| x + 1)
    })
    .collect();

assert_eq!(ans, vec![4, 3, 2]);
pub fn run_fallible<F, T, E>(&self, f: F) -> Result<T, E> where
F: FnOnce() -> Result<T, E>,
Run a function which is fallible.

When f returns an Err, the throttle treats that run as “failed”. Failures are counted by ThrottleLog and may change the following delay intervals in the current throttle scope, according to the user-defined Interval given to ThrottleBuilder::interval().

Calling run_fallible(...) may block the current thread, depending on the throttle’s internal state and configuration.
Example
use std::time::{Duration, Instant};
use rayon::prelude::*;
use slottle::{Throttle, Interval};

let throttle = Throttle::builder()
    .interval(Interval::new(
        |log| match log.unwrap().failure_count_cont() {
            0 => Duration::from_millis(10), // if successful
            _ => Duration::from_millis(50), // if failed
        },
        1, // log_size
    ))
    .build()
    .unwrap();

let started_time = Instant::now();

vec![Result::<(), ()>::Err(()); 3] // 3 Err here
    .into_par_iter()
    .for_each(|err| {
        throttle.run_fallible(|| {
            let time_passed_ms = started_time.elapsed().as_secs_f64() * 1000.0;
            println!("time passed: {:.2}ms", time_passed_ms);
            err
        });
    });
The previous code will roughly print:
time passed: 0.32ms
time passed: 10.19ms
time passed: 60.72ms
Explanation: Data in ThrottleLog takes effect one op late

If you read the previous example and its output carefully, you may notice that the first op failed but the second op did not immediately slow down (to 50ms); the slowdown only appeared on the third. You may wonder what happened here.
Technically, all of the following statements are true:

- Whether an op failed is known only after it has finished.
- The current implementation of Throttle does its “waiting” just before an op starts. (If the waiting happened after an op finished, the final op might block its thread unnecessarily.)
- The “next allowed timepoint” must be assigned together with the “waiting” as an atomic unit. (Otherwise, in a multi-threaded situation, more than one op could retrieve the same “allowed timepoint” and run at the same time.)
Combining those three points: by the time op 1 has finished and ThrottleLog is updated, the “next allowed timepoint” has already been calculated for other pending ops (those ops may have started before the current op finished if concurrent >= 2). But it can look a little weird when concurrent == 1.
Here is the chart:
f: assigned jobs, s: sleep function
thread 1: |f1()---|s()----|f2()--|s()---------------------------------|f3()---|.......
| int.succ | interval (failed) |...............
^ ^ ^-- at this point throttle determined which time f3 allowed to run
\ \
\ -- f1 finished, now throttle known f1 failed, write into the log
\
-- at this point throttle determined "which time f2 allowed to run"
time pass ----->
Thus, data in ThrottleLog is delayed by one op before it takes effect (no matter how many ops run concurrently).
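This timing behaviour can be modeled with plain standard-library types. The sketch below is illustrative only, not slottle’s actual implementation: a hypothetical TimeSlot type guards the “next allowed timepoint” with a mutex, and each op reserves its start time atomically before it runs, so a failure observed when an op finishes can only influence reservations made afterwards.

```rust
use std::sync::Mutex;
use std::thread;
use std::time::{Duration, Instant};

// Illustrative model of the pattern described above (names are
// hypothetical, not part of slottle's API).
struct TimeSlot {
    next_allowed: Mutex<Instant>,
    interval: Duration,
}

impl TimeSlot {
    fn new(interval: Duration) -> Self {
        TimeSlot {
            next_allowed: Mutex::new(Instant::now()),
            interval,
        }
    }

    // Reserve a start time under the lock, then sleep until it. Because the
    // reservation happens before the op runs, two threads can never claim
    // the same timepoint -- and the interval following this op is already
    // fixed before the op's success or failure is known.
    fn wait_for_slot(&self) {
        let my_start = {
            let mut next = self.next_allowed.lock().unwrap();
            let my_start = (*next).max(Instant::now());
            *next = my_start + self.interval; // assigned before the op runs
            my_start
        };
        thread::sleep(my_start.saturating_duration_since(Instant::now()));
    }
}
```

Since each op’s timepoint is reserved before the op begins, updating the interval based on an op’s result can only affect the reservation made after the next one, which is exactly the one-op delay the chart shows.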
pub fn retry<F, T, E, R>(&self, f: F, max_retry: usize) -> Result<T, E> where
F: FnMut(usize) -> R,
R: Into<RetryableResult<T, E>>,
Run a function and retry when it fails.

If f returns a Result::Err, the throttle automatically re-runs the function. Retries happen again and again until the max_retry limit is reached or the function succeeds. For example, if max_retry == 4, then f may run 5 times at maximum.

This method may affect interval calculation, because any kind of Err counts as a failure. Check run_fallible() to see how that works.

Calling retry(...) may block the current thread, depending on the throttle’s internal state and configuration.
Example
use std::time::Duration;
use rayon::prelude::*;
use slottle::Throttle;

let throttle = Throttle::builder().build().unwrap();

let which_round_finished: Vec<Result<_, _>> = vec![2, 1, 0]
    .into_par_iter()
    .map(|x| {
        throttle.retry(
            // round always in `1..=(max_retry + 1)` (`1..=2` in this case)
            |round| match x + round >= 3 {
                false => Err(round),
                true => Ok(round),
            },
            1, // max_retry == 1
        )
    })
    .collect();

assert_eq!(which_round_finished, vec![Ok(1), Ok(2), Err(2)]);
The function f can also return RetryableResult::FatalErr to tell the throttle not to do any further retries:
use std::time::Duration;
use rayon::prelude::*;
use slottle::{Throttle, RetryableResult};

let throttle = Throttle::builder().build().unwrap();

let which_round_finished: Vec<Result<_, _>> = vec![2, 1, 0]
    .into_par_iter()
    .map(|x| {
        throttle.retry(
            // round always in `1..=(max_retry + 1)` (`1..=2` in this case)
            |round| match x + round >= 3 {
                // FatalErr would not retry
                false => RetryableResult::FatalErr(round),
                true => RetryableResult::Ok(round),
            },
            1, // max_retry == 1
        )
    })
    .collect();

assert_eq!(which_round_finished, vec![Ok(1), Err(1), Err(1)]);
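Setting the throttling aside, the retry semantics above can be modeled in a few lines of plain Rust. This is an illustrative sketch, not slottle’s implementation: the Outcome enum and retry_model function are hypothetical stand-ins for RetryableResult and retry(), showing that f runs at most max_retry + 1 times and that a fatal error stops retrying immediately.

```rust
// Hypothetical model of retry semantics: Ok stops, FatalErr stops,
// RetryableErr retries until max_retry is exhausted.
enum Outcome<T, E> {
    Ok(T),
    RetryableErr(E),
    FatalErr(E),
}

fn retry_model<T, E>(
    mut f: impl FnMut(usize) -> Outcome<T, E>,
    max_retry: usize,
) -> Result<T, E> {
    let mut round = 1; // round is always in `1..=(max_retry + 1)`
    loop {
        match f(round) {
            Outcome::Ok(v) => return Ok(v),
            Outcome::FatalErr(e) => return Err(e),
            Outcome::RetryableErr(e) => {
                if round > max_retry {
                    return Err(e); // retries exhausted
                }
                round += 1;
            }
        }
    }
}
```

In the real API each invocation of f additionally passes through the throttle’s waiting logic, so every round (including retries) respects the configured intervals.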
Trait Implementations
Auto Trait Implementations
impl !RefUnwindSafe for Throttle
impl Send for Throttle
impl Sync for Throttle
impl Unpin for Throttle
impl !UnwindSafe for Throttle
Blanket Implementations
impl<T> Any for T where
T: 'static + ?Sized,
impl<T> Borrow<T> for T where
T: ?Sized,
impl<T> BorrowMut<T> for T where
T: ?Sized,
pub fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T, U> Into<U> for T where
U: From<T>,
impl<T, U> TryFrom<U> for T where
U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
pub fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where
U: TryFrom<T>,