Struct governor::RateLimiter

pub struct RateLimiter<K, S, C> where
    S: StateStore<Key = K>,
    C: Clock
{ /* fields omitted */ }

A rate limiter.

This is the structure that ties together the parameters (how many cells to allow in what time period) and the concrete state of rate limiting decisions. This crate ships with in-memory state stores, but it's possible (by implementing the StateStore trait) to make others.

Implementations

impl<S, C> RateLimiter<NotKeyed, S, C> where
    S: DirectStateStore,
    C: ReasonablyRealtime
[src]

pub async fn until_ready(&self)[src]

Asynchronously resolves as soon as the rate limiter allows it.

When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).

If multiple futures are dispatched against the rate limiter, it is advisable to use until_ready_with_jitter, to avoid thundering herds.
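
A minimal sketch of pacing a loop with until_ready (the quota and loop bounds are illustrative; the async fn must be driven by an executor such as tokio or async-std):

    use std::num::NonZeroU32;
    use governor::{Quota, RateLimiter};

    async fn paced_sends() {
        // Direct (un-keyed) limiter allowing up to 10 cells per second.
        let lim = RateLimiter::direct(Quota::per_second(NonZeroU32::new(10).unwrap()));
        for i in 0..30u32 {
            lim.until_ready().await; // resolves once the limiter admits a cell
            println!("sending item {}", i);
        }
    }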

pub async fn until_ready_with_jitter(&self, jitter: Jitter)[src]

Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.

When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).

This method allows for a randomized additional delay between polls of the rate limiter, which can help reduce the likelihood of thundering herd effects if multiple tasks try to wait on the same rate limiter.
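
A sketch of the same pattern with jitter added to each wait (the 50ms bound is arbitrary and the quota illustrative):

    use std::num::NonZeroU32;
    use std::time::Duration;
    use governor::{Jitter, Quota, RateLimiter};

    async fn paced_sends_with_jitter() {
        let lim = RateLimiter::direct(Quota::per_second(NonZeroU32::new(10).unwrap()));
        for i in 0..30u32 {
            // Add up to 50ms of random extra delay to spread out concurrent wake-ups.
            lim.until_ready_with_jitter(Jitter::up_to(Duration::from_millis(50))).await;
            println!("sending item {}", i);
        }
    }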

pub async fn until_n_ready(
    &self,
    n: NonZeroU32
) -> Result<(), InsufficientCapacity>
[src]

Asynchronously resolves as soon as the rate limiter allows it.

This is similar to until_ready, except it waits for an arbitrary number n of cells to be available.

Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.
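
A sketch of waiting for a batch (assuming InsufficientCapacity is re-exported at the crate root; the quota and batch sizes are illustrative):

    use std::num::NonZeroU32;
    use governor::{InsufficientCapacity, Quota, RateLimiter};

    async fn send_batch() -> Result<(), InsufficientCapacity> {
        // Burst size 10, so batches of at most 10 cells can ever conform.
        let lim = RateLimiter::direct(Quota::per_second(NonZeroU32::new(10).unwrap()));
        lim.until_n_ready(NonZeroU32::new(5).unwrap()).await?; // waits until 5 cells fit
        println!("batch of 5 admitted");
        // A batch larger than the burst size can never conform; this resolves to an error.
        assert!(lim.until_n_ready(NonZeroU32::new(11).unwrap()).await.is_err());
        Ok(())
    }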

pub async fn until_n_ready_with_jitter(
    &self,
    n: NonZeroU32,
    jitter: Jitter
) -> Result<(), InsufficientCapacity>
[src]

Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.

This is similar to until_ready_with_jitter, except it waits for an arbitrary number n of cells to be available.

Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.

impl RateLimiter<NotKeyed, InMemoryState, DefaultClock>[src]

Direct in-memory rate limiters - Constructors

Here we construct an in-memory rate limiter that makes direct (un-keyed) rate-limiting decisions. Direct rate limiters can be used to e.g. regulate the transmission of packets on a single connection, or to ensure that an API client stays within a service's rate limit.

pub fn direct(
    quota: Quota
) -> RateLimiter<NotKeyed, InMemoryState, DefaultClock>
[src]

Constructs a new in-memory direct rate limiter for a quota with the default real-time clock.

impl<C> RateLimiter<NotKeyed, InMemoryState, C> where
    C: Clock
[src]

pub fn direct_with_clock(quota: Quota, clock: &C) -> Self[src]

Constructs a new direct rate limiter for a quota with a custom clock.
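
A sketch of driving a limiter from a test clock (assuming governor's FakeRelativeClock test clock and its advance method, where clones share the same manually-advanced time source):

    use std::num::NonZeroU32;
    use std::time::Duration;
    use governor::clock::FakeRelativeClock;
    use governor::{Quota, RateLimiter};

    fn main() {
        let clock = FakeRelativeClock::default();
        // One cell per second, driven by the fake clock instead of real time.
        let lim = RateLimiter::direct_with_clock(
            Quota::per_second(NonZeroU32::new(1).unwrap()),
            &clock,
        );
        assert!(lim.check().is_ok());  // the first cell conforms
        assert!(lim.check().is_err()); // a second cell at the same instant is denied
        clock.advance(Duration::from_secs(1));
        assert!(lim.check().is_ok());  // after a simulated second, a cell conforms again
    }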

impl<S, C> RateLimiter<NotKeyed, S, C> where
    S: DirectStateStore,
    C: Clock
[src]

pub fn check(&self) -> Result<(), NotUntil<'_, C::Instant>>[src]

Allow a single cell through the rate limiter.

If the rate limit is reached, check returns information about the earliest time that a cell might be allowed through again.
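
A sketch of checking single cells against a direct limiter (the quota is illustrative):

    use std::num::NonZeroU32;
    use governor::{Quota, RateLimiter};

    fn main() {
        // Allow up to 2 cells per second.
        let lim = RateLimiter::direct(Quota::per_second(NonZeroU32::new(2).unwrap()));
        for request in 0..3u32 {
            match lim.check() {
                Ok(()) => println!("request {} allowed", request),
                // The error value can report the earliest time a cell might conform again.
                Err(_not_until) => println!("request {} rate-limited", request),
            }
        }
    }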

pub fn check_n(
    &self,
    n: NonZeroU32
) -> Result<(), NegativeMultiDecision<NotUntil<'_, C::Instant>>>
[src]

Allow n cells through the rate limiter, but only if all n can be accommodated together.

This method can succeed in only one way and fail in two ways:

  • Success: If all n cells can be accommodated, it returns Ok(()).
  • Failure (but ok): Not all cells can make it through at the current time. The result is Err(NegativeMultiDecision::BatchNonConforming(NotUntil)), which can be interrogated about when the batch might next conform.
  • Failure (the batch can never go through): The rate limit quota's burst size is too low for the given number of cells to ever be allowed through.

Performance

This method diverges a little from the GCRA algorithm, using multiplication to determine the next theoretical arrival time, and so is not as fast as checking a single cell.
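
A sketch of handling both failure modes (assuming NegativeMultiDecision is re-exported at the crate root; only the BatchNonConforming variant named above is matched explicitly):

    use std::num::NonZeroU32;
    use governor::{NegativeMultiDecision, Quota, RateLimiter};

    fn main() {
        // Burst size 5: batches of at most 5 cells can ever conform.
        let lim = RateLimiter::direct(Quota::per_second(NonZeroU32::new(5).unwrap()));
        match lim.check_n(NonZeroU32::new(3).unwrap()) {
            Ok(()) => println!("batch of 3 allowed"),
            Err(NegativeMultiDecision::BatchNonConforming(..)) => {
                println!("batch cannot go through right now; retry later")
            }
            Err(_) => println!("batch exceeds the burst size and can never conform"),
        }
    }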

impl<K, C> RateLimiter<K, HashMapStateStore<K>, C> where
    K: Hash + Eq + Clone,
    C: Clock
[src]

pub fn hashmap_with_clock(quota: Quota, clock: &C) -> Self[src]

Constructs a new rate limiter with a custom clock, backed by a HashMap.

impl<K, C> RateLimiter<K, DashMapStateStore<K>, C> where
    K: Hash + Eq + Clone,
    C: Clock
[src]

pub fn dashmap_with_clock(quota: Quota, clock: &C) -> Self[src]

Constructs a new rate limiter with a custom clock, backed by a DashMap.

impl<K, S, C> RateLimiter<K, S, C> where
    K: Hash + Eq + Clone,
    S: KeyedStateStore<K>,
    C: ReasonablyRealtime
[src]

pub async fn until_key_ready(&self, key: &K)[src]

Asynchronously resolves as soon as the rate limiter allows it.

When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).

If multiple futures are dispatched against the rate limiter, it is advisable to use until_key_ready_with_jitter, to avoid thundering herds.
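
A sketch of waiting per key (the keys and quota are illustrative; needs an async executor):

    use std::num::NonZeroU32;
    use governor::{Quota, RateLimiter};

    async fn per_client_sends() {
        // Up to 5 cells per second, tracked separately for each key.
        let lim = RateLimiter::keyed(Quota::per_second(NonZeroU32::new(5).unwrap()));
        for client in &["alice", "bob"] {
            lim.until_key_ready(client).await; // waits only on this client's budget
            println!("handled a request for {}", client);
        }
    }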

pub async fn until_key_ready_with_jitter(&self, key: &K, jitter: Jitter)[src]

Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.

When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).

This method allows for a randomized additional delay between polls of the rate limiter, which can help reduce the likelihood of thundering herd effects if multiple tasks try to wait on the same rate limiter.

impl<K> RateLimiter<K, DefaultKeyedStateStore<K>, DefaultClock> where
    K: Clone + Hash + Eq
[src]

pub fn keyed(quota: Quota) -> Self[src]

Constructs a new keyed rate limiter backed by the DefaultKeyedStateStore.

pub fn dashmap(quota: Quota) -> Self[src]

Constructs a new keyed rate limiter explicitly backed by a DashMap (dashmap::DashMap).

impl<K> RateLimiter<K, HashMapStateStore<K>, DefaultClock> where
    K: Clone + Hash + Eq
[src]

pub fn hashmap(quota: Quota) -> Self[src]

Constructs a new keyed rate limiter explicitly backed by a HashMap.

impl<K, S, C> RateLimiter<K, S, C> where
    S: KeyedStateStore<K>,
    K: Hash,
    C: Clock
[src]

pub fn check_key(&self, key: &K) -> Result<(), NotUntil<'_, C::Instant>>[src]

Allow a single cell through the rate limiter for the given key.

If the rate limit is reached, check_key returns information about the earliest time that a cell might be allowed through again under that key.
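
A sketch of per-key checks showing that keys are limited independently (a quota of one cell per second keeps the example short):

    use std::num::NonZeroU32;
    use governor::{Quota, RateLimiter};

    fn main() {
        // One cell per second per key.
        let lim = RateLimiter::keyed(Quota::per_second(NonZeroU32::new(1).unwrap()));
        assert!(lim.check_key(&"alice").is_ok());  // alice's first cell conforms
        assert!(lim.check_key(&"alice").is_err()); // alice is now rate-limited...
        assert!(lim.check_key(&"bob").is_ok());    // ...but bob's budget is separate
    }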

pub fn check_key_n(
    &self,
    key: &K,
    n: NonZeroU32
) -> Result<(), NegativeMultiDecision<NotUntil<'_, C::Instant>>>
[src]

Allow n cells through the rate limiter for the given key, but only if all n can be accommodated together.

This method can succeed in only one way and fail in two ways:

  • Success: If all n cells can be accommodated, it returns Ok(()).
  • Failure (but ok): Not all cells can make it through at the current time. The result is Err(NegativeMultiDecision::BatchNonConforming(NotUntil)), which can be interrogated about when the batch might next conform.
  • Failure (the batch can never go through): The rate limit quota's burst size is too low for the given number of cells to ever be allowed through.

Performance

This method diverges a little from the GCRA algorithm, using multiplication to determine the next theoretical arrival time, and so is not as fast as checking a single cell.

impl<K, S, C> RateLimiter<K, S, C> where
    S: ShrinkableKeyedStateStore<K>,
    K: Hash,
    C: Clock
[src]

Keyed rate limiters - Housekeeping

As the inputs to a keyed rate limiter can be arbitrary keys, the set of retained keys grows over time, while the number of active keys may stay smaller. To save space, a keyed rate limiter allows removing keys that are "stale", i.e., whose state is no different from that of keys that aren't present in the rate limiter state store at all.

pub fn retain_recent(&self)[src]

Retains all keys in the rate limiter that were used recently enough.

Any key whose rate-limiting state is indistinguishable from a "fresh" state (i.e., its theoretical arrival time lies in the past) is removed.

pub fn shrink_to_fit(&self)[src]

Shrinks the capacity of the rate limiter's state store, if possible.

pub fn len(&self) -> usize[src]

Returns the number of "live" keys in the rate limiter's state store.

Depending on how the state store is implemented, this may return an estimate or an out-of-date result.

pub fn is_empty(&self) -> bool[src]

Returns true if the rate limiter has no keys in it.

As with len, this method may return imprecise results (indicating that the state store is empty while a concurrent rate-limiting operation is taking place).
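
A sketch of periodic housekeeping on a keyed limiter (the key values and counts are illustrative):

    use std::num::NonZeroU32;
    use governor::{Quota, RateLimiter};

    fn main() {
        let lim = RateLimiter::keyed(Quota::per_second(NonZeroU32::new(10).unwrap()));
        for id in 0..1_000u64 {
            let _ = lim.check_key(&id); // each new key adds an entry to the state store
        }
        println!("live keys: {}", lim.len());
        // Run periodically, e.g. from a background task: drop keys whose state has
        // decayed back to "fresh", then release the capacity they occupied.
        lim.retain_recent();
        lim.shrink_to_fit();
        println!("live keys after cleanup: {}", lim.len());
    }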

impl<K, S, C> RateLimiter<K, S, C> where
    S: StateStore<Key = K>,
    C: Clock
[src]

pub fn new(quota: Quota, state: S, clock: &C) -> Self[src]

Creates a new rate limiter from components.

This is the most generic way to construct a rate-limiter; most users should prefer direct or other methods instead.

pub fn into_state_store(self) -> S[src]

Consumes the RateLimiter and returns the state store.

This is mostly useful for debugging and testing.
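
A sketch of assembling a limiter from its parts and tearing it back down (assuming InMemoryState and DefaultClock expose Default constructors, as in recent governor versions):

    use std::num::NonZeroU32;
    use governor::clock::DefaultClock;
    use governor::state::InMemoryState;
    use governor::{Quota, RateLimiter};

    fn main() {
        let clock = DefaultClock::default();
        // Equivalent to RateLimiter::direct when used with InMemoryState and the default clock.
        let lim = RateLimiter::new(
            Quota::per_second(NonZeroU32::new(5).unwrap()),
            InMemoryState::default(),
            &clock,
        );
        assert!(lim.check().is_ok());
        // Recover the state store, e.g. to inspect it in a test.
        let _state: InMemoryState = lim.into_state_store();
    }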

Trait Implementations

impl<K: Debug, S: Debug, C: Debug> Debug for RateLimiter<K, S, C> where
    S: StateStore<Key = K>,
    C: Clock,
    C::Instant: Debug
[src]

Auto Trait Implementations

impl<K, S, C> RefUnwindSafe for RateLimiter<K, S, C> where
    C: RefUnwindSafe,
    S: RefUnwindSafe,
    <C as Clock>::Instant: RefUnwindSafe
[src]

impl<K, S, C> Send for RateLimiter<K, S, C> where
    C: Send,
    S: Send
[src]

impl<K, S, C> Sync for RateLimiter<K, S, C> where
    C: Sync,
    S: Sync
[src]

impl<K, S, C> Unpin for RateLimiter<K, S, C> where
    C: Unpin,
    S: Unpin,
    <C as Clock>::Instant: Unpin
[src]

impl<K, S, C> UnwindSafe for RateLimiter<K, S, C> where
    C: UnwindSafe,
    S: UnwindSafe,
    <C as Clock>::Instant: UnwindSafe
[src]

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,