Struct governor::RateLimiter
pub struct RateLimiter<K, S, C, MW = NoOpMiddleware> where
S: StateStore<Key = K>,
C: Clock,
MW: RateLimitingMiddleware<C::Instant>, { /* fields omitted */ }
A rate limiter.
This is the structure that ties together the parameters (how many cells to allow in what time
period) and the concrete state of rate limiting decisions. This crate ships in-memory state
stores, but it’s possible (by implementing the StateStore
trait) to make others.
Implementations
impl<S, C, MW> RateLimiter<NotKeyed, S, C, MW> where
S: DirectStateStore,
C: ReasonablyRealtime,
MW: RateLimitingMiddleware<C::Instant, NegativeOutcome = NotUntil<C::Instant>>,
pub async fn until_ready(&self) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it.
When polled, the returned future either resolves immediately (if the rate limiter allows it) or triggers an asynchronous delay, after which the rate limiter is polled again. This means the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
If multiple futures are dispatched against the rate limiter, it is advisable to use until_ready_with_jitter to avoid thundering herds.
pub async fn until_ready_with_jitter(&self, jitter: Jitter) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.
When polled, the returned future either resolves immediately (if the rate limiter allows it) or triggers an asynchronous delay, after which the rate limiter is polled again. This means the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
This method adds a randomized extra delay between polls of the rate limiter, which reduces the likelihood of thundering-herd effects when multiple tasks wait on the same rate limiter.
pub async fn until_n_ready(
&self,
n: NonZeroU32
) -> Result<MW::PositiveOutcome, InsufficientCapacity>
Asynchronously resolves as soon as the rate limiter allows it.
This is similar to until_ready, except it waits for an arbitrary number n of cells to be available.
Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.
pub async fn until_n_ready_with_jitter(
&self,
n: NonZeroU32,
jitter: Jitter
) -> Result<MW::PositiveOutcome, InsufficientCapacity>
Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.
This is similar to until_ready_with_jitter, except it waits for an arbitrary number n of cells to be available.
Returns InsufficientCapacity if the n provided exceeds the maximum capacity of the rate limiter.
Direct in-memory rate limiters - Constructors
Here we construct an in-memory rate limiter that makes direct (un-keyed) rate-limiting decisions. Direct rate limiters can be used to e.g. regulate the transmission of packets on a single connection, or to ensure that an API client stays within a service’s rate limit.
pub fn direct(
quota: Quota
) -> RateLimiter<NotKeyed, InMemoryState, DefaultClock, NoOpMiddleware>
Constructs a new in-memory direct rate limiter for a quota with the default real-time clock.
Constructs a new direct rate limiter for a quota with a custom clock (direct_with_clock).
impl<S, C, MW> RateLimiter<NotKeyed, S, C, MW> where
S: DirectStateStore,
C: Clock,
MW: RateLimitingMiddleware<C::Instant>,
pub fn check(&self) -> Result<MW::PositiveOutcome, MW::NegativeOutcome>
Allow a single cell through the rate limiter.
If the rate limit is reached, check returns information about the earliest time that a cell might be allowed through again.
pub fn check_n(
&self,
n: NonZeroU32
) -> Result<MW::PositiveOutcome, NegativeMultiDecision<MW::NegativeOutcome>>
Allow only all n cells through the rate limiter.
This method can succeed in only one way and fail in two ways:
- Success: If all n cells can be accommodated, it returns Ok(()).
- Failure (but ok): Not all cells can make it through at the current time. The result is Err(NegativeMultiDecision::BatchNonConforming(NotUntil)), which can be interrogated about when the batch might next conform.
- Failure (the batch can never go through): The rate limit quota’s burst size is too low for the given number of cells to ever be allowed through.
Performance
This method diverges a little from the GCRA algorithm, using multiplication to determine the next theoretical arrival time, and so is not as fast as checking a single cell.
impl<K, C> RateLimiter<K, HashMapStateStore<K>, C, NoOpMiddleware<C::Instant>> where
K: Hash + Eq + Clone,
C: Clock,
Constructs a new rate limiter with a custom clock, backed by a HashMap (hashmap_with_clock).
impl<K, C> RateLimiter<K, DashMapStateStore<K>, C, NoOpMiddleware<C::Instant>> where
K: Hash + Eq + Clone,
C: Clock,
Constructs a new rate limiter with a custom clock, backed by a DashMap (dashmap_with_clock).
impl<K, S, C, MW> RateLimiter<K, S, C, MW> where
K: Hash + Eq + Clone,
S: KeyedStateStore<K>,
C: ReasonablyRealtime,
MW: RateLimitingMiddleware<C::Instant, NegativeOutcome = NotUntil<C::Instant>>,
pub async fn until_key_ready(&self, key: &K) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it.
When polled, the returned future either resolves immediately (if the rate limiter allows it) or triggers an asynchronous delay, after which the rate limiter is polled again. This means the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
If multiple futures are dispatched against the rate limiter, it is advisable to use until_key_ready_with_jitter to avoid thundering herds.
pub async fn until_key_ready_with_jitter(
&self,
key: &K,
jitter: Jitter
) -> MW::PositiveOutcome
Asynchronously resolves as soon as the rate limiter allows it, with a randomized wait period.
When polled, the returned future either resolves immediately (in the case where the rate limiter allows it), or else triggers an asynchronous delay, after which the rate limiter is polled again. This means that the future might resolve at some later time (depending on what other measurements are made on the rate limiter).
This method allows for a randomized additional delay between polls of the rate limiter, which can help reduce the likelihood of thundering herd effects if multiple tasks try to wait on the same rate limiter.
impl<K, S, C, MW> RateLimiter<K, S, C, MW> where
S: KeyedStateStore<K>,
K: Hash,
C: Clock,
MW: RateLimitingMiddleware<C::Instant>,
pub fn check_key(&self, key: &K) -> Result<MW::PositiveOutcome, MW::NegativeOutcome>
Allow a single cell through the rate limiter for the given key.
If the rate limit is reached, check_key returns information about the earliest time that a cell might be allowed through again under that key.
pub fn check_key_n(
&self,
key: &K,
n: NonZeroU32
) -> Result<MW::PositiveOutcome, NegativeMultiDecision<MW::NegativeOutcome>>
Allow only all n cells through the rate limiter for the given key.
This method can succeed in only one way and fail in two ways:
- Success: If all n cells can be accommodated, it returns Ok(()).
- Failure (but ok): Not all cells can make it through at the current time. The result is Err(NegativeMultiDecision::BatchNonConforming(NotUntil)), which can be interrogated about when the batch might next conform.
- Failure (the batch can never go through): The rate limit is too low for the given number of cells.
Performance
This method diverges a little from the GCRA algorithm, using multiplication to determine the next theoretical arrival time, and so is not as fast as checking a single cell.
impl<K, S, C, MW> RateLimiter<K, S, C, MW> where
S: ShrinkableKeyedStateStore<K>,
K: Hash,
C: Clock,
MW: RateLimitingMiddleware<C::Instant>,
Keyed rate limiters - Housekeeping
As the inputs to a keyed rate limiter can be arbitrary keys, the set of retained keys grows, while the number of active keys may stay smaller. To save space, a keyed rate limiter allows removing keys that are “stale”, i.e., whose state is indistinguishable from that of keys not present in the rate limiter’s state store.
pub fn retain_recent(&self)
Retains all keys in the rate limiter that were used recently enough. Any key whose rate limiting state is indistinguishable from a “fresh” state (i.e., its theoretical arrival time lies in the past) is removed.
pub fn shrink_to_fit(&self)
Shrinks the capacity of the rate limiter’s state store, if possible.
pub fn len(&self) -> usize
Returns the number of “live” keys in the rate limiter’s state store. Depending on how the state store is implemented, this may return an estimate or an out-of-date result.
impl<K, S, C, MW> RateLimiter<K, S, C, MW> where
S: StateStore<Key = K>,
C: Clock,
MW: RateLimitingMiddleware<C::Instant>,
Creates a new rate limiter from components (new). This is the most generic way to construct a rate limiter; most users should prefer direct or other methods instead.
Consumes the RateLimiter and returns the state store (into_state_store). This is mostly useful for debugging and testing.
impl<K, S, C, MW> RateLimiter<K, S, C, MW> where
S: StateStore<Key = K>,
C: Clock,
MW: RateLimitingMiddleware<C::Instant>,
pub fn with_middleware<Outer: RateLimitingMiddleware<C::Instant>>(
self
) -> RateLimiter<K, S, C, Outer>
Convert the given rate limiter into one that uses a different middleware.