leaky-bucket
A token-based rate limiter based on the leaky bucket algorithm.
If the bucket overflows and goes over its max configured capacity, the task that tried to acquire the tokens will be suspended until the required number of tokens has been drained from the bucket.
Since this crate uses timing facilities from Tokio, it has to be used within a Tokio runtime with the `time` feature enabled.
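For example, a limiter can be driven from a Tokio entry point like the following minimal sketch (the runtime flavour and token count are arbitrary illustration values):

```rust
use leaky_bucket::RateLimiter;

// Requires the Tokio `rt`, `macros` and `time` features.
#[tokio::main(flavor = "current_thread")]
async fn main() {
    let limiter = RateLimiter::builder().initial(1).build();

    // Acquiring uses `tokio::time` internally to sleep between refills, so
    // it must be called from within a Tokio runtime.
    limiter.acquire_one().await;
}
```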
This library has some neat features, which include:
- Not requiring a background task. This is usually needed by token bucket rate limiters to drive progress. Instead, one of the waiting tasks temporarily assumes the role of coordinator (called the core). This reduces the number of tasks that need to sleep, which can be a source of jitter for imprecise sleeping implementations and tight limiters. See below for more details.
- Dropped tasks release any resources they've reserved, so constructing and cancelling asynchronous tasks does not end up taking up wait slots that are never used, which would be the case for cell-based rate limiters. See the sketch just after this list.
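A minimal sketch of that cancellation behaviour, using `tokio::time::timeout` to drop an in-flight `acquire` (this runs inside an async Tokio context; the durations and token counts are arbitrary illustration values):

```rust
use std::time::Duration;

use leaky_bucket::RateLimiter;

let limiter = RateLimiter::builder()
    .initial(0)
    .refill(1)
    .interval(Duration::from_millis(100))
    .build();

// Give up after 10ms. Dropping the timed-out `acquire` future releases
// whatever it had reserved, so no wait slot stays occupied.
let timed_out = tokio::time::timeout(Duration::from_millis(10), limiter.acquire(5)).await;
assert!(timed_out.is_err());

// The limiter remains fully usable afterwards.
limiter.acquire(1).await;
```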
Usage
The core type is `RateLimiter`, which allows for limiting the throughput of a section using its `acquire`, `try_acquire`, and `acquire_one` methods.
The following is a simple example where we wrap requests through an HTTP `Client`, to ensure that we don't exceed a given limit:
```rust
use leaky_bucket::RateLimiter;

/// A blog client.
struct BlogClient {
    /// Limits how many requests we issue per interval.
    limiter: RateLimiter,
}
```
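Building on that skeleton, the request path might look roughly like the sketch below. The `get_posts` method, the `http_get` helper, and the endpoint URL are illustrative assumptions rather than part of this crate; the only crate-specific call is `acquire`:

```rust
impl BlogClient {
    /// Get all posts from the service, staying within the rate limit.
    async fn get_posts(&self) -> Result<String, Box<dyn std::error::Error>> {
        // Suspend until a token is available before sending the request.
        self.limiter.acquire(1).await;
        http_get("https://example.com/posts").await
    }
}

// Hypothetical stand-in for an HTTP GET performed with your client of choice.
async fn http_get(url: &str) -> Result<String, Box<dyn std::error::Error>> {
    Ok(format!("GET {url}"))
}
```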
Implementation details
Each rate limiter has two acquisition modes: a fast path and a slow path. The fast path is used if the desired number of tokens is readily available, and simply involves decrementing the number of tokens available in the shared pool.
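The non-blocking counterpart is `try_acquire`, which succeeds only if the desired number of tokens is already available and otherwise returns immediately without waiting for a refill. A small sketch with arbitrary token counts:

```rust
use leaky_bucket::RateLimiter;

let limiter = RateLimiter::builder().initial(5).build();

// The tokens are already in the pool, so this takes the fast path without
// suspending the task.
assert!(limiter.try_acquire(5));

// The pool is now empty, so this fails instead of waiting for a refill.
assert!(!limiter.try_acquire(1));
```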
If the required number of tokens is not available, the task will be forced to be suspended until the next refill interval. Here one of the acquiring tasks will switch over to work as a core. This is known as core switching.
```rust
use leaky_bucket::RateLimiter;
use std::time::Duration;

let limiter = RateLimiter::builder()
    .initial(10)
    // Refill interval; the exact value here is illustrative.
    .interval(Duration::from_millis(100))
    .build();

// This is instantaneous since the rate limiter starts with 10 tokens to
// spare.
limiter.acquire(10).await;

// This however needs to core switch and wait for a while until the desired
// number of tokens is available.
limiter.acquire(10).await;
```
The core is responsible for sleeping for the configured interval so that more tokens can be added, after which it ensures that any tasks waiting to acquire, including itself, are appropriately unsuspended.
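How many tokens each interval adds is controlled by the builder's `refill` setting. A small sketch, inside an async context, with arbitrary values:

```rust
use std::time::Duration;

use leaky_bucket::RateLimiter;

// Start empty, add 2 tokens every 50ms, and hold at most 100 tokens.
let limiter = RateLimiter::builder()
    .initial(0)
    .refill(2)
    .interval(Duration::from_millis(50))
    .max(100)
    .build();

// Four tokens are needed, so the acquiring task core switches and sleeps
// for roughly two refill intervals before it is released.
limiter.acquire(4).await;
```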
On-demand core switching is what allows this rate limiter implementation to work without a coordinating background thread. But we need to ensure that any asynchronous task that uses `RateLimiter` either runs an `acquire` call to completion, or is cancelled by being dropped.
If neither of these holds, the core might leak and be locked indefinitely, preventing any future use of the rate limiter from making progress. This is similar to locking an asynchronous `Mutex` but never dropping its guard.
The following example demonstrates this scenario (it must run inside a Tokio runtime):
```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Wake, Waker};

use leaky_bucket::RateLimiter;

// A waker that does nothing, so we can poll the future manually.
struct NoopWaker;

impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

let limiter = Arc::new(RateLimiter::builder().build());

let waker: Waker = Arc::new(NoopWaker).into();
let mut cx = Context::from_waker(&waker);

let mut a0 = Box::pin(limiter.acquire(1));
// Poll once to ensure that the core task is assigned.
assert!(a0.as_mut().poll(&mut cx).is_pending());
assert!(a0.is_core(), "expected the acquire task to become the core");

// We leak the core task, preventing the rate limiter from making progress
// by assigning new core tasks.
std::mem::forget(a0);

// Awaiting acquire here would block forever.
// limiter.acquire(1).await;
```
Fairness
By default `RateLimiter` uses a fair scheduler. This ensures that the
uses a fair scheduler. This ensures that the
core task makes progress even if there are many tasks waiting to acquire
tokens. This might cause more core switching, increasing the total work
needed. An unfair scheduler is expected to do a bit less work under
contention. But without fair scheduling some tasks might end up taking
longer to acquire than expected.
Unfair rate limiters also have access to a fast path for acquiring tokens, which might further improve throughput.
This behavior can be tweaked with the `Builder::fair` option.
```rust
use leaky_bucket::RateLimiter;

let limiter = RateLimiter::builder()
    .fair(false)
    .build();
```
The `unfair-scheduling` example can showcase this phenomenon.
```text
# fair
Max: 1011ms, Total: 1012ms
Timings:
 0: 101ms
 1: 101ms
 2: 101ms
 3: 101ms
 4: 101ms
...
# unfair
Max: 1014ms, Total: 1014ms
Timings:
 0: 1014ms
 1: 101ms
 2: 101ms
 3: 101ms
 4: 101ms
...
```
As can be seen above, the first task in the unfair scheduler takes longer to run because it prioritises releasing other tasks waiting to acquire over itself.