# rl-core
The core logic for a token-bucket rate limiter.
This crate implements only the limiting logic; it provides no mechanism for ensuring consistency or for blocking. Those can be added by the application or by a wrapping library.
## Local Example
Here is an example of applying an in-process rate limit. Ideally this would be wrapped up into a library and integrated with your favourite async runtime; however, no one has done this yet.
```rust
// Send at most 1 message per second per domain, with a burst of 4.
// Note: the `Config::new` arguments and the `Tracker` map below are
// assumed for illustration; check the crate docs for exact signatures.
const DOMAIN_LIMIT: Config = Config::new(Duration::from_secs(1), 4);

lazy_static! {
    // One Tracker per domain, shared across the process.
    static ref TRACKERS: Mutex<HashMap<String, Tracker>> =
        Mutex::new(HashMap::new());
}
```
## Distributed Example
Here is a simple example of applying a per-user rate limit to login attempts. This example assumes we can acquire row-level locks from our DB so that rate-limit updates are serialized.
```rust
// Limit login attempts to 1 per hour, with a burst of 10.
// As above, the exact `Config::new` signature is assumed for illustration.
const LOGIN_RATE_LIMIT: Config = Config::new(Duration::from_secs(60 * 60), 10);
```
If your DB doesn't support row-level locks you can do an optimistic check: after computing the new rate-limit state, compare-and-set the stored value. If the compare fails, someone else updated it in the meantime and you need to retry. For high-throughput use cases you likely want to manage rate limits in a shared service and use rl-core inside that service. (This service has not yet been written.)
## Features
- Weighted request support.
- Calculation of required wait time.
- Efficient updates.
- Low memory use.
- Minimal dependencies.
- Serialization of Tracker via serde. (Enable feature `use_serde`.)
## Non-features
The following are out of scope, but are intended to be implementable on top of rl-core:
- Distributed rate-limiting support.
- Waiting support.