Crate tower_resilience_ratelimiter

Advanced rate limiting middleware for Tower services.

This crate provides enhanced rate limiting inspired by Resilience4j’s RateLimiter, with features beyond Tower’s built-in rate limiting.

§Features

  • Permit-based rate limiting: Control requests per time period
  • Multiple window types: Fixed, sliding log, and sliding counter algorithms
  • Configurable timeout: Wait up to a specified duration for permits
  • Automatic refresh: Permits automatically refresh after each period
  • Event system: Observability through rate limiter events

§Window Types

The rate limiter supports three different windowing strategies:

  • Fixed (default): Resets permits at fixed intervals. Simple and efficient but can allow bursts at window boundaries.

  • SlidingLog: Stores timestamps of each request. Provides precise rate limiting with no burst allowance, but uses O(n) memory where n = requests in window.

  • SlidingCounter: Uses weighted averaging between time buckets. Approximate sliding-window behavior with O(1) memory, ideal for high-throughput APIs.
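The fixed window's boundary burst can be made concrete with a std-only toy limiter (a sketch of the algorithm, not this crate's implementation): with a limit of 2 per second, up to 4 requests can be admitted within a single millisecond straddling a window edge.

```rust
use std::time::Duration;

/// Toy fixed-window limiter: one counter that resets at each period boundary.
/// Illustration only; not this crate's internals.
struct FixedWindow {
    period: Duration,
    limit: u32,
    current_window: u128, // index of the window `count` belongs to
    count: u32,
}

impl FixedWindow {
    fn new(period: Duration, limit: u32) -> Self {
        Self { period, limit, current_window: 0, count: 0 }
    }

    /// `now` is a monotonic timestamp; returns true if a permit is granted.
    fn try_acquire(&mut self, now: Duration) -> bool {
        let window = now.as_millis() / self.period.as_millis();
        if window != self.current_window {
            // A new fixed window has begun: the counter resets completely.
            self.current_window = window;
            self.count = 0;
        }
        if self.count < self.limit {
            self.count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = FixedWindow::new(Duration::from_secs(1), 2);
    // Two permits at the very end of the first window...
    assert!(rl.try_acquire(Duration::from_millis(999)));
    assert!(rl.try_acquire(Duration::from_millis(999)));
    assert!(!rl.try_acquire(Duration::from_millis(999)));
    // ...and two more 1 ms later in the next window: a 2x burst at the boundary.
    assert!(rl.try_acquire(Duration::from_millis(1000)));
    assert!(rl.try_acquire(Duration::from_millis(1000)));
}
```

The two sliding strategies exist precisely to eliminate (SlidingLog) or smooth out (SlidingCounter) this boundary burst.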

§Examples

§Basic Rate Limiting (Fixed Window)

use tower_resilience_ratelimiter::RateLimiterLayer;
use tower::ServiceBuilder;
use std::time::Duration;

// Allow 100 requests per second, wait up to 500ms for a permit
let rate_limiter = RateLimiterLayer::builder()
    .limit_for_period(100)
    .refresh_period(Duration::from_secs(1))
    .timeout_duration(Duration::from_millis(500))
    .on_permit_acquired(|wait_duration| {
        println!("Permit acquired after {:?}", wait_duration);
    })
    .on_permit_rejected(|timeout| {
        println!("Rate limited! Timeout: {:?}", timeout);
    })
    .build();

// Apply to a service
let service = ServiceBuilder::new()
    .layer(rate_limiter)
    .service(tower::service_fn(|req: String| async move {
        Ok::<_, std::io::Error>(format!("Response: {}", req))
    }));

§Sliding Log Rate Limiting (Precise)

Use sliding log for precise rate limiting with no burst allowance at window boundaries. This is ideal when you need to strictly enforce rate limits, such as when calling external APIs with strict quotas.

use tower_resilience_ratelimiter::{RateLimiterLayer, WindowType};
use tower::ServiceBuilder;
use std::time::Duration;

let rate_limiter = RateLimiterLayer::builder()
    .limit_for_period(100)
    .refresh_period(Duration::from_secs(1))
    .window_type(WindowType::SlidingLog)
    .timeout_duration(Duration::from_millis(500))
    .build();

let service = ServiceBuilder::new()
    .layer(rate_limiter)
    .service(tower::service_fn(|req: String| async move {
        Ok::<_, std::io::Error>(format!("Response: {}", req))
    }));
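The sliding-log strategy itself can be sketched with the standard library alone (a toy model, not the crate's internals): keep one timestamp per admitted request, prune entries older than the trailing window, and admit only while fewer than `limit` remain. This is what makes it precise, and also why it costs O(n) memory.

```rust
use std::collections::VecDeque;
use std::time::Duration;

/// Toy sliding-log limiter: one timestamp per admitted request.
struct SlidingLog {
    window: Duration,
    limit: usize,
    log: VecDeque<Duration>, // timestamps of admitted requests, oldest first
}

impl SlidingLog {
    fn new(window: Duration, limit: usize) -> Self {
        Self { window, limit, log: VecDeque::new() }
    }

    /// `now` is a monotonic timestamp; returns true if the request is admitted.
    fn try_acquire(&mut self, now: Duration) -> bool {
        // Drop timestamps that have fallen out of the trailing window.
        while let Some(&front) = self.log.front() {
            if now - front >= self.window {
                self.log.pop_front();
            } else {
                break;
            }
        }
        if self.log.len() < self.limit {
            self.log.push_back(now);
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = SlidingLog::new(Duration::from_secs(1), 2);
    assert!(rl.try_acquire(Duration::from_millis(0)));
    assert!(rl.try_acquire(Duration::from_millis(100)));
    // A third request inside the same trailing second is rejected,
    // even though it straddles a "window boundary": no burst allowance.
    assert!(!rl.try_acquire(Duration::from_millis(900)));
    // Once the first timestamp ages out, a permit frees up.
    assert!(rl.try_acquire(Duration::from_millis(1000)));
}
```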

§Sliding Counter Rate Limiting (Efficient)

Use sliding counter for high-throughput APIs where you want approximate sliding window behavior without the memory overhead of storing timestamps.

use tower_resilience_ratelimiter::{RateLimiterLayer, WindowType};
use tower::ServiceBuilder;
use std::time::Duration;

let rate_limiter = RateLimiterLayer::builder()
    .limit_for_period(10000)  // High throughput
    .refresh_period(Duration::from_secs(1))
    .window_type(WindowType::SlidingCounter)
    .timeout_duration(Duration::from_millis(100))
    .build();

let service = ServiceBuilder::new()
    .layer(rate_limiter)
    .service(tower::service_fn(|req: String| async move {
        Ok::<_, std::io::Error>(format!("Response: {}", req))
    }));
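The weighted-averaging idea behind the sliding counter can likewise be sketched with std only (an approximation sketch, not the crate's internals): keep counts for just the previous and current buckets, and weight the previous count by how much of its bucket still overlaps the trailing window. Memory stays O(1) no matter how many requests arrive.

```rust
use std::time::Duration;

/// Toy sliding-counter limiter: two bucket counts and a weighted estimate.
struct SlidingCounter {
    window: Duration,
    limit: u64,
    bucket_start: Duration, // start of the current bucket
    prev_count: u64,
    curr_count: u64,
}

impl SlidingCounter {
    fn new(window: Duration, limit: u64) -> Self {
        Self { window, limit, bucket_start: Duration::ZERO, prev_count: 0, curr_count: 0 }
    }

    /// `now` is a monotonic timestamp; returns true if the request is admitted.
    fn try_acquire(&mut self, now: Duration) -> bool {
        // Roll buckets forward until `now` falls in the current bucket.
        while now - self.bucket_start >= self.window {
            self.bucket_start += self.window;
            self.prev_count = self.curr_count;
            self.curr_count = 0;
        }
        // Weight the previous bucket by how much of it still overlaps the
        // trailing window, then add the current bucket's count.
        let elapsed = (now - self.bucket_start).as_secs_f64() / self.window.as_secs_f64();
        let estimate = self.prev_count as f64 * (1.0 - elapsed) + self.curr_count as f64;
        if estimate < self.limit as f64 {
            self.curr_count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = SlidingCounter::new(Duration::from_secs(1), 10);
    // Fill the first bucket.
    for _ in 0..10 {
        assert!(rl.try_acquire(Duration::ZERO));
    }
    // Right at the boundary the previous bucket still fully counts: rejected.
    assert!(!rl.try_acquire(Duration::from_millis(1000)));
    // Halfway through the next bucket the weighted estimate is ~5: admitted.
    assert!(rl.try_acquire(Duration::from_millis(1500)));
}
```

The estimate assumes requests were spread evenly across the previous bucket, which is why this window type is approximate rather than exact.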

§Fallback When Rate Limited

Handle rate limiting errors with appropriate fallback strategies:

§Return Informative Error

use tower_resilience_ratelimiter::{RateLimiterLayer, RateLimiterError};
use tower::{Service, ServiceBuilder, ServiceExt};
use std::time::Duration;

let rate_limiter = RateLimiterLayer::builder()
    .limit_for_period(10)
    .refresh_period(Duration::from_secs(1))
    .timeout_duration(Duration::from_millis(100))
    .build();

let mut service = ServiceBuilder::new()
    .layer(rate_limiter)
    .service(tower::service_fn(|req: String| async move {
        Ok::<String, std::io::Error>(format!("Processed: {}", req))
    }));

match service.ready().await?.call("request".to_string()).await {
    Ok(response) => println!("Success: {}", response),
    Err(_) => {
        println!("Rate limited - please try again later");
        // Could return 429 Too Many Requests in HTTP context
    }
}

§Queue for Later Processing

use tower_resilience_ratelimiter::{RateLimiterLayer, RateLimiterError};
use tower::{Service, ServiceBuilder, ServiceExt};
use std::time::Duration;
use std::sync::Arc;
use tokio::sync::Mutex;

let queue = Arc::new(Mutex::new(Vec::new()));
let rate_limiter = RateLimiterLayer::builder()
    .limit_for_period(10)
    .refresh_period(Duration::from_secs(1))
    .timeout_duration(Duration::from_millis(50))
    .build();

let mut service = ServiceBuilder::new()
    .layer(rate_limiter)
    .service(tower::service_fn(|req: String| async move {
        Ok::<String, std::io::Error>(req)
    }));

let result: Result<String, std::io::Error> = match service.ready().await?.call("request".to_string()).await {
    Ok(response) => Ok(response),
    Err(_) => {
        // Queue the request for later processing
        queue.lock().await.push("request".to_string());
        Ok("Queued for processing".to_string())
    }
};

§Shed Load Gracefully

use tower_resilience_ratelimiter::{RateLimiterLayer, RateLimiterError};
use tower::{Service, ServiceBuilder, ServiceExt};
use std::time::Duration;

let rate_limiter = RateLimiterLayer::builder()
    .limit_for_period(100)
    .refresh_period(Duration::from_secs(1))
    .timeout_duration(Duration::from_millis(10)) // Short timeout = fast rejection
    .build();

let mut service = ServiceBuilder::new()
    .layer(rate_limiter)
    .service(tower::service_fn(|req: String| async move {
        Ok::<String, std::io::Error>(req)
    }));

let result = service.ready().await?.call("request".to_string()).await
    .unwrap_or_else(|_| {
        // Shed load - return reduced functionality response
        "Service at capacity - showing cached data".to_string()
    });

§Structs

  • RateLimiter: A Tower Service that applies rate limiting.
  • RateLimiterConfig: Configuration for the rate limiter pattern.
  • RateLimiterConfigBuilder: Builder for RateLimiterConfig.
  • RateLimiterHandle: A read-only handle for observing rate limiter state.
  • RateLimiterLayer: A Tower Layer that applies rate limiting to a service.

§Enums

  • RateLimiterError: Errors that can occur when using the rate limiter.
  • RateLimiterEvent: Events emitted by the rate limiter middleware.
  • RateLimiterServiceError: Service-level error that wraps inner service errors.
  • WindowType: The type of window used for rate limiting.