# touch-ratelimit

A composable, extensible rate limiting crate for Rust.

`touch-ratelimit` provides composable building blocks for implementing rate
limiting in Rust applications, with a clear separation of concerns:

- **Algorithms** – how rate limiting works (e.g. token bucket)
- **Stores** – where rate limiting state lives
- **Middleware** – how rate limiting is applied to requests
- **Adapters** – framework integrations (e.g. Axum)

The crate is designed to be **framework-agnostic**, **storage-agnostic**, and
**algorithm-agnostic**, making it easy to extend without rewriting core logic.

---

## Features

- Token bucket rate limiting algorithm
- In-memory storage backend
- Tower-based middleware
- Axum integration
- Designed for extension (Redis, additional algorithms planned)

---

## Installation

### Core crate

```toml
[dependencies]
touch-ratelimit = "0.1"
```

### With Axum integration

```toml
[dependencies]
touch-ratelimit = { version = "0.1", features = ["axum"] }
axum = "0.8"
tokio = { version = "1", features = ["full"] }
```

---

## Example: Axum

```rust
use axum::{routing::get, Router};
use tokio::net::TcpListener;
use touch_ratelimit::{
    adapters::axum::axum_rate_limit_layer,
    storage::InMemoryStore,
    bucket::token_bucket::TokenBucket,
};

#[tokio::main]
async fn main() {
    // Per-identity token bucket parameters (capacity, refill rate)
    let store = InMemoryStore::token_bucket(10.0, 1.0);

    let app = Router::new()
        .route("/", get(|| async { "hello" }))
        .layer(axum_rate_limit_layer(store));

    // axum 0.8 serves via `axum::serve` with a Tokio listener
    let listener = TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

Requests exceeding the configured rate limit receive:

```
HTTP 429 Too Many Requests
```

---

## Core Concepts

### RateLimiter

A `RateLimiter` represents the rate-limiting logic for **a single identity**
(e.g. one user, one IP address, one API key).

Examples include:

- Token bucket
- Sliding window (planned)
- Leaky bucket (planned)

Each `RateLimiter` instance is **stateful** and is **not shared** across identities.
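To make the single-identity idea concrete, here is a minimal, self-contained
sketch of a token-bucket limiter. The struct and method names
(`TokenBucket`, `try_acquire`) are illustrative assumptions, not the crate's
actual API:

```rust
use std::time::Instant;

/// Hypothetical single-identity token bucket: a capacity of tokens that
/// refills continuously over time. Each request consumes one token.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last_refill: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self {
            capacity,
            tokens: capacity, // start full
            refill_per_sec,
            last_refill: Instant::now(),
        }
    }

    /// Refill based on elapsed time, then try to consume one token.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last_refill = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Capacity of 2 with a very slow refill: the third immediate call is denied.
    let mut bucket = TokenBucket::new(2.0, 0.001);
    assert!(bucket.try_acquire());
    assert!(bucket.try_acquire());
    assert!(!bucket.try_acquire());
}
```

Because the limiter is stateful per identity, the store (below) holds one such
instance per key.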

---

### RateLimitStore

A `RateLimitStore` manages **many** `RateLimiter` instances and maps them to
request keys.

Responsibilities include:

- Creating rate limiters for new keys
- Handling concurrency
- Owning rate-limiting state

The store is responsible for **where** and **how** state is stored.
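The in-memory case reduces to a concurrent map from keys to limiters, created
on demand. The names below (`InMemoryStore`, `check`, the simple
`CountLimiter` stand-in) are illustrative, not the crate's real types:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Stand-in per-identity limiter so the store sketch is self-contained:
/// a fixed budget of requests, no refill.
struct CountLimiter {
    remaining: u32,
}

impl CountLimiter {
    fn try_acquire(&mut self) -> bool {
        if self.remaining > 0 {
            self.remaining -= 1;
            true
        } else {
            false
        }
    }
}

/// Hypothetical in-memory store: owns one limiter per key and
/// serializes access behind a mutex.
struct InMemoryStore {
    limit: u32,
    limiters: Mutex<HashMap<String, CountLimiter>>,
}

impl InMemoryStore {
    fn new(limit: u32) -> Self {
        Self {
            limit,
            limiters: Mutex::new(HashMap::new()),
        }
    }

    /// Look up (or lazily create) the limiter for `key`, then ask it for a token.
    fn check(&self, key: &str) -> bool {
        let mut map = self.limiters.lock().unwrap();
        let limiter = map
            .entry(key.to_string())
            .or_insert(CountLimiter { remaining: self.limit });
        limiter.try_acquire()
    }
}

fn main() {
    let store = InMemoryStore::new(1);
    assert!(store.check("1.2.3.4"));  // first request for this key: allowed
    assert!(!store.check("1.2.3.4")); // same key, budget exhausted: denied
    assert!(store.check("5.6.7.8"));  // other keys are tracked independently
}
```

Swapping the `HashMap` for a networked backend such as Redis changes only the
store, which is exactly the separation the crate's design aims for.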

---

### Middleware

Middleware enforces rate limits before forwarding requests to the inner service.

The core middleware is built on **Tower** and is framework-agnostic.
Framework adapters (e.g. Axum) are thin wrappers around this middleware.

---

## Storage Behavior (Important)

The default `InMemoryStore` keeps all rate-limiting state inside the
application’s memory.

This means:

- State is **lost on restart**
- State is **not shared across processes**
- Each server instance enforces limits independently

This is suitable for:

- Development
- Single-instance deployments
- Edge or sidecar setups

Distributed stores (e.g. Redis) can be added without changing middleware or
algorithms.

---

## Key Extraction

The Axum adapter identifies requests using the `x-forwarded-for` header by
default.

If the header is missing or invalid, rate limiting is **skipped** for that
request.

This behavior is useful when running behind a reverse proxy or load balancer.
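The extraction rule can be sketched as a small pure function. The function
name `extract_key` is hypothetical; the parsing shown (take the left-most,
client-most entry of a comma-separated `x-forwarded-for` value, skip limiting
when nothing usable is present) is an assumption about the adapter's behavior:

```rust
/// Derive a rate-limiting key from an `x-forwarded-for` header value.
/// Returns `None` when the header is missing or empty, in which case
/// rate limiting is skipped for the request.
fn extract_key(forwarded_for: Option<&str>) -> Option<String> {
    let raw = forwarded_for?;
    // "client, proxy1, proxy2" → keep the left-most (client) address
    let first = raw.split(',').next()?.trim();
    if first.is_empty() {
        None
    } else {
        Some(first.to_string())
    }
}

fn main() {
    assert_eq!(
        extract_key(Some("203.0.113.7, 10.0.0.1")).as_deref(),
        Some("203.0.113.7")
    );
    assert_eq!(extract_key(None), None);       // header missing → skip limiting
    assert_eq!(extract_key(Some("  ")), None); // empty value → skip limiting
}
```

Note that because `x-forwarded-for` is client-controlled, this default only
yields trustworthy keys when a proxy or load balancer in front of the app sets
the header.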

---

## Extensibility

The crate is designed so new components can be added independently:

- New algorithms can implement `RateLimiter`
- New storage backends can implement `RateLimitStore`
- New framework adapters can be built on top of the core middleware

No changes to existing middleware are required.

---