# http-cache-tower-server
[crates.io](https://crates.io/crates/http-cache-tower-server) · [docs.rs](https://docs.rs/http-cache-tower-server)

Server-side HTTP response caching middleware for Tower-based frameworks (Axum, Hyper, Tonic).
## Overview
This crate provides Tower middleware for caching your server's HTTP responses to improve performance and reduce load. Unlike client-side caching, this middleware caches responses **after** your handlers execute, making it ideal for expensive operations like database queries or complex computations.
## When to Use This
Use `http-cache-tower-server` when you want to:
- Cache expensive API responses (database queries, aggregations)
- Reduce load on backend services
- Improve response times for read-heavy workloads
- Cache server-rendered content
- Speed up responses that are computed but rarely change
## Client vs Server Caching
| Crate | Caching model | Use case |
|---|---|---|
| `http-cache-tower` | **Client-side caching** | Cache responses from external APIs you call |
| `http-cache-tower-server` | **Server-side caching** | Cache your own application's responses |
**Important:** If you're experiencing issues with path parameter extraction or routing when using `http-cache-tower` in a server application, you should use this crate instead. See [Issue #121](https://github.com/06chaynes/http-cache/issues/121) for details.
## Installation
```sh
cargo add http-cache-tower-server
```
### Features
By default, `manager-cacache` is enabled.
- `manager-cacache` (default): Enable [cacache](https://github.com/zkat/cacache-rs) disk-based cache backend
- `manager-moka`: Enable [moka](https://github.com/moka-rs/moka) in-memory cache backend
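With the `manager-moka` feature enabled, an in-memory backend can be swapped in for the disk-based default. A minimal sketch, assuming `MokaManager` is exposed by the `http-cache` crate in the same way `CACacheManager` is in the Quick Start below:
```rust
use http_cache::MokaManager;
use http_cache_tower_server::ServerCacheLayer;

// In-memory cache backend; entries are lost when the process exits.
let manager = MokaManager::default();
let layer = ServerCacheLayer::new(manager);
```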
## Quick Start
### Basic Example (Axum)
```rust
use axum::{Router, routing::get, response::IntoResponse};
use http_cache_tower_server::ServerCacheLayer;
use http_cache::CACacheManager;

async fn expensive_handler() -> impl IntoResponse {
    // Simulate an expensive operation
    tokio::time::sleep(tokio::time::Duration::from_secs(2)).await;

    // Set Cache-Control so the response is cached for 60 seconds
    (
        [("cache-control", "max-age=60")],
        "This response is cached for 60 seconds",
    )
}

#[tokio::main]
async fn main() {
    // Create the cache manager (disk-based via cacache)
    let manager = CACacheManager::new("./cache", false);

    // Create the router with the cache layer applied
    let app = Router::new()
        .route("/expensive", get(expensive_handler))
        .layer(ServerCacheLayer::new(manager));

    // Run the server
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}
```
## How It Works
1. **Request arrives** → Routing layer processes it (path params extracted)
2. **Cache lookup** → Check if response is cached
3. **Cache hit** → Return cached response immediately
4. **Cache miss** → Call your handler
5. **Handler returns** → Check Cache-Control headers
6. **Should cache?** → Store response if cacheable
7. **Return response** → Send to client
### Cache Status Headers
Responses include an `x-cache` header indicating cache status:
- `x-cache: HIT` → Response served from cache
- `x-cache: MISS` → Response generated by the handler (and stored if it is cacheable)
- No header → Response not cacheable
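The flow can be observed end to end by sending the same request twice and inspecting the `x-cache` header. A minimal sketch, assuming the Quick Start `app` above and the `tower` crate's `ServiceExt::oneshot` helper, run inside an async test:
```rust
use axum::body::Body;
use http::Request;
use tower::ServiceExt; // for `oneshot`

// `app` is the Router from the Quick Start example.
let first = app
    .clone()
    .oneshot(Request::get("/expensive").body(Body::empty()).unwrap())
    .await
    .unwrap();
// First request: the handler runs and the response is stored.
assert_eq!(first.headers()["x-cache"], "MISS");

let second = app
    .oneshot(Request::get("/expensive").body(Body::empty()).unwrap())
    .await
    .unwrap();
// Second request within 60 seconds: served straight from the cache.
assert_eq!(second.headers()["x-cache"], "HIT");
```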
## Cache Key Generation
### Built-in Keyers
#### DefaultKeyer (default)
Caches based on HTTP method and path:
```rust
use http_cache_tower_server::{ServerCacheLayer, DefaultKeyer};

// `ServerCacheLayer::new` uses `DefaultKeyer`, so no keyer needs to be specified
let layer = ServerCacheLayer::new(manager);

// GET /users/123 → "GET /users/123"
// GET /users/456 → "GET /users/456"
```
#### QueryKeyer
Includes query parameters in cache key:
```rust
use http_cache_tower_server::{ServerCacheLayer, QueryKeyer};
let layer = ServerCacheLayer::with_keyer(manager, QueryKeyer);
// GET /search?q=rust → "GET /search?q=rust"
// GET /search?q=http → "GET /search?q=http"
```
### CustomKeyer
For advanced scenarios (authentication, content negotiation, etc.):
```rust
use http_cache_tower_server::{ServerCacheLayer, CustomKeyer};
use http::Request;

// Include the user ID from a request header in the cache key
let keyer = CustomKeyer::new(|req: &Request<_>| {
    let user_id = req
        .headers()
        .get("x-user-id")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("anonymous");
    format!("{} {} user:{}", req.method(), req.uri().path(), user_id)
});
let layer = ServerCacheLayer::with_keyer(manager, keyer);

// GET /dashboard with x-user-id: 123 → "GET /dashboard user:123"
// GET /dashboard with x-user-id: 456 → "GET /dashboard user:456"
```
## Configuration Options
```rust
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions};
use std::time::Duration;

let options = ServerCacheOptions {
    // Default TTL when no Cache-Control header is present
    default_ttl: Some(Duration::from_secs(60)),
    // Maximum TTL (applied even if the response specifies a longer one)
    max_ttl: Some(Duration::from_secs(3600)),
    // Minimum TTL (applied even if the response specifies a shorter one)
    min_ttl: Some(Duration::from_secs(10)),
    // Add x-cache status headers (HIT/MISS)
    cache_status_headers: true,
    // Maximum response body size to cache (128 MB)
    max_body_size: 128 * 1024 * 1024,
    // Cache responses without an explicit Cache-Control header
    cache_by_default: false,
    // Respect the Vary header (currently extracted but not enforced)
    respect_vary: true,
};

let layer = ServerCacheLayer::new(manager)
    .with_options(options);
```
## Caching Behavior (RFC 9111 Compliant)
This middleware implements a **shared cache** per RFC 9111 (HTTP Caching).
### Cached Responses
Responses are cached when they have:
- Status code: 2xx (200, 201, 204, etc.)
- Cache-Control: `max-age=X` → Cached for X seconds
- Cache-Control: `s-maxage=X` → Cached for X seconds (shared cache specific)
- Cache-Control: `public` → Cached with default TTL
### Never Cached
Responses are **never** cached if they have:
- Status code: Non-2xx (4xx, 5xx, 3xx)
- Cache-Control: `no-store` → Prevents all caching
- Cache-Control: `no-cache` → Requires revalidation, which this middleware does not support, so the response is not stored
- Cache-Control: `private` → Only for private caches
### Directive Precedence
When multiple directives are present:
1. `s-maxage` (shared cache specific) takes precedence
2. `max-age` (general directive)
3. `public` (uses default TTL)
4. Expires header (fallback, not currently parsed)
### Example Headers
```rust
// Cached for 60 seconds
("cache-control", "max-age=60")
// Cached for 120 seconds (s-maxage overrides max-age for shared caches)
("cache-control", "max-age=60, s-maxage=120")
// Cached with default TTL
("cache-control", "public")
// Never cached
("cache-control", "no-store")
("cache-control", "private")
("cache-control", "no-cache")
```
## Security Considerations
### ⚠️ This is a Shared Cache
**Critical:** Cached responses are served to **ALL users**. Never cache user-specific data without appropriate measures.
### Safe Usage Patterns
#### ✅ Public Content
```rust
async fn public_page() -> impl IntoResponse {
    (
        [("cache-control", "max-age=300")],
        "Public content safe to cache",
    )
}
```
#### ✅ User-Specific with CustomKeyer
```rust
// Include the user ID in the cache key (full example in the CustomKeyer section above)
let keyer = CustomKeyer::new(|req: &Request<_>| {
    let user_id = req.headers().get("x-user-id")
        .and_then(|v| v.to_str().ok()).unwrap_or("anonymous");
    format!("{} {} user:{}", req.method(), req.uri().path(), user_id)
});
```
#### ❌ UNSAFE: User Data Without Keyer
```rust
// ❌ DANGEROUS: Will serve user123's data to user456!
async fn user_profile() -> impl IntoResponse {
    let user_data = get_current_user_data().await;
    (
        [("cache-control", "max-age=60")], // ❌ Don't do this!
        user_data,
    )
}
```
#### ✅ User Data with Private Directive
```rust
// ✅ Safe: Won't be cached
async fn user_profile() -> impl IntoResponse {
    let user_data = get_current_user_data().await;
    (
        [("cache-control", "private")], // Won't be cached
        user_data,
    )
}
}
```
### Best Practices
1. **Never cache authenticated endpoints** unless using a CustomKeyer that includes session/user ID
2. **Use `Cache-Control: private`** for user-specific responses
3. **Validate cache keys** to prevent cache poisoning
4. **Set body size limits** to prevent DoS attacks
5. **Use TTL constraints** to prevent cache bloat
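Several of these practices map directly onto the options from the Configuration section. A conservative sketch (the numbers are illustrative; tune them for your workload):
```rust
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions};
use std::time::Duration;

let options = ServerCacheOptions {
    // Nothing is cached unless a handler opts in via Cache-Control
    cache_by_default: false,
    default_ttl: None,
    // Cap entry lifetime and body size to limit cache bloat and DoS exposure
    max_ttl: Some(Duration::from_secs(300)),
    min_ttl: None,
    max_body_size: 1024 * 1024, // 1 MB
    // Keep x-cache headers on so hits and misses are easy to audit
    cache_status_headers: true,
    respect_vary: true,
};

let layer = ServerCacheLayer::new(manager).with_options(options);
```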
## Advanced Examples
### Content Negotiation
For responses that vary by Accept-Language:
```rust
.get("accept-language")
.and_then(|v| v.to_str().ok())
.unwrap_or("en");
format!("{} {} lang:{}", req.method(), req.uri().path(), lang)
});
let layer = ServerCacheLayer::with_keyer(manager, keyer);
```
### Conditional Caching
Only cache certain routes by applying the layer to a sub-router instead of the whole application (a sketch reusing `expensive_handler` and `manager` from the Quick Start):
```rust
use axum::{Router, routing::get};
use http_cache_tower_server::ServerCacheLayer;

// Only routes nested under /api pass through the cache layer;
// everything else is served without caching.
let api = Router::new()
    .route("/expensive", get(expensive_handler))
    .layer(ServerCacheLayer::new(manager));

let app = Router::new().nest("/api", api);
```
### TTL by Route
```rust
async fn long_cache_handler() -> impl IntoResponse {
    (
        [("cache-control", "max-age=3600")], // 1 hour
        "Rarely changing content",
    )
}

async fn short_cache_handler() -> impl IntoResponse {
    (
        [("cache-control", "max-age=60")], // 1 minute
        "Frequently updated content",
    )
}
```
## Limitations
### Vary Header
The middleware extracts `Vary` headers but does not currently enforce them during cache lookup. For content negotiation:
- Use a `CustomKeyer` that includes relevant headers in the cache key, OR
- Set `Cache-Control: private` to prevent caching
### Authorization Header
The middleware does not check for `Authorization` headers in requests. Authenticated endpoints should either:
- Use `Cache-Control: private` (won't be cached), OR
- Use a `CustomKeyer` that includes user/session ID, OR
- Not be cached at all
### Expires Header
The `Expires` header is recognized but not currently parsed. Modern applications should use `Cache-Control` directives instead.
## Examples
See the [examples](examples/) directory:
- [`axum_basic.rs`](examples/axum_basic.rs) - Basic usage with Axum
Run with:
```sh
cargo run --example axum_basic --features manager-cacache
```
## Comparison with Other Crates
### vs axum-response-cache
- This crate: RFC 9111 compliant, respects Cache-Control headers
- axum-response-cache: Simpler API, with less strict RFC compliance
### vs tower-cache-control
- This crate: Full caching implementation with storage
- tower-cache-control: Only sets Cache-Control headers
## Minimum Supported Rust Version (MSRV)
1.82.0
## Contributing
Contributions are welcome! Please see the [main repository](https://github.com/06chaynes/http-cache) for contribution guidelines.
## License
Licensed under either of
- Apache License, Version 2.0 ([LICENSE-APACHE](../LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license ([LICENSE-MIT](../LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.