Crate axum_response_cache

This library provides Axum middleware that caches HTTP responses to incoming requests, keyed by their HTTP method and path.

The main struct is CacheLayer. It can be created with any cache that implements two traits from the cached crate: cached::Cached and cached::CloneCached.

The current version of CacheLayer is compatible only with services accepting Axum’s Request<Body> and returning axum::response::Response, thus it is not compatible with non-Axum tower services.

It’s possible to configure the layer to re-use an old expired response in case the wrapped service fails to produce a new successful response.

Only successful responses are cached (responses with status codes outside the [200-299] range are passed through to the client but not stored in the cache).
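
For illustration, here is a minimal sketch of that behaviour. The failing_handler, the /missing route, and the call counter are made up for this example; only the pass-through behaviour itself comes from the description above.

use std::sync::atomic::{AtomicUsize, Ordering};

use axum::{body::Body, http::{Request, StatusCode}, routing::get, Router};
use axum_response_cache::CacheLayer;
use tower::Service as _;

// counts how many times the wrapped handler actually runs
static CALLS: AtomicUsize = AtomicUsize::new(0);

// always fails, so its responses should never end up in the cache
async fn failing_handler() -> (StatusCode, &'static str) {
    CALLS.fetch_add(1, Ordering::AcqRel);
    (StatusCode::NOT_FOUND, "nothing here")
}

let mut router = Router::new()
    .route("/missing", get(failing_handler))
    .layer(CacheLayer::with_lifespan(60));

// both requests reach the handler: the 404 response is passed through, not cached
for _ in 0..2 {
    let status = router
        .call(Request::get("/missing").body(Body::empty()).unwrap())
        .await
        .unwrap()
        .status();
    assert_eq!(StatusCode::NOT_FOUND, status);
}
assert_eq!(2, CALLS.load(Ordering::Acquire));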

The cache limits the maximum size of the response body (128 MB by default).

§Examples

To cache responses for a specific route, wrap it in a CacheLayer:

use axum::{Router, extract::Path, routing::get};
use axum_response_cache::CacheLayer;

#[tokio::main]
async fn main() {
    let router = Router::new()
        .route(
            "/hello/{name}",
            get(|Path(name): Path<String>| async move { format!("Hello, {name}!") })
                // this will cache responses for each `{name}` for 60 seconds.
                .layer(CacheLayer::with_lifespan(60)),
        );

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, router).await.unwrap();
}

§Reusing last successful response

use axum::{
    body::Body,
    extract::Path,
    http::status::StatusCode,
    http::Request,
    Router,
    routing::get,
};
use axum_response_cache::CacheLayer;
use std::sync::atomic::{AtomicBool, Ordering};
use tower::Service as _;

// a handler that returns 200 OK only the first time it’s called
async fn handler(Path(name): Path<String>) -> (StatusCode, String) {
    static FIRST_RUN: AtomicBool = AtomicBool::new(true);
    let first_run = FIRST_RUN.swap(false, Ordering::AcqRel);

    if first_run {
        (StatusCode::OK, format!("Hello, {name}"))
    } else {
        (StatusCode::INTERNAL_SERVER_ERROR, String::from("Error!"))
    }
}

let mut router = Router::new()
    .route("/hello/{name}", get(handler))
    .layer(CacheLayer::with_lifespan(60).use_stale_on_failure());

// first request will fire handler and get the response
let status1 = router.call(Request::get("/hello/foo").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status1);

// second request will reuse the last response since the handler now returns ISE
let status2 = router.call(Request::get("/hello/foo").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status2);

§Serving static files

This middleware can be used to keep served files cached in memory, limiting hard drive load on the server. To serve the files you can use the tower_http::services::ServeDir service from the tower-http crate:

let router = Router::new()
    .nest_service("/", ServeDir::new("static/"))
    .layer(CacheLayer::with_lifespan(60));

§Limiting the body size

use axum::{
    body::Body,
    http::status::StatusCode,
    http::Request,
    Router,
    routing::get,
};
use axum_response_cache::CacheLayer;
use tower::Service as _;

// returns a short string, well below the limit
async fn ok_handler() -> &'static str {
    "ok"
}

// returns a string longer than the limit configured below
async fn too_long_handler() -> &'static str {
    "a response that is well beyond the limit of the cache!"
}

let mut router = Router::new()
    .route("/ok", get(ok_handler))
    .route("/too_long", get(too_long_handler))
    // limit max cached body to only 16 bytes
    .layer(CacheLayer::with_lifespan(60).body_limit(16));

// the short body fits within the limit, so the response is cached and served normally
let status_ok = router.call(Request::get("/ok").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status_ok);

// the body exceeds the 16-byte limit, so the request fails with 500 instead of being cached
let status_too_long = router.call(Request::get("/too_long").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::INTERNAL_SERVER_ERROR, status_too_long);

§Manual Cache Invalidation

This middleware allows manual cache invalidation by setting the X-Invalidate-Cache header in the request. This can be useful when you know the underlying data has changed and you want to force a fresh pull of data.

use axum::{
    body::Body,
    extract::Path,
    http::status::StatusCode,
    http::Request,
    Router,
    routing::get,
};
use axum_response_cache::CacheLayer;
use tower::Service as _;

async fn handler(Path(name): Path<String>) -> (StatusCode, String) {
    (StatusCode::OK, format!("Hello, {name}"))
}

let mut router = Router::new()
    .route("/hello/{name}", get(handler))
    .layer(CacheLayer::with_lifespan(60).allow_invalidation());

// first request will fire handler and get the response
let status1 = router.call(Request::get("/hello/foo").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status1);

// second request should return the cached response
let status2 = router.call(Request::get("/hello/foo").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status2);

// third request with X-Invalidate-Cache header to invalidate the cache
let status3 = router.call(
    Request::get("/hello/foo")
        .header("X-Invalidate-Cache", "true")
        .body(Body::empty())
        .unwrap(),
    )
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status3);

// fourth request to verify that the handler is called again
let status4 = router.call(Request::get("/hello/foo").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status4);

Cache invalidation can be dangerous because it allows a client to force the server to make a request to an external service or database. It is disabled by default, but can be enabled by calling the CacheLayer::allow_invalidation method.

§Using custom cache

use axum::{Router, routing::get};
use axum_response_cache::CacheLayer;
// let’s use TimedSizedCache here
use cached::stores::TimedSizedCache;

let router: Router = Router::new()
    .route("/hello", get(|| async { "Hello, world!" }))
    // cache at most 50 responses, each for one minute
    .layer(CacheLayer::with(TimedSizedCache::with_size_and_lifespan(50, 60)));

§Using custom keyer

It’s possible to customize the cache key to include e.g. the Accept header (so that different types of responses are cached separately based on that header).

use axum::{body::Body, http::Request, routing::get, Router};
use axum_response_cache::CacheLayer;

// cache responses based on method, Accept header, and uri
let keyer = |request: &Request<Body>| {
    (
        request.method().clone(),
        request
            .headers()
            .get(axum::http::header::ACCEPT)
            .and_then(|c| c.to_str().ok())
            .unwrap_or("")
            .to_string(),
        request.uri().clone(),
    )
};
let router: Router = Router::new()
    .route("/hello", get(|| async { "Hello, world!" }))
    .layer(CacheLayer::with_lifespan_and_keyer(60, keyer));

§Use cases

Caching responses in memory (e.g. using cached::TimedCache) might be useful when the underlying service produces responses by:

  1. doing heavy computation,
  2. requesting external service(s) that might not be fully reliable or performant,
  3. serving static files from disk.

In those cases, if the response to identical requests does not change often over time, it may be desirable to reuse the same responses from memory without recomputing them, skipping database queries, calls to external services, and disk reads.
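
A minimal sketch of the in-memory case follows; the route and handler body are illustrative, cached::TimedCache is the store mentioned above, and CacheLayer::with is the constructor shown in the custom-cache example.

use axum::{routing::get, Router};
use axum_response_cache::CacheLayer;
use cached::TimedCache;

// identical GET requests within 60 seconds are answered from memory,
// so the (potentially expensive) handler body runs at most once per minute per path
let router: Router = Router::new()
    .route("/report", get(|| async {
        // imagine heavy computation or a slow upstream call here
        "report contents"
    }))
    .layer(CacheLayer::with(TimedCache::with_lifespan(60)));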

§Using Axum 0.7

By default, this library uses Axum 0.8. However, you can configure it to use Axum 0.7 by enabling the appropriate feature flag in your Cargo.toml.

To use Axum 0.7, add the following to your Cargo.toml:

[dependencies]
axum-response-cache = { version = "0.3", features = ["axum07"], default-features = false }

This will disable the default Axum 0.8 feature and enable the Axum 0.7 feature instead.

Structs§

BasicKeyer
The basic caching strategy for the responses.
CacheLayer
The main struct of the library. The layer providing caching to the wrapped service. It is generic over the cache used (C) and a Keyer (K) used to obtain the key for cached responses.
CacheService
CachedResponse
The struct preserving all the headers and body of the cached response.

Traits§

Keyer
The trait for objects used to obtain cache keys. See BasicKeyer for the default implementation, which returns (http::Method, Uri).

Type Aliases§

BasicKey