Crate axum_response_cache

This library provides Axum middleware that caches HTTP responses to incoming requests, keyed by their HTTP method and path.

The main struct is CacheLayer. It can be created with any cache that implements two traits from the cached crate: cached::Cached and cached::CloneCached.

The current version of CacheLayer is compatible only with services accepting Axum’s Request<Body> and returning axum::response::Response, thus it is not compatible with non-Axum tower services.

The layer can be configured to reuse the last expired response when the wrapped service fails to produce a new successful one.

Only successful responses are cached; responses with status codes outside the 200–299 range are passed through to the client without being cached.

The cache limits the maximum size of the response body (128 MB by default).

§Examples

To cache a response over a specific route, just wrap it in a CacheLayer:

use axum::{Router, extract::Path, routing::get};
use axum_response_cache::CacheLayer;

#[tokio::main]
async fn main() {
    let router = Router::new()
        .route(
            "/hello/:name",
            get(|Path(name): Path<String>| async move { format!("Hello, {name}!") })
                // this will cache responses with each `:name` for 60 seconds.
                .layer(CacheLayer::with_lifespan(60)),
        );

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, router).await.unwrap();
}

§Reusing last successful response

use axum::{
    body::Body,
    extract::Path,
    http::status::StatusCode,
    http::Request,
    Router,
    routing::get,
};
use axum_response_cache::CacheLayer;
use std::sync::atomic::{AtomicBool, Ordering};
use tower::Service as _;

// a handler that returns 200 OK only the first time it’s called
async fn handler(Path(name): Path<String>) -> (StatusCode, String) {
    static FIRST_RUN: AtomicBool = AtomicBool::new(true);
    let first_run = FIRST_RUN.swap(false, Ordering::AcqRel);

    if first_run {
        (StatusCode::OK, format!("Hello, {name}"))
    } else {
        (StatusCode::INTERNAL_SERVER_ERROR, String::from("Error!"))
    }
}

// the calls below must run inside an async context (e.g. under `#[tokio::main]`)
let mut router = Router::new()
    .route("/hello/:name", get(handler))
    .layer(CacheLayer::with_lifespan(60).use_stale_on_failure());

// first request will fire handler and get the response
let status1 = router.call(Request::get("/hello/foo").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status1);

// the second request reuses the cached response, since the handler now fails
// with 500 Internal Server Error
let status2 = router.call(Request::get("/hello/foo").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status2);

§Serving static files

This middleware can be used to cache served files in memory, limiting disk reads on the server. To serve files you can use the ServeDir service from tower-http.

use axum::Router;
use axum_response_cache::CacheLayer;
use tower_http::services::ServeDir;

let router: Router = Router::new()
    .nest_service("/", ServeDir::new("static/"))
    // keep the served files in memory for one minute
    .layer(CacheLayer::with_lifespan(60));

§Limiting the body size

use axum::{
    body::Body,
    http::status::StatusCode,
    http::Request,
    Router,
    routing::get,
};
use axum_response_cache::CacheLayer;
use tower::Service as _;

// returns a short string, well below the limit
async fn ok_handler() -> &'static str {
    "ok"
}

// returns a string longer than the configured cache limit
async fn too_long_handler() -> &'static str {
    "a response that is well beyond the limit of the cache!"
}

let mut router = Router::new()
    .route("/ok", get(ok_handler))
    .route("/too_long", get(too_long_handler))
    // limit max cached body to only 16 bytes
    .layer(CacheLayer::with_lifespan(60).body_limit(16));

let status_ok = router.call(Request::get("/ok").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::OK, status_ok);

let status_too_long = router.call(Request::get("/too_long").body(Body::empty()).unwrap())
    .await
    .unwrap()
    .status();
assert_eq!(StatusCode::INTERNAL_SERVER_ERROR, status_too_long);

§Using custom cache

use axum::{Router, routing::get};
use axum_response_cache::CacheLayer;
// let’s use TimedSizedCache here
use cached::stores::TimedSizedCache;

let router: Router = Router::new()
    .route("/hello", get(|| async { "Hello, world!" }))
    // cache at most 50 responses, each for one minute
    .layer(CacheLayer::with(TimedSizedCache::with_size_and_lifespan(50, 60)));

§Use cases

Caching responses in memory (e.g. using cached::TimedCache) might be useful when the underlying service produces its responses by:

  1. doing heavy computation,
  2. requesting external service(s) that might not be fully reliable or performant,
  3. serving static files from disk.

In those cases, if the response to an identical request does not change often over time, it might be desirable to reuse the same responses from memory without recalculating them, skipping database queries, external service calls, or disk reads.
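As a sketch of the first two use cases, an in-memory cached::TimedCache (which evicts entries by age only, with no size bound) can back the layer via CacheLayer::with, just like the TimedSizedCache shown earlier. The /report route and its handler are hypothetical placeholders for an expensive computation or a call to an unreliable upstream:

```rust
use axum::{Router, routing::get};
use axum_response_cache::CacheLayer;
use cached::stores::TimedCache;

#[tokio::main]
async fn main() {
    let router: Router = Router::new()
        // imagine this handler does heavy computation or queries a flaky upstream
        .route("/report", get(|| async { "expensive result" }))
        // keep each cached response for 60 seconds; no limit on entry count
        .layer(CacheLayer::with(TimedCache::with_lifespan(60)));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, router).await.unwrap();
}
```

Compared to TimedSizedCache, this trades bounded memory use for never evicting a still-fresh entry under load.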

Structs§

  • CacheLayer — the main struct of the library: the layer providing caching to the wrapped service.
  • The struct preserving all the headers and the body of a cached response.