# http-cache-tower

An HTTP caching middleware for Tower and Hyper.

This crate provides Tower `Layer` and `Service` implementations that add HTTP caching capabilities to your HTTP clients and services.

## Minimum Supported Rust Version (MSRV)

1.82.0

## Install

With `cargo add` installed:

```sh
cargo add http-cache-tower
```
## Features

The following features are available. By default, `manager-cacache` is enabled.

- `manager-cacache` (default): enables cacache, a high-performance disk cache, as a backend manager.
- `manager-moka` (disabled): enables moka, a high-performance in-memory cache, as a backend manager.
- `streaming` (disabled): enables streaming cache support for memory-efficient handling of large responses using `StreamingManager`.
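Non-default backends are opted into from `Cargo.toml`. A sketch (the version number is a placeholder; use the release you depend on):

```toml
[dependencies]
# Disable the default cacache backend and enable the in-memory
# moka backend plus streaming support instead.
http-cache-tower = { version = "0.1", default-features = false, features = [
    "manager-moka",
    "streaming",
] }
```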
## Example

### Basic HTTP Cache

```rust
use http_cache_tower::HttpCacheLayer;
use http_cache::CACacheManager;
use tower::{ServiceBuilder, ServiceExt};
use http::{Request, Response};
use http_body_util::Full;
use bytes::Bytes;
use std::path::PathBuf;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cache_manager = CACacheManager::new(PathBuf::from("./cache"), false);
    let cache_layer = HttpCacheLayer::new(cache_manager);

    let service = ServiceBuilder::new()
        .layer(cache_layer)
        .service_fn(|_req: Request<Full<Bytes>>| async {
            Ok::<_, std::convert::Infallible>(
                Response::new(Full::new(Bytes::from("Hello, world!")))
            )
        });

    let request = Request::builder()
        .uri("https://httpbin.org/cache/300")
        .body(Full::new(Bytes::new()))?;

    let response = service.oneshot(request).await?;
    println!("Status: {}", response.status());

    Ok(())
}
```
### Streaming HTTP Cache

For large responses or when memory efficiency is important, use the streaming cache layer:

```rust
# #[cfg(feature = "streaming")]
use http_cache_tower::HttpCacheStreamingLayer;
# #[cfg(feature = "streaming")]
use http_cache::StreamingManager;
# #[cfg(feature = "streaming")]
use tower::{ServiceBuilder, ServiceExt};
# #[cfg(feature = "streaming")]
use http::{Request, Response};
# #[cfg(feature = "streaming")]
use http_body_util::Full;
# #[cfg(feature = "streaming")]
use bytes::Bytes;
# #[cfg(feature = "streaming")]
use std::path::PathBuf;

# #[cfg(feature = "streaming")]
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let streaming_manager = StreamingManager::new(PathBuf::from("./cache"));
    let cache_layer = HttpCacheStreamingLayer::new(streaming_manager);

    let service = ServiceBuilder::new()
        .layer(cache_layer)
        .service_fn(|_req: Request<Full<Bytes>>| async {
            Ok::<_, std::convert::Infallible>(
                Response::new(Full::new(Bytes::from("Large response data...")))
            )
        });

    let request = Request::builder()
        .uri("https://example.com/large-file")
        .body(Full::new(Bytes::new()))?;

    let response = service.oneshot(request).await?;
    println!("Status: {}", response.status());
    Ok(())
}
# #[cfg(not(feature = "streaming"))]
# fn main() {}
```
**Note**: For memory-efficient streaming of large responses, use `StreamingManager` with `HttpCacheStreamingLayer`. For traditional caching with smaller responses, use `CACacheManager` or `MokaManager` with `HttpCacheLayer`.
## Cache Backends

This crate supports multiple cache backends through feature flags:

- `manager-cacache` (default): Disk-based caching using cacache
- `manager-moka`: In-memory caching using moka
## Integration with Hyper Client

```rust
use http_cache_tower::HttpCacheLayer;
use http_cache::CACacheManager;
use hyper_util::client::legacy::Client;
use hyper_util::rt::TokioExecutor;
use tower::{ServiceBuilder, ServiceExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cache_manager = CACacheManager::default();
    let cache_layer = HttpCacheLayer::new(cache_manager);

    let client = Client::builder(TokioExecutor::new()).build_http();

    let cached_client = ServiceBuilder::new()
        .layer(cache_layer)
        .service(client);

    Ok(())
}
```
## Documentation

- [API docs on docs.rs](https://docs.rs/http-cache-tower)
## License

Licensed under either of

- Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (http://opensource.org/licenses/MIT)

at your option.