# reqwest-drive

High-performance caching, throttling, and backoff middleware for `reqwest`, powered by SIMD-accelerated single-file storage.
## Overview

`reqwest-drive` is a middleware stack built on `reqwest-middleware` that provides:

- High-speed request caching using SIMD R Drive, a SIMD-optimized, single-file-container data store.
- Automatic process-scoped cache storage via `cache-manager`, with no manual cache path required.
- Adaptive request throttling with support for dynamic concurrency limits.
- Configurable backoff strategies for handling rate limiting and transient failures.
- A throttle-only mode that requires no persistent store.

Note: This crate is not WASM compatible.
## Cache safety note

- The cache layer is thread-safe within a single process.
- The cache layer is **not** multi-process safe when multiple processes target the same cache file concurrently.
- If your deployment runs multiple processes or workers, use process-level coordination or give each process its own cache file.
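One straightforward way to give each worker its own cache file is to derive the file name from the process ID. The sketch below uses only the standard library; the `.cache/reqwest-drive` directory layout is an illustrative assumption, not an API of this crate:

```rust
use std::path::{Path, PathBuf};
use std::process;

/// Build a cache file path that is unique to the current process,
/// so concurrent workers never contend on the same single-file store.
/// The ".cache/reqwest-drive" layout here is illustrative only.
fn per_process_cache_path(root: &Path) -> PathBuf {
    root.join(".cache")
        .join("reqwest-drive")
        .join(format!("cache_storage_{}.bin", process::id()))
}

fn main() {
    let path = per_process_cache_path(Path::new("."));
    // Each process gets a distinct file, e.g. ./.cache/reqwest-drive/cache_storage_12345.bin
    println!("{}", path.display());
}
```

The resulting path can then be handed to `init_cache` in place of a shared location.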
## Features

- **Efficient single-file caching**
  - Uses SIMD acceleration for fast reads/writes.
  - Supports header-based TTLs or custom expiration policies.
  - Normalizes query parameter order for stable cache identity.
  - Varies cache entries by key request headers (e.g. `accept`, `accept-language`, `content-type`).
  - Hashes sensitive header values (such as `authorization`/`x-api-key`) before key material is constructed.
  - Supports per-request cache controls:
    - Bypass the cache (`CacheBypass`) for one-off uncached reads.
    - Bust & refresh the cache (`CacheBust`) to force a fresh value and update the stored entry.
- **Customizable throttling & backoff**
  - Control request concurrency.
  - Define exponential backoff & jitter for retries.
  - Run in throttle-only mode without cache persistence.
  - Supports per-request throttle policy overrides via request extensions.
- **Seamless integration with `reqwest`**
  - Works as a `reqwest-middleware` layer.
  - Easy to configure and extend.
## Install

```bash
cargo add reqwest-drive
```
## Usage

Basic example with caching:

```rust
use reqwest_drive::{init_cache, CachePolicy};
use reqwest_middleware::ClientBuilder;
use std::time::Duration;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let temp_dir = tempdir().unwrap();
    let cache_path = temp_dir.path().join("cache_storage.bin");

    let cache = init_cache(&cache_path, CachePolicy {
        default_ttl: Duration::from_secs(3600),
        respect_headers: true,
        cache_status_override: None,
    });

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(cache)
        .build();

    let response = client.get("https://httpbin.org/get").send().await.unwrap();
    println!("Response: {:?}", response.text().await.unwrap());
}
```
## Process-scoped cache (no manual cache path)

Use this mode to automatically place cache storage under a discovered `<crate-root>/.cache`, using the cache group `reqwest-drive` and a process/thread-scoped `cache_storage.bin` location:

```rust
use reqwest_drive::{init_cache_process_scoped, CachePolicy};
use reqwest_middleware::ClientBuilder;
use std::time::Duration;

#[tokio::main]
async fn main() {
    let temp_root = tempfile::tempdir().unwrap();
    let previous_cwd = std::env::current_dir().unwrap();
    std::env::set_current_dir(temp_root.path()).unwrap();

    let cache = init_cache_process_scoped(CachePolicy {
        default_ttl: Duration::from_secs(3600),
        respect_headers: true,
        cache_status_override: None,
    })
    .expect("init process-scoped cache");

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(cache)
        .build();

    let response = client.get("https://httpbin.org/get").send().await.unwrap();
    println!("Response status: {}", response.status());

    std::env::set_current_dir(previous_cwd).unwrap();
}
```

Notes:

- The cache group name is this crate's package name: `reqwest-drive`.
- Process directories are PID-scoped and cleaned up on normal process shutdown.
- Cleanup is best-effort; crashes and forced exits can leave stale directories.
## Throttling & Backoff

To enable request throttling and exponential backoff:

```rust
use reqwest_drive::{init_cache_with_throttle, CachePolicy, ThrottlePolicy};
use reqwest_middleware::ClientBuilder;
use std::time::Duration;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let temp_dir = tempdir().unwrap();
    let cache_path = temp_dir.path().join("cache_storage.bin");

    let (cache, throttle) = init_cache_with_throttle(
        &cache_path,
        CachePolicy::default(),
        ThrottlePolicy {
            base_delay_ms: 200,
            adaptive_jitter_ms: 100,
            max_concurrent: 2,
            max_retries: 3,
        },
    );

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(cache)
        .with_arc(throttle)
        .build();

    let response = client.get("https://httpbin.org/status/429").send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
## Process-scoped cache + throttling

```rust
use reqwest_drive::{
    CachePolicy, ThrottlePolicy, init_cache_process_scoped_with_throttle,
    init_client_with_cache_and_throttle,
};

#[tokio::main]
async fn main() {
    let temp_root = tempfile::tempdir().unwrap();
    let previous_cwd = std::env::current_dir().unwrap();
    std::env::set_current_dir(temp_root.path()).unwrap();

    let (cache, throttle) = init_cache_process_scoped_with_throttle(
        CachePolicy::default(),
        ThrottlePolicy {
            base_delay_ms: 200,
            adaptive_jitter_ms: 100,
            max_concurrent: 2,
            max_retries: 3,
        },
    )
    .expect("init process-scoped cache + throttle");

    let client = init_client_with_cache_and_throttle(cache, throttle);

    let response = client.get("https://httpbin.org/status/429").send().await.unwrap();
    println!("Response status: {}", response.status());

    std::env::set_current_dir(previous_cwd).unwrap();
}
```
## Throttle-only (No Data Store)

Use this mode when you want rate limiting and retry/backoff behavior, but no cache layer at all:

```rust
use reqwest_drive::{init_throttle, ThrottlePolicy};
use reqwest_middleware::ClientBuilder;

#[tokio::main]
async fn main() {
    let throttle = init_throttle(ThrottlePolicy {
        base_delay_ms: 200,
        adaptive_jitter_ms: 100,
        max_concurrent: 2,
        max_retries: 3,
    });

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(throttle)
        .build();

    let response = client.get("https://httpbin.org/status/429").send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
## Initializing a client without `with_arc`

Initialize a client with both caching and throttling, without manually attaching middleware via `.with_arc()`:

```rust
use reqwest_drive::{
    init_cache_with_throttle, init_client_with_cache_and_throttle, CachePolicy, ThrottlePolicy,
};
use reqwest_middleware::ClientWithMiddleware;
use std::time::Duration;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let cache_policy = CachePolicy {
        default_ttl: Duration::from_secs(300),
        respect_headers: true,
        cache_status_override: None,
    };

    let throttle_policy = ThrottlePolicy {
        base_delay_ms: 100,
        adaptive_jitter_ms: 50,
        max_concurrent: 2,
        max_retries: 2,
    };

    let temp_dir = tempdir().unwrap();
    let cache_path = temp_dir.path().join("cache_storage.bin");

    let (cache, throttle) = init_cache_with_throttle(&cache_path, cache_policy, throttle_policy);
    let client: ClientWithMiddleware = init_client_with_cache_and_throttle(cache, throttle);

    let response = client.get("https://httpbin.org/get").send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
## Overriding Throttle Policy (Per Request)

To override the throttle policy for a single request:

```rust
use reqwest_drive::{init_cache_with_throttle, CachePolicy, ThrottlePolicy};
use reqwest_middleware::ClientBuilder;
use std::time::Duration;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let temp_dir = tempdir().unwrap();
    let cache_path = temp_dir.path().join("cache_storage.bin");

    let (cache, throttle) = init_cache_with_throttle(
        &cache_path,
        CachePolicy::default(),
        ThrottlePolicy {
            base_delay_ms: 200,
            adaptive_jitter_ms: 100,
            max_concurrent: 2,
            max_retries: 3,
        },
    );

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(cache)
        .with_arc(throttle)
        .build();

    let custom_throttle_policy = ThrottlePolicy {
        base_delay_ms: 50,
        adaptive_jitter_ms: 25,
        max_concurrent: 1,
        max_retries: 1,
    };

    let mut request = client.get("https://httpbin.org/status/429");
    request.extensions().insert(custom_throttle_policy);

    let response = request.send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
## Bypassing Cache for a Single Request

When using cache + throttle together, you can bypass the cache for an individual request while keeping the same middleware stack:

```rust
use reqwest_drive::{CacheBypass, CachePolicy, ThrottlePolicy, init_cache_with_throttle};
use reqwest_middleware::ClientBuilder;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let temp_dir = tempdir().unwrap();
    let cache_path = temp_dir.path().join("cache_storage.bin");

    let (cache, throttle) = init_cache_with_throttle(
        &cache_path,
        CachePolicy::default(),
        ThrottlePolicy::default(),
    );

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(cache)
        .with_arc(throttle)
        .build();

    let mut request = client.get("https://httpbin.org/get");
    request.extensions().insert(CacheBypass(true));

    let response = request.send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
## Busting Cache for a Single Request (Refresh)

Use this when you want to force a fresh fetch now and update the cached entry for later requests:

```rust
use reqwest_drive::{CacheBust, CachePolicy, ThrottlePolicy, init_cache_with_throttle};
use reqwest_middleware::ClientBuilder;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let temp_dir = tempdir().unwrap();
    let cache_path = temp_dir.path().join("cache_storage.bin");

    let (cache, throttle) = init_cache_with_throttle(
        &cache_path,
        CachePolicy::default(),
        ThrottlePolicy::default(),
    );

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(cache)
        .with_arc(throttle)
        .build();

    let mut request = client.get("https://httpbin.org/get");
    request.extensions().insert(CacheBust(true));

    let response = request.send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
## Configuration

The middleware can be fine-tuned using the following options:

### Cache Policy

```rust
use reqwest_middleware::ClientBuilder;
use reqwest_drive::{init_cache, CachePolicy};
use std::time::Duration;
use tempfile::tempdir;

#[tokio::main]
async fn main() {
    let temp_dir = tempdir().unwrap();
    let cache_path = temp_dir.path().join("cache_storage.bin");

    let cache_policy = CachePolicy {
        default_ttl: Duration::from_secs(60 * 60),
        respect_headers: true,
        cache_status_override: Some(vec![200, 404]),
    };

    let cache = init_cache(&cache_path, cache_policy);

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(cache)
        .build();

    let response = client.get("https://httpbin.org/get").send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
### Throttle Policy

```rust
use reqwest_middleware::ClientBuilder;
use reqwest_drive::{init_throttle, ThrottlePolicy};

#[tokio::main]
async fn main() {
    let throttle_policy = ThrottlePolicy {
        base_delay_ms: 100,
        adaptive_jitter_ms: 50,
        max_concurrent: 1,
        max_retries: 2,
    };

    let throttle = init_throttle(throttle_policy);

    let client = ClientBuilder::new(reqwest::Client::new())
        .with_arc(throttle)
        .build();

    let response = client.get("https://httpbin.org/get").send().await.unwrap();
    println!("Response status: {}", response.status());
}
```
## Using the re-exported `reqwest`

`reqwest-drive` re-exports `reqwest` and generally tracks a compatible upstream `reqwest` release to reduce version ambiguity, while still allowing independent updates when needed. Import it as `reqwest_drive::reqwest`.

Note: Using the re-exported client directly bypasses all throttling and caching, because no middleware is attached.

```rust
use reqwest_drive::reqwest;

let client = reqwest::Client::new();
let request = client.get("https://httpbin.org/get").build().unwrap();
assert_eq!(request.url().as_str(), "https://httpbin.org/get");
```
## Why `reqwest-drive`?

- ✅ Faster than traditional disk-based caches: a memory-mapped, single-file storage container with SIMD-accelerated queries.
- ✅ More efficient than in-memory caches: data persists across runs without holding the full data set in RAM.
- ✅ Backoff-aware throttling helps prevent API bans caused by excessive request rates.
## License

`reqwest-drive` is primarily distributed under the terms of both the MIT license and the Apache License (Version 2.0).

See LICENSE-APACHE and LICENSE-MIT for details.