multi-tier-cache 0.5.1

Customizable multi-tier cache with L1 (Moka in-memory) + L2 (Redis distributed) defaults, expandable to L3/L4+, cross-instance invalidation via Pub/Sub, stampede protection, and flexible TTL scaling

🚀 multi-tier-cache

A high-performance, production-ready multi-tier caching library for Rust featuring L1 (in-memory) + L2 (Redis) caches, automatic stampede protection, and built-in Redis Streams support.

✨ Features

  • 🔥 Multi-Tier Architecture: Combines fast in-memory (Moka) with persistent distributed (Redis) caching
  • 🌐 Dynamic Multi-Tier (v0.5.0+): Support for 3, 4, or more cache tiers (L1+L2+L3+L4+...) with flexible configuration ⭐ NEW
  • 🔄 Cross-Instance Cache Invalidation (v0.4.0+): Real-time cache synchronization across all instances via Redis Pub/Sub
  • 🔌 Pluggable Backends (v0.3.0+): Swap Moka/Redis with custom implementations (DashMap, Memcached, RocksDB, etc.)
  • 🛡️ Cache Stampede Protection: DashMap + Mutex request coalescing prevents duplicate computations (99.6% latency reduction: 534ms → 5.2ms)
  • 📊 Redis Streams: Built-in publish/subscribe with automatic trimming for event streaming
  • ⚡ Automatic Tier Promotion: Intelligent cache tier promotion for frequently accessed data with TTL preservation and per-tier scaling
  • 📈 Comprehensive Statistics: Hit rates per tier, promotions, in-flight request tracking, invalidation metrics
  • 🎯 Zero-Config: Sensible defaults, works out of the box
  • ✅ Production-Proven: Battle-tested at 16,829+ RPS with 5.2ms latency and 95% hit rate

๐Ÿ—๏ธ Architecture

Request โ†’ L1 Cache (Moka) โ†’ L2 Cache (Redis) โ†’ Compute/Fetch
          โ†“ Hit (90%)       โ†“ Hit (75%)        โ†“ Miss (5%)
          Return            Promote to L1       Store in L1+L2

Cache Flow

  1. Fast Path: Check L1 cache (sub-millisecond, 90% hit rate)
  2. Fallback: Check L2 cache (2-5ms, 75% hit rate) + auto-promote to L1
  3. Compute: Fetch/compute fresh data with stampede protection, store in both tiers

📦 Installation

Add to your Cargo.toml:

[dependencies]
multi-tier-cache = "0.5"
tokio = { version = "1.28", features = ["full"] }
serde_json = "1.0"

Version Guide:

  • v0.5.0+: Dynamic multi-tier architecture (L1+L2+L3+L4+...), per-tier statistics ⭐ NEW
  • v0.4.0+: Cross-instance cache invalidation via Redis Pub/Sub
  • v0.3.0+: Pluggable backends, trait-based architecture
  • v0.2.0+: Type-safe database caching with get_or_compute_typed()
  • v0.1.0+: Core multi-tier caching with stampede protection

🚀 Quick Start

use multi_tier_cache::{CacheSystem, CacheStrategy};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize cache system (uses REDIS_URL env var)
    let cache = CacheSystem::new().await?;

    // Store data with cache strategy
    let data = serde_json::json!({"user": "alice", "score": 100});
    cache.cache_manager()
        .set_with_strategy("user:1", data, CacheStrategy::ShortTerm)
        .await?;

    // Retrieve data (L1 first, then L2 fallback)
    if let Some(cached) = cache.cache_manager().get("user:1").await? {
        println!("Cached data: {}", cached);
    }

    // Get statistics
    let stats = cache.cache_manager().get_stats();
    println!("Hit rate: {:.2}%", stats.hit_rate);

    Ok(())
}

💡 Usage Patterns

1. Cache Strategies

Choose the right TTL for your use case:

use std::time::Duration;

// RealTime (10s) - Fast-changing data
cache.cache_manager()
    .set_with_strategy("live_price", data, CacheStrategy::RealTime)
    .await?;

// ShortTerm (5min) - Frequently accessed data
cache.cache_manager()
    .set_with_strategy("session:123", data, CacheStrategy::ShortTerm)
    .await?;

// MediumTerm (1hr) - Moderately stable data
cache.cache_manager()
    .set_with_strategy("catalog", data, CacheStrategy::MediumTerm)
    .await?;

// LongTerm (3hr) - Stable data
cache.cache_manager()
    .set_with_strategy("config", data, CacheStrategy::LongTerm)
    .await?;

// Custom - Specific requirements
cache.cache_manager()
    .set_with_strategy("metrics", data, CacheStrategy::Custom(Duration::from_secs(30)))
    .await?;

2. Compute-on-Miss Pattern

Fetch data only when cache misses, with stampede protection:

async fn fetch_from_database(id: u32) -> anyhow::Result<serde_json::Value> {
    // Expensive operation...
    Ok(serde_json::json!({"id": id, "data": "..."}))
}

// Only ONE request will compute, others wait and read from cache
let product = cache.cache_manager()
    .get_or_compute_with(
        "product:42",
        CacheStrategy::MediumTerm,
        || fetch_from_database(42)
    )
    .await?;

3. Redis Streams Integration

Publish and consume events:

// Publish to stream
let fields = vec![
    ("event_id".to_string(), "123".to_string()),
    ("event_type".to_string(), "user_action".to_string()),
    ("timestamp".to_string(), "2025-01-01T00:00:00Z".to_string()),
];
let entry_id = cache.cache_manager()
    .publish_to_stream("events_stream", fields, Some(1000)) // Auto-trim to 1000 entries
    .await?;

// Read latest entries
let entries = cache.cache_manager()
    .read_stream_latest("events_stream", 10)
    .await?;

// Blocking read for new entries
let new_entries = cache.cache_manager()
    .read_stream("events_stream", "$", 10, Some(5000)) // Block for 5s
    .await?;

4. Type-Safe Database Caching (New in 0.2.0! 🎉)

Eliminate boilerplate with automatic serialization/deserialization for database queries:

use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize)]
struct User {
    id: i64,
    name: String,
    email: String,
}

// โŒ OLD WAY: Manual cache + serialize + deserialize (40+ lines)
let cached = cache.cache_manager().get("user:123").await?;
let user: User = match cached {
    Some(json) => serde_json::from_value(json)?,
    None => {
        let user = sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = $1")
            .bind(123)
            .fetch_one(&pool)
            .await?;
        let json = serde_json::to_value(&user)?;
        cache.cache_manager().set_with_strategy("user:123", json, CacheStrategy::MediumTerm).await?;
        user
    }
};

// ✅ NEW WAY: Type-safe automatic caching (5 lines)
let user: User = cache.cache_manager()
    .get_or_compute_typed(
        "user:123",
        CacheStrategy::MediumTerm,
        || async {
            sqlx::query_as::<_, User>("SELECT * FROM users WHERE id = $1")
                .bind(123)
                .fetch_one(&pool)
                .await
        }
    )
    .await?;

Benefits:

  • ✅ Type-Safe: Compiler checks types, no runtime surprises
  • ✅ Zero Boilerplate: Automatic serialize/deserialize
  • ✅ Full Cache Features: L1→L2 fallback, stampede protection, auto-promotion
  • ✅ Generic: Works with any type implementing Serialize + DeserializeOwned

More Examples:

// PostgreSQL Reports
#[derive(Serialize, Deserialize)]
struct Report {
    id: i64,
    title: String,
    data: serde_json::Value,
}

let report: Report = cache.cache_manager()
    .get_or_compute_typed(
        &format!("report:{}", id),
        CacheStrategy::LongTerm,
        || async {
            sqlx::query_as("SELECT * FROM reports WHERE id = $1")
                .bind(id)
                .fetch_one(&pool)
                .await
        }
    )
    .await?;

// API Responses
#[derive(Serialize, Deserialize)]
struct ApiData {
    status: String,
    items: Vec<String>,
}

let data: ApiData = cache.cache_manager()
    .get_or_compute_typed(
        "api:external",
        CacheStrategy::RealTime,
        || async {
            reqwest::get("https://api.example.com/data")
                .await?
                .json::<ApiData>()
                .await
        }
    )
    .await?;

// Complex Computations
use std::collections::HashMap;
use std::time::Duration;

#[derive(Serialize, Deserialize)]
struct AnalyticsResult {
    total: i64,
    average: f64,
    breakdown: HashMap<String, i64>,
}

let analytics: AnalyticsResult = cache.cache_manager()
    .get_or_compute_typed(
        "analytics:monthly",
        CacheStrategy::Custom(Duration::from_secs(6 * 3600)),
        || async {
            // Expensive computation...
            compute_monthly_analytics(&pool).await
        }
    )
    .await?;

Performance:

  • L1 Hit: <1ms + deserialization (~10-50μs)
  • L2 Hit: 2-5ms + deserialization + L1 promotion
  • Cache Miss: Your query time + serialization + L1+L2 storage

5. Cross-Instance Cache Invalidation (New in 0.4.0! 🎉)

Keep caches synchronized across multiple servers/instances using Redis Pub/Sub:

Why Invalidation?

In distributed systems with multiple cache instances, stale data is a common problem:

  • User updates profile on Server A → Cache on Server B still has old data
  • Admin changes product price → Other servers show outdated prices
  • TTL-only expiration → Users see stale data until timeout

Solution: Real-time cache invalidation across ALL instances!

Two Invalidation Strategies

1. Remove Strategy (Lazy Reload)

use multi_tier_cache::{CacheManager, L1Cache, L2Cache, InvalidationConfig};
use std::sync::Arc;

// Initialize with invalidation support
let config = InvalidationConfig::default();
let cache_manager = CacheManager::new_with_invalidation(
    Arc::new(L1Cache::new().await?),
    Arc::new(L2Cache::new().await?),
    "redis://localhost",
    config
).await?;

// Update database
database.update_user(123, new_data).await?;

// Invalidate cache across ALL instances
// → Cache removed, next access triggers reload
cache_manager.invalidate("user:123").await?;

2. Update Strategy (Zero Cache Miss)

// Update database
database.update_user(123, new_data).await?;

// Push new data directly to ALL instances' L1 caches
// → No cache miss, instant update!
cache_manager.update_cache(
    "user:123",
    serde_json::to_value(&new_data)?,
    Some(Duration::from_secs(3600))
).await?;

Pattern-Based Invalidation

Invalidate multiple related keys at once:

// Update product category in database
database.update_category(42, new_price).await?;

// Invalidate ALL products in category across ALL instances
cache_manager.invalidate_pattern("product:category:42:*").await?;

Write-Through Caching

Cache and broadcast in one operation:

let report = generate_monthly_report().await?;

// Cache locally AND broadcast to all other instances
cache_manager.set_with_broadcast(
    "report:monthly",
    serde_json::to_value(&report)?,
    CacheStrategy::LongTerm
).await?;

How It Works

Instance A              Redis Pub/Sub           Instance B
    │                        │                       │
    │  1. Update data        │                       │
    │  2. Broadcast msg  ───>│                       │
    │                        │  3. Receive msg  ───>│
    │                        │  4. Update L1    ────┘
    │                        │                       ✓

Performance:

  • Latency: ~1-5ms invalidation propagation
  • Overhead: Negligible (<0.1% CPU for subscriber)
  • Production-Safe: Auto-reconnection, error recovery

Configuration

use multi_tier_cache::InvalidationConfig;

let config = InvalidationConfig {
    channel: "my_app:cache:invalidate".to_string(),
    auto_broadcast_on_write: false,  // Manual control
    enable_audit_stream: true,       // Enable audit trail
    audit_stream: "cache:invalidations".to_string(),
    audit_stream_maxlen: Some(10000),
};
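
To use the custom config, pass it to the invalidation-aware constructor shown earlier (a sketch reusing that example's setup):

let cache_manager = CacheManager::new_with_invalidation(
    Arc::new(L1Cache::new().await?),
    Arc::new(L2Cache::new().await?),
    "redis://localhost",
    config
).await?;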

When to Use:

  • ✅ Multi-server deployments (load balancers, horizontal scaling)
  • ✅ Data that changes frequently (user profiles, prices, inventory)
  • ✅ Real-time requirements (instant consistency)
  • ❌ Single-server deployments (unnecessary overhead)
  • ❌ Rarely-changing data (TTL is sufficient)

Comparison:

Strategy   Bandwidth   Cache Miss             Use Case
Remove     Low         Yes (on next access)   Large values, infrequent access
Update     Higher      No (instant)           Small values, frequent access
Pattern    Medium      Yes                    Bulk invalidation (categories)

6. Custom Cache Backends (New in 0.3.0! 🎉)

Starting from v0.3.0, you can replace the default Moka (L1) and Redis (L2) backends with your own custom implementations!

Use Cases:

  • Replace Redis with Memcached, DragonflyDB, or KeyDB
  • Use DashMap instead of Moka for L1
  • Implement no-op caches for testing
  • Add custom cache eviction policies
  • Integrate with proprietary caching systems

Basic Example: Custom HashMap L1 Cache

use multi_tier_cache::{CacheBackend, CacheSystemBuilder, async_trait};
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};
use anyhow::Result;

struct HashMapCache {
    store: Arc<RwLock<HashMap<String, (serde_json::Value, Instant)>>>,
}

impl HashMapCache {
    fn new() -> Self {
        Self { store: Arc::new(RwLock::new(HashMap::new())) }
    }
}

#[async_trait]
impl CacheBackend for HashMapCache {
    async fn get(&self, key: &str) -> Option<serde_json::Value> {
        let store = self.store.read().unwrap();
        store.get(key).and_then(|(value, expiry)| {
            if *expiry > Instant::now() {
                Some(value.clone())
            } else {
                None
            }
        })
    }

    async fn set_with_ttl(
        &self,
        key: &str,
        value: serde_json::Value,
        ttl: Duration,
    ) -> Result<()> {
        let mut store = self.store.write().unwrap();
        store.insert(key.to_string(), (value, Instant::now() + ttl));
        Ok(())
    }

    async fn remove(&self, key: &str) -> Result<()> {
        self.store.write().unwrap().remove(key);
        Ok(())
    }

    async fn health_check(&self) -> bool {
        true
    }

    fn name(&self) -> &str {
        "HashMap"
    }
}

// Use custom backend
let custom_l1 = Arc::new(HashMapCache::new());

let cache = CacheSystemBuilder::new()
    .with_l1(custom_l1 as Arc<dyn CacheBackend>)
    .build()
    .await?;

Advanced: Custom L2 Backend with TTL

For L2 caches, implement L2CacheBackend which extends CacheBackend with get_with_ttl():

use multi_tier_cache::{L2CacheBackend, async_trait};

#[async_trait]
impl CacheBackend for MyCustomL2 {
    // ... implement CacheBackend methods
}

#[async_trait]
impl L2CacheBackend for MyCustomL2 {
    async fn get_with_ttl(
        &self,
        key: &str,
    ) -> Option<(serde_json::Value, Option<Duration>)> {
        // Placeholder sketch: `value` and `remaining_ttl` stand in for your
        // store's lookup result and the key's remaining time-to-live
        Some((value, Some(remaining_ttl)))
    }
}

Builder API

use multi_tier_cache::CacheSystemBuilder;

let cache = CacheSystemBuilder::new()
    .with_l1(custom_l1)        // Custom L1 backend
    .with_l2(custom_l2)        // Custom L2 backend
    .with_streams(kafka)       // Optional: Custom streaming backend
    .build()
    .await?;

Mix and Match (a sketch follows this list):

  • Use custom L1 with default Redis L2
  • Use default Moka L1 with custom L2
  • Replace both L1 and L2 backends
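
For example, overriding only L1 while keeping the default L2 (a sketch that reuses the HashMapCache from above and assumes the builder falls back to the default Redis L2 when none is supplied):

let cache = CacheSystemBuilder::new()
    .with_l1(Arc::new(HashMapCache::new()) as Arc<dyn CacheBackend>)
    .build()
    .await?;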

See: examples/custom_backends.rs for complete working examples including:

  • HashMap L1 cache
  • In-memory L2 cache with TTL
  • No-op cache (for testing)
  • Mixed backend configurations

7. Multi-Tier Architecture (New in 0.5.0! 🎉)

Starting from v0.5.0, you can configure 3, 4, or more cache tiers beyond the default L1+L2 setup!

Use Cases:

  • L3 (Cold Storage): RocksDB or LevelDB for large datasets with longer TTL
  • L4 (Archive): S3 or filesystem for rarely-accessed but important data
  • Custom Tiers: Any combination of backends to fit your workload

Why Multi-Tier?

Request → L1 (Hot - RAM) → L2 (Warm - Redis) → L3 (Cold - RocksDB) → L4 (Archive - S3)
          <1ms (95%)        2-5ms (4%)           10-50ms (0.9%)        100-500ms (0.1%)

  • Cost Optimization: Keep hot data in expensive fast storage, cold data in cheap slow storage
  • Capacity: Extend cache capacity beyond RAM limits
  • Performance: 95%+ requests served from L1/L2, only rare misses hit slower tiers

Basic Example: 3-Tier Cache

use multi_tier_cache::{CacheSystemBuilder, CacheStrategy, TierConfig, L2Cache};
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Setup backends
    let l1 = Arc::new(L2Cache::new().await?);  // Hot tier: Redis (the default Moka L1Cache also works)
    let l2 = Arc::new(L2Cache::new().await?);  // Warm tier: Redis
    let l3 = Arc::new(RocksDBCache::new("/tmp/cache").await?);  // Cold tier: custom backend (sketch below)

    // Build 3-tier cache
    let cache = CacheSystemBuilder::new()
        .with_tier(l1, TierConfig::as_l1())
        .with_tier(l2, TierConfig::as_l2())
        .with_l3(l3)  // Convenience method: 2x TTL
        .build()
        .await?;

    // Use as normal - transparent multi-tier
    cache.cache_manager()
        .set_with_strategy("key", data, CacheStrategy::LongTerm)
        .await?;

    Ok(())
}
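
RocksDBCache above is not shipped by the library; it stands for any custom backend. A minimal sketch of one, assuming the rocksdb crate and the CacheBackend trait from section 6 (RocksDB has no per-key TTL, so an expiry timestamp is stored next to the value):

use multi_tier_cache::{CacheBackend, async_trait};
use anyhow::Result;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

struct RocksDBCache {
    db: rocksdb::DB,
}

impl RocksDBCache {
    async fn new(path: &str) -> Result<Self> {
        Ok(Self { db: rocksdb::DB::open_default(path)? })
    }
}

#[async_trait]
impl CacheBackend for RocksDBCache {
    async fn get(&self, key: &str) -> Option<serde_json::Value> {
        // Stored format: (unix_expiry_secs, value) serialized as JSON bytes
        let bytes = self.db.get(key).ok()??;
        let (expires_at, value): (u64, serde_json::Value) =
            serde_json::from_slice(&bytes).ok()?;
        let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_secs();
        (expires_at > now).then_some(value)
    }

    async fn set_with_ttl(&self, key: &str, value: serde_json::Value, ttl: Duration) -> Result<()> {
        let expires_at = SystemTime::now().duration_since(UNIX_EPOCH)?.as_secs() + ttl.as_secs();
        self.db.put(key, serde_json::to_vec(&(expires_at, value))?)?;
        Ok(())
    }

    async fn remove(&self, key: &str) -> Result<()> {
        self.db.delete(key)?;
        Ok(())
    }

    async fn health_check(&self) -> bool {
        true
    }

    fn name(&self) -> &str {
        "RocksDB"
    }
}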

Tier Configuration

Pre-configured Tiers:

// L1 - Hot tier (no promotion, standard TTL)
TierConfig::as_l1()

// L2 - Warm tier (promote to L1, standard TTL)
TierConfig::as_l2()

// L3 - Cold tier (promote to L2+L1, 2x TTL)
TierConfig::as_l3()

// L4 - Archive tier (promote to all, 8x TTL)
TierConfig::as_l4()

Custom Tier:

TierConfig::new(3)
    .with_promotion(true)   // Auto-promote on hit
    .with_ttl_scale(5.0)    // 5x TTL multiplier
    .with_level(3)          // Tier number
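
A custom tier plugs into the same builder as the pre-configured ones (a sketch; l1, l2, and rocksdb_l3 stand for any backends implementing the cache traits):

let cache = CacheSystemBuilder::new()
    .with_tier(l1, TierConfig::as_l1())
    .with_tier(l2, TierConfig::as_l2())
    .with_tier(rocksdb_l3, TierConfig::new(3).with_promotion(true).with_ttl_scale(5.0))
    .build()
    .await?;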

TTL Scaling Example

// Set data with 1-hour TTL
cache.cache_manager()
    .set_with_strategy("product:123", data, CacheStrategy::MediumTerm)  // 1hr
    .await?;

// Actual TTL per tier:
// L1: 1 hour   (scale = 1.0x)
// L2: 1 hour   (scale = 1.0x)
// L3: 2 hours  (scale = 2.0x) ← Keeps data longer!
// L4: 8 hours  (scale = 8.0x) ← Much longer retention!

Per-Tier Statistics

Track hit rates for each tier:

if let Some(tier_stats) = cache.cache_manager().get_tier_stats() {
    for stats in tier_stats {
        println!("L{}: {} hits ({})",
                 stats.tier_level,
                 stats.hit_count(),
                 stats.backend_name);
    }
}

// Output:
// L1: 9500 hits (Redis)
// L2: 450 hits (Redis)
// L3: 45 hits (RocksDB)
// L4: 5 hits (S3)

4-Tier Example

let cache = CacheSystemBuilder::new()
    .with_tier(moka_l1, TierConfig::as_l1())
    .with_tier(redis_l2, TierConfig::as_l2())
    .with_tier(rocksdb_l3, TierConfig::as_l3())
    .with_tier(s3_l4, TierConfig::as_l4())
    .build()
    .await?;

Automatic Tier Promotion

When data is found in a lower tier (e.g., L3), it's automatically promoted to all upper tiers:

Request for "key"
  ├─ Check L1 → Miss
  ├─ Check L2 → Miss
  └─ Check L3 → HIT!
       ├─ Promote to L2 (with original TTL)
       ├─ Promote to L1 (with original TTL)
       └─ Return data

Next request for "key" → L1 Hit! <1ms

Backward Compatibility

Existing 2-tier users: No changes required! Your code continues to work:

// This still works exactly as before (v0.1.0 - v0.4.x)
let cache = CacheSystemBuilder::new().build().await?;

Multi-tier mode is opt-in via .with_tier() or .with_l3()/.with_l4() methods.

When to Use Multi-Tier

✅ Good fit:

  • Large datasets that don't fit in RAM
  • Cost-sensitive workloads (mix expensive + cheap storage)
  • Long-tail data access patterns (90% hot, 10% cold)
  • Hierarchical data with different access frequencies

โŒ Not needed:

  • Small datasets (< 10GB) that fit in Redis
  • Uniform access patterns (all data equally hot)
  • Latency-critical paths (stick to L1+L2)

⚖️ Feature Compatibility

Invalidation + Custom Backends

✅ Compatible:

  • Cache invalidation works with default Redis L2 backend
  • Single-key operations (invalidate, update_cache) work with any backend
  • Type-safe caching works with all backends
  • Stampede protection works with all backends

⚠️ Limited Support:

  • Pattern-based invalidation (invalidate_pattern) requires the concrete Redis L2Cache
  • Custom L2 backends: single-key invalidation works, but pattern invalidation is not available
  • Workaround: implement pattern matching in your custom backend, or track keys application-side (see the sketch after the example below)

Example:

// ✅ Works: Default Redis + Invalidation
let cache = CacheManager::new_with_invalidation(
    Arc::new(L1Cache::new().await?),
    Arc::new(L2Cache::new().await?),  // Concrete Redis L2
    "redis://localhost",
    InvalidationConfig::default()
).await?;

cache.invalidate("key").await?;           // ✅ Works
cache.invalidate_pattern("user:*").await?; // ✅ Works (has scan_keys)

// ⚠️ Limited: Custom L2 + Invalidation
let cache = CacheManager::new_with_backends(
    custom_l1,
    custom_l2,  // Custom trait-based L2
    None
).await?;

// Pattern invalidation not available without concrete L2Cache
// Use single-key invalidation instead
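
One possible workaround (a sketch, not a library API) is to track the keys your application caches in an index of your own and fan out single-key invalidations, which work on any backend; my_key_index below is a hypothetical Vec<String>:

// Hypothetical: my_key_index holds the keys this app has cached
let matching: Vec<String> = my_key_index
    .iter()
    .filter(|key| key.starts_with("product:category:42:"))
    .cloned()
    .collect();

for key in matching {
    cache.invalidate(&key).await?;  // single-key invalidation works on custom backends
}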

Combining All Features

All features work together seamlessly:

use multi_tier_cache::*;
use std::sync::Arc;

// v0.4.0: Invalidation
let config = InvalidationConfig::default();

// v0.3.0: Custom backends (or use defaults)
let l1 = Arc::new(L1Cache::new().await?);
let l2 = Arc::new(L2Cache::new().await?);

// Initialize with invalidation
let cache_manager = CacheManager::new_with_invalidation(
    l1, l2, "redis://localhost", config
).await?;

// v0.2.0: Type-safe caching
let user: User = cache_manager.get_or_compute_typed(
    "user:123",
    CacheStrategy::MediumTerm,
    || fetch_user(123)
).await?;

// v0.4.0: Invalidate across instances
cache_manager.invalidate("user:123").await?;

// v0.1.0: All core features work
let stats = cache_manager.get_stats();
println!("Hit rate: {:.2}%", stats.hit_rate);

No Conflicts: All features are designed to work together without interference.

📊 Performance Benchmarks

Tested in production environment:

Metric                Value
Throughput            16,829+ requests/second
Latency (p50)         5.2ms
Cache Hit Rate        95% (L1: 90%, L2: 75%)
Stampede Protection   99.6% latency reduction (534ms → 5.2ms)
Success Rate          100% (zero failures under load)

Comparison with Other Libraries

Library            Multi-Tier    Stampede Protection   Redis Support   Streams       Invalidation
multi-tier-cache   ✅ L1+L2      ✅ Full               ✅ Full         ✅ Built-in   ✅ Pub/Sub
cached             ❌ Single     ❌ No                 ❌ No           ❌ No         ❌ No
moka               ❌ L1 only    ✅ L1 only            ❌ No           ❌ No         ❌ No
redis-rs           ❌ No cache   ❌ Manual             ✅ Low-level    ✅ Manual     ❌ Manual

Running Benchmarks

The library includes comprehensive benchmarks built with Criterion:

# Run all benchmarks
cargo bench

# Run specific benchmark suite
cargo bench --bench cache_operations
cargo bench --bench stampede_protection
cargo bench --bench invalidation
cargo bench --bench serialization

# Save a named baseline for comparison (Criterion writes HTML reports by default)
cargo bench -- --save-baseline my_baseline

Benchmark Suites:

  • cache_operations: L1/L2 read/write performance, cache strategies, compute-on-miss patterns
  • stampede_protection: Concurrent access, request coalescing under load
  • invalidation: Cross-instance invalidation overhead, pattern matching performance
  • serialization: JSON vs typed caching, data size impact

Results are saved to target/criterion/ with interactive HTML reports.

🔧 Configuration

Redis Connection (REDIS_URL)

The library connects to Redis using the REDIS_URL environment variable. Configuration priority (highest to lowest):

1. Programmatic Configuration (Highest Priority)

// Set custom Redis URL before initialization
let cache = CacheSystem::with_redis_url("redis://production:6379").await?;

2. Environment Variable

# Set in shell
export REDIS_URL="redis://your-redis-host:6379"
cargo run

3. .env File (Recommended for Development)

# Create .env file in project root
REDIS_URL="redis://localhost:6379"

4. Default Fallback

If not configured, defaults to: redis://127.0.0.1:6379


Use Cases

Development (Local Redis)

# .env
REDIS_URL="redis://127.0.0.1:6379"

Production (Cloud Redis with Authentication)

# Railway, Render, AWS ElastiCache, etc.
REDIS_URL="redis://:your-password@redis-host.cloud:6379"

Docker Compose

services:
  app:
    environment:
      - REDIS_URL=redis://redis:6379
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

Testing (Separate Instance)

#[tokio::test]
async fn test_cache() -> anyhow::Result<()> {
    let cache = CacheSystem::with_redis_url("redis://localhost:6380").await?;
    // Test logic...
    Ok(())
}

Redis URL Format

redis://[username]:[password]@[host]:[port]/[database]

Examples:

  • redis://localhost:6379 - Local Redis, no authentication
  • redis://:mypassword@localhost:6379 - Local with password only
  • redis://user:pass@redis.example.com:6379/0 - Remote with username, password, and database 0
  • rediss://redis.cloud:6380 - SSL/TLS connection (note the rediss://)

Troubleshooting Redis Connection

Connection Refused

# Check if Redis is running
redis-cli ping  # Should return "PONG"

# Check the port
netstat -an | grep 6379

# Verify REDIS_URL
echo $REDIS_URL

Authentication Failed

# Ensure password is in the URL
REDIS_URL="redis://:YOUR_PASSWORD@host:6379"

# Test connection with redis-cli
redis-cli -h host -p 6379 -a YOUR_PASSWORD ping

Timeout Errors

  • Check network connectivity: ping your-redis-host
  • Verify firewall rules allow port 6379
  • Check Redis maxclients setting (may be full)
  • Review Redis logs: redis-cli INFO clients

DNS Resolution Issues

# Test DNS resolution
nslookup your-redis-host.com

# Use IP address as fallback
REDIS_URL="redis://192.168.1.100:6379"

Cache Tuning

Default settings (configurable in library source):

  • L1 Capacity: 2000 entries
  • L1 TTL: 5 minutes (per key)
  • L2 TTL: 1 hour (per key)
  • Stream Max Length: 1000 entries

🧪 Testing

Integration Tests

The library includes comprehensive integration tests (30 tests) that verify functionality with real Redis:

# Run all integration tests
cargo test --tests

# Run specific test suite
cargo test --test integration_basic
cargo test --test integration_invalidation
cargo test --test integration_stampede
cargo test --test integration_streams

Test Coverage:

  • ✅ L1 cache operations (get, set, remove, TTL)
  • ✅ L2 cache operations (get_with_ttl, scan_keys, bulk operations)
  • ✅ L2-to-L1 promotion
  • ✅ Cross-instance invalidation (Remove, Update, Pattern)
  • ✅ Stampede protection with concurrent requests
  • ✅ Type-safe caching with serialization
  • ✅ Redis Streams (publish, read, trimming)
  • ✅ Statistics tracking

Requirements:

  • Redis server running on localhost:6379 (or set REDIS_URL)
  • Tests automatically clean up after themselves

Test Structure:

tests/
├── common/mod.rs                 # Shared utilities
├── integration_basic.rs          # Core cache operations
├── integration_invalidation.rs   # Cross-instance sync
├── integration_stampede.rs       # Concurrent access
└── integration_streams.rs        # Redis Streams

📚 Examples

Run examples with:

# Basic usage
cargo run --example basic_usage

# Stampede protection demonstration
cargo run --example stampede_protection

# Redis Streams
cargo run --example redis_streams

# Cache strategies
cargo run --example cache_strategies

# Advanced patterns
cargo run --example advanced_usage

# Health monitoring
cargo run --example health_monitoring

๐Ÿ›๏ธ Architecture Details

Cache Stampede Protection

When multiple requests hit an expired cache key simultaneously:

  1. The first request acquires a per-key mutex (held in a DashMap) and computes the value
  2. Subsequent requests for the same key wait on that mutex
  3. After computation, all requests read from cache
  4. Result: Only ONE computation instead of N

Performance Impact:

  • Without protection: 10 requests × 500ms = 5000ms total compute
  • With protection: 1 request × 500ms = 500ms total compute (90% less work)
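
A minimal sketch of observing the coalescing through the public API (assumptions: the CacheSystem is shared behind an Arc, and the compute closure is FnOnce):

use std::sync::Arc;
use std::sync::atomic::{AtomicU32, Ordering};

let cache = Arc::new(CacheSystem::new().await?);
let computations = Arc::new(AtomicU32::new(0));

let mut handles = Vec::new();
for _ in 0..10 {
    let cache = cache.clone();
    let counter = computations.clone();
    handles.push(tokio::spawn(async move {
        cache.cache_manager()
            .get_or_compute_with("hot:key", CacheStrategy::ShortTerm, move || async move {
                counter.fetch_add(1, Ordering::SeqCst);                           // count real computations
                tokio::time::sleep(std::time::Duration::from_millis(500)).await;  // simulate a slow fetch
                Ok(serde_json::json!({"expensive": true}))
            })
            .await
    }));
}
for handle in handles {
    handle.await??;
}

// All 10 requests complete, but the closure ran only once
assert_eq!(computations.load(Ordering::SeqCst), 1);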

L2-to-L1 Promotion

When data is found in L2 but not L1:

  1. Retrieve from Redis (L2)
  2. Automatically store in Moka (L1) with fresh TTL
  3. Future requests hit fast L1 cache
  4. Result: Self-optimizing cache that adapts to access patterns

🛠️ Development

Build

# Development
cargo build

# Release (optimized)
cargo build --release

# Run tests
cargo test

Documentation

# Generate and open docs
cargo doc --open

📖 Migration Guide

From cached crate

// Before (cached)
use cached::proc_macro::cached;

#[cached(time = 60)]
fn expensive_function(arg: String) -> String {
    // ...
}

// After (multi-tier-cache)
async fn expensive_function(cache: &CacheManager, arg: String) -> Result<String> {
    cache.get_or_compute_typed(
        &format!("func:{}", arg),
        CacheStrategy::ShortTerm,
        || async { /* computation */ }
    ).await
}

From direct Redis usage

// Before (redis-rs)
let mut conn = client.get_connection()?;
let value: String = conn.get("key")?;
conn.set_ex("key", value, 3600)?;

// After (multi-tier-cache)
if let Some(value) = cache.cache_manager().get("key").await? {
    // Use cached value
}
cache.cache_manager()
    .set_with_strategy("key", value, CacheStrategy::MediumTerm)
    .await?;

๐Ÿค Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📄 License

Licensed under either of:

  • Apache License, Version 2.0
  • MIT License

at your option.

๐Ÿ™ Acknowledgments

Built with:

  • Moka - High-performance concurrent cache library
  • Redis-rs - Redis client for Rust
  • DashMap - Blazingly fast concurrent map
  • Tokio - Asynchronous runtime

Made with ❤️ in Rust | Production-proven in a crypto trading dashboard serving 16,829+ RPS