multi-tier-cache 0.1.2

High-performance multi-tier cache with L1 (Moka) + L2 (Redis) and stampede protection
Documentation

🚀 multi-tier-cache

Crates.io Documentation License

A high-performance, production-ready multi-tier caching library for Rust featuring L1 (in-memory) + L2 (Redis) caches, automatic stampede protection, and built-in Redis Streams support.

✨ Features

  • 🔥 Multi-Tier Architecture: Combines fast in-memory (Moka) with persistent distributed (Redis) caching
  • 🛡️ Cache Stampede Protection: DashMap + Mutex request coalescing prevents duplicate computations (99.6% latency reduction: 534ms → 5.2ms)
  • 📊 Redis Streams: Built-in publish/subscribe with automatic trimming for event streaming
  • ⚡ Automatic L2-to-L1 Promotion: Intelligent cache tier promotion for frequently accessed data
  • 📈 Comprehensive Statistics: Hit rates, promotions, in-flight request tracking
  • 🎯 Zero-Config: Sensible defaults, works out of the box
  • ✅ Production-Proven: Battle-tested at 16,829+ RPS with 5.2ms latency and 95% hit rate

🏗️ Architecture

Request → L1 Cache (Moka) → L2 Cache (Redis) → Compute/Fetch
          ↓ Hit (90%)       ↓ Hit (75%)        ↓ Miss (5%)
          Return            Promote to L1       Store in L1+L2

Cache Flow

  1. Fast Path: Check L1 cache (sub-millisecond, 90% hit rate)
  2. Fallback: Check L2 cache (2-5ms, 75% hit rate) + auto-promote to L1
  3. Compute: Fetch/compute fresh data with stampede protection, store in both tiers

📦 Installation

Add to your Cargo.toml:

[dependencies]
multi-tier-cache = "0.1"
tokio = { version = "1.28", features = ["full"] }
serde_json = "1.0"

🚀 Quick Start

use multi_tier_cache::{CacheSystem, CacheStrategy};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize cache system (uses REDIS_URL env var)
    let cache = CacheSystem::new().await?;

    // Store data with cache strategy
    let data = serde_json::json!({"user": "alice", "score": 100});
    cache.cache_manager()
        .set_with_strategy("user:1", data, CacheStrategy::ShortTerm)
        .await?;

    // Retrieve data (L1 first, then L2 fallback)
    if let Some(cached) = cache.cache_manager().get("user:1").await? {
        println!("Cached data: {}", cached);
    }

    // Get statistics
    let stats = cache.cache_manager().get_stats();
    println!("Hit rate: {:.2}%", stats.hit_rate);

    Ok(())
}

💡 Usage Patterns

1. Cache Strategies

Choose the right TTL for your use case:

use std::time::Duration;

// RealTime (10s) - Fast-changing data
cache.cache_manager()
    .set_with_strategy("live_price", data, CacheStrategy::RealTime)
    .await?;

// ShortTerm (5min) - Frequently accessed data
cache.cache_manager()
    .set_with_strategy("session:123", data, CacheStrategy::ShortTerm)
    .await?;

// MediumTerm (1hr) - Moderately stable data
cache.cache_manager()
    .set_with_strategy("catalog", data, CacheStrategy::MediumTerm)
    .await?;

// LongTerm (3hr) - Stable data
cache.cache_manager()
    .set_with_strategy("config", data, CacheStrategy::LongTerm)
    .await?;

// Custom - Specific requirements
cache.cache_manager()
    .set_with_strategy("metrics", data, CacheStrategy::Custom(Duration::from_secs(30)))
    .await?;

2. Compute-on-Miss Pattern

Fetch data only when cache misses, with stampede protection:

async fn fetch_from_database(id: u32) -> anyhow::Result<serde_json::Value> {
    // Expensive operation...
    Ok(serde_json::json!({"id": id, "data": "..."}))
}

// Only ONE request will compute, others wait and read from cache
let product = cache.cache_manager()
    .get_or_compute_with(
        "product:42",
        CacheStrategy::MediumTerm,
        || fetch_from_database(42)
    )
    .await?;

3. Redis Streams Integration

Publish and consume events:

// Publish to stream
let fields = vec![
    ("event_id".to_string(), "123".to_string()),
    ("event_type".to_string(), "user_action".to_string()),
    ("timestamp".to_string(), "2025-01-01T00:00:00Z".to_string()),
];
let entry_id = cache.cache_manager()
    .publish_to_stream("events_stream", fields, Some(1000)) // Auto-trim to 1000 entries
    .await?;

// Read latest entries
let entries = cache.cache_manager()
    .read_stream_latest("events_stream", 10)
    .await?;

// Blocking read for new entries
let new_entries = cache.cache_manager()
    .read_stream("events_stream", "$", 10, Some(5000)) // Block for 5s
    .await?;

📊 Performance Benchmarks

Tested in production environment:

Metric Value
Throughput 16,829+ requests/second
Latency (p50) 5.2ms
Cache Hit Rate 95% (L1: 90%, L2: 75%)
Stampede Protection 99.6% latency reduction (534ms → 5.2ms)
Success Rate 100% (zero failures under load)

Comparison with Other Libraries

Library Multi-Tier Stampede Protection Redis Support Streams
multi-tier-cache ✅ L1+L2 ✅ Full ✅ Full ✅ Built-in
cached ❌ Single ❌ No ❌ No ❌ No
moka ❌ L1 only ✅ L1 only ❌ No ❌ No
redis-rs ❌ No cache ❌ Manual ✅ Low-level ✅ Manual

🔧 Configuration

Redis Connection (REDIS_URL)

The library connects to Redis using the REDIS_URL environment variable. Configuration priority (highest to lowest):

1. Programmatic Configuration (Highest Priority)

// Set custom Redis URL before initialization
let cache = CacheSystem::with_redis_url("redis://production:6379").await?;

2. Environment Variable

# Set in shell
export REDIS_URL="redis://your-redis-host:6379"
cargo run

3. .env File (Recommended for Development)

# Create .env file in project root
REDIS_URL="redis://localhost:6379"

4. Default Fallback

If not configured, defaults to: redis://127.0.0.1:6379


Use Cases

Development (Local Redis)

# .env
REDIS_URL="redis://127.0.0.1:6379"

Production (Cloud Redis with Authentication)

# Railway, Render, AWS ElastiCache, etc.
REDIS_URL="redis://:your-password@redis-host.cloud:6379"

Docker Compose

services:
  app:
    environment:
      - REDIS_URL=redis://redis:6379
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

Testing (Separate Instance)

#[tokio::test]
async fn test_cache() {
    let cache = CacheSystem::with_redis_url("redis://localhost:6380").await?;
    // Test logic...
}

Redis URL Format

redis://[username]:[password]@[host]:[port]/[database]

Examples:

  • redis://localhost:6379 - Local Redis, no authentication
  • redis://:mypassword@localhost:6379 - Local with password only
  • redis://user:pass@redis.example.com:6379/0 - Remote with username, password, and database 0
  • rediss://redis.cloud:6380 - SSL/TLS connection (note the rediss://)

Troubleshooting Redis Connection

Connection Refused

# Check if Redis is running
redis-cli ping  # Should return "PONG"

# Check the port
netstat -an | grep 6379

# Verify REDIS_URL
echo $REDIS_URL

Authentication Failed

# Ensure password is in the URL
REDIS_URL="redis://:YOUR_PASSWORD@host:6379"

# Test connection with redis-cli
redis-cli -h host -p 6379 -a YOUR_PASSWORD ping

Timeout Errors

  • Check network connectivity: ping your-redis-host
  • Verify firewall rules allow port 6379
  • Check Redis maxclients setting (may be full)
  • Review Redis logs: redis-cli INFO clients

DNS Resolution Issues

# Test DNS resolution
nslookup your-redis-host.com

# Use IP address as fallback
REDIS_URL="redis://192.168.1.100:6379"

Cache Tuning

Default settings (configurable in library source):

  • L1 Capacity: 2000 entries
  • L1 TTL: 5 minutes (per key)
  • L2 TTL: 1 hour (per key)
  • Stream Max Length: 1000 entries

📚 Examples

Run examples with:

# Basic usage
cargo run --example basic_usage

# Stampede protection demonstration
cargo run --example stampede_protection

# Redis Streams
cargo run --example redis_streams

# Cache strategies
cargo run --example cache_strategies

# Advanced patterns
cargo run --example advanced_usage

# Health monitoring
cargo run --example health_monitoring

🏛️ Architecture Details

Cache Stampede Protection

When multiple requests hit an expired cache key simultaneously:

  1. First request acquires DashMap mutex lock and computes value
  2. Subsequent requests wait on the same mutex
  3. After computation, all requests read from cache
  4. Result: Only ONE computation instead of N

Performance Impact:

  • Without protection: 10 requests × 500ms = 5000ms total
  • With protection: 1 request × 500ms = 500ms total (90% faster)

L2-to-L1 Promotion

When data is found in L2 but not L1:

  1. Retrieve from Redis (L2)
  2. Automatically store in Moka (L1) with fresh TTL
  3. Future requests hit fast L1 cache
  4. Result: Self-optimizing cache that adapts to access patterns

🛠️ Development

Build

# Development
cargo build

# Release (optimized)
cargo build --release

# Run tests
cargo test

Documentation

# Generate and open docs
cargo doc --open

📖 Migration Guide

From cached crate

// Before (cached)
use cached::proc_macro::cached;

#[cached(time = 60)]
fn expensive_function(arg: String) -> String {
    // ...
}

// After (multi-tier-cache)
async fn expensive_function(cache: &CacheManager, arg: String) -> Result<String> {
    cache.get_or_compute_with(
        &format!("func:{}", arg),
        CacheStrategy::ShortTerm,
        || async { /* computation */ }
    ).await
}

From direct Redis usage

// Before (redis-rs)
let mut conn = client.get_connection()?;
let value: String = conn.get("key")?;
conn.set_ex("key", value, 3600)?;

// After (multi-tier-cache)
if let Some(value) = cache.cache_manager().get("key").await? {
    // Use cached value
}
cache.cache_manager()
    .set_with_strategy("key", value, CacheStrategy::MediumTerm)
    .await?;

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📄 License

Licensed under either of:

at your option.

🙏 Acknowledgments

Built with:

  • Moka - High-performance concurrent cache library
  • Redis-rs - Redis client for Rust
  • DashMap - Blazingly fast concurrent map
  • Tokio - Asynchronous runtime

📞 Contact


Made with ❤️ in Rust | Production-proven in crypto trading dashboard serving 16,829+ RPS