🚀 Oxcache


English | 简体中文

Oxcache is a high-performance, production-grade two-level caching library for Rust, providing L1 (Moka in-memory cache) + L2 (Redis distributed cache) architecture.

✨ Key Features

  • 🚀 Extreme Performance: L1 nanosecond-level responses (P99 < 100ns), L2 millisecond-level responses (P99 < 5ms)
  • 🎯 Zero Code Changes: Enable caching with a single #[cached] macro
  • 🔄 Auto Recovery: Automatic degradation on Redis failure, WAL replay on recovery
  • 🌐 Multi-Instance Sync: Pub/Sub + version-based invalidation synchronization
  • ⚡ Batch Optimization: Intelligent batch writes for significantly higher throughput
  • 🛡️ Production Grade: Complete observability, health checks, verified by chaos testing

📦 Quick Start

1. Add Dependency

Add oxcache to your Cargo.toml:

[dependencies]
oxcache = "0.1"

Note: tokio and serde are already included by default. If you need minimal dependencies, you can use oxcache = { version = "0.1", default-features = false } and add them manually.
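For the minimal-dependency route described in the note, the Cargo.toml might look like the following (the tokio feature set shown is an assumption chosen to support #[tokio::main]; adjust it to your runtime needs):

```toml
[dependencies]
oxcache = { version = "0.1", default-features = false }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
```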

2. Configuration

Create a config.toml file:

[global]
default_ttl = 3600
health_check_interval = 30
serialization = "json"
enable_metrics = true

[services.user_cache]
cache_type = "two-level"  # "l1" | "l2" | "two-level"
ttl = 600

  [services.user_cache.l1]
  max_capacity = 10000
  ttl = 300  # L1 TTL must be <= L2 TTL
  tti = 180
  initial_capacity = 1000

  [services.user_cache.l2]
  mode = "standalone"  # "standalone" | "sentinel" | "cluster"
  connection_string = "redis://127.0.0.1:6379"

  [services.user_cache.two_level]
  write_through = true
  promote_on_hit = true
  enable_batch_write = true
  batch_size = 100
  batch_interval_ms = 50

3. Usage

Using Macros (Recommended)

use oxcache::macros::cached;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct User {
    id: u64,
    name: String,
}

// One-line cache enable
#[cached(service = "user_cache", ttl = 600)]
async fn get_user(id: u64) -> Result<User, String> {
    // Simulate slow database query
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    Ok(User {
        id,
        name: format!("User {}", id),
    })
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize cache (from config file)
    oxcache::init("config.toml").await?;
    
    // First call: execute function logic + cache result (~100ms)
    let user = get_user(1).await?;
    println!("First call: {:?}", user);
    
    // Second call: return directly from cache (~0.1ms)
    let cached_user = get_user(1).await?;
    println!("Cached call: {:?}", cached_user);
    
    Ok(())
}

Manual Client Usage

use oxcache::{get_client, CacheOps};
use serde::{Deserialize, Serialize};

// Cached values are (de)serialized, so they need the serde derives
#[derive(Serialize, Deserialize, Clone, Debug)]
struct MyData {
    value: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    oxcache::init("config.toml").await?;
    
    let client = get_client("user_cache")?;
    
    let my_data = MyData { value: "hello".to_string() };
    let temp_data = MyData { value: "temp".to_string() };
    let shared_data = MyData { value: "shared".to_string() };
    
    // Standard operation: write to both L1 and L2
    client.set("key", &my_data, Some(300)).await?;
    let data: MyData = client.get("key").await?.unwrap();
    println!("Got: {:?}", data);
    
    // Write to L1 only (temporary data)
    client.set_l1_only("temp_key", &temp_data, Some(60)).await?;
    
    // Write to L2 only (shared data)
    client.set_l2_only("shared_key", &shared_data, Some(3600)).await?;
    
    // Delete
    client.delete("key").await?;
    
    Ok(())
}

🎨 Use Cases

Scenario 1: User Information Cache

#[cached(service = "user_cache", ttl = 600)]
async fn get_user_profile(user_id: u64) -> Result<UserProfile, Error> {
    database::query_user(user_id).await
}

Scenario 2: API Response Cache

#[cached(
    service = "api_cache",
    ttl = 300,
    key = "api_{endpoint}_{version}"
)]
async fn fetch_api_data(endpoint: String, version: u32) -> Result<ApiResponse, Error> {
    http_client::get(&format!("/api/{}/{}", endpoint, version)).await
}
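The key template above expands by substituting each {name} placeholder with the corresponding argument's value. A hypothetical helper (not part of oxcache's API) sketches the idea:

```rust
// Hypothetical sketch of key-template expansion, not oxcache's actual code:
// each "{name}" placeholder is replaced by the matching argument's value.
fn render_key(template: &str, args: &[(&str, String)]) -> String {
    let mut key = template.to_string();
    for (name, value) in args {
        // Build the literal "{name}" pattern and substitute the value
        key = key.replace(&format!("{{{}}}", name), value);
    }
    key
}

// render_key("api_{endpoint}_{version}",
//            &[("endpoint", "users".to_string()), ("version", "2".to_string())])
// yields "api_users_2"
```

With this scheme, two calls that differ in any templated argument get distinct cache entries, while repeated calls with the same arguments share one.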

Scenario 3: L1-Only Hot Data Cache

#[cached(service = "session_cache", cache_type = "l1", ttl = 60)]
async fn get_user_session(session_id: String) -> Result<Session, Error> {
    session_store::load(session_id).await
}

๐Ÿ—๏ธ Architecture

┌─────────────────────────────────────────────────────────┐
│                    Application Code                     │
│                   (#[cached] Macro)                     │
└────────────────────────────┬────────────────────────────┘
                             │
                             ↓
┌─────────────────────────────────────────────────────────┐
│                      CacheManager                       │
│           (Service Registry + Health Monitor)           │
└───┬─────────────────────────────────────────────────┬───┘
    │                                                 │
    ↓                                                 ↓
┌───────────────┐                             ┌──────────────┐
│ TwoLevelClient│                             │ L1OnlyClient │
│               │                             │ L2OnlyClient │
└───┬───────┬───┘                             └──────────────┘
    │       │
    ↓       ↓
┌────────┐ ┌──────────────────────────────────────────┐
│   L1   │ │                    L2                    │
│ (Moka) │ │                 (Redis)                  │
│        │ │                                          │
└────────┘ └──────────────────────────────────────────┘

L1: In-process high-speed cache using LRU/TinyLFU eviction strategy
L2: Distributed shared cache supporting Sentinel/Cluster modes
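The read path through this architecture can be sketched with plain maps standing in for Moka and Redis. This is a conceptual illustration of the two-level lookup and the promote_on_hit option, not oxcache's internals:

```rust
use std::collections::HashMap;

// Conceptual two-level read path: check L1 first, fall back to L2, and
// promote L2 hits into L1 when promote_on_hit is enabled. HashMaps stand
// in for Moka (L1) and Redis (L2).
struct TwoLevel {
    l1: HashMap<String, String>,
    l2: HashMap<String, String>,
    promote_on_hit: bool,
}

impl TwoLevel {
    fn get(&mut self, key: &str) -> Option<String> {
        if let Some(v) = self.l1.get(key) {
            return Some(v.clone()); // L1 hit: in-process, nanosecond path
        }
        if let Some(v) = self.l2.get(key).cloned() {
            if self.promote_on_hit {
                // Warm L1 so the next read skips the network round-trip
                self.l1.insert(key.to_string(), v.clone());
            }
            return Some(v); // L2 hit: network round-trip, millisecond path
        }
        None // miss on both levels: caller runs the wrapped fn and caches
    }
}
```

The write path mirrors this: with write_through enabled, a set goes to both levels, so other instances reading through L2 see the new value.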

📊 Performance Benchmarks

Test environment: M1 Pro, 16GB RAM, macOS

Single-thread Latency Test (P99):
├── L1 Cache:  ~50ns
├── L2 Cache:  ~1ms
└── Database:  ~10ms

Throughput Test (batch_size=100):
├── Single Write:  ~10K ops/s
└── Batch Write:   ~50K ops/s
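The batch-write gain above comes from coalescing many small writes into fewer flushes. A std-only sketch of the idea (hypothetical; oxcache's actual batching runs on tokio and flushes to Redis) buffers writes and flushes on either a size or a time trigger:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical batch-write sketch, not oxcache internals: writes queue up
// and are flushed when the buffer reaches batch_size, or when no new write
// arrives within batch_interval, trading a little latency for throughput.
fn spawn_batch_writer(
    rx: mpsc::Receiver<(String, String)>,
    batch_size: usize,
    batch_interval: Duration,
    flush: impl Fn(Vec<(String, String)>) + Send + 'static,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        let mut buf: Vec<(String, String)> = Vec::new();
        loop {
            match rx.recv_timeout(batch_interval) {
                Ok(entry) => {
                    buf.push(entry);
                    if buf.len() >= batch_size {
                        flush(std::mem::take(&mut buf)); // size trigger
                    }
                }
                Err(mpsc::RecvTimeoutError::Timeout) => {
                    if !buf.is_empty() {
                        flush(std::mem::take(&mut buf)); // time trigger
                    }
                }
                Err(mpsc::RecvTimeoutError::Disconnected) => {
                    if !buf.is_empty() {
                        flush(buf); // drain the remainder on shutdown
                    }
                    break;
                }
            }
        }
    })
}
```

Each flush amortizes one round-trip (in oxcache's case, a Redis pipeline) over up to batch_size writes, which is where the roughly 5x throughput difference comes from.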

🛡️ Reliability

  • ✅ Single-flight (prevents cache stampedes)
  • ✅ WAL (Write-Ahead Log) persistence
  • ✅ Automatic degradation on Redis failure
  • ✅ Graceful shutdown mechanism
  • ✅ Health checks and auto-recovery
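Single-flight deserves a concrete picture: when many concurrent callers miss on the same key, only one runs the expensive load and the rest wait for its result. A std-only conceptual sketch, not oxcache's implementation:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, OnceLock};

// Conceptual single-flight: concurrent requests for the same key share one
// computation, so a cache miss triggers a single backing-store load instead
// of a stampede.
struct SingleFlight<V> {
    in_flight: Mutex<HashMap<String, Arc<OnceLock<V>>>>,
}

impl<V: Clone> SingleFlight<V> {
    fn new() -> Self {
        Self { in_flight: Mutex::new(HashMap::new()) }
    }

    fn get_or_compute(&self, key: &str, load: impl FnOnce() -> V) -> V {
        // All callers for `key` receive the same shared cell...
        let cell = self
            .in_flight
            .lock()
            .unwrap()
            .entry(key.to_string())
            .or_insert_with(|| Arc::new(OnceLock::new()))
            .clone();
        // ...and OnceLock guarantees `load` runs at most once; later callers
        // block until the first one has produced the value.
        cell.get_or_init(load).clone()
    }
}
```

A production version would also evict completed cells so values can expire; this sketch keeps only the deduplication mechanism.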

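The WAL bullet can likewise be made concrete: while Redis is unreachable, each write is appended to a durable log, and on recovery the log is replayed against L2. A minimal append-and-replay sketch using a hypothetical text format (not oxcache's actual WAL encoding):

```rust
use std::fs::{File, OpenOptions};
use std::io::{BufRead, BufReader, Write};

// Append one durable record per write that could not reach L2.
fn wal_append(path: &str, key: &str, value: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    writeln!(f, "SET\t{}\t{}", key, value)
}

// Replay the log in order, re-issuing each write via `apply`;
// returns how many records were replayed.
fn wal_replay(path: &str, mut apply: impl FnMut(&str, &str)) -> std::io::Result<usize> {
    let reader = BufReader::new(File::open(path)?);
    let mut replayed = 0;
    for line in reader.lines() {
        let line = line?;
        let mut parts = line.splitn(3, '\t');
        if let (Some("SET"), Some(k), Some(v)) = (parts.next(), parts.next(), parts.next()) {
            apply(k, v); // in oxcache's case: write back to the recovered L2
            replayed += 1;
        }
    }
    Ok(replayed)
}
```

Replaying in append order means the last write to each key wins, matching the state the cache would have reached had Redis stayed up.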
📚 Documentation

🤝 Contributing

Pull Requests and Issues are welcome!

📝 Changelog

See CHANGELOG.md

📄 License

This project is licensed under the MIT License. See the LICENSE file.


If this project helps you, please give it a ⭐ to show your support!

Made with ❤️ by the oxcache Team