# Oxcache

Oxcache is a high-performance, production-grade two-level caching library for Rust. It combines an L1 in-memory cache (Moka) with an L2 distributed cache (Redis).
## Key Features

- Extreme Performance: L1 nanosecond-level responses (P99 < 100 ns), L2 millisecond-level responses (P99 < 5 ms)
- Zero-Code Changes: Enable caching with a single `#[cached]` macro
- Auto Recovery: Automatic degradation on Redis failure, WAL replay on recovery
- Multi-Instance Sync: Pub/Sub + version-based invalidation synchronization
- Batch Optimization: Intelligent batch writes for significantly higher throughput
- Production Grade: Full observability, health checks, verified by chaos testing
## Quick Start

### 1. Add Dependency

Add oxcache to your `Cargo.toml`:

```toml
[dependencies]
oxcache = "0.1"
```

Note: `tokio` and `serde` are already included by default. If you need minimal dependencies, use `oxcache = { version = "0.1", default-features = false }` and add them manually.
### 2. Configuration

Create a `config.toml` file:

```toml
[]
= 3600
= 30
= "json"
= true

[]
= "two-level" # "l1" | "l2" | "two-level"
= 600

[]
= 10000
= 300 # L1 TTL must be <= L2 TTL
= 180
= 1000

[]
= "standalone" # "standalone" | "sentinel" | "cluster"
= "redis://127.0.0.1:6379"

[]
= true
= true
= true
= 100
= 50
```
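As a rough sketch of consuming such a file, the `toml` and `serde` crates can deserialize it, and a startup check can enforce the TTL rule noted above. Every struct and field name below (`AppConfig`, `mode`, `ttl_secs`, `max_capacity`, `url`) is an illustrative stand-in, not Oxcache's actual configuration schema.

```rust
// Hypothetical config structs -- field names are illustrative, not Oxcache's schema.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct AppConfig {
    cache: CacheSection,
    l1: L1Section,
    l2: L2Section,
}

#[derive(Debug, Deserialize)]
struct CacheSection {
    mode: String,  // "l1" | "l2" | "two-level"
    ttl_secs: u64, // L2 TTL
}

#[derive(Debug, Deserialize)]
struct L1Section {
    max_capacity: u64,
    ttl_secs: u64, // must be <= the L2 TTL
}

#[derive(Debug, Deserialize)]
struct L2Section {
    mode: String, // "standalone" | "sentinel" | "cluster"
    url: String,
}

fn load_config(path: &str) -> anyhow::Result<AppConfig> {
    let raw = std::fs::read_to_string(path)?;
    let cfg: AppConfig = toml::from_str(&raw)?;
    // Enforce the invariant called out in the sample config: L1 TTL <= L2 TTL.
    anyhow::ensure!(
        cfg.l1.ttl_secs <= cfg.cache.ttl_secs,
        "l1 ttl must be <= the L2 ttl"
    );
    Ok(cfg)
}
```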
### 3. Usage

#### Using Macros (Recommended)
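Below is a minimal sketch of the macro-based flow, assuming the `#[cached]` attribute is exported from the crate root and accepts `ttl` and `key` arguments; the exact attribute syntax may differ, so treat this as illustrative rather than the definitive API.

```rust
use oxcache::cached; // assumes the attribute macro is exported at the crate root
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
struct Order {
    id: u64,
    total_cents: i64,
}

// One-line cache enable: the attribute checks L1, then L2, and only on a
// miss runs the body, writing the result back to both levels.
// `ttl` and `key` are hypothetical attribute arguments.
#[cached(ttl = 600, key = "order:{id}")]
async fn get_order(id: u64) -> anyhow::Result<Order> {
    load_order_from_db(id).await
}

// Stand-in for the real data source.
async fn load_order_from_db(id: u64) -> anyhow::Result<Order> {
    Ok(Order { id, total_cents: 4_200 })
}
```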
#### Manual Client Usage
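A rough sketch of driving the cache by hand, reusing the `CacheManager` and `TwoLevelClient` names from the architecture section; the constructor and the `get`/`set` method signatures here are assumptions, so consult the crate documentation for the real builder API.

```rust
use std::time::Duration;

use oxcache::{CacheManager, TwoLevelClient}; // names from the architecture diagram; paths assumed

async fn manual_usage() -> anyhow::Result<()> {
    // Build the manager from config.toml and ask it for a two-level client.
    // Both calls are hypothetical; check the crate docs for the real builder.
    let manager = CacheManager::from_config("config.toml").await?;
    let client: TwoLevelClient = manager.two_level("orders")?;

    // Typical read-through flow: try the cache, fall back to the source, write back.
    let key = "order:42";
    if let Some(cached) = client.get::<String>(key).await? {
        println!("hit: {cached}");
    } else {
        let value = "loaded-from-db".to_string();
        client.set(key, &value, Duration::from_secs(600)).await?;
    }
    Ok(())
}
```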
## Use Cases

### Scenario 1: User Information Cache
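A sketch of the user-profile pattern, again assuming hypothetical `ttl`/`key` attribute arguments: user data changes rarely, so it gets a long TTL, with L1 serving hot users at in-process speed and L2 sharing them across instances.

```rust
use oxcache::cached;
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
struct UserInfo {
    id: u64,
    name: String,
    avatar_url: String,
}

// `ttl` and `key` are hypothetical attribute arguments.
#[cached(ttl = 3600, key = "user:info:{user_id}")]
async fn get_user_info(user_id: u64) -> anyhow::Result<UserInfo> {
    // Stand-in for the real repository/database call.
    Ok(UserInfo {
        id: user_id,
        name: "Alice".into(),
        avatar_url: format!("https://example.com/avatars/{user_id}.png"),
    })
}
```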
### Scenario 2: API Response Cache
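A sketch of caching an upstream API response, with `reqwest` standing in as the HTTP client and the same hypothetical attribute arguments; a short TTL keeps the data reasonably fresh while absorbing request bursts.

```rust
use oxcache::cached;

// Cache the raw body of an expensive upstream API call for 60 seconds.
// `ttl` and `key` are hypothetical attribute arguments; `reqwest` and the
// example URL are used purely for illustration.
#[cached(ttl = 60, key = "api:weather:{city}")]
async fn fetch_weather(city: String) -> anyhow::Result<String> {
    let url = format!("https://api.example.com/weather?city={city}");
    let body = reqwest::get(url).await?.text().await?;
    Ok(body)
}
```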
### Scenario 3: L1-Only Hot Data Cache
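For small, hot, recomputable data (feature flags, counters, rendered fragments), the `l1` mode skips Redis entirely. The sketch below reuses the `L1OnlyClient` name from the architecture diagram; the `l1_only` accessor and its methods are assumptions, not confirmed API.

```rust
use std::time::Duration;

use oxcache::{CacheManager, L1OnlyClient}; // names from the architecture diagram; paths assumed

async fn hot_flags(manager: &CacheManager) -> anyhow::Result<bool> {
    // Hypothetical accessor for an L1-only (in-process, Moka-backed) namespace.
    let flags: L1OnlyClient = manager.l1_only("feature-flags")?;

    // Hot data that every request touches: keep it in-process, refresh every 30 s.
    if let Some(enabled) = flags.get::<bool>("new-checkout").await? {
        return Ok(enabled);
    }
    let enabled = load_flag_from_source("new-checkout").await?;
    flags.set("new-checkout", &enabled, Duration::from_secs(30)).await?;
    Ok(enabled)
}

// Stand-in for the real flag source.
async fn load_flag_from_source(_name: &str) -> anyhow::Result<bool> {
    Ok(true)
}
```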
## Architecture

```
┌─────────────────────────────────────────────┐
│               Application Code              │
│              (#[cached] Macro)              │
└──────────────────────┬──────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────┐
│                 CacheManager                │
│     (Service Registry + Health Monitor)     │
└───────┬─────────────────────────────┬───────┘
        │                             │
        ▼                             ▼
┌────────────────┐           ┌────────────────┐
│ TwoLevelClient │           │  L1OnlyClient  │
│                │           │  L2OnlyClient  │
└───┬────────┬───┘           └────────────────┘
    │        │
    ▼        ▼
┌────────┐ ┌──────────────────────────────────┐
│   L1   │ │                 L2               │
│ (Moka) │ │              (Redis)             │
└────────┘ └──────────────────────────────────┘
```
- L1: In-process high-speed cache using an LRU/TinyLFU eviction strategy
- L2: Distributed shared cache supporting Sentinel/Cluster modes
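For reference, the L1 settings from the sample config map directly onto Moka's builder. This is plain `moka` 0.12+ API rather than Oxcache's wrapper, shown only to make the capacity/TTL/TTI knobs concrete:

```rust
use std::time::Duration;

use moka::future::Cache;

#[tokio::main]
async fn main() {
    // The L1 layer is a Moka cache; these settings mirror the sample config
    // above (capacity 10_000, TTL 300 s, TTI 180 s).
    let l1: Cache<String, String> = Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(300)) // must stay <= the L2 TTL
        .time_to_idle(Duration::from_secs(180))
        .build();

    l1.insert("user:1".to_string(), "alice".to_string()).await;
    assert_eq!(l1.get("user:1").await, Some("alice".to_string()));
}
```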
## Performance Benchmarks

Test environment: M1 Pro, 16 GB RAM, macOS

```
Single-thread Latency Test (P99):
├── L1 Cache: ~50 ns
├── L2 Cache: ~1 ms
└── Database: ~10 ms

Throughput Test (batch_size = 100):
├── Single Write: ~10K ops/s
└── Batch Write:  ~50K ops/s
```
## Reliability

- Single-Flight request coalescing to prevent cache stampede (see the sketch below)
- WAL (Write-Ahead Log) persistence
- Automatic degradation on Redis failure
- Graceful shutdown mechanism
- Health checks and auto-recovery
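Single-flight means that concurrent misses on the same key trigger exactly one load. The idea can be illustrated with Moka's `get_with`, which coalesces concurrent initializers for the same key; this is plain `moka` API, not Oxcache's internal mechanism.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<String, String> = Cache::new(10_000);
    let loads = Arc::new(AtomicUsize::new(0));

    // 100 concurrent misses for the same key...
    let tasks: Vec<_> = (0..100)
        .map(|_| {
            let cache = cache.clone();
            let loads = Arc::clone(&loads);
            tokio::spawn(async move {
                cache
                    .get_with("user:1".to_string(), async move {
                        // ...but get_with coalesces them: one task runs this
                        // loader while the rest wait for its result.
                        loads.fetch_add(1, Ordering::SeqCst);
                        tokio::time::sleep(Duration::from_millis(50)).await;
                        "expensive value".to_string()
                    })
                    .await
            })
        })
        .collect();

    for t in tasks {
        t.await.unwrap();
    }
    println!("loader ran {} time(s)", loads.load(Ordering::SeqCst));
}
```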
## Documentation
## Contributing

Pull Requests and Issues are welcome!
## Changelog

See CHANGELOG.md.
## License

This project is licensed under the MIT License. See the LICENSE file.

If this project helps you, please give it a ⭐ to show your support!

Made with ❤️ by the oxcache Team