Oxcache is a high-performance, production-grade two-level caching library for Rust, providing an L1 (Moka in-memory) + L2 (Redis distributed) cache architecture.
## ✨ Key Features
- 🚀 Extreme Performance: L1 nanosecond response (P99 < 100ns), L2 millisecond response (P99 < 5ms)
- 🎯 Zero-Code Changes: Enable caching with a single `#[cached]` macro
- 🔄 Auto Recovery: Automatic degradation on Redis failure, WAL replay on recovery
- 🌐 Multi-Instance Sync: Pub/Sub + version-based invalidation synchronization (see the sketch after this list)
- ⚡ Batch Optimization: Intelligent batch writes for significantly improved throughput
- 🛡️ Production Grade: Complete observability, health checks, chaos testing verified
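
As a rough illustration of the multi-instance sync idea, the sketch below uses the `redis` crate directly: each instance subscribes to a shared invalidation channel and evicts its local L1 entry when a peer publishes a write. The channel name, payload format, and `HashMap` stand-in are assumptions for the example; oxcache's actual protocol (including version numbers) is internal to the library.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

fn main() -> redis::RedisResult<()> {
    // Stand-in for the local L1 cache (oxcache uses Moka here).
    let l1: Arc<Mutex<HashMap<String, String>>> = Arc::new(Mutex::new(HashMap::new()));

    // Each instance listens on a shared invalidation channel...
    let l1_sub = Arc::clone(&l1);
    std::thread::spawn(move || -> redis::RedisResult<()> {
        let mut con = redis::Client::open("redis://127.0.0.1:6379")?.get_connection()?;
        let mut pubsub = con.as_pubsub();
        pubsub.subscribe("cache:invalidate")?; // channel name is an assumption
        loop {
            let msg = pubsub.get_message()?;
            let key: String = msg.get_payload()?;
            l1_sub.lock().unwrap().remove(&key); // drop the stale local copy
        }
    });

    // ...and publishes the affected key whenever it writes, so peers evict it.
    let mut con = redis::Client::open("redis://127.0.0.1:6379")?.get_connection()?;
    redis::cmd("PUBLISH")
        .arg("cache:invalidate")
        .arg("user:42")
        .query::<i64>(&mut con)?;
    Ok(())
}
```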
## 📦 Quick Start
### 1. Add Dependency
Add `oxcache` to your `Cargo.toml`:
```toml
[dependencies]
oxcache = "0.1.2"
```
> **Note**: `tokio` and `serde` are already included by default. If you need minimal dependencies, you can use `oxcache = { version = "0.1.2", default-features = false }` and add them manually.
> **Features**: To use the `#[cached]` macro, enable the `macros` feature: `oxcache = { version = "0.1.2", features = ["macros"] }`
#### Feature Tiers
```toml
# Full features (recommended)
oxcache = { version = "0.1.2", features = ["full"] }

# Core functionality (L1 + L2 cache)
oxcache = { version = "0.1.2", features = ["core"] }

# Minimal (L1 cache only)
oxcache = { version = "0.1.2", features = ["minimal"] }

# Custom selection
oxcache = { version = "0.1.2", features = ["core", "macros", "metrics"] }
```
#### Available Features
| Tier | Features | Description |
|---|---|---|
| `minimal` | `l1-moka`, `serialization`, `metrics` | L1 cache only |
| `core` | `minimal` + `l2-redis` | L1 + L2 cache |
| `full` | `core` + all advanced features | Complete functionality |
**Advanced Features** (included in `full`):

- `macros` - `#[cached]` attribute macro
- `batch-write` - Optimized batch writing
- `wal-recovery` - Write-ahead log for durability
- `bloom-filter` - Cache penetration protection
- `rate-limiting` - DoS protection
- `database` - Database integration
- `cli` - Command-line interface
- `full-metrics` - OpenTelemetry integration
### 2. Configuration
Create a `config.toml` file:
```toml
# NOTE: key names below are illustrative reconstructions; consult the crate
# documentation for the exact configuration schema.

[global]
default_ttl = 3600
timeout = 30
serialization = "json"
metrics = true

# Two-level cache (L1 + L2)
[services.user_cache]
mode = "two-level"   # "l1" | "l2" | "two-level"
ttl = 600

[services.user_cache.l1]
max_capacity = 10000
ttl = 300            # L1 TTL must be <= L2 TTL (see the sketch below)
tti = 180
batch_size = 1000

[services.user_cache.l2]
mode = "standalone"  # "standalone" | "sentinel" | "cluster"
url = "redis://127.0.0.1:6379"

[services.user_cache.features]
single_flight = true
wal = true
batch_write = true
batch_max_size = 100
batch_flush_ms = 50

# L1-only cache (memory only)
[services.hot_data]
mode = "l1"
ttl = 300

[services.hot_data.l1]
max_capacity = 5000
ttl = 300
tti = 120

# L2-only cache (Redis only)
[services.session_cache]
mode = "l2"
ttl = 7200

[services.session_cache.l2]
mode = "standalone"
url = "redis://127.0.0.1:6379"
```
### 3. Usage
#### Using Macros (Recommended)
```rust
// Reconstructed example: import paths and macro arguments are assumptions,
// not necessarily the exact oxcache API.
use oxcache::cached;
use serde::{Deserialize, Serialize};

#[derive(Clone, Serialize, Deserialize)]
struct User { id: u64, name: String }

// One-line cache enable: the body only runs on a cache miss.
#[cached(service = "user_cache", ttl = 600)]
async fn get_user(id: u64) -> anyhow::Result<User> {
    load_user_from_db(id).await
}

// Stand-in for a real database query.
async fn load_user_from_db(id: u64) -> anyhow::Result<User> {
    Ok(User { id, name: "Alice".into() })
}
```
#### Manual Client Usage
```rust
// Reconstructed example: the constructor and method signatures are assumptions —
// consult the crate documentation for the exact client API.
use oxcache::TwoLevelClient;

async fn example() -> anyhow::Result<()> {
    // Initialize from the config.toml shown above (hypothetical constructor).
    let client = TwoLevelClient::from_config("config.toml").await?;

    // Explicit get/set instead of the #[cached] macro.
    client.set("user:42", &"Alice".to_string(), None).await?;
    let name: Option<String> = client.get("user:42").await?;
    println!("{name:?}");
    Ok(())
}
```
## 🎨 Use Cases
### Scenario 1: User Information Cache
```rust
// Illustrative sketch; attribute arguments are assumptions, and db::find_user_by_id
// is a placeholder. Maps to the "two-level" service defined in config.toml above.
#[cached(service = "user_cache", ttl = 600)]
async fn get_user_info(user_id: u64) -> anyhow::Result<User> {
    db::find_user_by_id(user_id).await // only runs on a cache miss
}
```
### Scenario 2: API Response Cache
```rust
// Illustrative sketch; the key-template syntax is an assumption, and the
// upstream fetch is a placeholder HTTP call.
#[cached(service = "api_cache", ttl = 60, key = "weather:{city}")]
async fn get_weather(city: String) -> anyhow::Result<String> {
    fetch_weather_from_upstream(&city).await
}
```
### Scenario 3: L1-Only Hot Data Cache
```rust
// Illustrative sketch; uses the L1-only "hot_data" service from config.toml
// above, and the provider call is a placeholder for an expensive refresh.
#[cached(service = "hot_data")]
async fn get_exchange_rates() -> anyhow::Result<Vec<(String, f64)>> {
    load_rates_from_provider().await
}
```
## 🏗️ Architecture
```mermaid
graph TD
    A["Application Code<br/>#[cached] Macro"] --> B["Cache Manager<br/>Service Registry + Health Monitor"]
    B --> C[TwoLevelClient]
    B --> D[L1OnlyClient]
    B --> E[L2OnlyClient]
    C --> F["L1 Cache<br/>Moka"]
    C --> G["L2 Cache<br/>Redis"]
    D --> F
    E --> G
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
    style D fill:#fff3e0
    style E fill:#fce4ec
    style F fill:#f1f8e9
    style G fill:#fdf2e9
```
- **L1**: In-process high-speed cache using LRU/TinyLFU eviction strategy
- **L2**: Distributed shared cache supporting Sentinel/Cluster modes
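
The read path can be summarized as: check L1, fall back to L2 (back-filling L1 on a hit), and only then load from the source of record. The sketch below illustrates that flow with plain `HashMap`s standing in for Moka and Redis; it is not oxcache's actual implementation.

```rust
use std::collections::HashMap;

struct TwoLevel {
    l1: HashMap<String, String>, // stands in for Moka (in-process)
    l2: HashMap<String, String>, // stands in for Redis (shared)
}

impl TwoLevel {
    fn get_or_load(&mut self, key: &str, load: impl FnOnce() -> String) -> String {
        if let Some(v) = self.l1.get(key) {
            return v.clone(); // L1 hit: in-process, nanoseconds
        }
        if let Some(v) = self.l2.get(key).cloned() {
            self.l1.insert(key.to_owned(), v.clone()); // back-fill L1 for next time
            return v; // L2 hit: one network round trip
        }
        let v = load(); // full miss: hit the source of record
        self.l2.insert(key.to_owned(), v.clone()); // write L2 (longer TTL)
        self.l1.insert(key.to_owned(), v.clone()); // then L1 (shorter TTL)
        v
    }
}

fn main() {
    let mut cache = TwoLevel { l1: HashMap::new(), l2: HashMap::new() };
    assert_eq!(cache.get_or_load("user:42", || "Alice".to_owned()), "Alice");
    assert_eq!(cache.get_or_load("user:42", || unreachable!()), "Alice"); // L1 hit
}
```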
## 📊 Performance Benchmarks
Test environment: M1 Pro, 16GB RAM, macOS, Redis 7.0
> **Note**: Performance varies with hardware, network conditions, and data size.
```mermaid
xychart-beta
    title "Single-thread Latency Test (P99)"
    x-axis ["L1 Cache", "L2 Cache", "Database"]
    y-axis "Latency (ms)" 0 --> 60
    bar [0.05, 3, 30]
    line [0.05, 3, 30]
```
```mermaid
xychart-beta
    title "Throughput Test (batch_size=100)"
    x-axis ["L1 Operations", "L2 Single Write", "L2 Batch Write"]
    y-axis "Throughput (K ops/sec)" 0 --> 8000
    bar [7500, 75, 350]
```
**Performance Summary**:
- L1 Cache: 50-100ns (in-memory)
- L2 Cache: 1-5ms (Redis, localhost)
- Database: 10-50ms (typical SQL query)
- L1 Operations: 5-10M ops/sec
- L2 Single Write: 50-100K ops/sec
- L2 Batch Write: 200-500K ops/sec (see the batching sketch below)
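
The gap between single and batch writes comes from round trips: grouping N commands into one pipelined request amortizes network latency across the batch. A minimal sketch of that idea using the `redis` crate directly (oxcache performs this batching internally):

```rust
// Pipeline N SETEX commands into a single round trip instead of N.
fn batch_write(items: &[(String, String)]) -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1:6379")?;
    let mut con = client.get_connection()?;

    let mut pipe = redis::pipe();
    for (key, value) in items {
        pipe.set_ex(key, value, 600).ignore(); // queue the command, skip its reply
    }
    pipe.query::<()>(&mut con) // one network round trip for the whole batch
}

fn main() -> redis::RedisResult<()> {
    let items: Vec<(String, String)> = (0..100)
        .map(|i| (format!("key:{i}"), format!("value:{i}")))
        .collect();
    batch_write(&items)
}
```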
## 🛡️ Reliability
- ✅ Single-Flight (prevents cache stampede; see the sketch after this list)
- ✅ WAL (Write-Ahead Log) persistence
- ✅ Automatic degradation on Redis failure
- ✅ Graceful shutdown mechanism
- ✅ Health checks and auto-recovery
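
As a rough illustration of the single-flight idea, the sketch below (assuming `tokio`; not oxcache's internals) lets concurrent callers for the same key share one load instead of all hitting the database:

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{Mutex, OnceCell};

type Inflight = Arc<Mutex<HashMap<String, Arc<OnceCell<String>>>>>;

async fn single_flight_get(inflight: Inflight, key: String) -> String {
    // Grab (or create) the per-key cell while holding the map lock only briefly.
    let cell = {
        let mut map = inflight.lock().await;
        map.entry(key.clone())
            .or_insert_with(|| Arc::new(OnceCell::new()))
            .clone()
    };

    // Only the first caller runs the loader; everyone else awaits the same result.
    let loader_key = key.clone();
    let value = cell
        .get_or_init(|| async move {
            tokio::time::sleep(std::time::Duration::from_millis(50)).await; // simulated DB query
            format!("value-for-{loader_key}")
        })
        .await
        .clone();

    // Forget the cell so a later cache expiry triggers a fresh load.
    inflight.lock().await.remove(&key);
    value
}

#[tokio::main]
async fn main() {
    let inflight: Inflight = Arc::new(Mutex::new(HashMap::new()));
    // Ten concurrent callers, but the "database" is queried only once.
    let tasks: Vec<_> = (0..10)
        .map(|_| tokio::spawn(single_flight_get(inflight.clone(), "user:42".into())))
        .collect();
    for t in tasks {
        println!("{}", t.await.unwrap());
    }
}
```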
## 📚 Documentation
## 🤝 Contributing
Pull Requests and Issues are welcome!
## 📝 Changelog
See [CHANGELOG.md](CHANGELOG.md).
## 📄 License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file.
If this project helps you, please give it a ⭐ Star to show your support!
Made with ❤️ by Kirky.X