# Oxcache

Oxcache is a high-performance, production-grade, two-level caching library for Rust, pairing an L1 in-memory cache (Moka) with an L2 distributed cache (Redis).
## ✨ Key Features
- 🚀 Extreme Performance: L1 nanosecond response (P99 < 100ns), L2 millisecond response (P99 < 5ms)
- 🎯 Zero-Code Changes: Enable caching with a single `#[cached]` macro
- 🔄 Auto Recovery: Automatic degradation on Redis failure, WAL replay on recovery
- 🌐 Multi-Instance Sync: Pub/Sub + version-based invalidation synchronization
- ⚡ Batch Optimization: Intelligent batch writes for significantly improved throughput
- 🛡️ Production Grade: Complete observability, health checks, chaos testing verified
## 📦 Quick Start

### 1. Add Dependency
Add oxcache to your Cargo.toml:

```toml
[dependencies]
oxcache = "0.1.3"
```
Note: `tokio` and `serde` are already included by default. If you need minimal dependencies, use `oxcache = { version = "0.1.3", default-features = false }` and add them manually.

Features: To use the `#[cached]` macro, enable the `macros` feature: `oxcache = { version = "0.1.3", features = ["macros"] }`
### Feature Tiers

```toml
# Full features (recommended)
oxcache = { version = "0.1.3", features = ["full"] }

# Core functionality only
oxcache = { version = "0.1.3", features = ["core"] }

# Minimal - L1 cache only
oxcache = { version = "0.1.3", features = ["minimal"] }

# Custom selection
oxcache = { version = "0.1.3", features = ["core", "macros", "metrics"] }

# Development with specific features
oxcache = { version = "0.1.3", features = [
    "l1-moka",     # L1 cache (Moka)
    "l2-redis",    # L2 cache (Redis)
    "macros",      # #[cached] macro
    "batch-write", # Optimized batch writing
    "metrics",     # Basic metrics
] }
```
| Tier | Features | Description |
|---|---|---|
| minimal | l1-moka, serialization, metrics | L1 cache only |
| core | minimal + l2-redis | L1 + L2 cache |
| full | core + all advanced features | Complete functionality |
Advanced Features (included in full):
- `macros` - `#[cached]` attribute macro
- `batch-write` - Optimized batch writing
- `wal-recovery` - Write-ahead log for durability
- `bloom-filter` - Cache penetration protection
- `rate-limiting` - DoS protection
- `database` - Database integration
- `cli` - Command-line interface
- `full-metrics` - OpenTelemetry integration
### 2. Configuration
Create a config.toml file:
Important: To initialize from a config file, you need to enable both the `config-toml` and `confers` features: `oxcache = { version = "0.1.3", features = ["config-toml", "confers"] }`
```toml
# Key names below are illustrative; consult the oxcache documentation
# for the exact configuration schema.

[global]
default_ttl = 3600
health_check_interval = 30
serialization = "json"
enabled = true

# Two-level cache (L1 + L2)
[caches.users]
mode = "two-level" # "l1" | "l2" | "two-level"
ttl = 600

[caches.users.l1]
max_capacity = 10000
ttl = 300 # L1 TTL must be <= L2 TTL
tti = 180
initial_capacity = 1000

[caches.users.l2]
mode = "standalone" # "standalone" | "sentinel" | "cluster"
url = "redis://127.0.0.1:6379"

[caches.users.batch]
enabled = true
wal = true
bloom_filter = true
batch_size = 100
flush_interval_ms = 50

# L1-only cache (memory only)
[caches.hot]
mode = "l1"
ttl = 300

[caches.hot.l1]
max_capacity = 5000
ttl = 300
tti = 120

# L2-only cache (Redis only)
[caches.shared]
mode = "l2"
ttl = 7200

[caches.shared.l2]
mode = "standalone"
url = "redis://127.0.0.1:6379"
```
### 3. Usage

#### Using Macros (Recommended)

The snippet below is a sketch: the import path and attribute arguments are illustrative, not verified against the crate's API.

```rust
use oxcache::cached; // requires the `macros` feature

// One-line cache enable (attribute arguments are illustrative)
#[cached(ttl = 600)]
async fn get_user(id: u64) -> Option<String> {
    load_user_from_db(id).await // runs only on a cache miss
}

async fn load_user_from_db(_id: u64) -> Option<String> {
    None // placeholder for a real database query
}
```
#### Manual Client Usage

A sketch of direct client use; `TwoLevelClient` appears in the architecture diagram below, but the method names here are assumptions:

```rust
use oxcache::TwoLevelClient; // illustrative import path

async fn example(client: &TwoLevelClient) {
    // Hypothetical get/set calls, for illustration only.
    let _ = client.set("user:1", "alice").await;
    let _value = client.get("user:1").await;
}
```
## 🎨 Use Cases

### Scenario 1: User Information Cache

```rust
// Sketch: cache per-user lookups; attribute arguments are illustrative.
#[cached(name = "users", ttl = 600)]
async fn get_user_profile(id: u64) -> Option<String> { None /* DB query */ }
```

### Scenario 2: API Response Cache

```rust
// Sketch: cache an upstream API response with a short TTL.
#[cached(name = "api", ttl = 60)]
async fn fetch_rates(base: &str) -> Option<String> { None /* HTTP call */ }
```

### Scenario 3: L1-Only Hot Data Cache

```rust
// Sketch: memory-only cache for hot data (mode = "l1" in the config).
#[cached(name = "hot")]
async fn get_hot_item(key: &str) -> Option<String> { None /* compute */ }
```
## 🏗️ Architecture

```mermaid
graph TD
    A["Application Code<br/>#[cached] Macro"] --> B[Cache Manager<br/>Service Registry + Health Monitor]
    B --> C[TwoLevelClient]
    B --> D[L1OnlyClient]
    B --> E[L2OnlyClient]
    C --> F[L1 Cache<br/>Moka]
    C --> G[L2 Cache<br/>Redis]
    D --> F
    E --> G
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
    style D fill:#fff3e0
    style E fill:#fce4ec
    style F fill:#f1f8e9
    style G fill:#fdf2e9
```
L1: In-process high-speed cache using LRU/TinyLFU eviction strategy
L2: Distributed shared cache supporting Sentinel/Cluster modes
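The two-level read path can be sketched conceptually: check L1 first, fall back to L2, and promote L2 hits into L1. Plain `HashMap`s stand in for Moka and Redis here; none of these names come from the crate.

```rust
use std::collections::HashMap;

/// Conceptual two-level read path (illustrative, not oxcache's code).
struct TwoLevel {
    l1: HashMap<String, String>, // stands in for Moka
    l2: HashMap<String, String>, // stands in for Redis
}

impl TwoLevel {
    fn get(&mut self, key: &str) -> Option<String> {
        if let Some(v) = self.l1.get(key) {
            return Some(v.clone()); // L1 hit: the nanosecond path
        }
        if let Some(v) = self.l2.get(key).cloned() {
            self.l1.insert(key.to_string(), v.clone()); // promote to L1
            return Some(v);
        }
        None // miss on both levels; the caller loads from the database
    }
}
```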
## 📊 Performance Benchmarks
Test environment: M1 Pro, 16GB RAM, macOS, Redis 7.0
Note: Performance varies based on hardware, network conditions, and data size.
```mermaid
xychart-beta
    title "Single-thread Latency Test (P99)"
    x-axis ["L1 Cache", "L2 Cache", "Database"]
    y-axis "Latency (ms)" 0 --> 60
    bar [0.05, 3, 30]
    line [0.05, 3, 30]
```

```mermaid
xychart-beta
    title "Throughput Test (batch_size=100)"
    x-axis ["L1 Operations", "L2 Single Write", "L2 Batch Write"]
    y-axis "Ops/sec (thousands)" 0 --> 8000
    bar [7500, 75, 350]
```
Performance Summary:
- L1 Cache: 50-100ns (in-memory)
- L2 Cache: 1-5ms (Redis, localhost)
- Database: 10-50ms (typical SQL query)
- L1 Operations: 5-10M ops/sec
- L2 Single Write: 50-100K ops/sec
- L2 Batch Write: 200-500K ops/sec
## 🛡️ Reliability
- ✅ Single-Flight (prevent cache stampede)
- ✅ WAL (Write-Ahead Log) persistence
- ✅ Automatic degradation on Redis failure
- ✅ Graceful shutdown mechanism
- ✅ Health checks and auto-recovery
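Single-flight deserves a closer look: when many callers miss the same key at once, only one should hit the backing store. A minimal std-only sketch of the idea (not oxcache's implementation, which is async):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, OnceLock};

/// Minimal single-flight sketch: concurrent callers for the same key
/// share one computation instead of stampeding the backing store.
struct SingleFlight {
    inflight: Mutex<HashMap<String, Arc<OnceLock<String>>>>,
}

impl SingleFlight {
    fn new() -> Self {
        Self { inflight: Mutex::new(HashMap::new()) }
    }

    fn get_or_load(&self, key: &str, load: impl FnOnce() -> String) -> String {
        // Grab (or create) the per-key cell, releasing the map lock
        // before running the possibly slow `load`.
        let cell = {
            let mut map = self.inflight.lock().unwrap();
            map.entry(key.to_string())
                .or_insert_with(|| Arc::new(OnceLock::new()))
                .clone()
        };
        // Only the first caller for this key runs `load`; the rest reuse it.
        cell.get_or_init(load).clone()
    }
}
```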
## 🔐 Security
Oxcache implements multiple security measures to protect against common attacks:
### Input Validation
All user inputs are validated before being passed to Redis:
- Key Validation: Keys cannot be empty, exceed 512KB, or contain dangerous characters (`\r`, `\n`, `\0`) that could enable Redis protocol injection attacks.
- Lua Script Validation: Scripts are validated for:
  - Maximum length of 10KB
  - Maximum of 100 keys
  - Blocking dangerous commands: `FLUSHALL`, `FLUSHDB`, `KEYS`, `SHUTDOWN`, `DEBUG`, `CONFIG`, `SAVE`, `BGSAVE`, `MONITOR`
- SCAN Pattern Validation: Patterns are validated to prevent ReDoS attacks:
  - Maximum length of 256 characters
  - Maximum of 10 wildcard (`*`) characters
  - Count parameter clamped to safe range (1-1000)
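A simplified sketch of the kinds of checks described above (illustrative only; not the crate's actual `validate_redis_key()` implementation):

```rust
/// Reject keys that are empty, oversized, or carry characters usable
/// for Redis protocol injection.
fn validate_key(key: &str) -> Result<(), String> {
    const MAX_KEY_BYTES: usize = 512 * 1024; // 512KB limit from the docs
    if key.is_empty() {
        return Err("key must not be empty".into());
    }
    if key.len() > MAX_KEY_BYTES {
        return Err("key exceeds 512KB".into());
    }
    // CR, LF and NUL could be abused for protocol injection.
    if key.bytes().any(|b| matches!(b, b'\r' | b'\n' | b'\0')) {
        return Err("key contains forbidden control characters".into());
    }
    Ok(())
}

/// Bound SCAN pattern length and wildcard count to prevent ReDoS.
fn validate_scan_pattern(pattern: &str) -> Result<(), String> {
    if pattern.len() > 256 {
        return Err("pattern exceeds 256 characters".into());
    }
    if pattern.bytes().filter(|&b| b == b'*').count() > 10 {
        return Err("too many wildcards".into());
    }
    Ok(())
}
```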
### Timeout Protection
Long-running operations have timeout protection:
- Lua Scripts: 30-second timeout prevents Redis blocking
- SCAN Operations: 30-second timeout prevents hanging scans
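The pattern is a bounded wait around a potentially slow operation. A generic, thread-based sketch of the idea follows; oxcache's internals are async and will differ from this:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run `work` on a helper thread and give up after `timeout`.
/// Returns None if the deadline passes first.
fn run_with_timeout<T: Send + 'static>(
    timeout: Duration,
    work: impl FnOnce() -> T + Send + 'static,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(work()); // ignore the error if the caller gave up
    });
    rx.recv_timeout(timeout).ok()
}
```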
### Secure Lock Values
Distributed locks use cryptographically secure UUID v4 values automatically generated by the library, eliminating the risk of lock value prediction attacks.
### Connection String Redaction

Passwords in connection strings are redacted in logs by default to prevent credential leakage. Use `normalize_connection_string_with_redaction()` for secure logging.
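The redaction behavior can be sketched as follows (illustrative; the crate's `normalize_connection_string_with_redaction()` may normalize more than this):

```rust
/// Mask the password in a "scheme://user:password@host..." URL.
/// Strings without credentials pass through unchanged.
fn redact_password(url: &str) -> String {
    if let Some(scheme_end) = url.find("://") {
        let rest = &url[scheme_end + 3..];
        if let Some(at) = rest.find('@') {
            let creds = &rest[..at];
            if let Some(colon) = creds.find(':') {
                return format!(
                    "{}://{}:***@{}",
                    &url[..scheme_end],
                    &creds[..colon],
                    &rest[at + 1..]
                );
            }
        }
    }
    url.to_string()
}
```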
### Best Practices

- Use the library's key validation - Don't bypass the `validate_redis_key()` function
- Avoid custom Lua scripts - Use the built-in cache operations when possible
- Set appropriate timeouts - Don't disable the 30-second default timeout
- Rotate lock values - The library handles this automatically
- Never log connection strings - Use the redaction utility for debugging
For more details, see Security Documentation.
## 📚 Documentation

## 🤝 Contributing

Pull Requests and Issues are welcome!

## 📝 Changelog

See CHANGELOG.md

## 📄 License

This project is licensed under the MIT License. See the LICENSE file.
If this project helps you, please give a ⭐ Star to show support!
Made with ❤️ by Kirky.X