<div align="center">
# 🚀 Oxcache
[![CI](https://github.com/Kirky-X/oxcache/actions/workflows/ci.yml/badge.svg)](https://github.com/Kirky-X/oxcache/actions/workflows/ci.yml)
[![Crates.io](https://img.shields.io/crates/v/oxcache.svg)](https://crates.io/crates/oxcache)
[![Documentation](https://docs.rs/oxcache/badge.svg)](https://docs.rs/oxcache)
[![Downloads](https://img.shields.io/crates/d/oxcache.svg)](https://crates.io/crates/oxcache)
[![Coverage](https://codecov.io/gh/Kirky-X/oxcache/branch/main/graph/badge.svg)](https://codecov.io/gh/Kirky-X/oxcache)
[![Dependencies](https://deps.rs/repo/github/Kirky-X/oxcache/status.svg)](https://deps.rs/repo/github/Kirky-X/oxcache)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/Kirky-X/oxcache/blob/main/LICENSE)
[![Rust](https://img.shields.io/badge/rust-stable-orange.svg)](https://www.rust-lang.org)
Oxcache is a high-performance, production-grade two-level caching library for Rust, built on an L1 (Moka in-memory
cache) + L2 (Redis distributed cache) architecture.
</div>
## ✨ Key Features
<div align="center">
<table>
<tr>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/rocket.png" width="48"><br>
<b>Extreme Performance</b><br>L1 in nanoseconds
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/magic-wand.png" width="48"><br>
<b>Zero-Code Changes</b><br>One-line cache enable
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/cloud.png" width="48"><br>
<b>Auto Recovery</b><br>Redis fault degradation
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/synchronize.png" width="48"><br>
<b>Multi-Instance Sync</b><br>Based on Pub/Sub
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/lightning.png" width="48"><br>
<b>Batch Optimization</b><br>Smart batch writes
</td>
</tr>
</table>
</div>
- **🚀 Extreme Performance**: L1 nanosecond response (P99 < 100ns), L2 millisecond response (P99 < 5ms)
- **🎯 Zero-Code Changes**: Enable caching with a single `#[cached]` macro
- **🔄 Auto Recovery**: Automatic degradation on Redis failure, WAL replay on recovery
- **🔁 Multi-Instance Sync**: Pub/Sub + version-based invalidation synchronization
- **⚡ Batch Optimization**: Intelligent batch writes for significantly improved throughput
- **🛡️ Production Grade**: Complete observability, health checks, chaos-testing verified
## 📦 Quick Start
### 1. Add Dependency
Add `oxcache` to your `Cargo.toml`:
```toml
[dependencies]
oxcache = "0.1"
```
> **Note**: `tokio` and `serde` are already included by default. If you need minimal dependencies, you can use
`oxcache = { version = "0.1", default-features = false }` and add them manually.
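
For a minimal setup, here is a sketch of what the manual wiring could look like (the `tokio` and `serde` lines below are standard; oxcache's own feature flags are not shown since they may vary by version):

```toml
[dependencies]
oxcache = { version = "0.1", default-features = false }

# Added by hand once default features are off:
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
```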
### 2. Configuration
Create a `config.toml` file:
```toml
[global]
default_ttl = 3600
health_check_interval = 30
serialization = "json"
enable_metrics = true

[services.user_cache]
cache_type = "two-level" # "l1" | "l2" | "two-level"
ttl = 600

[services.user_cache.l1]
max_capacity = 10000
ttl = 300 # L1 TTL must be <= L2 TTL
tti = 180
initial_capacity = 1000

[services.user_cache.l2]

[services.user_cache.two_level]
write_through = true
promote_on_hit = true
enable_batch_write = true
batch_size = 100
batch_interval_ms = 50
```
### 3. Usage
#### Using Macros (Recommended)
```rust
use oxcache::macros::cached;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct User {
    id: u64,
    name: String,
}

// One-line cache enable
#[cached(service = "user_cache", ttl = 600)]
async fn get_user(id: u64) -> Result<User, String> {
    // Simulate a slow database query
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    Ok(User {
        id,
        name: format!("User {}", id),
    })
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize the cache from the config file
    oxcache::init("config.toml").await?;

    // First call: executes the function body and caches the result (~100ms)
    let user = get_user(1).await?;
    println!("First call: {:?}", user);

    // Second call: served straight from cache (~0.1ms)
    let cached_user = get_user(1).await?;
    println!("Cached call: {:?}", cached_user);

    Ok(())
}
```
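
When the underlying record changes, the stale entry can be evicted through the manual client (covered in the next section) so the next `get_user` call repopulates both levels. A minimal sketch reusing the `User` type above; `save_user` is a stand-in for your own write path, and the key format is illustrative, so check how `#[cached]` derives keys in your version:

```rust
use oxcache::{get_client, CacheOps};

// Stand-in for your own persistence layer (assumed, not part of oxcache).
async fn save_user(_user: &User) -> Result<(), String> {
    Ok(())
}

// After a write, drop the stale entry so the next get_user(id) call
// re-executes the function body and refreshes both cache levels.
async fn update_user(user: User) -> Result<(), Box<dyn std::error::Error>> {
    save_user(&user).await?;
    let client = get_client("user_cache")?;
    // Illustrative key: verify how #[cached] builds keys from the
    // service name and function arguments in your version of oxcache.
    client.delete(&format!("get_user:{}", user.id)).await?;
    Ok(())
}
```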
#### Manual Client Usage
```rust
use oxcache::{get_client, CacheOps};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct MyData {
    value: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    oxcache::init("config.toml").await?;
    let client = get_client("user_cache")?;

    let my_data = MyData { value: "hello".into() };

    // Standard operation: write to both L1 and L2
    client.set("key", &my_data, Some(300)).await?;
    let data: MyData = client.get("key").await?.unwrap(); // None on a cache miss
    println!("fetched: {:?}", data);

    // Write to L1 only (temporary, process-local data)
    client.set_l1_only("temp_key", &my_data, Some(60)).await?;

    // Write to L2 only (data shared across instances)
    client.set_l2_only("shared_key", &my_data, Some(3600)).await?;

    // Delete from the cache
    client.delete("key").await?;

    Ok(())
}
```
## 🎨 Use Cases
### Scenario 1: User Information Cache
```rust
#[cached(service = "user_cache", ttl = 600)]
async fn get_user_profile(user_id: u64) -> Result<UserProfile, Error> {
database::query_user(user_id).await
}
```
### Scenario 2: API Response Cache
```rust
#[cached(
    service = "api_cache",
    ttl = 300,
    key = "api_{endpoint}_{version}"
)]
async fn fetch_api_data(endpoint: String, version: u32) -> Result<ApiResponse, Error> {
    http_client::get(&format!("/api/{}/{}", endpoint, version)).await
}
```
### Scenario 3: L1-Only Hot Data Cache
```rust
#[cached(service = "session_cache", cache_type = "l1", ttl = 60)]
async fn get_user_session(session_id: String) -> Result<Session, Error> {
session_store::load(session_id).await
}
```
## 🏗️ Architecture
```
┌─────────────────────────────────────────────────────────┐
│                    Application Code                     │
│                   (#[cached] Macro)                     │
└───────────────────────┬─────────────────────────────────┘
                        │
                        ▼
┌─────────────────────────────────────────────────────────┐
│                      CacheManager                       │
│           (Service Registry + Health Monitor)           │
└────┬───────────────────────────────────────────────┬────┘
     │                                               │
     ▼                                               ▼
┌────────────────┐                          ┌────────────────┐
│ TwoLevelClient │                          │  L1OnlyClient  │
│                │                          │  L2OnlyClient  │
└────┬──────┬────┘                          └────────────────┘
     │      │
     ▼      ▼
┌────────┐ ┌─────────┐
│   L1   │ │   L2    │
│ (Moka) │ │ (Redis) │
└────────┘ └─────────┘
```
**L1**: In-process high-speed cache using an LRU/TinyLFU eviction strategy

**L2**: Distributed shared cache supporting Sentinel and Cluster modes
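
To make `promote_on_hit` concrete, here is a toy sketch of the two-level read path, with plain `HashMap`s standing in for Moka and Redis (illustrative only, not oxcache's internal code):

```rust
use std::collections::HashMap;

// Toy stand-ins: in a real deployment L1 is Moka and L2 is Redis.
struct TwoLevel {
    l1: HashMap<String, String>,
    l2: HashMap<String, String>,
    promote_on_hit: bool,
}

impl TwoLevel {
    // Read path: an L1 hit returns immediately; an L2 hit can be
    // promoted into L1 so the next read is served at L1 speed.
    fn get(&mut self, key: &str) -> Option<String> {
        if let Some(v) = self.l1.get(key) {
            return Some(v.clone()); // L1 hit
        }
        if let Some(v) = self.l2.get(key).cloned() {
            if self.promote_on_hit {
                self.l1.insert(key.to_owned(), v.clone()); // promote to L1
            }
            return Some(v); // L2 hit
        }
        None // full miss: the caller falls through to the database
    }
}
```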
## 📊 Performance Benchmarks
> Test environment: M1 Pro, 16GB RAM, macOS
```
Single-thread Latency Test (P99):
├── L1 Cache: ~50ns
├── L2 Cache: ~1ms
└── Database: ~10ms

Throughput Test (batch_size = 100):
├── Single Write: ~10K ops/s
└── Batch Write:  ~50K ops/s
```
## 🛡️ Reliability

- ✅ Single-flight request coalescing (prevents cache stampedes; see the sketch after this list)
- ✅ WAL (Write-Ahead Log) persistence
- ✅ Automatic degradation on Redis failure
- ✅ Graceful shutdown mechanism
- ✅ Health checks and auto-recovery
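
Single-flight coalesces concurrent misses for the same key so the expensive loader runs only once. A conceptual sketch of the idea built on `tokio::sync::OnceCell` (illustrative, not oxcache's actual implementation):

```rust
use std::{collections::HashMap, sync::Arc};
use tokio::sync::{Mutex, OnceCell};

// One shared cell per key: concurrent callers clone the same Arc,
// and OnceCell guarantees the loader future runs at most once.
#[derive(Default)]
struct SingleFlight {
    cells: Mutex<HashMap<String, Arc<OnceCell<String>>>>,
}

impl SingleFlight {
    async fn load<F, Fut>(&self, key: &str, loader: F) -> String
    where
        F: FnOnce() -> Fut,
        Fut: std::future::Future<Output = String>,
    {
        let cell = {
            let mut map = self.cells.lock().await;
            map.entry(key.to_owned())
                .or_insert_with(|| Arc::new(OnceCell::new()))
                .clone()
        };
        // All concurrent callers await here; only one runs `loader`.
        // A production version would also evict completed cells.
        cell.get_or_init(loader).await.clone()
    }
}
```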
## 📚 Documentation

- [📖 User Guide](docs/USER_GUIDE.md)
- [📑 API Documentation](https://docs.rs/oxcache)
- [💻 Examples](../examples/)
## 🤝 Contributing
Pull Requests and Issues are welcome!
## 📝 Changelog
See [CHANGELOG.md](../CHANGELOG.md)
## 📄 License

This project is licensed under the MIT License. See the [LICENSE](../LICENSE) file.
---
<div align="center">
**If this project helps you, please give it a ⭐ Star to show your support!**

Made with ❤️ by the oxcache Team
</div>