oxcache 0.1.4

A high-performance multi-level cache library for Rust with L1 (memory) and L2 (Redis) caching.
<div align="center">

<img src="docs/image/oxcache.png" alt="Oxcache Logo" width="250">

[![CI](https://github.com/Kirky-X/oxcache/actions/workflows/ci.yml/badge.svg)](https://github.com/Kirky-X/oxcache/actions/workflows/ci.yml)
[![Crates.io](https://img.shields.io/crates/v/oxcache.svg)](https://crates.io/crates/oxcache)
[![Documentation](https://docs.rs/oxcache/badge.svg)](https://docs.rs/oxcache)
[![Downloads](https://img.shields.io/crates/d/oxcache.svg)](https://crates.io/crates/oxcache)
[![codecov](https://codecov.io/gh/Kirky-X/oxcache/branch/main/graph/badge.svg)](https://codecov.io/gh/Kirky-X/oxcache)
[![Dependency Status](https://deps.rs/repo/github/Kirky-X/oxcache/status.svg)](https://deps.rs/repo/github/Kirky-X/oxcache)
[![License](https://img.shields.io/crates/l/oxcache.svg)](https://github.com/Kirky-X/oxcache/blob/main/LICENSE)
[![Rust Version](https://img.shields.io/badge/rust-1.70%2B-blue.svg)](https://www.rust-lang.org)

[English](../README.md) | [简体中文](README_zh.md)

Oxcache is a high-performance, production-grade two-level caching library for Rust, providing L1 (Moka in-memory
cache) + L2 (Redis distributed cache) architecture.

</div>

## ✨ Key Features

<div align="center">

<table>
<tr>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/rocket.png" width="48"><br>
<b>Extreme Performance</b><br>L1 in nanoseconds
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/magic-wand.png" width="48"><br>
<b>Zero-Code Changes</b><br>One-line cache enable
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/cloud.png" width="48"><br>
<b>Auto Recovery</b><br>Redis fault degradation
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/synchronize.png" width="48"><br>
<b>Multi-Instance Sync</b><br>Based on Pub/Sub
</td>
<td width="20%" align="center">
<img src="https://img.icons8.com/fluency/96/000000/lightning.png" width="48"><br>
<b>Batch Optimization</b><br>Smart batch writes
</td>
</tr>
</table>

</div>

- **🚀 Extreme Performance**: L1 nanosecond response (P99 < 100ns), L2 millisecond response (P99 < 5ms)
- **🎯 Zero-Code Changes**: Enable caching with a single `#[cached]` macro
- **🔄 Auto Recovery**: Automatic degradation on Redis failure, WAL replay on recovery
- **🌐 Multi-Instance Sync**: Pub/Sub + version-based invalidation synchronization
- **⚡ Batch Optimization**: Intelligent batch writes for significantly improved throughput
- **🛡️ Production Grade**: Complete observability, health checks, chaos testing verified

## 📦 Quick Start

### 1. Add Dependency

Add `oxcache` to your `Cargo.toml`:

```toml
[dependencies]
oxcache = "0.1.4"
```

> **Note**: `tokio` and `serde` are already included by default. If you need minimal dependencies, use
> `oxcache = { version = "0.1.4", default-features = false }` and add them manually.

> **Features**: To use the `#[cached]` macro, enable the `macros` feature: `oxcache = { version = "0.1.4", features = ["macros"] }`

#### Feature Tiers

```toml
# Full features (recommended)
oxcache = { version = "0.1.4", features = ["full"] }

# Core functionality only
oxcache = { version = "0.1.4", features = ["core"] }

# Minimal - L1 cache only
oxcache = { version = "0.1.4", features = ["minimal"] }

# Custom selection
oxcache = { version = "0.1.4", features = ["core", "macros", "metrics"] }

# Development with specific features
oxcache = { version = "0.1.4", features = [
    "l1-moka",      # L1 cache (Moka)
    "l2-redis",     # L2 cache (Redis)
    "macros",       # #[cached] macro
    "batch-write",  # Optimized batch writing
    "metrics",      # Basic metrics
] }
```

| Tier | Features | Description |
|------|----------|-------------|
| **minimal** | `l1-moka`, `serialization`, `metrics` | L1 cache only |
| **core** | `minimal` + `l2-redis` | L1 + L2 cache |
| **full** | `core` + all advanced features | Complete functionality |

**Advanced Features** (included in `full`):
- `macros` - `#[cached]` attribute macro
- `batch-write` - Optimized batch writing
- `wal-recovery` - Write-ahead log for durability
- `bloom-filter` - Cache penetration protection
- `rate-limiting` - DoS protection
- `database` - Database integration
- `cli` - Command-line interface
- `full-metrics` - OpenTelemetry integration

### 2. Configuration

Create a `config.toml` file:

> **Important**: To initialize from a config file, you need to enable both `config-toml` and `confers` features:
> ```toml
> oxcache = { version = "0.1.4", features = ["config-toml", "confers"] }
> ```

```toml
[global]
default_ttl = 3600
health_check_interval = 30
serialization = "json"
enable_metrics = true

# Two-level cache (L1 + L2)
[services.user_cache]
cache_type = "two-level"  # "l1" | "l2" | "two-level"
ttl = 600

  [services.user_cache.l1]
  max_capacity = 10000
  ttl = 300  # L1 TTL must be <= L2 TTL
  tti = 180
  initial_capacity = 1000

  [services.user_cache.l2]
  mode = "standalone"  # "standalone" | "sentinel" | "cluster"
  connection_string = "redis://127.0.0.1:6379"

  [services.user_cache.two_level]
  write_through = true
  promote_on_hit = true
  enable_batch_write = true
  batch_size = 100
  batch_interval_ms = 50

# L1-only cache (memory only)
[services.session_cache]
cache_type = "l1"
ttl = 300

  [services.session_cache.l1]
  max_capacity = 5000
  ttl = 300
  tti = 120

# L2-only cache (Redis only)
[services.shared_cache]
cache_type = "l2"
ttl = 7200

  [services.shared_cache.l2]
  mode = "standalone"
  connection_string = "redis://127.0.0.1:6379"
```

### 3. Usage

#### Using Macros (Recommended)

```rust
use oxcache::macros::cached;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct User {
    id: u64,
    name: String,
}

// One-line cache enable
#[cached(service = "user_cache", ttl = 600)]
async fn get_user(id: u64) -> Result<User, String> {
    // Simulate slow database query
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    Ok(User {
        id,
        name: format!("User {}", id),
    })
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize cache (from config file)
    oxcache::init_from_file("config.toml").await?;
    
    // First call: execute function logic + cache result (~100ms)
    let user = get_user(1).await?;
    println!("First call: {:?}", user);
    
    // Second call: return directly from cache (~0.1ms)
    let cached_user = get_user(1).await?;
    println!("Cached call: {:?}", cached_user);
    
    Ok(())
}
```

#### Manual Client Usage

```rust
use oxcache::{get_client, CacheOps};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct MyData {
    value: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    oxcache::init_from_file("config.toml").await?;
    
    let client = get_client("user_cache")?;
    let my_data = MyData { value: "hello".to_string() };
    
    // Standard operation: write to both L1 and L2
    client.set("key", &my_data, Some(300)).await?;
    let data: MyData = client.get("key").await?.unwrap();
    println!("Fetched: {:?}", data);
    
    // Write to L1 only (temporary data)
    client.set_l1_only("temp_key", &my_data, Some(60)).await?;
    
    // Write to L2 only (shared data)
    client.set_l2_only("shared_key", &my_data, Some(3600)).await?;
    
    // Delete
    client.delete("key").await?;
    
    Ok(())
}
```

## 🎨 Use Cases

### Scenario 1: User Information Cache

```rust
#[cached(service = "user_cache", ttl = 600)]
async fn get_user_profile(user_id: u64) -> Result<UserProfile, Error> {
    database::query_user(user_id).await
}
```

### Scenario 2: API Response Cache

```rust
#[cached(
    service = "api_cache",
    ttl = 300,
    key = "api_{endpoint}_{version}"
)]
async fn fetch_api_data(endpoint: String, version: u32) -> Result<ApiResponse, Error> {
    http_client::get(&format!("/api/{}/{}", endpoint, version)).await
}
```

### Scenario 3: L1-Only Hot Data Cache

```rust
#[cached(service = "session_cache", cache_type = "l1", ttl = 60)]
async fn get_user_session(session_id: String) -> Result<Session, Error> {
    session_store::load(session_id).await
}
```

## 🏗️ Architecture

```mermaid
graph TD
    A["Application Code<br/>#[cached] Macro"] --> B["Cache Manager<br/>Service Registry + Health Monitor"]
    
    B --> C[TwoLevelClient]
    B --> D[L1OnlyClient]
    B --> E[L2OnlyClient]
    
    C --> F[L1 Cache<br/>Moka]
    C --> G[L2 Cache<br/>Redis]
    
    D --> F
    E --> G
    
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
    style D fill:#fff3e0
    style E fill:#fce4ec
    style F fill:#f1f8e9
    style G fill:#fdf2e9
```

**L1**: In-process high-speed cache using LRU/TinyLFU eviction strategy  
**L2**: Distributed shared cache supporting Sentinel/Cluster modes
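
To make the L1 eviction idea concrete, here is a toy LRU cache in plain Rust. This is only a conceptual sketch: Moka's actual policy (TinyLFU) is considerably more sophisticated, and the `LruCache` type below is invented for illustration, with capacity as a maximum entry count in the spirit of `max_capacity`.

```rust
use std::collections::{HashMap, VecDeque};

/// Toy LRU cache illustrating the eviction concept behind L1.
struct LruCache {
    capacity: usize,
    map: HashMap<String, String>,
    order: VecDeque<String>, // front = least recently used
}

impl LruCache {
    fn new(capacity: usize) -> Self {
        Self { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    /// Move `key` to the most-recently-used position.
    fn touch(&mut self, key: &str) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_back(k);
        }
    }

    fn get(&mut self, key: &str) -> Option<String> {
        if self.map.contains_key(key) {
            self.touch(key);
        }
        self.map.get(key).cloned()
    }

    fn put(&mut self, key: &str, value: &str) {
        if self.map.insert(key.to_string(), value.to_string()).is_some() {
            self.touch(key); // updating an existing key refreshes recency
            return;
        }
        self.order.push_back(key.to_string());
        if self.map.len() > self.capacity {
            if let Some(evicted) = self.order.pop_front() {
                self.map.remove(&evicted);
            }
        }
    }
}
```

Reads count as "use", so frequently fetched keys survive while cold keys are evicted first.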

## 📊 Performance Benchmarks

> Test environment: M1 Pro, 16GB RAM, macOS, Redis 7.0
> 
> **Note**: Performance varies based on hardware, network conditions, and data size.

```mermaid
xychart-beta
    title "Single-thread Latency Test (P99)"
    x-axis ["L1 Cache", "L2 Cache", "Database"]
    y-axis "Latency (ms)" 0 --> 60
    bar [0.05, 3, 30]
    line [0.05, 3, 30]
```

```mermaid
xychart-beta
    title "Throughput Test (batch_size=100)"
    x-axis ["L1 Operations", "L2 Single Write", "L2 Batch Write"]
    y-axis "Throughput (K ops/sec)" 0 --> 8000
    bar [7500, 75, 350]
```

**Performance Summary**:
- **L1 Cache**: 50-100ns (in-memory)
- **L2 Cache**: 1-5ms (Redis, localhost)
- **Database**: 10-50ms (typical SQL query)
- **L1 Operations**: 5-10M ops/sec
- **L2 Single Write**: 50-100K ops/sec
- **L2 Batch Write**: 200-500K ops/sec

## 🛡️ Reliability

- ✅ Single-Flight (prevent cache stampede)
- ✅ WAL (Write-Ahead Log) persistence
- ✅ Automatic degradation on Redis failure
- ✅ Graceful shutdown mechanism
- ✅ Health checks and auto-recovery
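
The single-flight idea can be sketched with standard-library primitives. This is a simplified, synchronous analogue written for this README (oxcache's real async implementation differs): concurrent callers for the same key share one computation instead of each hitting the backing store.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};

/// Simplified synchronous single-flight sketch (not oxcache's actual code).
struct SingleFlight {
    slots: Mutex<HashMap<String, Arc<(Mutex<Option<String>>, Condvar)>>>,
}

impl SingleFlight {
    fn new() -> Self {
        Self { slots: Mutex::new(HashMap::new()) }
    }

    fn load(&self, key: &str, compute: impl FnOnce() -> String) -> String {
        // Either become the leader for this key or join an in-flight slot.
        let (slot, leader) = {
            let mut slots = self.slots.lock().unwrap();
            match slots.get(key) {
                Some(slot) => (Arc::clone(slot), false),
                None => {
                    let slot = Arc::new((Mutex::new(None), Condvar::new()));
                    slots.insert(key.to_string(), Arc::clone(&slot));
                    (slot, true)
                }
            }
        };
        let (value, cvar) = (&slot.0, &slot.1);
        if leader {
            let result = compute();
            *value.lock().unwrap() = Some(result.clone());
            cvar.notify_all();
            self.slots.lock().unwrap().remove(key); // allow future reloads
            result
        } else {
            // Followers wait for the leader's result instead of recomputing.
            let mut guard = value.lock().unwrap();
            while guard.is_none() {
                guard = cvar.wait(guard).unwrap();
            }
            guard.clone().unwrap()
        }
    }
}
```

On a cache miss under heavy concurrency, only the leader executes the expensive load; everyone else blocks briefly and reuses its result, which is what prevents a stampede on the database.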

## 🔐 Security

Oxcache implements multiple security measures to protect against common attacks:

### Input Validation

All user inputs are validated before being passed to Redis:

- **Key Validation**: Keys cannot be empty, exceed 512KB, or contain dangerous characters (`\r`, `\n`, `\0`) that could enable Redis protocol injection attacks.
- **Lua Script Validation**: Scripts are validated for:
  - Maximum length of 10KB
  - Maximum of 100 keys
  - Blocking dangerous commands: `FLUSHALL`, `FLUSHDB`, `KEYS`, `SHUTDOWN`, `DEBUG`, `CONFIG`, `SAVE`, `BGSAVE`, `MONITOR`
- **SCAN Pattern Validation**: Patterns are validated to prevent ReDoS attacks:
  - Maximum length of 256 characters
  - Maximum of 10 wildcard (`*`) characters
  - Count parameter clamped to safe range (1-1000)
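
The rules above can be sketched as plain functions. These are hypothetical re-creations written for this README, not the library's actual `validate_redis_key` or pattern-validation code, which may differ in detail:

```rust
/// Hypothetical sketch of the key validation rules listed above.
fn validate_key(key: &str) -> Result<(), String> {
    const MAX_KEY_BYTES: usize = 512 * 1024; // 512KB
    if key.is_empty() {
        return Err("key must not be empty".into());
    }
    if key.len() > MAX_KEY_BYTES {
        return Err("key exceeds 512KB".into());
    }
    if key.bytes().any(|b| matches!(b, b'\r' | b'\n' | b'\0')) {
        return Err("key contains CR/LF/NUL (protocol injection risk)".into());
    }
    Ok(())
}

/// Hypothetical sketch of SCAN pattern validation; returns the clamped COUNT.
fn validate_scan_pattern(pattern: &str, count: u64) -> Result<u64, String> {
    if pattern.len() > 256 {
        return Err("pattern exceeds 256 characters".into());
    }
    if pattern.matches('*').count() > 10 {
        return Err("too many wildcards (ReDoS risk)".into());
    }
    Ok(count.clamp(1, 1000)) // clamp COUNT to the safe 1-1000 range
}
```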

### Timeout Protection

Long-running operations have timeout protection:

- **Lua Scripts**: 30-second timeout prevents Redis blocking
- **SCAN Operations**: 30-second timeout prevents hanging scans
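
Conceptually, the pattern is "run the operation, but give up after a deadline". The library applies this around its async Redis calls; the following std-only analogue (names invented for this sketch) just shows the shape of the idea:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run `op` on a worker thread and give up after `limit`.
/// This is an illustrative analogue, not the library's async timeout code.
fn with_timeout<T: Send + 'static>(
    limit: Duration,
    op: impl FnOnce() -> T + Send + 'static,
) -> Result<T, String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(op()); // send fails harmlessly if the caller gave up
    });
    rx.recv_timeout(limit).map_err(|_| "operation timed out".to_string())
}
```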

### Secure Lock Values

Distributed locks use cryptographically secure UUID v4 values automatically generated by the library, eliminating the risk of lock value prediction attacks.

### Connection String Redaction

Passwords in connection strings are redacted in logs by default to prevent credential leakage. Use `normalize_connection_string_with_redaction()` for secure logging.
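
The redaction idea looks roughly like this. The helper below is a hypothetical sketch written for this README; in practice you would call the library's `normalize_connection_string_with_redaction()` rather than roll your own:

```rust
/// Sketch of password redaction for Redis connection strings:
/// redis://user:password@host:port -> redis://user:****@host:port
fn redact_connection_string(conn: &str) -> String {
    if let Some(scheme_end) = conn.find("://") {
        let rest = &conn[scheme_end + 3..];
        if let Some(at) = rest.find('@') {
            let auth = &rest[..at];
            if let Some(colon) = auth.find(':') {
                return format!(
                    "{}{}:****{}",
                    &conn[..scheme_end + 3],
                    &auth[..colon],
                    &rest[at..]
                );
            }
        }
    }
    // No credentials present: nothing to redact.
    conn.to_string()
}
```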

### Best Practices

1. **Use the library's key validation** - Don't bypass the `validate_redis_key()` function
2. **Avoid custom Lua scripts** - Use the built-in cache operations when possible
3. **Set appropriate timeouts** - Don't disable the 30-second default timeout
4. **Rotate lock values** - The library handles this automatically
5. **Never log connection strings** - Use the redaction utility for debugging

For more details, see [Security Documentation](docs/SECURITY.md).

## 📚 Documentation

- [📖 User Guide](docs/USER_GUIDE.md)
- [📘 API Documentation](https://docs.rs/oxcache)
- [💻 Examples](../examples/)

## 🤝 Contributing

Pull Requests and Issues are welcome!

## 📝 Changelog

See [CHANGELOG.md](../CHANGELOG.md)

## 📄 License

This project is licensed under MIT License. See [LICENSE](../LICENSE) file.

---

<div align="center">

**If this project helps you, please give a ⭐ Star to show support!**

Made with ❤️ by Kirky.X

</div>