# RatMemCache

High-performance Memcached protocol-compatible server with dual-layer cache and **melange_db** persistent storage

[![Crates.io](https://img.shields.io/crates/v/rat_memcache.svg)](https://crates.io/crates/rat_memcache)
[![Documentation](https://docs.rs/rat_memcache/badge.svg)](https://docs.rs/rat_memcache)
[![License: LGPL v3](https://img.shields.io/badge/License-LGPL%20v3-blue.svg)](https://www.gnu.org/licenses/lgpl-3.0)
[![Downloads](https://img.shields.io/crates/d/rat_memcache.svg)](https://crates.io/crates/rat_memcache)
[![Rust](https://img.shields.io/badge/rust-1.70+-orange.svg)](https://rust-lang.org)

---

🇨🇳 [中文](README.md) | 🇺🇸 [English](README_EN.md) | 🇯🇵 [日本語](README_JA.md)

## Project Description

RatMemCache is a high-performance caching system implemented in Rust, offering two usage modes:

1. **As a library**: Provides high-performance caching API with memory and **melange_db** persistent dual-layer cache
2. **As a standalone server**: 100% Memcached protocol-compatible standalone server

### 🪟 Native Windows Platform Support

**RatMemCache is one of the few high-performance Memcached-compatible servers that can run natively on Windows!**

- **Native Windows Support**: No WSL or virtual machine required; runs directly on Windows
- **100% Protocol Compatibility**: Fully compatible with the Memcached protocol; a drop-in replacement for the original memcached
- **Cross-platform Consistency**: Identical functionality on Windows, Linux, and macOS
- **Solves Windows Pain Points**: The original memcached is complex to deploy on Windows; RatMemCache works out of the box

Licensed under LGPL-v3, supporting free usage and modification.

## Key Features

- 🚀 **High Performance**: Based on async runtime, supports high concurrency
- 📦 **Dual-Layer Cache Architecture**: Memory L1 cache + MelangeDB L2 persistent cache
- 🔌 **100% Memcached Protocol Compatible**: Can directly replace standard memcached server
- 🪟 **Windows Native Support**: No WSL required, runs directly on Windows platform
- 🧠 **Intelligent Eviction Strategies**: Supports LRU, LFU, FIFO, hybrid strategies, etc.
- ⏰ **TTL Support**: Flexible expiration time management
- 🐘 **Large Value Optimization**: Large values exceeding threshold are automatically sent to L2 storage, avoiding memory exhaustion
- 🗜️ **Data Compression**: LZ4 compression algorithm, saves storage space
- 🎨 **Structured Logging**: High-performance logging system based on rat_logger
- 🔧 **Flexible Configuration**: Supports multiple preset configurations and custom configurations

## License

This project is licensed under **LGPL-v3**. This means:

- ✅ Free to use, modify and distribute
- ✅ Can be used in commercial projects
- ✅ Can be linked to your projects
- ⚠️ Modified library source code must be open-sourced under LGPL license
- ⚠️ When linked to your application, the application can remain closed-source

See [LICENSE](LICENSE) file for details.

## Quick Start

### Usage Scenario Selection

RatMemCache provides flexible feature selection to meet different scenario needs:

#### 1. Pure Memory Cache (Default)
```toml
[dependencies]
rat_memcache = "0.2.2"
```
- ✅ Basic memory cache functionality
- ✅ TTL support
- ❌ Persistent storage
- ❌ Performance metrics
- Suitable for: Simple cache scenarios

#### 2. Dual-Layer Cache (Memory + Persistent)
```toml
[dependencies]
rat_memcache = { version = "0.2.2", features = ["full-features"] }
```
- ✅ All library features
- ✅ MelangeDB persistent storage
- ✅ LZ4 compression
- ✅ Performance metrics
- ✅ mimalloc memory allocator
- Suitable for: Production environments requiring persistence

#### 3. Complete Server
```toml
[dependencies]
rat_memcache = { version = "0.2.2", features = ["server"] }
```
- ✅ Includes all library features
- ✅ rat_memcached binary
- Suitable for: Use as standalone memcached server

#### 4. Custom Combination
```toml
[dependencies]
rat_memcache = { version = "0.2.2", features = ["cache-lib", "ttl-support", "metrics"] }
```
- Select specific features as needed
- Minimize dependencies and compilation time

### Using as a Library

RatMemCache can be integrated into your project as a Rust library, providing high-performance dual-layer cache functionality.

#### Basic Integration

```toml
[dependencies]
rat_memcache = "0.2.2"
tokio = { version = "1.0", features = ["full"] }
```

#### Quick Start

```rust
use rat_memcache::{RatMemCacheBuilder, CacheOptions};
use bytes::Bytes;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create cache instance - use default configuration
    let cache = RatMemCacheBuilder::new()
        .build()
        .await?;

    // Basic operations
    let key = "my_key".to_string();
    let value = Bytes::from("my_value");

    // Set cache
    cache.set(key.clone(), value.clone()).await?;

    // Get cache
    if let Some(retrieved) = cache.get(&key).await? {
        println!("Retrieved: {:?}", retrieved);
    }

    // Set cache with TTL (expires in 60 seconds)
    cache.set_with_ttl("temp_key".to_string(), Bytes::from("temp_value"), 60).await?;

    // Check if cache exists
    let exists = cache.contains_key("temp_key").await?;
    println!("Key exists: {}", exists);

    // Get cache key list
    let keys = cache.keys().await?;
    println!("Cache keys: {:?}", keys);

    // Conditional deletion
    let deleted = cache.delete("temp_key").await?;
    println!("Key deleted: {}", deleted);

    // Graceful shutdown
    cache.shutdown().await?;

    Ok(())
}
```

#### Advanced Configuration

```rust
use rat_memcache::{RatMemCacheBuilder, EvictionStrategy};
use rat_memcache::config::{L1Config, L2Config, TtlConfig};
use std::path::PathBuf;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Custom L1 configuration (2GB memory limit)
    let l1_config = L1Config {
        max_memory: 2 * 1024 * 1024 * 1024,  // 2GB in bytes
        max_entries: 1_000_000,             // 1 million entries
        eviction_strategy: EvictionStrategy::Lru,
    };

    // Custom L2 configuration (10GB disk space)
    let l2_config = L2Config {
        enable_l2_cache: true,
        data_dir: Some(PathBuf::from("./cache_data")),
        clear_on_startup: false,
        max_disk_size: 10 * 1024 * 1024 * 1024,  // 10GB in bytes
        write_buffer_size: 64 * 1024 * 1024,     // 64MB
        max_write_buffer_number: 3,
        block_cache_size: 32 * 1024 * 1024,      // 32MB
        enable_compression: true,
        compression_level: 6,
        background_threads: 2,
        database_engine: Default::default(),
        melange_config: Default::default(),
    };

    // TTL configuration
    let ttl_config = TtlConfig {
        default_ttl: Some(3600),     // Default 1 hour
        max_ttl: 86400,              // Maximum 24 hours
        cleanup_interval: 300,       // Clean up every 5 minutes
        ..Default::default()
    };

    let cache = RatMemCacheBuilder::new()
        .l1_config(l1_config)
        .l2_config(l2_config)
        .ttl_config(ttl_config)
        .build()
        .await?;

    // Use cache...

    Ok(())
}
```

#### Production Best Practices

```rust
use rat_memcache::{RatMemCacheBuilder, EvictionStrategy};
use rat_memcache::config::{L1Config, L2Config, PerformanceConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Production configuration - optimized performance configuration
    let cache = RatMemCacheBuilder::new()
        .l1_config(L1Config {
            max_memory: 4 * 1024 * 1024 * 1024,  // 4GB
            max_entries: 2_000_000,
            eviction_strategy: EvictionStrategy::Lru,
        })
        .l2_config(L2Config {
            enable_l2_cache: true,
            max_disk_size: 50 * 1024 * 1024 * 1024,  // 50GB
            enable_compression: true,
            background_threads: 4,
            ..Default::default()
        })
        .performance_config(PerformanceConfig {
            ..Default::default()
        })
        .build()
        .await?;

    // Application main logic...

    Ok(())
}
```

### Using as Standalone Server

```bash
# Clone project
git clone https://github.com/0ldm0s/rat_memcache.git
cd rat_memcache

# Build (enable server functionality)
cargo build --release --features server

# Start server with default configuration
cargo run --bin rat_memcached

# Specify binding address
cargo run --bin rat_memcached -- --bind 0.0.0.0:11211

# Use configuration file
cargo run --bin rat_memcached -- --config custom_config.toml

# Run as daemon
cargo run --bin rat_memcached -- --daemon --pid-file /var/run/rat_memcached.pid
```

#### Windows Platform Special Notes

On Windows, RatMemCache provides exactly the same functionality as on Linux/macOS:

```powershell
# Windows build
cargo build --release

# Windows start server
cargo run --bin rat_memcached

# Windows specify port
cargo run --bin rat_memcached -- --bind 127.0.0.1:11211

# Windows background run (using PowerShell Start-Process)
Start-Process cargo -ArgumentList "run --bin rat_memcached -- --bind 0.0.0.0:11211" -NoNewWindow
```

**Windows Advantages**:
- No need to install WSL or virtual machine
- Native performance, no virtualization overhead
- Perfect integration with Windows services
- Support for Windows native paths and permission management

### Protocol Compatibility

RatMemCache is fully compatible with the Memcached protocol and supports the following commands:

- `get` / `gets` - Get data
- `set` / `add` / `replace` / `append` / `prepend` / `cas` - Set data
- `delete` - Delete data
- `incr` / `decr` - Increment/decrement values
- `flush_all` - Clear all data
- `version` - Get version information

You can use any standard Memcached client to connect to RatMemCache server:

```bash
# Test with telnet
telnet 127.0.0.1 11211

# Use memcached-cli
memcached-cli --server 127.0.0.1:11211
```
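For reference, a minimal ASCII-protocol session over such a connection looks like this (`demo_key` is an arbitrary example key; client commands and server responses are interleaved):

```
set demo_key 0 60 5
hello
STORED
get demo_key
VALUE demo_key 0 5
hello
END
delete demo_key
DELETED
```

In the `set` line, `0` is the flags field, `60` the TTL in seconds, and `5` the value length in bytes, per the standard memcached text protocol.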

## Configuration

The project uses TOML format configuration files, supporting flexible configuration options:

### Basic Configuration

```toml
[l1]
max_memory = 1073741824  # 1GB
max_entries = 100000
eviction_strategy = "Lru"

[l2]
enable_l2_cache = true
data_dir = "./rat_memcache_data"
max_disk_size = 1073741824  # 1GB
enable_compression = true

[compression]
enable_lz4 = true
compression_threshold = 1024
compression_level = 6

[ttl]
default_ttl = 3600  # 1 hour
cleanup_interval = 300  # 5 minutes

[performance]
worker_threads = 4
enable_concurrency = true
read_write_separation = true
large_value_threshold = 10240  # 10KB
```

### Advanced Logging Configuration

RatMemCache provides flexible logging configuration based on rat_logger, supporting performance tuning:

```toml
[logging]
# Basic logging configuration
level = "INFO"                    # Log level: trace, debug, info, warn, error, off
enable_colors = true               # Enable colored output
show_timestamp = true              # Show timestamp
enable_performance_logs = true     # Enable performance logs
enable_audit_logs = true           # Enable operation audit logs
enable_cache_logs = true           # Enable cache operation logs

# Advanced logging configuration (performance tuning)
enable_logging = true              # Master switch for the logging system (set to false for highest performance)
enable_async = false               # Enable async mode (improves throughput, but buffered logs may be lost if the program crashes)

# Batch configuration for async mode (only effective when enable_async=true)
batch_size = 2048                  # Batch size (bytes)
batch_interval_ms = 25             # Batch time interval (milliseconds)
buffer_size = 16384                # Buffer size (bytes)
```

#### Logging Performance Tuning Recommendations

1. **Highest Performance Mode** (suitable for production environment):
   ```toml
   [logging]
   enable_logging = false
   ```

2. **Async High Performance Mode** (suitable for high-load scenarios):
   ```toml
   [logging]
   enable_logging = true
   enable_async = true
   batch_size = 4096
   batch_interval_ms = 50
   buffer_size = 32768
   ```

3. **Debug Mode** (development environment):
   ```toml
   [logging]
   enable_logging = true
   enable_async = false
   level = "DEBUG"
   enable_performance_logs = true
   enable_cache_logs = true
   ```

#### Configuration Description

- **enable_logging**: Master switch for the logging system; when set to false, all logging is disabled for maximum performance
- **enable_async**: Enables async mode, which improves throughput but may lose buffered logs if the program crashes
- **batch_size**: Batch size in async mode; affects log-processing efficiency
- **batch_interval_ms**: Batch flush interval in async mode; affects how promptly logs appear
- **buffer_size**: Buffer size in async mode; affects memory usage

## Build and Test

```bash
# Build project
cargo build

# Build release version
cargo build --release

# Run tests
cargo test

# Run benchmarks
cargo bench

# Check code formatting
cargo fmt

# Check code quality
cargo clippy
```

## Features

### Cache Features
- ✅ Basic cache operations (get/set/delete)
- ✅ TTL expiration management
- ✅ Batch operation support
- ✅ Conditional operations (cas)
- ✅ Data compression

### Protocol Support
- ✅ Complete Memcached protocol implementation
- ✅ Binary protocol support
- ✅ ASCII protocol support
- ✅ Multi-connection handling
- ✅ Concurrent access control

### Performance Features
- ✅ Asynchronous I/O
- ✅ Read-write separation
- ✅ Memory pool management
- ✅ Smart cache warm-up
- ✅ High-performance async design

### Reliability
- ✅ Data persistence
- ✅ Graceful shutdown
- ✅ Error recovery
- ✅ Memory protection

## Architecture Design

```
┌─────────────────────────────────────────────────────────┐
│                       RatMemCache                       │
├─────────────────┬───────────────────────────────────────┤
│  Server Layer   │           Library Interface           │
│   (Memcached    │              (Rust API)               │
│    Protocol)    │                                       │
├─────────────────┴───────────────────────────────────────┤
│                       Core Layer                        │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐      │
│  │  L1 Cache   │  │  TTL Mgmt   │  │  Streaming  │      │
│  │  (Memory)   │  │             │  │             │      │
│  └─────────────┘  └─────────────┘  └─────────────┘      │
├─────────────────────────────────────────────────────────┤
│                      Storage Layer                      │
│  ┌─────────────────────────────────────────────────┐    │
│  │               MelangeDB L2 Cache                │    │
│  │              (Persistent Storage)               │    │
│  └─────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────┘
```

## Performance Benchmarks

⚠️ **Important Note**: The following performance data is based on tests conducted with Apple MacBook Air M1 chip. Actual performance may vary depending on hardware configuration, usage scenarios, and data characteristics. This data is for reference only.

### L1 Cache Performance (After v0.2.2 Optimization)
Tested on Apple MacBook Air M1:
- **L1 Cache GET Operations**: 1.75-3.4 microseconds (µs)
- **L1 Cache SET Operations**: 8-35 microseconds (µs)
- **Memory Cache Hit Rate**: 99%+
- **Data Consistency**: 100%

### General Performance Benchmarks
In a standard test environment (4-core CPU, 8 GB RAM):
- **QPS**: 50,000+ (simple get operations)
- **Memory Usage**: < 50MB base footprint
- **Concurrent Connections**: 10,000+
- **Latency**: < 1ms (99th percentile)

### Performance Optimization Notes
Version v0.2.2 focuses on optimizing L1 memory cache performance:
- Fixed the issue where L1 cache incorrectly performed compression/decompression operations
- L1 cache now directly stores and returns raw data, avoiding unnecessary CPU overhead
- Memory cache hit response time improved by 145,000x (from 515ms to microsecond level)

## ⚠️ Large Value Data Transfer Warning

**Important Reminder**: When transferring values larger than 40KB, the standard memcached protocol may hit socket buffer limits, causing transfer timeouts or incomplete transfers.

### Recommended Solution

RatMemCache provides **enhanced streaming protocol** that can effectively solve large value transfer problems:

#### Streaming GET Command
```bash
# Standard GET (may timeout)
get large_key

# Streaming GET (recommended)
streaming_get large_key 16384  # 16KB chunk size
```

#### Streaming Protocol Advantages
- 🚀 **Avoid Timeouts**: Chunked transfer bypasses socket buffer limitations
- 📊 **Progress Visibility**: Real-time display of transfer progress and chunk information
- 💾 **Memory Friendly**: Clients can process data chunks on demand
- 🔧 **Backward Compatible**: Fully compatible with standard memcached protocol

#### Usage Example
```python
# See demo/streaming_protocol_demo.py - Complete performance comparison demo
```

### Detailed Description
- **Problem Threshold**: Data >40KB may trigger socket buffer limitations
- **Recommended Practice**: Use streaming protocol for large value transfers
- **Performance Improvement**: Streaming transfer is 10-100x faster than traditional methods (for large values)
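The chunking idea itself is simple: split the value into fixed-size pieces and reassemble them on the receiving side. The following standalone sketch (illustrative only; not the actual RatMemCache wire format or client API) shows how a 100 KB value divides under the 16 KB chunk size used in the `streaming_get` example above:

```rust
// Illustrative chunking sketch (not the actual RatMemCache wire format).
// A large value is split into fixed-size chunks and reassembled in order.

fn split_into_chunks(data: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    // `slice::chunks` yields full-size chunks plus one shorter remainder.
    data.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

fn reassemble(chunks: &[Vec<u8>]) -> Vec<u8> {
    // Concatenate chunks back into the original byte sequence.
    chunks.iter().flat_map(|c| c.iter().copied()).collect()
}

fn main() {
    // 100 KB payload with a 16 KB chunk size: six full 16 KB chunks
    // plus a 4 KB remainder, so 7 chunks in total.
    let payload = vec![0xABu8; 100 * 1024];
    let chunks = split_into_chunks(&payload, 16 * 1024);
    assert_eq!(chunks.len(), 7);
    assert_eq!(chunks.last().unwrap().len(), 4 * 1024);
    assert_eq!(reassemble(&chunks), payload);
    println!("chunks: {}", chunks.len());
}
```

Because each chunk fits comfortably inside typical socket buffers, the receiver never has to accept the whole value in one read, which is what lets chunked transfer sidestep the >40KB limitation described above.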

## Dependencies

Main dependencies:
- **tokio**: Async runtime
- **melange_db**: Persistent storage (optional) - High-performance embedded database
- **dashmap**: Concurrent hash table
- **lz4**: Data compression
- **rat_logger**: Logging system
- **clap**: Command line argument parsing
- **mimalloc**: High-performance memory allocator

## Version Compatibility

- **Rust**: 1.70+ (edition 2021)
- **Operating Systems**: Linux, macOS, Windows (fully native support)
- **Memcached Protocol**: 1.4.0+
- **Windows Features**: Native support, no WSL or virtual machine required

## Contribution Guide

Contributions are welcome! Please follow these steps:

1. Fork this project
2. Create feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to branch (`git push origin feature/AmazingFeature`)
5. Create Pull Request

## Maintainers

- [@0ldm0s](https://github.com/0ldm0s) - Main developer

## Acknowledgments

Thanks to the following open source projects:
- [Tokio](https://tokio.rs/) - Async runtime
- [melange_db](https://github.com/melange-db/melange_db) - High-performance embedded persistent storage
- [Rust](https://www.rust-lang.org/) - Programming language

## Roadmap

- [ ] Enhanced cluster support
- [ ] Add more eviction strategies
- [ ] Redis protocol support
- [ ] Web management interface

## License Details

This project is licensed under **GNU Lesser General Public License v3.0 or later (LGPL-3.0-or-later)**.

This means:
- You can link this library to any type of software (including closed-source software)
- When modifying this library source code, modified versions must be released under the same license
- Applications using this library can maintain their own license

See [LICENSE](LICENSE) file for details.