multi-tier-cache 0.6.5

Customizable multi-tier cache with L1 (Moka in-memory) + L2 (Redis distributed) defaults, expandable to L3/L4+, cross-instance invalidation via Pub/Sub, stampede protection, and flexible TTL scaling
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Planned

- Metrics export (Prometheus format)

## [0.6.5] - 2026-04-22

### Changed

- **README Rewrite**: Complete overhaul — reduced from ~1,280 lines to ~500 lines. Cleaner structure, version-agnostic headings, consolidated examples, and a new Feature Compatibility table.
- **Benchmark Figures Updated**: Throughput 16,829 → 21,528+ RPS, added mean latency (23.2ms), corrected stampede figures.
- **docs.rs `doc_cfg`**: Added `#[doc(cfg(…))]` annotations to all feature-gated items for better visibility on docs.rs. Configured `all-features = true` and single-target build in `[package.metadata.docs.rs]`.

### Internal

- Documentation and metadata only — no functional code changes. Fully backward compatible with 0.6.4.

## [0.6.4] - 2026-03-15

### Added
- **Custom Error Types**: Introduced a comprehensive `CacheError` enum and `CacheResult` type for structured, strongly-typed error handling across the entire library.
- **Feature-Gated Backends**: Enhanced modularity by feature-gating all cache backends. Users can now selectively enable `moka`, `redis`, `memcached`, and `quick_cache` to minimize dependency footprints.
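With every backend behind a feature flag, a consumer that only needs the default Moka + Redis pair can trim its dependency tree. A hedged sketch of the dependency declaration (whether `default-features = false` is needed depends on the crate's default feature set):

```toml
[dependencies]
# Enable only the backends you use; `memcached` and `quick_cache`
# stay out of the build unless explicitly requested.
multi-tier-cache = { version = "0.6.4", default-features = false, features = ["moka", "redis"] }
```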

### Changed
- **Builder Refactoring**: Optimized `CacheSystemBuilder` internal logic with dedicated initialization paths for default 2-tier, custom 2-tier, and multi-tier configurations, improving maintainability and code clarity.
- Updated crate documentation to reflect the new error handling patterns and feature flag availability.

## [0.6.3] - 2026-03-14

### Added
- **Probabilistic Promotion**: Introduced `promotion_frequency` (N) to control how often items are promoted from lower to higher tiers. Promotion occurs with a probability of 1/N. Default N=10 for all tiers except L1.
- Added `with_promotion_frequency(n)` to `TierConfig` builder API for fine-grained control over tier promotion rates.
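The 1/N promotion decision can be sketched with a tiny dependency-free PRNG; `Promoter`, `should_promote`, and the xorshift64 generator are illustrative stand-ins, not the crate's internals:

```rust
// Sketch of probabilistic promotion: promote roughly 1 in N accesses.
struct Promoter {
    frequency: u64, // N: promote with probability 1/N
    state: u64,     // PRNG state
}

impl Promoter {
    fn new(frequency: u64, seed: u64) -> Self {
        // xorshift state must be non-zero
        Self { frequency, state: seed.max(1) }
    }

    fn next_rand(&mut self) -> u64 {
        // xorshift64: fast, dependency-free pseudo-randomness
        let mut x = self.state;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.state = x;
        x
    }

    /// Returns true with probability roughly 1/frequency.
    fn should_promote(&mut self) -> bool {
        self.frequency <= 1 || self.next_rand() % self.frequency == 0
    }
}
```

Over many accesses, an item is promoted on about one in N lookups, which keeps hot tiers from churning on every hit while still letting genuinely hot keys migrate upward.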

### Changed
- **Benchmark Optimization**: Refactored `benches/storm_requests.rs` to use `join_all`, correctly simulating concurrent "storm" request patterns instead of sequential access.
- Added a dedicated L2-only benchmark scenario to measure pure backend performance without L1 promotion interference.

## [0.6.2] - 2026-03-12

### Fixed

- Completely overhauled `README.md` to accurately reflect the extensive v0.6.1 API and architectural changes.
- Updated all documentation code examples to demonstrate native AFIT (no `async-trait`), `bytes::Bytes` in place of `serde_json::Value`, and the new broadcast-based stampede protection.

## [0.6.1] - 2026-03-12

### Changed
- Upgraded the crate to **Rust Edition 2024**, leveraging new lifetime capture rules and optimizations.
- Migrated all asynchronous traits (`CacheBackend`, `L2CacheBackend`, `StreamingBackend`) to use Rust's native Async Functions in Traits (AFIT), completely removing the `async-trait` macro dependency.
- Replaced `serde_json::Value` with `bytes::Bytes` in the core cache trait signatures to eliminate unnecessary intermediate AST allocations.
- Unified the internal architecture in `CacheManager`: deprecated fragmented legacy fields (`l1_cache`, `l2_cache`) in favor of a single, scalable `tiers: Vec<CacheTier>` system for both 2-tier and multi-tier configurations.

### Performance
- **Cache Stampede Protection Re-engineered**: Replaced the sequential `Mutex`-based locking mechanism in `DashMap` with an asynchronous broadcast channel approach. This eliminates massive latency spikes for concurrent cache misses by waking up all waiting requests simultaneously once the data is computed.
- **Zero-Cost L1 Hits**: Implemented direct byte-to-type deserialization (`serde_json::from_slice`) to drastically reduce memory overhead and CPU cycles during high-concurrency L2 fetch and compute phases.
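The wake-all coalescing idea can be illustrated with a synchronous, std-only sketch. The crate itself uses an async broadcast channel (`tokio::sync::broadcast`); `SingleFlight` and its method names here are hypothetical stand-ins for the same pattern:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};

// First miss for a key becomes the leader and computes; every other
// concurrent caller waits on the same slot and is woken together.
struct SingleFlight {
    inflight: Mutex<HashMap<String, Arc<(Mutex<Option<String>>, Condvar)>>>,
}

impl SingleFlight {
    fn new() -> Self {
        Self { inflight: Mutex::new(HashMap::new()) }
    }

    fn get_or_compute(&self, key: &str, compute: impl FnOnce() -> String) -> String {
        let (slot, leader) = {
            let mut map = self.inflight.lock().unwrap();
            match map.get(key) {
                Some(s) => (Arc::clone(s), false),
                None => {
                    let s = Arc::new((Mutex::new(None), Condvar::new()));
                    map.insert(key.to_string(), Arc::clone(&s));
                    (s, true)
                }
            }
        };
        let (value, cvar) = (&slot.0, &slot.1);
        if leader {
            let v = compute();
            *value.lock().unwrap() = Some(v.clone());
            cvar.notify_all(); // wake every waiter at once, not one by one
            self.inflight.lock().unwrap().remove(key);
            v
        } else {
            let mut guard = value.lock().unwrap();
            while guard.is_none() {
                guard = cvar.wait(guard).unwrap();
            }
            guard.clone().unwrap()
        }
    }
}
```

The key property is the `notify_all` (broadcast) wake: with a sequential mutex, N waiters are released one at a time, while a broadcast releases them all as soon as the value exists.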

## [0.5.7] - 2026-01-24

### Fixed

- **L1 Pattern Invalidation**: Fixed a critical bug where `invalidate_pattern` (e.g., `user:*`) would only remove keys from L2 (Redis) but leave stale data in L1 (Moka). Now, matching keys are atomically removed from L1 memory as well.
- **Documentation**: Fixed broken doc tests in `src/backends/mod.rs` and `src/cache_manager.rs` that were using outdated API calls.

### Changed

- **Dependencies Upgrade**:
  - Upgraded `redis` from `0.32` to **`1.0.2`** for improved performance and async stability.
  - Upgraded `tokio` from `1.28` to **`1.43`** (latest stable).

## [0.5.6] - 2025-12-15

### Added

- **Configurable Moka Cache**: Introduced `MokaCacheConfig` to allow customizing L1 cache settings (capacity, TTL, idle time).
- **Builder API**: Added `with_moka_config()` to `CacheSystemBuilder` for applying custom Moka configurations.

### Changed

- **Code Quality**: Resolved all `clippy` warnings including `doc_markdown`, `needless_pass_by_value`, and `clone_on_copy`.
- **Documentation**: Added "Moka Cache (L1) Configuration" section to README.md with usage examples.


## [0.5.5] - 2025-01-06

### Changed

- **Code Quality**: Enforced strict `cargo clippy` lints
  - Enabled `pedantic`, `unwrap_used`, `expect_used`, and `indexing_slicing` as warnings
  - Resolved all resulting warnings for a cleaner, safer codebase

## [0.5.4] - 2025-01-06

### Changed

- **Documentation & Examples**: Comprehensive update to examples
  - Added `examples/multi_tier_usage.rs` demonstrating 3-tier architecture (L1+L2+L3)
  - Updated `examples/custom_backends.rs` with `with_tier` usage for custom tiers
  - Updated `examples/basic_usage.rs` and `examples/advanced_usage.rs` to reflect multi-tier capabilities
  - Fixed outdated comments and patterns in examples

## [0.5.3] - 2025-01-06

### Changed

**🔧 Code Quality: Eliminated "AI-Generated" Anti-Patterns**

Comprehensive refactoring to enforce idiomatic Rust patterns and eliminate common anti-patterns that make code look auto-generated. This release focuses entirely on code quality with **zero** functionality changes or breaking changes.

**1. Global State Mutation (Critical)**
- **Eliminated**: Removed `std::env::set_var("REDIS_URL", ...)` from `with_redis_url()` method
- **Why Bad**: Mutates global state with thread-safety issues, causes data races in tests
- **Fix**: Added `RedisCache::with_url()` method to pass URL through call chain
- **Files**: `src/lib.rs`, `src/backends/redis_cache.rs`

**2. Unsafe `.unwrap()` Calls**
- **Eliminated**: 3 production `.unwrap()` calls replaced with safe alternatives
- **Why Bad**: Panics on malformed input in production instead of degrading gracefully
- **Fix**: Changed to `.unwrap_or(Duration::ZERO)` for TTL parsing
- **Files**: `src/invalidation.rs:157`
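The safe-parse pattern described here, sketched with an assumed helper name (`parse_ttl_secs` is illustrative, not the crate's actual function):

```rust
use std::time::Duration;

// Instead of `raw.parse().unwrap()`, fall back to a zero TTL on bad
// input so a malformed message can never panic the subscriber task.
fn parse_ttl_secs(raw: &str) -> Duration {
    raw.parse::<u64>().map(Duration::from_secs).unwrap_or(Duration::ZERO)
}
```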

**3. Arc<Atomic\*> Double-Wrapping**
- **Eliminated**: 6 fields changed from `Arc<AtomicU64>` to `AtomicU64`
- **Why Bad**: Atomic types are already thread-safe, Arc adds unnecessary overhead
- **Fix**: Direct AtomicU64 usage, custom Clone impl for TierStats
- **Overhead Reduction**: Eliminated 1 heap allocation + 1 reference count per statistic
- **Files**: `src/cache_manager.rs:272-277`

**4. Missing Error Context**
- **Added**: Comprehensive error context to all Redis operations using `anyhow::Context`
- **Why Bad**: Generic errors don't explain what operation failed or with what parameters
- **Fix**: Added `.context()` with descriptive messages to 15+ Redis operations
- **Example**: `Failed to create Redis client with URL: redis://...`
- **Files**: `src/backends/redis_cache.rs`

**5. Placeholder Anti-Pattern**
- **Eliminated**: Removed creation of unused L1/L2 cache instances
- **Why Bad**: Creates objects that are immediately discarded, wastes resources
- **Fix**: Changed struct fields to `Option<Arc<L1Cache>>` and `Option<Arc<L2Cache>>`
- **Resource Savings**: Avoids initializing Moka cache + Redis connection when using custom backends
- **Files**: `src/lib.rs`, `src/builder.rs`

**6. String Allocation Optimization**
- **Reduced**: ~70% reduction in string allocations in hot path
- **Why Bad**: Creates 3 String allocations per invalidation message (high-frequency operation)
- **Fix**: Use `&str` instead of `String` in pattern matching, single allocation
- **Before**: 3 allocations (type, key, extra)
- **After**: 1 allocation (only extra field)
- **Files**: `src/invalidation.rs:161-186`

**7. Redundant Trait Wrapper Pattern**
- **Eliminated**: ~270 lines of duplicated code across 5 backend implementations
- **Why Bad**: Inherent methods that just call trait methods add no value, confuse API surface
- **Fix**: Moved all logic directly into trait implementations
- **Backends Refactored**: MokaCache, RedisCache, DashMapCache, MemcachedCache, QuickCacheBackend
- **Files**: All 5 files in `src/backends/`

**8. Console Logging (println! → tracing)**
- **Replaced**: All 50+ `println!` statements with structured logging
- **Why Bad**: println! is not production-ready (no levels, filtering, or structured data)
- **Fix**:
  - `info!()` for initialization/success messages
  - `debug!()` for per-operation logging
  - `warn!()` for warnings with structured fields
- **Benefits**: Proper log levels, filtering, structured fields, integration with telemetry
- **Dependency**: Added `tracing = "0.1"` to Cargo.toml
- **Files**: All 10 modified files

### Internal Improvements

**Files Modified (10 total):**
1. `Cargo.toml` - Version bump to 0.5.3, added tracing dependency
2. `src/lib.rs` - Global state fix, `Option<Arc<T>>`, tracing integration
3. `src/builder.rs` - Placeholder elimination, tracing
4. `src/cache_manager.rs` - `Arc<Atomic*>` removal, custom Clone impl
5. `src/invalidation.rs` - `.unwrap()` fix, string optimization
6. `src/backends/moka_cache.rs` - Trait refactor, tracing
7. `src/backends/redis_cache.rs` - Error context, `with_url()` method, trait refactor
8. `src/backends/dashmap_cache.rs` - Trait refactor, tracing
9. `src/backends/memcached_cache.rs` - Trait refactor, tracing
10. `src/backends/quickcache_cache.rs` - Trait refactor, tracing

**Quality Metrics:**
- Global state mutations: 1 → 0 ✅
- Unsafe `.unwrap()` calls: 3 → 0 ✅
- Arc<Atomic*> double-wraps: 6 → 0 ✅
- Placeholder instances: 4 → 0 ✅
- println! statements: 50+ → 0 ✅
- Redundant code lines: ~270 eliminated ✅
- Error operations with context: 0 → 15+ ✅
- String allocations (hot path): 3 → 1 per operation ✅

### Backward Compatibility

✅ **100% Backward Compatible** - Zero breaking changes:
- All public APIs unchanged
- All 42 existing tests pass
- New `with_url()` method is additive only
- Struct field changes are internal (Option wrapper is transparent via methods)
- Switch from println! to tracing affects log output only (non-breaking)

### Performance Impact

- **Arc<Atomic*> fix**: Small reduction in memory overhead and atomic operations
- **String allocation fix**: ~70% fewer allocations in invalidation hot path
- **Trait refactor**: Zero performance impact (monomorphization eliminates indirection)
- **Error context**: Negligible (only paid on error path)
- **tracing**: <1% overhead with info-level filtering

### Migration

**No migration required** - This is a pure code quality release. Your code continues to work unchanged.

If you were directly calling `L2Cache::new()` and want to use a custom URL:
```rust
// Before (still works via env var)
std::env::set_var("REDIS_URL", "redis://custom:6379");
let cache = L2Cache::new().await?;

// After (cleaner, no global state)
let cache = L2Cache::with_url("redis://custom:6379").await?;
```

## [0.5.2] - 2025-01-05

### Added

**🎉 New Feature: Multiple Cache Backend Options**

- **Backend Module Refactoring**: Organized backends in `src/backends/` subdirectory
  - Renamed `l1_cache.rs` → `backends/moka_cache.rs` (MokaCache)
  - Renamed `l2_cache.rs` → `backends/redis_cache.rs` (RedisCache)
  - Type aliases `L1Cache = MokaCache`, `L2Cache = RedisCache` for backward compatibility
  - Zero breaking changes - existing code continues to work

- **DashMapCache Backend**: Simple concurrent HashMap-based L1 cache
  - Always available (no feature flag required)
  - Manual expiration cleanup via `cleanup_expired()`
  - Ideal for educational purposes and simple use cases
  - Example: `examples/builtin_backends.rs`

- **MemcachedCache Backend**: Lightweight distributed L2 cache
  - Feature flag: `backend-memcached`
  - High-performance distributed caching
  - Server statistics via `get_server_stats()`
  - Note: Does not implement `L2CacheBackend` (no TTL introspection)
  - Dependency: `memcache = { version = "0.17", optional = true }`

- **QuickCacheBackend**: Ultra-fast L1 cache optimized for maximum throughput
  - Feature flag: `backend-quickcache`
  - Sub-microsecond latency
  - Lock-free design for concurrent access
  - Configurable capacity via `new(max_capacity)`
  - Dependencies: `quick_cache = { version = "0.6", optional = true }`, `parking_lot = { version = "0.12", optional = true }`

- **Documentation**: New "Available Backends" section in README
  - Comparison tables for L1 and L2 backends
  - Feature requirements and use cases
  - Code examples for each backend
  - Updated Table of Contents

- **Example File**: `examples/builtin_backends.rs`
  - Demonstrates DashMapCache, MemcachedCache, and QuickCacheBackend
  - Shows feature flag usage
  - Server statistics demonstration

### Changed

- **Backend Organization**: Improved module structure
  - All backends now in `src/backends/` subdirectory
  - Centralized exports via `backends/mod.rs`
  - Feature-gated backends with conditional compilation
  - Better separation and discoverability

- **Public API**: Enhanced re-exports
  - Backends accessible via `multi_tier_cache::backends::*`
  - Also available at top level: `multi_tier_cache::{MokaCache, RedisCache, DashMapCache, ...}`
  - Feature-gated exports for optional backends

- **Cargo.toml**: New optional dependencies and features
  - `memcache = { version = "0.17", optional = true }`
  - `quick_cache = { version = "0.6", optional = true }`
  - `parking_lot = { version = "0.12", optional = true }`
  - New feature flags: `backend-memcached`, `backend-quickcache`

### Fixed

- All existing tests pass (42 tests) - backward compatibility maintained

## [0.5.1] - 2025-01-05

### Added

- **Multi-Tier Stampede Protection**: `get_or_compute_with()` and `get_or_compute_typed()` now support all tiers (L1+L2+L3+L4+...)
  - Stampede protection now checks L2, L3, L4... before computing
  - Automatically promotes from any tier to L1 with stampede lock held
  - Prevents unnecessary recomputation when data exists in lower tiers
  - 2 new integration tests: `test_multi_tier_stampede_protection`, `test_stampede_retrieves_from_l3`

### Changed

- **Redis Streams Refactoring**: Separated into dedicated `redis_streams` module
  - New `RedisStreams` struct - standalone Redis Streams client
  - Removed ~200 lines of streaming code from `L2Cache`
  - `L2Cache` now focuses purely on cache operations
  - Better separation of concerns (cache vs. streaming)
  - `RedisStreams` can be used independently without full cache system
  - Backward compatible - existing stream APIs still work via `CacheManager`

- **Improved Description**: Updated Cargo.toml description
  - Clarifies L1 (Moka in-memory) + L2 (Redis distributed) as defaults
  - Highlights expandability to L3/L4+ tiers

### Fixed

- **Stampede Protection**: Fixed multi-tier support in compute methods
  - Previously only checked L1+L2 in multi-tier mode
  - Now correctly checks all configured tiers (L1+L2+L3+L4+...)
  - Ensures data is retrieved from lower tiers instead of recomputing

## [0.5.0] - 2025-01-05

### Added

**🚀 Major Feature: Dynamic Multi-Tier Cache Architecture**

- **Multi-Tier Support (L1+L2+L3+L4+...)**: Flexible tier configuration
  - Support for 3, 4, or more cache tiers beyond L1+L2
  - Dynamic tier chain with automatic promotion
  - Per-tier TTL scaling (e.g., L3 = 2x, L4 = 8x)
  - Per-tier hit statistics and monitoring
  - Configurable promotion behavior per tier
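Per-tier TTL scaling amounts to multiplying the base TTL by the tier's scale factor; a minimal sketch (the helper name is assumed):

```rust
use std::time::Duration;

// Scale a base TTL for a given tier, e.g. 2.0 for an L3 cold tier
// or 8.0 for an L4 archive tier.
fn scaled_ttl(base: Duration, scale: f64) -> Duration {
    Duration::from_secs_f64(base.as_secs_f64() * scale)
}
```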

- **Builder API Extensions**: New methods for tier configuration
  - `.with_tier(backend, TierConfig)` - Add custom tier with full control
  - `.with_l3(backend)` - Convenience method for L3 (cold tier, 2x TTL)
  - `.with_l4(backend)` - Convenience method for L4 (archive tier, 8x TTL)
  - Automatic tier sorting by level during build
  - Validation of tier ordering

- **TierConfig**: Flexible tier configuration
  - `TierConfig::as_l1()` - Hot tier (no promotion)
  - `TierConfig::as_l2()` - Warm tier (promote to L1)
  - `TierConfig::as_l3()` - Cold tier (2x TTL, promote to L2+L1)
  - `TierConfig::as_l4()` - Archive tier (8x TTL, promote to all)
  - Builder pattern: `.with_promotion(bool)`, `.with_ttl_scale(f64)`, `.with_level(usize)`

- **Per-Tier Statistics**: Granular performance monitoring
  - `TierStats` struct with tier-level, hits, backend name
  - `get_tier_stats()` - Get statistics for all configured tiers
  - Individual tier hit tracking
  - Statistics available only in multi-tier mode

- **Integration Tests**: 8 new tests for multi-tier functionality
  - Multi-tier basic operations (get/set across 3+ tiers)
  - Statistics tracking per tier
  - Backward compatibility verification
  - TTL scaling validation
  - Cache miss behavior
  - Convenience methods (with_l3/with_l4)

- **Benchmarks**: 5 benchmarks for multi-tier performance
  - 2-tier vs 3-tier vs 4-tier write performance
  - Multi-tier read performance (L1 hits)
  - TTL scaling impact measurement
  - Data size scaling across tiers
  - Tier statistics access overhead

### Changed

- **CacheManager**: Extended for multi-tier support
  - Added `tiers: Option<Vec<CacheTier>>` field
  - New `new_with_tiers(tiers, streaming_backend)` constructor
  - Updated `get()` to iterate through all tiers with promotion
  - Updated `set_with_strategy()` to write to all tiers with TTL scaling
  - Updated `invalidate()` and `update_cache()` to work across all tiers
  - **Backward Compatible**: Legacy 2-tier mode still works when `tiers: None`

- **CacheTier**: Internal tier representation
  - Wraps L2CacheBackend with tier metadata
  - Tracks tier level, promotion settings, TTL scale
  - Per-tier statistics tracking
  - Automatic TTL scaling on set operations

- **ProxyCacheBackend**: Trait conversion helper
  - Converts L2CacheBackend → CacheBackend for trait object compatibility
  - Enables L2CacheBackend as tier backend (needed for get_with_ttl)

### Backward Compatibility

- ✅ **Fully Backward Compatible**: All 36 existing tests pass
- Legacy 2-tier mode (L1+L2) continues to work without changes
- `tiers: None` triggers legacy code paths
- Multi-tier mode activated only when using `.with_tier()` builder methods
- Existing APIs unchanged: `get()`, `set_with_strategy()`, `invalidate()`, etc.

### Migration Guide

**Existing 2-tier users**: No changes required. Your code continues to work as-is.

**New 3+ tier users**: Use the builder pattern:

```rust
use multi_tier_cache::{CacheSystemBuilder, TierConfig};

// 3-tier setup
let cache = CacheSystemBuilder::new()
    .with_tier(l1, TierConfig::as_l1())
    .with_tier(l2, TierConfig::as_l2())
    .with_l3(rocksdb)  // Convenience method
    .build()
    .await?;

// Access per-tier stats
if let Some(tier_stats) = cache.cache_manager().get_tier_stats() {
    for stats in tier_stats {
        println!("L{}: {} hits", stats.tier_level, stats.hit_count());
    }
}
```

## [0.4.1] - 2025-01-05

### Added
- **Integration Tests** (30 tests with real Redis)
  - Basic L1+L2 cache operations (12 tests)
  - Cross-instance invalidation (8 tests)
  - Stampede protection (5 tests)
  - Redis Streams functionality (5 tests)
  - Test utilities in `tests/common/mod.rs`

- **Benchmark Suite with Criterion**
  - **cache_operations**: L1/L2 read/write performance, cache hit/miss latency, different cache strategies
  - **stampede_protection**: Concurrent access patterns, request coalescing effectiveness
  - **invalidation**: Single key and pattern-based invalidation overhead, update vs remove comparison
  - **serialization**: JSON vs typed caching performance, data size impact

### Fixed
- Fixed doctests compilation errors for examples with external dependencies

## [0.4.0] - 2025-01-04

### Added

**🎉 Major Feature: Cross-Instance Cache Invalidation**

- **Redis Pub/Sub Integration**: Real-time cache synchronization across all instances
  - Automatic broadcast of invalidation messages via Redis Pub/Sub
  - Background subscriber task with auto-reconnection
  - Sub-millisecond invalidation latency (~1-5ms)
  - Graceful error handling and connection recovery

- **Cache Invalidation API**: New methods for coordinated cache updates
  - `invalidate(key)` - Remove key from all instances (L1 + L2)
  - `update_cache(key, value, ttl)` - Update value across all instances (avoids cache miss)
  - `invalidate_pattern(pattern)` - Remove all keys matching glob pattern (e.g., `user:*`)
  - `set_with_broadcast(key, value, strategy)` - Write-through with automatic broadcast
  - `get_invalidation_stats()` - Monitor invalidation operations

- **Pattern-Based Invalidation**: Bulk invalidation with glob patterns
  - Uses Redis SCAN (non-blocking, production-safe)
  - Supports glob patterns: `user:*`, `product:123:*`, etc.
  - Broadcast to all instances for coordinated L1 cleanup
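A minimal sketch of glob matching for the L1 cleanup side, supporting only a single `*`; the crate's actual matcher, like Redis's `SCAN MATCH`, supports richer patterns:

```rust
// Match a cache key against a glob pattern containing at most one `*`.
// Illustrative only; not the crate's real matcher.
fn glob_match(pattern: &str, key: &str) -> bool {
    match pattern.split_once('*') {
        // No wildcard: exact match required.
        None => pattern == key,
        // `prefix*suffix`: key must start and end accordingly,
        // with enough length that prefix and suffix don't overlap.
        Some((prefix, suffix)) => {
            key.len() >= prefix.len() + suffix.len()
                && key.starts_with(prefix)
                && key.ends_with(suffix)
        }
    }
}
```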

- **Invalidation Message Types**: Flexible invalidation strategies
  - `Remove` - Invalidate single key (lazy reload on next access)
  - `Update` - Push new value to all instances (zero cache miss)
  - `RemovePattern` - Pattern-based bulk invalidation
  - `RemoveBulk` - Multiple keys at once

- **Configuration Options**: `InvalidationConfig` for customization
  - Pub/Sub channel name (default: `cache:invalidate`)
  - Auto-broadcast on write (opt-in)
  - Audit stream for invalidation events (observability)
  - Stream retention policy (max length)

- **Constructor**: `CacheManager::new_with_invalidation()`
  - Enables cross-instance invalidation support
  - Spawns background subscriber automatically
  - Returns fully configured CacheManager with Pub/Sub

- **Audit Trail**: Optional Redis Streams logging
  - Records all invalidation events for observability
  - Includes timestamp, operation type, affected keys
  - Configurable retention (default: 10,000 entries)

- **Statistics Tracking**: Comprehensive invalidation metrics
  - Messages sent/received counts
  - Operation type breakdown (remove/update/pattern/bulk)
  - Processing errors tracking

### Changed

- **L2Cache**: Added pattern matching and bulk operations
  - `scan_keys(pattern)` - Find keys matching glob pattern (uses SCAN)
  - `remove_bulk(keys)` - Delete multiple keys efficiently
  - Production-safe (non-blocking iteration)

- **CacheManager Structure**: Extended for invalidation support
  - Added `InvalidationPublisher` for broadcasting messages
  - Added `InvalidationSubscriber` for receiving messages
  - Added `AtomicInvalidationStats` for metrics
  - Maintains backward compatibility (invalidation is opt-in)

### Dependencies

- **New**: `futures-util = "0.3"` - For Pub/Sub stream handling
- **Updated**: `tokio` now includes `macros` and `time` features for `select!` macro

### Benefits

- ✅ **Multi-Instance Support**: Keep caches in sync across multiple servers
- ✅ **Two Invalidation Strategies**:
  - Remove (lazy reload, lower bandwidth)
  - Update (zero cache miss, higher bandwidth)
- ✅ **Pattern-Based**: Invalidate related keys in one operation
- ✅ **Low Latency**: ~1-5ms invalidation propagation via Pub/Sub
- ✅ **Reliable**: Auto-reconnection, error recovery, audit trail
- ✅ **Opt-In**: Existing code continues to work without changes

### Use Cases

**Scenario 1: User Profile Update**
```rust
// Update user in database
database.update_user(123, new_data).await?;

// Invalidate cache across all instances
cache_manager.invalidate("user:123").await?;
// OR update cache directly (avoids cache miss)
cache_manager.update_cache("user:123", new_data, Some(ttl)).await?;
```

**Scenario 2: Bulk Product Updates**
```rust
// Update product category in database
database.update_category(42, new_price).await?;

// Invalidate all products in category across all instances
cache_manager.invalidate_pattern("product:category:42:*").await?;
```

**Scenario 3: Write-Through Caching**
```rust
// Compute expensive data
let report = generate_monthly_report().await?;

// Cache and broadcast to all instances in one call
cache_manager.set_with_broadcast(
    "report:monthly",
    report,
    CacheStrategy::LongTerm
).await?;
```

### Performance Impact

- Invalidation overhead: ~1-5ms per operation (Pub/Sub + network)
- Background subscriber: Negligible CPU usage (~0.1%)
- Memory overhead: ~2-5MB for Pub/Sub connections
- No impact on cache read/write performance when not using invalidation

### Breaking Changes

**None** - This release is fully backward compatible:
- New features are opt-in via `new_with_invalidation()` constructor
- Existing `CacheSystem::new()` and `CacheManager::new()` unchanged
- All previous APIs continue to work as before

### Migration Guide

**To enable invalidation:**
```rust
// Old (v0.3.x) - Still works!
let cache = CacheSystem::new().await?;

// New (v0.4.0) - With invalidation support
let config = InvalidationConfig::default();
let cache_manager = CacheManager::new_with_invalidation(
    l1_cache,
    l2_cache,
    "redis://localhost",
    config
).await?;

// Use invalidation features
cache_manager.invalidate("key").await?;
cache_manager.update_cache("key", value, None).await?;
cache_manager.invalidate_pattern("user:*").await?;
```

### Documentation

- Added comprehensive module documentation in `src/invalidation.rs`
- Added examples for all invalidation methods in `CacheManager`
- Added configuration examples for `InvalidationConfig`
- TODO: Add `examples/cache_invalidation.rs` demonstration

### Internal

- Added `src/invalidation.rs` with ~500 lines of invalidation logic
- Added `InvalidationMessage` enum with serde serialization
- Added `InvalidationPublisher` for broadcasting
- Added `InvalidationSubscriber` with background task
- Added `AtomicInvalidationStats` for thread-safe metrics
- Updated `CacheManager` with invalidation methods
- Updated `L2Cache` with pattern matching support

### Resolves

- ✅ Planned feature: "Cache invalidation patterns (wildcard, regex)"
- ✅ Multi-instance cache consistency problem
- ✅ Stale cache data across distributed systems

## [0.3.0] - 2025-01-04

### Added

**🎉 Major Feature: Pluggable Cache Backends**

- **Trait-Based Architecture**: Complete refactoring to support custom cache backends
  - `CacheBackend` trait for L1 (in-memory) caches
  - `L2CacheBackend` trait for L2 (distributed) caches with TTL introspection
  - `StreamingBackend` trait for event streaming capabilities
  - All traits exported publicly with `async_trait` support

- **`CacheSystemBuilder`**: New builder pattern for flexible configuration
  - `.with_l1(backend)` - Use custom L1 cache (replace Moka)
  - `.with_l2(backend)` - Use custom L2 cache (replace Redis)
  - `.with_streams(backend)` - Use custom streaming backend
  - Mix and match: Use custom L1 with default Redis L2, or vice versa

- **`CacheManager::new_with_backends()`**: Primary constructor for trait-based backends
  - Accepts any types implementing required traits
  - Enables swapping Moka with DashMap, HashMap, or custom implementations
  - Enables swapping Redis with Memcached, DragonflyDB, KeyDB, or in-memory mocks

- **Example Implementations** (`examples/custom_backends.rs`):
  - `HashMapCache`: Simple in-memory L1 cache using HashMap + RwLock
  - `InMemoryL2Cache`: In-memory L2 cache with TTL tracking
  - `NoOpCache`: No-op cache for testing/disabling caching
  - Demonstrates mixing custom and default backends

### Changed

- **L2 Cache Optimization**: ConnectionManager replaces repeated connection creation
  - Redis now uses `ConnectionManager` for persistent connections
  - Automatic reconnection on connection loss
  - Reduced connection overhead for all Redis operations
  - Applied to all methods: get, set, remove, health_check, and streaming operations

- **TTL-Based L2-to-L1 Promotion**: Promotion now preserves Redis TTL
  - Added `L2Cache::get_with_ttl()` method returning `(value, Option<Duration>)`
  - Updated promotion logic in `get()`, `get_or_compute_with()`, and `get_or_compute_typed()`
  - Promoted entries keep the same expiration as L2 instead of using the default strategy TTL
  - More accurate cache consistency across tiers
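The promotion rule reduces to a few std-only lines. This is a simplified synchronous sketch; `promotion_ttl` is a hypothetical name for illustration, not the crate's API:

```rust
use std::time::Duration;

/// Choose the TTL for an entry being promoted from L2 to L1.
/// If L2 reports a remaining TTL, reuse it so both tiers expire
/// together; otherwise fall back to the strategy's default TTL.
fn promotion_ttl(l2_remaining: Option<Duration>, default_ttl: Duration) -> Duration {
    l2_remaining.unwrap_or(default_ttl)
}

fn main() {
    let default_ttl = Duration::from_secs(300);
    // L2 entry has 42 seconds left: the promoted L1 entry keeps it.
    assert_eq!(
        promotion_ttl(Some(Duration::from_secs(42)), default_ttl),
        Duration::from_secs(42)
    );
    // No TTL reported by L2: use the strategy default.
    assert_eq!(promotion_ttl(None, default_ttl), default_ttl);
}
```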

- **CacheManager Refactoring**: Now uses trait objects internally
  - Stores `Arc<dyn CacheBackend>` and `Arc<dyn L2CacheBackend>` instead of concrete types
  - Legacy `CacheManager::new()` constructor maintained for backward compatibility
  - Streaming methods now return an error if no streaming backend is configured

### Internal

- Added `src/traits.rs` with comprehensive trait definitions and documentation
- Added `src/builder.rs` with `CacheSystemBuilder` implementation
- `CacheManager` fields changed to trait objects (breaking change for direct field access)
- Added `async-trait = "0.1"` dependency
- Added `rand = "0.8"` dev-dependency for examples

### Migration Guide

**For most users:** No changes required if using `CacheSystem::new()` or `cache_manager()` methods.

**If implementing custom backends:**
```rust
// Old (v0.2.x)
let cache = CacheSystem::new().await?;

// New (v0.3.0) - Same API, now with pluggable backends support
let cache = CacheSystem::new().await?;  // Still works!

// New (v0.3.0) - Custom backends
let cache = CacheSystemBuilder::new()
    .with_l1(my_custom_l1)
    .build()
    .await?;
```

**Breaking Changes:**
- `CacheManager` struct fields are now trait objects (not breaking if using methods)
- `CacheManager::new_with_backends()` signature changed to include `streaming_backend` parameter

**See:** `examples/custom_backends.rs` for complete migration examples

### Performance

- No regression on default backends (Moka + Redis)
- ConnectionManager reduces Redis connection overhead by ~15-20%
- Trait-based dispatch adds <5% overhead (negligible in practice)

## [0.2.1] - 2025-01-04

### Changed
- **Metadata**: Added `documentation` field to Cargo.toml pointing to docs.rs
  - Enables automatic documentation link on crates.io page
  - Improves discoverability for users

### Removed
- Removed internal tracking documents (PROJECT_COMPLETE.md, NEXT_STEPS.md, MIGRATION_SUMMARY.md)
  - These were development artifacts not needed by end users
  - Cleaner package for crates.io publication

### Internal
- No code changes - metadata and cleanup only
- Fully backward compatible with 0.2.0

## [0.2.0] - 2025-01-03

### Added

**🎉 Major Feature: Type-Safe Database Caching**

- **`get_or_compute_typed<T>()`** - New method for automatic type-safe caching
  - Generic over any type implementing `Serialize + DeserializeOwned`
  - Automatic serialization/deserialization (no manual JSON conversion)
  - Full L1→L2 cache flow with stampede protection
  - Perfect for database queries, API calls, complex computations
  - **Reduces boilerplate from 40+ lines to 5 lines**
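The shape of the method can be illustrated with a std-only, synchronous sketch, using `Display`/`FromStr` as stand-ins for `Serialize`/`DeserializeOwned` (the real method is async, serde-based, and stampede-protected; `StringCache` is hypothetical):

```rust
use std::collections::HashMap;
use std::fmt::Display;
use std::str::FromStr;

/// Simplified stand-in for `get_or_compute_typed`: the cache
/// stores strings, while callers work with typed values.
struct StringCache {
    store: HashMap<String, String>,
}

impl StringCache {
    fn get_or_compute_typed<T>(&mut self, key: &str, compute: impl FnOnce() -> T) -> T
    where
        T: Display + FromStr,
        T::Err: std::fmt::Debug,
    {
        if let Some(raw) = self.store.get(key) {
            // Cache hit: deserialize the stored representation.
            return raw.parse().expect("cached value should round-trip");
        }
        // Cache miss: compute, serialize, store, and return.
        let value = compute();
        self.store.insert(key.to_string(), value.to_string());
        value
    }
}

fn main() {
    let mut cache = StringCache { store: HashMap::new() };
    // First call misses and computes; second call hits the cache.
    let a: u64 = cache.get_or_compute_typed("answer", || 42);
    let b: u64 = cache.get_or_compute_typed("answer", || panic!("should hit cache"));
    assert_eq!(a, 42);
    assert_eq!(b, 42);
}
```

The boilerplate savings come from the same place as in the real method: serialization and deserialization live inside the generic helper instead of at every call site.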

**Examples:**
- `examples/database_caching.rs` - Comprehensive demonstration with multiple types
- README section "Type-Safe Database Caching" with before/after comparisons

**Dependencies:**
- Added `serde = { version = "1.0", features = ["derive"] }` for trait bounds

### Documentation

- Added comprehensive "Type-Safe Database Caching" section to README
- Added before/after comparison showing 40+ lines → 5 lines reduction
- Added examples for PostgreSQL, API calls, complex computations
- Updated method documentation with detailed examples and performance notes

### Benefits

- ✅ **Type Safety**: Compiler enforces correct types at compile time
- ✅ **Zero Boilerplate**: Eliminates manual serialize/deserialize code
- ✅ **Full Cache Features**: L1+L2, stampede protection, auto-promotion
- ✅ **Generic**: Works with any serializable type (User, Product, Report, etc.)
- ✅ **Performance**: Same cache performance plus ~10–50µs deserialization overhead

### Breaking Changes

**None** - This is a fully backward compatible release. All existing code continues to work.
- New method is additive only
- Existing `get_or_compute_with()` unchanged
- Version bump to 0.2.0 due to new public API (semver minor)

## [0.1.2] - 2025-01-03

### Changed
- **Documentation**: Significantly improved REDIS_URL configuration documentation
  - Added configuration priority order (programmatic > env var > .env > default)
  - Added use case examples (development, production, Docker, testing)
  - Added Redis URL format specification with examples
  - Added comprehensive troubleshooting section for common connection issues
  - Better organization with clear headings and code examples

### Internal
- No code changes - documentation-only release
- Fully backward compatible with 0.1.1 and 0.1.0

## [0.1.1] - 2025-01-03

### Changed
- **Documentation**: Removed RPS column from library comparison table in README
- **Documentation**: Removed unavailable docs.rs link from Contact section
- **Documentation**: Fixed GitHub repository URLs to use correct username

### Internal
- No code changes - documentation-only release
- Fully backward compatible with 0.1.0

## [0.1.0] - 2025-01-03

### Added

**Core Features:**
- Multi-tier caching architecture with L1 (Moka) and L2 (Redis)
- Cache stampede protection using DashMap + Mutex request coalescing
- Automatic L2-to-L1 promotion for frequently accessed data
- Comprehensive statistics tracking (hit rates, promotions, in-flight requests)
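The coalescing idea behind the stampede protection can be sketched with the standard library alone (the crate itself uses DashMap plus an async Mutex; this synchronous `OnceLock` version is illustrative only, and a production version would also evict the in-flight entry once the value lands in the cache):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, OnceLock};

/// Request coalescer: concurrent callers for the same key share one
/// computation instead of each hitting the backing store.
struct Coalescer {
    in_flight: Mutex<HashMap<String, Arc<OnceLock<String>>>>,
}

impl Coalescer {
    fn get_or_compute(&self, key: &str, compute: impl FnOnce() -> String) -> String {
        // Grab (or create) the shared cell for this key.
        let cell = {
            let mut map = self.in_flight.lock().unwrap();
            map.entry(key.to_string())
                .or_insert_with(|| Arc::new(OnceLock::new()))
                .clone()
        };
        // Only one caller runs `compute`; the rest block until the
        // value is ready, then clone the shared result.
        cell.get_or_init(compute).clone()
    }
}

fn main() {
    let c = Coalescer { in_flight: Mutex::new(HashMap::new()) };
    assert_eq!(c.get_or_compute("k", || "v".to_string()), "v");
    // A second call reuses the already-initialized cell.
    assert_eq!(c.get_or_compute("k", || panic!("already computed")), "v");
}
```

`OnceLock::get_or_init` guarantees the closure runs at most once even under contention, which is exactly the property that collapses a thundering herd into a single backend call.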

**Cache Strategies:**
- `RealTime` - 10-second TTL for fast-changing data
- `ShortTerm` - 5-minute TTL for frequently accessed data
- `MediumTerm` - 1-hour TTL for moderately stable data
- `LongTerm` - 3-hour TTL for stable data
- `Custom(Duration)` - User-defined TTL
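The strategy-to-TTL mapping above can be sketched as an enum. The durations mirror the documented defaults; the `ttl` method name is an assumption for illustration:

```rust
use std::time::Duration;

/// TTL strategies mirroring the documented defaults.
enum CacheStrategy {
    RealTime,
    ShortTerm,
    MediumTerm,
    LongTerm,
    Custom(Duration),
}

impl CacheStrategy {
    /// Resolve the strategy to a concrete TTL.
    fn ttl(&self) -> Duration {
        match self {
            CacheStrategy::RealTime => Duration::from_secs(10),
            CacheStrategy::ShortTerm => Duration::from_secs(5 * 60),
            CacheStrategy::MediumTerm => Duration::from_secs(60 * 60),
            CacheStrategy::LongTerm => Duration::from_secs(3 * 60 * 60),
            CacheStrategy::Custom(ttl) => *ttl,
        }
    }
}

fn main() {
    assert_eq!(CacheStrategy::ShortTerm.ttl(), Duration::from_secs(300));
    // Custom carries any user-supplied duration, including sub-second.
    assert_eq!(
        CacheStrategy::Custom(Duration::from_millis(250)).ttl(),
        Duration::from_millis(250)
    );
}
```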

**Redis Streams Support:**
- `publish_to_stream()` - XADD with automatic trimming
- `read_stream_latest()` - XREVRANGE for latest N entries
- `read_stream()` - XREAD for blocking/non-blocking consumption

**API Methods:**
- `CacheSystem::new()` - Initialize with default Redis URL
- `CacheSystem::with_redis_url()` - Initialize with custom URL
- `CacheManager::get()` - Retrieve from cache (L1 → L2 fallback)
- `CacheManager::set_with_strategy()` - Store with TTL strategy
- `CacheManager::get_or_compute_with()` - Compute-on-miss with stampede protection
- `CacheManager::get_stats()` - Retrieve cache statistics

**Examples:**
- `basic_usage.rs` - Quick start and fundamental operations
- `stampede_protection.rs` - Demonstrates concurrency handling
- `redis_streams.rs` - Event streaming patterns
- `cache_strategies.rs` - All TTL strategies showcase
- `advanced_usage.rs` - L2-to-L1 promotion and compute-on-miss
- `health_monitoring.rs` - Health checks and statistics

**Documentation:**
- Comprehensive README.md with architecture diagrams
- Full rustdoc API documentation
- Migration guides from `cached` and `redis-rs`
- Performance benchmarks and comparison tables
- MIT OR Apache-2.0 dual licensing

### Performance

**Production Metrics** (from source project):
- **Throughput**: 16,829+ requests/second sustained
- **Latency**: 5.2ms average response time
- **Cache Hit Rate**: 95% overall (L1: 90%, L2: 75%)
- **Stampede Protection**: ~99% latency reduction (534ms → 5.2ms in high-concurrency scenarios)
- **Success Rate**: 100% (zero failures under load)

**Resource Usage:**
- L1 Cache Capacity: 2,000 entries
- L2 Redis Connections: Multiplexed async connections
- Memory Footprint: ~50MB for typical workload
- CPU Overhead: <5% at 16k RPS

### Dependencies

- `moka = "0.12"` - L1 in-memory cache
- `redis = "0.32"` - L2 Redis client
- `tokio = "1.28"` - Async runtime
- `serde_json = "1.0"` - JSON serialization
- `anyhow = "1.0"` - Error handling
- `dashmap = "5.5"` - Concurrent HashMap for stampede protection

### Notes

This is the initial release extracted from a production web server project that serves a crypto investment dashboard. The library has been battle-tested at scale and proven reliable under high load.

The cache system was originally developed as `cache_system_island` module and has been refactored into a standalone, reusable library with zero business logic coupling.

[Unreleased]: https://github.com/thichuong/multi-tier-cache/compare/v0.6.5...HEAD
[0.3.0]: https://github.com/thichuong/multi-tier-cache/compare/v0.2.1...v0.3.0
[0.2.1]: https://github.com/thichuong/multi-tier-cache/compare/v0.2.0...v0.2.1
[0.2.0]: https://github.com/thichuong/multi-tier-cache/compare/v0.1.2...v0.2.0
[0.1.2]: https://github.com/thichuong/multi-tier-cache/compare/v0.1.1...v0.1.2
[0.1.1]: https://github.com/thichuong/multi-tier-cache/compare/v0.1.0...v0.1.1
[0.1.0]: https://github.com/thichuong/multi-tier-cache/releases/tag/v0.1.0