§Concurrency Patterns - Rust Book Chapter 16
This module demonstrates thread-safe caching with minimal shared state from The Rust Book Chapter 16.
§Key Concepts Demonstrated
- **Shared State with Mutex (Chapter 16.3)**
  - `front_cache: Mutex<HashMap<...>>` for a thread-safe in-memory cache
  - Lock held for minimal time to reduce contention
  - Appropriate use case: infrequent updates, small critical sections
- **When NOT to Use `Arc<Mutex<T>>` (Chapters 15 and 16)**
  - This module does NOT use `Arc<Mutex<T>>` for the main cache
  - Instead uses a persistent database (`sled`) with built-in concurrency
  - Demonstrates that not all shared state needs `Arc<Mutex<T>>`
- **Process-Level Concurrency**
  - Background cache server process (optional)
  - TCP communication between processes
  - Demonstrates concurrency beyond threads
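The first concept above can be sketched with a minimal front cache: a `Mutex`-guarded `HashMap` where each operation takes the lock only for the duration of one map access. The `FrontCache` type and its string key/value choice are illustrative assumptions, not the module's actual definitions.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

/// Hypothetical front cache: a Mutex-guarded HashMap, per Chapter 16.3.
struct FrontCache {
    entries: Mutex<HashMap<String, String>>,
}

impl FrontCache {
    fn new() -> Self {
        FrontCache { entries: Mutex::new(HashMap::new()) }
    }

    /// Lock, clone the value out, and drop the guard at the end of the
    /// expression, keeping the critical section small.
    fn get(&self, key: &str) -> Option<String> {
        self.entries.lock().unwrap().get(key).cloned()
    }

    fn put(&self, key: String, value: String) {
        self.entries.lock().unwrap().insert(key, value);
    }
}

fn main() {
    let cache = FrontCache::new();
    cache.put("rust".to_string(), "systems language".to_string());
    assert_eq!(cache.get("rust"), Some("systems language".to_string()));

    // Scoped threads can share &cache without Arc: the scope guarantees
    // the borrow outlives every worker.
    let cache_ref = &cache;
    std::thread::scope(|s| {
        for i in 0..4 {
            s.spawn(move || cache_ref.put(format!("k{i}"), i.to_string()));
        }
    });
    assert_eq!(cache.get("k2"), Some("2".to_string()));
}
```

Note that no `Arc` appears: scoped threads borrow the cache, so reference counting is unnecessary.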
§Design Decisions
Why `Mutex<HashMap>` for the front cache?
- Small, frequently accessed data (LRU cache)
- Lock contention is acceptable (not in a hot loop)
- Simpler than lock-free alternatives
Why NOT `Arc<Mutex<Vec>>` for results?
- Results are returned, not shared
- The caller owns the data
- No need for reference counting
Why a persistent database instead of in-memory?
- Cache survives process restarts
- Built-in concurrency control
- Automatic disk management
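The "results are returned, not shared" decision can be illustrated with a sketch: workers each own their input, compute independently, and hand their output back through `JoinHandle::join`, so the caller ends up owning a plain `Vec` with no `Arc<Mutex<Vec>>` in sight. The `search_shard` helper and its data are hypothetical.

```rust
use std::thread;

/// Hypothetical per-shard search: each worker owns its shard and
/// returns an owned Vec of hits; nothing is shared while running.
fn search_shard(shard: Vec<&'static str>, query: &'static str) -> Vec<String> {
    shard
        .into_iter()
        .filter(|doc| doc.contains(query))
        .map(String::from)
        .collect()
}

fn main() {
    let shards = vec![
        vec!["rust book", "go tour"],
        vec!["rustup guide", "python docs"],
    ];

    // One worker per shard; ownership of each shard moves into its thread.
    let handles: Vec<_> = shards
        .into_iter()
        .map(|shard| thread::spawn(move || search_shard(shard, "rust")))
        .collect();

    // join() transfers each worker's results back; the caller now owns
    // everything, so no reference counting or locking is needed.
    let results: Vec<String> = handles
        .into_iter()
        .flat_map(|h| h.join().unwrap())
        .collect();

    assert_eq!(results, vec!["rust book".to_string(), "rustup guide".to_string()]);
}
```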
§Learning Notes
This module shows that effective concurrency doesn’t always mean:
- Using `Arc` everywhere
- Sharing everything with `Mutex`
- Complex lock-free algorithms
Sometimes the best approach is:
- Minimal shared state
- Clear ownership boundaries
- Letting libraries handle concurrency (like `sled`)
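A minimal sketch of "minimal shared state with clear ownership boundaries" is a channel: each worker owns its data until `send` transfers ownership to the receiver, so no `Arc` or `Mutex` is involved. This uses only `std::sync::mpsc` and is not taken from the module itself.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Each worker owns its Sender clone and its value; send() moves the
    // value to the receiver. No lock is ever taken by our code.
    for id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(id * 10).unwrap();
        });
    }
    drop(tx); // close the channel so rx.iter() terminates

    let mut received: Vec<i32> = rx.iter().collect();
    received.sort(); // arrival order is nondeterministic
    assert_eq!(received, vec![0, 10, 20]);
}
```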
Structs§
- `SearchResultCache`: Cross-process cache client: tries a background TCP server; falls back to local cache.