# nostro2-cache

Event-ID deduplication cache strategies for Nostr relay implementations.
## Strategies
### 1. DashMapCache (Lock-Free)

- Implementation: `dashmap::DashMap` (a concurrent hash map)
- Pros: lock-free, excellent concurrent performance
- Cons: no LRU eviction; falls back to a simple clear-when-full strategy
- Best for: high-throughput workloads with many concurrent writers
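The clear-when-full policy can be sketched with the standard library alone. This is an illustrative stand-in, not the crate's API: the real `DashMapCache` stores IDs in a lock-free `dashmap::DashMap` and needs no surrounding mutex, and the `ClearWhenFullCache` name, its capacity handling, and the `insert` return convention are assumptions.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Illustrative stand-in: only the clear-when-full eviction policy is
// shown; the actual strategy uses a lock-free dashmap::DashMap instead
// of Mutex<HashSet>.
struct ClearWhenFullCache {
    seen: Mutex<HashSet<String>>,
    capacity: usize,
}

impl ClearWhenFullCache {
    fn new(capacity: usize) -> Self {
        Self { seen: Mutex::new(HashSet::new()), capacity }
    }

    /// Returns `true` if the event ID was not seen before.
    fn insert(&self, id: &str) -> bool {
        let mut seen = self.seen.lock().unwrap();
        // Clear-when-full: drop the whole set instead of evicting entries
        // one by one. Cheap and simple, but recently seen IDs are forgotten.
        if seen.len() >= self.capacity {
            seen.clear();
        }
        seen.insert(id.to_string())
    }
}

fn main() {
    let cache = ClearWhenFullCache::new(2);
    assert!(cache.insert("a"));  // new
    assert!(!cache.insert("a")); // duplicate
    assert!(cache.insert("b"));  // new; cache is now full
    assert!(cache.insert("c"));  // triggers a full clear first
    assert!(cache.insert("a"));  // "a" was forgotten by the clear
    println!("clear-when-full ok");
}
```

The trade-off is visible in the last assertion: after a clear, previously seen IDs are accepted again, so this strategy trades a small duplicate window for lock-free speed.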
### 2. ParkingLotLruCache

- Implementation: `parking_lot::Mutex<lru::LruCache>`
- Pros: fast mutex, automatic LRU eviction, bounded memory
- Cons: mutex contention under very high concurrency
- Best for: moderate concurrency with memory constraints
### 3. StdMutexLruCache

- Implementation: `std::sync::Mutex<lru::LruCache>`
- Pros: no extra synchronization dependency (standard-library mutex), automatic LRU eviction
- Cons: slower than `parking_lot` under contention
- Best for: simple use cases where dependencies should stay minimal
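Both LRU strategies wrap `lru::LruCache` in a mutex. Since `lru` is an external crate, the sketch below approximates the bounded dedup behavior with the standard library only; the `LruDedup` type, its insertion-order eviction (true LRU would also refresh an ID's position on a duplicate hit), and the `insert` return convention are assumptions for illustration.

```rust
use std::collections::{HashSet, VecDeque};
use std::sync::Mutex;

// Std-only stand-in for Mutex<lru::LruCache>: a HashSet for membership
// plus a VecDeque recording insertion order for eviction.
struct LruDedup {
    inner: Mutex<(HashSet<String>, VecDeque<String>)>,
    capacity: usize,
}

impl LruDedup {
    fn new(capacity: usize) -> Self {
        Self { inner: Mutex::new((HashSet::new(), VecDeque::new())), capacity }
    }

    /// Returns `true` if the event ID was not seen before.
    fn insert(&self, id: &str) -> bool {
        let mut guard = self.inner.lock().unwrap();
        let (seen, order) = &mut *guard;
        if seen.contains(id) {
            return false; // duplicate
        }
        if seen.len() >= self.capacity {
            // Evict the oldest entry so memory stays bounded.
            if let Some(oldest) = order.pop_front() {
                seen.remove(&oldest);
            }
        }
        seen.insert(id.to_string());
        order.push_back(id.to_string());
        true
    }
}

fn main() {
    let cache = LruDedup::new(2);
    assert!(cache.insert("a"));
    assert!(cache.insert("b"));
    assert!(!cache.insert("a")); // duplicate, still cached
    assert!(cache.insert("c"));  // evicts the oldest entry, "a"
    assert!(cache.insert("a"));  // "a" was evicted, so it is new again
    println!("bounded dedup ok");
}
```

Unlike clear-when-full, eviction here is incremental: memory stays bounded without ever forgetting the whole working set at once.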
## Benchmarks

Run the benchmarks to compare strategies:

```sh
# Run all benchmarks
cargo bench

# Run a specific benchmark
cargo bench --bench <benchmark_name>

# View the HTML reports (written by Criterion)
open target/criterion/report/index.html
```
### Benchmark Scenarios

- Single Thread Insert: pure insertion performance
- Multi Thread Insert: concurrent inserts with 2, 4, 8, 10, and 20 threads
- Realistic Relay Pattern: 10 threads with a 20% duplicate rate
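The realistic-relay scenario can be reproduced in miniature: several writer threads share one cache, and every fifth event ID repeats the previous one, giving a 20% duplicate rate. The thread and event counts, the ID scheme, and the shared `Mutex<HashSet>` standing in for one of the cache strategies are all assumptions for illustration.

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::thread;

/// Runs 10 writer threads, 100 events each; every 5th ID repeats the
/// previous one, so exactly 20% of the 1000 inserts are duplicates.
/// Returns how many events the shared cache accepted as new.
fn run_simulation() -> usize {
    let cache = Arc::new(Mutex::new(HashSet::new()));
    let mut handles = Vec::new();

    for t in 0..10 {
        let cache = Arc::clone(&cache);
        handles.push(thread::spawn(move || {
            let mut fresh = 0usize;
            for i in 0..100 {
                let id = if i % 5 == 4 {
                    format!("event-{t}-{}", i - 1) // duplicate of the previous ID
                } else {
                    format!("event-{t}-{i}")
                };
                if cache.lock().unwrap().insert(id) {
                    fresh += 1;
                }
            }
            fresh
        }));
    }

    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let fresh = run_simulation();
    println!("accepted {fresh} of 1000 events"); // 800 fresh, 200 duplicates
}
```

Because each thread's duplicates target its own earlier IDs, the result is deterministic regardless of interleaving, which makes this pattern usable as a correctness check as well as a benchmark shape.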
## Usage

```rust
use nostro2_cache::DashMapCache;

// Choose your strategy
let cache = DashMapCache::new(100_000);

// Check for duplicates: `insert` returns `true` for a previously unseen event ID
if cache.insert(event_id) {
    // first time seeing this event; process it
} else {
    // duplicate; skip it
}
```
## Recommendations

- nostro2-ring-relay: use `DashMapCache` for lock-free consistency
- nostro2-relay (async): use `ParkingLotLruCache` for bounded memory
- Low concurrency: use `StdMutexLruCache` to minimize dependencies