nostro2-cache 0.1.0

Lock-free LRU deduplication cache for the Nostr protocol.
Homepage: 42Pupusas/NostrO2

nostro2-cache

Event ID deduplication cache strategies for Nostr relay implementations.

Strategies

1. DashMapCache (Lock-Free)

  • Implementation: dashmap::DashMap (concurrent hashmap)
  • Pros: Lock-free, excellent concurrent performance
  • Cons: No LRU eviction; when full, the entire cache is cleared at once
  • Best for: High-throughput, many concurrent writers

2. ParkingLotLruCache

  • Implementation: parking_lot::Mutex<lru::LruCache>
  • Pros: Fast mutex, automatic LRU eviction, bounded memory
  • Cons: Mutex contention under very high concurrency
  • Best for: Moderate concurrency with memory constraints

3. StdMutexLruCache

  • Implementation: std::sync::Mutex<lru::LruCache>
  • Pros: No external deps, automatic LRU eviction
  • Cons: Slower than parking_lot under contention
  • Best for: Simple use cases, minimal dependencies
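All three strategies share the same insert-returns-is-new contract. A hypothetical trait (not shown to be defined by the crate) makes them interchangeable; the stand-in implementation below is std-only and uses an unbounded HashSet rather than an LruCache:

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// Hypothetical unifying trait; the crate may expose its strategies
// differently.
trait DedupCache: Send + Sync {
    /// Returns true if the event ID has not been seen before.
    fn insert(&self, event_id: [u8; 32]) -> bool;
}

// Std-only stand-in (an unbounded HashSet rather than an LRU cache).
struct StdSetCache(Mutex<HashSet<[u8; 32]>>);

impl DedupCache for StdSetCache {
    fn insert(&self, event_id: [u8; 32]) -> bool {
        self.0.lock().unwrap().insert(event_id)
    }
}

// A relay can then be generic over the chosen strategy.
fn handle_event<C: DedupCache>(cache: &C, event_id: [u8; 32]) -> bool {
    cache.insert(event_id) // true = forward, false = drop duplicate
}

fn main() {
    let cache = StdSetCache(Mutex::new(HashSet::new()));
    assert!(handle_event(&cache, [9u8; 32]));
    assert!(!handle_event(&cache, [9u8; 32]));
}
```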

Benchmarks

Run benchmarks to compare strategies:

# Run all benchmarks
cargo bench --package nostro2-cache

# Run specific benchmark
cargo bench --package nostro2-cache -- single_thread
cargo bench --package nostro2-cache -- multi_thread
cargo bench --package nostro2-cache -- realistic

# View HTML reports
open target/criterion/report/index.html

Benchmark Scenarios

  1. Single Thread Insert: Pure insertion performance
  2. Multi Thread Insert: Concurrent insert with 2, 4, 8, 10, 20 threads
  3. Realistic Relay Pattern: 10 threads with 20% duplicate rate
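The realistic pattern can be approximated with std threads alone. The sketch below assumes nothing about the actual criterion harness and uses a plain Mutex<HashSet> as the cache under test:

```rust
use std::collections::HashSet;
use std::sync::{Arc, Mutex};
use std::thread;

// Spawns 10 writer threads where roughly 20% of inserted IDs collide,
// and returns how many duplicate inserts the cache rejected.
fn run_workload() -> usize {
    let cache = Arc::new(Mutex::new(HashSet::new()));
    let mut handles = Vec::new();
    for t in 0..10u64 {
        let cache = Arc::clone(&cache);
        handles.push(thread::spawn(move || {
            let mut duplicates = 0usize;
            for i in 0..1_000u64 {
                // Every 5th ID is shared across all threads -> ~20% duplicates.
                let id = if i % 5 == 0 { i } else { t * 1_000_000 + i };
                if !cache.lock().unwrap().insert(id) {
                    duplicates += 1;
                }
            }
            duplicates
        }));
    }
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // 200 shared IDs, each inserted by 10 threads: 200 * 9 = 1800 rejections.
    println!("duplicate inserts detected: {}", run_workload());
}
```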

Usage

use nostro2_cache::{DashMapCache, ParkingLotLruCache, StdMutexLruCache};

// Choose a strategy and size it for the expected event volume
let cache = DashMapCache::new(10_000);

// `insert` returns true for a previously unseen event ID
// (`event_id` here is the ID of an incoming Nostr event)
if cache.insert(event_id) {
    println!("New event!");
} else {
    println!("Duplicate event, skip");
}

Recommendations

  • nostro2-ring-relay: Use DashMapCache for lock-free consistency
  • nostro2-relay (async): Use ParkingLotLruCache for bounded memory
  • Low concurrency: Use StdMutexLruCache to minimize dependencies