Global entropy cache for multi-symbol processors.

Issue #145: Multi-Symbol Entropy Cache Sharing.

Provides a thread-safe, shared entropy cache across all processors.
§Architecture
Instead of each processor maintaining its own 128-entry cache:
- Before: 20 separate caches (5 symbols × 4 thresholds × 128 entries each)
- After: 1 global cache (512-1024 entries) shared across all processors
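The before/after layout above can be sketched as a single lazily-initialised, process-wide cache that every processor obtains a handle to. Everything below (`SharedEntropyCache`, `global_cache`, the crude eviction policy) is an illustrative stand-in, not the crate's actual types:

```rust
use std::collections::HashMap;
use std::sync::{Arc, OnceLock, RwLock};

// Hypothetical stand-in for the crate's EntropyCache: a bounded map from a
// quantized-price key to a precomputed entropy value.
pub struct SharedEntropyCache {
    entries: HashMap<u64, f64>,
    capacity: usize,
}

impl SharedEntropyCache {
    pub fn new(capacity: usize) -> Self {
        Self { entries: HashMap::new(), capacity }
    }

    pub fn get(&self, key: u64) -> Option<f64> {
        self.entries.get(&key).copied()
    }

    pub fn insert(&mut self, key: u64, entropy: f64) {
        // Crude eviction: clear when full (a real cache would likely use LRU).
        if self.entries.len() >= self.capacity {
            self.entries.clear();
        }
        self.entries.insert(key, entropy);
    }
}

// One process-wide cache, lazily initialised on first use, instead of one
// 128-entry cache per processor.
static GLOBAL_CACHE: OnceLock<Arc<RwLock<SharedEntropyCache>>> = OnceLock::new();

pub fn global_cache() -> Arc<RwLock<SharedEntropyCache>> {
    GLOBAL_CACHE
        .get_or_init(|| Arc::new(RwLock::new(SharedEntropyCache::new(1024))))
        .clone()
}

fn main() {
    // Two "processors" obtain handles to the same underlying cache.
    let a = global_cache();
    let b = global_cache();
    a.write().unwrap().insert(42, 0.5);
    assert_eq!(b.read().unwrap().get(42), Some(0.5));
    assert!(Arc::ptr_eq(&a, &b));
}
```

Because the `OnceLock` initialises exactly once, all callers share one allocation, which is what collapses the 20 per-processor caches into a single pool.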
§Benefits
- Memory reduction: 20 × 128 = 2,560 entries → 1 × 1,024 entries, roughly 20-30% memory savings on multi-symbol workloads
- Hit ratio improvement: 34.5% → 50%+ (a larger cache, plus a price-based key that is symbol-independent, so entries are reused across symbols)
- Thread-safe: Arc<RwLock<>> for safe concurrent access
- Backward compatible: Local cache still available as default
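To make the thread-safety point concrete, the sketch below shares one `Arc<RwLock<...>>` map among several worker threads (one per symbol, say). The `HashMap` payload and the `fill_concurrently` helper are hypothetical stand-ins, not the crate's API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

// Spawn `workers` threads that each insert `per_worker` distinct entries into
// one shared Arc<RwLock<...>> cache, then return the final entry count.
pub fn fill_concurrently(workers: u64, per_worker: u64) -> usize {
    let cache: Arc<RwLock<HashMap<u64, f64>>> = Arc::new(RwLock::new(HashMap::new()));

    let handles: Vec<_> = (0..workers)
        .map(|worker| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                for i in 0..per_worker {
                    // Distinct key per (worker, i) pair while per_worker < 1000.
                    let key = worker * 1000 + i;
                    // Keep the write-lock scope short: lock only for the insert.
                    cache.write().unwrap().insert(key, key as f64 * 0.001);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let len = cache.read().unwrap().len();
    len
}

fn main() {
    // All 400 inserts from 4 threads land in the one shared cache.
    assert_eq!(fill_concurrently(4, 100), 400);
}
```

An `RwLock` lets many readers probe the cache concurrently while writers briefly take exclusive access, which suits a cache that is read far more often than it is updated.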
§Usage

```rust
use rangebar_core::entropy_cache_global::{get_global_entropy_cache, EntropyCache};

// Option 1: Use the global cache (recommended for multi-symbol workloads)
let cache = get_global_entropy_cache();
let mut cache_guard = cache.write();
compute_entropy_adaptive_cached(prices, &mut cache_guard);

// Option 2: Use a local cache (default, backward compatible)
let cache = Arc::new(RwLock::new(EntropyCache::new()));
```

Constants§
- GLOBAL_ENTROPY_CACHE_CAPACITY - Maximum capacity for the global entropy cache (tunable via this constant)
Statics§
- GLOBAL_ENTROPY_CACHE - Global entropy cache shared across all processors
Functions§
- create_local_entropy_cache - Create a local entropy cache (backward compatibility)
- get_global_entropy_cache - Get a reference to the global entropy cache
- warm_up_entropy_cache - Warm up the global entropy cache with deterministic price patterns
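The warm-up idea above can be illustrated with a small sketch: precompute entropy for a few deterministic synthetic price paths so that the first live lookups hit the cache instead of recomputing. Both `shannon_entropy` and `warm_up` below are invented stand-ins under assumed semantics, not the crate's `warm_up_entropy_cache` implementation:

```rust
use std::collections::HashMap;

// Toy Shannon entropy over a histogram of binned price returns; a stand-in
// for the crate's real entropy computation (hypothetical, illustration only).
fn shannon_entropy(prices: &[f64]) -> f64 {
    let mut hist: HashMap<i64, usize> = HashMap::new();
    for w in prices.windows(2) {
        // Bucket each return into 0.1%-wide bins.
        let ret = ((w[1] / w[0] - 1.0) * 1000.0).round() as i64;
        *hist.entry(ret).or_insert(0) += 1;
    }
    let n = (prices.len() - 1) as f64;
    hist.values()
        .map(|&c| {
            let p = c as f64 / n;
            -p * p.log2()
        })
        .sum()
}

// Deterministic warm-up: generate fixed synthetic price paths and preload
// their entropy values into the cache before live data arrives.
fn warm_up(cache: &mut HashMap<u64, f64>) {
    for pattern in 0..8u64 {
        let prices: Vec<f64> = (0..64)
            .map(|i| 100.0 + ((pattern as f64 + 1.0) * i as f64 * 0.37).sin())
            .collect();
        cache.insert(pattern, shannon_entropy(&prices));
    }
}

fn main() {
    let mut cache = HashMap::new();
    warm_up(&mut cache);
    assert_eq!(cache.len(), 8);
    // Entropy of these deterministic patterns is finite and non-negative.
    assert!(cache.values().all(|e| e.is_finite() && *e >= 0.0));
}
```

Because the patterns are deterministic, every process warms the cache to the same state, which keeps early hit ratios reproducible across runs.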