pub fn normalize_excess(value: f64) -> f64
Normalizes an excess gain or loss to [0, 1) using a precomputed tanh lookup table.
Tanh is mathematically natural for this purpose:
- Maps [0, ∞) → [0, 1)
- Zero input → zero output
- Monotonically increasing
- Smooth gradients for backpropagation
The scaling factor (5.0) is derived from the observation that typical ITH excess gains range from 0 to 20%, and we want this range to occupy most of the [0, 0.8] output space: a 20% gain maps to tanh(0.2 × 5.0) = tanh(1.0) ≈ 0.76.
Issue #96 Task #197: Uses a precomputed LUT with 0.1 steps over the [0, 5] range instead of calling exp() (50-100 CPU cycles → <1 CPU cycle per lookup).
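The approach described above can be sketched as follows. This is an illustrative reconstruction, not the crate's actual implementation: the constants `SCALE`, `STEP`, and `LUT_LEN`, the `OnceLock`-based table initialization, the linear interpolation between table entries, and the saturation above the table range are all assumptions consistent with the description, nothing more.

```rust
use std::sync::OnceLock;

const SCALE: f64 = 5.0;    // scaling factor from the docs: 20% gain → tanh(1.0)
const STEP: f64 = 0.1;     // LUT resolution, per the docs
const LUT_LEN: usize = 51; // covers [0.0, 5.0] in 0.1 steps

// Hypothetical lazily-built table; a real implementation might
// precompute this at build time instead.
fn tanh_lut() -> &'static [f64; LUT_LEN] {
    static LUT: OnceLock<[f64; LUT_LEN]> = OnceLock::new();
    LUT.get_or_init(|| {
        let mut lut = [0.0; LUT_LEN];
        for (i, slot) in lut.iter_mut().enumerate() {
            *slot = (i as f64 * STEP).tanh();
        }
        lut
    })
}

pub fn normalize_excess(value: f64) -> f64 {
    let lut = tanh_lut();
    // Take the absolute value, scale, and clamp into the table's [0, 5] range.
    let x = (value.abs() * SCALE).min((LUT_LEN - 1) as f64 * STEP);
    let idx = (x / STEP) as usize;
    if idx >= LUT_LEN - 1 {
        return lut[LUT_LEN - 1]; // saturate at tanh(5.0) ≈ 0.9999
    }
    // Linear interpolation between adjacent table entries (an assumed
    // detail; a nearest-entry lookup would also match the description).
    let frac = x / STEP - idx as f64;
    lut[idx] + frac * (lut[idx + 1] - lut[idx])
}
```

Interpolating between entries keeps the output monotonic and smooth at the cost of one extra multiply; with only 0.1-wide steps, a nearest-entry lookup would introduce visible plateaus in the gradient.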
§Arguments
`value` - Raw excess gain or loss (the absolute value is used)
§Returns
Normalized value in [0, 1)