tanh_simd

pub fn tanh_simd<F>(x: &ArrayView1<'_, F>) -> Array1<F>
where F: Float + SimdUnifiedOps,

Compute the hyperbolic tangent of each element (SIMD-accelerated).

Computes tanh(x) for each element in the array.

§Arguments

  • x - Input 1D array

§Returns

Array1<F> with the same length as input, with hyperbolic tangent values.

§Performance

  • Auto-vectorization: performance comes from compiler auto-vectorization of the element-wise loop
  • Speedup: 2-4x on large arrays via auto-vectorization (see the timing sketch after this list)
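
A rough way to check the speedup claim on your own hardware is to time tanh_simd against a scalar mapv baseline. This is a minimal sketch, not a benchmark from the crate; it assumes Array1 is re-exported at scirs2_core::ndarray alongside the array! macro used in §Examples, and actual numbers depend on the CPU and optimization flags.

use std::time::Instant;
use scirs2_core::ndarray::Array1;
use scirs2_core::ndarray_ext::elementwise::tanh_simd;

// One million evenly spaced points in [-5, 5).
let x: Array1<f64> = (0..1_000_000).map(|i| i as f64 * 1e-5 - 5.0).collect();

// Baseline: one scalar tanh call per element.
let t0 = Instant::now();
let scalar = x.mapv(f64::tanh);
let t_scalar = t0.elapsed();

// SIMD-accelerated path documented here.
let t1 = Instant::now();
let fast = tanh_simd(&x.view());
let t_simd = t1.elapsed();

assert_eq!(scalar.len(), fast.len());
println!("scalar: {:?}  tanh_simd: {:?}", t_scalar, t_simd);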

§Mathematical Definition

tanh(x) = sinh(x) / cosh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
Range: (-1, 1)
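
The exponential form can be checked numerically against the standard library; this sketch uses only std (it is not the crate's SIMD implementation):

// Evaluate (e^x - e^(-x)) / (e^x + e^(-x)) directly and compare to std's tanh.
fn tanh_from_exp(x: f64) -> f64 {
    let (ep, en) = (x.exp(), (-x).exp());
    (ep - en) / (ep + en)
}

for &x in &[0.0_f64, 0.5, 1.0, -1.0, 3.0] {
    // Agrees with f64::tanh to within floating-point rounding for moderate x.
    assert!((tanh_from_exp(x) - x.tanh()).abs() < 1e-12);
}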

§Examples

use scirs2_core::ndarray::array;
use scirs2_core::ndarray_ext::elementwise::tanh_simd;

let x = array![0.0_f64, 1.0, -1.0, 10.0];
let result = tanh_simd(&x.view());

// tanh(0) = 0, tanh(1) ≈ 0.762, tanh(-1) ≈ -0.762, tanh(∞) → 1
assert!((result[0] - 0.0_f64).abs() < 1e-10);
assert!((result[1] - 0.7615941559_f64).abs() < 1e-9);
assert!((result[2] + 0.7615941559_f64).abs() < 1e-9);
assert!((result[3] - 1.0_f64).abs() < 1e-8); // tanh(10) ≈ 1 - 4.1e-9

§Edge Cases

  • Empty array: Returns empty array
  • Zero: tanh(0) = 0
  • Asymptotic: tanh(x) → ±1 as x → ±∞
  • Anti-symmetric: tanh(-x) = -tanh(x)
  • NaN: Returns NaN (preserves NaN); the sketch below exercises these cases
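
A minimal sketch exercising these cases, assuming the same imports as §Examples (and that Array1 is re-exported at scirs2_core::ndarray):

use scirs2_core::ndarray::{array, Array1};
use scirs2_core::ndarray_ext::elementwise::tanh_simd;

// Empty input produces an empty output.
let empty = Array1::<f64>::from_vec(vec![]);
assert_eq!(tanh_simd(&empty.view()).len(), 0);

// Anti-symmetry: tanh(-x) = -tanh(x).
let x = array![0.25_f64, 1.5, 3.0];
let neg = x.mapv(|v| -v);
let (y, yn) = (tanh_simd(&x.view()), tanh_simd(&neg.view()));
for i in 0..x.len() {
    assert!((y[i] + yn[i]).abs() < 1e-12);
}

// NaN propagates; zero maps to zero.
let mixed = array![f64::NAN, 0.0];
let out = tanh_simd(&mixed.view());
assert!(out[0].is_nan());
assert!(out[1].abs() < 1e-12);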

§Applications

  • Neural Networks: Tanh activation function (classic activation)
  • Machine Learning: Gradient clipping, normalization layers
  • Reinforcement Learning: Policy networks, value functions
  • Signal Processing: Soft limiting, saturation
  • Optimization: Smooth approximation to sign function
  • Physics: Relativistic velocity addition

§Note on Neural Networks

Tanh was historically the most popular activation function before ReLU. It’s still widely used in RNNs, LSTMs, and GRUs for gating mechanisms.

Gradient: d/dx tanh(x) = 1 - tanh²(x) = sech²(x)
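
As an illustration of the gradient identity (not a crate API), a forward and backward pass through a tanh activation can reuse the forward result; the imports are the same as in §Examples:

use scirs2_core::ndarray::array;
use scirs2_core::ndarray_ext::elementwise::tanh_simd;

// Forward pass: a = tanh(z) for a small batch of pre-activations.
let z = array![-2.0_f64, -0.5, 0.0, 0.5, 2.0];
let a = tanh_simd(&z.view());

// Backward pass: da/dz = 1 - a², reusing the forward result.
let grad = a.mapv(|ai| 1.0 - ai * ai);

// The gradient peaks at 1 for z = 0 and decays toward the tails.
assert!((grad[2] - 1.0).abs() < 1e-12);
assert!(grad[0] < grad[1] && grad[1] < grad[2]);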