Function leaky_relu_simd 

pub fn leaky_relu_simd<F>(x: &ArrayView1<'_, F>, alpha: F) -> Array1<F>
where
    F: Float + SimdUnifiedOps,

Apply the Leaky ReLU / PReLU activation function using SIMD operations.

Leaky ReLU (Parametric ReLU when alpha is learned) is defined as:

  • f(x) = x, if x >= 0
  • f(x) = alpha * x, if x < 0

Leaky ReLU addresses the “dying ReLU” problem by allowing a small gradient for negative inputs, preventing neurons from becoming permanently inactive.
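For reference, the same computation can be written as a plain element-wise map. The sketch below is a hypothetical scalar helper (`leaky_relu_scalar` is not part of the crate and does not use SIMD); it only illustrates what each output element is expected to be:

use ndarray::{Array1, ArrayView1};
use num_traits::Float;

// Hypothetical scalar reference (not the crate's SIMD path):
// returns x for x >= 0, and alpha * x for x < 0, element by element.
fn leaky_relu_scalar<F: Float>(x: &ArrayView1<'_, F>, alpha: F) -> Array1<F> {
    x.mapv(|v| if v >= F::zero() { v } else { alpha * v })
}

The SIMD version should produce the same results as this scalar map, only computed in vector lanes for throughput.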

§Arguments

  • x - Input array
  • alpha - Slope for negative inputs (commonly 0.01 for Leaky ReLU, learned for PReLU)

§Returns

  • Array with Leaky ReLU applied element-wise

§Example

use scirs2_core::ndarray_ext::elementwise::leaky_relu_simd;
use ndarray::{array, ArrayView1};

let x = array![1.0_f32, 0.0, -1.0, -2.0];
let result = leaky_relu_simd(&x.view(), 0.01);
assert!((result[0] - 1.0).abs() < 1e-6);    // Positive: unchanged
assert!((result[1] - 0.0).abs() < 1e-6);    // Zero: unchanged
assert!((result[2] - (-0.01)).abs() < 1e-6); // Negative: alpha * x