pub fn relu_simd<F>(x: &ArrayView1<'_, F>) -> Array1<F>
where
    F: Float + SimdUnifiedOps,
Compute ReLU (Rectified Linear Unit) activation with SIMD acceleration.
ReLU is one of the most common activation functions in deep learning, defined as: ReLU(x) = max(0, x)
§Arguments
x - Input 1D array
§Returns
Array1<F> containing the ReLU activation output, where negative values are zeroed
§Performance
- SIMD: Automatically used for large arrays (1000+ elements)
- Scalar: Used for small arrays or when SIMD is unavailable (see the dispatch sketch after this list)
- Speedup: 3-5x for large arrays on AVX2/NEON systems
- Most common activation in CNNs, ResNets, and modern architectures
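The exact kernel is internal to the crate, but the size-based dispatch can be pictured roughly as below. This is only a sketch: the SIMD_THRESHOLD constant and relu_dispatch function are illustrative assumptions rather than scirs2 APIs, the large-array branch merely stands in for the crate's explicit SIMD kernel, and the import path assumes scirs2_core::ndarray re-exports the usual ndarray types.
use scirs2_core::ndarray::{Array1, ArrayView1};

// Assumed cutoff, mirroring the "1000+ elements" note above.
const SIMD_THRESHOLD: usize = 1000;

fn relu_dispatch(x: &ArrayView1<'_, f64>) -> Array1<f64> {
    if x.len() >= SIMD_THRESHOLD {
        // Large input: one contiguous pass the compiler can vectorize,
        // standing in for the crate's explicit SIMD kernel.
        x.iter().map(|&v| v.max(0.0)).collect()
    } else {
        // Small input: plain scalar element-wise maximum with zero.
        x.mapv(|v| v.max(0.0))
    }
}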
§Mathematical Definition
ReLU(x) = { x   if x > 0
          { 0   if x ≤ 0
§Examples
use scirs2_core::ndarray::array;
use scirs2_core::ndarray_ext::preprocessing::relu_simd;
let x = array![-2.0, -1.0, 0.0, 1.0, 2.0];
let result = relu_simd(&x.view());
assert_eq!(result[0], 0.0); // -2.0 -> 0.0
assert_eq!(result[1], 0.0); // -1.0 -> 0.0
assert_eq!(result[2], 0.0); // 0.0 -> 0.0
assert_eq!(result[3], 1.0); // 1.0 -> 1.0
assert_eq!(result[4], 2.0); // 2.0 -> 2.0
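For inputs past the documented 1000-element threshold, where the SIMD path is expected to engage, usage is identical. The snippet below is a sketch assuming the scirs2_core::ndarray re-export provides Array1.
use scirs2_core::ndarray::Array1;
use scirs2_core::ndarray_ext::preprocessing::relu_simd;

// 4096 elements is well past the documented 1000-element SIMD threshold.
let x: Array1<f64> = (0..4096).map(|i| i as f64 - 2048.0).collect();
let result = relu_simd(&x.view());

// All outputs are non-negative, and positive inputs pass through unchanged.
assert!(result.iter().all(|&v| v >= 0.0));
assert_eq!(result[4095], 2047.0);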
§Applications
- Convolutional Neural Networks: Primary activation in CNN layers
- ResNet/DenseNet: Core activation in residual blocks
- Fully Connected Layers: Standard activation for hidden layers
- Feature Extraction: Non-linear transformation in deep networks
- Modern Architectures: Default choice for most deep learning models
§Advantages of ReLU
- Computational Efficiency: Very fast to compute (simple max operation)
- No Gradient Vanishing: Gradients don’t saturate for positive values
- Sparse Activation: Promotes sparsity by zeroing out negative values (see the sketch after this list)
- SIMD-Friendly: Perfect for vectorization
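As a quick illustration of the sparsity point, the number of zeroed activations can be read directly off the output. A minimal sketch using the same imports as the example above:
use scirs2_core::ndarray::array;
use scirs2_core::ndarray_ext::preprocessing::relu_simd;

let x = array![-3.0, -0.5, 0.25, 4.0];
let y = relu_simd(&x.view());

// Two of the four inputs are negative, so half the activations are zeroed.
let zeroed = y.iter().filter(|&&v| v == 0.0).count();
assert_eq!(zeroed, 2);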