
Module functional


Functional interface for neural network operations.

This module provides stateless functions that mirror the module-based activations and operations. Use these when you don’t need a module wrapper (e.g., in custom forward passes).

§Example

use aprender::nn::F;
use aprender::autograd::Tensor;

// Random input batch: 32 samples, 10 features each.
let x = Tensor::randn(&[32, 10]);
let y = F::relu(&x);
// Softmax over the last dimension yields per-row probabilities.
let probs = F::softmax(&y, -1);

Functions§

cosine_similarity_slice
Cosine similarity between two slices.
dropout
Dropout; must be called with an explicit training flag.
euclidean_distance
Euclidean distance between two slices.
gelu
GELU activation (Gaussian Error Linear Unit).
layer_norm
Layer normalization over the last dimension of an ND tensor.
leaky_relu
Leaky ReLU activation: max(negative_slope * x, x).
linear
Linear transformation: y = x @ weight^T + bias.
log_softmax
Log-softmax along the last dimension of an ND tensor.
log_softmax_1d
Log-softmax on a 1D slice of f32 values.
relu
ReLU activation: max(0, x).
relu_scalar
Scalar ReLU: max(0, x).
rms_norm
RMS normalization over the last dimension of an ND tensor.
sigmoid
Sigmoid activation: 1 / (1 + exp(-x)).
sigmoid_scalar
Scalar sigmoid: σ(x) = 1 / (1 + exp(-x)).
sigmoid_scalar_f64
Scalar sigmoid (f64): σ(x) = 1 / (1 + exp(-x)).
silu
SiLU (Swish) activation: x * sigmoid(x).
silu_scalar
Scalar SiLU for non-Tensor contexts.
softmax
Softmax along the last dimension of an ND tensor.
softmax_1d
Softmax on a 1D slice of f32 values.
softmax_1d_f64
Softmax on a 1D slice of f64 values.
swiglu
SwiGLU activation: SiLU(gate) * x.
swiglu_scalar
Scalar SwiGLU for non-Tensor contexts.
tanh
Tanh activation.
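The scalar and 1D helpers listed above follow directly from their stated formulas. Below is a minimal standalone sketch in plain Rust (reimplementations for illustration only, not aprender's own code). It also shows the max-subtraction trick a stable 1D softmax typically uses, which the one-line summaries do not spell out; whether the crate's `softmax_1d` does exactly this is an assumption.

```rust
// Plain-Rust sketches of the documented formulas (not aprender internals).

fn sigmoid(x: f32) -> f32 {
    // Sigmoid: 1 / (1 + exp(-x))
    1.0 / (1.0 + (-x).exp())
}

fn silu(x: f32) -> f32 {
    // SiLU (Swish): x * sigmoid(x)
    x * sigmoid(x)
}

fn swiglu(gate: f32, x: f32) -> f32 {
    // SwiGLU: SiLU(gate) * x
    silu(gate) * x
}

fn softmax_1d(xs: &[f32]) -> Vec<f32> {
    // Subtract the row max before exponentiating so exp() cannot
    // overflow for large inputs; the result is mathematically unchanged.
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    assert!((sigmoid(0.0) - 0.5).abs() < 1e-6);
    assert!((swiglu(0.0, 2.0)).abs() < 1e-6); // silu(0) = 0, so the gate closes
    let p = softmax_1d(&[1.0, 2.0, 3.0]);
    let total: f32 = p.iter().sum();
    assert!((total - 1.0).abs() < 1e-6);
    println!("softmax output: {p:?}");
}
```

The same shapes carry over to the tensor-level `softmax`/`log_softmax`, which apply this per-row along the last dimension.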