Functional interface for neural network operations.
This module provides stateless functions that mirror the module-based activations and operations. Use these when you don’t need a module wrapper (e.g., in custom forward passes).
§Example
use aprender::nn::F;
use aprender::autograd::Tensor;

let x = Tensor::randn(&[32, 10]);
let y = F::relu(&x);
let probs = F::softmax(&y, -1);

§Functions
- cosine_similarity_slice - Cosine similarity between two slices.
- dropout - Dropout (must be called with the training flag).
- euclidean_distance - Euclidean distance between two slices.
- gelu - GELU activation (Gaussian Error Linear Unit).
- layer_norm - Layer normalization over the last dimension of an ND tensor.
- leaky_relu - Leaky ReLU activation: max(negative_slope * x, x).
- linear - Linear transformation: y = x @ weight^T + bias.
- log_softmax - Log-softmax along the last dimension of an ND tensor.
- log_softmax_1d - Log-softmax on a 1D slice of f32 values.
- relu - ReLU activation: max(0, x).
- relu_scalar - Scalar ReLU: max(0, x).
- rms_norm - RMS normalization over the last dimension of an ND tensor.
- sigmoid - Sigmoid activation: 1 / (1 + exp(-x)).
- sigmoid_scalar - Scalar sigmoid: σ(x) = 1 / (1 + exp(-x)).
- sigmoid_scalar_f64 - Scalar sigmoid (f64): σ(x) = 1 / (1 + exp(-x)).
- silu - SiLU (Swish) activation: x * sigmoid(x).
- silu_scalar - Scalar SiLU for non-Tensor contexts.
- softmax - Softmax along the last dimension of an ND tensor.
- softmax_1d - Softmax on a 1D slice of f32 values.
- softmax_1d_f64 - Softmax on a 1D slice of f64 values.
- swiglu - SwiGLU activation: SiLU(gate) * x.
- swiglu_scalar - Scalar SwiGLU for non-Tensor contexts.
- tanh - Tanh activation.
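The 1D softmax helpers reduce to plain slice arithmetic. As a rough sketch of what `softmax_1d` and `log_softmax_1d` compute (standalone Rust, not the crate's implementation; the max-subtraction is the standard numerical-stability trick, assumed rather than taken from this crate's source):

```rust
// Numerically stable softmax over a 1D slice: subtract the maximum
// before exponentiating so exp() cannot overflow for large inputs.
fn softmax_1d(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

// Log-softmax computed directly as (x - max) - ln(sum(exp(x - max))),
// which avoids the exp-then-log round trip of log(softmax(x)).
fn log_softmax_1d(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let log_sum: f32 = xs.iter().map(|&x| (x - max).exp()).sum::<f32>().ln();
    xs.iter().map(|&x| (x - max) - log_sum).collect()
}

fn main() {
    let probs = softmax_1d(&[1.0, 2.0, 3.0]);
    // Softmax outputs are positive and sum to 1.
    assert!((probs.iter().sum::<f32>() - 1.0).abs() < 1e-6);
    println!("{probs:?}");
}
```

The ND `softmax` and `log_softmax` apply the same computation independently over each slice of the last dimension.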
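The scalar helpers compose in the order the formulas suggest: SiLU is x times sigmoid(x), and SwiGLU gates one input with the SiLU of the other. A minimal sketch of that math (plain Rust with assumed signatures, not the crate's code):

```rust
// σ(x) = 1 / (1 + exp(-x))
fn sigmoid_scalar(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// SiLU (Swish): x * σ(x)
fn silu_scalar(x: f32) -> f32 {
    x * sigmoid_scalar(x)
}

// SwiGLU: SiLU(gate) * x — the gated form used in transformer FFN blocks.
fn swiglu_scalar(x: f32, gate: f32) -> f32 {
    silu_scalar(gate) * x
}

fn main() {
    assert!((sigmoid_scalar(0.0) - 0.5).abs() < 1e-6); // σ(0) = 0.5
    assert!(silu_scalar(0.0).abs() < 1e-6);            // SiLU(0) = 0
    println!("swiglu(2.0, 1.0) = {}", swiglu_scalar(2.0, 1.0));
}
```

The tensor-level `silu` and `swiglu` apply these formulas element-wise.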