SIMD-vectorized implementations of various math functions that are commonly used in neural networks.
For each function in this library there are multiple variants (illustrated in the sketch below), which typically include:
- A version that operates on scalars.
- A version that reads values from an input slice and writes to the corresponding position in an equal-length output slice. These have a vec_ prefix.
- A version that reads values from a mutable input slice and writes the computed values back in-place. These have a vec_ prefix and an _in_place suffix.
All variants use the same underlying implementation and should have the same accuracy.
See the source code for comments on accuracy.
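As a rough illustration of the three calling conventions, here is a minimal sketch using an exponential function as the example. The function bodies below are plain-Rust stand-ins written for this sketch, not the crate's SIMD implementations, and the exact signatures are assumptions inferred from the description above.

```rust
// Scalar variant: one value in, one value out (stand-in, not the SIMD version).
fn exp(x: f32) -> f32 {
    x.exp()
}

// Slice-to-slice variant: reads from `xs`, writes to the equal-length `out`.
fn vec_exp(xs: &[f32], out: &mut [f32]) {
    assert_eq!(xs.len(), out.len());
    for (x, y) in xs.iter().zip(out.iter_mut()) {
        *y = exp(*x);
    }
}

// In-place variant: overwrites `xs` with the computed values.
fn vec_exp_in_place(xs: &mut [f32]) {
    for x in xs.iter_mut() {
        *x = exp(*x);
    }
}

fn main() {
    let xs = [0.0f32, 1.0, -1.0];

    // Scalar call.
    let _y = exp(xs[0]);

    // Slice-to-slice call.
    let mut out = [0.0f32; 3];
    vec_exp(&xs, &mut out);

    // In-place call.
    let mut buf = xs;
    vec_exp_in_place(&mut buf);

    assert_eq!(out, buf);
}
```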
Functions
- Computes the error function.
- Computes e^val. Functionally equivalent to f32::exp.
- Computes the GELU function. See vec_gelu.
- Computes the sigmoid function, a.k.a. the standard logistic function, 1. / (1. + (-x).exp()). Scalar reference definitions are sketched after this list.
- Sigmoid Linear Unit (SiLU) function. This computes x * sigmoid(x).
- Vectorized error function.
- Variant of vec_erf that modifies elements in-place.
- Vectorized exponential function.
- Variant of vec_exp that modifies elements in-place.
- Vectorized GELU function.
- Variant of vec_gelu that modifies elements in-place.
- Vectorized sigmoid function.
- Variant of vec_sigmoid that modifies elements in-place.
- Vectorized Sigmoid Linear Unit (SiLU) function.
- In-place variant of the vectorized SiLU function.
- Computes the softmax function over a slice of floats.
- In-place variant of the vectorized softmax function.
- Vectorized tanh implementation.
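For reference, the formulas quoted in the list above can be written out as plain scalar Rust. This is a sketch of the mathematics only, not the crate's vectorized code; in particular, the max-subtraction step in softmax is a common numerical-stability trick assumed here for illustration, not a statement about how this crate implements it.

```rust
// Standard logistic function: 1 / (1 + e^(-x)).
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// Sigmoid Linear Unit: x * sigmoid(x).
fn silu(x: f32) -> f32 {
    x * sigmoid(x)
}

// Softmax over a slice: exp(x_i) / sum_j exp(x_j).
// Subtracting the maximum first avoids overflow in exp(); it does not change
// the result because the common factor cancels in the ratio.
fn softmax(xs: &mut [f32]) {
    let max = xs.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let mut sum = 0.0;
    for x in xs.iter_mut() {
        *x = (*x - max).exp();
        sum += *x;
    }
    for x in xs.iter_mut() {
        *x /= sum;
    }
}

fn main() {
    assert_eq!(sigmoid(0.0), 0.5);
    assert_eq!(silu(0.0), 0.0);

    let mut probs = [1.0f32, 2.0, 3.0];
    softmax(&mut probs);
    let total: f32 = probs.iter().sum();
    assert!((total - 1.0).abs() < 1e-6);
}
```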