Module f32


Tensor structs and operations for the regular `f32` float type.

Structs§

Tensor
A tensor that can either own or borrow its underlying storage.

Functions§

add
Tensor element-wise addition: `b += a`. `a` is automatically broadcast.
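A minimal sketch of these broadcast semantics on flat, row-major slices (this is an illustration, not the crate's implementation; the real `add` operates on `Tensor` values):

```rust
/// Sketch of `b += a` where `a` is broadcast over the leading
/// dimensions of `b`: `a`'s length must divide `b`'s length.
fn add_broadcast(a: &[f32], b: &mut [f32]) {
    assert!(
        !a.is_empty() && b.len() % a.len() == 0,
        "a must broadcast into b"
    );
    // Repeat `a` across each chunk of `b` of the same length.
    for chunk in b.chunks_mut(a.len()) {
        for (dst, src) in chunk.iter_mut().zip(a) {
            *dst += *src;
        }
    }
}
```

For example, adding `[10.0, 20.0]` into a `(2, 2)` tensor stored as `[1.0, 2.0, 3.0, 4.0]` yields `[11.0, 22.0, 13.0, 24.0]`.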
apply
Applies `func` to every element of the tensor.
causal_softmax
Causal softmax on the last dimension of tensor `x`. The causality is determined by the shape of `x` and by `past_sequence_length`, which defines how large the missing part of the square attention matrix is.
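A hedged sketch of what such a causal softmax can look like on a flat `(m, n)` row-major matrix; the masking convention used here (row `i` sees columns `0..=i + past`) is an assumption about the crate's behavior, not taken from its source:

```rust
/// Sketch of a causal softmax over the last dimension of an (m, n)
/// row-major matrix. `past` plays the role of past_sequence_length:
/// row i may attend to columns 0..=(i + past); the rest get
/// probability zero.
fn causal_softmax(x: &mut [f32], m: usize, n: usize, past: usize) {
    for i in 0..m {
        let row = &mut x[i * n..(i + 1) * n];
        let visible = (i + past + 1).min(n);
        // Subtract the max of the visible part for numerical stability.
        let max = row[..visible]
            .iter()
            .cloned()
            .fold(f32::NEG_INFINITY, f32::max);
        let mut sum = 0.0;
        for v in row[..visible].iter_mut() {
            *v = (*v - max).exp();
            sum += *v;
        }
        for v in row[..visible].iter_mut() {
            *v /= sum;
        }
        // Masked (future) positions get zero probability.
        for v in row[visible..].iter_mut() {
            *v = 0.0;
        }
    }
}
```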
faster_gelu
GELU operation (https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions), but using `faster_tanh`.
faster_tanh
Utility function providing a faster but less precise tanh.
gelu
GELU operation (https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions).
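The page does not say whether the exact erf-based form or the common tanh approximation is used; as a sketch, the tanh approximation looks like this (an assumption, not necessarily the crate's formula):

```rust
/// Sketch of GELU using the common tanh approximation:
/// 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))).
fn gelu(x: f32) -> f32 {
    let c = (2.0_f32 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}
```

`faster_gelu` would presumably be the same expression with `tanh` replaced by the `faster_tanh` approximation.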
inline_tanh
Utility function providing a faster but less precise tanh.
matmul
Regular matrix multiplication.
matmul_t
Matrix multiplication with the second operand transposed: `matmul(A, B.transposed())`.
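A naive sketch of the semantics on flat row-major slices, computing `C = A · Bᵀ` without materializing the transpose (illustrative only; the crate's version is likely backed by an optimized kernel):

```rust
/// Sketch: C = A * B^T on row-major flat slices.
/// A is (m, k), B is (n, k), C is (m, n).
fn matmul_t(a: &[f32], b: &[f32], c: &mut [f32], m: usize, n: usize, k: usize) {
    for i in 0..m {
        for j in 0..n {
            let mut s = 0.0;
            // Row i of A dotted with row j of B (== column j of B^T).
            for l in 0..k {
                s += a[i * k + l] * b[j * k + l];
            }
            c[i * n + j] = s;
        }
    }
}
```

Taking B row-major and reading its rows directly is what makes the transposed variant convenient: both operands are traversed contiguously.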
mul
Tensor element-wise multiplication: `b *= a`. `a` is automatically broadcast.
normalize
Basic operation for layernorm: `x = (x - x.mean()) / (x.var() + epsilon)`. `mean` and `var` do not have to be initialized; they are simply passed in to avoid allocations.
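A sketch of that core on a single row, following the formula exactly as documented (note that classic layernorm divides by `sqrt(var + epsilon)`; whether the crate does so internally is not stated here):

```rust
/// Sketch of the layernorm core on one row, per the documented formula
/// x = (x - mean) / (var + epsilon). The crate's version also threads
/// pre-allocated `mean` and `var` buffers through to avoid allocation.
fn normalize(x: &mut [f32], epsilon: f32) {
    let n = x.len() as f32;
    let mean = x.iter().sum::<f32>() / n;
    let var = x.iter().map(|v| (v - mean) * (v - mean)).sum::<f32>() / n;
    for v in x.iter_mut() {
        *v = (*v - mean) / (var + epsilon);
    }
}
```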
select
Operation for selecting entire rows within the tensor `weights`. Each id is the index of a row.
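This is essentially an embedding lookup; a sketch on a flat row-major `(n, dim)` weight matrix (illustrative names and signature, not the crate's):

```rust
/// Sketch: gather whole rows of a row-major (n, dim) weight matrix
/// at the given ids, as in an embedding lookup.
fn select(ids: &[usize], weights: &[f32], dim: usize) -> Vec<f32> {
    let mut out = Vec::with_capacity(ids.len() * dim);
    for &id in ids {
        out.extend_from_slice(&weights[id * dim..(id + 1) * dim]);
    }
    out
}
```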
softmax
Softmax on the last dimension of tensor `x`.
special_argmax
Argmax of the last dimension of tensor `x`.
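An argmax over the last dimension reduces an `(m, n)` tensor to `m` indices, one per row; a sketch on flat slices (what makes the crate's version "special" is not stated on this page):

```rust
/// Sketch: argmax over the last dimension of a flat (m, n) row-major
/// matrix, returning one column index per row.
fn argmax_last_dim(x: &[f32], n: usize) -> Vec<usize> {
    x.chunks(n)
        .map(|row| {
            let mut best = 0;
            for (i, v) in row.iter().enumerate() {
                if *v > row[best] {
                    best = i;
                }
            }
            best
        })
        .collect()
}
```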