Implementations of all operations for tensors, including activations, binary operations, and other methods.

Functions

The absolute value (abs) computes |x|.

Adds two Tensors of the same shape together: &lhs + rhs.

Applies a binary function f, its partial derivative w.r.t. x (dfdx), and its partial derivative w.r.t. y (dfdy) to a pair of Tensors lhs and rhs.
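Carrying both partials alongside f is what lets gradients flow back to each operand separately during backprop. A minimal standalone sketch over slices (the function and parameter names here are illustrative, not the crate's actual API):

```rust
/// Applies binary `f` elementwise to paired values from `lhs` and `rhs`,
/// and also evaluates the partials `dfdx` and `dfdy` at each pair.
/// The partials are the local gradients for `lhs` and `rhs` respectively.
fn binary_map(
    lhs: &[f64],
    rhs: &[f64],
    f: fn(f64, f64) -> f64,
    dfdx: fn(f64, f64) -> f64,
    dfdy: fn(f64, f64) -> f64,
) -> (Vec<f64>, Vec<f64>, Vec<f64>) {
    let zip = || lhs.iter().zip(rhs.iter());
    (
        zip().map(|(&x, &y)| f(x, y)).collect(),
        zip().map(|(&x, &y)| dfdx(x, y)).collect(),
        zip().map(|(&x, &y)| dfdy(x, y)).collect(),
    )
}

fn main() {
    // Elementwise multiplication: f = x * y, df/dx = y, df/dy = x.
    let (out, dx, dy) =
        binary_map(&[1.0, 2.0], &[3.0, 4.0], |x, y| x * y, |_, y| y, |x, _| x);
    println!("{:?} {:?} {:?}", out, dx, dy); // [3.0, 8.0] [3.0, 4.0] [1.0, 2.0]
}
```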

Broadcasts the last dimension of rhs to make it the same size as lhs.

Broadcasts the first dimension of Rhs M times, so it's the same size as Lhs.

Clamps all values in t to between min and max.

Similar to map(), but doesn’t take ownership of the Tensor t.

The cosine function computes cos(x).

Divides two Tensors of the same shape: &lhs / rhs.

The exponential function (exp) computes e ^ x.

The natural logarithm (ln) computes ln(x).

Numerically stable computation of log(softmax(t)). Does t - logsumexp(t) under the hood.

Computes the LogSumExp function. Equivalent to log(sum(exp(data))) or data.exp().sum(-1).log().
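Evaluating log(sum(exp(data))) naively overflows once any element is large; the standard fix is to subtract the maximum before exponentiating. A standalone sketch of that trick (not the crate's actual implementation):

```rust
/// Numerically stable log-sum-exp over a slice.
/// Shifting by the max keeps every exp() argument <= 0, so nothing overflows.
fn logsumexp(data: &[f64]) -> f64 {
    let max = data.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    max + data.iter().map(|x| (x - max).exp()).sum::<f64>().ln()
}

fn main() {
    // Naively, exp(1000.0) overflows to infinity;
    // the shifted form returns 1000.0 + ln(2) ≈ 1000.693.
    println!("{}", logsumexp(&[1000.0, 1000.0]));
}
```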

Applies a function f to every element of the Tensor. The derivative df must also be provided.
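Requiring the derivative df alongside f is what makes the map differentiable: the forward pass applies f, while df evaluated at each input becomes the local gradient that backprop multiplies into incoming gradients. A minimal sketch with illustrative names (not the crate's API):

```rust
/// Applies `f` elementwise and records `df` at each input.
/// The second vector holds the local gradients a backward pass would use.
fn map(t: &[f64], f: fn(f64) -> f64, df: fn(f64) -> f64) -> (Vec<f64>, Vec<f64>) {
    let out = t.iter().map(|&x| f(x)).collect();
    let grad = t.iter().map(|&x| df(x)).collect();
    (out, grad)
}

fn main() {
    // Square: f(x) = x^2, df(x) = 2x.
    let (out, grad) = map(&[1.0, 2.0, 3.0], |x| x * x, |x| 2.0 * x);
    println!("{:?} {:?}", out, grad); // [1.0, 4.0, 9.0] [2.0, 4.0, 6.0]
}
```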

Matrix multiplication.

Sums all the values in self and divides by the number of values.

Multiplies two Tensors of the same shape together: &lhs * rhs.

Replaces any nans in t with value.

Negates all values in t.

Sigmoid computes 1 / (1 + exp(-x)).

The sine function computes sin(x).

Computes the softmax function. Equivalent to t.log_softmax().exp(), exp(log_softmax(t)), or exp(t) / sum(exp(t)).
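Computing exp(t) / sum(exp(t)) directly overflows for large inputs; shifting every element by the maximum leaves the result unchanged (the shift cancels in the ratio) while keeping exp() in range. A standalone sketch, not the crate's implementation:

```rust
/// Numerically stable softmax: exp(t - max(t)) / sum(exp(t - max(t))).
/// Algebraically identical to exp(t) / sum(exp(t)).
fn softmax(data: &[f64]) -> Vec<f64> {
    let max = data.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = data.iter().map(|x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let p = softmax(&[1.0, 2.0, 3.0]);
    println!("{:?}", p); // probabilities, increasing, summing to 1
}
```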

Square root computes x ^ 0.5 or √x.

Square computes x * x.

Subtracts two Tensors of the same shape from each other: &lhs - rhs.

Calls [Device::sum_last_dim()] on the underlying array. The resulting Tensor has one fewer dimension.

Sets t to value anywhere mask equals value.

Vector * matrix multiplication.