pub trait ActivationOps<B: Backend> {
    // Provided methods
    fn relu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D> { ... }
    fn relu_backward<const D: usize>(
        output: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
    fn gelu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D> { ... }
    fn gelu_backward<const D: usize>(
        x: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
}

Activation function operations.

This trait lets backend implementations override activation functions for better performance.

Provided Methods§

fn relu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the ReLU activation function.

Arguments
  • tensor - The tensor.
Returns

The output tensor.
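
A minimal scalar sketch of the element-wise rule this method applies (illustrative only; the trait's default operates on whole FloatTensor values through backend primitives):

// Element-wise ReLU: max(x, 0).
fn relu_scalar(x: f32) -> f32 {
    x.max(0.0)
}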

fn relu_backward<const D: usize>(output: FloatTensor<B, D>, grad: FloatTensor<B, D>) -> FloatTensor<B, D>

Computes the backward pass of the ReLU activation function.

Arguments
  • output - The output tensor of the forward pass.
  • grad - The gradient with respect to the output.
Returns

The gradient with respect to the input.
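
A scalar sketch of the gradient rule involved (an illustration, not the crate's actual tensor-level code):

// The incoming gradient passes through wherever the forward output was
// positive and is zeroed elsewhere. Since relu(x) > 0 exactly when x > 0,
// the forward output alone is enough to build the mask.
fn relu_backward_scalar(output: f32, grad: f32) -> f32 {
    if output > 0.0 { grad } else { 0.0 }
}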

fn gelu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the Gelu activation function.

Arguments
  • tensor - The tensor.
Returns

The output tensor.
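
A scalar sketch of GELU using the common tanh approximation (illustrative only; the trait's default may use the exact erf-based definition):

// Tanh-approximated GELU:
// 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
fn gelu_scalar(x: f32) -> f32 {
    const A: f32 = 0.044715;
    let c = (2.0 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + A * x * x * x)).tanh())
}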

fn gelu_backward<const D: usize>(x: FloatTensor<B, D>, grad: FloatTensor<B, D>) -> FloatTensor<B, D>

Computes the backward pass of the Gelu activation function.

Arguments
  • x - The input tensor of the forward pass.
  • grad - The gradient with respect to the output.
Returns

The gradient with respect to the input.
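
A scalar sketch of the corresponding gradient, again using the tanh approximation (an assumption for illustration; the crate's default may differ):

// Gradient of the tanh-approximated GELU, scaled by the incoming gradient.
// With u = c * (x + a*x^3), c = sqrt(2/pi), a = 0.044715:
//   d/dx gelu(x) = 0.5*(1 + tanh(u)) + 0.5*x*(1 - tanh(u)^2) * c*(1 + 3*a*x^2)
fn gelu_backward_scalar(x: f32, grad: f32) -> f32 {
    const A: f32 = 0.044715;
    let c = (2.0 / std::f32::consts::PI).sqrt();
    let u = c * (x + A * x * x * x);
    let t = u.tanh();
    let du_dx = c * (1.0 + 3.0 * A * x * x);
    let dgelu_dx = 0.5 * (1.0 + t) + 0.5 * x * (1.0 - t * t) * du_dx;
    grad * dgelu_dx
}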

Object Safety§

This trait is not object safe.

Implementors§