Trait burn_tensor::ops::ActivationOps

pub trait ActivationOps<B: Backend> {
    // Provided methods
    fn leaky_relu<const D: usize>(
        tensor: FloatTensor<B, D>,
        negative_slope: FloatElem<B>
    ) -> FloatTensor<B, D> { ... }
    fn relu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D> { ... }
    fn relu_backward<const D: usize>(
        output: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
    fn gelu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D> { ... }
    fn prelu<const D: usize>(
        tensor: FloatTensor<B, D>,
        alpha: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
    fn gelu_backward<const D: usize>(
        x: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
    fn sigmoid<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D> { ... }
    fn sigmoid_backward<const D: usize>(
        output: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
    fn log_sigmoid<const D: usize>(
        tensor: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
    fn log_sigmoid_backward<const D: usize>(
        x: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
}

Activation function operations.

This trait lets backend implementations override activation functions for better performance.
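
Backends get working defaults for all of the methods below and only override the ones they can accelerate, for example with a fused kernel. A minimal, self-contained sketch of that pattern (the ToyActivationOps and ToyBackend names are made up for illustration and are not burn_tensor types):

trait ToyActivationOps {
    // Provided method: a generic default every backend inherits.
    fn relu(values: Vec<f32>) -> Vec<f32> {
        values.into_iter().map(|x| x.max(0.0)).collect()
    }
}

struct ToyBackend;

impl ToyActivationOps for ToyBackend {
    // Override point: a real backend would dispatch to a specialised kernel here.
    fn relu(values: Vec<f32>) -> Vec<f32> {
        values.into_iter().map(|x| if x > 0.0 { x } else { 0.0 }).collect()
    }
}

fn main() {
    // The trait must be in scope for the call to resolve to the override.
    assert_eq!(ToyBackend::relu(vec![-1.0, 2.0]), vec![0.0, 2.0]);
}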

Provided Methods§

fn leaky_relu<const D: usize>( tensor: FloatTensor<B, D>, negative_slope: FloatElem<B> ) -> FloatTensor<B, D>

Applies the LeakyReLU activation function.

§Arguments
  • tensor - The tensor.
  • negative_slope - The slope applied to values smaller than 0.
§Returns

The output tensor.
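
A rough scalar sketch of the element-wise rule (the leaky_relu_scalar helper is made up for illustration; the actual method operates on whole tensors):

// Illustrative scalar sketch, not the tensor implementation:
// positive values pass through, negative values are scaled by negative_slope.
fn leaky_relu_scalar(x: f32, negative_slope: f32) -> f32 {
    if x >= 0.0 { x } else { negative_slope * x }
}

For example, with negative_slope = 0.01, an input of -2.0 maps to -0.02.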

fn relu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the ReLU activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
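
A rough scalar sketch of the element-wise rule (relu_scalar is made up for illustration, not part of this trait):

// Illustrative scalar sketch, not the tensor implementation:
// relu(x) = max(x, 0).
fn relu_scalar(x: f32) -> f32 {
    x.max(0.0)
}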

fn relu_backward<const D: usize>( output: FloatTensor<B, D>, grad: FloatTensor<B, D> ) -> FloatTensor<B, D>

Applies the ReLU activation function backward.

§Arguments
  • output - The output tensor of the forward ReLU.
  • grad - The gradient.
§Returns

The gradient.
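
A rough scalar sketch of what the backward rule computes (relu_backward_scalar is made up for illustration). Because relu(x) > 0 exactly where x > 0, the derivative can be read off the forward output, which is why this method takes output rather than the original input:

// Illustrative scalar sketch: pass the incoming gradient where the forward
// output is positive, zero it elsewhere.
fn relu_backward_scalar(output: f32, grad: f32) -> f32 {
    if output > 0.0 { grad } else { 0.0 }
}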

fn gelu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the Gelu activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
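
The exact definition is gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))). A rough scalar sketch using the common tanh approximation (gelu_scalar_approx is made up for illustration; a backend may use the exact erf form instead):

// Illustrative scalar sketch of the tanh approximation of GELU.
fn gelu_scalar_approx(x: f32) -> f32 {
    let c = (2.0_f32 / std::f32::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}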

fn prelu<const D: usize>( tensor: FloatTensor<B, D>, alpha: FloatTensor<B, D> ) -> FloatTensor<B, D>

Applies the PReLU activation function.

§Arguments
  • tensor - The input tensor.
  • alpha - The learnable weight (slope) tensor.
§Returns

The output tensor.
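
A rough scalar sketch of the element-wise rule (prelu_scalar is made up for illustration). PReLU behaves like LeakyReLU, except that the negative slope alpha is a learned parameter taken from the weight tensor rather than a fixed constant:

// Illustrative scalar sketch, not the tensor implementation.
fn prelu_scalar(x: f32, alpha: f32) -> f32 {
    if x >= 0.0 { x } else { alpha * x }
}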

fn gelu_backward<const D: usize>( x: FloatTensor<B, D>, grad: FloatTensor<B, D> ) -> FloatTensor<B, D>

Applies the Gelu activation function backward.

§Arguments
  • x - The input tensor.
  • grad - The gradient.
§Returns

The gradient.
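
A rough scalar sketch of what the backward rule computes: the incoming gradient scaled by the local derivative d gelu(x)/dx. The derivative is approximated here with a central difference over the gelu_scalar_approx sketch above (gelu_backward_scalar_approx is made up for illustration; a real backend would use a closed-form derivative):

// Illustrative numerical sketch of grad * d gelu(x)/dx.
fn gelu_backward_scalar_approx(x: f32, grad: f32) -> f32 {
    let h = 1e-3_f32;
    let dgelu_dx = (gelu_scalar_approx(x + h) - gelu_scalar_approx(x - h)) / (2.0 * h);
    grad * dgelu_dx
}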

fn sigmoid<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the Sigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
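
A rough scalar sketch of the element-wise rule (sigmoid_scalar is made up for illustration):

// Illustrative scalar sketch: sigmoid(x) = 1 / (1 + exp(-x)).
fn sigmoid_scalar(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}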

fn sigmoid_backward<const D: usize>( output: FloatTensor<B, D>, grad: FloatTensor<B, D> ) -> FloatTensor<B, D>

Applies the Sigmoid activation function backward.

§Arguments
  • output - The output tensor of the sigmoid function.
  • grad - The gradient.
§Returns

The gradient.
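
A rough scalar sketch of the backward rule (sigmoid_backward_scalar is made up for illustration). The derivative can be written purely in terms of the forward output, since d sigmoid(x)/dx = output * (1 - output), which is why this method takes output rather than the original input:

// Illustrative scalar sketch: grad * output * (1 - output).
fn sigmoid_backward_scalar(output: f32, grad: f32) -> f32 {
    grad * output * (1.0 - output)
}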

fn log_sigmoid<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the LogSigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
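
log_sigmoid(x) = ln(sigmoid(x)) = -ln(1 + exp(-x)). A rough scalar sketch using the usual numerically stable two-branch form (log_sigmoid_scalar is made up for illustration):

// Illustrative scalar sketch; the branches avoid overflowing exp for large |x|.
fn log_sigmoid_scalar(x: f32) -> f32 {
    if x >= 0.0 {
        -(-x).exp().ln_1p() // -ln(1 + exp(-x))
    } else {
        x - x.exp().ln_1p() // x - ln(1 + exp(x))
    }
}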

fn log_sigmoid_backward<const D: usize>( x: FloatTensor<B, D>, grad: FloatTensor<B, D> ) -> FloatTensor<B, D>

Applies the LogSigmoid activation function backward.

§Arguments
  • x - The input tensor.
  • grad - The gradient.
§Returns

The gradient.
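
A rough scalar sketch of the backward rule (log_sigmoid_backward_scalar is made up for illustration). Since d/dx ln(sigmoid(x)) = 1 - sigmoid(x) = sigmoid(-x), the incoming gradient is scaled by sigmoid(-x); unlike sigmoid_backward, this needs the original input x rather than the forward output:

// Illustrative scalar sketch: grad * sigmoid(-x).
fn log_sigmoid_backward_scalar(x: f32, grad: f32) -> f32 {
    grad * (1.0 / (1.0 + x.exp()))
}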

Object Safety§

This trait is not object safe.

Implementors§