pub trait ActivationOps<B>
where
    B: Backend,
{
    // Provided methods
    fn relu<const D: usize>(
        tensor: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
    fn relu_backward<const D: usize>(
        output: <B as Backend>::TensorPrimitive<D>,
        grad: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
    fn gelu<const D: usize>(
        tensor: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
    fn gelu_backward<const D: usize>(
        x: <B as Backend>::TensorPrimitive<D>,
        grad: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
}

Activation function operations.

This trait lets backend implementations override activation functions for better performance.
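As a rough sketch of how these provided operations are reached generically, the snippet below calls an op through any `B: Backend`. The `burn_tensor` module path and the assumption that `Backend` lists `ActivationOps<Self>` among its supertraits reflect the crate layout at the time of writing and may differ between versions.

// Hedged sketch: calling a provided activation op generically over any backend.
// The `burn_tensor` module path below is an assumption about the crate layout.
use burn_tensor::backend::Backend;

// Applies ReLU to a raw backend tensor primitive of rank `D`. Because `Backend`
// is assumed to have `ActivationOps<Self>` among its supertraits, `B::relu`
// resolves to the provided implementation or to a backend-specific override.
fn relu_primitive<B: Backend, const D: usize>(
    tensor: B::TensorPrimitive<D>,
) -> B::TensorPrimitive<D> {
    B::relu(tensor)
}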

Provided Methods

fn relu<const D: usize>(tensor: <B as Backend>::TensorPrimitive<D>) -> <B as Backend>::TensorPrimitive<D>

Applies the ReLU activation function.

Arguments
  • tensor - The input tensor.
Returns

The output tensor.
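Element-wise, ReLU computes max(x, 0). The plain-Rust sketch below illustrates only that per-element semantics, not the tensor-level primitive a backend would implement.

// Per-element reference for ReLU: relu(x) = max(x, 0). Illustrative only.
fn relu_scalar(x: f32) -> f32 {
    x.max(0.0)
}

fn main() {
    assert_eq!(relu_scalar(-1.5), 0.0);
    assert_eq!(relu_scalar(2.0), 2.0);
}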

fn relu_backward<const D: usize>(output: <B as Backend>::TensorPrimitive<D>, grad: <B as Backend>::TensorPrimitive<D>) -> <B as Backend>::TensorPrimitive<D>

Computes the backward pass of the ReLU activation function.

Arguments
  • output - The output tensor of the forward pass.
  • grad - The incoming gradient.
Returns

The gradient.
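ReLU's derivative is 1 where the activation is positive and 0 elsewhere, so the backward pass can be computed from the forward output alone. A per-element sketch, again only illustrating the math rather than a backend implementation:

// Per-element reference for the ReLU backward pass: the incoming gradient
// flows only through positions where the forward output was positive.
fn relu_backward_scalar(output: f32, grad: f32) -> f32 {
    if output > 0.0 { grad } else { 0.0 }
}

fn main() {
    assert_eq!(relu_backward_scalar(2.0, 0.5), 0.5);
    assert_eq!(relu_backward_scalar(0.0, 0.5), 0.0);
}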

fn gelu<const D: usize>(tensor: <B as Backend>::TensorPrimitive<D>) -> <B as Backend>::TensorPrimitive<D>

Applies the Gelu activation function.

Arguments
  • tensor - The input tensor.
Returns

The output tensor.
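GELU is defined as gelu(x) = 0.5 · x · (1 + erf(x / √2)). The sketch below uses the common tanh approximation (std Rust has no erf) and only illustrates the per-element math; whether a backend uses the exact or the approximate form is backend-specific.

use std::f32::consts::FRAC_2_PI; // 2/π

// Per-element GELU via the tanh approximation:
// gelu(x) ≈ 0.5·x·(1 + tanh(√(2/π)·(x + 0.044715·x³))). Illustrative only.
fn gelu_scalar(x: f32) -> f32 {
    let inner = FRAC_2_PI.sqrt() * (x + 0.044715 * x.powi(3));
    0.5 * x * (1.0 + inner.tanh())
}

fn main() {
    // Close to the identity for large positive inputs, close to zero for large negative ones.
    assert!((gelu_scalar(3.0) - 3.0).abs() < 0.01);
    assert!(gelu_scalar(-3.0).abs() < 0.01);
}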

fn gelu_backward<const D: usize>(x: <B as Backend>::TensorPrimitive<D>, grad: <B as Backend>::TensorPrimitive<D>) -> <B as Backend>::TensorPrimitive<D>

Computes the backward pass of the Gelu activation function.

Arguments
  • x - The input tensor of the forward pass.
  • grad - The incoming gradient.
Returns

The gradient.
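The backward pass scales the incoming gradient by dgelu/dx evaluated at the forward input x. Using the same tanh approximation as above, a per-element sketch with a finite-difference sanity check (illustrative only):

use std::f32::consts::FRAC_2_PI; // 2/π

// Per-element reference for the GELU backward pass (tanh approximation):
// returns grad · d/dx[0.5·x·(1 + tanh(u))] with u = √(2/π)·(x + 0.044715·x³).
fn gelu_backward_scalar(x: f32, grad: f32) -> f32 {
    let c = FRAC_2_PI.sqrt();
    let a = 0.044715;
    let u = c * (x + a * x.powi(3));
    let tanh_u = u.tanh();
    // Product rule on 0.5·x·(1 + tanh(u)).
    let dgelu = 0.5 * (1.0 + tanh_u)
        + 0.5 * x * (1.0 - tanh_u * tanh_u) * c * (1.0 + 3.0 * a * x * x);
    grad * dgelu
}

fn main() {
    // Sanity check against a central finite difference of the forward function.
    let gelu = |x: f32| 0.5 * x * (1.0 + (FRAC_2_PI.sqrt() * (x + 0.044715 * x.powi(3))).tanh());
    let (x, eps) = (0.7_f32, 1e-3_f32);
    let numeric = (gelu(x + eps) - gelu(x - eps)) / (2.0 * eps);
    assert!((gelu_backward_scalar(x, 1.0) - numeric).abs() < 1e-3);
}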

Object Safety

This trait is not object safe.

Implementors