Trait burn_core::tensor::ops::ActivationOps
pub trait ActivationOps<B>
where
    B: Backend,
{
    // Provided methods
    fn relu<const D: usize>(
        tensor: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
    fn relu_backward<const D: usize>(
        output: <B as Backend>::TensorPrimitive<D>,
        grad: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
    fn gelu<const D: usize>(
        tensor: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
    fn gelu_backward<const D: usize>(
        x: <B as Backend>::TensorPrimitive<D>,
        grad: <B as Backend>::TensorPrimitive<D>
    ) -> <B as Backend>::TensorPrimitive<D> { ... }
}
Activation function operations.
This trait lets backend implementations override activation functions for better performance.
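For illustration, here is a minimal, self-contained sketch of that override pattern. The Backend and CpuBackend types below are simplified stand-ins invented for this example, not burn's real definitions; they only mirror the TensorPrimitive<D> shape of the signatures above.

pub trait Backend: Sized {
    // Simplified stand-in for burn's associated tensor type.
    type TensorPrimitive<const D: usize>;
}

pub trait ActivationOps<B: Backend> {
    // Provided method: backends inherit this default unless they
    // override it with something faster.
    fn relu<const D: usize>(_tensor: B::TensorPrimitive<D>) -> B::TensorPrimitive<D> {
        unimplemented!("generic fallback elided in this sketch")
    }
}

struct CpuBackend;

impl Backend for CpuBackend {
    // A flat Vec<f32> stands in for a real D-dimensional tensor.
    type TensorPrimitive<const D: usize> = Vec<f32>;
}

impl ActivationOps<CpuBackend> for CpuBackend {
    // Override: a fused, single-pass relu for this backend.
    fn relu<const D: usize>(mut tensor: Vec<f32>) -> Vec<f32> {
        for x in tensor.iter_mut() {
            *x = x.max(0.0);
        }
        tensor
    }
}

A call such as <CpuBackend as ActivationOps<CpuBackend>>::relu::<1>(vec![-1.0, 2.0]) then dispatches to the overridden path and yields [0.0, 2.0].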
Provided Methods§
fn relu<const D: usize>(
    tensor: <B as Backend>::TensorPrimitive<D>
) -> <B as Backend>::TensorPrimitive<D>

Applies the ReLU activation function, max(0, x), elementwise.
fn relu_backward<const D: usize>(
    output: <B as Backend>::TensorPrimitive<D>,
    grad: <B as Backend>::TensorPrimitive<D>
) -> <B as Backend>::TensorPrimitive<D>

Computes the gradient of ReLU from the forward output: grad is passed through where output is positive and zeroed elsewhere.
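As a numeric illustration, a hypothetical helper over flat f32 buffers (not part of the trait) that applies this rule:

fn relu_backward_flat(output: &[f32], grad: &[f32]) -> Vec<f32> {
    // output comes from the forward relu, so output > 0 marks the
    // units where the derivative is 1; everywhere else it is 0.
    output
        .iter()
        .zip(grad)
        .map(|(&o, &g)| if o > 0.0 { g } else { 0.0 })
        .collect()
}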
fn gelu<const D: usize>(
    tensor: <B as Backend>::TensorPrimitive<D>
) -> <B as Backend>::TensorPrimitive<D>

Applies the GELU (Gaussian Error Linear Unit) activation function.
fn gelu_backward<const D: usize>(
    x: <B as Backend>::TensorPrimitive<D>,
    grad: <B as Backend>::TensorPrimitive<D>
) -> <B as Backend>::TensorPrimitive<D>

Computes the gradient of GELU from the forward input x and the incoming gradient grad. Note that, unlike relu_backward, it takes the forward input rather than the output.
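For reference, a scalar sketch of one common GELU formulation, the tanh approximation (an assumption made for illustration; the trait does not prescribe which formulation a backend uses). gelu_scalar and gelu_backward_scalar are hypothetical helpers, not trait methods:

// Tanh approximation (assumed here, not mandated by the trait):
// gelu(x) ~= 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
fn gelu_scalar(x: f32) -> f32 {
    let c = std::f32::consts::FRAC_2_PI.sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}

// Derivative of the tanh-approximated GELU, multiplied by the
// incoming gradient, mirroring the (x, grad) pair of gelu_backward.
fn gelu_backward_scalar(x: f32, grad: f32) -> f32 {
    let c = std::f32::consts::FRAC_2_PI.sqrt();
    let u = c * (x + 0.044715 * x * x * x);
    let t = u.tanh();
    // du/dx for the inner term, by the chain rule.
    let du = c * (1.0 + 3.0 * 0.044715 * x * x);
    grad * (0.5 * (1.0 + t) + 0.5 * x * (1.0 - t * t) * du)
}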
Object Safety§
This trait is not object safe: its methods take a const D: usize generic parameter and have no self receiver, so it cannot be used as a dyn ActivationOps<B> trait object.