Trait burn_tensor::ops::ActivationOps
pub trait ActivationOps<B: Backend> {
    // Provided methods
    fn relu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D> { ... }
    fn relu_backward<const D: usize>(
        output: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
    fn gelu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D> { ... }
    fn gelu_backward<const D: usize>(
        x: FloatTensor<B, D>,
        grad: FloatTensor<B, D>
    ) -> FloatTensor<B, D> { ... }
}
Activation function operations.
This trait lets backend implementations override activation functions for better performance.
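Each method ships with a generic default implementation, so a backend implements this trait only to substitute a faster (for example, fused) kernel. A minimal sketch of that override pattern, using a simplified stand-in trait over Vec<f32> rather than Burn's actual Backend and FloatTensor types; the Activation trait and FastBackend type below are hypothetical:

```rust
/// Simplified analogue of ActivationOps: the provided method gives a
/// generic implementation, and a backend overrides it when it has a
/// faster kernel. `FastBackend` is a hypothetical example type.
trait Activation {
    // Provided method: generic element-wise ReLU.
    fn relu(tensor: Vec<f32>) -> Vec<f32> {
        tensor.into_iter().map(|x| x.max(0.0)).collect()
    }
}

struct FastBackend;

impl Activation for FastBackend {
    // Override: a real backend would dispatch to an optimized kernel here.
    fn relu(tensor: Vec<f32>) -> Vec<f32> {
        tensor
            .into_iter()
            .map(|x| if x > 0.0 { x } else { 0.0 })
            .collect()
    }
}

fn main() {
    assert_eq!(FastBackend::relu(vec![-1.0, 2.0]), vec![0.0, 2.0]);
}
```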
Provided Methods
fn relu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the rectified linear unit, relu(x) = max(x, 0), element-wise.
fn relu_backward<const D: usize>(
    output: FloatTensor<B, D>,
    grad: FloatTensor<B, D>
) -> FloatTensor<B, D>

Computes the gradient of relu from the forward output and the incoming gradient.
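As an illustration of the semantics on plain slices (not Burn's implementation): relu_backward can work from the forward output alone, because the derivative of max(x, 0) is 1 exactly where the output is positive and 0 elsewhere.

```rust
/// Illustrative relu_backward on plain slices: the incoming gradient
/// is masked by the sign of the forward output.
fn relu_backward(output: &[f32], grad: &[f32]) -> Vec<f32> {
    output
        .iter()
        .zip(grad)
        .map(|(&o, &g)| if o > 0.0 { g } else { 0.0 })
        .collect()
}

fn main() {
    let output = [0.0, 2.0, 0.0]; // forward relu output
    let grad = [0.5, 0.5, 0.5];   // incoming gradient
    assert_eq!(relu_backward(&output, &grad), vec![0.0, 0.5, 0.0]);
}
```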
fn gelu<const D: usize>(tensor: FloatTensor<B, D>) -> FloatTensor<B, D>

Applies the Gaussian error linear unit, gelu(x) = x · Φ(x), where Φ is the standard normal CDF.
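A standalone sketch of the math using the common tanh approximation (std Rust has no erf); the actual default or a backend kernel may use a different formulation:

```rust
use std::f32::consts::PI;

/// Illustrative GELU via the common tanh approximation:
/// gelu(x) ≈ 0.5 * x * (1 + tanh(sqrt(2/π) * (x + 0.044715 * x³))).
/// The exact definition is x * Φ(x), with Φ the standard normal CDF.
fn gelu(x: f32) -> f32 {
    let c = (2.0 / PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}

fn main() {
    // gelu(0) = 0, and gelu approaches the identity for large positive x.
    assert!(gelu(0.0).abs() < 1e-6);
    assert!((gelu(5.0) - 5.0).abs() < 1e-3);
}
```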
fn gelu_backward<const D: usize>(
    x: FloatTensor<B, D>,
    grad: FloatTensor<B, D>
) -> FloatTensor<B, D>

Computes the gradient of gelu from the forward input x and the incoming gradient.
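Unlike relu_backward, this method takes the forward input x, since gelu's derivative, gelu′(x) = Φ(x) + x·φ(x), is naturally expressed in terms of the input. A sketch of that rule on plain slices, again with Φ approximated by the tanh form (illustrative only, not Burn's kernel):

```rust
use std::f32::consts::PI;

/// Illustrative gelu_backward: for gelu(x) = x * Φ(x), the derivative
/// is gelu'(x) = Φ(x) + x * φ(x), where φ is the standard normal PDF.
/// Φ is approximated here with the tanh form.
fn gelu_backward(x: &[f32], grad: &[f32]) -> Vec<f32> {
    let c = (2.0 / PI).sqrt();
    x.iter()
        .zip(grad)
        .map(|(&xi, &g)| {
            let cdf = 0.5 * (1.0 + (c * (xi + 0.044715 * xi * xi * xi)).tanh());
            let pdf = (-0.5 * xi * xi).exp() / (2.0 * PI).sqrt();
            g * (cdf + xi * pdf)
        })
        .collect()
}

fn main() {
    // At x = 0, Φ(0) = 0.5 and the x·φ(x) term vanishes, so the
    // propagated gradient is half the incoming gradient.
    let out = gelu_backward(&[0.0], &[1.0]);
    assert!((out[0] - 0.5).abs() < 1e-6);
}
```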
Object Safety
This trait is not object safe.