Trait ActivationOps

pub trait ActivationOps<B>
where
    B: Backend,
{
    // Provided methods
    fn leaky_relu(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
        negative_slope: Scalar,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn relu(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn relu_backward(
        output: <B as BackendTypes>::FloatTensorPrimitive,
        grad: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn gelu(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn prelu(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
        alpha: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn gelu_backward(
        x: <B as BackendTypes>::FloatTensorPrimitive,
        grad: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn sigmoid(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn sigmoid_backward(
        output: <B as BackendTypes>::FloatTensorPrimitive,
        grad: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn hard_sigmoid(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
        alpha: Scalar,
        beta: Scalar,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn log_sigmoid(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn softmax(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
        dim: usize,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn log_softmax(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
        dim: usize,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn softmin(
        tensor: <B as BackendTypes>::FloatTensorPrimitive,
        dim: usize,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
    fn log_sigmoid_backward(
        x: <B as BackendTypes>::FloatTensorPrimitive,
        grad: <B as BackendTypes>::FloatTensorPrimitive,
    ) -> <B as BackendTypes>::FloatTensorPrimitive { ... }
}

Activation function operations.

This trait lets backend implementations override activation functions for better performance.
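
As an illustration of the override pattern, here is a minimal standalone sketch; the `Activation` trait, `MyBackend` type, and `Vec<f32>` data are hypothetical stand-ins for the real generic trait and its tensor primitives:

    trait Activation {
        // Provided method: a generic default every backend gets for free.
        fn relu(xs: Vec<f32>) -> Vec<f32> {
            xs.into_iter().map(|x| x.max(0.0)).collect()
        }
    }

    struct MyBackend;

    // A backend overrides the default when it has a faster implementation,
    // e.g. a fused or vectorized kernel.
    impl Activation for MyBackend {
        fn relu(xs: Vec<f32>) -> Vec<f32> {
            xs.into_iter().map(|x| if x > 0.0 { x } else { 0.0 }).collect()
        }
    }

    fn main() {
        assert_eq!(<MyBackend as Activation>::relu(vec![-1.0, 2.0]), vec![0.0, 2.0]);
    }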

Provided Methods§

fn leaky_relu(tensor: <B as BackendTypes>::FloatTensorPrimitive, negative_slope: Scalar) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the LeakyReLU activation function.

§Arguments
  • tensor - The tensor.
  • negative_slope - The slope by which values smaller than 0 are multiplied.
§Returns

The output tensor.
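
For reference, the standard elementwise definition (assumed here; the page itself does not spell it out):

    \mathrm{LeakyReLU}(x) =
    \begin{cases}
        x, & x \ge 0 \\
        \text{negative\_slope} \cdot x, & x < 0
    \end{cases}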

fn relu(tensor: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the ReLU activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.

fn relu_backward(output: <B as BackendTypes>::FloatTensorPrimitive, grad: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the ReLU activation function backward.

§Arguments
  • output - The output tensor.
  • grad - The gradient.
§Returns

The gradient.
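
Since ReLU passes positive inputs through unchanged and zeroes the rest, the backward pass masks the incoming gradient wherever the output is zero. The standard rule, for reference:

    \frac{\partial L}{\partial x} = \text{grad} \odot \mathbb{1}[\text{output} > 0]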

fn gelu(tensor: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the GELU activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
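
For reference, the standard exact (erf-based) definition; whether a given backend uses this or the tanh approximation is implementation-dependent:

    \mathrm{GELU}(x) = x\,\Phi(x) = \frac{x}{2}\left(1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right)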

fn prelu(tensor: <B as BackendTypes>::FloatTensorPrimitive, alpha: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the PReLU activation function.

§Arguments
  • tensor - The input tensor.
  • alpha - The weight (slope) tensor.
§Returns

The output tensor.
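
PReLU behaves like LeakyReLU except that the negative slope alpha is a learned tensor (commonly one value per channel) rather than a fixed scalar. The standard elementwise definition, for reference:

    \mathrm{PReLU}(x) =
    \begin{cases}
        x, & x \ge 0 \\
        \alpha \cdot x, & x < 0
    \end{cases}
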
fn gelu_backward(x: <B as BackendTypes>::FloatTensorPrimitive, grad: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the GELU activation function backward.

§Arguments
  • x - The tensor.
  • grad - The gradient.
§Returns

The gradient.
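
For reference, differentiating the erf-based GELU gives the standard rule below, where φ is the standard normal density; the returned gradient is then grad ⊙ (Φ(x) + x φ(x)):

    \frac{d}{dx}\,x\,\Phi(x) = \Phi(x) + x\,\varphi(x),
    \qquad \varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}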

fn sigmoid(tensor: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the Sigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
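
The standard definition, for reference:

    \sigma(x) = \frac{1}{1 + e^{-x}}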

fn sigmoid_backward(output: <B as BackendTypes>::FloatTensorPrimitive, grad: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the Sigmoid activation function backward.

§Arguments
  • output - The output tensor of the sigmoid function.
  • grad - The gradient.
§Returns

The gradient.
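
Because σ'(x) = σ(x)(1 − σ(x)), the backward pass can be computed from the forward output alone, which is why this method takes output rather than the original input:

    \frac{\partial L}{\partial x} = \text{grad} \odot \text{output} \odot (1 - \text{output})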

fn hard_sigmoid(tensor: <B as BackendTypes>::FloatTensorPrimitive, alpha: Scalar, beta: Scalar) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the hard Sigmoid activation function.

§Arguments
  • tensor - The tensor.
  • alpha - The slope by which the tensor is multiplied.
  • beta - The offset added to the scaled tensor.
§Returns

The output tensor.
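
For reference, the usual piecewise-linear definition with these parameters, assuming the conventional clamp to [0, 1]:

    \mathrm{HardSigmoid}(x) = \max\bigl(0,\ \min(1,\ \alpha x + \beta)\bigr)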

fn log_sigmoid(tensor: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the LogSigmoid activation function.

§Arguments
  • tensor - The tensor.
§Returns

The output tensor.
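
Computing log(sigmoid(x)) directly underflows for large negative x, so implementations typically use a numerically stable rewrite such as:

    \log \sigma(x) = \min(x, 0) - \log\left(1 + e^{-|x|}\right)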

fn softmax(tensor: <B as BackendTypes>::FloatTensorPrimitive, dim: usize) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the softmax function along the given dimension.

Uses the max-shift trick for numerical stability: the per-row max is detached so no gradient flows back through it (the shift is a numerical-stability transformation, not part of the function).

§Arguments
  • tensor - The tensor.
  • dim - The dimension along which softmax is computed.
§Returns

The output tensor.
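
A minimal scalar sketch of the max-shift over a single row, for illustration only (the trait operates on tensor primitives, and in the real implementation the shift is additionally detached from the autodiff graph):

    fn softmax_row(row: &[f32]) -> Vec<f32> {
        // Subtract the row max so exp() cannot overflow; the shift cancels
        // in the normalizing ratio, so the result is unchanged.
        let max = row.iter().copied().fold(f32::NEG_INFINITY, f32::max);
        let exps: Vec<f32> = row.iter().map(|&x| (x - max).exp()).collect();
        let sum: f32 = exps.iter().sum();
        exps.into_iter().map(|e| e / sum).collect()
    }

    fn main() {
        println!("{:?}", softmax_row(&[1.0, 2.0, 3.0])); // ≈ [0.090, 0.245, 0.665]
    }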

fn log_softmax(tensor: <B as BackendTypes>::FloatTensorPrimitive, dim: usize) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the log-softmax function along the given dimension.

Computed via the log-sum-exp trick with a detached max-shift for numerical stability.

§Arguments
  • tensor - The tensor.
  • dim - The dimension along which log-softmax is computed.
§Returns

The output tensor.
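
For reference, the stable form with the detached shift m = max_j x_j:

    \log\,\mathrm{softmax}(x)_i = (x_i - m) - \log \sum_j e^{x_j - m}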

fn softmin(tensor: <B as BackendTypes>::FloatTensorPrimitive, dim: usize) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the softmin function along the given dimension.

Equivalent to softmax(-tensor, dim).

§Arguments
  • tensor - The tensor.
  • dim - The dimension along which softmin is computed.
§Returns

The output tensor.

fn log_sigmoid_backward(x: <B as BackendTypes>::FloatTensorPrimitive, grad: <B as BackendTypes>::FloatTensorPrimitive) -> <B as BackendTypes>::FloatTensorPrimitive

Applies the LogSigmoid activation function backward.

§Arguments
  • x - The input tensor.
  • grad - The gradient.
§Returns

The output gradient.
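
Since d/dx log σ(x) = 1 − σ(x) = σ(−x), the standard backward rule is:

    \frac{\partial L}{\partial x} = \text{grad} \odot \sigma(-x)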

Dyn Compatibility§

This trait is not dyn compatible.

In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.

Implementors§