pub fn smooth_l1_loss<T: Reduce<AllAxes>>(
    pred: T,
    targ: T::NoTape,
    beta: T::Dtype
) -> T::Reduced

Smooth L1 loss (closely related to Huber loss) uses absolute error when the absolute error is at least beta, and squared error when the absolute error is below beta.

It computes:

  1. if |x - y| < beta: 0.5 * (x - y)^2 / beta
  2. otherwise: |x - y| - 0.5 * beta
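To see the piecewise rule in isolation, here is a minimal standalone sketch in plain Rust. The smooth_l1 helper below is illustrative only, it is not part of the dfdx API, and it assumes the per-element values are averaged into a single scalar:

// Illustrative helper (not part of dfdx): applies the piecewise rule
// element-wise and averages the results.
fn smooth_l1(pred: &[f32], targ: &[f32], beta: f32) -> f32 {
    let sum: f32 = pred
        .iter()
        .zip(targ)
        .map(|(&x, &y)| {
            let err = (x - y).abs();
            if err < beta {
                0.5 * err * err / beta // quadratic branch near zero
            } else {
                err - 0.5 * beta // linear branch for large errors
            }
        })
        .sum();
    sum / pred.len() as f32 // mean over elements (assumed reduction)
}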

Example

use dfdx::prelude::*;

let x = Tensor1D::new([-1.0, -0.5]);
let y = Tensor1D::new([0.5, 0.5]);
let loss = smooth_l1_loss(x.traced(), y, 1.0); // beta = 1.0
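For these inputs the absolute errors are 1.5 and 1.0, both at or above beta = 1.0, so both elements take the linear branch and contribute 1.0 and 0.5 respectively; assuming the result is reduced by taking the mean, the scalar loss would be 0.75.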