Function neuronika::nn::loss::bce_with_logits_loss
pub fn bce_with_logits_loss<T, U, V>(
input: VarDiff<T, U>,
target: Var<V>,
reduction: Reduction
) -> VarDiff<BCEWithLogitsLoss<T, V>, BCEWithLogitsLossBackward<U, T, V>> where
T: Data,
U: Gradient<Dim = T::Dim> + Overwrite,
V: Data<Dim = T::Dim>,
Computes the binary cross entropy with logits between the target y and input x.
$$
\mathrm{Loss} = \frac{1}{n} \sum_{i=1}^{n} -\Bigl[\, y_i \,\ln \sigma(x_i) + (1 - y_i)\,\ln\bigl(1 - \sigma(x_i)\bigr) \Bigr]
$$

where $\sigma(x) = 1 / (1 + e^{-x})$ is the sigmoid function.
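As an illustrative sanity check (not part of the original docs): for a single pair with target $y_1 = 1$ and logit $x_1 = 0$, we have $\sigma(0) = 0.5$, so $\mathrm{Loss} = -\ln 0.5 = \ln 2 \approx 0.693$.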
This loss combines a sigmoid and a binary cross entropy. It is more numerically stable than a plain sigmoid followed by a binary cross entropy because, by fusing the two operations into a single layer, it can exploit the log-sum-exp trick. Note that the target values y should be numbers between 0 and 1, while the input x should consist of raw, unnormalized scores.
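To make the stability claim concrete, here is a minimal standalone sketch (illustrative only, not neuronika's internal implementation; the function name is hypothetical). Expanding $\ln \sigma(x) = -\ln(1 + e^{-x})$ and $\ln(1 - \sigma(x)) = -x - \ln(1 + e^{-x})$, each per-element term simplifies to $\max(x, 0) - x y + \ln(1 + e^{-|x|})$, which never takes the logarithm of a value near zero:

```rust
/// Numerically stable BCE-with-logits over a slice of logits and targets
/// (a sketch of the fused formulation, not neuronika's actual code).
/// Per element: max(x, 0) - x * y + ln(1 + e^(-|x|)), then mean over n.
fn stable_bce_with_logits(logits: &[f64], targets: &[f64]) -> f64 {
    assert_eq!(logits.len(), targets.len());
    let n = logits.len() as f64;
    logits
        .iter()
        .zip(targets)
        // ln_1p(e^(-|x|)) keeps the argument of the logarithm well away
        // from zero, so no term can overflow or produce -inf.
        .map(|(&x, &y)| x.max(0.0) - x * y + (-x.abs()).exp().ln_1p())
        .sum::<f64>()
        / n
}

fn main() {
    // Naively, σ(100) rounds to exactly 1.0 in f64, so ln(1 - σ(100))
    // would be ln(0) = -inf; the fused form stays finite.
    let logits = [100.0, 0.0, -3.5];
    let targets = [0.0, 1.0, 0.0];
    println!("{}", stable_bce_with_logits(&logits, &targets)); // ≈ 33.5743
}
```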
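A hedged usage sketch follows. The tensor constructors (neuronika::rand, neuronika::zeros), the requires_grad, forward, and backward calls, and the shapes are assumptions based on neuronika's documented API and may differ across versions; only bce_with_logits_loss and Reduction come from the signature above:

```rust
use neuronika::nn::loss::{bce_with_logits_loss, Reduction};

fn main() {
    // Assumed constructors and shapes; check your neuronika version's docs.
    let input = neuronika::rand((8, 1)).requires_grad(); // raw scores -> VarDiff
    let target = neuronika::zeros((8, 1));               // labels in [0, 1] -> Var

    let loss = bce_with_logits_loss(input, target, Reduction::Mean);
    loss.forward();     // evaluate the graph up to the loss node
    loss.backward(1.0); // seed the gradient with 1.0 and backpropagate
}
```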