Differentiable loss functions for neural network training.
These loss functions work with autograd Tensors and support backpropagation for gradient-based optimization.
§Example
use aprender::nn::loss::{MSELoss, CrossEntropyLoss};
use aprender::autograd::Tensor;
// Regression loss
let criterion = MSELoss::new();
let pred = Tensor::from_slice(&[1.0, 2.0, 3.0]).requires_grad();
let target = Tensor::from_slice(&[1.1, 2.0, 2.9]);
let loss = criterion.forward(&pred, &target);
loss.backward();
// Classification loss
let criterion = CrossEntropyLoss::new();
let logits = Tensor::new(&[1.0, 2.0, 0.5, 0.1, 3.0, 0.2], &[2, 3]).requires_grad();
let targets = Tensor::from_slice(&[1.0, 2.0]); // class indices
let loss = criterion.forward(&logits, &targets);
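For intuition, here is a minimal sketch of what the two criteria above compute, written against plain slices rather than the crate's Tensor type. The mean reduction for MSE is an assumption (see the Reduction enum below), and the helper names mse and cross_entropy are illustrative, not part of this crate.

// What an MSE criterion computes, per the standard definition:
// MSE = (1/n) * sum_i (pred_i - target_i)^2   (mean reduction assumed)
fn mse(pred: &[f32], target: &[f32]) -> f32 {
    let n = pred.len() as f32;
    pred.iter().zip(target).map(|(p, t)| (p - t).powi(2)).sum::<f32>() / n
}

// Cross-entropy for one sample from raw logits:
// CE = -log softmax(logits)[class] = logsumexp(logits) - logits[class]
fn cross_entropy(logits: &[f32], class: usize) -> f32 {
    // Subtract the max before exponentiating for numerical stability.
    let max = logits.iter().copied().fold(f32::NEG_INFINITY, f32::max);
    let lse = max + logits.iter().map(|&x| (x - max).exp()).sum::<f32>().ln();
    lse - logits[class]
}

// Matches the regression example above: ((-0.1)^2 + 0.0^2 + 0.1^2) / 3 ~ 0.0067
let _ = mse(&[1.0, 2.0, 3.0], &[1.1, 2.0, 2.9]);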
§References
- Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
Structs§
- BCEWithLogitsLoss - Binary Cross-Entropy with Logits loss.
- CrossEntropyLoss - Cross-Entropy Loss for classification.
- L1Loss - Mean Absolute Error loss for regression.
- MSELoss - Mean Squared Error loss for regression.
- NLLLoss - Negative Log Likelihood loss.
- SmoothL1Loss - Smooth L1 Loss (Huber Loss); a usage sketch for the regression losses follows this list.
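A sketch of how the other regression losses plug in, assuming each struct follows the same new()/forward(&pred, &target) pattern shown for MSELoss above; the L1Loss::new() and SmoothL1Loss::new() constructors are assumptions, not confirmed by this page.

use aprender::autograd::Tensor;
use aprender::nn::loss::{L1Loss, SmoothL1Loss};

let pred = Tensor::from_slice(&[1.0, 2.0, 3.0]).requires_grad();
let target = Tensor::from_slice(&[1.5, 2.0, 2.0]);

// Constructors assumed to mirror MSELoss::new(); check each struct's page.
let mae = L1Loss::new().forward(&pred, &target);         // mean |pred - target|
let huber = SmoothL1Loss::new().forward(&pred, &target); // quadratic near zero, linear in the tails
huber.backward();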
Enums§
- Reduction - Reduction mode for loss functions; a conceptual sketch follows.
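A reduction mode controls how per-element losses collapse into a scalar. The self-contained sketch below is conceptual only; the local enum and its Mean/Sum/None variants mirror common convention (e.g. PyTorch) and are not taken from this crate's Reduction type.

// Conceptual only: variant names are an assumption, not this crate's API.
enum Reduction {
    Mean, // average the per-element losses into one scalar
    Sum,  // add them up without normalizing
    None, // keep the per-element losses as-is
}

fn reduce(per_element: &[f32], mode: Reduction) -> Vec<f32> {
    match mode {
        Reduction::Mean => vec![per_element.iter().sum::<f32>() / per_element.len() as f32],
        Reduction::Sum => vec![per_element.iter().sum::<f32>()],
        Reduction::None => per_element.to_vec(),
    }
}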