pub fn accuracy_score<T, S1, S2, D1, D2>(
y_true: &ArrayBase<S1, D1>,
y_pred: &ArrayBase<S2, D2>,
) -> Result<f64>
Calculates the accuracy score, the fraction of correctly classified samples.
§Mathematical Formulation
The accuracy score is defined as:
Accuracy = (Number of Correct Predictions) / (Total Number of Predictions)
         = (TP + TN) / (TP + TN + FP + FN)
Where:
- TP = True Positives
- TN = True Negatives
- FP = False Positives
- FN = False Negatives
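As a concrete check of the binary formula, the sketch below applies it to raw confusion-matrix counts. accuracy_from_counts is a hypothetical helper written only for this illustration; it is not part of scirs2_metrics.
// Hypothetical helper, not part of scirs2_metrics: applies
// (TP + TN) / (TP + TN + FP + FN) to raw confusion-matrix counts.
// `fn_` avoids the Rust keyword `fn`.
fn accuracy_from_counts(tp: u64, tn: u64, fp: u64, fn_: u64) -> f64 {
    let correct = (tp + tn) as f64;
    let total = (tp + tn + fp + fn_) as f64;
    correct / total
}
// TP = 40, TN = 45, FP = 10, FN = 5 -> 85 correct out of 100 samples.
assert!((accuracy_from_counts(40, 45, 10, 5) - 0.85).abs() < 1e-12);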
For multi-class classification, accuracy is simply:
Accuracy = (1/n) * Σ I(ŷᵢ = yᵢ)
Where:
- n = total number of samples
- I(·) = indicator function (1 if condition is true, 0 otherwise)
- ŷᵢ = predicted label for sample i
- yᵢ = true label for sample i
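Spelled out in code, this formula is just a count of matching label pairs divided by n. naive_accuracy below is a minimal sketch over integer slices, not the library's actual implementation:
// Minimal sketch of (1/n) * Σ I(ŷᵢ = yᵢ); illustrative only,
// not how scirs2_metrics implements accuracy_score.
fn naive_accuracy(y_true: &[i64], y_pred: &[i64]) -> f64 {
    assert_eq!(y_true.len(), y_pred.len(), "inputs must have equal length");
    let correct = y_true
        .iter()
        .zip(y_pred.iter())
        .filter(|(t, p)| t == p) // the indicator I(ŷᵢ = yᵢ)
        .count();
    correct as f64 / y_true.len() as f64
}
assert!((naive_accuracy(&[0, 1, 2, 3], &[0, 2, 1, 3]) - 0.5).abs() < 1e-10);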
§Range
Accuracy is bounded between 0 and 1, where:
- 0 = worst possible accuracy (all predictions wrong)
- 1 = perfect accuracy (all predictions correct)
- 0.5 = expected accuracy of random guessing on balanced binary classification
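Both bounds can be reproduced with accuracy_score itself, using the same imports as the example in §Examples below:
use scirs2_core::ndarray::array;
use scirs2_metrics::classification::accuracy_score;
let y_true = array![0, 1, 0, 1];
let y_flipped = array![1, 0, 1, 0];
// Comparing the ground truth with itself gives perfect accuracy (1.0)...
assert!((accuracy_score(&y_true, &y_true).unwrap() - 1.0).abs() < 1e-10);
// ...while flipping every binary label gives the worst case (0.0).
assert!(accuracy_score(&y_true, &y_flipped).unwrap().abs() < 1e-10);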
§Arguments
- y_true - Ground truth (correct) labels
- y_pred - Predicted labels, as returned by a classifier
§Returns
- The fraction of correctly classified samples (f64)
§Examples
use scirs2_core::ndarray::array;
use scirs2_metrics::classification::accuracy_score;
let y_true = array![0, 1, 2, 3];
let y_pred = array![0, 2, 1, 3];
let acc = accuracy_score(&y_true, &y_pred).unwrap();
assert!((acc - 0.5).abs() < 1e-10); // 2 out of 4 are correct
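A multi-class variant of the same example, with three classes and six samples:
use scirs2_core::ndarray::array;
use scirs2_metrics::classification::accuracy_score;
// Three classes, six samples; positions 0, 2, 3, and 4 match.
let y_true = array![0, 1, 2, 2, 1, 0];
let y_pred = array![0, 2, 2, 2, 1, 1];
let acc = accuracy_score(&y_true, &y_pred).unwrap();
assert!((acc - 4.0 / 6.0).abs() < 1e-10); // 4 out of 6 are correct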