pub fn precision_score<T, S1, S2, D1, D2>(
y_true: &ArrayBase<S1, D1>,
y_pred: &ArrayBase<S2, D2>,
pos_label: T,
) -> Result<f64>
Calculates the precision score for binary classification
§Mathematical Formulation
Precision is defined as:
Precision = TP / (TP + FP)
Where:
- TP = True Positives (correctly predicted positive cases)
- FP = False Positives (incorrectly predicted as positive)
Alternatively, precision can be expressed as:
Precision = P(y_true = positive | ŷ = positive)
This represents the probability that a sample is actually positive given that the classifier predicted it as positive.
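As a rough, illustrative sketch of the formula above (not the crate's actual implementation), precision can be computed by counting true and false positives directly; manual_precision is a hypothetical helper name used only for this example:
fn manual_precision(y_true: &[i32], y_pred: &[i32], pos_label: i32) -> f64 {
    // Count samples predicted as positive that are actually positive (TP)
    let tp = y_true.iter().zip(y_pred)
        .filter(|(t, p)| **t == pos_label && **p == pos_label)
        .count() as f64;
    // Count samples predicted as positive that are actually negative (FP)
    let fp = y_true.iter().zip(y_pred)
        .filter(|(t, p)| **t != pos_label && **p == pos_label)
        .count() as f64;
    // Note: 0.0 / 0.0 yields NaN when there are no positive predictions
    tp / (tp + fp)
}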
§Interpretation
Precision answers the question: “Of all the samples the classifier predicted as positive, how many were actually positive?”
- High precision means few false positives relative to the number of positive predictions
- Precision = 1.0 means no false positives
- Precision = 0.0 means no true positives (all positive predictions are wrong)
§Range
Precision is bounded between 0 and 1:
- 0 = worst precision (no correct positive predictions)
- 1 = perfect precision (no false positive predictions)
§Use Cases
High precision is important when the cost of false positives is high, such as:
- Medical diagnosis (avoid unnecessary treatments)
- Spam detection (avoid blocking legitimate emails)
- Quality control (avoid rejecting good products)
§Arguments
- y_true - Ground truth (correct) binary labels
- y_pred - Predicted binary labels, as returned by a classifier
- pos_label - The label to report as the positive class
§Returns
- The precision score (float between 0.0 and 1.0)
§Examples
use scirs2_core::ndarray::array;
use scirs2_metrics::classification::precision_score;
let y_true = array![0, 1, 0, 0, 1, 1];
let y_pred = array![0, 0, 1, 0, 1, 1];
let precision = precision_score(&y_true, &y_pred, 1).unwrap();
// There are 2 true positives and 1 false positive
assert!((precision - 2.0/3.0).abs() < 1e-10);
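The pos_label argument selects which class is counted as positive. As a sketch, assuming integer labels other than 1 follow the same call pattern as the example above, treating 0 as the positive class simply changes which predictions are counted as true and false positives:
use scirs2_core::ndarray::array;
use scirs2_metrics::classification::precision_score;
let y_true = array![1, 0, 1, 1, 0];
let y_pred = array![1, 1, 0, 1, 0];
// With 0 as the positive class there is 1 true positive (index 4)
// and 1 false positive (index 2), so precision = 1/2
let precision = precision_score(&y_true, &y_pred, 0).unwrap();
assert!((precision - 0.5).abs() < 1e-10);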