pub struct ConfusionMatrix<A> { /* private fields */ }

Confusion matrix for multi-label evaluation

A confusion matrix tabulates predictions against targets: rows correspond to target (true) classes and columns to predicted classes. Diagonal entries are correct predictions; everything off the diagonal is a misclassification.
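The layout described above can be sketched independently of this crate. The following is a hypothetical, std-only illustration (not the crate's actual implementation) of how such a matrix is filled:

```rust
// Hypothetical sketch, not this crate's API: build a confusion matrix
// by hand to illustrate the layout described above.
fn confusion_matrix(target: &[usize], pred: &[usize], n: usize) -> Vec<Vec<usize>> {
    let mut m = vec![vec![0usize; n]; n];
    for (&t, &p) in target.iter().zip(pred.iter()) {
        m[t][p] += 1; // row = target class, column = predicted class
    }
    m
}

fn main() {
    let ground_truth = [0, 0, 1, 0, 1, 0, 1];
    let prediction = [0, 1, 1, 1, 0, 0, 1];
    let m = confusion_matrix(&ground_truth, &prediction, 2);
    // diagonal entries count the correct predictions
    let correct: usize = (0..2).map(|i| m[i][i]).sum();
    println!("{:?}, {} correct out of {}", m, correct, ground_truth.len());
}
```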

Implementations

Precision score, the number of items correctly classified as the first class divided by the total number of items predicted as the first class

Binary confusion matrix

For binary confusion matrices (2x2 size) the precision score is calculated for the first label and corresponds to

true-label-1 / (true-label-1 + false-label-1)

where true-label-1 counts items correctly predicted as the first label (true positives) and false-label-1 counts items incorrectly predicted as the first label (false positives).
Multilabel confusion matrix

For multilabel confusion matrices, the precision score is averaged over all classes (also known as macro averaging). A finer-grained evaluation can be performed by first splitting the confusion matrix with split_one_vs_all and then applying a different averaging scheme.

Examples
// import the ndarray macro and the crate's prelude, which brings the
// confusion-matrix trait into scope
use ndarray::array;
use linfa::prelude::*;

// create dummy classes 0 and 1
let prediction = array![0, 1, 1, 1, 0, 0, 1];
let ground_truth = array![0, 0, 1, 0, 1, 0, 1];

// create confusion matrix
let cm = prediction.into_confusion_matrix(&ground_truth);

// print precision for label 0
println!("{:?}", cm.precision());
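The macro averaging described above can be sketched without the crate. The matrix below is the one produced by the example's prediction and ground truth (rows = target, columns = predicted); this is a hypothetical, std-only illustration, not the crate's implementation:

```rust
// Hypothetical sketch, not this crate's API: macro-averaged precision,
// the unweighted mean of per-class precision scores.
fn macro_precision(m: &[Vec<usize>]) -> f64 {
    let n = m.len();
    let sum: f64 = (0..n)
        .map(|j| {
            let tp = m[j][j] as f64;
            // column sum j = everything predicted as class j
            let predicted: usize = (0..n).map(|i| m[i][j]).sum();
            tp / predicted as f64
        })
        .sum();
    sum / n as f64
}

fn main() {
    // rows = target, columns = predicted
    let m = vec![vec![2, 2], vec![1, 2]];
    // per-class precision: 2/3 and 2/4, macro average = 7/12
    println!("{}", macro_precision(&m));
}
```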

Recall score, the number of items correctly classified as the first class divided by the total number of items actually belonging to the first class

Binary confusion matrix

For binary confusion matrices (2x2 size) the recall score is calculated for the first label and corresponds to

true-label-1 / (true-label-1 + false-label-2)

where false-label-2 counts items of the first class incorrectly given the second label (false negatives).
Multilabel confusion matrix

For multilabel confusion matrices the recall score is averaged over all classes (also known as macro averaging). A more precise evaluation can be achieved by first splitting the confusion matrix with split_one_vs_all and then applying a different averaging scheme.

Example
// import the ndarray macro and the crate's prelude, which brings the
// confusion-matrix trait into scope
use ndarray::array;
use linfa::prelude::*;

// create dummy classes 0 and 1
let prediction = array![0, 1, 1, 1, 0, 0, 1];
let ground_truth = array![0, 0, 1, 0, 1, 0, 1];

// create confusion matrix
let cm = prediction.into_confusion_matrix(&ground_truth);

// print recall for label 0
println!("{:?}", cm.recall());

Accuracy score

The accuracy score is the ratio of correct classifications to all classifications. For multi-label confusion matrices this is the ratio of the sum of the diagonal entries to the sum of all entries.
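The diagonal-over-total computation can be sketched as follows; this is a hypothetical std-only illustration, not the crate's implementation:

```rust
// Hypothetical sketch, not this crate's API: accuracy as the sum of
// diagonal entries divided by the sum of all entries.
fn accuracy(m: &[Vec<usize>]) -> f64 {
    let correct: usize = (0..m.len()).map(|i| m[i][i]).sum();
    let total: usize = m.iter().flatten().sum();
    correct as f64 / total as f64
}

fn main() {
    let m = vec![vec![2, 2], vec![1, 2]]; // rows = target, columns = predicted
    println!("{}", accuracy(&m)); // 4 correct out of 7 samples
}
```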

F-beta-score

The F-beta score is a weighted harmonic mean of precision and recall, where beta controls the weight given to recall. It is defined as

(1.0 + b*b) * (precision * recall) / (b * b * precision + recall)

F1-score, the F-beta score for beta = 1
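The formula above transcribes directly into code (a sketch, not the crate's implementation):

```rust
// Direct transcription of the F-beta formula; `beta` weights recall
// relative to precision, and beta = 1 gives the F1 score.
fn f_beta(precision: f64, recall: f64, beta: f64) -> f64 {
    let b2 = beta * beta;
    (1.0 + b2) * precision * recall / (b2 * precision + recall)
}

fn main() {
    let (p, r) = (2.0 / 3.0, 0.5);
    println!("F1 = {}", f_beta(p, r, 1.0)); // harmonic mean of p and r
    println!("F2 = {}", f_beta(p, r, 2.0)); // weights recall more heavily
}
```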

Matthews Correlation Coefficient

Estimates the normalized cross-correlation between the target and predicted variables. The MCC is more informative than precision or recall alone, because all four quadrants of the confusion matrix enter the evaluation. A generalized version for multiple labels is also included.
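For the binary case, the standard MCC formula over the four quadrants can be sketched as (a hypothetical illustration, not the crate's implementation):

```rust
// Binary MCC over the four quadrants of a 2x2 confusion matrix:
// (tp*tn - fp*fn) / sqrt((tp+fp)(tp+fn)(tn+fp)(tn+fn)).
// `fn` is a Rust keyword, so the false-negative count is named `fn_`.
fn mcc(tp: f64, tn: f64, fp: f64, fn_: f64) -> f64 {
    let denom = ((tp + fp) * (tp + fn_) * (tn + fp) * (tn + fn_)).sqrt();
    (tp * tn - fp * fn_) / denom
}

fn main() {
    // counts taken from a 2x2 matrix [[2, 2], [1, 2]] with class 1 positive
    println!("{}", mcc(2.0, 2.0, 2.0, 1.0));
}
```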

Split the confusion matrix into N one-vs-all binary confusion matrices

Split the confusion matrix into N*(N-1)/2 one-vs-one binary confusion matrices
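The one-vs-all split can be sketched as follows: for each class k, true positives come from the diagonal entry, false negatives from the rest of row k, false positives from the rest of column k, and true negatives from everything else. This is a hypothetical std-only illustration, not the crate's implementation:

```rust
// Hypothetical sketch, not this crate's API: split an N x N confusion
// matrix into N one-vs-all 2x2 matrices laid out as [[tp, fn], [fp, tn]].
fn split_one_vs_all(m: &[Vec<usize>]) -> Vec<[[usize; 2]; 2]> {
    let n = m.len();
    let total: usize = m.iter().flatten().sum();
    (0..n)
        .map(|k| {
            let tp = m[k][k];
            let fn_ = (0..n).map(|j| m[k][j]).sum::<usize>() - tp; // rest of row k
            let fp = (0..n).map(|i| m[i][k]).sum::<usize>() - tp; // rest of column k
            let tn = total - tp - fn_ - fp;
            [[tp, fn_], [fp, tn]]
        })
        .collect()
}

fn main() {
    let m = vec![vec![2, 2], vec![1, 2]]; // rows = target, columns = predicted
    println!("{:?}", split_one_vs_all(&m));
}
```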

Trait Implementations

Print a confusion matrix

Formats the value using the given formatter.
