Struct linfa::metrics::ConfusionMatrix
pub struct ConfusionMatrix<A> { /* fields omitted */ }
Confusion matrix for multi-label evaluation
A confusion matrix shows predictions in a matrix whose rows correspond to the target (true) labels and whose columns correspond to the predicted labels. Diagonal entries are correct predictions; everything off the diagonal is a misclassification.
Implementations
impl<A> ConfusionMatrix<A>
pub fn precision(&self) -> f32
Precision score, the number of correct predictions for the first class divided by the total number of predictions assigned to the first class
Binary confusion matrix
For binary confusion matrices (2x2 size) the precision score is calculated for the first label and corresponds to
true-label-1 / (true-label-1 + false-label-1)
Multilabel confusion matrix
For multi-label confusion matrices, the precision score is averaged over all classes (also known as macro averaging). For finer control over the averaging, first split the confusion matrix with split_one_vs_all and then apply a different averaging scheme.
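The macro-averaging step can be illustrated with a stdlib-only sketch (this is an illustration of the averaging formula, not the linfa implementation itself; the `macro_precision` helper is hypothetical):

```rust
// Macro-averaged precision for a 3x3 confusion matrix (rows = target,
// columns = predicted). Stdlib-only illustration; not part of linfa.
fn macro_precision(cm: &[[f32; 3]; 3]) -> f32 {
    let n = 3;
    let mut sum = 0.0;
    for class in 0..n {
        // Precision of `class`: correct predictions of the class divided
        // by all predictions of the class (the column sum).
        let tp = cm[class][class];
        let predicted: f32 = (0..n).map(|row| cm[row][class]).sum();
        sum += if predicted > 0.0 { tp / predicted } else { 0.0 };
    }
    // Macro averaging weighs every class equally, regardless of support.
    sum / n as f32
}

fn main() {
    let cm = [
        [2.0, 0.0, 1.0],
        [1.0, 2.0, 0.0],
        [0.0, 1.0, 3.0],
    ];
    // Per-class precisions are 2/3, 2/3, and 3/4; their mean is printed.
    println!("{}", macro_precision(&cm));
}
```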
Examples
```rust
use ndarray::array;

// create dummy classes 0 and 1
let prediction = array![0, 1, 1, 1, 0, 0, 1];
let ground_truth = array![0, 0, 1, 0, 1, 0, 1];

// create confusion matrix
let cm = prediction.into_confusion_matrix(&ground_truth);

// print precision for label 0
println!("{:?}", cm.precision());
```
pub fn recall(&self) -> f32
Recall score, the number of correct predictions for the first class divided by the number of samples that actually belong to the first class
Binary confusion matrix
For binary confusion matrices (2x2 size) the recall score is calculated for the first label and corresponds to
true-label-1 / (true-label-1 + false-label-2)
Multilabel confusion matrix
For multi-label confusion matrices, the recall score is averaged over all classes (also known as macro averaging). For finer control over the averaging, first split the confusion matrix with split_one_vs_all and then apply a different averaging scheme.
Example
```rust
use ndarray::array;

// create dummy classes 0 and 1
let prediction = array![0, 1, 1, 1, 0, 0, 1];
let ground_truth = array![0, 0, 1, 0, 1, 0, 1];

// create confusion matrix
let cm = prediction.into_confusion_matrix(&ground_truth);

// print recall for label 0
println!("{:?}", cm.recall());
```
pub fn accuracy(&self) -> f32
Accuracy score
The accuracy score is the ratio of correct classifications to all classifications. For multi-label confusion matrices this is the ratio of the sum of the diagonal entries to the sum of all entries.
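The diagonal-over-total computation can be sketched with a stdlib-only example (an illustration of the formula, not the linfa implementation itself):

```rust
// Accuracy as the ratio of the diagonal sum to the total sum, for an
// arbitrary square confusion matrix. Stdlib-only illustration.
fn accuracy(cm: &[Vec<f32>]) -> f32 {
    let diagonal: f32 = cm.iter().enumerate().map(|(i, row)| row[i]).sum();
    let total: f32 = cm.iter().flat_map(|row| row.iter()).sum();
    diagonal / total
}

fn main() {
    // 4 + 3 + 5 = 12 correct out of 16 predictions in total.
    let cm = vec![
        vec![4.0, 1.0, 0.0],
        vec![1.0, 3.0, 1.0],
        vec![0.0, 1.0, 5.0],
    ];
    println!("{}", accuracy(&cm)); // 12 / 16 = 0.75
}
```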
pub fn f_score(&self, beta: f32) -> f32
F-beta-score
The F-beta score is a weighted harmonic mean of precision and recall. It is defined as
(1.0 + beta*beta) * (precision * recall) / (beta*beta * precision + recall)
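The formula above can be written out directly as a stdlib-only sketch (an illustration of the definition, not the linfa implementation itself):

```rust
// F-beta as the weighted harmonic mean of precision and recall:
// (1 + b*b) * P * R / (b*b * P + R). Stdlib-only illustration.
fn f_score(precision: f32, recall: f32, beta: f32) -> f32 {
    let b2 = beta * beta;
    (1.0 + b2) * (precision * recall) / (b2 * precision + recall)
}

fn main() {
    // With beta = 1 this reduces to the familiar F1 score.
    println!("{}", f_score(0.5, 0.5, 1.0)); // 0.5
    // beta > 1 weighs recall more heavily than precision.
    println!("{}", f_score(0.25, 1.0, 2.0)); // 0.625
}
```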
pub fn f1_score(&self) -> f32
F1 score, the F-beta score with beta = 1
pub fn mcc(&self) -> f32
Matthews Correlation Coefficient
Estimates the normalized cross-correlation between the target and predicted variables. The MCC is more informative than precision or recall alone, because all four quadrants of the confusion matrix enter the evaluation. A generalization to multiple labels is also included.
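For the binary case, the four quadrants enter the standard MCC formula, which can be sketched with the stdlib only (an illustration of the definition, not the linfa implementation, which also covers the multi-label case):

```rust
// Binary Matthews correlation coefficient from the four quadrants of a
// 2x2 confusion matrix. Stdlib-only illustration.
fn mcc(tp: f32, tn: f32, fp: f32, fn_: f32) -> f32 {
    let numerator = tp * tn - fp * fn_;
    let denominator = ((tp + fp) * (tp + fn_) * (tn + fp) * (tn + fn_)).sqrt();
    // The coefficient is conventionally 0 when any marginal sum is empty.
    if denominator == 0.0 { 0.0 } else { numerator / denominator }
}

fn main() {
    // A perfect classifier scores +1, ...
    println!("{}", mcc(5.0, 5.0, 0.0, 0.0)); // 1.0
    // ... while predictions opposite to the truth score -1.
    println!("{}", mcc(0.0, 0.0, 5.0, 5.0)); // -1.0
}
```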
pub fn split_one_vs_all(&self) -> Vec<ConfusionMatrix<bool>>
Split the confusion matrix into N one-vs-all binary confusion matrices
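The one-vs-all construction can be sketched with a stdlib-only example: each class in turn becomes the "positive" label and every other class is pooled into "negative" (an illustration of the idea, not the linfa implementation itself):

```rust
// One-vs-all split of an NxN confusion matrix into N 2x2 matrices laid
// out as [[tp, fn], [fp, tn]]. Stdlib-only illustration.
fn split_one_vs_all(cm: &[Vec<f32>]) -> Vec<[[f32; 2]; 2]> {
    let n = cm.len();
    let total: f32 = cm.iter().flat_map(|row| row.iter()).sum();
    (0..n)
        .map(|class| {
            let tp = cm[class][class];
            // Row entries off the diagonal: the class missed as something else.
            let fn_: f32 = (0..n).filter(|&j| j != class).map(|j| cm[class][j]).sum();
            // Column entries off the diagonal: other classes predicted as this one.
            let fp: f32 = (0..n).filter(|&i| i != class).map(|i| cm[i][class]).sum();
            let tn = total - tp - fn_ - fp;
            [[tp, fn_], [fp, tn]]
        })
        .collect()
}

fn main() {
    let cm = vec![
        vec![4.0, 1.0, 0.0],
        vec![1.0, 3.0, 1.0],
        vec![0.0, 1.0, 5.0],
    ];
    // Three binary matrices, one per class.
    for binary in split_one_vs_all(&cm) {
        println!("{:?}", binary);
    }
}
```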
pub fn split_one_vs_one(&self) -> Vec<ConfusionMatrix<bool>>
Split the confusion matrix into N*(N-1)/2 one-vs-one binary confusion matrices
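The one-vs-one construction keeps, for every unordered pair of classes, only the entries involving those two classes. A stdlib-only sketch of the idea (not the linfa implementation itself):

```rust
// One-vs-one split: every pair (i, j) with i < j yields the 2x2
// sub-matrix of entries involving only classes i and j, giving
// N*(N-1)/2 binary matrices in total. Stdlib-only illustration.
fn split_one_vs_one(cm: &[Vec<f32>]) -> Vec<[[f32; 2]; 2]> {
    let n = cm.len();
    let mut splits = Vec::new();
    for i in 0..n {
        for j in (i + 1)..n {
            splits.push([[cm[i][i], cm[i][j]], [cm[j][i], cm[j][j]]]);
        }
    }
    splits
}

fn main() {
    let cm = vec![
        vec![4.0, 1.0, 0.0],
        vec![1.0, 3.0, 1.0],
        vec![0.0, 1.0, 5.0],
    ];
    // 3 classes -> 3 * 2 / 2 = 3 pairwise matrices.
    println!("{}", split_one_vs_one(&cm).len()); // 3
}
```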
Trait Implementations
Auto Trait Implementations
impl<A> RefUnwindSafe for ConfusionMatrix<A> where
    A: RefUnwindSafe,
impl<A> Send for ConfusionMatrix<A> where
    A: Send,
impl<A> Sync for ConfusionMatrix<A> where
    A: Sync,
impl<A> Unpin for ConfusionMatrix<A>
impl<A> UnwindSafe for ConfusionMatrix<A> where
    A: RefUnwindSafe,