Struct rusoto_comprehend::ClassifierEvaluationMetrics

pub struct ClassifierEvaluationMetrics {
    pub accuracy: Option<f64>,
    pub f1_score: Option<f64>,
    pub hamming_loss: Option<f64>,
    pub micro_f1_score: Option<f64>,
    pub micro_precision: Option<f64>,
    pub micro_recall: Option<f64>,
    pub precision: Option<f64>,
    pub recall: Option<f64>,
}

Describes the result metrics for the test data associated with a document classifier.
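In practice this value arrives inside the classifier's metadata rather than being built by hand, but since every field is a public Option<f64> and Default is implemented (see the trait list below), a minimal local sketch of constructing and reading it looks like this (the numbers are placeholders):

use rusoto_comprehend::ClassifierEvaluationMetrics;

fn main() {
    // Every field is optional; `..Default::default()` fills the rest with None.
    let metrics = ClassifierEvaluationMetrics {
        precision: Some(0.92),
        recall: Some(0.88),
        f1_score: Some(0.90),
        ..Default::default()
    };

    // Fields are Option<f64>, so provide a fallback when reporting them.
    println!(
        "precision={:.2} recall={:.2} f1={:.2}",
        metrics.precision.unwrap_or(0.0),
        metrics.recall.unwrap_or(0.0),
        metrics.f1_score.unwrap_or(0.0),
    );
}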

Fields

accuracy: Option<f64>

The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.

f1_score: Option<f64>

A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
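As a plain-Rust sketch of the harmonic-mean relationship described above (ordinary f64 math, not part of the rusoto API):

fn f1_score(precision: f64, recall: f64) -> f64 {
    // Harmonic mean of the two scores; the best value is 1.0, the worst is 0.0.
    if precision + recall == 0.0 {
        0.0 // avoid dividing by zero when both scores are zero
    } else {
        2.0 * precision * recall / (precision + recall)
    }
}

fn main() {
    // e.g. precision 0.92 and recall 0.88 give an F1 score of roughly 0.90
    println!("{:.4}", f1_score(0.92, 0.88)); // prints 0.8996
}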

hamming_loss: Option<f64>

Indicates the fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.
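A rough sketch of the fraction being described, over hypothetical per-document label sets (this illustrates the definition only; it is not how the service computes the value):

use std::collections::HashSet;

// Count the label slots where the expected and predicted sets disagree,
// then divide by the total number of slots (documents x labels).
fn hamming_loss(expected: &[HashSet<&str>], predicted: &[HashSet<&str>], labels: &[&str]) -> f64 {
    let mut wrong = 0;
    for (exp, pred) in expected.iter().zip(predicted) {
        for label in labels {
            if exp.contains(label) != pred.contains(label) {
                wrong += 1;
            }
        }
    }
    wrong as f64 / (expected.len() * labels.len()) as f64
}

fn main() {
    let labels = ["sports", "politics", "tech"];
    let expected = [HashSet::from(["sports"]), HashSet::from(["tech", "politics"])];
    let predicted = [HashSet::from(["sports"]), HashSet::from(["tech"])];
    // One of the six label slots is wrong (politics was missed), so the loss is ~0.167.
    println!("{:.3}", hamming_loss(&expected, &predicted, &labels));
}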

micro_f1_score: Option<f64>

A measure of how accurate the classifier results are for the test data. It is a combination of the Micro Precision and Micro Recall values. The Micro F1Score is the harmonic mean of the two scores. The highest score is 1, and the worst score is 0.

micro_precision: Option<f64>

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones. Unlike the Precision metric, which averages the precision of all available labels, Micro Precision is based on the results of all labels pooled together.

micro_recall: Option<f64>

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results. Specifically, it indicates how many of the correct categories in the text the model can predict, expressed as the fraction of correct categories that were found. Instead of averaging the recall scores of all labels (as with Recall), Micro Recall is based on the results of all labels pooled together.
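To make the contrast with the per-label averages concrete, here is a sketch of micro-averaging under its usual definition, i.e. pooling every label's true/false positives and false negatives before computing a single precision and recall; this is an assumption about the computation, not code taken from the service:

// Per-label counts of true positives, false positives and false negatives.
struct LabelCounts {
    tp: u32,
    fp: u32,
    fn_: u32,
}

// Pool the counts of every label first, then compute one precision and one
// recall from the totals, so frequent labels weigh more than rare ones.
fn micro_precision_recall(counts: &[LabelCounts]) -> (f64, f64) {
    let tp: u32 = counts.iter().map(|c| c.tp).sum();
    let fp: u32 = counts.iter().map(|c| c.fp).sum();
    let fn_: u32 = counts.iter().map(|c| c.fn_).sum();
    (tp as f64 / (tp + fp) as f64, tp as f64 / (tp + fn_) as f64)
}

fn main() {
    let counts = [
        LabelCounts { tp: 90, fp: 10, fn_: 5 }, // a common label
        LabelCounts { tp: 1, fp: 4, fn_: 4 },   // a rare label
    ];
    let (p, r) = micro_precision_recall(&counts);
    println!("micro precision {:.3}, micro recall {:.3}", p, r);
}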

precision: Option<f64>

A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.

recall: Option<f64>

A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.
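For reference, the standard formulas these two descriptions correspond to, sketched over aggregate counts (an assumption of the usual definitions, not the service's internal code):

// Of everything the classifier returned, how much was actually relevant.
fn precision(true_positives: f64, false_positives: f64) -> f64 {
    true_positives / (true_positives + false_positives)
}

// Of everything that was relevant, how much the classifier actually returned.
fn recall(true_positives: f64, false_negatives: f64) -> f64 {
    true_positives / (true_positives + false_negatives)
}

fn main() {
    // 46 relevant results returned, 4 irrelevant ones returned, 6 relevant ones missed.
    println!("precision {:.2}, recall {:.2}", precision(46.0, 4.0), recall(46.0, 6.0));
}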

Trait Implementations

impl Clone for ClassifierEvaluationMetrics

impl Debug for ClassifierEvaluationMetrics

impl Default for ClassifierEvaluationMetrics

impl<'de> Deserialize<'de> for ClassifierEvaluationMetrics

impl PartialEq<ClassifierEvaluationMetrics> for ClassifierEvaluationMetrics

impl StructuralPartialEq for ClassifierEvaluationMetrics
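Because Deserialize is implemented, the struct can be decoded from the service's JSON wire format. A sketch assuming the PascalCase member names from the AWS API reference (Accuracy, F1Score, HammingLoss, and so on) and a serde_json dependency; the key names are not verified against this crate's serde attributes:

use rusoto_comprehend::ClassifierEvaluationMetrics;

fn main() -> Result<(), serde_json::Error> {
    // Hypothetical payload in the shape the Comprehend API documents.
    let payload = r#"{
        "Accuracy": 0.95,
        "F1Score": 0.90,
        "HammingLoss": 0.05,
        "Precision": 0.92,
        "Recall": 0.88
    }"#;

    let metrics: ClassifierEvaluationMetrics = serde_json::from_str(payload)?;
    println!("{:?}", metrics); // Debug is implemented, so the whole struct can be printed
    Ok(())
}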

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>

impl<T> From<T> for T

impl<T> Instrument for T

impl<T, U> Into<U> for T where
    U: From<T>

impl<T> Same<T> for T

type Output = T

Should always be Self

impl<T> ToOwned for T where
    T: Clone

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.