Struct gcp_client::google::cloud::automl::v1beta1::ModelEvaluation

pub struct ModelEvaluation {
    pub name: String,
    pub annotation_spec_id: String,
    pub display_name: String,
    pub create_time: Option<Timestamp>,
    pub evaluated_example_count: i32,
    pub metrics: Option<Metrics>,
}

Evaluation results of a model.

Fields

name: String

Output only. Resource name of the model evaluation. Format:

projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
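As a sketch, the resource name can be assembled with format! following the documented pattern (the project, location, model, and evaluation IDs below are placeholders, not real values):

    let name = format!(
        "projects/{}/locations/{}/models/{}/modelEvaluations/{}",
        "my-project", "us-central1", "my-model-id", "my-eval-id",
    );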

annotation_spec_id: String

Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. For Tables, annotation specs do not exist in the dataset and this ID is never set; for CLASSIFICATION [prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type], the [display_name][google.cloud.automl.v1beta1.ModelEvaluation.display_name] field is used instead.
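A minimal sketch of using that emptiness rule to tell the overall evaluation apart from a per-annotation-spec one (the helper name is hypothetical):

    fn is_overall(eval: &ModelEvaluation) -> bool {
        // Per the field docs, the overall model evaluation has an empty ID.
        eval.annotation_spec_id.is_empty()
    }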

display_name: String

Output only. The value of [display_name][google.cloud.automl.v1beta1.AnnotationSpec.display_name] at the moment the model was trained. Because this field captures the value at training time, models trained from the same dataset may carry different values here, since display names could have been changed between the two trainings. For Tables CLASSIFICATION [prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type], the distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.

create_time: Option<Timestamp>

Output only. Timestamp when this model evaluation was created.
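Assuming Timestamp is the prost well-known type with seconds: i64 and nanos: i32 fields, a hedged sketch of converting it for display (this version ignores pre-epoch timestamps):

    use std::time::{Duration, SystemTime, UNIX_EPOCH};

    fn create_time_as_system_time(eval: &ModelEvaluation) -> Option<SystemTime> {
        let ts = eval.create_time.as_ref()?;
        // Non-negative seconds/nanos assumed; a robust version would also
        // handle timestamps before the Unix epoch.
        Some(UNIX_EPOCH + Duration::new(ts.seconds as u64, ts.nanos as u32))
    }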

evaluated_example_count: i32

Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model. For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the [annotation_spec_id][google.cloud.automl.v1beta1.ModelEvaluation.annotation_spec_id].
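Putting the two rules together, a sketch of reporting what the count covers, again keying off the emptiness of annotation_spec_id (the helper name is hypothetical):

    fn describe_count(eval: &ModelEvaluation) -> String {
        if eval.annotation_spec_id.is_empty() {
            // Overall evaluation: the total across all evaluated examples.
            format!("{} examples evaluated overall", eval.evaluated_example_count)
        } else {
            // Per-spec evaluation: only examples whose ground truth carries
            // this annotation spec.
            format!(
                "{} examples for annotation spec {}",
                eval.evaluated_example_count, eval.annotation_spec_id,
            )
        }
    }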

metrics: Option<Metrics>

Output only. Problem type specific evaluation metrics.
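Because Metrics is a protobuf oneof, consumers typically match on its variant. The variant and field names below assume the usual prost codegen for the v1beta1 proto (e.g. au_prc on the classification metrics); verify them against the generated model_evaluation module:

    use gcp_client::google::cloud::automl::v1beta1::model_evaluation::Metrics;

    fn summarize(eval: &ModelEvaluation) {
        match &eval.metrics {
            // Assumed variant names; check the generated enum for the full set.
            Some(Metrics::ClassificationEvaluationMetrics(m)) => {
                println!("classification AU-PRC: {}", m.au_prc)
            }
            Some(Metrics::RegressionEvaluationMetrics(m)) => {
                println!("regression RMSE: {}", m.root_mean_squared_error)
            }
            Some(_) => println!("other problem type"),
            None => println!("no metrics populated"),
        }
    }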

Trait Implementations

impl Clone for ModelEvaluation

impl Debug for ModelEvaluation

impl Default for ModelEvaluation

impl Message for ModelEvaluation

impl PartialEq<ModelEvaluation> for ModelEvaluation

impl StructuralPartialEq for ModelEvaluation

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized,

impl<T> Borrow<T> for T where
    T: ?Sized,

impl<T> BorrowMut<T> for T where
    T: ?Sized,

impl<T> From<T> for T

impl<T> Instrument for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T> IntoRequest<T> for T

impl<T> ToOwned for T where
    T: Clone,

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,

impl<T> WithSubscriber for T