Struct gcp_client::google::cloud::datalabeling::v1beta1::EvaluationJobConfig

pub struct EvaluationJobConfig {
    pub input_config: Option<InputConfig>,
    pub evaluation_config: Option<EvaluationConfig>,
    pub human_annotation_config: Option<HumanAnnotationConfig>,
    pub bigquery_import_keys: HashMap<String, String>,
    pub example_count: i32,
    pub example_sample_percentage: f64,
    pub evaluation_job_alert_config: Option<EvaluationJobAlertConfig>,
    pub human_annotation_request_config: Option<HumanAnnotationRequestConfig>,
}

Configures specific details of how a continuous evaluation job works. Provide this configuration when you create an EvaluationJob.
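
At a glance, constructing this message might look like the following sketch. All values are placeholders; every field is documented individually below, and the nested configs and the BigQuery key map are left at their defaults here and filled in per field later on this page.

use gcp_client::google::cloud::datalabeling::v1beta1::EvaluationJobConfig;

// Skeleton only: placeholder sampling values, everything else defaulted.
// See the per-field sketches below for the nested configs.
let config = EvaluationJobConfig {
    example_count: 1_000,
    example_sample_percentage: 0.1,
    ..Default::default()
};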

Fields

input_config: Option<InputConfig>

Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields (see the sketch after this list):

  • dataType must be one of IMAGE, TEXT, or GENERAL_DATA.
  • annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection).
  • If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel.
  • You must specify bigquerySource (not gcsSource).
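
A hedged sketch of an InputConfig meeting these requirements for an image classification model. The enum and oneof names below follow standard prost code-generation conventions for this proto and are assumptions to check against the crate; the BigQuery URI is a placeholder.

use gcp_client::google::cloud::datalabeling::v1beta1::{
    input_config, AnnotationType, BigQuerySource, ClassificationMetadata, DataType, InputConfig,
};

// Image classification over BigQuery-sourced predictions.
let input_config = InputConfig {
    data_type: DataType::Image as i32,
    annotation_type: AnnotationType::ImageClassificationAnnotation as i32,
    // Required for classification models.
    classification_metadata: Some(ClassificationMetadata { is_multi_label: false }),
    // bigquerySource is required here; gcsSource is not allowed.
    source: Some(input_config::Source::BigquerySource(BigQuerySource {
        input_uri: "bq://project-id.dataset-name.table-name".to_string(),
    })),
    ..Default::default()
};
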
evaluation_config: Option<EvaluationConfig>

Required. Details for calculating evaluation metrics and creating [Evaluations][google.cloud.datalabeling.v1beta1.Evaluation]. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
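
For instance (a sketch; the oneof naming follows prost conventions and the IoU threshold is a placeholder):

use gcp_client::google::cloud::datalabeling::v1beta1::{
    evaluation_config, BoundingBoxEvaluationOptions, EvaluationConfig,
};

// Classification model versions: an empty config suffices.
let for_classification = EvaluationConfig::default();

// Image object detection: bounding-box evaluation options are required.
let for_object_detection = EvaluationConfig {
    vertical_option: Some(
        evaluation_config::VerticalOption::BoundingBoxEvaluationOptions(
            BoundingBoxEvaluationOptions { iou_threshold: 0.5 },
        ),
    ),
};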

human_annotation_config: Option<HumanAnnotationConfig>

Optional. Details for human annotation of your data. If you set [labelMissingGroundTruth][google.cloud.datalabeling.v1beta1.EvaluationJob.label_missing_ground_truth] to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field.

Note that you must create an [Instruction][google.cloud.datalabeling.v1beta1.Instruction] resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
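
A sketch under those constraints, with a hypothetical Instruction resource name; the annotated dataset display name is likewise an assumed field, and everything else stays at its default.

use gcp_client::google::cloud::datalabeling::v1beta1::HumanAnnotationConfig;

// `instruction` names an Instruction resource created beforehand.
let human_annotation_config = HumanAnnotationConfig {
    instruction: "projects/my-project/instructions/my-instruction".to_string(),
    annotated_dataset_display_name: "continuous-eval-ground-truth".to_string(),
    ..Default::default()
};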

bigquery_import_keys: HashMap<String, String>

Required. Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON.

You can provide the following entries in this field (a sketch follows this list):

  • data_json_key: the data key for prediction input. You must provide either this key or reference_json_key.
  • reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key.
  • label_json_key: the label key for prediction output. Required.
  • label_score_json_key: the score key for prediction output. Required.
  • bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection.

Learn how to configure prediction keys.
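
Put together, the map might be populated like this sketch: the map keys come from the list above, while the values are hypothetical JSON keys matching however your sampled rows are serialized.

use std::collections::HashMap;

let mut bigquery_import_keys = HashMap::new();
bigquery_import_keys.insert("data_json_key".to_string(), "image_url".to_string());
bigquery_import_keys.insert("label_json_key".to_string(), "predicted_label".to_string());
bigquery_import_keys.insert("label_score_json_key".to_string(), "predicted_score".to_string());
// Only for image object detection models:
bigquery_import_keys.insert("bounding_box_json_key".to_string(), "predicted_box".to_string());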

example_count: i32

Required. The maximum number of predictions to sample and save to BigQuery during each [evaluation interval][google.cloud.datalabeling.v1beta1.EvaluationJob.schedule]. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.

example_sample_percentage: f64

Required. Fraction of predictions to sample and save to BigQuery during each [evaluation interval][google.cloud.datalabeling.v1beta1.EvaluationJob.schedule]. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
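
A back-of-the-envelope illustration of how the two sampling fields interact, using the semantics described above:

// Suppose 50,000 predictions are served during one evaluation interval.
// A 10% sample would be 5,000 rows, but sampling stops at example_count.
let example_count: i32 = 1_000;
let example_sample_percentage: f64 = 0.1;

let served = 50_000_f64;
let saved = (served * example_sample_percentage).min(example_count as f64);
assert_eq!(saved as i64, 1_000);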

evaluation_job_alert_config: Option<EvaluationJobAlertConfig>

Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
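
For example (a sketch; the field names are assumed from the v1beta1 proto, and the threshold and address are placeholders):

use gcp_client::google::cloud::datalabeling::v1beta1::EvaluationJobAlertConfig;

// An email goes out when a run's mean average precision drops below the
// minimum acceptable value.
let alert_config = EvaluationJobAlertConfig {
    email: "mlops-alerts@example.com".to_string(),
    min_acceptable_mean_average_precision: 0.75,
};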

human_annotation_request_config: Option<HumanAnnotationRequestConfig>

Required. Details for how you want human reviewers to provide ground truth labels.
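
In the underlying proto this field is a oneof over per-task request configs; the naming in this sketch follows prost conventions for such oneofs and should be treated as an assumption.

use gcp_client::google::cloud::datalabeling::v1beta1::{
    evaluation_job_config, ImageClassificationConfig,
};

// One arm of the oneof: human review via image classification questions.
let human_annotation_request_config = Some(
    evaluation_job_config::HumanAnnotationRequestConfig::ImageClassificationConfig(
        ImageClassificationConfig::default(),
    ),
);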

Trait Implementations

impl Clone for EvaluationJobConfig

impl Debug for EvaluationJobConfig

impl Default for EvaluationJobConfig

impl Message for EvaluationJobConfig

impl PartialEq<EvaluationJobConfig> for EvaluationJobConfig

impl StructuralPartialEq for EvaluationJobConfig

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> From<T> for T

impl<T> Instrument for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T> IntoRequest<T> for T

impl<T> ToOwned for T where
    T: Clone

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,

impl<T> WithSubscriber for T