Struct rusoto_machinelearning::Evaluation

pub struct Evaluation {
    pub compute_time: Option<i64>,
    pub created_at: Option<f64>,
    pub created_by_iam_user: Option<String>,
    pub evaluation_data_source_id: Option<String>,
    pub evaluation_id: Option<String>,
    pub finished_at: Option<f64>,
    pub input_data_location_s3: Option<String>,
    pub last_updated_at: Option<f64>,
    pub ml_model_id: Option<String>,
    pub message: Option<String>,
    pub name: Option<String>,
    pub performance_metrics: Option<PerformanceMetrics>,
    pub started_at: Option<f64>,
    pub status: Option<String>,
}

Represents the output of the GetEvaluation operation.

The content consists of the detailed metadata, the data file information, and the current status of the Evaluation.
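Since every field is optional and the struct implements Default (see the trait implementations below), a value can be constructed or mocked directly. The sketch below is illustrative only; the IDs are placeholders, not real resources.

```rust
use rusoto_machinelearning::Evaluation;

fn main() {
    // Every field is an Option, so a blank value can be built with Default
    // and individual fields filled in as needed (values here are made up).
    let evaluation = Evaluation {
        evaluation_id: Some("ev-example".to_string()),
        ml_model_id: Some("ml-example".to_string()),
        status: Some("COMPLETED".to_string()),
        ..Default::default()
    };

    // Reading back an optional field.
    if let Some(status) = &evaluation.status {
        println!("evaluation status: {}", status);
    }
}
```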

Fields

created_at: The time that the Evaluation was created. The time is expressed in epoch time.

created_by_iam_user: The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

evaluation_data_source_id: The ID of the DataSource that is used to evaluate the MLModel.

evaluation_id: The ID that is assigned to the Evaluation at creation.

input_data_location_s3: The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.

last_updated_at: The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

ml_model_id: The ID of the MLModel that is the focus of the evaluation.

message: A description of the most recent details about evaluating the MLModel.

name: A user-supplied name or description of the Evaluation.
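The epoch timestamps above (created_at, last_updated_at, and the finished_at and started_at fields) are plain f64 values. A minimal conversion sketch, assuming they hold seconds since the Unix epoch:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Assumption: the f64 timestamps are seconds since the Unix epoch
// (possibly with a fractional part), as is typical for AWS epoch times.
fn to_system_time(epoch_seconds: Option<f64>) -> Option<SystemTime> {
    epoch_seconds.map(|s| UNIX_EPOCH + Duration::from_secs_f64(s))
}
```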

performance_metrics: Measurements of how well the MLModel performed, using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

  • BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.

  • RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.

  • MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.
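A minimal sketch for reading one of these metrics out of an Evaluation. It assumes PerformanceMetrics exposes its values as an optional map of metric names to string values (a `properties` field); the `metric_value` helper is hypothetical, not part of the crate.

```rust
use std::collections::HashMap;

use rusoto_machinelearning::Evaluation;

// Hypothetical helper: pull a single metric (e.g. "BinaryAUC") out of an
// Evaluation, assuming PerformanceMetrics stores its values in a
// `properties: Option<HashMap<String, String>>` map keyed by metric name.
fn metric_value(evaluation: &Evaluation, metric: &str) -> Option<f64> {
    evaluation
        .performance_metrics
        .as_ref()
        .and_then(|pm| pm.properties.as_ref())
        .and_then(|props: &HashMap<String, String>| props.get(metric))
        .and_then(|v| v.parse::<f64>().ok())
}
```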

status: The status of the evaluation. This element can have one of the following values (a short handling sketch follows the list):

  • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
  • INPROGRESS - The evaluation is underway.
  • FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
  • COMPLETED - The evaluation process completed successfully.
  • DELETED - The Evaluation is marked as deleted. It is not usable.
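A minimal sketch for acting on the status strings listed above; the `is_usable` helper is hypothetical and simply treats COMPLETED as the only state in which results can be read.

```rust
use rusoto_machinelearning::Evaluation;

// Hypothetical helper: report whether an Evaluation's results can be read.
// The status strings are the documented values; anything else is treated
// as not yet usable.
fn is_usable(evaluation: &Evaluation) -> bool {
    match evaluation.status.as_deref() {
        Some("COMPLETED") => true,
        Some("PENDING") | Some("INPROGRESS") | Some("FAILED") | Some("DELETED") => false,
        _ => false,
    }
}
```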

Trait Implementations

impl Default for Evaluation

  fn default() -> Evaluation
  Returns the "default value" for a type.

impl Debug for Evaluation

  fn fmt(&self, f: &mut Formatter) -> Result
  Formats the value using the given formatter.

impl Clone for Evaluation

  fn clone(&self) -> Evaluation
  Returns a copy of the value.

  fn clone_from(&mut self, source: &Self)
  Performs copy-assignment from source.