Struct rusoto_machinelearning::Evaluation
pub struct Evaluation {
    pub compute_time: Option<i64>,
    pub created_at: Option<f64>,
    pub created_by_iam_user: Option<String>,
    pub evaluation_data_source_id: Option<String>,
    pub evaluation_id: Option<String>,
    pub finished_at: Option<f64>,
    pub input_data_location_s3: Option<String>,
    pub last_updated_at: Option<f64>,
    pub ml_model_id: Option<String>,
    pub message: Option<String>,
    pub name: Option<String>,
    pub performance_metrics: Option<PerformanceMetrics>,
    pub started_at: Option<f64>,
    pub status: Option<String>,
}
Represents the output of the GetEvaluation operation.
The content consists of the detailed metadata and data file information, and the current status of the Evaluation.
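Because Evaluation implements Default (see the trait implementations below), callers can construct one with only the fields of interest filled in, using struct-update syntax. The sketch below uses a local stand-in struct that mirrors just a few of the fields, since the real type lives in the rusoto_machinelearning crate; field names match the definition above.

```rust
// Local stand-in mirroring a few fields of
// rusoto_machinelearning::Evaluation (the real struct has many more
// fields; this is only an illustration).
#[derive(Debug, Default, Clone)]
pub struct Evaluation {
    pub evaluation_id: Option<String>,
    pub ml_model_id: Option<String>,
    pub status: Option<String>,
}

fn main() {
    // Struct-update syntax: set two fields, default the rest to None.
    let eval = Evaluation {
        evaluation_id: Some("ev-example".to_string()),
        status: Some("PENDING".to_string()),
        ..Default::default()
    };
    println!(
        "{} -> {}",
        eval.evaluation_id.as_deref().unwrap_or("unknown"),
        eval.status.as_deref().unwrap_or("unknown"),
    );
}
```

Every field is an Option, so reading a value typically goes through `as_deref()` / `unwrap_or` rather than direct access.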
Fields
compute_time: Option<i64>
created_at: Option<f64>
The time that the Evaluation was created. The time is expressed in epoch time.
created_by_iam_user: Option<String>
The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
evaluation_data_source_id: Option<String>
The ID of the DataSource that is used to evaluate the MLModel.
evaluation_id: Option<String>
The ID that is assigned to the Evaluation at creation.
finished_at: Option<f64>
input_data_location_s3: Option<String>
The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.
last_updated_at: Option<f64>
The time of the most recent edit to the Evaluation. The time is expressed in epoch time.
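The timestamp fields (created_at, finished_at, last_updated_at, started_at) all arrive as Option&lt;f64&gt; seconds since the Unix epoch. A small helper (hypothetical name, standard library only) converts one into a std::time::SystemTime:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Convert an Option<f64> epoch-seconds timestamp, as returned in
// fields like created_at, into a SystemTime. Fractional seconds are
// preserved by Duration::from_secs_f64.
fn epoch_to_system_time(ts: Option<f64>) -> Option<SystemTime> {
    ts.map(|secs| UNIX_EPOCH + Duration::from_secs_f64(secs))
}

fn main() {
    let created_at = Some(1_500_000_000.5_f64);
    let t = epoch_to_system_time(created_at).unwrap();
    let round_trip = t.duration_since(UNIX_EPOCH).unwrap().as_secs_f64();
    println!("round-trip seconds: {round_trip}");
}
```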
ml_model_id: Option<String>
The ID of the MLModel that is the focus of the evaluation.
message: Option<String>
A description of the most recent details about evaluating the MLModel.
name: Option<String>
A user-supplied name or description of the Evaluation.
performance_metrics: Option<PerformanceMetrics>
Measurements of how well the MLModel performed, using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

- BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
- RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
- MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.
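The metric values are serialized as strings inside the PerformanceMetrics structure. Assuming the metrics are exposed as a string-to-string map (as in the underlying API's Properties map), a hypothetical helper to read one metric might look like:

```rust
use std::collections::HashMap;

// Sketch: pull the BinaryAUC metric out of a string-to-string
// properties map and parse it. This assumes the metrics arrive as a
// HashMap<String, String>; a missing key or unparseable value yields
// None.
fn binary_auc(properties: &HashMap<String, String>) -> Option<f64> {
    properties.get("BinaryAUC")?.parse().ok()
}

fn main() {
    let mut props = HashMap::new();
    props.insert("BinaryAUC".to_string(), "0.937".to_string());
    match binary_auc(&props) {
        Some(auc) => println!("AUC = {auc}"),
        None => println!("no BinaryAUC metric"),
    }
}
```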
started_at: Option<f64>
status: Option<String>
The status of the evaluation. This element can have one of the following values:

- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
- INPROGRESS - The evaluation is underway.
- FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
- COMPLETED - The evaluation process completed successfully.
- DELETED - The Evaluation is marked as deleted. It is not usable.
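Since FAILED, COMPLETED, and DELETED are the states after which the evaluation no longer progresses, a caller polling the status might classify them with a small helper (hypothetical function, not part of the crate):

```rust
// Classify the documented status strings: FAILED, COMPLETED, and
// DELETED are terminal; PENDING and INPROGRESS are still in flight.
// A missing status is conservatively treated as not terminal.
fn is_terminal(status: Option<&str>) -> bool {
    matches!(status, Some("FAILED") | Some("COMPLETED") | Some("DELETED"))
}

fn main() {
    assert!(!is_terminal(Some("PENDING")));
    assert!(!is_terminal(Some("INPROGRESS")));
    assert!(is_terminal(Some("COMPLETED")));
    assert!(!is_terminal(None));
    println!("status checks passed");
}
```

With the real struct, this would be called as `is_terminal(evaluation.status.as_deref())`.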
Trait Implementations
impl Default for Evaluation
    fn default() -> Evaluation
    Returns the "default value" for a type.

impl Debug for Evaluation

impl Clone for Evaluation
    fn clone(&self) -> Evaluation
    Returns a copy of the value.
    fn clone_from(&mut self, source: &Self)
    Performs copy-assignment from source.