#[non_exhaustive]
pub struct GetEvaluationOutput {
    pub evaluation_id: Option<String>,
    pub ml_model_id: Option<String>,
    pub evaluation_data_source_id: Option<String>,
    pub input_data_location_s3: Option<String>,
    pub created_by_iam_user: Option<String>,
    pub created_at: Option<DateTime>,
    pub last_updated_at: Option<DateTime>,
    pub name: Option<String>,
    pub status: Option<EntityStatus>,
    pub performance_metrics: Option<PerformanceMetrics>,
    pub log_uri: Option<String>,
    pub message: Option<String>,
    pub compute_time: Option<i64>,
    pub finished_at: Option<DateTime>,
    pub started_at: Option<DateTime>,
    /* private fields */
}
Represents the output of a GetEvaluation operation and describes an Evaluation.
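A minimal usage sketch follows, assuming the standard aws-sdk-rust fluent client pattern (Client::get_evaluation) and an async runtime such as Tokio; the evaluation ID passed in is hypothetical, and module paths follow recent SDK versions.

use aws_sdk_machinelearning::Client;

async fn show_evaluation(client: &Client, evaluation_id: &str) -> Result<(), aws_sdk_machinelearning::Error> {
    // Call GetEvaluation and receive a GetEvaluationOutput.
    let output = client
        .get_evaluation()
        .evaluation_id(evaluation_id)
        .send()
        .await?;

    // Every field is Option-wrapped, so handle None explicitly.
    println!("evaluation: {:?}", output.evaluation_id());
    println!("model:      {:?}", output.ml_model_id());
    println!("status:     {:?}", output.status());
    Ok(())
}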
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work (see the sketch after the field list).
evaluation_id: Option<String>
The evaluation ID which is the same as the EvaluationId in the request.
ml_model_id: Option<String>
The ID of the MLModel that was the focus of the evaluation.
evaluation_data_source_id: Option<String>
The DataSource used for this evaluation.
input_data_location_s3: Option<String>
The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).
created_by_iam_user: Option<String>
The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
created_at: Option<DateTime>
The time that the Evaluation was created. The time is expressed in epoch time.
last_updated_at: Option<DateTime>
The time of the most recent edit to the Evaluation. The time is expressed in epoch time.
name: Option<String>
A user-supplied name or description of the Evaluation.
status: Option<EntityStatus>
The status of the evaluation. This element can have one of the following values:
- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
- INPROGRESS - The evaluation is underway.
- FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
- COMPLETED - The evaluation process completed successfully.
- DELETED - The Evaluation is marked as deleted. It is not usable.
performance_metrics: Option<PerformanceMetrics>
Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned based on the type of the MLModel:
- BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
- RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
- MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.
For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.
log_uri: Option<String>
A link to the file that contains logs of the CreateEvaluation operation.
message: Option<String>
A description of the most recent details about evaluating the MLModel.
compute_time: Option<i64>
The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.
finished_at: Option<DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.
started_at: Option<DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn't available if the Evaluation is in the PENDING state.
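Because the struct is non-exhaustive, any pattern that destructures it must end with a .. wildcard. A brief sketch, with the module path shown as in recent SDK versions (older versions expose the type under an output module):

use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;

fn summarize(output: &GetEvaluationOutput) -> String {
    // The trailing `..` is mandatory: the struct is #[non_exhaustive] and also
    // carries private fields, so new fields can appear without a breaking change.
    let GetEvaluationOutput { name, status, .. } = output;
    format!("{:?} ({:?})", name, status)
}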
Implementations
impl GetEvaluationOutput
pub fn evaluation_id(&self) -> Option<&str>
The evaluation ID which is the same as the EvaluationId in the request.
pub fn ml_model_id(&self) -> Option<&str>
The ID of the MLModel that was the focus of the evaluation.
pub fn evaluation_data_source_id(&self) -> Option<&str>
The DataSource used for this evaluation.
pub fn input_data_location_s3(&self) -> Option<&str>
The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).
pub fn created_by_iam_user(&self) -> Option<&str>
The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
pub fn created_at(&self) -> Option<&DateTime>
The time that the Evaluation was created. The time is expressed in epoch time.
pub fn last_updated_at(&self) -> Option<&DateTime>
The time of the most recent edit to the Evaluation. The time is expressed in epoch time.
pub fn status(&self) -> Option<&EntityStatus>
The status of the evaluation. This element can have one of the following values:
- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
- INPROGRESS - The evaluation is underway.
- FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
- COMPLETED - The evaluation process completed successfully.
- DELETED - The Evaluation is marked as deleted. It is not usable.
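For example, the status can be matched to decide whether the evaluation has reached a terminal state. The sketch below assumes the usual aws-sdk-rust enum naming for the values listed above; the module path may differ between SDK versions.

use aws_sdk_machinelearning::types::EntityStatus;

// COMPLETED, FAILED, and DELETED are terminal; PENDING and INPROGRESS are not.
fn is_terminal(status: &EntityStatus) -> bool {
    matches!(
        status,
        EntityStatus::Completed | EntityStatus::Failed | EntityStatus::Deleted
    )
}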
pub fn performance_metrics(&self) -> Option<&PerformanceMetrics>
Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned based on the type of the MLModel:
- BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
- RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
- MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.
For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.
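A sketch of reading whichever metric was returned, assuming PerformanceMetrics exposes the metrics as a string-to-string map through a properties() accessor, mirroring the underlying Amazon ML API; module path as in recent SDK versions.

use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;

fn print_metrics(output: &GetEvaluationOutput) {
    // Only one of BinaryAUC, RegressionRMSE, or MulticlassAvgFScore is expected,
    // keyed by name in the properties map.
    if let Some(properties) = output.performance_metrics().and_then(|m| m.properties()) {
        for (name, value) in properties {
            println!("{name} = {value}");
        }
    }
}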
pub fn log_uri(&self) -> Option<&str>
A link to the file that contains logs of the CreateEvaluation operation.
pub fn message(&self) -> Option<&str>
A description of the most recent details about evaluating the MLModel.
pub fn compute_time(&self) -> Option<i64>
The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.
pub fn finished_at(&self) -> Option<&DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.
pub fn started_at(&self) -> Option<&DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn't available if the Evaluation is in the PENDING state.
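The DateTime values are epoch-based timestamps. Assuming the aws_smithy_types::DateTime type with its secs() accessor, the wall-clock duration of a finished evaluation can be derived as follows (module path for the output type as in recent SDK versions):

use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;

fn evaluation_duration_secs(output: &GetEvaluationOutput) -> Option<i64> {
    // Both timestamps are only present once the evaluation has started and finished.
    let started = output.started_at()?.secs();
    let finished = output.finished_at()?.secs();
    Some(finished - started)
}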
impl GetEvaluationOutput
pub fn builder() -> GetEvaluationOutputBuilder
Creates a new builder-style object to manufacture GetEvaluationOutput.
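A sketch of building the output by hand, which can be convenient in unit tests. It assumes builder setters named after the fields and that build() returns the struct directly because every field is optional; the IDs shown are hypothetical and module paths follow recent SDK versions.

use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;
use aws_sdk_machinelearning::types::EntityStatus;

fn fake_completed_evaluation() -> GetEvaluationOutput {
    GetEvaluationOutput::builder()
        .evaluation_id("ev-example")   // hypothetical evaluation ID
        .ml_model_id("ml-example")     // hypothetical model ID
        .status(EntityStatus::Completed)
        .build()
}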
Trait Implementations
impl Clone for GetEvaluationOutput
fn clone(&self) -> GetEvaluationOutput
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for GetEvaluationOutput
impl PartialEq for GetEvaluationOutput
fn eq(&self, other: &GetEvaluationOutput) -> bool
Tests for self and other values to be equal, and is used by ==.
impl RequestId for GetEvaluationOutput
fn request_id(&self) -> Option<&str>
Returns the request ID, or None if the service could not be reached.
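The request ID is useful for correlating logs or opening AWS Support cases. A sketch follows; the RequestId trait's home crate has varied across SDK versions (aws-types or aws-http), so the import shown is an assumption.

use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;
use aws_types::request_id::RequestId; // trait location varies by SDK version

fn log_request_id(output: &GetEvaluationOutput) {
    match output.request_id() {
        Some(id) => println!("GetEvaluation request ID: {id}"),
        None => println!("no request ID available (service not reached)"),
    }
}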