#[non_exhaustive]
pub struct GetEvaluationOutput {
pub evaluation_id: Option<String>,
pub ml_model_id: Option<String>,
pub evaluation_data_source_id: Option<String>,
pub input_data_location_s3: Option<String>,
pub created_by_iam_user: Option<String>,
pub created_at: Option<DateTime>,
pub last_updated_at: Option<DateTime>,
pub name: Option<String>,
pub status: Option<EntityStatus>,
pub performance_metrics: Option<PerformanceMetrics>,
pub log_uri: Option<String>,
pub message: Option<String>,
pub compute_time: Option<i64>,
pub finished_at: Option<DateTime>,
pub started_at: Option<DateTime>,
/* private fields */
}
Represents the output of a GetEvaluation operation and describes an Evaluation.
Fields (Non-exhaustive)§
This struct is marked as non-exhaustive. Non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
evaluation_id: Option<String>
The evaluation ID which is the same as the EvaluationId in the request.
ml_model_id: Option<String>
The ID of the MLModel that was the focus of the evaluation.
evaluation_data_source_id: Option<String>
The DataSource used for this evaluation.
input_data_location_s3: Option<String>
The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).
created_by_iam_user: Option<String>
The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
created_at: Option<DateTime>
The time that the Evaluation was created. The time is expressed in epoch time.
last_updated_at: Option<DateTime>
The time of the most recent edit to the Evaluation. The time is expressed in epoch time.
name: Option<String>
A user-supplied name or description of the Evaluation.
status: Option<EntityStatus>
The status of the evaluation. This element can have one of the following values:
- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
- INPROGRESS - The evaluation is underway.
- FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
- COMPLETED - The evaluation process completed successfully.
- DELETED - The Evaluation is marked as deleted. It is not usable.
performance_metrics: Option<PerformanceMetrics>
Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:
- BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
- RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
- MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.
For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.
log_uri: Option<String>
A link to the file that contains logs of the CreateEvaluation operation.
message: Option<String>
A description of the most recent details about evaluating the MLModel.
compute_time: Option<i64>
The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.
finished_at: Option<DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.
started_at: Option<DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn't available if the Evaluation is in the PENDING state.
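The accessor methods below expose the same fields. As a minimal sketch of reading them after a GetEvaluation call, assuming the usual aws-sdk-rust fluent client (the get_evaluation() builder and its evaluation_id() setter are assumptions here, and module paths vary across SDK releases):
use aws_sdk_machinelearning::Client;

// Every field on the output is optional, so read defensively through the accessors.
async fn show_evaluation(client: &Client, id: &str) -> Result<(), Box<dyn std::error::Error>> {
    let output = client.get_evaluation().evaluation_id(id).send().await?;
    println!("evaluation: {}", output.evaluation_id().unwrap_or("<unknown>"));
    println!("status:     {:?}", output.status());
    if let Some(metrics) = output.performance_metrics() {
        println!("metrics:    {metrics:?}");
    }
    Ok(())
}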
Implementations§
impl GetEvaluationOutput
pub fn evaluation_id(&self) -> Option<&str>
The evaluation ID which is the same as the EvaluationId in the request.
pub fn ml_model_id(&self) -> Option<&str>
The ID of the MLModel that was the focus of the evaluation.
pub fn evaluation_data_source_id(&self) -> Option<&str>
The DataSource used for this evaluation.
pub fn input_data_location_s3(&self) -> Option<&str>
The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).
pub fn created_by_iam_user(&self) -> Option<&str>
The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
pub fn created_at(&self) -> Option<&DateTime>
The time that the Evaluation was created. The time is expressed in epoch time.
pub fn last_updated_at(&self) -> Option<&DateTime>
The time of the most recent edit to the Evaluation. The time is expressed in epoch time.
pub fn status(&self) -> Option<&EntityStatus>
The status of the evaluation. This element can have one of the following values:
- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
- INPROGRESS - The evaluation is underway.
- FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
- COMPLETED - The evaluation process completed successfully.
- DELETED - The Evaluation is marked as deleted. It is not usable.
pub fn performance_metrics(&self) -> Option<&PerformanceMetrics>
Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:
- BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
- RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
- MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.
For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.
pub fn log_uri(&self) -> Option<&str>
A link to the file that contains logs of the CreateEvaluation operation.
pub fn message(&self) -> Option<&str>
A description of the most recent details about evaluating the MLModel.
pub fn compute_time(&self) -> Option<i64>
The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.
pub fn finished_at(&self) -> Option<&DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.
pub fn started_at(&self) -> Option<&DateTime>
The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn't available if the Evaluation is in the PENDING state.
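Because several fields are only populated in particular states, callers typically branch on status() before reading them. A sketch, assuming EntityStatus exposes variants matching the PENDING/INPROGRESS/FAILED/COMPLETED/DELETED values above (variant names and module paths are assumptions that vary by SDK release):
use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;
use aws_sdk_machinelearning::types::EntityStatus;

// Summarize an evaluation, reading state-dependent fields only when documented as available.
fn summarize(output: &GetEvaluationOutput) {
    match output.status() {
        Some(EntityStatus::Completed) => {
            // ComputeTime is documented as available only in the COMPLETED state.
            println!("completed in ~{} ms of compute time", output.compute_time().unwrap_or_default());
        }
        Some(EntityStatus::Failed) => {
            println!("failed: {}", output.message().unwrap_or("no message"));
        }
        Some(other) => println!("evaluation is {other:?}"),
        None => println!("no status returned"),
    }
}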
impl GetEvaluationOutput
pub fn builder() -> GetEvaluationOutputBuilder
Creates a new builder-style object to manufacture GetEvaluationOutput.
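Because the struct is non-exhaustive, it cannot be built with literal syntax outside the crate; the builder is the way to construct one, for example when stubbing responses in tests. A minimal sketch, assuming the builder's setters mirror the field names and that build() returns the output directly (all fields are optional); exact module paths vary by SDK release:
use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;
use aws_sdk_machinelearning::types::EntityStatus;

// A stubbed output for unit tests; values are illustrative.
fn stub_completed_evaluation() -> GetEvaluationOutput {
    GetEvaluationOutput::builder()
        .evaluation_id("ev-example")
        .ml_model_id("ml-example")
        .name("nightly regression check")
        .status(EntityStatus::Completed)
        .build()
}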
Trait Implementations§
impl Clone for GetEvaluationOutput
fn clone(&self) -> GetEvaluationOutput
1.0.0 · const fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for GetEvaluationOutput
impl PartialEq for GetEvaluationOutput
impl RequestId for GetEvaluationOutput
fn request_id(&self) -> Option<&str>
Returns the request ID, or None if the service could not be reached.
impl StructuralPartialEq for GetEvaluationOutput
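The RequestId implementation is handy for correlating a response with server-side logs or support cases. A sketch, assuming the trait is re-exported at aws_sdk_machinelearning::operation::RequestId (that path is an assumption and differs across releases):
use aws_sdk_machinelearning::operation::RequestId;
use aws_sdk_machinelearning::operation::get_evaluation::GetEvaluationOutput;

// Log the request ID attached to the response, if the service was reached.
fn log_request_id(output: &GetEvaluationOutput) {
    match output.request_id() {
        Some(id) => eprintln!("GetEvaluation request ID: {id}"),
        None => eprintln!("GetEvaluation: no request ID (service not reached)"),
    }
}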
Auto Trait Implementations§
impl Freeze for GetEvaluationOutput
impl RefUnwindSafe for GetEvaluationOutput
impl Send for GetEvaluationOutput
impl Sync for GetEvaluationOutput
impl Unpin for GetEvaluationOutput
impl UnwindSafe for GetEvaluationOutput
Blanket Implementations§
impl<T> BorrowMut<T> for T
where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where T: Clone
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.
impl<T> Paint for T
where T: ?Sized
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
§Example
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white():
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
§Example
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red():
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
§Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold():
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
§Example
Enable wrapping using .quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap():
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
👎 Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.
fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.
See the crate level docs for more details.
§Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);