Struct rusoto_machinelearning::Evaluation
pub struct Evaluation {
    pub compute_time: Option<i64>,
    pub created_at: Option<f64>,
    pub created_by_iam_user: Option<String>,
    pub evaluation_data_source_id: Option<String>,
    pub evaluation_id: Option<String>,
    pub finished_at: Option<f64>,
    pub input_data_location_s3: Option<String>,
    pub last_updated_at: Option<f64>,
    pub ml_model_id: Option<String>,
    pub message: Option<String>,
    pub name: Option<String>,
    pub performance_metrics: Option<PerformanceMetrics>,
    pub started_at: Option<f64>,
    pub status: Option<String>,
}
Represents the output of a GetEvaluation operation.
The content consists of the detailed metadata and data file information, as well as the current status of the Evaluation.
Fields
compute_time: Option<i64>
created_at: Option<f64>
The time that the Evaluation was created. The time is expressed in epoch time.
created_by_iam_user: Option<String>
The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.
evaluation_data_source_id: Option<String>
The ID of the DataSource that is used to evaluate the MLModel.
evaluation_id: Option<String>
The ID that is assigned to the Evaluation at creation.
finished_at: Option<f64>
input_data_location_s3: Option<String>
The location and name of the data in Amazon Simple Storage Service (Amazon S3) that is used in the evaluation.
last_updated_at: Option<f64>
The time of the most recent edit to the Evaluation. The time is expressed in epoch time.
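The created_at, started_at, finished_at, and last_updated_at fields all carry epoch timestamps as Option<f64>. A minimal sketch of converting one into a std::time::SystemTime, assuming the value holds seconds since the Unix epoch (the helper name is illustrative, not part of the crate):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Convert an epoch-time field such as `created_at` or `last_updated_at`
// into a SystemTime. Assumes the f64 holds seconds since the Unix epoch.
fn epoch_to_system_time(epoch: Option<f64>) -> Option<SystemTime> {
    epoch.map(|secs| UNIX_EPOCH + Duration::from_secs_f64(secs))
}

fn main() {
    let created_at = Some(1_500_000_000.0_f64);
    let ts = epoch_to_system_time(created_at).expect("timestamp present");
    // Round-trip back to seconds to check the conversion.
    let secs = ts.duration_since(UNIX_EPOCH).unwrap().as_secs();
    assert_eq!(secs, 1_500_000_000);
    assert_eq!(epoch_to_system_time(None), None);
}
```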
ml_model_id: Option<String>
The ID of the MLModel that is the focus of the evaluation.
message: Option<String>
A description of the most recent details about evaluating the MLModel.
name: Option<String>
A user-supplied name or description of the Evaluation.
performance_metrics: Option<PerformanceMetrics>
Measurements of how well the MLModel performed, using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

- BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
- RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
- MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.
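The metric arrives inside the PerformanceMetrics value as a name-to-value string map and must be parsed into a number. A sketch using a plain HashMap as a stand-in for the contents of PerformanceMetrics (the actual field layout of that type is not shown on this page, so the map here is an assumption):

```rust
use std::collections::HashMap;

// Look up one of the documented metric names ("BinaryAUC",
// "RegressionRMSE", or "MulticlassAvgFScore") in a string map
// standing in for the PerformanceMetrics contents, and parse it.
fn metric(metrics: &HashMap<String, String>, name: &str) -> Option<f64> {
    metrics.get(name)?.parse().ok()
}

fn main() {
    let mut m = HashMap::new();
    m.insert("BinaryAUC".to_string(), "0.92".to_string());
    assert_eq!(metric(&m, "BinaryAUC"), Some(0.92));
    // Only the metric matching the MLModel type is present.
    assert_eq!(metric(&m, "RegressionRMSE"), None);
}
```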
started_at: Option<f64>
status: Option<String>
The status of the evaluation. This element can have one of the following values:

- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
- INPROGRESS - The evaluation is underway.
- FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
- COMPLETED - The evaluation process completed successfully.
- DELETED - The Evaluation is marked as deleted. It is not usable.
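Because status is a free-form Option<String>, callers typically match it against the documented values. A minimal sketch under that assumption (the helper is illustrative, not part of the crate):

```rust
// Decide whether an Evaluation has reached a terminal state,
// based on the documented status values. FAILED and DELETED
// evaluations are terminal but not usable; COMPLETED is both
// terminal and usable.
fn is_terminal(status: Option<&str>) -> bool {
    matches!(status, Some("COMPLETED") | Some("FAILED") | Some("DELETED"))
}

fn main() {
    assert!(is_terminal(Some("COMPLETED")));
    assert!(!is_terminal(Some("PENDING")));
    assert!(!is_terminal(Some("INPROGRESS")));
    assert!(!is_terminal(None));
}
```

The `status.as_deref()` accessor on the struct's Option<String> field yields the Option<&str> this helper expects.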
Trait Implementations
impl Clone for Evaluation
fn clone(&self) -> Evaluation
Returns a copy of the value. Read more
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more
impl Debug for Evaluation
impl Default for Evaluation
fn default() -> Evaluation
Returns the “default value” for a type. Read more
impl<'de> Deserialize<'de> for Evaluation
fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error> where
    __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer. Read more
impl PartialEq<Evaluation> for Evaluation
fn eq(&self, other: &Evaluation) -> bool
This method tests for self and other values to be equal, and is used by ==. Read more
fn ne(&self, other: &Evaluation) -> bool
This method tests for !=.
impl StructuralPartialEq for Evaluation
Auto Trait Implementations
impl RefUnwindSafe for Evaluation
impl Send for Evaluation
impl Sync for Evaluation
impl Unpin for Evaluation
impl UnwindSafe for Evaluation
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> ToOwned for T where
    T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning. Read more
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more