Struct aws_sdk_machinelearning::operation::create_evaluation::builders::CreateEvaluationFluentBuilder

pub struct CreateEvaluationFluentBuilder { /* private fields */ }

Fluent builder constructing a request to CreateEvaluation.
Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated to a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effective the MLModel functions on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.
CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.

You can use the GetEvaluation operation to check progress of the evaluation during the creation operation.
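As a sketch of end-to-end usage: the snippet below drives the fluent builder through all four required inputs and sends the request. The evaluation, model, and data-source IDs are placeholders, and client construction assumes the usual aws-config defaults; your setup may differ.

```rust
use aws_sdk_machinelearning as ml;

#[tokio::main]
async fn main() -> Result<(), ml::Error> {
    // Load region and credentials from the environment / shared config.
    let config = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;
    let client = ml::Client::new(&config);

    let output = client
        .create_evaluation()
        .evaluation_id("my-evaluation-id")                 // user-supplied unique ID (placeholder)
        .evaluation_name("Evaluation of my model")         // human-readable name (placeholder)
        .ml_model_id("my-ml-model-id")                     // MLModel to evaluate (placeholder)
        .evaluation_data_source_id("my-datasource-id")     // DataSource holding the Target Variable (placeholder)
        .send()
        .await?;

    // CreateEvaluation is asynchronous: the status starts out as PENDING.
    println!("created evaluation: {:?}", output.evaluation_id());
    Ok(())
}
```

Because the operation is asynchronous, a follow-up GetEvaluation call (or polling loop) is the usual way to wait for the status to reach COMPLETED.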
Implementations

impl CreateEvaluationFluentBuilder

pub fn as_input(&self) -> &CreateEvaluationInputBuilder

Access the CreateEvaluation as a reference.
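For instance, as a sketch (the ID is a placeholder and `client` is an already-constructed `aws_sdk_machinelearning::Client`), as_input lets you inspect the accumulated input without sending the request:

```rust
// Build up a request without sending it.
let builder = client
    .create_evaluation()
    .evaluation_id("my-evaluation-id");

// as_input() borrows the underlying CreateEvaluationInputBuilder,
// whose get_* accessors mirror the fluent setters.
let input = builder.as_input();
assert_eq!(input.get_evaluation_id().as_deref(), Some("my-evaluation-id"));
```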
pub async fn send(self) -> Result<CreateEvaluationOutput, SdkError<CreateEvaluationError, HttpResponse>>

Sends the request and returns the response.

If an error occurs, an SdkError will be returned with additional details that can be matched against.

By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
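A sketch of matching the returned SdkError against the generated service error, and of raising the retry limit via RetryConfig at client-construction time; the variant shown (InvalidInputException) is one of the modeled CreateEvaluationError variants, and client setup is otherwise elided:

```rust
use aws_sdk_machinelearning::operation::create_evaluation::CreateEvaluationError;

// Raising the default retry count from 2 to 5 when building the client config.
let config = aws_config::defaults(aws_config::BehaviorVersion::latest())
    .retry_config(aws_config::retry::RetryConfig::standard().with_max_attempts(5))
    .load()
    .await;
let client = aws_sdk_machinelearning::Client::new(&config);

match client.create_evaluation().send().await {
    Ok(output) => println!("evaluation id: {:?}", output.evaluation_id()),
    // into_service_error() unwraps the SdkError down to the modeled error type.
    Err(sdk_err) => match sdk_err.into_service_error() {
        // A service-side validation failure, reported with a message.
        CreateEvaluationError::InvalidInputException(e) => {
            eprintln!("invalid input: {}", e.message().unwrap_or_default());
        }
        // Anything else: throttling, internal errors, unmodeled variants.
        other => eprintln!("CreateEvaluation failed: {}", other),
    },
}
```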
pub fn customize(self) -> CustomizableOperation<CreateEvaluationOutput, CreateEvaluationError, Self>

Consumes this builder, creating a customizable operation that can be modified before being sent.
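As a sketch (assuming a recent SDK version where CustomizableOperation exposes mutate_request), customize() can be used to adjust the outgoing HTTP request before it is sent; the header name and value here are purely illustrative:

```rust
// Add a custom header to the request before sending. The IDs are placeholders.
let output = client
    .create_evaluation()
    .evaluation_id("my-evaluation-id")
    .ml_model_id("my-ml-model-id")
    .evaluation_data_source_id("my-datasource-id")
    .customize()
    .mutate_request(|req| {
        // Runs against the serialized HTTP request just before dispatch.
        req.headers_mut().insert("x-example-header", "demo");
    })
    .send()
    .await?;
```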
pub fn evaluation_id(self, input: impl Into<String>) -> Self

A user-supplied ID that uniquely identifies the Evaluation.

pub fn set_evaluation_id(self, input: Option<String>) -> Self

A user-supplied ID that uniquely identifies the Evaluation.

pub fn get_evaluation_id(&self) -> &Option<String>

A user-supplied ID that uniquely identifies the Evaluation.
pub fn evaluation_name(self, input: impl Into<String>) -> Self

A user-supplied name or description of the Evaluation.

pub fn set_evaluation_name(self, input: Option<String>) -> Self

A user-supplied name or description of the Evaluation.

pub fn get_evaluation_name(&self) -> &Option<String>

A user-supplied name or description of the Evaluation.
pub fn ml_model_id(self, input: impl Into<String>) -> Self

The ID of the MLModel to evaluate. The schema used in creating the MLModel must match the schema of the DataSource used in the Evaluation.

pub fn set_ml_model_id(self, input: Option<String>) -> Self

The ID of the MLModel to evaluate. The schema used in creating the MLModel must match the schema of the DataSource used in the Evaluation.

pub fn get_ml_model_id(&self) -> &Option<String>

The ID of the MLModel to evaluate. The schema used in creating the MLModel must match the schema of the DataSource used in the Evaluation.
pub fn evaluation_data_source_id(self, input: impl Into<String>) -> Self

The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.

pub fn set_evaluation_data_source_id(self, input: Option<String>) -> Self

The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.

pub fn get_evaluation_data_source_id(&self) -> &Option<String>

The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.
Trait Implementations

impl Clone for CreateEvaluationFluentBuilder

fn clone(&self) -> CreateEvaluationFluentBuilder

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more