Struct aws_sdk_machinelearning::client::fluent_builders::CreateEvaluation
pub struct CreateEvaluation<C = DynConnector, M = DefaultMiddleware, R = Standard> { /* fields omitted */ }
Fluent builder constructing a request to CreateEvaluation.
Creates a new Evaluation of an MLModel. An MLModel is evaluated on a set of observations associated with a DataSource. Like a DataSource for an MLModel, the DataSource for an Evaluation contains values for the Target Variable. The Evaluation compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effectively the MLModel performs on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE, or MulticlassAvgFScore, based on the corresponding MLModelType: BINARY, REGRESSION, or MULTICLASS.
CreateEvaluation is an asynchronous operation. In response to CreateEvaluation, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to PENDING. After the Evaluation is created and ready for use, Amazon ML sets the status to COMPLETED.
You can use the GetEvaluation operation to check the progress of the evaluation during the creation operation.
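Because CreateEvaluation is asynchronous, a typical workflow sends the request and then polls GetEvaluation. The following is a minimal sketch, not a definitive example: it assumes the generated fluent setters (evaluation_id, evaluation_name, ml_model_id, evaluation_data_source_id) and uses placeholder IDs throughout.

```rust
use aws_sdk_machinelearning::Client;

// Sketch only: all IDs are placeholders and error handling is simplified.
async fn create_and_poll(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    client
        .create_evaluation()
        .evaluation_id("my-evaluation-id")             // user-supplied unique ID
        .evaluation_name("weekly model check")         // user-supplied name/description
        .ml_model_id("my-ml-model-id")                 // MLModel to evaluate
        .evaluation_data_source_id("my-datasource-id") // DataSource with the Target Variable
        .send()
        .await?;

    // CreateEvaluation returns immediately with status PENDING; poll
    // GetEvaluation until the status reaches COMPLETED (or FAILED).
    loop {
        let resp = client
            .get_evaluation()
            .evaluation_id("my-evaluation-id")
            .send()
            .await?;
        // Inspect the returned status here and break when it is COMPLETED;
        // sleep between polls rather than looping tightly.
        let _ = resp;
        break;
    }
    Ok(())
}
```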
Implementations
impl<C, M, R> CreateEvaluation<C, M, R> where
C: SmithyConnector,
M: SmithyMiddleware<C>,
R: NewRequestPolicy,
pub async fn send(
self
) -> Result<CreateEvaluationOutput, SdkError<CreateEvaluationError>> where
R::Policy: SmithyRetryPolicy<CreateEvaluationInputOperationOutputAlias, CreateEvaluationOutput, CreateEvaluationError, CreateEvaluationInputOperationRetryAlias>,
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that
can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
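The default "retried twice" behavior can be pictured with a generic retry loop. This is an illustrative stdlib sketch of the pattern, not the SDK's actual retry implementation (which also classifies errors as retryable and applies backoff):

```rust
// Illustrative only: retry an operation up to `max_retries` additional
// times after the first failure, mirroring the "retried twice" default.
fn retry_with_attempts<T, E>(
    max_retries: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(_) if attempt < max_retries => {
                // A real policy would check retryability and back off here.
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    let mut calls = 0;
    // An operation that fails twice, then succeeds: with two retries
    // allowed it is invoked three times in total.
    let result = retry_with_attempts(2, || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));
    println!("succeeded after {} calls", calls);
}
```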
A user-supplied ID that uniquely identifies the Evaluation.
A user-supplied name or description of the Evaluation.
The ID of the MLModel to evaluate.
The schema used in creating the MLModel must match the schema of the DataSource used in the Evaluation.
The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.
Trait Implementations
Auto Trait Implementations
impl<C = DynConnector, M = DefaultMiddleware, R = Standard> !RefUnwindSafe for CreateEvaluation<C, M, R>
impl<C, M, R> Send for CreateEvaluation<C, M, R> where
C: Send + Sync,
M: Send + Sync,
R: Send + Sync,
impl<C, M, R> Sync for CreateEvaluation<C, M, R> where
C: Send + Sync,
M: Send + Sync,
R: Send + Sync,
impl<C, M, R> Unpin for CreateEvaluation<C, M, R>
impl<C = DynConnector, M = DefaultMiddleware, R = Standard> !UnwindSafe for CreateEvaluation<C, M, R>
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.
