Struct aws_sdk_machinelearning::client::Client
pub struct Client<C = DynConnector, M = DefaultMiddleware, R = Standard> { /* private fields */ }
Client for Amazon Machine Learning
Client for invoking operations on Amazon Machine Learning. Each operation on Amazon Machine Learning is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
Examples
Constructing a client and invoking an operation
// Create a shared configuration. This can be used and shared between multiple service clients.
let shared_config = aws_config::load_from_env().await;
let client = aws_sdk_machinelearning::Client::new(&shared_config);
// invoke an operation
/* let rsp = client
    .<operation_name>()
    .<param>("some value")
    .send()
    .await; */
Constructing a client with custom configuration
use aws_config::RetryConfig;
let shared_config = aws_config::load_from_env().await;
let config = aws_sdk_machinelearning::config::Builder::from(&shared_config)
.retry_config(RetryConfig::disabled())
.build();
let client = aws_sdk_machinelearning::Client::from_conf(config);
Implementations
impl<C, M, R> Client<C, M, R> where
    C: SmithyConnector,
    M: SmithyMiddleware<C>,
    R: NewRequestPolicy,
Constructs a fluent builder for the AddTags operation.
- The fluent builder is configurable:
  - tags(Vec<Tag>) / set_tags(Option<Vec<Tag>>): The key-value pairs to use to create tags. If you specify a key without specifying a value, Amazon ML creates a tag with the specified key and a value of null.
  - resource_id(impl Into<String>) / set_resource_id(Option<String>): The ID of the ML object to tag. For example, exampleModelId.
  - resource_type(TaggableResourceType) / set_resource_type(Option<TaggableResourceType>): The type of the ML object to tag.
- On success, responds with AddTagsOutput with field(s):
  - resource_id(Option<String>): The ID of the ML object that was tagged.
  - resource_type(Option<TaggableResourceType>): The type of the ML object that was tagged.
- On failure, responds with SdkError<AddTagsError>
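A minimal sketch of using the AddTags builder above. The resource ID and tag key/value are placeholders, and Tag and TaggableResourceType are assumed to be re-exported from the crate's model module:

```rust
use aws_sdk_machinelearning::model::{Tag, TaggableResourceType};

// Hypothetical helper: tags an existing ML model with one key-value pair.
async fn tag_model(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    // Omitting .value(...) would create a tag whose value is null.
    let tag = Tag::builder().key("team").value("forecasting").build();
    let resp = client
        .add_tags()
        .tags(tag) // appends one Tag; call repeatedly to add more
        .resource_id("exampleModelId") // placeholder ID
        .resource_type(TaggableResourceType::MlModel)
        .send()
        .await?;
    println!("tagged {:?} ({:?})", resp.resource_id(), resp.resource_type());
    Ok(())
}
```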
Constructs a fluent builder for the CreateBatchPrediction operation.
- The fluent builder is configurable:
  - batch_prediction_id(impl Into<String>) / set_batch_prediction_id(Option<String>): A user-supplied ID that uniquely identifies the BatchPrediction.
  - batch_prediction_name(impl Into<String>) / set_batch_prediction_name(Option<String>): A user-supplied name or description of the BatchPrediction. BatchPredictionName can only use the UTF-8 character set.
  - ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): The ID of the MLModel that will generate predictions for the group of observations.
  - batch_prediction_data_source_id(impl Into<String>) / set_batch_prediction_data_source_id(Option<String>): The ID of the DataSource that points to the group of observations to predict.
  - output_uri(impl Into<String>) / set_output_uri(Option<String>): The location of an Amazon Simple Storage Service (Amazon S3) bucket or directory to store the batch prediction results. The following substrings are not allowed in the s3 key portion of the outputURI field: ':', '//', '/./', '/../'. Amazon ML needs permissions to store and retrieve the logs on your behalf. For information about how to set permissions, see the Amazon Machine Learning Developer Guide.
- On success, responds with CreateBatchPredictionOutput with field(s):
  - batch_prediction_id(Option<String>): A user-supplied ID that uniquely identifies the BatchPrediction. This value is identical to the value of the BatchPredictionId in the request.
- On failure, responds with SdkError<CreateBatchPredictionError>
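A sketch of invoking this builder; all IDs and the S3 URI below are placeholder assumptions:

```rust
// Hypothetical helper: creates a batch prediction from an existing model
// and data source, writing results to a placeholder S3 location.
async fn create_batch_prediction(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let resp = client
        .create_batch_prediction()
        .batch_prediction_id("exampleBatchPredictionId")
        .batch_prediction_name("Holiday mailer predictions")
        .ml_model_id("exampleModelId")
        .batch_prediction_data_source_id("exampleDataSourceId")
        // ':', '//', '/./' and '/../' are not allowed in the S3 key portion.
        .output_uri("s3://examplebucket/batch-output/")
        .send()
        .await?;
    println!("created {:?}", resp.batch_prediction_id());
    Ok(())
}
```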
Constructs a fluent builder for the CreateDataSourceFromRDS operation.
- The fluent builder is configurable:
  - data_source_id(impl Into<String>) / set_data_source_id(Option<String>): A user-supplied ID that uniquely identifies the DataSource. Typically, an Amazon Resource Number (ARN) becomes the ID for a DataSource.
  - data_source_name(impl Into<String>) / set_data_source_name(Option<String>): A user-supplied name or description of the DataSource.
  - rds_data(RdsDataSpec) / set_rds_data(Option<RdsDataSpec>): The data specification of an Amazon RDS DataSource:
    - DatabaseInformation:
      - DatabaseName - The name of the Amazon RDS database.
      - InstanceIdentifier - A unique identifier for the Amazon RDS database instance.
    - DatabaseCredentials - AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.
    - ResourceRole - A role (DataPipelineDefaultResourceRole) assumed by an EC2 instance to carry out the copy task from Amazon RDS to Amazon Simple Storage Service (Amazon S3). For more information, see Role templates for data pipelines.
    - ServiceRole - A role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.
    - SecurityInfo - The security information to use to access an RDS DB instance. You need to set up appropriate ingress rules for the security entity IDs provided to allow access to the Amazon RDS instance. Specify a [SubnetId, SecurityGroupIds] pair for a VPC-based RDS DB instance.
    - SelectSqlQuery - A query that is used to retrieve the observation data for the Datasource.
    - S3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.
    - DataSchemaUri - The Amazon S3 location of the DataSchema.
    - DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.
    - DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the Datasource. Sample: "{"splitting":{"percentBegin":10,"percentEnd":60}}"
  - role_arn(impl Into<String>) / set_role_arn(Option<String>): The role that Amazon ML assumes on behalf of the user to create and activate a data pipeline in the user's account and copy data using the SelectSqlQuery query from Amazon RDS to Amazon S3.
  - compute_statistics(bool) / set_compute_statistics(bool): The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.
- On success, responds with CreateDataSourceFromRdsOutput with field(s):
  - data_source_id(Option<String>): A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.
- On failure, responds with SdkError<CreateDataSourceFromRDSError>
Constructs a fluent builder for the CreateDataSourceFromRedshift operation.
- The fluent builder is configurable:
  - data_source_id(impl Into<String>) / set_data_source_id(Option<String>): A user-supplied ID that uniquely identifies the DataSource.
  - data_source_name(impl Into<String>) / set_data_source_name(Option<String>): A user-supplied name or description of the DataSource.
  - data_spec(RedshiftDataSpec) / set_data_spec(Option<RedshiftDataSpec>): The data specification of an Amazon Redshift DataSource:
    - DatabaseInformation:
      - DatabaseName - The name of the Amazon Redshift database.
      - ClusterIdentifier - The unique ID for the Amazon Redshift cluster.
    - DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.
    - SelectSqlQuery - The query that is used to retrieve the observation data for the Datasource.
    - S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.
    - DataSchemaUri - The Amazon S3 location of the DataSchema.
    - DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.
    - DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource. Sample: "{"splitting":{"percentBegin":10,"percentEnd":60}}"
  - role_arn(impl Into<String>) / set_role_arn(Option<String>): A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:
    - A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster
    - An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation
  - compute_statistics(bool) / set_compute_statistics(bool): The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.
- On success, responds with CreateDataSourceFromRedshiftOutput with field(s):
  - data_source_id(Option<String>): A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.
- On failure, responds with SdkError<CreateDataSourceFromRedshiftError>
Constructs a fluent builder for the CreateDataSourceFromS3 operation.
- The fluent builder is configurable:
  - data_source_id(impl Into<String>) / set_data_source_id(Option<String>): A user-supplied identifier that uniquely identifies the DataSource.
  - data_source_name(impl Into<String>) / set_data_source_name(Option<String>): A user-supplied name or description of the DataSource.
  - data_spec(S3DataSpec) / set_data_spec(Option<S3DataSpec>): The data specification of a DataSource:
    - DataLocationS3 - The Amazon S3 location of the observation data.
    - DataSchemaLocationS3 - The Amazon S3 location of the DataSchema.
    - DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.
    - DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the Datasource. Sample: "{"splitting":{"percentBegin":10,"percentEnd":60}}"
  - compute_statistics(bool) / set_compute_statistics(bool): The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.
- On success, responds with CreateDataSourceFromS3Output with field(s):
  - data_source_id(Option<String>): A user-supplied ID that uniquely identifies the DataSource. This value should be identical to the value of the DataSourceID in the request.
- On failure, responds with SdkError<CreateDataSourceFromS3Error>
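A sketch of building the S3DataSpec and creating the data source. The bucket paths and IDs are placeholders, and S3DataSpec is assumed to come from the crate's model module:

```rust
use aws_sdk_machinelearning::model::S3DataSpec;

// Hypothetical helper: creates a DataSource from observation data in S3.
async fn create_s3_data_source(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    // DataSchemaLocationS3 could be omitted if the schema is given inline.
    let spec = S3DataSpec::builder()
        .data_location_s3("s3://examplebucket/input/data.csv")
        .data_schema_location_s3("s3://examplebucket/input/data.csv.schema")
        .data_rearrangement("{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}")
        .build();
    let resp = client
        .create_data_source_from_s3()
        .data_source_id("exampleS3DataSourceId")
        .data_source_name("Holiday mailer observations")
        .data_spec(spec)
        .compute_statistics(true) // required if this DataSource will train an MLModel
        .send()
        .await?;
    println!("created {:?}", resp.data_source_id());
    Ok(())
}
```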
Constructs a fluent builder for the CreateEvaluation operation.
- The fluent builder is configurable:
  - evaluation_id(impl Into<String>) / set_evaluation_id(Option<String>): A user-supplied ID that uniquely identifies the Evaluation.
  - evaluation_name(impl Into<String>) / set_evaluation_name(Option<String>): A user-supplied name or description of the Evaluation.
  - ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): The ID of the MLModel to evaluate. The schema used in creating the MLModel must match the schema of the DataSource used in the Evaluation.
  - evaluation_data_source_id(impl Into<String>) / set_evaluation_data_source_id(Option<String>): The ID of the DataSource for the evaluation. The schema of the DataSource must match the schema used to create the MLModel.
- On success, responds with CreateEvaluationOutput with field(s):
  - evaluation_id(Option<String>): The user-supplied ID that uniquely identifies the Evaluation. This value should be identical to the value of the EvaluationId in the request.
- On failure, responds with SdkError<CreateEvaluationError>
Constructs a fluent builder for the CreateMLModel operation.
- The fluent builder is configurable:
  - ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): A user-supplied ID that uniquely identifies the MLModel.
  - ml_model_name(impl Into<String>) / set_ml_model_name(Option<String>): A user-supplied name or description of the MLModel.
  - ml_model_type(MlModelType) / set_ml_model_type(Option<MlModelType>): The category of supervised learning that this MLModel will address. Choose from the following types:
    - Choose REGRESSION if the MLModel will be used to predict a numeric value.
    - Choose BINARY if the MLModel result has two possible values.
    - Choose MULTICLASS if the MLModel result has a limited number of values.
    For more information, see the Amazon Machine Learning Developer Guide.
  - parameters(HashMap<String, String>) / set_parameters(Option<HashMap<String, String>>): A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs. The following is the current set of training parameters:
    - sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance. The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.
    - sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.
    - sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling the data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.
    - sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can't be used when L2 is specified. Use this parameter sparingly.
    - sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can't be used when L1 is specified. Use this parameter sparingly.
  - training_data_source_id(impl Into<String>) / set_training_data_source_id(Option<String>): The DataSource that points to the training data.
  - recipe(impl Into<String>) / set_recipe(Option<String>): The data recipe for creating the MLModel. You must specify either the recipe or its URI. If you don't specify a recipe or its URI, Amazon ML creates a default.
  - recipe_uri(impl Into<String>) / set_recipe_uri(Option<String>): The Amazon Simple Storage Service (Amazon S3) location and file name that contains the MLModel recipe. You must specify either the recipe or its URI. If you don't specify a recipe or its URI, Amazon ML creates a default.
- On success, responds with CreateMlModelOutput with field(s):
  - ml_model_id(Option<String>): A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.
- On failure, responds with SdkError<CreateMLModelError>
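A sketch of creating a binary model with a couple of the training parameters listed above; the IDs and parameter values are placeholder assumptions:

```rust
use std::collections::HashMap;

use aws_sdk_machinelearning::model::MlModelType;

// Hypothetical helper: trains a binary classifier from an existing DataSource.
async fn create_ml_model(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    // Training parameters are plain string key-value pairs.
    let mut params = HashMap::new();
    params.insert("sgd.maxPasses".to_string(), "20".to_string());
    params.insert("sgd.shuffleType".to_string(), "auto".to_string());
    let resp = client
        .create_ml_model()
        .ml_model_id("exampleModelId")
        .ml_model_name("Holiday mailer response model")
        .ml_model_type(MlModelType::Binary) // target has two possible values
        .set_parameters(Some(params))
        .training_data_source_id("exampleTrainingDataSourceId")
        // No recipe or recipe_uri given, so Amazon ML creates a default recipe.
        .send()
        .await?;
    println!("created {:?}", resp.ml_model_id());
    Ok(())
}
```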
Constructs a fluent builder for the CreateRealtimeEndpoint operation.
- The fluent builder is configurable:
  - ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): The ID assigned to the MLModel during creation.
- On success, responds with CreateRealtimeEndpointOutput with field(s):
  - ml_model_id(Option<String>): A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.
  - realtime_endpoint_info(Option<RealtimeEndpointInfo>): The endpoint information of the MLModel
- On failure, responds with SdkError<CreateRealtimeEndpointError>
Constructs a fluent builder for the DeleteBatchPrediction operation.
- The fluent builder is configurable:
  - batch_prediction_id(impl Into<String>) / set_batch_prediction_id(Option<String>): A user-supplied ID that uniquely identifies the BatchPrediction.
- On success, responds with DeleteBatchPredictionOutput with field(s):
  - batch_prediction_id(Option<String>): A user-supplied ID that uniquely identifies the BatchPrediction. This value should be identical to the value of the BatchPredictionID in the request.
- On failure, responds with SdkError<DeleteBatchPredictionError>
Constructs a fluent builder for the DeleteDataSource operation.
- The fluent builder is configurable:
  - data_source_id(impl Into<String>) / set_data_source_id(Option<String>): A user-supplied ID that uniquely identifies the DataSource.
- On success, responds with DeleteDataSourceOutput with field(s):
  - data_source_id(Option<String>): A user-supplied ID that uniquely identifies the DataSource. This value should be identical to the value of the DataSourceID in the request.
- On failure, responds with SdkError<DeleteDataSourceError>
Constructs a fluent builder for the DeleteEvaluation operation.
- The fluent builder is configurable:
  - evaluation_id(impl Into<String>) / set_evaluation_id(Option<String>): A user-supplied ID that uniquely identifies the Evaluation to delete.
- On success, responds with DeleteEvaluationOutput with field(s):
  - evaluation_id(Option<String>): A user-supplied ID that uniquely identifies the Evaluation. This value should be identical to the value of the EvaluationId in the request.
- On failure, responds with SdkError<DeleteEvaluationError>
Constructs a fluent builder for the DeleteMLModel operation.
- The fluent builder is configurable:
  - ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): A user-supplied ID that uniquely identifies the MLModel.
- On success, responds with DeleteMlModelOutput with field(s):
  - ml_model_id(Option<String>): A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelID in the request.
- On failure, responds with SdkError<DeleteMLModelError>
Constructs a fluent builder for the DeleteRealtimeEndpoint operation.
- The fluent builder is configurable:
  - ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): The ID assigned to the MLModel during creation.
- On success, responds with DeleteRealtimeEndpointOutput with field(s):
  - ml_model_id(Option<String>): A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.
  - realtime_endpoint_info(Option<RealtimeEndpointInfo>): The endpoint information of the MLModel
- On failure, responds with SdkError<DeleteRealtimeEndpointError>
Constructs a fluent builder for the DeleteTags operation.
- The fluent builder is configurable:
  - tag_keys(Vec<String>) / set_tag_keys(Option<Vec<String>>): One or more tags to delete.
  - resource_id(impl Into<String>) / set_resource_id(Option<String>): The ID of the tagged ML object. For example, exampleModelId.
  - resource_type(TaggableResourceType) / set_resource_type(Option<TaggableResourceType>): The type of the tagged ML object.
- On success, responds with DeleteTagsOutput with field(s):
  - resource_id(Option<String>): The ID of the ML object from which tags were deleted.
  - resource_type(Option<TaggableResourceType>): The type of the ML object from which tags were deleted.
- On failure, responds with SdkError<DeleteTagsError>
Constructs a fluent builder for the DescribeBatchPredictions operation. This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - filter_variable(BatchPredictionFilterVariable) / set_filter_variable(Option<BatchPredictionFilterVariable>): Use one of the following variables to filter a list of BatchPrediction:
    - CreatedAt - Sets the search criteria to the BatchPrediction creation date.
    - Status - Sets the search criteria to the BatchPrediction status.
    - Name - Sets the search criteria to the contents of the BatchPrediction Name.
    - IAMUser - Sets the search criteria to the user account that invoked the BatchPrediction creation.
    - MLModelId - Sets the search criteria to the MLModel used in the BatchPrediction.
    - DataSourceId - Sets the search criteria to the DataSource used in the BatchPrediction.
    - DataURI - Sets the search criteria to the data file(s) used in the BatchPrediction. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
  - eq(impl Into<String>) / set_eq(Option<String>): The equal to operator. The BatchPrediction results will have FilterVariable values that exactly match the value specified with EQ.
  - gt(impl Into<String>) / set_gt(Option<String>): The greater than operator. The BatchPrediction results will have FilterVariable values that are greater than the value specified with GT.
  - lt(impl Into<String>) / set_lt(Option<String>): The less than operator. The BatchPrediction results will have FilterVariable values that are less than the value specified with LT.
  - ge(impl Into<String>) / set_ge(Option<String>): The greater than or equal to operator. The BatchPrediction results will have FilterVariable values that are greater than or equal to the value specified with GE.
  - le(impl Into<String>) / set_le(Option<String>): The less than or equal to operator. The BatchPrediction results will have FilterVariable values that are less than or equal to the value specified with LE.
  - ne(impl Into<String>) / set_ne(Option<String>): The not equal to operator. The BatchPrediction results will have FilterVariable values not equal to the value specified with NE.
  - prefix(impl Into<String>) / set_prefix(Option<String>): A string that is found at the beginning of a variable, such as Name or Id. For example, a BatchPrediction operation could have the Name 2014-09-09-HolidayGiftMailer. To search for this BatchPrediction, select Name for the FilterVariable and any of the following strings for the Prefix:
    - 2014-09
    - 2014-09-09
    - 2014-09-09-Holiday
  - sort_order(SortOrder) / set_sort_order(Option<SortOrder>): A two-value parameter that determines the sequence of the resulting list of MLModels.
    - asc - Arranges the list in ascending order (A-Z, 0-9).
    - dsc - Arranges the list in descending order (Z-A, 9-0).
    Results are sorted by FilterVariable.
  - next_token(impl Into<String>) / set_next_token(Option<String>): An ID of the page in the paginated results.
  - limit(i32) / set_limit(Option<i32>): The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.
- On success, responds with DescribeBatchPredictionsOutput with field(s):
  - results(Option<Vec<BatchPrediction>>): A list of BatchPrediction objects that meet the search criteria.
  - next_token(Option<String>): The ID of the next page in the paginated results that indicates at least one more page follows.
- On failure, responds with SdkError<DescribeBatchPredictionsError>
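A sketch of combining the filter setters above with into_paginator(). The filter values are placeholders, and consuming the page stream via tokio_stream::StreamExt is an assumption about this SDK generation's paginator type:

```rust
use aws_sdk_machinelearning::model::{BatchPredictionFilterVariable, SortOrder};
use tokio_stream::StreamExt;

// Hypothetical helper: lists BatchPredictions whose Name starts with "2014-09".
async fn list_batch_predictions(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let mut pages = client
        .describe_batch_predictions()
        .filter_variable(BatchPredictionFilterVariable::Name)
        .prefix("2014-09")
        .sort_order(SortOrder::Asc)
        .limit(25)
        .into_paginator()
        .send();
    // Each item of the stream is one page of results.
    while let Some(page) = pages.next().await {
        let page = page?;
        for bp in page.results().unwrap_or_default() {
            println!("{:?}", bp.name());
        }
    }
    Ok(())
}
```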
Constructs a fluent builder for the DescribeDataSources operation. This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - filter_variable(DataSourceFilterVariable) / set_filter_variable(Option<DataSourceFilterVariable>): Use one of the following variables to filter a list of DataSource:
    - CreatedAt - Sets the search criteria to DataSource creation dates.
    - Status - Sets the search criteria to DataSource statuses.
    - Name - Sets the search criteria to the contents of DataSource Name.
    - DataUri - Sets the search criteria to the URI of data files used to create the DataSource. The URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
    - IAMUser - Sets the search criteria to the user account that invoked the DataSource creation.
  - eq(impl Into<String>) / set_eq(Option<String>): The equal to operator. The DataSource results will have FilterVariable values that exactly match the value specified with EQ.
  - gt(impl Into<String>) / set_gt(Option<String>): The greater than operator. The DataSource results will have FilterVariable values that are greater than the value specified with GT.
  - lt(impl Into<String>) / set_lt(Option<String>): The less than operator. The DataSource results will have FilterVariable values that are less than the value specified with LT.
  - ge(impl Into<String>) / set_ge(Option<String>): The greater than or equal to operator. The DataSource results will have FilterVariable values that are greater than or equal to the value specified with GE.
  - le(impl Into<String>) / set_le(Option<String>): The less than or equal to operator. The DataSource results will have FilterVariable values that are less than or equal to the value specified with LE.
  - ne(impl Into<String>) / set_ne(Option<String>): The not equal to operator. The DataSource results will have FilterVariable values not equal to the value specified with NE.
  - prefix(impl Into<String>) / set_prefix(Option<String>): A string that is found at the beginning of a variable, such as Name or Id. For example, a DataSource could have the Name 2014-09-09-HolidayGiftMailer. To search for this DataSource, select Name for the FilterVariable and any of the following strings for the Prefix:
    - 2014-09
    - 2014-09-09
    - 2014-09-09-Holiday
  - sort_order(SortOrder) / set_sort_order(Option<SortOrder>): A two-value parameter that determines the sequence of the resulting list of DataSource.
    - asc - Arranges the list in ascending order (A-Z, 0-9).
    - dsc - Arranges the list in descending order (Z-A, 9-0).
    Results are sorted by FilterVariable.
  - next_token(impl Into<String>) / set_next_token(Option<String>): The ID of the page in the paginated results.
  - limit(i32) / set_limit(Option<i32>): The maximum number of DataSource to include in the result.
- On success, responds with DescribeDataSourcesOutput with field(s):
  - results(Option<Vec<DataSource>>): A list of DataSource that meet the search criteria.
  - next_token(Option<String>): An ID of the next page in the paginated results that indicates at least one more page follows.
- On failure, responds with SdkError<DescribeDataSourcesError>
Constructs a fluent builder for the DescribeEvaluations operation. This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - filter_variable(EvaluationFilterVariable) / set_filter_variable(Option<EvaluationFilterVariable>): Use one of the following variables to filter a list of Evaluation objects:
    - CreatedAt - Sets the search criteria to the Evaluation creation date.
    - Status - Sets the search criteria to the Evaluation status.
    - Name - Sets the search criteria to the contents of Evaluation Name.
    - IAMUser - Sets the search criteria to the user account that invoked an Evaluation.
    - MLModelId - Sets the search criteria to the MLModel that was evaluated.
    - DataSourceId - Sets the search criteria to the DataSource used in Evaluation.
    - DataUri - Sets the search criteria to the data file(s) used in Evaluation. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.
  - eq(impl Into<String>) / set_eq(Option<String>): The equal to operator. The Evaluation results will have FilterVariable values that exactly match the value specified with EQ.
  - gt(impl Into<String>) / set_gt(Option<String>): The greater than operator. The Evaluation results will have FilterVariable values that are greater than the value specified with GT.
  - lt(impl Into<String>) / set_lt(Option<String>): The less than operator. The Evaluation results will have FilterVariable values that are less than the value specified with LT.
  - ge(impl Into<String>) / set_ge(Option<String>): The greater than or equal to operator. The Evaluation results will have FilterVariable values that are greater than or equal to the value specified with GE.
  - le(impl Into<String>) / set_le(Option<String>): The less than or equal to operator. The Evaluation results will have FilterVariable values that are less than or equal to the value specified with LE.
  - ne(impl Into<String>) / set_ne(Option<String>): The not equal to operator. The Evaluation results will have FilterVariable values not equal to the value specified with NE.
  - prefix(impl Into<String>) / set_prefix(Option<String>): A string that is found at the beginning of a variable, such as Name or Id. For example, an Evaluation could have the Name 2014-09-09-HolidayGiftMailer. To search for this Evaluation, select Name for the FilterVariable and any of the following strings for the Prefix:
    - 2014-09
    - 2014-09-09
    - 2014-09-09-Holiday
  - sort_order(SortOrder) / set_sort_order(Option<SortOrder>): A two-value parameter that determines the sequence of the resulting list of Evaluation.
    - asc - Arranges the list in ascending order (A-Z, 0-9).
    - dsc - Arranges the list in descending order (Z-A, 9-0).
    Results are sorted by FilterVariable.
  - next_token(impl Into<String>) / set_next_token(Option<String>): The ID of the page in the paginated results.
  - limit(i32) / set_limit(Option<i32>): The maximum number of Evaluation to include in the result.
- On success, responds with DescribeEvaluationsOutput with field(s):
  - results(Option<Vec<Evaluation>>): A list of Evaluation that meet the search criteria.
  - next_token(Option<String>): The ID of the next page in the paginated results that indicates at least one more page follows.
- On failure, responds with SdkError<DescribeEvaluationsError>
Constructs a fluent builder for the DescribeMLModels
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
filter_variable(MlModelFilterVariable)
/set_filter_variable(Option<MlModelFilterVariable>)
:Use one of the following variables to filter a list of
MLModel
:-
CreatedAt
- Sets the search criteria toMLModel
creation date. -
Status
- Sets the search criteria toMLModel
status. -
Name
- Sets the search criteria to the contents ofMLModel
Name
. -
IAMUser
- Sets the search criteria to the user account that invoked the MLModel creation.
- TrainingDataSourceId - Sets the search criteria to the DataSource used to train one or more MLModel.
- RealtimeEndpointStatus - Sets the search criteria to the MLModel real-time endpoint status.
- MLModelType - Sets the search criteria to MLModel type: binary, regression, or multi-class.
- Algorithm - Sets the search criteria to the algorithm that the MLModel uses.
- TrainingDataURI - Sets the search criteria to the data file(s) used in training an MLModel. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

eq(impl Into<String>) / set_eq(Option<String>): The equal to operator. The MLModel results will have FilterVariable values that exactly match the value specified with EQ.

gt(impl Into<String>) / set_gt(Option<String>): The greater than operator. The MLModel results will have FilterVariable values that are greater than the value specified with GT.

lt(impl Into<String>) / set_lt(Option<String>): The less than operator. The MLModel results will have FilterVariable values that are less than the value specified with LT.

ge(impl Into<String>) / set_ge(Option<String>): The greater than or equal to operator. The MLModel results will have FilterVariable values that are greater than or equal to the value specified with GE.

le(impl Into<String>) / set_le(Option<String>): The less than or equal to operator. The MLModel results will have FilterVariable values that are less than or equal to the value specified with LE.

ne(impl Into<String>) / set_ne(Option<String>): The not equal to operator. The MLModel results will have FilterVariable values not equal to the value specified with NE.

prefix(impl Into<String>) / set_prefix(Option<String>): A string that is found at the beginning of a variable, such as Name or Id.

For example, an MLModel could have the Name 2014-09-09-HolidayGiftMailer. To search for this MLModel, select Name for the FilterVariable and any of the following strings for the Prefix:

- 2014-09
- 2014-09-09
- 2014-09-09-Holiday

sort_order(SortOrder) / set_sort_order(Option<SortOrder>): A two-value parameter that determines the sequence of the resulting list of MLModel.

- asc - Arranges the list in ascending order (A-Z, 0-9).
- dsc - Arranges the list in descending order (Z-A, 9-0).

Results are sorted by FilterVariable.

next_token(impl Into<String>) / set_next_token(Option<String>): The ID of the page in the paginated results.

limit(i32) / set_limit(Option<i32>): The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.

- On success, responds with DescribeMlModelsOutput with field(s):

results(Option<Vec<MlModel>>): A list of MLModel that meet the search criteria.

next_token(Option<String>): The ID of the next page in the paginated results that indicates at least one more page follows.

- On failure, responds with SdkError<DescribeMLModelsError>
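As a sketch of how the filter, prefix, sort, and limit parameters combine, the following assumes a `filter_variable` setter and the `MlModelFilterVariable`/`SortOrder` enum variant names shown in the comments, which are not confirmed by this excerpt:

```rust
use aws_sdk_machinelearning::model::{MlModelFilterVariable, SortOrder};

// List models whose Name starts with "2014-09", sorted in descending
// order by the filter variable, 25 results per page.
async fn search_models(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let resp = client
        .describe_ml_models()
        .filter_variable(MlModelFilterVariable::Name) // assumed variant name
        .prefix("2014-09")
        .sort_order(SortOrder::Dsc) // assumed variant for "dsc"
        .limit(25)
        .send()
        .await?;
    for model in resp.results().unwrap_or_default() {
        println!("{:?}: {:?}", model.ml_model_id(), model.name());
    }
    Ok(())
}
```

The `next_token` field of the output can be fed back via `next_token(...)` to fetch subsequent pages.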
Constructs a fluent builder for the DescribeTags operation.

- The fluent builder is configurable:

resource_id(impl Into<String>) / set_resource_id(Option<String>): The ID of the ML object. For example, exampleModelId.

resource_type(TaggableResourceType) / set_resource_type(Option<TaggableResourceType>): The type of the ML object.

- On success, responds with DescribeTagsOutput with field(s):

resource_id(Option<String>): The ID of the tagged ML object.

resource_type(Option<TaggableResourceType>): The type of the tagged ML object.

tags(Option<Vec<Tag>>): A list of tags associated with the ML object.

- On failure, responds with SdkError<DescribeTagsError>
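A minimal sketch of listing the tags on a model; the `TaggableResourceType::MlModel` variant name is an assumption, not confirmed by this page:

```rust
use aws_sdk_machinelearning::model::TaggableResourceType;

// Print every tag attached to one ML object.
async fn list_tags(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let resp = client
        .describe_tags()
        .resource_id("exampleModelId")
        .resource_type(TaggableResourceType::MlModel) // assumed variant name
        .send()
        .await?;
    for tag in resp.tags().unwrap_or_default() {
        println!("{:?} = {:?}", tag.key(), tag.value());
    }
    Ok(())
}
```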
Constructs a fluent builder for the GetBatchPrediction operation.

- The fluent builder is configurable:

batch_prediction_id(impl Into<String>) / set_batch_prediction_id(Option<String>): An ID assigned to the BatchPrediction at creation.

- On success, responds with GetBatchPredictionOutput with field(s):

batch_prediction_id(Option<String>): An ID assigned to the BatchPrediction at creation. This value should be identical to the value of the BatchPredictionID in the request.

ml_model_id(Option<String>): The ID of the MLModel that generated predictions for the BatchPrediction request.

batch_prediction_data_source_id(Option<String>): The ID of the DataSource that was used to create the BatchPrediction.

input_data_location_s3(Option<String>): The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

created_by_iam_user(Option<String>): The AWS user account that invoked the BatchPrediction. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

created_at(Option<DateTime>): The time when the BatchPrediction was created. The time is expressed in epoch time.

last_updated_at(Option<DateTime>): The time of the most recent edit to the BatchPrediction. The time is expressed in epoch time.

name(Option<String>): A user-supplied name or description of the BatchPrediction.

status(Option<EntityStatus>): The status of the BatchPrediction, which can be one of the following values:

- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate batch predictions.
- INPROGRESS - The batch predictions are in progress.
- FAILED - The request to perform a batch prediction did not run to completion. It is not usable.
- COMPLETED - The batch prediction process completed successfully.
- DELETED - The BatchPrediction is marked as deleted. It is not usable.

output_uri(Option<String>): The location of an Amazon S3 bucket or directory to receive the operation results.

log_uri(Option<String>): A link to the file that contains logs of the CreateBatchPrediction operation.

message(Option<String>): A description of the most recent details about processing the batch prediction request.

compute_time(Option<i64>): The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the BatchPrediction, normalized and scaled on computation resources. ComputeTime is only available if the BatchPrediction is in the COMPLETED state.

finished_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the BatchPrediction as COMPLETED or FAILED. FinishedAt is only available when the BatchPrediction is in the COMPLETED or FAILED state.

started_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the BatchPrediction as INPROGRESS. StartedAt isn't available if the BatchPrediction is in the PENDING state.

total_record_count(Option<i64>): The number of total records that Amazon Machine Learning saw while processing the BatchPrediction.

invalid_record_count(Option<i64>): The number of invalid records that Amazon Machine Learning saw while processing the BatchPrediction.

- On failure, responds with SdkError<GetBatchPredictionError>
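Because a batch prediction moves through PENDING and INPROGRESS before reaching a terminal state, a common pattern is to poll GetBatchPrediction. A sketch, assuming the `EntityStatus::Pending`/`Inprogress` variant names generated from the status codes above and a tokio runtime:

```rust
use std::time::Duration;
use aws_sdk_machinelearning::model::EntityStatus;

// Poll until the batch prediction leaves the PENDING/INPROGRESS states.
async fn wait_for_batch_prediction(
    client: &aws_sdk_machinelearning::Client,
    id: &str,
) -> Result<(), aws_sdk_machinelearning::Error> {
    loop {
        let out = client
            .get_batch_prediction()
            .batch_prediction_id(id)
            .send()
            .await?;
        match out.status() {
            Some(EntityStatus::Pending | EntityStatus::Inprogress) => {
                // Still running; back off before asking again.
                tokio::time::sleep(Duration::from_secs(30)).await;
            }
            status => {
                // COMPLETED, FAILED, or DELETED: report and stop polling.
                println!("status {:?}, results at {:?}", status, out.output_uri());
                return Ok(());
            }
        }
    }
}
```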
Constructs a fluent builder for the GetDataSource operation.

- The fluent builder is configurable:

data_source_id(impl Into<String>) / set_data_source_id(Option<String>): The ID assigned to the DataSource at creation.

verbose(bool) / set_verbose(bool): Specifies whether the GetDataSource operation should return DataSourceSchema. If true, DataSourceSchema is returned. If false, DataSourceSchema is not returned.

- On success, responds with GetDataSourceOutput with field(s):

data_source_id(Option<String>): The ID assigned to the DataSource at creation. This value should be identical to the value of the DataSourceId in the request.

data_location_s3(Option<String>): The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

data_rearrangement(Option<String>): A JSON string that represents the splitting and rearrangement requirement used when this DataSource was created.

created_by_iam_user(Option<String>): The AWS user account from which the DataSource was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

created_at(Option<DateTime>): The time that the DataSource was created. The time is expressed in epoch time.

last_updated_at(Option<DateTime>): The time of the most recent edit to the DataSource. The time is expressed in epoch time.

data_size_in_bytes(Option<i64>): The total size of observations in the data files.

number_of_files(Option<i64>): The number of data files referenced by the DataSource.

name(Option<String>): A user-supplied name or description of the DataSource.

status(Option<EntityStatus>): The current status of the DataSource. This element can have one of the following values:

- PENDING - Amazon ML submitted a request to create a DataSource.
- INPROGRESS - The creation process is underway.
- FAILED - The request to create a DataSource did not run to completion. It is not usable.
- COMPLETED - The creation process completed successfully.
- DELETED - The DataSource is marked as deleted. It is not usable.

log_uri(Option<String>): A link to the file containing logs of CreateDataSourceFrom* operations.

message(Option<String>): The user-supplied description of the most recent details about creating the DataSource.

redshift_metadata(Option<RedshiftMetadata>): Describes the DataSource details specific to Amazon Redshift.

rds_metadata(Option<RdsMetadata>): The DataSource details that are specific to Amazon RDS.

role_arn(Option<String>): The Amazon Resource Name (ARN) of an AWS IAM Role, such as the following: arn:aws:iam::account:role/rolename.

compute_statistics(bool): The parameter is true if statistics need to be generated from the observation data.

compute_time(Option<i64>): The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the DataSource, normalized and scaled on computation resources. ComputeTime is only available if the DataSource is in the COMPLETED state and ComputeStatistics is set to true.

finished_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the DataSource as COMPLETED or FAILED. FinishedAt is only available when the DataSource is in the COMPLETED or FAILED state.

started_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the DataSource as INPROGRESS. StartedAt isn't available if the DataSource is in the PENDING state.

data_source_schema(Option<String>): The schema used by all of the data files of this DataSource. Note: This parameter is provided as part of the verbose format.

- On failure, responds with SdkError<GetDataSourceError>
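A hedged sketch of retrieving a data source with `verbose(true)` so that the schema is included in the response; the data source ID is a hypothetical placeholder:

```rust
// Fetch a data source; verbose(true) asks for DataSourceSchema as well.
async fn show_data_source(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let out = client
        .get_data_source()
        .data_source_id("ds-exampleId") // hypothetical ID
        .verbose(true)
        .send()
        .await?;
    println!(
        "{:?}: {:?} bytes in {:?} files",
        out.name(),
        out.data_size_in_bytes(),
        out.number_of_files()
    );
    // Only present because verbose(true) was requested.
    if let Some(schema) = out.data_source_schema() {
        println!("schema: {schema}");
    }
    Ok(())
}
```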
Constructs a fluent builder for the GetEvaluation operation.

- The fluent builder is configurable:

evaluation_id(impl Into<String>) / set_evaluation_id(Option<String>): The ID of the Evaluation to retrieve. The evaluation of each MLModel is recorded and cataloged. The ID provides the means to access the information.

- On success, responds with GetEvaluationOutput with field(s):

evaluation_id(Option<String>): The evaluation ID, which is the same as the EvaluationId in the request.

ml_model_id(Option<String>): The ID of the MLModel that was the focus of the evaluation.

evaluation_data_source_id(Option<String>): The DataSource used for this evaluation.

input_data_location_s3(Option<String>): The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

created_by_iam_user(Option<String>): The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

created_at(Option<DateTime>): The time that the Evaluation was created. The time is expressed in epoch time.

last_updated_at(Option<DateTime>): The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

name(Option<String>): A user-supplied name or description of the Evaluation.

status(Option<EntityStatus>): The status of the evaluation. This element can have one of the following values:

- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.
- INPROGRESS - The evaluation is underway.
- FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.
- COMPLETED - The evaluation process completed successfully.
- DELETED - The Evaluation is marked as deleted. It is not usable.

performance_metrics(Option<PerformanceMetrics>): Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

- BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.
- RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.
- MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.

log_uri(Option<String>): A link to the file that contains logs of the CreateEvaluation operation.

message(Option<String>): A description of the most recent details about evaluating the MLModel.

compute_time(Option<i64>): The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.

finished_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.

started_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn't available if the Evaluation is in the PENDING state.

- On failure, responds with SdkError<GetEvaluationError>
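A sketch of reading the evaluation metrics. The Amazon ML API models PerformanceMetrics as a map of metric names (BinaryAUC, RegressionRMSE, MulticlassAvgFScore) to string values; the `properties()` accessor name and the evaluation ID below are assumptions:

```rust
// Fetch an evaluation and dump whichever performance metric was computed.
async fn show_evaluation(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let out = client
        .get_evaluation()
        .evaluation_id("ev-exampleId") // hypothetical ID
        .send()
        .await?;
    if let Some(metrics) = out.performance_metrics() {
        // Assumed accessor: the underlying API field is a Properties map.
        if let Some(props) = metrics.properties() {
            for (name, value) in props {
                println!("{name} = {value}");
            }
        }
    }
    Ok(())
}
```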
Constructs a fluent builder for the GetMLModel operation.

- The fluent builder is configurable:

ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): The ID assigned to the MLModel at creation.

verbose(bool) / set_verbose(bool): Specifies whether the GetMLModel operation should return Recipe. If true, Recipe is returned. If false, Recipe is not returned.

- On success, responds with GetMlModelOutput with field(s):

ml_model_id(Option<String>): The MLModel ID, which is the same as the MLModelId in the request.

training_data_source_id(Option<String>): The ID of the training DataSource.

created_by_iam_user(Option<String>): The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

created_at(Option<DateTime>): The time that the MLModel was created. The time is expressed in epoch time.

last_updated_at(Option<DateTime>): The time of the most recent edit to the MLModel. The time is expressed in epoch time.

name(Option<String>): A user-supplied name or description of the MLModel.

status(Option<EntityStatus>): The current status of the MLModel. This element can have one of the following values:

- PENDING - Amazon Machine Learning (Amazon ML) submitted a request to describe an MLModel.
- INPROGRESS - The request is processing.
- FAILED - The request did not run to completion. The ML model isn't usable.
- COMPLETED - The request completed successfully.
- DELETED - The MLModel is marked as deleted. It isn't usable.

size_in_bytes(Option<i64>): Long integer type that is a 64-bit signed number.

endpoint_info(Option<RealtimeEndpointInfo>): The current endpoint of the MLModel.

training_parameters(Option<HashMap<String, String>>): A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs. The following is the current set of training parameters:

- sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance. The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

- sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.

- sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling data improves a model's ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.

- sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can't be used when L2 is specified. Use this parameter sparingly.

- sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08. The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can't be used when L1 is specified. Use this parameter sparingly.

input_data_location_s3(Option<String>): The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

ml_model_type(Option<MlModelType>): Identifies the MLModel category. The following are the available types:

- REGRESSION - Produces a numeric result. For example, "What price should a house be listed at?"
- BINARY - Produces one of two possible results. For example, "Is this an e-commerce website?"
- MULTICLASS - Produces one of several possible results. For example, "Is this a HIGH, LOW or MEDIUM risk trade?"

score_threshold(Option<f32>): The scoring threshold used in a binary classification MLModel. It marks the boundary between a positive prediction and a negative prediction. Output values greater than or equal to the threshold receive a positive result from the MLModel, such as true. Output values less than the threshold receive a negative response from the MLModel, such as false.

score_threshold_last_updated_at(Option<DateTime>): The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

log_uri(Option<String>): A link to the file that contains logs of the CreateMLModel operation.

message(Option<String>): A description of the most recent details about accessing the MLModel.

compute_time(Option<i64>): The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the MLModel, normalized and scaled on computation resources. ComputeTime is only available if the MLModel is in the COMPLETED state.

finished_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the MLModel as COMPLETED or FAILED. FinishedAt is only available when the MLModel is in the COMPLETED or FAILED state.

started_at(Option<DateTime>): The epoch time when Amazon Machine Learning marked the MLModel as INPROGRESS. StartedAt isn't available if the MLModel is in the PENDING state.

recipe(Option<String>): The recipe to use when training the MLModel. The Recipe provides detailed information about the observation data to use during training, and manipulations to perform on the observation data during training. Note: This parameter is provided as part of the verbose format.

schema(Option<String>): The schema used by all of the data files referenced by the DataSource. Note: This parameter is provided as part of the verbose format.

- On failure, responds with SdkError<GetMLModelError>
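A sketch of inspecting a model, including the training parameters map described above; the model ID is a hypothetical placeholder:

```rust
// Inspect a model; verbose(true) also returns Recipe and Schema.
async fn show_model(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let out = client
        .get_ml_model()
        .ml_model_id("ml-exampleId") // hypothetical ID
        .verbose(true)
        .send()
        .await?;
    println!("{:?} ({:?})", out.name(), out.ml_model_type());
    // Training parameters arrive as a string-to-string map,
    // e.g. "sgd.maxPasses" -> "10".
    if let Some(params) = out.training_parameters() {
        for (key, value) in params {
            println!("{key} = {value}");
        }
    }
    Ok(())
}
```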
Constructs a fluent builder for the Predict operation.

- The fluent builder is configurable:

ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): A unique identifier of the MLModel.

record(HashMap<String, String>) / set_record(Option<HashMap<String, String>>): A map of variable name-value pairs that represent an observation.

predict_endpoint(impl Into<String>) / set_predict_endpoint(Option<String>): (undocumented)

- On success, responds with PredictOutput with field(s):

prediction(Option<Prediction>): The output from a Predict operation:

- Details - Contains the following attributes: DetailsAttributes.PREDICTIVE_MODEL_TYPE - REGRESSION | BINARY | MULTICLASS; DetailsAttributes.ALGORITHM - SGD
- PredictedLabel - Present for either a BINARY or MULTICLASS MLModel request.
- PredictedScores - Contains the raw classification score corresponding to each label.
- PredictedValue - Present for a REGRESSION MLModel request.

- On failure, responds with SdkError<PredictError>
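A sketch of sending one observation to a real-time endpoint. The feature name and value are hypothetical stand-ins for your model's actual input variables:

```rust
use std::collections::HashMap;

// Send one observation and print whichever prediction field applies.
async fn predict_one(
    client: &aws_sdk_machinelearning::Client,
    model_id: &str,
    endpoint: &str,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let mut record = HashMap::new();
    // Hypothetical feature name/value; use your model's schema variables.
    record.insert("feature1".to_string(), "42".to_string());
    let out = client
        .predict()
        .ml_model_id(model_id)
        .set_record(Some(record))
        .predict_endpoint(endpoint)
        .send()
        .await?;
    if let Some(p) = out.prediction() {
        // predicted_label for BINARY/MULTICLASS, predicted_value for REGRESSION.
        println!("{:?} / {:?}", p.predicted_label(), p.predicted_value());
    }
    Ok(())
}
```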
Constructs a fluent builder for the UpdateBatchPrediction operation.

- The fluent builder is configurable:

batch_prediction_id(impl Into<String>) / set_batch_prediction_id(Option<String>): The ID assigned to the BatchPrediction during creation.

batch_prediction_name(impl Into<String>) / set_batch_prediction_name(Option<String>): A new user-supplied name or description of the BatchPrediction.

- On success, responds with UpdateBatchPredictionOutput with field(s):

batch_prediction_id(Option<String>): The ID assigned to the BatchPrediction during creation. This value should be identical to the value of the BatchPredictionId in the request.

- On failure, responds with SdkError<UpdateBatchPredictionError>
Constructs a fluent builder for the UpdateDataSource operation.

- The fluent builder is configurable:

data_source_id(impl Into<String>) / set_data_source_id(Option<String>): The ID assigned to the DataSource during creation.

data_source_name(impl Into<String>) / set_data_source_name(Option<String>): A new user-supplied name or description of the DataSource that will replace the current description.

- On success, responds with UpdateDataSourceOutput with field(s):

data_source_id(Option<String>): The ID assigned to the DataSource during creation. This value should be identical to the value of the DataSourceID in the request.

- On failure, responds with SdkError<UpdateDataSourceError>
Constructs a fluent builder for the UpdateEvaluation operation.

- The fluent builder is configurable:

evaluation_id(impl Into<String>) / set_evaluation_id(Option<String>): The ID assigned to the Evaluation during creation.

evaluation_name(impl Into<String>) / set_evaluation_name(Option<String>): A new user-supplied name or description of the Evaluation that will replace the current content.

- On success, responds with UpdateEvaluationOutput with field(s):

evaluation_id(Option<String>): The ID assigned to the Evaluation during creation. This value should be identical to the value of the Evaluation in the request.

- On failure, responds with SdkError<UpdateEvaluationError>
Constructs a fluent builder for the UpdateMLModel operation.

- The fluent builder is configurable:

ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>): The ID assigned to the MLModel during creation.

ml_model_name(impl Into<String>) / set_ml_model_name(Option<String>): A user-supplied name or description of the MLModel.

score_threshold(f32) / set_score_threshold(Option<f32>): The ScoreThreshold used in a binary classification MLModel that marks the boundary between a positive prediction and a negative prediction. Output values greater than or equal to the ScoreThreshold receive a positive result from the MLModel, such as true. Output values less than the ScoreThreshold receive a negative response from the MLModel, such as false.

- On success, responds with UpdateMlModelOutput with field(s):

ml_model_id(Option<String>): The ID assigned to the MLModel during creation. This value should be identical to the value of the MLModelID in the request.

- On failure, responds with SdkError<UpdateMLModelError>
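A sketch of renaming a binary model and moving its classification boundary; the model ID, new name, and threshold value are hypothetical:

```rust
// Rename a model and move its score threshold to 0.75: outputs >= 0.75
// become positive predictions, outputs below it become negative ones.
async fn retune_model(
    client: &aws_sdk_machinelearning::Client,
) -> Result<(), aws_sdk_machinelearning::Error> {
    let out = client
        .update_ml_model()
        .ml_model_id("ml-exampleId") // hypothetical ID
        .ml_model_name("fraud-detector-v2")
        .score_threshold(0.75)
        .send()
        .await?;
    println!("updated {:?}", out.ml_model_id());
    Ok(())
}
```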
Creates a client with the given service config and connector override.
Trait Implementations
Auto Trait Implementations
impl<C = DynConnector, M = DefaultMiddleware, R = Standard> !RefUnwindSafe for Client<C, M, R>
impl<C = DynConnector, M = DefaultMiddleware, R = Standard> !UnwindSafe for Client<C, M, R>
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.