pub struct Client { /* private fields */ }

Client for Amazon Machine Learning

Client for invoking operations on Amazon Machine Learning. Each operation on Amazon Machine Learning is a method on this struct. .send() must be invoked on the generated operations to dispatch the request to the service.

Examples

Constructing a client and invoking an operation

    // Create a shared configuration. This can be used & shared between multiple service clients.
    let shared_config = aws_config::load_from_env().await;
    let client = aws_sdk_machinelearning::Client::new(&shared_config);
    // Invoke an operation
    /* let rsp = client
        .<operation_name>()
        .<param>("some value")
        .send().await; */

Constructing a client with custom configuration

    use aws_config::RetryConfig;
    let shared_config = aws_config::load_from_env().await;
    let config = aws_sdk_machinelearning::config::Builder::from(&shared_config)
        .retry_config(RetryConfig::disabled())
        .build();
    let client = aws_sdk_machinelearning::Client::from_conf(config);

Implementations

Creates a client with the given service configuration.

Returns the client’s configuration.

Constructs a fluent builder for the AddTags operation.

Constructs a fluent builder for the CreateBatchPrediction operation.

Constructs a fluent builder for the CreateDataSourceFromRDS operation.

  • The fluent builder is configurable:
    • data_source_id(impl Into<String>) / set_data_source_id(Option<String>):

      A user-supplied ID that uniquely identifies the DataSource. Typically, an Amazon Resource Name (ARN) becomes the ID for a DataSource.

    • data_source_name(impl Into<String>) / set_data_source_name(Option<String>):

      A user-supplied name or description of the DataSource.

    • rds_data(RdsDataSpec) / set_rds_data(Option<RdsDataSpec>):

      The data specification of an Amazon RDS DataSource:

      • DatabaseInformation -

        • DatabaseName - The name of the Amazon RDS database.

        • InstanceIdentifier - A unique identifier for the Amazon RDS database instance.

      • DatabaseCredentials - AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.

      • ResourceRole - A role (DataPipelineDefaultResourceRole) assumed by an EC2 instance to carry out the copy task from Amazon RDS to Amazon Simple Storage Service (Amazon S3). For more information, see Role templates for data pipelines.

      • ServiceRole - A role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

      • SecurityInfo - The security information to use to access an RDS DB instance. You need to set up appropriate ingress rules for the security entity IDs provided to allow access to the Amazon RDS instance. Specify a [SubnetId, SecurityGroupIds] pair for a VPC-based RDS DB instance.

      • SelectSqlQuery - A query that is used to retrieve the observation data for the Datasource.

      • S3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.

      • DataSchemaUri - The Amazon S3 location of the DataSchema.

      • DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.

      • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the Datasource.

        Sample - "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

    • role_arn(impl Into<String>) / set_role_arn(Option<String>):

      The role that Amazon ML assumes on behalf of the user to create and activate a data pipeline in the user’s account and copy data using the SelectSqlQuery query from Amazon RDS to Amazon S3.

    • compute_statistics(bool) / set_compute_statistics(Option<bool>):

      The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.

  • On success, responds with CreateDataSourceFromRdsOutput with field(s):
    • data_source_id(Option<String>):

      A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.

  • On failure, responds with SdkError<CreateDataSourceFromRDSError>
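
As a minimal sketch of driving the builder above (every literal here — IDs, the role ARN, bucket names, and the SQL query — is a placeholder, and the RdsDataSpec shown is incomplete: DatabaseInformation, DatabaseCredentials, the resource/service roles, and security info described above are also required):

```rust
use aws_sdk_machinelearning::{model::RdsDataSpec, Client, Error};

// Sketch only: all identifiers below are placeholders, not values from this page.
async fn create_rds_datasource(client: &Client) -> Result<(), Error> {
    let rds_spec = RdsDataSpec::builder()
        // DatabaseInformation, DatabaseCredentials, ResourceRole, ServiceRole,
        // and SecurityInfo are also required; see the RdsDataSpec builder docs.
        .select_sql_query("SELECT * FROM observations")
        .s3_staging_location("s3://example-bucket/staging/")
        .build();

    let resp = client
        .create_data_source_from_rds()
        .data_source_id("example-rds-ds-001")
        .data_source_name("Example RDS DataSource")
        .rds_data(rds_spec)
        .role_arn("arn:aws:iam::111122223333:role/ExampleMLRole")
        .compute_statistics(true) // required if the DataSource will train an MLModel
        .send()
        .await?;

    println!("created: {:?}", resp.data_source_id());
    Ok(())
}
```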

Constructs a fluent builder for the CreateDataSourceFromRedshift operation.

  • The fluent builder is configurable:
    • data_source_id(impl Into<String>) / set_data_source_id(Option<String>):

      A user-supplied ID that uniquely identifies the DataSource.

    • data_source_name(impl Into<String>) / set_data_source_name(Option<String>):

      A user-supplied name or description of the DataSource.

    • data_spec(RedshiftDataSpec) / set_data_spec(Option<RedshiftDataSpec>):

      The data specification of an Amazon Redshift DataSource:

      • DatabaseInformation -

        • DatabaseName - The name of the Amazon Redshift database.

        • ClusterIdentifier - The unique ID for the Amazon Redshift cluster.

      • DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

      • SelectSqlQuery - The query that is used to retrieve the observation data for the Datasource.

      • S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.

      • DataSchemaUri - The Amazon S3 location of the DataSchema.

      • DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.

      • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource.

        Sample - "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

    • role_arn(impl Into<String>) / set_role_arn(Option<String>):

      A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:

      • A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster

      • An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation

    • compute_statistics(bool) / set_compute_statistics(Option<bool>):

      The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.

  • On success, responds with CreateDataSourceFromRedshiftOutput with field(s):
    • data_source_id(Option<String>):

      A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.

  • On failure, responds with SdkError<CreateDataSourceFromRedshiftError>

Constructs a fluent builder for the CreateDataSourceFromS3 operation.

Constructs a fluent builder for the CreateEvaluation operation.

Constructs a fluent builder for the CreateMLModel operation.

  • The fluent builder is configurable:
    • ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>):

      A user-supplied ID that uniquely identifies the MLModel.

    • ml_model_name(impl Into<String>) / set_ml_model_name(Option<String>):

      A user-supplied name or description of the MLModel.

    • ml_model_type(MlModelType) / set_ml_model_type(Option<MlModelType>):

      The category of supervised learning that this MLModel will address. Choose from the following types:

      • Choose REGRESSION if the MLModel will be used to predict a numeric value.

      • Choose BINARY if the MLModel result has two possible values.

      • Choose MULTICLASS if the MLModel result has a limited number of values.

      For more information, see the Amazon Machine Learning Developer Guide.

    • parameters(HashMap<String, String>) / set_parameters(Option<HashMap<String, String>>):

      A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs.

      The following is the current set of training parameters:

      • sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

        The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

      • sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.

      • sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling the data improves a model’s ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.

      • sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can’t be used when L2 is specified. Use this parameter sparingly.

      • sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can’t be used when L1 is specified. Use this parameter sparingly.

    • training_data_source_id(impl Into<String>) / set_training_data_source_id(Option<String>):

      The DataSource that points to the training data.

    • recipe(impl Into<String>) / set_recipe(Option<String>):

      The data recipe for creating the MLModel. You must specify either the recipe or its URI. If you don’t specify a recipe or its URI, Amazon ML creates a default.

    • recipe_uri(impl Into<String>) / set_recipe_uri(Option<String>):

      The Amazon Simple Storage Service (Amazon S3) location and file name that contains the MLModel recipe. You must specify either the recipe or its URI. If you don’t specify a recipe or its URI, Amazon ML creates a default.

  • On success, responds with CreateMlModelOutput with field(s):
    • ml_model_id(Option<String>):

      A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.

  • On failure, responds with SdkError<CreateMLModelError>
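
The training parameters listed above are passed as string key-value pairs on the fluent builder. A hedged sketch (the IDs are placeholders; the sgd.* keys and value ranges come from the parameter list above):

```rust
use aws_sdk_machinelearning::{model::MlModelType, Client, Error};

// Sketch only: the model and DataSource IDs are placeholders.
async fn create_regression_model(client: &Client) -> Result<(), Error> {
    let resp = client
        .create_ml_model()
        .ml_model_id("example-ml-model-001")
        .ml_model_name("Example regression model")
        .ml_model_type(MlModelType::Regression)
        // Training parameters are appended one key-value pair at a time:
        .parameters("sgd.maxPasses", "25")
        .parameters("sgd.shuffleType", "auto")
        .parameters("sgd.l2RegularizationAmount", "1.0E-08")
        .training_data_source_id("example-training-ds-001")
        // No recipe or recipe_uri given, so Amazon ML creates a default recipe.
        .send()
        .await?;

    println!("created model: {:?}", resp.ml_model_id());
    Ok(())
}
```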

Constructs a fluent builder for the CreateRealtimeEndpoint operation.

Constructs a fluent builder for the DeleteBatchPrediction operation.

Constructs a fluent builder for the DeleteDataSource operation.

Constructs a fluent builder for the DeleteEvaluation operation.

Constructs a fluent builder for the DeleteMLModel operation.

Constructs a fluent builder for the DeleteRealtimeEndpoint operation.

Constructs a fluent builder for the DeleteTags operation.

Constructs a fluent builder for the DescribeBatchPredictions operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the DescribeDataSources operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the DescribeEvaluations operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the DescribeMLModels operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • filter_variable(MlModelFilterVariable) / set_filter_variable(Option<MlModelFilterVariable>):

      Use one of the following variables to filter a list of MLModel:

      • CreatedAt - Sets the search criteria to MLModel creation date.

      • Status - Sets the search criteria to MLModel status.

      • Name - Sets the search criteria to the contents of MLModel Name.

      • IAMUser - Sets the search criteria to the user account that invoked the MLModel creation.

      • TrainingDataSourceId - Sets the search criteria to the DataSource used to train one or more MLModel.

      • RealtimeEndpointStatus - Sets the search criteria to the MLModel real-time endpoint status.

      • MLModelType - Sets the search criteria to MLModel type: binary, regression, or multi-class.

      • Algorithm - Sets the search criteria to the algorithm that the MLModel uses.

      • TrainingDataURI - Sets the search criteria to the data file(s) used in training an MLModel. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

    • eq(impl Into<String>) / set_eq(Option<String>):

      The equal to operator. The MLModel results will have FilterVariable values that exactly match the value specified with EQ.

    • gt(impl Into<String>) / set_gt(Option<String>):

      The greater than operator. The MLModel results will have FilterVariable values that are greater than the value specified with GT.

    • lt(impl Into<String>) / set_lt(Option<String>):

      The less than operator. The MLModel results will have FilterVariable values that are less than the value specified with LT.

    • ge(impl Into<String>) / set_ge(Option<String>):

      The greater than or equal to operator. The MLModel results will have FilterVariable values that are greater than or equal to the value specified with GE.

    • le(impl Into<String>) / set_le(Option<String>):

      The less than or equal to operator. The MLModel results will have FilterVariable values that are less than or equal to the value specified with LE.

    • ne(impl Into<String>) / set_ne(Option<String>):

      The not equal to operator. The MLModel results will have FilterVariable values not equal to the value specified with NE.

    • prefix(impl Into<String>) / set_prefix(Option<String>):

      A string that is found at the beginning of a variable, such as Name or Id.

      For example, an MLModel could have the Name 2014-09-09-HolidayGiftMailer. To search for this MLModel, select Name for the FilterVariable and any of the following strings for the Prefix:

      • 2014-09

      • 2014-09-09

      • 2014-09-09-Holiday

    • sort_order(SortOrder) / set_sort_order(Option<SortOrder>):

      A two-value parameter that determines the sequence of the resulting list of MLModel.

      • asc - Arranges the list in ascending order (A-Z, 0-9).

      • dsc - Arranges the list in descending order (Z-A, 9-0).

      Results are sorted by FilterVariable.

    • next_token(impl Into<String>) / set_next_token(Option<String>):

      The ID of the page in the paginated results.

    • limit(i32) / set_limit(Option<i32>):

      The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.

  • On success, responds with DescribeMlModelsOutput with field(s):
  • On failure, responds with SdkError<DescribeMLModelsError>
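
Rather than managing next_token by hand, the paginator can drive the filter variables and operators described above. A sketch (the filter values are illustrative; in the SDK version this page documents, the paginator's send() returns a Stream consumed via tokio_stream):

```rust
use aws_sdk_machinelearning::{
    model::{MlModelFilterVariable, SortOrder},
    Client, Error,
};
use tokio_stream::StreamExt;

// Sketch only: lists MLModels whose Status equals COMPLETED, descending,
// letting the paginator handle next_token internally.
async fn list_completed_models(client: &Client) -> Result<(), Error> {
    let mut pages = client
        .describe_ml_models()
        .filter_variable(MlModelFilterVariable::Status)
        .eq("COMPLETED")
        .sort_order(SortOrder::Dsc)
        .into_paginator()
        .send();

    while let Some(page) = pages.next().await {
        for model in page?.results().unwrap_or_default() {
            println!("{:?}", model.ml_model_id());
        }
    }
    Ok(())
}
```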

Constructs a fluent builder for the DescribeTags operation.

Constructs a fluent builder for the GetBatchPrediction operation.

Constructs a fluent builder for the GetDataSource operation.

Constructs a fluent builder for the GetEvaluation operation.

  • The fluent builder is configurable:
  • On success, responds with GetEvaluationOutput with field(s):
    • evaluation_id(Option<String>):

      The evaluation ID, which is the same as the EvaluationId in the request.

    • ml_model_id(Option<String>):

      The ID of the MLModel that was the focus of the evaluation.

    • evaluation_data_source_id(Option<String>):

      The DataSource used for this evaluation.

    • input_data_location_s3(Option<String>):

      The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

    • created_by_iam_user(Option<String>):

      The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

    • created_at(Option<DateTime>):

      The time that the Evaluation was created. The time is expressed in epoch time.

    • last_updated_at(Option<DateTime>):

      The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

    • name(Option<String>):

      A user-supplied name or description of the Evaluation.

    • status(Option<EntityStatus>):

      The status of the evaluation. This element can have one of the following values:

      • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.

      • INPROGRESS - The evaluation is underway.

      • FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.

      • COMPLETED - The evaluation process completed successfully.

      • DELETED - The Evaluation is marked as deleted. It is not usable.

    • performance_metrics(Option<PerformanceMetrics>):

      Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

      • BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.

      • RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.

      • MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

      For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.

    • log_uri(Option<String>):

      A link to the file that contains logs of the CreateEvaluation operation.

    • message(Option<String>):

      A description of the most recent details about evaluating the MLModel.

    • compute_time(Option<i64>):

      The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.

    • finished_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.

    • started_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn’t available if the Evaluation is in the PENDING state.

  • On failure, responds with SdkError<GetEvaluationError>

Constructs a fluent builder for the GetMLModel operation.

  • The fluent builder is configurable:
  • On success, responds with GetMlModelOutput with field(s):
    • ml_model_id(Option<String>):

      The MLModel ID, which is the same as the MLModelId in the request.

    • training_data_source_id(Option<String>):

      The ID of the training DataSource.

    • created_by_iam_user(Option<String>):

      The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

    • created_at(Option<DateTime>):

      The time that the MLModel was created. The time is expressed in epoch time.

    • last_updated_at(Option<DateTime>):

      The time of the most recent edit to the MLModel. The time is expressed in epoch time.

    • name(Option<String>):

      A user-supplied name or description of the MLModel.

    • status(Option<EntityStatus>):

      The current status of the MLModel. This element can have one of the following values:

      • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to describe an MLModel.

      • INPROGRESS - The request is processing.

      • FAILED - The request did not run to completion. The ML model isn’t usable.

      • COMPLETED - The request completed successfully.

      • DELETED - The MLModel is marked as deleted. It isn’t usable.

    • size_in_bytes(Option<i64>):

      Long integer type that is a 64-bit signed number.

    • endpoint_info(Option<RealtimeEndpointInfo>):

      The current endpoint of the MLModel

    • training_parameters(Option<HashMap<String, String>>):

      A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs.

      The following is the current set of training parameters:

      • sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

        The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

      • sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.

      • sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling data improves a model’s ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.

      • sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can’t be used when L2 is specified. Use this parameter sparingly.

      • sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can’t be used when L1 is specified. Use this parameter sparingly.

    • input_data_location_s3(Option<String>):

      The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

    • ml_model_type(Option<MlModelType>):

      Identifies the MLModel category. The following are the available types:

      • REGRESSION – Produces a numeric result. For example, “What price should a house be listed at?”

      • BINARY – Produces one of two possible results. For example, “Is this an e-commerce website?”

      • MULTICLASS – Produces one of several possible results. For example, “Is this a HIGH, LOW or MEDIUM risk trade?”

    • score_threshold(Option<f32>):

      The scoring threshold is used in binary classification MLModels. It marks the boundary between a positive prediction and a negative prediction.

      Output values greater than or equal to the threshold receive a positive result from the MLModel, such as true. Output values less than the threshold receive a negative response from the MLModel, such as false.

    • score_threshold_last_updated_at(Option<DateTime>):

      The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

    • log_uri(Option<String>):

      A link to the file that contains logs of the CreateMLModel operation.

    • message(Option<String>):

      A description of the most recent details about accessing the MLModel.

    • compute_time(Option<i64>):

      The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the MLModel, normalized and scaled on computation resources. ComputeTime is only available if the MLModel is in the COMPLETED state.

    • finished_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the MLModel as COMPLETED or FAILED. FinishedAt is only available when the MLModel is in the COMPLETED or FAILED state.

    • started_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the MLModel as INPROGRESS. StartedAt isn’t available if the MLModel is in the PENDING state.

    • recipe(Option<String>):

      The recipe to use when training the MLModel. The Recipe provides detailed information about the observation data to use during training, and manipulations to perform on the observation data during training.

      Note: This parameter is provided as part of the verbose format.

    • schema(Option<String>):

      The schema used by all of the data files referenced by the DataSource.

      Note: This parameter is provided as part of the verbose format.

  • On failure, responds with SdkError<GetMLModelError>
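
A sketch of retrieving the fields above (the model ID is a placeholder; passing verbose(true) asks the service to include the recipe and schema fields, which the output notes are part of the verbose format):

```rust
use aws_sdk_machinelearning::{Client, Error};

// Sketch only: inspects an MLModel's status, threshold, and training parameters.
async fn inspect_model(client: &Client, model_id: &str) -> Result<(), Error> {
    let resp = client
        .get_ml_model()
        .ml_model_id(model_id)
        .verbose(true) // include recipe and schema in the response
        .send()
        .await?;

    println!("status: {:?}", resp.status());
    println!("score threshold: {:?}", resp.score_threshold());
    if let Some(params) = resp.training_parameters() {
        for (k, v) in params {
            println!("{k} = {v}");
        }
    }
    Ok(())
}
```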

Constructs a fluent builder for the Predict operation.
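
A hedged sketch of a real-time prediction (the model ID, endpoint URL, and record keys are all placeholders; the endpoint is the one reported by CreateRealtimeEndpoint, and the record is the observation to score, passed as string key-value pairs):

```rust
use aws_sdk_machinelearning::{Client, Error};

// Sketch only: every literal below is a placeholder.
async fn predict_one(client: &Client) -> Result<(), Error> {
    let resp = client
        .predict()
        .ml_model_id("example-ml-model-001")
        .predict_endpoint("https://realtime.machinelearning.us-east-1.amazonaws.com")
        // Record entries are appended one key-value pair at a time:
        .record("feature1", "42")
        .record("feature2", "blue")
        .send()
        .await?;

    if let Some(prediction) = resp.prediction() {
        println!("predicted value: {:?}", prediction.predicted_value());
    }
    Ok(())
}
```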

Constructs a fluent builder for the UpdateBatchPrediction operation.

Constructs a fluent builder for the UpdateDataSource operation.

Constructs a fluent builder for the UpdateEvaluation operation.

Constructs a fluent builder for the UpdateMLModel operation.

Creates a client with the given service config and connector override.

Creates a new client from a shared config.

Creates a new client from the service Config.

Trait Implementations

Returns a copy of the value. Read more

Performs copy-assignment from source. Read more

Formats the value using the given formatter. Read more

Performs the conversion.

Auto Trait Implementations

Blanket Implementations

Gets the TypeId of self. Read more

Immutably borrows from an owned value. Read more

Mutably borrows from an owned value. Read more

Returns the argument unchanged.

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more

Instruments this type with the current Span, returning an Instrumented wrapper. Read more

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

The resulting type after obtaining ownership.

Creates owned data from borrowed data, usually by cloning. Read more

🔬 This is a nightly-only experimental API. (toowned_clone_into)

Uses borrowed data to replace owned data, usually by cloning. Read more

The type returned in the event of a conversion error.

Performs the conversion.

The type returned in the event of a conversion error.

Performs the conversion.

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more