Struct aws_sdk_machinelearning::Client

pub struct Client { /* private fields */ }

Client for Amazon Machine Learning

Client for invoking operations on Amazon Machine Learning. Each operation on Amazon Machine Learning is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

§Constructing a Client

A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.

In the simplest case, creating a client looks as follows:

let config = aws_config::load_from_env().await;
let client = aws_sdk_machinelearning::Client::new(&config);

Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings may be desired for a specific client. The Config struct implements From<&SdkConfig>, so these specific settings can be applied as follows:

let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_machinelearning::config::Builder::from(&sdk_config)
    .some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.

Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
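
Because construction is expensive, a common pattern is to build the client once and clone the handle wherever it is needed; Client implements Clone, and clones share the underlying resources. A minimal sketch (the spawned task is purely illustrative):

let config = aws_config::load_from_env().await;
let client = aws_sdk_machinelearning::Client::new(&config);

// Cloning the client is cheap; the clone shares the same underlying HTTP client and connection pool.
let task_client = client.clone();
tokio::spawn(async move {
    let _data_sources = task_client.describe_data_sources().send().await;
});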

§Using the Client

A client has a function for every operation that can be performed by the service. For example, the AddTags operation has a Client::add_tags function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that resolves to a result, as illustrated below:

let result = client.add_tags()
    .resource_id("example")
    .send()
    .await;

The underlying HTTP requests made by an operation can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
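
As a hedged sketch, assuming the customize() entry point and mutate_request() hook described in the customize module, a request could be adjusted before dispatch like this (the header is purely illustrative):

let result = client.add_tags()
    .resource_id("example")
    .customize()
    .mutate_request(|req| {
        // Hypothetical header, added only to illustrate request mutation.
        req.headers_mut().insert("x-example-header", "example-value");
    })
    .send()
    .await;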

§Waiters

This client provides wait_until methods behind the Waiters trait. To use them, simply import the trait, and then call one of the wait_until methods. This will return a waiter fluent builder that takes various parameters, which are documented on the builder type. Once parameters have been provided, the wait method can be called to initiate waiting.

For example, if there were a wait_until_thing method, it could look like:

let result = client.wait_until_thing()
    .thing_id("someId")
    .wait(Duration::from_secs(120))
    .await;

Implementations§

source§

impl Client

source

pub fn add_tags(&self) -> AddTagsFluentBuilder

Constructs a fluent builder for the AddTags operation.

source§

impl Client

source

pub fn create_batch_prediction(&self) -> CreateBatchPredictionFluentBuilder

Constructs a fluent builder for the CreateBatchPrediction operation.

source§

impl Client

source

pub fn create_data_source_from_rds(&self) -> CreateDataSourceFromRDSFluentBuilder

Constructs a fluent builder for the CreateDataSourceFromRDS operation.

  • The fluent builder is configurable:
    • data_source_id(impl Into<String>) / set_data_source_id(Option<String>):
      required: true

      A user-supplied ID that uniquely identifies the DataSource. Typically, an Amazon Resource Name (ARN) becomes the ID for a DataSource.


    • data_source_name(impl Into<String>) / set_data_source_name(Option<String>):
      required: false

      A user-supplied name or description of the DataSource.


    • rds_data(RdsDataSpec) / set_rds_data(Option<RdsDataSpec>):
      required: true

      The data specification of an Amazon RDS DataSource:

      • DatabaseInformation -

        • DatabaseName - The name of the Amazon RDS database.

        • InstanceIdentifier - A unique identifier for the Amazon RDS database instance.

      • DatabaseCredentials - AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.

      • ResourceRole - A role (DataPipelineDefaultResourceRole) assumed by an EC2 instance to carry out the copy task from Amazon RDS to Amazon Simple Storage Service (Amazon S3). For more information, see Role templates for data pipelines.

      • ServiceRole - A role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see Role templates for data pipelines.

      • SecurityInfo - The security information to use to access an RDS DB instance. You need to set up appropriate ingress rules for the security entity IDs provided to allow access to the Amazon RDS instance. Specify a [SubnetId, SecurityGroupIds] pair for a VPC-based RDS DB instance.

      • SelectSqlQuery - A query that is used to retrieve the observation data for the Datasource.

      • S3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using SelectSqlQuery is stored in this location.

      • DataSchemaUri - The Amazon S3 location of the DataSchema.

      • DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.

      • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the Datasource.

        Sample - “{"splitting":{"percentBegin":10,"percentEnd":60}}”


    • role_arn(impl Into<String>) / set_role_arn(Option<String>):
      required: true

      The role that Amazon ML assumes on behalf of the user to create and activate a data pipeline in the user’s account and copy data using the SelectSqlQuery query from Amazon RDS to Amazon S3.


    • compute_statistics(bool) / set_compute_statistics(Option<bool>):
      required: false

      The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.


  • On success, responds with CreateDataSourceFromRdsOutput with field(s):
    • data_source_id(Option<String>):

      A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.

  • On failure, responds with SdkError<CreateDataSourceFromRDSError>
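
Putting the pieces above together, a hedged sketch of this call could look as follows. The RdsDataSpec, RdsDatabase, and RdsDatabaseCredentials builder setters are assumptions inferred from the field list above, and the IDs, roles, and locations are placeholders:

use aws_sdk_machinelearning::types::{RdsDataSpec, RdsDatabase, RdsDatabaseCredentials};

// Assumed builder setters mirroring the RdsDataSpec fields documented above.
let rds_data = RdsDataSpec::builder()
    .database_information(
        RdsDatabase::builder()
            .instance_identifier("my-rds-instance")
            .database_name("observations")
            .build()
            .expect("valid database information"),
    )
    .database_credentials(
        RdsDatabaseCredentials::builder()
            .username("ml_user")
            .password("example-password")
            .build()
            .expect("valid database credentials"),
    )
    .select_sql_query("SELECT * FROM observations")
    .s3_staging_location("s3://my-staging-bucket/")
    .resource_role("DataPipelineDefaultResourceRole")
    .service_role("DataPipelineDefaultRole")
    .subnet_id("subnet-0123456789abcdef0")
    .security_group_ids("sg-0123456789abcdef0")
    .build()
    .expect("valid RDS data spec");

let result = client.create_data_source_from_rds()
    .data_source_id("my-rds-datasource")
    .data_source_name("RDS observations")
    .rds_data(rds_data)
    .role_arn("arn:aws:iam::123456789012:role/AmazonMLRole")
    .compute_statistics(true)
    .send()
    .await;

match result {
    Ok(output) => println!("created data source: {:?}", output.data_source_id()),
    Err(err) => eprintln!("CreateDataSourceFromRDS failed: {err}"),
}
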
source§

impl Client

source

pub fn create_data_source_from_redshift(&self) -> CreateDataSourceFromRedshiftFluentBuilder

Constructs a fluent builder for the CreateDataSourceFromRedshift operation.

  • The fluent builder is configurable:
    • data_source_id(impl Into<String>) / set_data_source_id(Option<String>):
      required: true

      A user-supplied ID that uniquely identifies the DataSource.


    • data_source_name(impl Into<String>) / set_data_source_name(Option<String>):
      required: false

      A user-supplied name or description of the DataSource.


    • data_spec(RedshiftDataSpec) / set_data_spec(Option<RedshiftDataSpec>):
      required: true

      The data specification of an Amazon Redshift DataSource:

      • DatabaseInformation -

        • DatabaseName - The name of the Amazon Redshift database.

        • ClusterIdentifier - The unique ID for the Amazon Redshift cluster.

      • DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

      • SelectSqlQuery - The query that is used to retrieve the observation data for the Datasource.

      • S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.

      • DataSchemaUri - The Amazon S3 location of the DataSchema.

      • DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.

      • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource.

        Sample - “{"splitting":{"percentBegin":10,"percentEnd":60}}”


    • role_arn(impl Into<String>) / set_role_arn(Option<String>):
      required: true

      A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:

      • A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster

      • An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation


    • compute_statistics(bool) / set_compute_statistics(Option<bool>):
      required: false

      The compute statistics for a DataSource. The statistics are generated from the observation data referenced by a DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource needs to be used for MLModel training.


  • On success, responds with CreateDataSourceFromRedshiftOutput with field(s):
    • data_source_id(Option<String>):

      A user-supplied ID that uniquely identifies the datasource. This value should be identical to the value of the DataSourceID in the request.

  • On failure, responds with SdkError<CreateDataSourceFromRedshiftError>
source§

impl Client

source

pub fn create_data_source_from_s3(&self) -> CreateDataSourceFromS3FluentBuilder

Constructs a fluent builder for the CreateDataSourceFromS3 operation.

source§

impl Client

source

pub fn create_evaluation(&self) -> CreateEvaluationFluentBuilder

Constructs a fluent builder for the CreateEvaluation operation.

source§

impl Client

source

pub fn create_ml_model(&self) -> CreateMLModelFluentBuilder

Constructs a fluent builder for the CreateMLModel operation.

  • The fluent builder is configurable:
    • ml_model_id(impl Into<String>) / set_ml_model_id(Option<String>):
      required: true

      A user-supplied ID that uniquely identifies the MLModel.


    • ml_model_name(impl Into<String>) / set_ml_model_name(Option<String>):
      required: false

      A user-supplied name or description of the MLModel.


    • ml_model_type(MlModelType) / set_ml_model_type(Option<MlModelType>):
      required: true

      The category of supervised learning that this MLModel will address. Choose from the following types:

      • Choose REGRESSION if the MLModel will be used to predict a numeric value.

      • Choose BINARY if the MLModel result has two possible values.

      • Choose MULTICLASS if the MLModel result has a limited number of values.

      For more information, see the Amazon Machine Learning Developer Guide.


    • parameters(impl Into<String>, impl Into<String>) / set_parameters(Option<HashMap::<String, String>>):
      required: false

      A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs.

      The following is the current set of training parameters:

      • sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

        The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

      • sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.

      • sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling the data improves a model’s ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.

      • sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can’t be used when L2 is specified. Use this parameter sparingly.

      • sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can’t be used when L1 is specified. Use this parameter sparingly.


    • training_data_source_id(impl Into<String>) / set_training_data_source_id(Option<String>):
      required: true

      The DataSource that points to the training data.


    • recipe(impl Into<String>) / set_recipe(Option<String>):
      required: false

      The data recipe for creating the MLModel. You must specify either the recipe or its URI. If you don’t specify a recipe or its URI, Amazon ML creates a default.


    • recipe_uri(impl Into<String>) / set_recipe_uri(Option<String>):
      required: false

      The Amazon Simple Storage Service (Amazon S3) location and file name that contains the MLModel recipe. You must specify either the recipe or its URI. If you don’t specify a recipe or its URI, Amazon ML creates a default.


  • On success, responds with CreateMlModelOutput with field(s):
    • ml_model_id(Option<String>):

      A user-supplied ID that uniquely identifies the MLModel. This value should be identical to the value of the MLModelId in the request.

  • On failure, responds with SdkError<CreateMLModelError>
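
As an illustrative sketch using the setters above (the IDs and recipe URI are placeholders; MlModelType::Regression corresponds to the documented REGRESSION category):

use aws_sdk_machinelearning::types::MlModelType;

let result = client.create_ml_model()
    .ml_model_id("my-ml-model")
    .ml_model_name("House price model")
    .ml_model_type(MlModelType::Regression)
    // Each parameters() call appends one key-value pair to the training-parameter map.
    .parameters("sgd.maxPasses", "20")
    .parameters("sgd.shuffleType", "auto")
    .training_data_source_id("my-training-datasource")
    .recipe_uri("s3://my-bucket/recipe.json")
    .send()
    .await;
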
source§

impl Client

source

pub fn create_realtime_endpoint(&self) -> CreateRealtimeEndpointFluentBuilder

Constructs a fluent builder for the CreateRealtimeEndpoint operation.

source§

impl Client

source

pub fn delete_batch_prediction(&self) -> DeleteBatchPredictionFluentBuilder

Constructs a fluent builder for the DeleteBatchPrediction operation.

source§

impl Client

source

pub fn delete_data_source(&self) -> DeleteDataSourceFluentBuilder

Constructs a fluent builder for the DeleteDataSource operation.

source§

impl Client

source

pub fn delete_evaluation(&self) -> DeleteEvaluationFluentBuilder

Constructs a fluent builder for the DeleteEvaluation operation.

source§

impl Client

source

pub fn delete_ml_model(&self) -> DeleteMLModelFluentBuilder

Constructs a fluent builder for the DeleteMLModel operation.

source§

impl Client

source

pub fn delete_realtime_endpoint(&self) -> DeleteRealtimeEndpointFluentBuilder

Constructs a fluent builder for the DeleteRealtimeEndpoint operation.

source§

impl Client

source

pub fn delete_tags(&self) -> DeleteTagsFluentBuilder

Constructs a fluent builder for the DeleteTags operation.

source§

impl Client

source

pub fn describe_batch_predictions(&self) -> DescribeBatchPredictionsFluentBuilder

Constructs a fluent builder for the DescribeBatchPredictions operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • filter_variable(BatchPredictionFilterVariable) / set_filter_variable(Option<BatchPredictionFilterVariable>):
      required: false

      Use one of the following variables to filter a list of BatchPrediction:

      • CreatedAt - Sets the search criteria to the BatchPrediction creation date.

      • Status - Sets the search criteria to the BatchPrediction status.

      • Name - Sets the search criteria to the contents of the BatchPrediction Name.

      • IAMUser - Sets the search criteria to the user account that invoked the BatchPrediction creation.

      • MLModelId - Sets the search criteria to the MLModel used in the BatchPrediction.

      • DataSourceId - Sets the search criteria to the DataSource used in the BatchPrediction.

      • DataURI - Sets the search criteria to the data file(s) used in the BatchPrediction. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.


    • eq(impl Into<String>) / set_eq(Option<String>):
      required: false

      The equal to operator. The BatchPrediction results will have FilterVariable values that exactly match the value specified with EQ.


    • gt(impl Into<String>) / set_gt(Option<String>):
      required: false

      The greater than operator. The BatchPrediction results will have FilterVariable values that are greater than the value specified with GT.


    • lt(impl Into<String>) / set_lt(Option<String>):
      required: false

      The less than operator. The BatchPrediction results will have FilterVariable values that are less than the value specified with LT.


    • ge(impl Into<String>) / set_ge(Option<String>):
      required: false

      The greater than or equal to operator. The BatchPrediction results will have FilterVariable values that are greater than or equal to the value specified with GE.


    • le(impl Into<String>) / set_le(Option<String>):
      required: false

      The less than or equal to operator. The BatchPrediction results will have FilterVariable values that are less than or equal to the value specified with LE.


    • ne(impl Into<String>) / set_ne(Option<String>):
      required: false

      The not equal to operator. The BatchPrediction results will have FilterVariable values not equal to the value specified with NE.


    • prefix(impl Into<String>) / set_prefix(Option<String>):
      required: false

      A string that is found at the beginning of a variable, such as Name or Id.

      For example, a Batch Prediction operation could have the Name 2014-09-09-HolidayGiftMailer. To search for this BatchPrediction, select Name for the FilterVariable and any of the following strings for the Prefix:

      • 2014-09

      • 2014-09-09

      • 2014-09-09-Holiday


    • sort_order(SortOrder) / set_sort_order(Option<SortOrder>):
      required: false

      A two-value parameter that determines the sequence of the resulting list of BatchPrediction.

      • asc - Arranges the list in ascending order (A-Z, 0-9).

      • dsc - Arranges the list in descending order (Z-A, 9-0).

      Results are sorted by FilterVariable.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      An ID of the page in the paginated results.


    • limit(i32) / set_limit(Option<i32>):
      required: false

      The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.


  • On success, responds with DescribeBatchPredictionsOutput with field(s):
  • On failure, responds with SdkError<DescribeBatchPredictionsError>
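
Because this operation supports pagination, a hedged sketch of iterating over all matching BatchPrediction records via into_paginator() could look like this (the status filter value is illustrative):

use aws_sdk_machinelearning::types::BatchPredictionFilterVariable;

let mut pages = client.describe_batch_predictions()
    .filter_variable(BatchPredictionFilterVariable::Status)
    .eq("COMPLETED")
    .into_paginator()
    .send();

while let Some(page) = pages.next().await {
    match page {
        Ok(output) => {
            for prediction in output.results() {
                println!("{:?}", prediction.batch_prediction_id());
            }
        }
        Err(err) => {
            eprintln!("DescribeBatchPredictions failed: {err}");
            break;
        }
    }
}
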
source§

impl Client

source

pub fn describe_data_sources(&self) -> DescribeDataSourcesFluentBuilder

Constructs a fluent builder for the DescribeDataSources operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn describe_evaluations(&self) -> DescribeEvaluationsFluentBuilder

Constructs a fluent builder for the DescribeEvaluations operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn describe_ml_models(&self) -> DescribeMLModelsFluentBuilder

Constructs a fluent builder for the DescribeMLModels operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • filter_variable(MlModelFilterVariable) / set_filter_variable(Option<MlModelFilterVariable>):
      required: false

      Use one of the following variables to filter a list of MLModel:

      • CreatedAt - Sets the search criteria to MLModel creation date.

      • Status - Sets the search criteria to MLModel status.

      • Name - Sets the search criteria to the contents of MLModel Name.

      • IAMUser - Sets the search criteria to the user account that invoked the MLModel creation.

      • TrainingDataSourceId - Sets the search criteria to the DataSource used to train one or more MLModel.

      • RealtimeEndpointStatus - Sets the search criteria to the MLModel real-time endpoint status.

      • MLModelType - Sets the search criteria to MLModel type: binary, regression, or multi-class.

      • Algorithm - Sets the search criteria to the algorithm that the MLModel uses.

      • TrainingDataURI - Sets the search criteria to the data file(s) used in training a MLModel. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.


    • eq(impl Into<String>) / set_eq(Option<String>):
      required: false

      The equal to operator. The MLModel results will have FilterVariable values that exactly match the value specified with EQ.


    • gt(impl Into<String>) / set_gt(Option<String>):
      required: false

      The greater than operator. The MLModel results will have FilterVariable values that are greater than the value specified with GT.


    • lt(impl Into<String>) / set_lt(Option<String>):
      required: false

      The less than operator. The MLModel results will have FilterVariable values that are less than the value specified with LT.


    • ge(impl Into<String>) / set_ge(Option<String>):
      required: false

      The greater than or equal to operator. The MLModel results will have FilterVariable values that are greater than or equal to the value specified with GE.


    • le(impl Into<String>) / set_le(Option<String>):
      required: false

      The less than or equal to operator. The MLModel results will have FilterVariable values that are less than or equal to the value specified with LE.


    • ne(impl Into<String>) / set_ne(Option<String>):
      required: false

      The not equal to operator. The MLModel results will have FilterVariable values not equal to the value specified with NE.


    • prefix(impl Into<String>) / set_prefix(Option<String>):
      required: false

      A string that is found at the beginning of a variable, such as Name or Id.

      For example, an MLModel could have the Name 2014-09-09-HolidayGiftMailer. To search for this MLModel, select Name for the FilterVariable and any of the following strings for the Prefix:

      • 2014-09

      • 2014-09-09

      • 2014-09-09-Holiday


    • sort_order(SortOrder) / set_sort_order(Option<SortOrder>):
      required: false

      A two-value parameter that determines the sequence of the resulting list of MLModel.

      • asc - Arranges the list in ascending order (A-Z, 0-9).

      • dsc - Arranges the list in descending order (Z-A, 9-0).

      Results are sorted by FilterVariable.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      The ID of the page in the paginated results.


    • limit(i32) / set_limit(Option<i32>):
      required: false

      The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.


  • On success, responds with DescribeMlModelsOutput with field(s):
  • On failure, responds with SdkError<DescribeMLModelsError>
source§

impl Client

source

pub fn describe_tags(&self) -> DescribeTagsFluentBuilder

Constructs a fluent builder for the DescribeTags operation.

source§

impl Client

source

pub fn get_batch_prediction(&self) -> GetBatchPredictionFluentBuilder

Constructs a fluent builder for the GetBatchPrediction operation.

source§

impl Client

source

pub fn get_data_source(&self) -> GetDataSourceFluentBuilder

Constructs a fluent builder for the GetDataSource operation.

source§

impl Client

source

pub fn get_evaluation(&self) -> GetEvaluationFluentBuilder

Constructs a fluent builder for the GetEvaluation operation.

  • The fluent builder is configurable:
  • On success, responds with GetEvaluationOutput with field(s):
    • evaluation_id(Option<String>):

      The evaluation ID, which is the same as the EvaluationId in the request.

    • ml_model_id(Option<String>):

      The ID of the MLModel that was the focus of the evaluation.

    • evaluation_data_source_id(Option<String>):

      The DataSource used for this evaluation.

    • input_data_location_s3(Option<String>):

      The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

    • created_by_iam_user(Option<String>):

      The AWS user account that invoked the evaluation. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

    • created_at(Option<DateTime>):

      The time that the Evaluation was created. The time is expressed in epoch time.

    • last_updated_at(Option<DateTime>):

      The time of the most recent edit to the Evaluation. The time is expressed in epoch time.

    • name(Option<String>):

      A user-supplied name or description of the Evaluation.

    • status(Option<EntityStatus>):

      The status of the evaluation. This element can have one of the following values:

      • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to evaluate an MLModel.

      • INPROGRESS - The evaluation is underway.

      • FAILED - The request to evaluate an MLModel did not run to completion. It is not usable.

      • COMPLETED - The evaluation process completed successfully.

      • DELETED - The Evaluation is marked as deleted. It is not usable.

    • performance_metrics(Option<PerformanceMetrics>):

      Measurements of how well the MLModel performed using observations referenced by the DataSource. One of the following metrics is returned, based on the type of the MLModel:

      • BinaryAUC: A binary MLModel uses the Area Under the Curve (AUC) technique to measure performance.

      • RegressionRMSE: A regression MLModel uses the Root Mean Square Error (RMSE) technique to measure performance. RMSE measures the difference between predicted and actual values for a single variable.

      • MulticlassAvgFScore: A multiclass MLModel uses the F1 score technique to measure performance.

      For more information about performance metrics, please see the Amazon Machine Learning Developer Guide.

    • log_uri(Option<String>):

      A link to the file that contains logs of the CreateEvaluation operation.

    • message(Option<String>):

      A description of the most recent details about evaluating the MLModel.

    • compute_time(Option<i64>):

      The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the Evaluation, normalized and scaled on computation resources. ComputeTime is only available if the Evaluation is in the COMPLETED state.

    • finished_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the Evaluation as COMPLETED or FAILED. FinishedAt is only available when the Evaluation is in the COMPLETED or FAILED state.

    • started_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the Evaluation as INPROGRESS. StartedAt isn’t available if the Evaluation is in the PENDING state.

  • On failure, responds with SdkError<GetEvaluationError>
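
For example, a hedged sketch that fetches an evaluation and reads a couple of the fields above (the evaluation ID is a placeholder):

let result = client.get_evaluation()
    .evaluation_id("my-evaluation")
    .send()
    .await;

if let Ok(evaluation) = result {
    println!("status: {:?}", evaluation.status());
    println!("metrics: {:?}", evaluation.performance_metrics());
}
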
source§

impl Client

source

pub fn get_ml_model(&self) -> GetMLModelFluentBuilder

Constructs a fluent builder for the GetMLModel operation.

  • The fluent builder is configurable:
  • On success, responds with GetMlModelOutput with field(s):
    • ml_model_id(Option<String>):

      The MLModel ID, which is the same as the MLModelId in the request.

    • training_data_source_id(Option<String>):

      The ID of the training DataSource.

    • created_by_iam_user(Option<String>):

      The AWS user account from which the MLModel was created. The account type can be either an AWS root account or an AWS Identity and Access Management (IAM) user account.

    • created_at(Option<DateTime>):

      The time that the MLModel was created. The time is expressed in epoch time.

    • last_updated_at(Option<DateTime>):

      The time of the most recent edit to the MLModel. The time is expressed in epoch time.

    • name(Option<String>):

      A user-supplied name or description of the MLModel.

    • status(Option<EntityStatus>):

      The current status of the MLModel. This element can have one of the following values:

      • PENDING - Amazon Machine Learning (Amazon ML) submitted a request to describe a MLModel.

      • INPROGRESS - The request is processing.

      • FAILED - The request did not run to completion. The ML model isn’t usable.

      • COMPLETED - The request completed successfully.

      • DELETED - The MLModel is marked as deleted. It isn’t usable.

    • size_in_bytes(Option<i64>):

      Long integer type that is a 64-bit signed number.

    • endpoint_info(Option<RealtimeEndpointInfo>):

      The current endpoint of the MLModel.

    • training_parameters(Option<HashMap::<String, String>>):

      A list of the training parameters in the MLModel. The list is implemented as a map of key-value pairs.

      The following is the current set of training parameters:

      • sgd.maxMLModelSizeInBytes - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

        The value is an integer that ranges from 100000 to 2147483648. The default value is 33554432.

      • sgd.maxPasses - The number of times that the training process traverses the observations to build the MLModel. The value is an integer that ranges from 1 to 10000. The default value is 10.

      • sgd.shuffleType - Whether Amazon ML shuffles the training data. Shuffling data improves a model’s ability to find the optimal solution for a variety of data types. The valid values are auto and none. The default value is none. We strongly recommend that you shuffle your data.

      • sgd.l1RegularizationAmount - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L1 normalization. This parameter can’t be used when L2 is specified. Use this parameter sparingly.

      • sgd.l2RegularizationAmount - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as 1.0E-08.

        The value is a double that ranges from 0 to MAX_DOUBLE. The default is to not use L2 normalization. This parameter can’t be used when L1 is specified. Use this parameter sparingly.

    • input_data_location_s3(Option<String>):

      The location of the data file or directory in Amazon Simple Storage Service (Amazon S3).

    • ml_model_type(Option<MlModelType>):

      Identifies the MLModel category. The following are the available types:

      • REGRESSION – Produces a numeric result. For example, “What price should a house be listed at?”

      • BINARY – Produces one of two possible results. For example, “Is this an e-commerce website?”

      • MULTICLASS – Produces one of several possible results. For example, “Is this a HIGH, LOW or MEDIUM risk trade?”

    • score_threshold(Option<f32>):

      The scoring threshold is used in binary classification MLModels. It marks the boundary between a positive prediction and a negative prediction.

      Output values greater than or equal to the threshold receive a positive result from the MLModel, such as true. Output values less than the threshold receive a negative response from the MLModel, such as false.

    • score_threshold_last_updated_at(Option<DateTime>):

      The time of the most recent edit to the ScoreThreshold. The time is expressed in epoch time.

    • log_uri(Option<String>):

      A link to the file that contains logs of the CreateMLModel operation.

    • message(Option<String>):

      A description of the most recent details about accessing the MLModel.

    • compute_time(Option<i64>):

      The approximate CPU time in milliseconds that Amazon Machine Learning spent processing the MLModel, normalized and scaled on computation resources. ComputeTime is only available if the MLModel is in the COMPLETED state.

    • finished_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the MLModel as COMPLETED or FAILED. FinishedAt is only available when the MLModel is in the COMPLETED or FAILED state.

    • started_at(Option<DateTime>):

      The epoch time when Amazon Machine Learning marked the MLModel as INPROGRESS. StartedAt isn’t available if the MLModel is in the PENDING state.

    • recipe(Option<String>):

      The recipe to use when training the MLModel. The Recipe provides detailed information about the observation data to use during training, and manipulations to perform on the observation data during training.

      Note: This parameter is provided as part of the verbose format.

    • schema(Option<String>):

      The schema used by all of the data files referenced by the DataSource.

      Note: This parameter is provided as part of the verbose format.

  • On failure, responds with SdkError<GetMLModelError>
source§

impl Client

source

pub fn predict(&self) -> PredictFluentBuilder

Constructs a fluent builder for the Predict operation.
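
A hedged sketch of a real-time prediction: the record keys must match the model's schema, and the endpoint comes from the model's real-time endpoint info (all values below are placeholders):

let result = client.predict()
    .ml_model_id("my-ml-model")
    // Each record() call appends one attribute name/value pair from the observation.
    .record("feature_1", "42")
    .record("feature_2", "blue")
    .predict_endpoint("https://realtime.machinelearning.us-east-1.amazonaws.com")
    .send()
    .await;

if let Ok(output) = result {
    println!("prediction: {:?}", output.prediction());
}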

source§

impl Client

source

pub fn update_batch_prediction(&self) -> UpdateBatchPredictionFluentBuilder

Constructs a fluent builder for the UpdateBatchPrediction operation.

source§

impl Client

source

pub fn update_data_source(&self) -> UpdateDataSourceFluentBuilder

Constructs a fluent builder for the UpdateDataSource operation.

source§

impl Client

source

pub fn update_evaluation(&self) -> UpdateEvaluationFluentBuilder

Constructs a fluent builder for the UpdateEvaluation operation.

source§

impl Client

source

pub fn update_ml_model(&self) -> UpdateMLModelFluentBuilder

Constructs a fluent builder for the UpdateMLModel operation.

source§

impl Client

source

pub fn from_conf(conf: Config) -> Self

Creates a new client from the service Config.

§Panics

This method will panic in the following cases:

  • Retries or timeouts are enabled without a sleep_impl configured.
  • Identity caching is enabled without a sleep_impl and time_source configured.
  • No behavior_version is provided.

The panic message for each of these will have instructions on how to resolve them.
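
A hedged sketch of constructing a client from a service Config directly; a behavior version is required, and depending on the rest of the configuration a sleep_impl may also be needed per the panic list above:

use aws_sdk_machinelearning::config::{BehaviorVersion, Config, Region};

let config = Config::builder()
    .behavior_version(BehaviorVersion::latest())
    .region(Region::new("us-east-1"))
    .build();

let client = aws_sdk_machinelearning::Client::from_conf(config);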

source

pub fn config(&self) -> &Config

Returns the client’s configuration.

source§

impl Client

source

pub fn new(sdk_config: &SdkConfig) -> Self

Creates a new client from an SDK Config.

§Panics
  • This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
  • This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.

Trait Implementations§

source§

impl Clone for Client

source§

fn clone(&self) -> Client

Returns a copy of the value. Read more
1.0.0 · source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
source§

impl Debug for Client

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
source§

impl Waiters for Client

Auto Trait Implementations§

§

impl Freeze for Client

§

impl !RefUnwindSafe for Client

§

impl Send for Client

§

impl Sync for Client

§

impl Unpin for Client

§

impl !UnwindSafe for Client

Blanket Implementations§

source§

impl<T> Any for T
where T: 'static + ?Sized,

source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
source§

impl<T> Borrow<T> for T
where T: ?Sized,

source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
source§

impl<T> From<T> for T

source§

fn from(t: T) -> T

Returns the argument unchanged.

source§

impl<T> Instrument for T

source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
source§

impl<T, U> Into<U> for T
where U: From<T>,

source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

source§

impl<T> IntoEither for T

source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
source§

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

source§

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.
source§

impl<T> Same for T

§

type Output = T

Should always be Self
source§

impl<T> ToOwned for T
where T: Clone,

§

type Owned = T

The resulting type after obtaining ownership.
source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

§

type Error = Infallible

The type returned in the event of a conversion error.
source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
source§

impl<T> WithSubscriber for T

source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more