Struct aws_sdk_forecast::Client

pub struct Client { /* private fields */ }

Client for Amazon Forecast Service

Client for invoking operations on Amazon Forecast Service. Each operation on Amazon Forecast Service is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

§Constructing a Client

A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.

In the simplest case, creating a client looks as follows:

let config = aws_config::load_from_env().await;
let client = aws_sdk_forecast::Client::new(&config);

Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings may be desired for a specific client. The Config struct implements From<&SdkConfig>, so these specific settings can be set as follows:

let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_forecast::config::Builder::from(&sdk_config)
    .some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.

Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
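
Since construction is expensive, a common pattern is to build the client once at start-up and pass clones of it around: the client wraps its internal state in shared handles, so cloning is cheap. A minimal sketch, assuming a tokio runtime and the aws-config crate:

```rust
use aws_sdk_forecast::Client;

#[tokio::main]
async fn main() {
    // Resolve shared AWS configuration once.
    let config = aws_config::load_from_env().await;
    // Build the client once; clones share the underlying connection pool.
    let client = Client::new(&config);
    let task_client = client.clone(); // hand clones to tasks instead of rebuilding
    let _ = task_client; // ... use in spawned tasks ...
}
```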

§Using the Client

A client has a function for every operation that can be performed by the service. For example, the CreateAutoPredictor operation has a Client::create_auto_predictor function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future which resolves to a result, as illustrated below:

let result = client.create_auto_predictor()
    .predictor_name("example")
    .send()
    .await;
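
The returned Result can be matched to distinguish success from failure; SdkError wraps service errors as well as transport and construction failures. A hedged sketch (the predictor_arn accessor is taken from the generated CreateAutoPredictorOutput):

```rust
match client.create_auto_predictor()
    .predictor_name("example")
    .send()
    .await
{
    // On success the output carries the new predictor's ARN.
    Ok(output) => println!("created: {:?}", output.predictor_arn()),
    // SdkError<CreateAutoPredictorError> covers service and transport failures.
    Err(err) => eprintln!("create_auto_predictor failed: {}", err),
}
```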

The underlying HTTP requests made by this client can be modified with the customize_operation function on the fluent builder. See the customize module for more information.

Implementations§

impl Client

pub fn create_auto_predictor(&self) -> CreateAutoPredictorFluentBuilder

Constructs a fluent builder for the CreateAutoPredictor operation.

  • The fluent builder is configurable:
    • predictor_name(impl Into<String>) / set_predictor_name(Option<String>):
      required: true

      A unique name for the predictor.


    • forecast_horizon(i32) / set_forecast_horizon(Option<i32>):
      required: false

      The number of time-steps that the model predicts. The forecast horizon is also called the prediction length.

      The maximum forecast horizon is the lesser of 500 time-steps or 1/4 of the TARGET_TIME_SERIES dataset length. If you are retraining an existing AutoPredictor, then the maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.

      If you are upgrading to an AutoPredictor or retraining an existing AutoPredictor, you cannot update the forecast horizon parameter. You can meet this requirement by providing longer time-series in the dataset.


    • forecast_types(impl Into<String>) / set_forecast_types(Option<Vec::<String>>):
      required: false

      The forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean.


    • forecast_dimensions(impl Into<String>) / set_forecast_dimensions(Option<Vec::<String>>):
      required: false

      An array of dimension (field) names that specify how to group the generated forecast.

      For example, if you are generating forecasts for item sales across all your stores, and your dataset contains a store_id field, you would specify store_id as a dimension to group sales forecasts for each store.


    • forecast_frequency(impl Into<String>) / set_forecast_frequency(Option<String>):
      required: false

      The frequency of predictions in a forecast.

      Valid intervals are an integer followed by Y (Year), M (Month), W (Week), D (Day), H (Hour), and min (Minute). For example, “1D” indicates every day and “15min” indicates every 15 minutes. You cannot specify a value that would overlap with the next larger frequency. That means, for example, you cannot specify a frequency of 60 minutes, because that is equivalent to 1 hour. The valid values for each frequency are the following:

      • Minute - 1-59

      • Hour - 1-23

      • Day - 1-6

      • Week - 1-4

      • Month - 1-11

      • Year - 1

      Thus, if you want every-other-week forecasts, specify “2W”. Or, if you want quarterly forecasts, specify “3M”.

      The frequency must be greater than or equal to the TARGET_TIME_SERIES dataset frequency.

      When a RELATED_TIME_SERIES dataset is provided, the frequency must be equal to the RELATED_TIME_SERIES dataset frequency.


    • data_config(DataConfig) / set_data_config(Option<DataConfig>):
      required: false

      The data configuration for your dataset group and any additional datasets.


    • encryption_config(EncryptionConfig) / set_encryption_config(Option<EncryptionConfig>):
      required: false

      A Key Management Service (KMS) key and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key. You can specify this optional object in the CreateDataset and CreatePredictor requests.


    • reference_predictor_arn(impl Into<String>) / set_reference_predictor_arn(Option<String>):
      required: false

      The ARN of the predictor to retrain or upgrade. This parameter is only used when retraining or upgrading a predictor. When creating a new predictor, do not specify a value for this parameter.

      When upgrading or retraining a predictor, only specify values for the ReferencePredictorArn and PredictorName. The value for PredictorName must be a unique predictor name.


    • optimization_metric(OptimizationMetric) / set_optimization_metric(Option<OptimizationMetric>):
      required: false

      The accuracy metric used to optimize the predictor.


    • explain_predictor(bool) / set_explain_predictor(Option<bool>):
      required: false

      Create an Explainability resource for the predictor.


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: false

      Optional metadata to help you categorize and organize your predictors. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.

      The following restrictions apply to tags:

      • For each resource, each tag key must be unique and each tag key must have one value.

      • Maximum number of tags per resource: 50.

      • Maximum key length: 128 Unicode characters in UTF-8.

      • Maximum value length: 256 Unicode characters in UTF-8.

      • Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.

      • Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.


    • monitor_config(MonitorConfig) / set_monitor_config(Option<MonitorConfig>):
      required: false

      The configuration details for predictor monitoring. Provide a name for the monitor resource to enable predictor monitoring.

      Predictor monitoring allows you to see how your predictor’s performance changes over time. For more information, see Predictor Monitoring.


    • time_alignment_boundary(TimeAlignmentBoundary) / set_time_alignment_boundary(Option<TimeAlignmentBoundary>):
      required: false

      The time boundary Forecast uses to align and aggregate any data that doesn’t align with your forecast frequency. Provide the unit of time and the time boundary as a key value pair. For more information on specifying a time boundary, see Specifying a Time Boundary. If you don’t provide a time boundary, Forecast uses a set of Default Time Boundaries.


  • On success, responds with CreateAutoPredictorOutput with field(s):
  • On failure, responds with SdkError<CreateAutoPredictorError>
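
Putting several of the settings above together, a CreateAutoPredictor call might look like the following sketch (names and values are illustrative, not defaults):

```rust
let result = client.create_auto_predictor()
    .predictor_name("retail_demand_predictor") // required: unique name
    .forecast_horizon(14)                      // predict 14 time-steps ahead
    .forecast_types("0.5")                     // repeated calls append values
    .forecast_types("mean")
    .forecast_frequency("1D")                  // daily predictions
    .send()
    .await;
```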
impl Client

pub fn create_dataset(&self) -> CreateDatasetFluentBuilder

Constructs a fluent builder for the CreateDataset operation.

  • The fluent builder is configurable:
    • dataset_name(impl Into<String>) / set_dataset_name(Option<String>):
      required: true

      A name for the dataset.


    • domain(Domain) / set_domain(Option<Domain>):
      required: true

      The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match.

      The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see Importing datasets.


    • dataset_type(DatasetType) / set_dataset_type(Option<DatasetType>):
      required: true

      The dataset type. Valid values depend on the chosen Domain.


    • data_frequency(impl Into<String>) / set_data_frequency(Option<String>):
      required: false

      The frequency of data collection. This parameter is required for RELATED_TIME_SERIES datasets.

      Valid intervals are an integer followed by Y (Year), M (Month), W (Week), D (Day), H (Hour), and min (Minute). For example, “1D” indicates every day and “15min” indicates every 15 minutes. You cannot specify a value that would overlap with the next larger frequency. That means, for example, you cannot specify a frequency of 60 minutes, because that is equivalent to 1 hour. The valid values for each frequency are the following:

      • Minute - 1-59

      • Hour - 1-23

      • Day - 1-6

      • Week - 1-4

      • Month - 1-11

      • Year - 1

      Thus, if you want every-other-week forecasts, specify “2W”. Or, if you want quarterly forecasts, specify “3M”.


    • schema(Schema) / set_schema(Option<Schema>):
      required: true

      The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see Dataset Domains and Dataset Types.


    • encryption_config(EncryptionConfig) / set_encryption_config(Option<EncryptionConfig>):
      required: false

      A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: false

      The optional metadata that you apply to the dataset to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

      The following basic restrictions apply to tags:

      • Maximum number of tags per resource - 50.

      • For each resource, each tag key must be unique, and each tag key can have only one value.

      • Maximum key length - 128 Unicode characters in UTF-8.

      • Maximum value length - 256 Unicode characters in UTF-8.

      • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

      • Tag keys and values are case sensitive.

      • Do not use aws:, AWS:, or any upper- or lowercase combination of these as a prefix for keys; it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.


  • On success, responds with CreateDatasetOutput with field(s):
  • On failure, responds with SdkError<CreateDatasetError>
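
As a worked example of the parameters above, the sketch below creates a RETAIL TARGET_TIME_SERIES dataset with the item_id, timestamp, and demand fields the domain requires. The type and variant names (Schema, SchemaAttribute, AttributeType, Domain, DatasetType) are assumed from the crate's generated types module:

```rust
use aws_sdk_forecast::types::{AttributeType, DatasetType, Domain, Schema, SchemaAttribute};

// Schema attributes must match the order of fields in the data.
let schema = Schema::builder()
    .attributes(SchemaAttribute::builder()
        .attribute_name("item_id")
        .attribute_type(AttributeType::String)
        .build())
    .attributes(SchemaAttribute::builder()
        .attribute_name("timestamp")
        .attribute_type(AttributeType::Timestamp)
        .build())
    .attributes(SchemaAttribute::builder()
        .attribute_name("demand")
        .attribute_type(AttributeType::Float)
        .build())
    .build();

let result = client.create_dataset()
    .dataset_name("retail_demand")
    .domain(Domain::Retail)
    .dataset_type(DatasetType::TargetTimeSeries)
    .data_frequency("1D")
    .schema(schema)
    .send()
    .await;
```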
impl Client

pub fn create_dataset_group(&self) -> CreateDatasetGroupFluentBuilder

Constructs a fluent builder for the CreateDatasetGroup operation.

  • The fluent builder is configurable:
    • dataset_group_name(impl Into<String>) / set_dataset_group_name(Option<String>):
      required: true

      A name for the dataset group.


    • domain(Domain) / set_domain(Option<Domain>):
      required: true

      The domain associated with the dataset group. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDataset operation must match.

      The Domain and DatasetType that you choose determine the fields that must be present in training data that you import to a dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires that item_id, timestamp, and demand fields are present in your data. For more information, see Dataset groups.


    • dataset_arns(impl Into<String>) / set_dataset_arns(Option<Vec::<String>>):
      required: false

      An array of Amazon Resource Names (ARNs) of the datasets that you want to include in the dataset group.


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: false

      The optional metadata that you apply to the dataset group to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

      The following basic restrictions apply to tags:

      • Maximum number of tags per resource - 50.

      • For each resource, each tag key must be unique, and each tag key can have only one value.

      • Maximum key length - 128 Unicode characters in UTF-8.

      • Maximum value length - 256 Unicode characters in UTF-8.

      • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

      • Tag keys and values are case sensitive.

      • Do not use aws:, AWS:, or any upper- or lowercase combination of these as a prefix for keys; it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.


  • On success, responds with CreateDatasetGroupOutput with field(s):
  • On failure, responds with SdkError<CreateDatasetGroupError>
impl Client

pub fn create_dataset_import_job(&self) -> CreateDatasetImportJobFluentBuilder

Constructs a fluent builder for the CreateDatasetImportJob operation.

  • The fluent builder is configurable:
    • dataset_import_job_name(impl Into<String>) / set_dataset_import_job_name(Option<String>):
      required: true

      The name for the dataset import job. We recommend including the current timestamp in the name, for example, 20190721DatasetImport. This can help you avoid getting a ResourceAlreadyExistsException exception.


    • dataset_arn(impl Into<String>) / set_dataset_arn(Option<String>):
      required: true

      The Amazon Resource Name (ARN) of the Amazon Forecast dataset that you want to import data to.


    • data_source(DataSource) / set_data_source(Option<DataSource>):
      required: true

      The location of the training data to import and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data. The training data must be stored in an Amazon S3 bucket.

      If encryption is used, DataSource must include a Key Management Service (KMS) key, and the IAM role must allow Amazon Forecast permission to access the key. The KMS key and IAM role must match those specified in the EncryptionConfig parameter of the CreateDataset operation.


    • timestamp_format(impl Into<String>) / set_timestamp_format(Option<String>):
      required: false

      The format of timestamps in the dataset. The format that you specify depends on the DataFrequency specified when the dataset was created. The following formats are supported:

      • “yyyy-MM-dd”

        For the following data frequencies: Y, M, W, and D

      • “yyyy-MM-dd HH:mm:ss”

        For the following data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D

      If the format isn’t specified, Amazon Forecast expects the format to be “yyyy-MM-dd HH:mm:ss”.


    • time_zone(impl Into<String>) / set_time_zone(Option<String>):
      required: false

      A single time zone for every item in your dataset. This option is ideal for datasets with all timestamps within a single time zone, or if all timestamps are normalized to a single time zone.

      Refer to the Joda-Time API for a complete list of valid time zone names.


    • use_geolocation_for_time_zone(bool) / set_use_geolocation_for_time_zone(Option<bool>):
      required: false

      Automatically derive time zone information from the geolocation attribute. This option is ideal for datasets that contain timestamps in multiple time zones and those timestamps are expressed in local time.


    • geolocation_format(impl Into<String>) / set_geolocation_format(Option<String>):
      required: false

      The format of the geolocation attribute. The geolocation attribute can be formatted in one of two ways:

      • LAT_LONG - the latitude and longitude in decimal format (Example: 47.61_-122.33).

      • CC_POSTALCODE (US Only) - the country code (US), followed by the 5-digit ZIP code (Example: US_98121).


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: false

      The optional metadata that you apply to the dataset import job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

      The following basic restrictions apply to tags:

      • Maximum number of tags per resource - 50.

      • For each resource, each tag key must be unique, and each tag key can have only one value.

      • Maximum key length - 128 Unicode characters in UTF-8.

      • Maximum value length - 256 Unicode characters in UTF-8.

      • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

      • Tag keys and values are case sensitive.

      • Do not use aws:, AWS:, or any upper- or lowercase combination of these as a prefix for keys; it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.


    • format(impl Into<String>) / set_format(Option<String>):
      required: false

      The format of the imported data, CSV or PARQUET. The default value is CSV.


    • import_mode(ImportMode) / set_import_mode(Option<ImportMode>):
      required: false

      Specifies whether the dataset import job is a FULL or INCREMENTAL import. A FULL dataset import replaces all of the existing data with the newly imported data. An INCREMENTAL import appends the imported data to the existing data.


  • On success, responds with CreateDatasetImportJobOutput with field(s):
  • On failure, responds with SdkError<CreateDatasetImportJobError>
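
Tying the required parameters together, an import job might be sketched as follows. The DataSource and S3Config builder names are assumed from the generated types module, and the bucket, ARNs, and role are illustrative:

```rust
use aws_sdk_forecast::types::{DataSource, S3Config};

// Point Forecast at the S3 object and the role it may assume to read it.
let data_source = DataSource::builder()
    .s3_config(S3Config::builder()
        .path("s3://my-bucket/training/demand.csv")                 // illustrative
        .role_arn("arn:aws:iam::123456789012:role/ForecastAccess")  // illustrative
        .build())
    .build();

let result = client.create_dataset_import_job()
    .dataset_import_job_name("20190721DatasetImport")
    .dataset_arn("arn:aws:forecast:us-east-1:123456789012:dataset/retail_demand")
    .data_source(data_source)
    .timestamp_format("yyyy-MM-dd") // matches a daily DataFrequency
    .send()
    .await;
```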
impl Client

pub fn create_explainability(&self) -> CreateExplainabilityFluentBuilder

Constructs a fluent builder for the CreateExplainability operation.

impl Client

pub fn create_explainability_export(&self) -> CreateExplainabilityExportFluentBuilder

Constructs a fluent builder for the CreateExplainabilityExport operation.

impl Client

pub fn create_forecast(&self) -> CreateForecastFluentBuilder

Constructs a fluent builder for the CreateForecast operation.

  • The fluent builder is configurable:
    • forecast_name(impl Into<String>) / set_forecast_name(Option<String>):
      required: true

      A name for the forecast.


    • predictor_arn(impl Into<String>) / set_predictor_arn(Option<String>):
      required: true

      The Amazon Resource Name (ARN) of the predictor to use to generate the forecast.


    • forecast_types(impl Into<String>) / set_forecast_types(Option<Vec::<String>>):
      required: false

      The quantiles at which probabilistic forecasts are generated. You can currently specify up to 5 quantiles per forecast. Accepted values include 0.01 to 0.99 (increments of .01 only) and mean. The mean forecast is different from the median (0.50) when the distribution is not symmetric (for example, Beta and Negative Binomial).

      The default quantiles are the quantiles you specified during predictor creation. If you didn’t specify quantiles, the default values are [“0.1”, “0.5”, “0.9”].


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: false

      The optional metadata that you apply to the forecast to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

      The following basic restrictions apply to tags:

      • Maximum number of tags per resource - 50.

      • For each resource, each tag key must be unique, and each tag key can have only one value.

      • Maximum key length - 128 Unicode characters in UTF-8.

      • Maximum value length - 256 Unicode characters in UTF-8.

      • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

      • Tag keys and values are case sensitive.

      • Do not use aws:, AWS:, or any upper- or lowercase combination of these as a prefix for keys; it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.


    • time_series_selector(TimeSeriesSelector) / set_time_series_selector(Option<TimeSeriesSelector>):
      required: false

      Defines the set of time series that are used to create the forecasts in a TimeSeriesIdentifiers object.

      The TimeSeriesIdentifiers object needs the following information:

      • DataSource

      • Format

      • Schema


  • On success, responds with CreateForecastOutput with field(s):
  • On failure, responds with SdkError<CreateForecastError>
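
For example, generating a forecast at explicit quantiles from an existing predictor might be sketched as follows (the ARN and names are illustrative):

```rust
let result = client.create_forecast()
    .forecast_name("retail_demand_forecast")
    .predictor_arn("arn:aws:forecast:us-east-1:123456789012:predictor/example") // illustrative
    .forecast_types("0.1") // repeated calls append quantiles
    .forecast_types("0.5")
    .forecast_types("0.9")
    .send()
    .await;
```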
impl Client

pub fn create_forecast_export_job(&self) -> CreateForecastExportJobFluentBuilder

Constructs a fluent builder for the CreateForecastExportJob operation.

  • The fluent builder is configurable:
    • forecast_export_job_name(impl Into<String>) / set_forecast_export_job_name(Option<String>):
      required: true

      The name for the forecast export job.


    • forecast_arn(impl Into<String>) / set_forecast_arn(Option<String>):
      required: true

      The Amazon Resource Name (ARN) of the forecast that you want to export.


    • destination(DataDestination) / set_destination(Option<DataDestination>):
      required: true

      The location where you want to save the forecast and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location. The forecast must be exported to an Amazon S3 bucket.

      If encryption is used, Destination must include a Key Management Service (KMS) key. The IAM role must allow Amazon Forecast permission to access the key.


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: false

      The optional metadata that you apply to the forecast export job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.

      The following basic restrictions apply to tags:

      • Maximum number of tags per resource - 50.

      • For each resource, each tag key must be unique, and each tag key can have only one value.

      • Maximum key length - 128 Unicode characters in UTF-8.

      • Maximum value length - 256 Unicode characters in UTF-8.

      • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

      • Tag keys and values are case sensitive.

      • Do not use aws:, AWS:, or any upper- or lowercase combination of these as a prefix for keys; it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.


    • format(impl Into<String>) / set_format(Option<String>):
      required: false

      The format of the exported data, CSV or PARQUET. The default value is CSV.


  • On success, responds with CreateForecastExportJobOutput with field(s):
  • On failure, responds with SdkError<CreateForecastExportJobError>
impl Client

pub fn create_monitor(&self) -> CreateMonitorFluentBuilder

Constructs a fluent builder for the CreateMonitor operation.

impl Client

pub fn create_predictor(&self) -> CreatePredictorFluentBuilder

Constructs a fluent builder for the CreatePredictor operation.

  • The fluent builder is configurable:
    • predictor_name(impl Into<String>) / set_predictor_name(Option<String>):
      required: true

      A name for the predictor.


    • algorithm_arn(impl Into<String>) / set_algorithm_arn(Option<String>):
      required: false

      The Amazon Resource Name (ARN) of the algorithm to use for model training. Required if PerformAutoML is not set to true.

      Supported algorithms:

      • arn:aws:forecast:::algorithm/ARIMA

      • arn:aws:forecast:::algorithm/CNN-QR

      • arn:aws:forecast:::algorithm/Deep_AR_Plus

      • arn:aws:forecast:::algorithm/ETS

      • arn:aws:forecast:::algorithm/NPTS

      • arn:aws:forecast:::algorithm/Prophet


    • forecast_horizon(i32) / set_forecast_horizon(Option<i32>):
      required: true

      Specifies the number of time-steps that the model is trained to predict. The forecast horizon is also called the prediction length.

      For example, if you configure a dataset for daily data collection (using the DataFrequency parameter of the CreateDataset operation) and set the forecast horizon to 10, the model returns predictions for 10 days.

      The maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.


    • forecast_types(impl Into<String>) / set_forecast_types(Option<Vec::<String>>):
      required: false

      Specifies the forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean.

      The default value is [“0.10”, “0.50”, “0.9”].


    • perform_auto_ml(bool) / set_perform_auto_ml(Option<bool>):
      required: false

      Whether to perform AutoML. When Amazon Forecast performs AutoML, it evaluates the algorithms it provides and chooses the best algorithm and configuration for your training dataset.

      The default value is false. In this case, you are required to specify an algorithm.

      Set PerformAutoML to true to have Amazon Forecast perform AutoML. This is a good option if you aren’t sure which algorithm is suitable for your training data. In this case, PerformHPO must be false.


    • auto_ml_override_strategy(AutoMlOverrideStrategy) / set_auto_ml_override_strategy(Option<AutoMlOverrideStrategy>):
      required: false

      The LatencyOptimized AutoML override strategy is only available in private beta. Contact Amazon Web Services Support or your account manager to learn more about access privileges.

      Used to override the default AutoML strategy, which is to optimize predictor accuracy. To apply an AutoML strategy that minimizes training time, use LatencyOptimized.

      This parameter is only valid for predictors trained using AutoML.


    • perform_hpo(bool) / set_perform_hpo(Option<bool>):
      required: false

      Whether to perform hyperparameter optimization (HPO). HPO finds optimal hyperparameter values for your training data. The process of performing HPO is known as running a hyperparameter tuning job.

      The default value is false. In this case, Amazon Forecast uses default hyperparameter values from the chosen algorithm.

      To override the default values, set PerformHPO to true and, optionally, supply the HyperParameterTuningJobConfig object. The tuning job specifies a metric to optimize, which hyperparameters participate in tuning, and the valid range for each tunable hyperparameter. In this case, you are required to specify an algorithm and PerformAutoML must be false.

      The following algorithms support HPO:

      • DeepAR+

      • CNN-QR


    • training_parameters(impl Into<String>, impl Into<String>) / set_training_parameters(Option<HashMap::<String, String>>):
      required: false

      The hyperparameters to override for model training. The hyperparameters that you can override are listed in the individual algorithms. For the list of supported algorithms, see aws-forecast-choosing-recipes.


    • evaluation_parameters(EvaluationParameters) / set_evaluation_parameters(Option<EvaluationParameters>):
      required: false

      Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.


    • hpo_config(HyperParameterTuningJobConfig) / set_hpo_config(Option<HyperParameterTuningJobConfig>):
      required: false

      Provides hyperparameter override values for the algorithm. If you don’t provide this parameter, Amazon Forecast uses default values. The individual algorithms specify which hyperparameters support hyperparameter optimization (HPO). For more information, see aws-forecast-choosing-recipes.

If you include the HPOConfig object, you must set PerformHPO to true.


    • input_data_config(InputDataConfig) / set_input_data_config(Option<InputDataConfig>):
      required: true

      Describes the dataset group that contains the data to use to train the predictor.


    • featurization_config(FeaturizationConfig) / set_featurization_config(Option<FeaturizationConfig>):
      required: true

      The featurization configuration.


    • encryption_config(EncryptionConfig) / set_encryption_config(Option<EncryptionConfig>):
      required: false

A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: false

The optional metadata that you apply to the predictor to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.

      The following basic restrictions apply to tags:

      • Maximum number of tags per resource - 50.

      • For each resource, each tag key must be unique, and each tag key can have only one value.

      • Maximum key length - 128 Unicode characters in UTF-8.

      • Maximum value length - 256 Unicode characters in UTF-8.

      • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

      • Tag keys and values are case sensitive.

      • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for keys; this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.


    • optimization_metric(OptimizationMetric) / set_optimization_metric(Option<OptimizationMetric>):
      required: false

      The accuracy metric used to optimize the predictor.


  • On success, responds with CreatePredictorOutput with field(s):
  • On failure, responds with SdkError<CreatePredictorError>
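
Putting the parameters above together, a minimal CreatePredictor call might look like the following sketch. The predictor name, ARNs, and forecast horizon are placeholder values; the nested type builders return a Result because they have required fields:

```rust
use aws_sdk_forecast::types::{FeaturizationConfig, InputDataConfig};

async fn create_predictor(
    client: &aws_sdk_forecast::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    let resp = client
        .create_predictor()
        .predictor_name("my_predictor") // placeholder name
        .algorithm_arn("arn:aws:forecast:::algorithm/Deep_AR_Plus")
        .forecast_horizon(24)
        // Opt in to hyperparameter tuning; this requires an explicit
        // algorithm and PerformAutoML left unset/false.
        .perform_hpo(true)
        .input_data_config(
            InputDataConfig::builder()
                .dataset_group_arn(
                    "arn:aws:forecast:us-west-2:123456789012:dataset-group/my_group",
                )
                .build()?,
        )
        .featurization_config(
            FeaturizationConfig::builder()
                .forecast_frequency("D")
                .build()?,
        )
        .send()
        .await?;
    println!("predictor ARN: {:?}", resp.predictor_arn());
    Ok(())
}
```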
impl Client

pub fn create_predictor_backtest_export_job(&self) -> CreatePredictorBacktestExportJobFluentBuilder

Constructs a fluent builder for the CreatePredictorBacktestExportJob operation.

impl Client

pub fn create_what_if_analysis(&self) -> CreateWhatIfAnalysisFluentBuilder

Constructs a fluent builder for the CreateWhatIfAnalysis operation.

impl Client

pub fn create_what_if_forecast(&self) -> CreateWhatIfForecastFluentBuilder

Constructs a fluent builder for the CreateWhatIfForecast operation.

impl Client

pub fn create_what_if_forecast_export(&self) -> CreateWhatIfForecastExportFluentBuilder

Constructs a fluent builder for the CreateWhatIfForecastExport operation.

impl Client

pub fn delete_dataset(&self) -> DeleteDatasetFluentBuilder

Constructs a fluent builder for the DeleteDataset operation.

impl Client

pub fn delete_dataset_group(&self) -> DeleteDatasetGroupFluentBuilder

Constructs a fluent builder for the DeleteDatasetGroup operation.

impl Client

pub fn delete_dataset_import_job(&self) -> DeleteDatasetImportJobFluentBuilder

Constructs a fluent builder for the DeleteDatasetImportJob operation.

impl Client

pub fn delete_explainability(&self) -> DeleteExplainabilityFluentBuilder

Constructs a fluent builder for the DeleteExplainability operation.

impl Client

pub fn delete_explainability_export(&self) -> DeleteExplainabilityExportFluentBuilder

Constructs a fluent builder for the DeleteExplainabilityExport operation.

impl Client

pub fn delete_forecast(&self) -> DeleteForecastFluentBuilder

Constructs a fluent builder for the DeleteForecast operation.

impl Client

pub fn delete_forecast_export_job(&self) -> DeleteForecastExportJobFluentBuilder

Constructs a fluent builder for the DeleteForecastExportJob operation.

impl Client

pub fn delete_monitor(&self) -> DeleteMonitorFluentBuilder

Constructs a fluent builder for the DeleteMonitor operation.

impl Client

pub fn delete_predictor(&self) -> DeletePredictorFluentBuilder

Constructs a fluent builder for the DeletePredictor operation.

impl Client

pub fn delete_predictor_backtest_export_job(&self) -> DeletePredictorBacktestExportJobFluentBuilder

Constructs a fluent builder for the DeletePredictorBacktestExportJob operation.

impl Client

pub fn delete_resource_tree(&self) -> DeleteResourceTreeFluentBuilder

Constructs a fluent builder for the DeleteResourceTree operation.

impl Client

pub fn delete_what_if_analysis(&self) -> DeleteWhatIfAnalysisFluentBuilder

Constructs a fluent builder for the DeleteWhatIfAnalysis operation.

impl Client

pub fn delete_what_if_forecast(&self) -> DeleteWhatIfForecastFluentBuilder

Constructs a fluent builder for the DeleteWhatIfForecast operation.

impl Client

pub fn delete_what_if_forecast_export(&self) -> DeleteWhatIfForecastExportFluentBuilder

Constructs a fluent builder for the DeleteWhatIfForecastExport operation.

impl Client

pub fn describe_auto_predictor(&self) -> DescribeAutoPredictorFluentBuilder

Constructs a fluent builder for the DescribeAutoPredictor operation.

impl Client

pub fn describe_dataset(&self) -> DescribeDatasetFluentBuilder

Constructs a fluent builder for the DescribeDataset operation.

impl Client

pub fn describe_dataset_group(&self) -> DescribeDatasetGroupFluentBuilder

Constructs a fluent builder for the DescribeDatasetGroup operation.

impl Client

pub fn describe_dataset_import_job(&self) -> DescribeDatasetImportJobFluentBuilder

Constructs a fluent builder for the DescribeDatasetImportJob operation.

impl Client

pub fn describe_explainability(&self) -> DescribeExplainabilityFluentBuilder

Constructs a fluent builder for the DescribeExplainability operation.

impl Client

pub fn describe_explainability_export(&self) -> DescribeExplainabilityExportFluentBuilder

Constructs a fluent builder for the DescribeExplainabilityExport operation.

impl Client

pub fn describe_forecast(&self) -> DescribeForecastFluentBuilder

Constructs a fluent builder for the DescribeForecast operation.

impl Client

pub fn describe_forecast_export_job(&self) -> DescribeForecastExportJobFluentBuilder

Constructs a fluent builder for the DescribeForecastExportJob operation.

impl Client

pub fn describe_monitor(&self) -> DescribeMonitorFluentBuilder

Constructs a fluent builder for the DescribeMonitor operation.

impl Client

pub fn describe_predictor(&self) -> DescribePredictorFluentBuilder

Constructs a fluent builder for the DescribePredictor operation.

impl Client

pub fn describe_predictor_backtest_export_job(&self) -> DescribePredictorBacktestExportJobFluentBuilder

Constructs a fluent builder for the DescribePredictorBacktestExportJob operation.

impl Client

pub fn describe_what_if_analysis(&self) -> DescribeWhatIfAnalysisFluentBuilder

Constructs a fluent builder for the DescribeWhatIfAnalysis operation.

impl Client

pub fn describe_what_if_forecast(&self) -> DescribeWhatIfForecastFluentBuilder

Constructs a fluent builder for the DescribeWhatIfForecast operation.

impl Client

pub fn describe_what_if_forecast_export(&self) -> DescribeWhatIfForecastExportFluentBuilder

Constructs a fluent builder for the DescribeWhatIfForecastExport operation.

impl Client

pub fn get_accuracy_metrics(&self) -> GetAccuracyMetricsFluentBuilder

Constructs a fluent builder for the GetAccuracyMetrics operation.

impl Client

pub fn list_dataset_groups(&self) -> ListDatasetGroupsFluentBuilder

Constructs a fluent builder for the ListDatasetGroups operation. This operation supports pagination; See into_paginator().
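
For paginated operations such as this one, into_paginator() drives the NextToken loop automatically. A sketch, assuming a recent SDK version in which list accessors return slices:

```rust
async fn print_dataset_groups(
    client: &aws_sdk_forecast::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    let mut pages = client
        .list_dataset_groups()
        .into_paginator()
        .send();
    // Each `next().await` yields one page until NextToken is exhausted.
    while let Some(page) = pages.next().await {
        let page = page?;
        for group in page.dataset_groups() {
            println!("{:?}", group.dataset_group_name());
        }
    }
    Ok(())
}
```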

impl Client

pub fn list_dataset_import_jobs(&self) -> ListDatasetImportJobsFluentBuilder

Constructs a fluent builder for the ListDatasetImportJobs operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The number of items to return in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the datasets that match the statement from the list, respectively. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the datasets that match the statement, specify IS. To exclude matching datasets, specify IS_NOT.

      • Key - The name of the parameter to filter on. Valid values are DatasetArn and Status.

      • Value - The value to match.

      For example, to list all dataset import jobs whose status is ACTIVE, you specify the following filter:

      "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]


  • On success, responds with ListDatasetImportJobsOutput with field(s):
  • On failure, responds with SdkError<ListDatasetImportJobsError>
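
The Filter list above maps onto the typed builder as in this sketch (the status value is illustrative):

```rust
use aws_sdk_forecast::types::{Filter, FilterConditionString};

async fn list_active_import_jobs(
    client: &aws_sdk_forecast::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    // Equivalent to: "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]
    let active_only = Filter::builder()
        .condition(FilterConditionString::Is)
        .key("Status")
        .value("ACTIVE")
        .build()?; // Filter has required fields, so build() returns a Result
    let resp = client
        .list_dataset_import_jobs()
        .filters(active_only) // repeat .filters(...) to add more filters
        .max_results(25)
        .send()
        .await?;
    println!("{} matching jobs", resp.dataset_import_jobs().len());
    Ok(())
}
```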
impl Client

pub fn list_datasets(&self) -> ListDatasetsFluentBuilder

Constructs a fluent builder for the ListDatasets operation. This operation supports pagination; See into_paginator().

impl Client

pub fn list_explainabilities(&self) -> ListExplainabilitiesFluentBuilder

Constructs a fluent builder for the ListExplainabilities operation. This operation supports pagination; See into_paginator().

impl Client

pub fn list_explainability_exports(&self) -> ListExplainabilityExportsFluentBuilder

Constructs a fluent builder for the ListExplainabilityExports operation. This operation supports pagination; See into_paginator().

impl Client

pub fn list_forecast_export_jobs(&self) -> ListForecastExportJobsFluentBuilder

Constructs a fluent builder for the ListForecastExportJobs operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The number of items to return in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the forecast export jobs that match the statement, specify IS. To exclude matching forecast export jobs, specify IS_NOT.

      • Key - The name of the parameter to filter on. Valid values are ForecastArn and Status.

      • Value - The value to match.

      For example, to list all jobs that export a forecast named electricityforecast, specify the following filter:

      "Filters": [ { "Condition": "IS", "Key": "ForecastArn", "Value": "arn:aws:forecast:us-west-2: :forecast/electricityforecast" } ]


  • On success, responds with ListForecastExportJobsOutput with field(s):
  • On failure, responds with SdkError<ListForecastExportJobsError>
impl Client

pub fn list_forecasts(&self) -> ListForecastsFluentBuilder

Constructs a fluent builder for the ListForecasts operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The number of items to return in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the forecasts that match the statement from the list, respectively. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the forecasts that match the statement, specify IS. To exclude matching forecasts, specify IS_NOT.

      • Key - The name of the parameter to filter on. Valid values are DatasetGroupArn, PredictorArn, and Status.

      • Value - The value to match.

      For example, to list all forecasts whose status is not ACTIVE, you would specify:

      "Filters": [ { "Condition": "IS_NOT", "Key": "Status", "Value": "ACTIVE" } ]


  • On success, responds with ListForecastsOutput with field(s):
  • On failure, responds with SdkError<ListForecastsError>
impl Client

pub fn list_monitor_evaluations(&self) -> ListMonitorEvaluationsFluentBuilder

Constructs a fluent builder for the ListMonitorEvaluations operation. This operation supports pagination; See into_paginator().

impl Client

pub fn list_monitors(&self) -> ListMonitorsFluentBuilder

Constructs a fluent builder for the ListMonitors operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The maximum number of monitors to include in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the resources that match the statement from the list. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT.

      • Key - The name of the parameter to filter on. The only valid value is Status.

      • Value - The value to match.

      For example, to list all monitors whose status is ACTIVE, you would specify:

      "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]


  • On success, responds with ListMonitorsOutput with field(s):
  • On failure, responds with SdkError<ListMonitorsError>
impl Client

pub fn list_predictor_backtest_export_jobs(&self) -> ListPredictorBacktestExportJobsFluentBuilder

Constructs a fluent builder for the ListPredictorBacktestExportJobs operation. This operation supports pagination; See into_paginator().

impl Client

pub fn list_predictors(&self) -> ListPredictorsFluentBuilder

Constructs a fluent builder for the ListPredictors operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The number of items to return in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the predictors that match the statement from the list, respectively. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the predictors that match the statement, specify IS. To exclude matching predictors, specify IS_NOT.

      • Key - The name of the parameter to filter on. Valid values are DatasetGroupArn and Status.

      • Value - The value to match.

      For example, to list all predictors whose status is ACTIVE, you would specify:

      "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]


  • On success, responds with ListPredictorsOutput with field(s):
  • On failure, responds with SdkError<ListPredictorsError>
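
On failure, the SdkError can be narrowed to the modeled service error in order to react to specific conditions; a sketch:

```rust
async fn list_predictors_with_hint(client: &aws_sdk_forecast::Client) {
    match client.list_predictors().send().await {
        Ok(output) => println!("{} predictors", output.predictors().len()),
        Err(err) => {
            // as_service_error() returns the modeled ListPredictorsError, if any.
            if let Some(service_err) = err.as_service_error() {
                if service_err.is_invalid_next_token_exception() {
                    eprintln!("pagination token expired; restart the listing");
                } else {
                    eprintln!("service error: {service_err}");
                }
            } else {
                // Timeouts, connector failures, and other non-service errors.
                eprintln!("transport error: {err}");
            }
        }
    }
}
```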
impl Client

pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder

Constructs a fluent builder for the ListTagsForResource operation.

impl Client

pub fn list_what_if_analyses(&self) -> ListWhatIfAnalysesFluentBuilder

Constructs a fluent builder for the ListWhatIfAnalyses operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The number of items to return in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the what-if analysis jobs that match the statement from the list, respectively. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the what-if analysis jobs that match the statement, specify IS. To exclude matching what-if analysis jobs, specify IS_NOT.

      • Key - The name of the parameter to filter on. Valid values are WhatIfAnalysisArn and Status.

      • Value - The value to match.

      For example, to list the what-if analysis named electricityWhatIf, specify the following filter:

      "Filters": [ { "Condition": "IS", "Key": "WhatIfAnalysisArn", "Value": "arn:aws:forecast:us-west-2: :forecast/electricityWhatIf" } ]


  • On success, responds with ListWhatIfAnalysesOutput with field(s):
  • On failure, responds with SdkError<ListWhatIfAnalysesError>
impl Client

pub fn list_what_if_forecast_exports(&self) -> ListWhatIfForecastExportsFluentBuilder

Constructs a fluent builder for the ListWhatIfForecastExports operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The number of items to return in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the what-if forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the what-if forecast export jobs that match the statement, specify IS. To exclude matching what-if forecast export jobs, specify IS_NOT.

      • Key - The name of the parameter to filter on. Valid values are WhatIfForecastExportArn and Status.

      • Value - The value to match.

      For example, to list the what-if forecast export job named electricityWIFExport, specify the following filter:

      "Filters": [ { "Condition": "IS", "Key": "WhatIfForecastExportArn", "Value": "arn:aws:forecast:us-west-2: :forecast/electricityWIFExport" } ]


  • On success, responds with ListWhatIfForecastExportsOutput with field(s):
  • On failure, responds with SdkError<ListWhatIfForecastExportsError>
impl Client

pub fn list_what_if_forecasts(&self) -> ListWhatIfForecastsFluentBuilder

Constructs a fluent builder for the ListWhatIfForecasts operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      The number of items to return in the response.


    • filters(Filter) / set_filters(Option<Vec::<Filter>>):
      required: false

      An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the what-if forecasts that match the statement from the list, respectively. The match statement consists of a key and a value.

      Filter properties

      • Condition - The condition to apply. Valid values are IS and IS_NOT. To include the what-if forecasts that match the statement, specify IS. To exclude matching what-if forecasts, specify IS_NOT.

      • Key - The name of the parameter to filter on. Valid values are WhatIfForecastArn and Status.

      • Value - The value to match.

      For example, to list the what-if forecast named electricityWhatIfForecast, specify the following filter:

      "Filters": [ { "Condition": "IS", "Key": "WhatIfForecastArn", "Value": "arn:aws:forecast:us-west-2: :forecast/electricityWhatIfForecast" } ]


  • On success, responds with ListWhatIfForecastsOutput with field(s):
  • On failure, responds with SdkError<ListWhatIfForecastsError>
impl Client

pub fn resume_resource(&self) -> ResumeResourceFluentBuilder

Constructs a fluent builder for the ResumeResource operation.

impl Client

pub fn stop_resource(&self) -> StopResourceFluentBuilder

Constructs a fluent builder for the StopResource operation.

impl Client

pub fn tag_resource(&self) -> TagResourceFluentBuilder

Constructs a fluent builder for the TagResource operation.

  • The fluent builder is configurable:
    • resource_arn(impl Into<String>) / set_resource_arn(Option<String>):
      required: true

      The Amazon Resource Name (ARN) that identifies the resource that the tags are added to.


    • tags(Tag) / set_tags(Option<Vec::<Tag>>):
      required: true

      The tags to add to the resource. A tag is an array of key-value pairs.

      The following basic restrictions apply to tags:

      • Maximum number of tags per resource - 50.

      • For each resource, each tag key must be unique, and each tag key can have only one value.

      • Maximum key length - 128 Unicode characters in UTF-8.

      • Maximum value length - 256 Unicode characters in UTF-8.

      • If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.

      • Tag keys and values are case sensitive.

      • Do not use aws:, AWS:, or any uppercase or lowercase combination of these as a prefix for keys; this prefix is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it a user tag and it counts against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.


  • On success, responds with TagResourceOutput
  • On failure, responds with SdkError<TagResourceError>
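
A minimal tagging sketch; the ARN and the tag key/value are placeholder values:

```rust
use aws_sdk_forecast::types::Tag;

async fn tag_predictor(
    client: &aws_sdk_forecast::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    // Tag requires both key and value, so build() returns a Result.
    let tag = Tag::builder()
        .key("team") // keys are case sensitive and must be unique per resource
        .value("demand-planning")
        .build()?;
    client
        .tag_resource()
        .resource_arn("arn:aws:forecast:us-west-2:123456789012:predictor/my_predictor")
        .tags(tag) // repeat .tags(...) to attach more than one tag
        .send()
        .await?;
    Ok(())
}
```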
impl Client

pub fn untag_resource(&self) -> UntagResourceFluentBuilder

Constructs a fluent builder for the UntagResource operation.

impl Client

pub fn update_dataset_group(&self) -> UpdateDatasetGroupFluentBuilder

Constructs a fluent builder for the UpdateDatasetGroup operation.

impl Client

pub fn from_conf(conf: Config) -> Self

Creates a new client from the service Config.

§Panics

This method will panic in the following cases:

  • Retries or timeouts are enabled without a sleep_impl configured.
  • Identity caching is enabled without a sleep_impl and time_source configured.
  • No behavior_version is provided.

The panic message for each of these will have instructions on how to resolve them.
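
A sketch of building a service Config directly and handing it to from_conf; the region and behavior version shown are illustrative choices, and credentials would still need to be supplied before making requests:

```rust
use aws_sdk_forecast::config::{BehaviorVersion, Config, Region};

fn client_from_conf() -> aws_sdk_forecast::Client {
    // Supplying a behavior_version avoids the corresponding panic described above.
    let conf = Config::builder()
        .behavior_version(BehaviorVersion::latest())
        .region(Region::new("us-west-2"))
        .build();
    aws_sdk_forecast::Client::from_conf(conf)
}
```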

pub fn config(&self) -> &Config

Returns the client’s configuration.

impl Client

pub fn new(sdk_config: &SdkConfig) -> Self

Creates a new client from an SDK Config.

§Panics
  • This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
  • This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.
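
A sketch of the usual construction path that avoids all three panics, since aws_config supplies the sleep implementation and HTTP connector and the behavior version is pinned explicitly:

```rust
async fn make_client() -> aws_sdk_forecast::Client {
    // defaults(BehaviorVersion::latest()) sets the behavior version up front,
    // so Client::new will not panic over a missing BehaviorVersion.
    let sdk_config = aws_config::defaults(aws_config::BehaviorVersion::latest())
        .load()
        .await;
    aws_sdk_forecast::Client::new(&sdk_config)
}
```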

§Trait Implementations

impl Clone for Client

fn clone(&self) -> Client

Returns a copy of the value. Read more

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more

impl Debug for Client

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

§Auto Trait Implementations

impl Freeze for Client

impl !RefUnwindSafe for Client

impl Send for Client

impl Sync for Client

impl Unpin for Client

impl !UnwindSafe for Client

§Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more