Struct aws_sdk_forecast::Client
pub struct Client { /* private fields */ }
Client for Amazon Forecast Service
Client for invoking operations on Amazon Forecast Service. Each operation on Amazon Forecast Service is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
Constructing a Client
A Config is required to construct a client. For most use cases, the aws-config
crate should be used to automatically resolve this config using
aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared
across multiple different AWS SDK clients. This config resolution process can be customized
by calling aws_config::from_env() instead, which returns a ConfigLoader that uses
the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_forecast::Client::new(&config);

Occasionally, SDKs may have additional service-specific values that can be set on the Config that
are absent from SdkConfig, or slightly different settings for a specific client may be desired.
The Config struct implements From<&SdkConfig>, so setting these specific settings can be
done as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_forecast::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Using the Client
A client has a function for every operation that can be performed by the service.
For example, the CreateAutoPredictor operation has
a Client::create_auto_predictor function, which returns a builder for that operation.
The fluent builder ultimately has a send() function that returns an async future that
returns a result, as illustrated below:
let result = client.create_auto_predictor()
.predictor_name("example")
.send()
    .await;

The underlying HTTP requests that get made by this can be modified with the customize_operation
function on the fluent builder. See the customize module for more information.
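As a sketch of what such customization can look like (the exact customization API differs across aws-sdk-rust releases; recent releases expose a customize() method on the fluent builder, and the header name below is purely illustrative):

```rust
// Hypothetical sketch: assumes a recent aws-sdk-rust where the fluent
// builder exposes `customize()`; the header name is made up for the demo,
// and valid AWS credentials are assumed to be available.
async fn tagged_request(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    client
        .create_auto_predictor()
        .predictor_name("example")
        .customize()
        .mutate_request(|req| {
            // Inject an extra header before the request is dispatched.
            req.headers_mut().insert("x-example-trace", "demo");
        })
        .send()
        .await?;
    Ok(())
}
```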
Implementations
impl Client

pub fn create_auto_predictor(&self) -> CreateAutoPredictorFluentBuilder
Constructs a fluent builder for the CreateAutoPredictor operation.
- The fluent builder is configurable:
predictor_name(impl Into<String>) / set_predictor_name(Option<String>): A unique name for the predictor.
forecast_horizon(i32) / set_forecast_horizon(Option<i32>): The number of time-steps that the model predicts. The forecast horizon is also called the prediction length.
The maximum forecast horizon is the lesser of 500 time-steps or 1/4 of the TARGET_TIME_SERIES dataset length. If you are retraining an existing AutoPredictor, then the maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.
If you are upgrading to an AutoPredictor or retraining an existing AutoPredictor, you cannot update the forecast horizon parameter. You can meet this requirement by providing longer time-series in the dataset.
forecast_types(impl Into<String>) / set_forecast_types(Option<Vec<String>>): The forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean.
forecast_dimensions(impl Into<String>) / set_forecast_dimensions(Option<Vec<String>>): An array of dimension (field) names that specify how to group the generated forecast.
For example, if you are generating forecasts for item sales across all your stores, and your dataset contains a store_id field, you would specify store_id as a dimension to group sales forecasts for each store.
forecast_frequency(impl Into<String>) / set_forecast_frequency(Option<String>): The frequency of predictions in a forecast.
Valid intervals are an integer followed by Y (Year), M (Month), W (Week), D (Day), H (Hour), and min (Minute). For example, “1D” indicates every day and “15min” indicates every 15 minutes. You cannot specify a value that would overlap with the next larger frequency. That means, for example, you cannot specify a frequency of 60 minutes, because that is equivalent to 1 hour. The valid values for each frequency are the following:
- Minute - 1-59
- Hour - 1-23
- Day - 1-6
- Week - 1-4
- Month - 1-11
- Year - 1
Thus, if you want every other week forecasts, specify “2W”. Or, if you want quarterly forecasts, you specify “3M”.
The frequency must be greater than or equal to the TARGET_TIME_SERIES dataset frequency.
When a RELATED_TIME_SERIES dataset is provided, the frequency must be equal to the RELATED_TIME_SERIES dataset frequency.
data_config(DataConfig) / set_data_config(Option<DataConfig>): The data configuration for your dataset group and any additional datasets.
encryption_config(EncryptionConfig) / set_encryption_config(Option<EncryptionConfig>): A Key Management Service (KMS) key and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key. You can specify this optional object in the CreateDataset and CreatePredictor requests.
reference_predictor_arn(impl Into<String>) / set_reference_predictor_arn(Option<String>): The ARN of the predictor to retrain or upgrade. This parameter is only used when retraining or upgrading a predictor. When creating a new predictor, do not specify a value for this parameter.
When upgrading or retraining a predictor, only specify values for the ReferencePredictorArn and PredictorName. The value for PredictorName must be a unique predictor name.
optimization_metric(OptimizationMetric) / set_optimization_metric(Option<OptimizationMetric>): The accuracy metric used to optimize the predictor.
explain_predictor(bool) / set_explain_predictor(Option<bool>): Create an Explainability resource for the predictor.
tags(Tag) / set_tags(Option<Vec<Tag>>): Optional metadata to help you categorize and organize your predictors. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
- For each resource, each tag key must be unique and each tag key must have one value.
- Maximum number of tags per resource: 50.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
- Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
monitor_config(MonitorConfig) / set_monitor_config(Option<MonitorConfig>): The configuration details for predictor monitoring. Provide a name for the monitor resource to enable predictor monitoring.
Predictor monitoring allows you to see how your predictor’s performance changes over time. For more information, see Predictor Monitoring.
time_alignment_boundary(TimeAlignmentBoundary) / set_time_alignment_boundary(Option<TimeAlignmentBoundary>): The time boundary Forecast uses to align and aggregate any data that doesn’t align with your forecast frequency. Provide the unit of time and the time boundary as a key value pair. For more information on specifying a time boundary, see Specifying a Time Boundary. If you don’t provide a time boundary, Forecast uses a set of Default Time Boundaries.
- On success, responds with CreateAutoPredictorOutput with field(s): predictor_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor.
- On failure, responds with SdkError<CreateAutoPredictorError>
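A hedged usage sketch of this operation (the names and values are hypothetical; a real call also needs a data_config pointing at your dataset group, and valid AWS credentials):

```rust
// Hypothetical usage sketch; assumes `client: aws_sdk_forecast::Client`
// is already constructed and AWS credentials are available.
async fn create_auto_predictor_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    let resp = client
        .create_auto_predictor()
        .predictor_name("daily_sales_predictor") // unique name (hypothetical)
        .forecast_horizon(14)                    // predict 14 time-steps ahead
        .forecast_types("0.5")                   // repeated calls append quantiles
        .forecast_types("mean")
        .forecast_frequency("1D")                // daily forecasts
        .send()
        .await?;
    println!("predictor ARN: {:?}", resp.predictor_arn());
    Ok(())
}
```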
impl Client

pub fn create_dataset(&self) -> CreateDatasetFluentBuilder
Constructs a fluent builder for the CreateDataset operation.
- The fluent builder is configurable:
dataset_name(impl Into<String>) / set_dataset_name(Option<String>): A name for the dataset.
domain(Domain) / set_domain(Option<Domain>): The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match.
The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see Importing datasets.
dataset_type(DatasetType) / set_dataset_type(Option<DatasetType>): The dataset type. Valid values depend on the chosen Domain.
data_frequency(impl Into<String>) / set_data_frequency(Option<String>): The frequency of data collection. This parameter is required for RELATED_TIME_SERIES datasets.
Valid intervals are an integer followed by Y (Year), M (Month), W (Week), D (Day), H (Hour), and min (Minute). For example, “1D” indicates every day and “15min” indicates every 15 minutes. You cannot specify a value that would overlap with the next larger frequency. That means, for example, you cannot specify a frequency of 60 minutes, because that is equivalent to 1 hour. The valid values for each frequency are the following:
- Minute - 1-59
- Hour - 1-23
- Day - 1-6
- Week - 1-4
- Month - 1-11
- Year - 1
Thus, if you want every other week forecasts, specify “2W”. Or, if you want quarterly forecasts, you specify “3M”.
schema(Schema) / set_schema(Option<Schema>): The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see Dataset Domains and Dataset Types.
encryption_config(EncryptionConfig) / set_encryption_config(Option<EncryptionConfig>): A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
tags(Tag) / set_tags(Option<Vec<Tag>>): The optional metadata that you apply to the dataset to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with CreateDatasetOutput with field(s): dataset_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset.
- On failure, responds with SdkError<CreateDatasetError>
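A hedged sketch of creating a RETAIL target-time-series dataset with the item_id/timestamp/demand fields described above (assumes the generated types live under aws_sdk_forecast::types, as in recent SDK releases; all names are hypothetical):

```rust
// Hypothetical sketch; assumes credentials are configured and that the
// generated model types are under `aws_sdk_forecast::types`.
use aws_sdk_forecast::types::{AttributeType, DatasetType, Domain, Schema, SchemaAttribute};

async fn create_dataset_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    // RETAIL + TARGET_TIME_SERIES requires item_id, timestamp, and demand.
    let schema = Schema::builder()
        .attributes(SchemaAttribute::builder()
            .attribute_name("item_id")
            .attribute_type(AttributeType::String)
            .build())
        .attributes(SchemaAttribute::builder()
            .attribute_name("timestamp")
            .attribute_type(AttributeType::Timestamp)
            .build())
        .attributes(SchemaAttribute::builder()
            .attribute_name("demand")
            .attribute_type(AttributeType::Float)
            .build())
        .build();

    let resp = client
        .create_dataset()
        .dataset_name("retail_demand") // hypothetical name
        .domain(Domain::Retail)
        .dataset_type(DatasetType::TargetTimeSeries)
        .data_frequency("1D")
        .schema(schema)
        .send()
        .await?;
    println!("dataset ARN: {:?}", resp.dataset_arn());
    Ok(())
}
```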
impl Client

pub fn create_dataset_group(&self) -> CreateDatasetGroupFluentBuilder
Constructs a fluent builder for the CreateDatasetGroup operation.
- The fluent builder is configurable:
dataset_group_name(impl Into<String>) / set_dataset_group_name(Option<String>): A name for the dataset group.
domain(Domain) / set_domain(Option<Domain>): The domain associated with the dataset group. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDataset operation must match.
The Domain and DatasetType that you choose determine the fields that must be present in training data that you import to a dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires that item_id, timestamp, and demand fields are present in your data. For more information, see Dataset groups.
dataset_arns(impl Into<String>) / set_dataset_arns(Option<Vec<String>>): An array of Amazon Resource Names (ARNs) of the datasets that you want to include in the dataset group.
tags(Tag) / set_tags(Option<Vec<Tag>>): The optional metadata that you apply to the dataset group to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
- On success, responds with CreateDatasetGroupOutput with field(s): dataset_group_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset group.
- On failure, responds with SdkError<CreateDatasetGroupError>
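A hedged sketch of this operation (the dataset ARN is a placeholder, and the domain must match the Domain used when the member datasets were created):

```rust
// Hypothetical sketch; assumes credentials are configured.
use aws_sdk_forecast::types::Domain;

async fn create_dataset_group_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    let resp = client
        .create_dataset_group()
        .dataset_group_name("retail_demand_group") // hypothetical name
        .domain(Domain::Retail)
        // Repeated calls append ARNs; this one is a placeholder.
        .dataset_arns("arn:aws:forecast:us-east-1:123456789012:dataset/retail_demand")
        .send()
        .await?;
    println!("dataset group ARN: {:?}", resp.dataset_group_arn());
    Ok(())
}
```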
impl Client

pub fn create_dataset_import_job(&self) -> CreateDatasetImportJobFluentBuilder
Constructs a fluent builder for the CreateDatasetImportJob operation.
- The fluent builder is configurable:
dataset_import_job_name(impl Into<String>) / set_dataset_import_job_name(Option<String>): The name for the dataset import job. We recommend including the current timestamp in the name, for example, 20190721DatasetImport. This can help you avoid getting a ResourceAlreadyExistsException exception.
dataset_arn(impl Into<String>) / set_dataset_arn(Option<String>): The Amazon Resource Name (ARN) of the Amazon Forecast dataset that you want to import data to.
data_source(DataSource) / set_data_source(Option<DataSource>): The location of the training data to import and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data. The training data must be stored in an Amazon S3 bucket.
If encryption is used, DataSource must include a Key Management Service (KMS) key and the IAM role must allow Amazon Forecast permission to access the key. The KMS key and IAM role must match those specified in the EncryptionConfig parameter of the CreateDataset operation.
timestamp_format(impl Into<String>) / set_timestamp_format(Option<String>): The format of timestamps in the dataset. The format that you specify depends on the DataFrequency specified when the dataset was created. The following formats are supported:
- “yyyy-MM-dd”
For the following data frequencies: Y, M, W, and D
- “yyyy-MM-dd HH:mm:ss”
For the following data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D
If the format isn’t specified, Amazon Forecast expects the format to be “yyyy-MM-dd HH:mm:ss”.
time_zone(impl Into<String>) / set_time_zone(Option<String>): A single time zone for every item in your dataset. This option is ideal for datasets with all timestamps within a single time zone, or if all timestamps are normalized to a single time zone.
Refer to the Joda-Time API for a complete list of valid time zone names.
use_geolocation_for_time_zone(bool) / set_use_geolocation_for_time_zone(Option<bool>): Automatically derive time zone information from the geolocation attribute. This option is ideal for datasets that contain timestamps in multiple time zones and those timestamps are expressed in local time.
geolocation_format(impl Into<String>) / set_geolocation_format(Option<String>): The format of the geolocation attribute. The geolocation attribute can be formatted in one of two ways:
- LAT_LONG - the latitude and longitude in decimal format (Example: 47.61_-122.33).
- CC_POSTALCODE (US Only) - the country code (US), followed by the 5-digit ZIP code (Example: US_98121).
tags(Tag) / set_tags(Option<Vec<Tag>>): The optional metadata that you apply to the dataset import job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
format(impl Into<String>) / set_format(Option<String>): The format of the imported data, CSV or PARQUET. The default value is CSV.
import_mode(ImportMode) / set_import_mode(Option<ImportMode>): Specifies whether the dataset import job is a FULL or INCREMENTAL import. A FULL dataset import replaces all of the existing data with the newly imported data. An INCREMENTAL import appends the imported data to the existing data.
- On success, responds with CreateDatasetImportJobOutput with field(s): dataset_import_job_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset import job.
- On failure, responds with SdkError<CreateDatasetImportJobError>
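A hedged sketch of an import job (the bucket path, role ARN, and dataset ARN are placeholders; in recent SDK releases, build() on types with required members validates them and returns a Result, hence the expect() calls):

```rust
// Hypothetical sketch; assumes credentials are configured and that the
// generated model types are under `aws_sdk_forecast::types`.
use aws_sdk_forecast::types::{DataSource, S3Config};

async fn import_job_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    let source = DataSource::builder()
        .s3_config(S3Config::builder()
            .path("s3://example-bucket/retail_demand.csv")          // placeholder
            .role_arn("arn:aws:iam::123456789012:role/ForecastS3Access") // placeholder
            .build()
            .expect("path and role_arn are set"))
        .build()
        .expect("s3_config is set");

    let resp = client
        .create_dataset_import_job()
        // A timestamp in the name helps avoid ResourceAlreadyExistsException.
        .dataset_import_job_name("20190721DatasetImport")
        .dataset_arn("arn:aws:forecast:us-east-1:123456789012:dataset/retail_demand")
        .data_source(source)
        .timestamp_format("yyyy-MM-dd")
        .send()
        .await?;
    println!("import job ARN: {:?}", resp.dataset_import_job_arn());
    Ok(())
}
```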
impl Client

pub fn create_explainability(&self) -> CreateExplainabilityFluentBuilder
Constructs a fluent builder for the CreateExplainability operation.
- The fluent builder is configurable:
explainability_name(impl Into<String>) / set_explainability_name(Option<String>): A unique name for the Explainability.
resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) of the Predictor or Forecast used to create the Explainability.
explainability_config(ExplainabilityConfig) / set_explainability_config(Option<ExplainabilityConfig>): The configuration settings that define the granularity of time series and time points for the Explainability.
data_source(DataSource) / set_data_source(Option<DataSource>): The source of your data, an Identity and Access Management (IAM) role that allows Amazon Forecast to access the data and, optionally, a Key Management Service (KMS) key.
schema(Schema) / set_schema(Option<Schema>): Defines the fields of a dataset.
enable_visualization(bool) / set_enable_visualization(Option<bool>): Create an Explainability visualization that is viewable within the Amazon Web Services console.
start_date_time(impl Into<String>) / set_start_date_time(Option<String>): If TimePointGranularity is set to SPECIFIC, define the first point for the Explainability. Use the following timestamp format: yyyy-MM-ddTHH:mm:ss (example: 2015-01-01T20:00:00)
end_date_time(impl Into<String>) / set_end_date_time(Option<String>): If TimePointGranularity is set to SPECIFIC, define the last time point for the Explainability. Use the following timestamp format: yyyy-MM-ddTHH:mm:ss (example: 2015-01-01T20:00:00)
tags(Tag) / set_tags(Option<Vec<Tag>>): Optional metadata to help you categorize and organize your resources. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
- For each resource, each tag key must be unique and each tag key must have one value.
- Maximum number of tags per resource: 50.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
- Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
- On success, responds with CreateExplainabilityOutput with field(s): explainability_arn(Option<String>): The Amazon Resource Name (ARN) of the Explainability.
- On failure, responds with SdkError<CreateExplainabilityError>
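A hedged sketch of this operation (the predictor ARN is a placeholder; the granularity enums and the Result-returning build() are assumptions based on recent SDK releases):

```rust
// Hypothetical sketch; assumes credentials are configured.
use aws_sdk_forecast::types::{ExplainabilityConfig, TimePointGranularity, TimeSeriesGranularity};

async fn explainability_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    // Cover all time series and all time points rather than SPECIFIC ones.
    let config = ExplainabilityConfig::builder()
        .time_series_granularity(TimeSeriesGranularity::All)
        .time_point_granularity(TimePointGranularity::All)
        .build()
        .expect("both granularities are set");

    let resp = client
        .create_explainability()
        .explainability_name("predictor_explainability") // hypothetical name
        .resource_arn("arn:aws:forecast:us-east-1:123456789012:predictor/daily_sales_predictor")
        .explainability_config(config)
        .send()
        .await?;
    println!("explainability ARN: {:?}", resp.explainability_arn());
    Ok(())
}
```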
impl Client

pub fn create_explainability_export(&self) -> CreateExplainabilityExportFluentBuilder
Constructs a fluent builder for the CreateExplainabilityExport operation.
- The fluent builder is configurable:
explainability_export_name(impl Into<String>) / set_explainability_export_name(Option<String>): A unique name for the Explainability export.
explainability_arn(impl Into<String>) / set_explainability_arn(Option<String>): The Amazon Resource Name (ARN) of the Explainability to export.
destination(DataDestination) / set_destination(Option<DataDestination>): The destination for an export job. Provide an S3 path, an Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and optionally a Key Management Service (KMS) key.
tags(Tag) / set_tags(Option<Vec<Tag>>): Optional metadata to help you categorize and organize your resources. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
- For each resource, each tag key must be unique and each tag key must have one value.
- Maximum number of tags per resource: 50.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
- Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
format(impl Into<String>) / set_format(Option<String>): The format of the exported data, CSV or PARQUET.
- On success, responds with CreateExplainabilityExportOutput with field(s): explainability_export_arn(Option<String>): The Amazon Resource Name (ARN) of the export.
- On failure, responds with SdkError<CreateExplainabilityExportError>
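A hedged sketch of an export (the S3 path, role ARN, and Explainability ARN are placeholders; the Result-returning build() reflects recent SDK releases):

```rust
// Hypothetical sketch; assumes credentials are configured.
use aws_sdk_forecast::types::{DataDestination, S3Config};

async fn export_explainability_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    let destination = DataDestination::builder()
        .s3_config(S3Config::builder()
            .path("s3://example-bucket/explainability/")                 // placeholder
            .role_arn("arn:aws:iam::123456789012:role/ForecastS3Access") // placeholder
            .build()
            .expect("path and role_arn are set"))
        .build()
        .expect("s3_config is set");

    let resp = client
        .create_explainability_export()
        .explainability_export_name("explainability_export") // hypothetical name
        .explainability_arn("arn:aws:forecast:us-east-1:123456789012:explainability/predictor_explainability")
        .destination(destination)
        .format("CSV")
        .send()
        .await?;
    println!("export ARN: {:?}", resp.explainability_export_arn());
    Ok(())
}
```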
impl Client

pub fn create_forecast(&self) -> CreateForecastFluentBuilder
Constructs a fluent builder for the CreateForecast operation.
- The fluent builder is configurable:
forecast_name(impl Into<String>) / set_forecast_name(Option<String>): A name for the forecast.
predictor_arn(impl Into<String>) / set_predictor_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor to use to generate the forecast.
forecast_types(impl Into<String>) / set_forecast_types(Option<Vec<String>>): The quantiles at which probabilistic forecasts are generated. You can currently specify up to 5 quantiles per forecast. Accepted values include 0.01 to 0.99 (increments of .01 only) and mean. The mean forecast is different from the median (0.50) when the distribution is not symmetric (for example, Beta and Negative Binomial).
The default quantiles are the quantiles you specified during predictor creation. If you didn’t specify quantiles, the default values are [“0.1”, “0.5”, “0.9”].
tags(Tag) / set_tags(Option<Vec<Tag>>): The optional metadata that you apply to the forecast to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
time_series_selector(TimeSeriesSelector) / set_time_series_selector(Option<TimeSeriesSelector>): Defines the set of time series that are used to create the forecasts in a TimeSeriesIdentifiers object.
The TimeSeriesIdentifiers object needs the following information:
- DataSource
- Format
- Schema
- On success, responds with CreateForecastOutput with field(s): forecast_arn(Option<String>): The Amazon Resource Name (ARN) of the forecast.
- On failure, responds with SdkError<CreateForecastError>
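A hedged sketch of generating a forecast from a trained predictor (the predictor ARN is a placeholder):

```rust
// Hypothetical sketch; assumes credentials are configured and that the
// referenced predictor exists and is ACTIVE.
async fn create_forecast_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    let resp = client
        .create_forecast()
        .forecast_name("daily_sales_forecast") // hypothetical name
        .predictor_arn("arn:aws:forecast:us-east-1:123456789012:predictor/daily_sales_predictor")
        // Repeated calls append quantiles; omit to use the predictor defaults.
        .forecast_types("0.1")
        .forecast_types("0.5")
        .forecast_types("0.9")
        .send()
        .await?;
    println!("forecast ARN: {:?}", resp.forecast_arn());
    Ok(())
}
```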
impl Client

pub fn create_forecast_export_job(&self) -> CreateForecastExportJobFluentBuilder
Constructs a fluent builder for the CreateForecastExportJob operation.
- The fluent builder is configurable:
forecast_export_job_name(impl Into<String>) / set_forecast_export_job_name(Option<String>): The name for the forecast export job.
forecast_arn(impl Into<String>) / set_forecast_arn(Option<String>): The Amazon Resource Name (ARN) of the forecast that you want to export.
destination(DataDestination) / set_destination(Option<DataDestination>): The location where you want to save the forecast and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location. The forecast must be exported to an Amazon S3 bucket.
If encryption is used, Destination must include a Key Management Service (KMS) key. The IAM role must allow Amazon Forecast permission to access the key.
tags(Tag) / set_tags(Option<Vec<Tag>>): The optional metadata that you apply to the forecast export job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
format(impl Into<String>) / set_format(Option<String>): The format of the exported data, CSV or PARQUET. The default value is CSV.
- On success, responds with CreateForecastExportJobOutput with field(s): forecast_export_job_arn(Option<String>): The Amazon Resource Name (ARN) of the export job.
- On failure, responds with SdkError<CreateForecastExportJobError>
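A hedged sketch of exporting a forecast to S3 (all ARNs and the bucket path are placeholders; the Result-returning build() reflects recent SDK releases):

```rust
// Hypothetical sketch; assumes credentials are configured.
use aws_sdk_forecast::types::{DataDestination, S3Config};

async fn export_forecast_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    let destination = DataDestination::builder()
        .s3_config(S3Config::builder()
            .path("s3://example-bucket/forecasts/")                      // placeholder
            .role_arn("arn:aws:iam::123456789012:role/ForecastS3Access") // placeholder
            .build()
            .expect("path and role_arn are set"))
        .build()
        .expect("s3_config is set");

    let resp = client
        .create_forecast_export_job()
        .forecast_export_job_name("daily_sales_export") // hypothetical name
        .forecast_arn("arn:aws:forecast:us-east-1:123456789012:forecast/daily_sales_forecast")
        .destination(destination)
        .send()
        .await?;
    println!("export job ARN: {:?}", resp.forecast_export_job_arn());
    Ok(())
}
```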
impl Client

pub fn create_monitor(&self) -> CreateMonitorFluentBuilder
Constructs a fluent builder for the CreateMonitor operation.
- The fluent builder is configurable:
monitor_name(impl Into<String>) / set_monitor_name(Option<String>): The name of the monitor resource.
resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor to monitor.
tags(Tag) / set_tags(Option<Vec<Tag>>): A list of tags to apply to the monitor resource.
- On success, responds with CreateMonitorOutput with field(s): monitor_arn(Option<String>): The Amazon Resource Name (ARN) of the monitor resource.
- On failure, responds with SdkError<CreateMonitorError>
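A hedged sketch of enabling monitoring for a predictor (the predictor ARN is a placeholder):

```rust
// Hypothetical sketch; assumes credentials are configured and that the
// referenced predictor exists.
async fn create_monitor_example(
    client: &aws_sdk_forecast::Client,
) -> Result<(), aws_sdk_forecast::Error> {
    let resp = client
        .create_monitor()
        .monitor_name("daily_sales_monitor") // hypothetical name
        .resource_arn("arn:aws:forecast:us-east-1:123456789012:predictor/daily_sales_predictor")
        .send()
        .await?;
    println!("monitor ARN: {:?}", resp.monitor_arn());
    Ok(())
}
```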
impl Client

pub fn create_predictor(&self) -> CreatePredictorFluentBuilder
Constructs a fluent builder for the CreatePredictor operation.
- The fluent builder is configurable:
predictor_name(impl Into<String>) / set_predictor_name(Option<String>): A name for the predictor.
algorithm_arn(impl Into<String>) / set_algorithm_arn(Option<String>): The Amazon Resource Name (ARN) of the algorithm to use for model training. Required if PerformAutoML is not set to true.
Supported algorithms:
- arn:aws:forecast:::algorithm/ARIMA
- arn:aws:forecast:::algorithm/CNN-QR
- arn:aws:forecast:::algorithm/Deep_AR_Plus
- arn:aws:forecast:::algorithm/ETS
- arn:aws:forecast:::algorithm/NPTS
- arn:aws:forecast:::algorithm/Prophet
forecast_horizon(i32) / set_forecast_horizon(Option<i32>): Specifies the number of time-steps that the model is trained to predict. The forecast horizon is also called the prediction length. For example, if you configure a dataset for daily data collection (using the DataFrequency parameter of the CreateDataset operation) and set the forecast horizon to 10, the model returns predictions for 10 days. The maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.
forecast_types(impl Into<String>) / set_forecast_types(Option<Vec<String>>): Specifies the forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean. The default value is [“0.10”, “0.50”, “0.9”].
perform_auto_ml(bool) / set_perform_auto_ml(Option<bool>): Whether to perform AutoML. When Amazon Forecast performs AutoML, it evaluates the algorithms it provides and chooses the best algorithm and configuration for your training dataset. The default value is false. In this case, you are required to specify an algorithm. Set PerformAutoML to true to have Amazon Forecast perform AutoML. This is a good option if you aren’t sure which algorithm is suitable for your training data. In this case, PerformHPO must be false.
auto_ml_override_strategy(AutoMlOverrideStrategy) / set_auto_ml_override_strategy(Option<AutoMlOverrideStrategy>): The LatencyOptimized AutoML override strategy is only available in private beta. Contact Amazon Web Services Support or your account manager to learn more about access privileges. Used to override the default AutoML strategy, which is to optimize predictor accuracy. To apply an AutoML strategy that minimizes training time, use LatencyOptimized. This parameter is only valid for predictors trained using AutoML.
perform_hpo(bool) / set_perform_hpo(Option<bool>): Whether to perform hyperparameter optimization (HPO). HPO finds optimal hyperparameter values for your training data. The process of performing HPO is known as running a hyperparameter tuning job. The default value is false. In this case, Amazon Forecast uses default hyperparameter values from the chosen algorithm. To override the default values, set PerformHPO to true and, optionally, supply the HyperParameterTuningJobConfig object. The tuning job specifies a metric to optimize, which hyperparameters participate in tuning, and the valid range for each tunable hyperparameter. In this case, you are required to specify an algorithm and PerformAutoML must be false. The following algorithms support HPO:
- DeepAR+
- CNN-QR
training_parameters(impl Into<String>, impl Into<String>) / set_training_parameters(Option<HashMap<String, String>>): The hyperparameters to override for model training. The hyperparameters that you can override are listed in the individual algorithms. For the list of supported algorithms, see aws-forecast-choosing-recipes.
evaluation_parameters(EvaluationParameters) / set_evaluation_parameters(Option<EvaluationParameters>): Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.
hpo_config(HyperParameterTuningJobConfig) / set_hpo_config(Option<HyperParameterTuningJobConfig>): Provides hyperparameter override values for the algorithm. If you don’t provide this parameter, Amazon Forecast uses default values. The individual algorithms specify which hyperparameters support hyperparameter optimization (HPO). For more information, see aws-forecast-choosing-recipes. If you included the HPOConfig object, you must set PerformHPO to true.
input_data_config(InputDataConfig) / set_input_data_config(Option<InputDataConfig>): Describes the dataset group that contains the data to use to train the predictor.
featurization_config(FeaturizationConfig) / set_featurization_config(Option<FeaturizationConfig>): The featurization configuration.
encryption_config(EncryptionConfig) / set_encryption_config(Option<EncryptionConfig>): A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
tags(Tag) / set_tags(Option<Vec<Tag>>): The optional metadata that you apply to the predictor to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource: 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such, as a prefix for keys, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
optimization_metric(OptimizationMetric) / set_optimization_metric(Option<OptimizationMetric>): The accuracy metric used to optimize the predictor.
- On success, responds with
CreatePredictorOutput with field(s): predictor_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor.
- On failure, responds with
SdkError<CreatePredictorError>
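A minimal AutoML sketch of this operation: with PerformAutoML enabled no algorithm ARN is needed and PerformHPO must stay false, per the field notes above. The dataset-group ARN and names are placeholders, and the builder type paths may differ between SDK versions:

```rust
use aws_sdk_forecast::types::{FeaturizationConfig, InputDataConfig};

async fn train_predictor(client: &aws_sdk_forecast::Client) -> Result<(), aws_sdk_forecast::Error> {
    let output = client
        .create_predictor()
        .predictor_name("daily_demand_predictor")
        .perform_auto_ml(true)   // let Forecast pick the algorithm
        .forecast_horizon(10)    // predict 10 future time-steps
        .input_data_config(
            InputDataConfig::builder()
                .dataset_group_arn("arn:aws:forecast:us-west-2:123456789012:dataset-group/demand")
                .build(),
        )
        .featurization_config(
            FeaturizationConfig::builder()
                .forecast_frequency("D") // should match the dataset's DataFrequency
                .build(),
        )
        .send()
        .await?;

    println!("predictor ARN: {:?}", output.predictor_arn());
    Ok(())
}
```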
impl Client
pub fn create_predictor_backtest_export_job(&self) -> CreatePredictorBacktestExportJobFluentBuilder
Constructs a fluent builder for the CreatePredictorBacktestExportJob operation.
- The fluent builder is configurable:
predictor_backtest_export_job_name(impl Into<String>) / set_predictor_backtest_export_job_name(Option<String>): The name for the backtest export job.
predictor_arn(impl Into<String>) / set_predictor_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor that you want to export.
destination(DataDestination) / set_destination(Option<DataDestination>): The destination for an export job. Provide an S3 path, an Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and, optionally, a Key Management Service (KMS) key.
tags(Tag) / set_tags(Option<Vec<Tag>>): Optional metadata to help you categorize and organize your backtests. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
- For each resource, each tag key must be unique and each tag key must have one value.
- Maximum number of tags per resource: 50.
- Maximum key length: 128 Unicode characters in UTF-8.
- Maximum value length: 256 Unicode characters in UTF-8.
- Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
- Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
format(impl Into<String>) / set_format(Option<String>): The format of the exported data, CSV or PARQUET. The default value is CSV.
- On success, responds with
CreatePredictorBacktestExportJobOutput with field(s): predictor_backtest_export_job_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor backtest export job that you want to export.
- On failure, responds with
SdkError<CreatePredictorBacktestExportJobError>
impl Client
pub fn create_what_if_analysis(&self) -> CreateWhatIfAnalysisFluentBuilder
Constructs a fluent builder for the CreateWhatIfAnalysis operation.
- The fluent builder is configurable:
what_if_analysis_name(impl Into<String>) / set_what_if_analysis_name(Option<String>): The name of the what-if analysis. Each name must be unique.
forecast_arn(impl Into<String>) / set_forecast_arn(Option<String>): The Amazon Resource Name (ARN) of the baseline forecast.
time_series_selector(TimeSeriesSelector) / set_time_series_selector(Option<TimeSeriesSelector>): Defines the set of time series that are used in the what-if analysis with a TimeSeriesIdentifiers object. What-if analyses are performed only for the time series in this object. The TimeSeriesIdentifiers object needs the following information:
- DataSource
- Format
- Schema
tags(Tag) / set_tags(Option<Vec<Tag>>): A list of tags to apply to the what-if forecast.
- On success, responds with
CreateWhatIfAnalysisOutput with field(s): what_if_analysis_arn(Option<String>): The Amazon Resource Name (ARN) of the what-if analysis.
- On failure, responds with
SdkError<CreateWhatIfAnalysisError>
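A minimal sketch of this operation against a baseline forecast (names and the ARN are placeholders; the optional time-series selector is left out here, which applies the analysis without narrowing to specific series):

```rust
async fn create_analysis(client: &aws_sdk_forecast::Client) -> Result<(), aws_sdk_forecast::Error> {
    let output = client
        .create_what_if_analysis()
        .what_if_analysis_name("promo_scenario_analysis")
        .forecast_arn("arn:aws:forecast:us-west-2:123456789012:forecast/baseline")
        // .time_series_selector(...) could restrict the analysis to chosen series
        .send()
        .await?;

    println!("analysis ARN: {:?}", output.what_if_analysis_arn());
    Ok(())
}
```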
impl Client
pub fn create_what_if_forecast(&self) -> CreateWhatIfForecastFluentBuilder
Constructs a fluent builder for the CreateWhatIfForecast operation.
- The fluent builder is configurable:
what_if_forecast_name(impl Into<String>) / set_what_if_forecast_name(Option<String>): The name of the what-if forecast. Names must be unique within each what-if analysis.
what_if_analysis_arn(impl Into<String>) / set_what_if_analysis_arn(Option<String>): The Amazon Resource Name (ARN) of the what-if analysis.
time_series_transformations(TimeSeriesTransformation) / set_time_series_transformations(Option<Vec<TimeSeriesTransformation>>): The transformations that are applied to the baseline time series. Each transformation contains an action and a set of conditions. An action is applied only when all conditions are met. If no conditions are provided, the action is applied to all items.
time_series_replacements_data_source(TimeSeriesReplacementsDataSource) / set_time_series_replacements_data_source(Option<TimeSeriesReplacementsDataSource>): The replacement time series dataset, which contains the rows that you want to change in the related time series dataset. A replacement time series does not need to contain all rows that are in the baseline related time series. Include only the rows (measure-dimension combinations) that you want to include in the what-if forecast.
This dataset is merged with the original time series to create a transformed dataset that is used for the what-if analysis.
This dataset should contain the items to modify (such as item_id or workforce_type), any relevant dimensions, the timestamp column, and at least one of the related time series columns. This file should not contain duplicate timestamps for the same time series.
Timestamps and item_ids not included in this dataset are not included in the what-if analysis.
tags(Tag) / set_tags(Option<Vec<Tag>>): A list of tags to apply to the what-if forecast.
- On success, responds with
CreateWhatIfForecastOutput with field(s): what_if_forecast_arn(Option<String>): The Amazon Resource Name (ARN) of the what-if forecast.
- On failure, responds with
SdkError<CreateWhatIfForecastError>
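A sketch of a scenario built from a single transformation: multiply a hypothetical "price" attribute by 0.9 across all items (no conditions attached, so the action applies everywhere). The attribute name and ARNs are placeholders, and the `Action`/`Operation` type paths are assumed from the crate's types module:

```rust
use aws_sdk_forecast::types::{Action, Operation, TimeSeriesTransformation};

async fn create_scenario(client: &aws_sdk_forecast::Client) -> Result<(), aws_sdk_forecast::Error> {
    // One transformation: cut the "price" attribute by 10% for every item.
    let cut_prices = TimeSeriesTransformation::builder()
        .action(
            Action::builder()
                .attribute_name("price")
                .operation(Operation::Multiply)
                .value(0.9)
                .build(),
        )
        .build();

    let output = client
        .create_what_if_forecast()
        .what_if_forecast_name("price_cut_10pct")
        .what_if_analysis_arn("arn:aws:forecast:us-west-2:123456789012:what-if-analysis/promo_scenario_analysis")
        .time_series_transformations(cut_prices)
        .send()
        .await?;

    println!("what-if forecast ARN: {:?}", output.what_if_forecast_arn());
    Ok(())
}
```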
impl Client
pub fn create_what_if_forecast_export(&self) -> CreateWhatIfForecastExportFluentBuilder
Constructs a fluent builder for the CreateWhatIfForecastExport operation.
- The fluent builder is configurable:
what_if_forecast_export_name(impl Into<String>) / set_what_if_forecast_export_name(Option<String>): The name of the what-if forecast to export.
what_if_forecast_arns(impl Into<String>) / set_what_if_forecast_arns(Option<Vec<String>>): The list of what-if forecast Amazon Resource Names (ARNs) to export.
destination(DataDestination) / set_destination(Option<DataDestination>): The location where you want to save the forecast and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location. The forecast must be exported to an Amazon S3 bucket. If encryption is used, Destination must include a Key Management Service (KMS) key. The IAM role must allow Amazon Forecast permission to access the key.
tags(Tag) / set_tags(Option<Vec<Tag>>): A list of tags to apply to the what-if forecast.
format(impl Into<String>) / set_format(Option<String>): The format of the exported data, CSV or PARQUET.
- On success, responds with
CreateWhatIfForecastExportOutput with field(s): what_if_forecast_export_arn(Option<String>): The Amazon Resource Name (ARN) of the what-if forecast.
- On failure, responds with
SdkError<CreateWhatIfForecastExportError>
impl Client
pub fn delete_dataset(&self) -> DeleteDatasetFluentBuilder
Constructs a fluent builder for the DeleteDataset operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>) / set_dataset_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset to delete.
- On success, responds with
DeleteDatasetOutput - On failure, responds with
SdkError<DeleteDatasetError>
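All of the delete operations below share the same shape: one ARN in, an empty output struct on success. A minimal sketch using this operation (the ARN is supplied by the caller; the other delete_* methods differ only in the builder and setter names):

```rust
async fn delete_dataset(client: &aws_sdk_forecast::Client, arn: &str) {
    // Delete operations return an empty output, so only the error case is interesting.
    match client.delete_dataset().dataset_arn(arn).send().await {
        Ok(_) => println!("deleted {arn}"),
        Err(err) => eprintln!("delete failed: {err}"),
    }
}
```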
impl Client
pub fn delete_dataset_group(&self) -> DeleteDatasetGroupFluentBuilder
Constructs a fluent builder for the DeleteDatasetGroup operation.
- The fluent builder is configurable:
dataset_group_arn(impl Into<String>) / set_dataset_group_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset group to delete.
- On success, responds with
DeleteDatasetGroupOutput - On failure, responds with
SdkError<DeleteDatasetGroupError>
impl Client
pub fn delete_dataset_import_job(&self) -> DeleteDatasetImportJobFluentBuilder
Constructs a fluent builder for the DeleteDatasetImportJob operation.
- The fluent builder is configurable:
dataset_import_job_arn(impl Into<String>) / set_dataset_import_job_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset import job to delete.
- On success, responds with
DeleteDatasetImportJobOutput - On failure, responds with
SdkError<DeleteDatasetImportJobError>
impl Client
pub fn delete_explainability(&self) -> DeleteExplainabilityFluentBuilder
Constructs a fluent builder for the DeleteExplainability operation.
- The fluent builder is configurable:
explainability_arn(impl Into<String>) / set_explainability_arn(Option<String>): The Amazon Resource Name (ARN) of the Explainability resource to delete.
- On success, responds with
DeleteExplainabilityOutput - On failure, responds with
SdkError<DeleteExplainabilityError>
impl Client
pub fn delete_explainability_export(&self) -> DeleteExplainabilityExportFluentBuilder
Constructs a fluent builder for the DeleteExplainabilityExport operation.
- The fluent builder is configurable:
explainability_export_arn(impl Into<String>) / set_explainability_export_arn(Option<String>): The Amazon Resource Name (ARN) of the Explainability export to delete.
- On success, responds with
DeleteExplainabilityExportOutput - On failure, responds with
SdkError<DeleteExplainabilityExportError>
impl Client
pub fn delete_forecast(&self) -> DeleteForecastFluentBuilder
Constructs a fluent builder for the DeleteForecast operation.
- The fluent builder is configurable:
forecast_arn(impl Into<String>) / set_forecast_arn(Option<String>): The Amazon Resource Name (ARN) of the forecast to delete.
- On success, responds with
DeleteForecastOutput - On failure, responds with
SdkError<DeleteForecastError>
impl Client
pub fn delete_forecast_export_job(&self) -> DeleteForecastExportJobFluentBuilder
Constructs a fluent builder for the DeleteForecastExportJob operation.
- The fluent builder is configurable:
forecast_export_job_arn(impl Into<String>) / set_forecast_export_job_arn(Option<String>): The Amazon Resource Name (ARN) of the forecast export job to delete.
- On success, responds with
DeleteForecastExportJobOutput - On failure, responds with
SdkError<DeleteForecastExportJobError>
impl Client
pub fn delete_monitor(&self) -> DeleteMonitorFluentBuilder
Constructs a fluent builder for the DeleteMonitor operation.
- The fluent builder is configurable:
monitor_arn(impl Into<String>) / set_monitor_arn(Option<String>): The Amazon Resource Name (ARN) of the monitor resource to delete.
- On success, responds with
DeleteMonitorOutput - On failure, responds with
SdkError<DeleteMonitorError>
impl Client
pub fn delete_predictor(&self) -> DeletePredictorFluentBuilder
Constructs a fluent builder for the DeletePredictor operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>) / set_predictor_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor to delete.
- On success, responds with
DeletePredictorOutput - On failure, responds with
SdkError<DeletePredictorError>
impl Client
pub fn delete_predictor_backtest_export_job(&self) -> DeletePredictorBacktestExportJobFluentBuilder
Constructs a fluent builder for the DeletePredictorBacktestExportJob operation.
- The fluent builder is configurable:
predictor_backtest_export_job_arn(impl Into<String>) / set_predictor_backtest_export_job_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor backtest export job to delete.
- On success, responds with
DeletePredictorBacktestExportJobOutput - On failure, responds with
SdkError<DeletePredictorBacktestExportJobError>
impl Client
pub fn delete_resource_tree(&self) -> DeleteResourceTreeFluentBuilder
Constructs a fluent builder for the DeleteResourceTree operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) of the parent resource to delete. All child resources of the parent resource will also be deleted.
- On success, responds with
DeleteResourceTreeOutput - On failure, responds with
SdkError<DeleteResourceTreeError>
impl Client
pub fn delete_what_if_analysis(&self) -> DeleteWhatIfAnalysisFluentBuilder
Constructs a fluent builder for the DeleteWhatIfAnalysis operation.
- The fluent builder is configurable:
what_if_analysis_arn(impl Into<String>) / set_what_if_analysis_arn(Option<String>): The Amazon Resource Name (ARN) of the what-if analysis that you want to delete.
- On success, responds with
DeleteWhatIfAnalysisOutput - On failure, responds with
SdkError<DeleteWhatIfAnalysisError>
impl Client
pub fn delete_what_if_forecast(&self) -> DeleteWhatIfForecastFluentBuilder
Constructs a fluent builder for the DeleteWhatIfForecast operation.
- The fluent builder is configurable:
what_if_forecast_arn(impl Into<String>) / set_what_if_forecast_arn(Option<String>): The Amazon Resource Name (ARN) of the what-if forecast that you want to delete.
- On success, responds with
DeleteWhatIfForecastOutput - On failure, responds with
SdkError<DeleteWhatIfForecastError>
impl Client
pub fn delete_what_if_forecast_export(&self) -> DeleteWhatIfForecastExportFluentBuilder
Constructs a fluent builder for the DeleteWhatIfForecastExport operation.
- The fluent builder is configurable:
what_if_forecast_export_arn(impl Into<String>) / set_what_if_forecast_export_arn(Option<String>): The Amazon Resource Name (ARN) of the what-if forecast export that you want to delete.
- On success, responds with
DeleteWhatIfForecastExportOutput - On failure, responds with
SdkError<DeleteWhatIfForecastExportError>
impl Client
pub fn describe_auto_predictor(&self) -> DescribeAutoPredictorFluentBuilder
Constructs a fluent builder for the DescribeAutoPredictor operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>) / set_predictor_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor.
- On success, responds with
DescribeAutoPredictorOutput with field(s): predictor_arn(Option<String>): The Amazon Resource Name (ARN) of the predictor.
predictor_name(Option<String>): The name of the predictor.
forecast_horizon(Option<i32>): The number of time-steps that the model predicts. The forecast horizon is also called the prediction length.
forecast_types(Option<Vec<String>>): The forecast types used during predictor training. Default value is [“0.1”, “0.5”, “0.9”].
forecast_frequency(Option<String>): The frequency of predictions in a forecast. Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, “Y” indicates every year and “5min” indicates every five minutes.
forecast_dimensions(Option<Vec<String>>): An array of dimension (field) names that specify the attributes used to group your time series.
dataset_import_job_arns(Option<Vec<String>>): An array of the ARNs of the dataset import jobs used to import training data for the predictor.
data_config(Option<DataConfig>): The data configuration for your dataset group and any additional datasets.
encryption_config(Option<EncryptionConfig>): A Key Management Service (KMS) key and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key. You can specify this optional object in the CreateDataset and CreatePredictor requests.
reference_predictor_summary(Option<ReferencePredictorSummary>): The ARN and state of the reference predictor. This parameter is only valid for retrained or upgraded predictors.
estimated_time_remaining_in_minutes(Option<i64>): The estimated time remaining in minutes for the predictor training job to complete.
status(Option<String>): The status of the predictor. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
message(Option<String>): In the event of an error, a message detailing the cause of the error.
creation_time(Option<DateTime>): The timestamp of the CreateAutoPredictor request.
last_modification_time(Option<DateTime>): The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
optimization_metric(Option<OptimizationMetric>): The accuracy metric used to optimize the predictor.
explainability_info(Option<ExplainabilityInfo>): Provides the status and ARN of the Predictor Explainability.
monitor_info(Option<MonitorInfo>): An object with the Amazon Resource Name (ARN) and status of the monitor resource.
time_alignment_boundary(Option<TimeAlignmentBoundary>): The time boundary Forecast uses when aggregating data.
- On failure, responds with
SdkError<DescribeAutoPredictorError>
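The status field above drives a common pattern: poll until the predictor leaves its CREATE_* states. A minimal sketch, assuming a Tokio runtime (the 60-second interval is an arbitrary choice):

```rust
use std::time::Duration;

async fn wait_until_active(
    client: &aws_sdk_forecast::Client,
    predictor_arn: &str,
) -> Result<(), aws_sdk_forecast::Error> {
    loop {
        let out = client
            .describe_auto_predictor()
            .predictor_arn(predictor_arn)
            .send()
            .await?;

        match out.status() {
            Some("ACTIVE") => return Ok(()),
            Some(s) if s.ends_with("_FAILED") => {
                // message() carries the cause of the failure, when present.
                eprintln!("training failed: {:?}", out.message());
                return Ok(());
            }
            other => println!("status: {other:?}"),
        }
        tokio::time::sleep(Duration::from_secs(60)).await;
    }
}
```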
impl Client
pub fn describe_dataset(&self) -> DescribeDatasetFluentBuilder
Constructs a fluent builder for the DescribeDataset operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>) / set_dataset_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset.
- On success, responds with
DescribeDatasetOutput with field(s): dataset_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset.
dataset_name(Option<String>): The name of the dataset.
domain(Option<Domain>): The domain associated with the dataset.
dataset_type(Option<DatasetType>): The dataset type.
data_frequency(Option<String>): The frequency of data collection. Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, “M” indicates every month and “30min” indicates every 30 minutes.
schema(Option<Schema>): An array of SchemaAttribute objects that specify the dataset fields. Each SchemaAttribute specifies the name and data type of a field.
encryption_config(Option<EncryptionConfig>): The Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
status(Option<String>): The status of the dataset. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
- UPDATE_PENDING, UPDATE_IN_PROGRESS, UPDATE_FAILED
The UPDATE states apply while data is imported to the dataset from a call to the CreateDatasetImportJob operation and reflect the status of the dataset import job. For example, when the import job status is CREATE_IN_PROGRESS, the status of the dataset is UPDATE_IN_PROGRESS. The Status of the dataset must be ACTIVE before you can import training data.
creation_time(Option<DateTime>): When the dataset was created.
last_modification_time(Option<DateTime>): When you create a dataset, LastModificationTime is the same as CreationTime. While data is being imported to the dataset, LastModificationTime is the current time of the DescribeDataset call. After a CreateDatasetImportJob operation has finished, LastModificationTime is when the import job completed or failed.
- On failure, responds with
SdkError<DescribeDatasetError>
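A minimal sketch reading a few of the fields listed above (the ARN is supplied by the caller; all getters return `Option`s since every field may be absent):

```rust
async fn print_dataset(
    client: &aws_sdk_forecast::Client,
    arn: &str,
) -> Result<(), aws_sdk_forecast::Error> {
    let out = client.describe_dataset().dataset_arn(arn).send().await?;

    println!("name:      {:?}", out.dataset_name());
    println!("domain:    {:?}", out.domain());
    println!("frequency: {:?}", out.data_frequency());
    // Training data can only be imported once the dataset status is ACTIVE.
    println!("status:    {:?}", out.status());
    Ok(())
}
```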
impl Client
pub fn describe_dataset_group(&self) -> DescribeDatasetGroupFluentBuilder
Constructs a fluent builder for the DescribeDatasetGroup operation.
- The fluent builder is configurable:
dataset_group_arn(impl Into<String>) / set_dataset_group_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset group.
- On success, responds with
DescribeDatasetGroupOutput with field(s): dataset_group_name(Option<String>): The name of the dataset group.
dataset_group_arn(Option<String>): The ARN of the dataset group.
dataset_arns(Option<Vec<String>>): An array of Amazon Resource Names (ARNs) of the datasets contained in the dataset group.
domain(Option<Domain>): The domain associated with the dataset group.
status(Option<String>): The status of the dataset group. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
- UPDATE_PENDING, UPDATE_IN_PROGRESS, UPDATE_FAILED
The UPDATE states apply when you call the UpdateDatasetGroup operation. The Status of the dataset group must be ACTIVE before you can use the dataset group to create a predictor.
creation_time(Option<DateTime>): When the dataset group was created.
last_modification_time(Option<DateTime>): When the dataset group was created or last updated from a call to the UpdateDatasetGroup operation. While the dataset group is being updated, LastModificationTime is the current time of the DescribeDatasetGroup call.
- On failure, responds with
SdkError<DescribeDatasetGroupError>
impl Client
pub fn describe_dataset_import_job(&self) -> DescribeDatasetImportJobFluentBuilder
Constructs a fluent builder for the DescribeDatasetImportJob operation.
- The fluent builder is configurable:
dataset_import_job_arn(impl Into<String>) / set_dataset_import_job_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset import job.
- On success, responds with
DescribeDatasetImportJobOutput with field(s): dataset_import_job_name(Option<String>): The name of the dataset import job.
dataset_import_job_arn(Option<String>): The ARN of the dataset import job.
dataset_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset that the training data was imported to.
timestamp_format(Option<String>): The format of timestamps in the dataset. The format that you specify depends on the DataFrequency specified when the dataset was created. The following formats are supported:
- “yyyy-MM-dd”, for the following data frequencies: Y, M, W, and D
- “yyyy-MM-dd HH:mm:ss”, for the following data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D
time_zone(Option<String>): The single time zone applied to every item in the dataset.
use_geolocation_for_time_zone(bool): Whether TimeZone is automatically derived from the geolocation attribute.
geolocation_format(Option<String>): The format of the geolocation attribute. Valid Values: “LAT_LONG” and “CC_POSTALCODE”.
data_source(Option<DataSource>): The location of the training data to import and an Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data. If encryption is used, DataSource includes a Key Management Service (KMS) key.
estimated_time_remaining_in_minutes(Option<i64>): The estimated time remaining in minutes for the dataset import job to complete.
field_statistics(Option<HashMap<String, Statistics>>): Statistical information about each field in the input data.
data_size(Option<f64>): The size of the dataset in gigabytes (GB) after the import job has finished.
status(Option<String>): The status of the dataset import job. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
message(Option<String>): If an error occurred, an informational message about the error.
creation_time(Option<DateTime>): When the dataset import job was created.
last_modification_time(Option<DateTime>): The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
format(Option<String>): The format of the imported data, CSV or PARQUET.
import_mode(Option<ImportMode>): The import mode of the dataset import job, FULL or INCREMENTAL.
- On failure, responds with
SdkError<DescribeDatasetImportJobError>
impl Client
pub fn describe_explainability(&self) -> DescribeExplainabilityFluentBuilder
Constructs a fluent builder for the DescribeExplainability operation.
- The fluent builder is configurable:
explainability_arn(impl Into<String>)/set_explainability_arn(Option<String>):The Amazon Resource Name (ARN) of the Explainability to describe.
- On success, responds with
DescribeExplainabilityOutput with field(s): explainability_arn(Option<String>):The Amazon Resource Name (ARN) of the Explainability.
explainability_name(Option<String>):The name of the Explainability.
resource_arn(Option<String>):The Amazon Resource Name (ARN) of the Predictor or Forecast used to create the Explainability resource.
explainability_config(Option<ExplainabilityConfig>):The configuration settings that define the granularity of time series and time points for the Explainability.
enable_visualization(Option<bool>):Whether the visualization was enabled for the Explainability resource.
data_source(Option<DataSource>):The source of your data, an Identity and Access Management (IAM) role that allows Amazon Forecast to access the data and, optionally, a Key Management Service (KMS) key.
schema(Option<Schema>):Defines the fields of a dataset.
start_date_time(Option<String>):If TimePointGranularity is set to SPECIFIC, the first time point in the Explainability.
end_date_time(Option<String>):If TimePointGranularity is set to SPECIFIC, the last time point in the Explainability.
estimated_time_remaining_in_minutes(Option<i64>):The estimated time remaining in minutes for the CreateExplainability job to complete.
message(Option<String>):If an error occurred, a message about the error.
status(Option<String>):The status of the Explainability resource. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
creation_time(Option<DateTime>):When the Explainability resource was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
- On failure, responds with
SdkError<DescribeExplainabilityError>
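Shown concretely, a minimal call to this operation might look like the following sketch. The ARN is a placeholder, and the example assumes an async runtime and a Client built as described at the top of this page.

```rust
use aws_sdk_forecast::Client;

// Hypothetical helper: describe an Explainability resource and print a summary.
async fn show_explainability(client: &Client) -> Result<(), aws_sdk_forecast::Error> {
    let resp = client
        .describe_explainability()
        // Placeholder ARN for illustration only.
        .explainability_arn("arn:aws:forecast:us-west-2:123456789012:explainability/my-explainability")
        .send()
        .await?;
    // Every output field is an Option, so handle missing values explicitly.
    println!(
        "{} is {}",
        resp.explainability_name().unwrap_or("unknown"),
        resp.status().unwrap_or("UNKNOWN")
    );
    Ok(())
}
```

The `?` on `send().await` converts the operation-specific `SdkError<DescribeExplainabilityError>` into the crate-wide `aws_sdk_forecast::Error` for brevity.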
impl Client
pub fn describe_explainability_export(&self) -> DescribeExplainabilityExportFluentBuilder
Constructs a fluent builder for the DescribeExplainabilityExport operation.
- The fluent builder is configurable:
explainability_export_arn(impl Into<String>)/set_explainability_export_arn(Option<String>):The Amazon Resource Name (ARN) of the Explainability export.
- On success, responds with
DescribeExplainabilityExportOutput with field(s): explainability_export_arn(Option<String>):The Amazon Resource Name (ARN) of the Explainability export.
explainability_export_name(Option<String>):The name of the Explainability export.
explainability_arn(Option<String>):The Amazon Resource Name (ARN) of the Explainability export.
destination(Option<DataDestination>):The destination for an export job. Provide an S3 path, an Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and a Key Management Service (KMS) key (optional).
message(Option<String>):Information about any errors that occurred during the export.
status(Option<String>):The status of the Explainability export. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
creation_time(Option<DateTime>):When the Explainability export was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
format(Option<String>):The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribeExplainabilityExportError>
impl Client
pub fn describe_forecast(&self) -> DescribeForecastFluentBuilder
Constructs a fluent builder for the DescribeForecast operation.
- The fluent builder is configurable:
forecast_arn(impl Into<String>)/set_forecast_arn(Option<String>):The Amazon Resource Name (ARN) of the forecast.
- On success, responds with
DescribeForecastOutput with field(s): forecast_arn(Option<String>):The forecast ARN as specified in the request.
forecast_name(Option<String>):The name of the forecast.
forecast_types(Option<Vec<String>>):The quantiles at which probabilistic forecasts were generated.
predictor_arn(Option<String>):The ARN of the predictor used to generate the forecast.
dataset_group_arn(Option<String>):The ARN of the dataset group that provided the data used to train the predictor.
estimated_time_remaining_in_minutes(Option<i64>):The estimated time remaining in minutes for the forecast job to complete.
status(Option<String>):The status of the forecast. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
The Status of the forecast must be ACTIVE before you can query or export the forecast.
message(Option<String>):If an error occurred, an informational message about the error.
creation_time(Option<DateTime>):When the forecast creation task was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
time_series_selector(Option<TimeSeriesSelector>):The time series to include in the forecast.
- On failure, responds with
SdkError<DescribeForecastError>
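Because the Status must be ACTIVE before the forecast can be queried or exported, DescribeForecast is commonly called in a polling loop. A sketch of that pattern (assumes the Tokio runtime; real code should also bound the loop with a timeout):

```rust
use std::time::Duration;
use aws_sdk_forecast::Client;

// Illustrative polling loop: returns the final status once the forecast
// leaves the CREATE_PENDING / CREATE_IN_PROGRESS states.
async fn wait_for_forecast(
    client: &Client,
    forecast_arn: &str,
) -> Result<String, aws_sdk_forecast::Error> {
    loop {
        let resp = client
            .describe_forecast()
            .forecast_arn(forecast_arn)
            .send()
            .await?;
        let status = resp.status().unwrap_or("UNKNOWN").to_owned();
        // Anything other than a CREATE_* in-flight state is terminal enough
        // to report: ACTIVE, CREATE_FAILED, CREATE_STOPPED, ...
        if !matches!(status.as_str(), "CREATE_PENDING" | "CREATE_IN_PROGRESS") {
            return Ok(status);
        }
        tokio::time::sleep(Duration::from_secs(30)).await;
    }
}
```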
impl Client
pub fn describe_forecast_export_job(&self) -> DescribeForecastExportJobFluentBuilder
Constructs a fluent builder for the DescribeForecastExportJob operation.
- The fluent builder is configurable:
forecast_export_job_arn(impl Into<String>)/set_forecast_export_job_arn(Option<String>):The Amazon Resource Name (ARN) of the forecast export job.
- On success, responds with
DescribeForecastExportJobOutput with field(s): forecast_export_job_arn(Option<String>):The ARN of the forecast export job.
forecast_export_job_name(Option<String>):The name of the forecast export job.
forecast_arn(Option<String>):The Amazon Resource Name (ARN) of the exported forecast.
destination(Option<DataDestination>):The path to the Amazon Simple Storage Service (Amazon S3) bucket where the forecast is exported.
message(Option<String>):If an error occurred, an informational message about the error.
status(Option<String>):The status of the forecast export job. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
The Status of the forecast export job must be ACTIVE before you can access the forecast in your S3 bucket.
creation_time(Option<DateTime>):When the forecast export job was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
format(Option<String>):The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribeForecastExportJobError>
impl Client
pub fn describe_monitor(&self) -> DescribeMonitorFluentBuilder
Constructs a fluent builder for the DescribeMonitor operation.
- The fluent builder is configurable:
monitor_arn(impl Into<String>)/set_monitor_arn(Option<String>):The Amazon Resource Name (ARN) of the monitor resource to describe.
- On success, responds with
DescribeMonitorOutput with field(s): monitor_name(Option<String>):The name of the monitor.
monitor_arn(Option<String>):The Amazon Resource Name (ARN) of the monitor resource described.
resource_arn(Option<String>):The Amazon Resource Name (ARN) of the auto predictor being monitored.
status(Option<String>):The status of the monitor resource.
last_evaluation_time(Option<DateTime>):The timestamp of the latest evaluation completed by the monitor.
last_evaluation_state(Option<String>):The state of the monitor’s latest evaluation.
baseline(Option<Baseline>):Metrics you can use as a baseline for comparison purposes. Use these values to interpret monitoring results for an auto predictor.
message(Option<String>):An error message, if any, for the monitor.
creation_time(Option<DateTime>):The timestamp for when the monitor resource was created.
last_modification_time(Option<DateTime>):The timestamp of the latest modification to the monitor.
estimated_evaluation_time_remaining_in_minutes(Option<i64>):The estimated number of minutes remaining before the monitor resource finishes its current evaluation.
- On failure, responds with
SdkError<DescribeMonitorError>
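On failure, every operation returns an SdkError wrapping that operation's service error type. The following sketch shows one way to unwrap it; module paths follow recent SDK releases (older releases expose the error type under aws_sdk_forecast::error instead), and the ARN handling is illustrative only.

```rust
use aws_sdk_forecast::operation::describe_monitor::DescribeMonitorError;
use aws_sdk_forecast::Client;

// Sketch: distinguish "monitor does not exist" from other failures.
async fn monitor_status(client: &Client, monitor_arn: &str) -> Result<Option<String>, String> {
    match client.describe_monitor().monitor_arn(monitor_arn).send().await {
        Ok(resp) => Ok(resp.status().map(str::to_owned)),
        Err(err) => match err.into_service_error() {
            // A missing monitor surfaces as ResourceNotFoundException.
            DescribeMonitorError::ResourceNotFoundException(_) => Ok(None),
            other => Err(format!("DescribeMonitor failed: {other}")),
        },
    }
}
```

into_service_error() discards transport-level detail (timeouts, dispatch failures); keep the original SdkError if you need to inspect those cases separately.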
impl Client
pub fn describe_predictor(&self) -> DescribePredictorFluentBuilder
Constructs a fluent builder for the DescribePredictor operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>)/set_predictor_arn(Option<String>):The Amazon Resource Name (ARN) of the predictor that you want information about.
- On success, responds with
DescribePredictorOutput with field(s): predictor_arn(Option<String>):The ARN of the predictor.
predictor_name(Option<String>):The name of the predictor.
algorithm_arn(Option<String>):The Amazon Resource Name (ARN) of the algorithm used for model training.
auto_ml_algorithm_arns(Option<Vec<String>>):When PerformAutoML is specified, the ARN of the chosen algorithm.
forecast_horizon(Option<i32>):The number of time-steps of the forecast. The forecast horizon is also called the prediction length.
forecast_types(Option<Vec<String>>):The forecast types used during predictor training. Default value is [“0.1”, “0.5”, “0.9”].
perform_auto_ml(Option<bool>):Whether the predictor is set to perform AutoML.
auto_ml_override_strategy(Option<AutoMlOverrideStrategy>):The LatencyOptimized AutoML override strategy is only available in private beta. Contact Amazon Web Services Support or your account manager to learn more about access privileges. The AutoML strategy used to train the predictor. Unless LatencyOptimized is specified, the AutoML strategy optimizes predictor accuracy. This parameter is only valid for predictors trained using AutoML.
perform_hpo(Option<bool>):Whether the predictor is set to perform hyperparameter optimization (HPO).
training_parameters(Option<HashMap<String, String>>):The default training parameters or overrides selected during model training. When running AutoML or choosing HPO with CNN-QR or DeepAR+, the optimized values for the chosen hyperparameters are returned. For more information, see aws-forecast-choosing-recipes.
evaluation_parameters(Option<EvaluationParameters>):Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.
hpo_config(Option<HyperParameterTuningJobConfig>):The hyperparameter override values for the algorithm.
input_data_config(Option<InputDataConfig>):Describes the dataset group that contains the data to use to train the predictor.
featurization_config(Option<FeaturizationConfig>):The featurization configuration.
encryption_config(Option<EncryptionConfig>):A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
predictor_execution_details(Option<PredictorExecutionDetails>):Details on the status and results of the backtests performed to evaluate the accuracy of the predictor. You specify the number of backtests to perform when you call the operation.
estimated_time_remaining_in_minutes(Option<i64>):The estimated time remaining in minutes for the predictor training job to complete.
is_auto_predictor(Option<bool>):Whether the predictor was created with CreateAutoPredictor.
dataset_import_job_arns(Option<Vec<String>>):An array of the ARNs of the dataset import jobs used to import training data for the predictor.
status(Option<String>):The status of the predictor. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
The Status of the predictor must be ACTIVE before you can use the predictor to create a forecast.
message(Option<String>):If an error occurred, an informational message about the error.
creation_time(Option<DateTime>):When the model training task was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
optimization_metric(Option<OptimizationMetric>):The accuracy metric used to optimize the predictor.
- On failure, responds with
SdkError<DescribePredictorError>
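A sketch of reading several of the typed fields above off the output. The ARN parameter is a placeholder; numeric and boolean fields come back as Option<i32> / Option<bool>, matching the field list.

```rust
use aws_sdk_forecast::Client;

// Sketch: pull a few typed fields off DescribePredictorOutput.
async fn summarize_predictor(
    client: &Client,
    predictor_arn: &str,
) -> Result<(), aws_sdk_forecast::Error> {
    let resp = client
        .describe_predictor()
        .predictor_arn(predictor_arn)
        .send()
        .await?;
    println!("name:           {:?}", resp.predictor_name());
    println!("horizon:        {:?}", resp.forecast_horizon());
    println!("auto predictor: {:?}", resp.is_auto_predictor());
    println!("status:         {:?}", resp.status());
    Ok(())
}
```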
impl Client
pub fn describe_predictor_backtest_export_job(&self) -> DescribePredictorBacktestExportJobFluentBuilder
Constructs a fluent builder for the DescribePredictorBacktestExportJob operation.
- The fluent builder is configurable:
predictor_backtest_export_job_arn(impl Into<String>)/set_predictor_backtest_export_job_arn(Option<String>):The Amazon Resource Name (ARN) of the predictor backtest export job.
- On success, responds with
DescribePredictorBacktestExportJobOutput with field(s): predictor_backtest_export_job_arn(Option<String>):The Amazon Resource Name (ARN) of the predictor backtest export job.
predictor_backtest_export_job_name(Option<String>):The name of the predictor backtest export job.
predictor_arn(Option<String>):The Amazon Resource Name (ARN) of the predictor.
destination(Option<DataDestination>):The destination for an export job. Provide an S3 path, an Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and a Key Management Service (KMS) key (optional).
message(Option<String>):Information about any errors that may have occurred during the backtest export.
status(Option<String>):The status of the predictor backtest export job. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
creation_time(Option<DateTime>):When the predictor backtest export job was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
format(Option<String>):The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribePredictorBacktestExportJobError>
impl Client
pub fn describe_what_if_analysis(&self) -> DescribeWhatIfAnalysisFluentBuilder
Constructs a fluent builder for the DescribeWhatIfAnalysis operation.
- The fluent builder is configurable:
what_if_analysis_arn(impl Into<String>)/set_what_if_analysis_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if analysis that you are interested in.
- On success, responds with
DescribeWhatIfAnalysisOutput with field(s): what_if_analysis_name(Option<String>):The name of the what-if analysis.
what_if_analysis_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if analysis.
forecast_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if forecast.
estimated_time_remaining_in_minutes(Option<i64>):The approximate time remaining to complete the what-if analysis, in minutes.
status(Option<String>):The status of the what-if analysis. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
The Status of the what-if analysis must be ACTIVE before you can access the analysis.
message(Option<String>):If an error occurred, an informational message about the error.
creation_time(Option<DateTime>):When the what-if analysis was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
time_series_selector(Option<TimeSeriesSelector>):Defines the set of time series that are used to create the forecasts in a TimeSeriesIdentifiers object. The TimeSeriesIdentifiers object needs the following information:
- DataSource
- Format
- Schema
- On failure, responds with
SdkError<DescribeWhatIfAnalysisError>
impl Client
pub fn describe_what_if_forecast(&self) -> DescribeWhatIfForecastFluentBuilder
Constructs a fluent builder for the DescribeWhatIfForecast operation.
- The fluent builder is configurable:
what_if_forecast_arn(impl Into<String>)/set_what_if_forecast_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if forecast that you are interested in.
- On success, responds with
DescribeWhatIfForecastOutput with field(s): what_if_forecast_name(Option<String>):The name of the what-if forecast.
what_if_forecast_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if forecast.
what_if_analysis_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if analysis that contains this forecast.
estimated_time_remaining_in_minutes(Option<i64>):The approximate time remaining to complete the what-if forecast, in minutes.
status(Option<String>):The status of the what-if forecast. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
The Status of the what-if forecast must be ACTIVE before you can access the forecast.
message(Option<String>):If an error occurred, an informational message about the error.
creation_time(Option<DateTime>):When the what-if forecast was created.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
time_series_transformations(Option<Vec<TimeSeriesTransformation>>):An array of Action and TimeSeriesCondition elements that describe what transformations were applied to which time series.
time_series_replacements_data_source(Option<TimeSeriesReplacementsDataSource>):An array of S3Config, Schema, and Format elements that describe the replacement time series.
forecast_types(Option<Vec<String>>):The quantiles at which probabilistic forecasts are generated. You can specify up to five quantiles per what-if forecast in the CreateWhatIfForecast operation. If you didn’t specify quantiles, the default values are [“0.1”, “0.5”, “0.9”].
- On failure, responds with
SdkError<DescribeWhatIfForecastError>
impl Client
pub fn describe_what_if_forecast_export(&self) -> DescribeWhatIfForecastExportFluentBuilder
Constructs a fluent builder for the DescribeWhatIfForecastExport operation.
- The fluent builder is configurable:
what_if_forecast_export_arn(impl Into<String>)/set_what_if_forecast_export_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if forecast export that you are interested in.
- On success, responds with
DescribeWhatIfForecastExportOutput with field(s): what_if_forecast_export_arn(Option<String>):The Amazon Resource Name (ARN) of the what-if forecast export.
what_if_forecast_export_name(Option<String>):The name of the what-if forecast export.
what_if_forecast_arns(Option<Vec<String>>):An array of Amazon Resource Names (ARNs) that represent all of the what-if forecasts exported in this resource.
destination(Option<DataDestination>):The destination for an export job. Provide an S3 path, an Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and a Key Management Service (KMS) key (optional).
message(Option<String>):If an error occurred, an informational message about the error.
status(Option<String>):The status of the what-if forecast export. States include:
- ACTIVE
- CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
- CREATE_STOPPING, CREATE_STOPPED
- DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
The Status of the what-if forecast export must be ACTIVE before you can access the forecast export.
creation_time(Option<DateTime>):When the what-if forecast export was created.
estimated_time_remaining_in_minutes(Option<i64>):The approximate time remaining to complete the what-if forecast export, in minutes.
last_modification_time(Option<DateTime>):The last time the resource was modified. The timestamp depends on the status of the job:
- CREATE_PENDING - The CreationTime.
- CREATE_IN_PROGRESS - The current timestamp.
- CREATE_STOPPING - The current timestamp.
- CREATE_STOPPED - When the job stopped.
- ACTIVE or CREATE_FAILED - When the job finished or failed.
format(Option<String>):The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribeWhatIfForecastExportError>
impl Client
pub fn get_accuracy_metrics(&self) -> GetAccuracyMetricsFluentBuilder
Constructs a fluent builder for the GetAccuracyMetrics operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>)/set_predictor_arn(Option<String>):The Amazon Resource Name (ARN) of the predictor to get metrics for.
- On success, responds with
GetAccuracyMetricsOutput with field(s): predictor_evaluation_results(Option<Vec<EvaluationResult>>):An array of results from evaluating the predictor.
is_auto_predictor(Option<bool>):Whether the predictor was created with CreateAutoPredictor.
auto_ml_override_strategy(Option<AutoMlOverrideStrategy>):The LatencyOptimized AutoML override strategy is only available in private beta. Contact Amazon Web Services Support or your account manager to learn more about access privileges. The AutoML strategy used to train the predictor. Unless LatencyOptimized is specified, the AutoML strategy optimizes predictor accuracy. This parameter is only valid for predictors trained using AutoML.
optimization_metric(Option<OptimizationMetric>):The accuracy metric used to optimize the predictor.
- On failure, responds with
SdkError<GetAccuracyMetricsError>
impl Client
pub fn list_dataset_groups(&self) -> ListDatasetGroupsFluentBuilder
Constructs a fluent builder for the ListDatasetGroups operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)/set_max_results(Option<i32>):The number of items to return in the response.
- On success, responds with
ListDatasetGroupsOutput with field(s): dataset_groups(Option<Vec<DatasetGroupSummary>>):An array of objects that summarize each dataset group’s properties.
next_token(Option<String>):If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListDatasetGroupsError>
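The into_paginator() pattern mentioned above can be sketched as follows. NextToken handling is automatic; accessor shapes vary slightly by SDK release, and here the list accessors are assumed to return Option<&[T]>.

```rust
use aws_sdk_forecast::Client;

// Sketch: drain every page of ListDatasetGroups into a Vec of names.
async fn all_dataset_group_names(client: &Client) -> Result<Vec<String>, aws_sdk_forecast::Error> {
    let mut names = Vec::new();
    let mut pages = client.list_dataset_groups().into_paginator().send();
    // Each item from the stream is a Result<ListDatasetGroupsOutput, SdkError<_>>.
    while let Some(page) = pages.next().await {
        let page = page?;
        for group in page.dataset_groups().unwrap_or_default() {
            if let Some(name) = group.dataset_group_name() {
                names.push(name.to_owned());
            }
        }
    }
    Ok(names)
}
```

Because NextToken values expire after 24 hours, prefer draining the paginator in one pass rather than persisting tokens across long gaps.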
impl Client
pub fn list_dataset_import_jobs(&self) -> ListDatasetImportJobsFluentBuilder
Constructs a fluent builder for the ListDatasetImportJobs operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)/set_max_results(Option<i32>):The number of items to return in the response.
filters(Filter)/set_filters(Option<Vec<Filter>>):An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the datasets that match the statement from the list, respectively. The match statement consists of a key and a value.
Filter properties
- Condition - The condition to apply. Valid values are IS and IS_NOT. To include the datasets that match the statement, specify IS. To exclude matching datasets, specify IS_NOT.
- Key - The name of the parameter to filter on. Valid values are DatasetArn and Status.
- Value - The value to match.
For example, to list all dataset import jobs whose status is ACTIVE, you specify the following filter:
“Filters”: [ { “Condition”: “IS”, “Key”: “Status”, “Value”: “ACTIVE” } ]
- On success, responds with
ListDatasetImportJobsOutput with field(s): dataset_import_jobs(Option<Vec<DatasetImportJobSummary>>):An array of objects that summarize each dataset import job’s properties.
next_token(Option<String>):If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListDatasetImportJobsError>
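The JSON filter shown above maps onto the typed Filter builder. A sketch under some version-dependent assumptions: the types module path follows recent SDK releases (older releases use aws_sdk_forecast::model), and in some releases Filter::builder().build() returns a Result instead of the value directly.

```rust
use aws_sdk_forecast::types::{Filter, FilterConditionString};
use aws_sdk_forecast::Client;

// Sketch: list only dataset import jobs whose status is ACTIVE, mirroring
// "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ].
async fn list_active_import_jobs(client: &Client) -> Result<(), aws_sdk_forecast::Error> {
    let filter = Filter::builder()
        .condition(FilterConditionString::Is)
        .key("Status")
        .value("ACTIVE")
        .build();
    let resp = client
        .list_dataset_import_jobs()
        .filters(filter)
        .send()
        .await?;
    for job in resp.dataset_import_jobs().unwrap_or_default() {
        println!("{:?}", job.dataset_import_job_name());
    }
    Ok(())
}
```

Note that filters(...) appends one Filter per call; use set_filters(Some(vec)) to replace the whole list at once.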
impl Client
pub fn list_datasets(&self) -> ListDatasetsFluentBuilder
Constructs a fluent builder for the ListDatasets operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)/set_max_results(Option<i32>):The number of items to return in the response.
- On success, responds with
ListDatasetsOutput with field(s): datasets(Option<Vec<DatasetSummary>>):An array of objects that summarize each dataset’s properties.
next_token(Option<String>):If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListDatasetsError>
impl Client
pub fn list_explainabilities(&self) -> ListExplainabilitiesFluentBuilder
Constructs a fluent builder for the ListExplainabilities operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)/set_max_results(Option<i32>):The number of items returned in the response.
filters(Filter)/set_filters(Option<Vec<Filter>>):An array of filters. For each filter, provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the resources that match the statement from the list. The match statement consists of a key and a value.
Filter properties
- Condition - The condition to apply. Valid values are IS and IS_NOT.
- Key - The name of the parameter to filter on. Valid values are ResourceArn and Status.
- Value - The value to match.
- On success, responds with
ListExplainabilitiesOutput with field(s): explainabilities(Option<Vec<ExplainabilitySummary>>):An array of objects that summarize the properties of each Explainability resource.
next_token(Option<String>):Returns this token if the response is truncated. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListExplainabilitiesError>
impl Client
pub fn list_explainability_exports(&self) -> ListExplainabilityExportsFluentBuilder
Constructs a fluent builder for the ListExplainabilityExports operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)/set_max_results(Option<i32>):The number of items to return in the response.
filters(Filter)/set_filters(Option<Vec<Filter>>):An array of filters. For each filter, provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude resources that match the statement from the list. The match statement consists of a key and a value.
Filter properties
- Condition - The condition to apply. Valid values are IS and IS_NOT.
- Key - The name of the parameter to filter on. Valid values are ResourceArn and Status.
- Value - The value to match.
- On success, responds with
ListExplainabilityExportsOutput with field(s): explainability_exports(Option<Vec<ExplainabilityExportSummary>>):An array of objects that summarize the properties of each Explainability export.
next_token(Option<String>):Returns this token if the response is truncated. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListExplainabilityExportsError>
impl Client
pub fn list_forecast_export_jobs(&self) -> ListForecastExportJobsFluentBuilder
Constructs a fluent builder for the ListForecastExportJobs operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)/set_max_results(Option<i32>):The number of items to return in the response.
filters(Filter)/set_filters(Option<Vec<Filter>>):An array of filters. For each filter, you provide a condition and a match statement. The condition is either
ISorIS_NOT, which specifies whether to include or exclude the forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition- The condition to apply. Valid values areISandIS_NOT. To include the forecast export jobs that match the statement, specifyIS. To exclude matching forecast export jobs, specifyIS_NOT. -
Key- The name of the parameter to filter on. Valid values areForecastArnandStatus. -
Value- The value to match.
For example, to list all jobs that export a forecast named electricityforecast, specify the following filter:
“Filters”: [ { “Condition”: “IS”, “Key”: “ForecastArn”, “Value”: “arn:aws:forecast:us-west-2::forecast/electricityforecast” } ] -
- On success, responds with
ListForecastExportJobsOutputwith field(s):forecast_export_jobs(Option<Vec<ForecastExportJobSummary>>):An array of objects that summarize each export job’s properties.
next_token(Option<String>):If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListForecastExportJobsError>
pub fn list_forecasts(&self) -> ListForecastsFluentBuilder
Constructs a fluent builder for the ListForecasts operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The number of items to return in the response.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the forecasts that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT. To include the forecasts that match the statement, specify IS. To exclude matching forecasts, specify IS_NOT.
    - Key - The name of the parameter to filter on. Valid values are DatasetGroupArn, PredictorArn, and Status.
    - Value - The value to match.
    For example, to list all forecasts whose status is not ACTIVE, you would specify:
    "Filters": [ { "Condition": "IS_NOT", "Key": "Status", "Value": "ACTIVE" } ]
- On success, responds with ListForecastsOutput with field(s):
  - forecasts(Option<Vec<ForecastSummary>>): An array of objects that summarize each forecast’s properties.
  - next_token(Option<String>): If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with SdkError<ListForecastsError>
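The Status filter above can be expressed with the fluent builder. This is a sketch assuming a recent aws-sdk-forecast release; in current releases Filter and FilterConditionString live under aws_sdk_forecast::types (older releases used aws_sdk_forecast::model), and depending on the SDK version Filter::builder().build() may return a Result, in which case add a ?:

```rust
use aws_sdk_forecast::types::{Filter, FilterConditionString};
use aws_sdk_forecast::Client;

// List every forecast whose status is not ACTIVE -- the same query as the
// "Condition": "IS_NOT", "Key": "Status", "Value": "ACTIVE" JSON example.
async fn list_inactive_forecasts(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let status_filter = Filter::builder()
        .condition(FilterConditionString::IsNot)
        .key("Status")
        .value("ACTIVE")
        .build();
    let output = client
        .list_forecasts()
        .filters(status_filter) // filters() appends one Filter per call
        .send()
        .await?;
    for forecast in output.forecasts.unwrap_or_default() {
        println!("{:?}: {:?}", forecast.forecast_name, forecast.status);
    }
    Ok(())
}
```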
pub fn list_monitor_evaluations(&self) -> ListMonitorEvaluationsFluentBuilder
Constructs a fluent builder for the ListMonitorEvaluations operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The maximum number of monitoring results to return.
  - monitor_arn(impl Into<String>) / set_monitor_arn(Option<String>): The Amazon Resource Name (ARN) of the monitor resource to get results from.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the resources that match the statement from the list. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT.
    - Key - The name of the parameter to filter on. The only valid value is EvaluationState.
    - Value - The value to match. Valid values are only SUCCESS or FAILURE.
    For example, to list only successful monitor evaluations, you would specify:
    "Filters": [ { "Condition": "IS", "Key": "EvaluationState", "Value": "SUCCESS" } ]
- On success, responds with ListMonitorEvaluationsOutput with field(s):
  - next_token(Option<String>): If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - predictor_monitor_evaluations(Option<Vec<PredictorMonitorEvaluation>>): The monitoring results and predictor events collected by the monitor resource during different windows of time.
    For more information about viewing and retrieving monitoring results, see Viewing Monitoring Results.
- On failure, responds with SdkError<ListMonitorEvaluationsError>
pub fn list_monitors(&self) -> ListMonitorsFluentBuilder
Constructs a fluent builder for the ListMonitors operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The maximum number of monitors to include in the response.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the resources that match the statement from the list. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT.
    - Key - The name of the parameter to filter on. The only valid value is Status.
    - Value - The value to match.
    For example, to list all monitors whose status is ACTIVE, you would specify:
    "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]
- On success, responds with ListMonitorsOutput with field(s):
  - monitors(Option<Vec<MonitorSummary>>): An array of objects that summarize each monitor’s properties.
  - next_token(Option<String>): If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with SdkError<ListMonitorsError>
pub fn list_predictor_backtest_export_jobs(&self) -> ListPredictorBacktestExportJobsFluentBuilder
Constructs a fluent builder for the ListPredictorBacktestExportJobs operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The number of items to return in the response.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the predictor backtest export jobs that match the statement from the list. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT. To include the predictor backtest export jobs that match the statement, specify IS. To exclude matching predictor backtest export jobs, specify IS_NOT.
    - Key - The name of the parameter to filter on. Valid values are PredictorArn and Status.
    - Value - The value to match.
- On success, responds with ListPredictorBacktestExportJobsOutput with field(s):
  - predictor_backtest_export_jobs(Option<Vec<PredictorBacktestExportJobSummary>>): An array of objects that summarize the properties of each predictor backtest export job.
  - next_token(Option<String>): Returns this token if the response is truncated. To retrieve the next set of results, use the token in the next request.
- On failure, responds with SdkError<ListPredictorBacktestExportJobsError>
pub fn list_predictors(&self) -> ListPredictorsFluentBuilder
Constructs a fluent builder for the ListPredictors operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The number of items to return in the response.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the predictors that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT. To include the predictors that match the statement, specify IS. To exclude matching predictors, specify IS_NOT.
    - Key - The name of the parameter to filter on. Valid values are DatasetGroupArn and Status.
    - Value - The value to match.
    For example, to list all predictors whose status is ACTIVE, you would specify:
    "Filters": [ { "Condition": "IS", "Key": "Status", "Value": "ACTIVE" } ]
- On success, responds with ListPredictorsOutput with field(s):
  - predictors(Option<Vec<PredictorSummary>>): An array of objects that summarize each predictor’s properties.
  - next_token(Option<String>): If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with SdkError<ListPredictorsError>
pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the ListTagsForResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) that identifies the resource for which to list the tags.
- On success, responds with ListTagsForResourceOutput with field(s):
  - tags(Option<Vec<Tag>>): The tags for the resource.
- On failure, responds with SdkError<ListTagsForResourceError>
pub fn list_what_if_analyses(&self) -> ListWhatIfAnalysesFluentBuilder
Constructs a fluent builder for the ListWhatIfAnalyses operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The number of items to return in the response.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the what-if analysis jobs that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT. To include the what-if analysis jobs that match the statement, specify IS. To exclude matching what-if analysis jobs, specify IS_NOT.
    - Key - The name of the parameter to filter on. Valid values are WhatIfAnalysisArn and Status.
    - Value - The value to match.
    For example, to list all jobs that export a forecast named electricityWhatIf, specify the following filter:
    "Filters": [ { "Condition": "IS", "Key": "WhatIfAnalysisArn", "Value": "arn:aws:forecast:us-west-2::forecast/electricityWhatIf" } ]
- On success, responds with ListWhatIfAnalysesOutput with field(s):
  - what_if_analyses(Option<Vec<WhatIfAnalysisSummary>>): An array of WhatIfAnalysisSummary objects that describe the matched analyses.
  - next_token(Option<String>): If the response is truncated, Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with SdkError<ListWhatIfAnalysesError>
pub fn list_what_if_forecast_exports(&self) -> ListWhatIfForecastExportsFluentBuilder
Constructs a fluent builder for the ListWhatIfForecastExports operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The number of items to return in the response.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the what-if forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT. To include the forecast export jobs that match the statement, specify IS. To exclude matching forecast export jobs, specify IS_NOT.
    - Key - The name of the parameter to filter on. Valid values are WhatIfForecastExportArn and Status.
    - Value - The value to match.
    For example, to list all jobs that export a forecast named electricityWIFExport, specify the following filter:
    "Filters": [ { "Condition": "IS", "Key": "WhatIfForecastExportArn", "Value": "arn:aws:forecast:us-west-2::forecast/electricityWIFExport" } ]
- On success, responds with ListWhatIfForecastExportsOutput with field(s):
  - what_if_forecast_exports(Option<Vec<WhatIfForecastExportSummary>>): An array of WhatIfForecastExports objects that describe the matched forecast exports.
  - next_token(Option<String>): If the response is truncated, Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with SdkError<ListWhatIfForecastExportsError>
pub fn list_what_if_forecasts(&self) -> ListWhatIfForecastsFluentBuilder
Constructs a fluent builder for the ListWhatIfForecasts operation.
This operation supports pagination; see into_paginator().
- The fluent builder is configurable:
  - next_token(impl Into<String>) / set_next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
  - max_results(i32) / set_max_results(Option<i32>): The number of items to return in the response.
  - filters(Filter) / set_filters(Option<Vec<Filter>>): An array of filters. For each filter, you provide a condition and a match statement. The condition is either IS or IS_NOT, which specifies whether to include or exclude the what-if forecasts that match the statement from the list, respectively. The match statement consists of a key and a value. Filter properties:
    - Condition - The condition to apply. Valid values are IS and IS_NOT. To include the what-if forecasts that match the statement, specify IS. To exclude matching what-if forecasts, specify IS_NOT.
    - Key - The name of the parameter to filter on. Valid values are WhatIfForecastArn and Status.
    - Value - The value to match.
    For example, to list all jobs that export a forecast named electricityWhatIfForecast, specify the following filter:
    "Filters": [ { "Condition": "IS", "Key": "WhatIfForecastArn", "Value": "arn:aws:forecast:us-west-2::forecast/electricityWhatIfForecast" } ]
- On success, responds with ListWhatIfForecastsOutput with field(s):
  - what_if_forecasts(Option<Vec<WhatIfForecastSummary>>): An array of WhatIfForecasts objects that describe the matched forecasts.
  - next_token(Option<String>): If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
- On failure, responds with SdkError<ListWhatIfForecastsError>
pub fn resume_resource(&self) -> ResumeResourceFluentBuilder
Constructs a fluent builder for the ResumeResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) of the monitor resource to resume.
- On success, responds with ResumeResourceOutput
- On failure, responds with SdkError<ResumeResourceError>
pub fn stop_resource(&self) -> StopResourceFluentBuilder
Constructs a fluent builder for the StopResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) that identifies the resource to stop. The supported ARNs are DatasetImportJobArn, PredictorArn, PredictorBacktestExportJobArn, ForecastArn, ForecastExportJobArn, ExplainabilityArn, and ExplainabilityExportArn.
- On success, responds with StopResourceOutput
- On failure, responds with SdkError<StopResourceError>
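A minimal call sketch, assuming a recent aws-sdk-forecast release; the ARN parameter is a placeholder for any of the supported ARN types listed above:

```rust
use aws_sdk_forecast::Client;

// Stop an in-progress job, e.g. a predictor training run or an export job.
// Stopping a resource does not delete it; it only halts further processing.
async fn stop(client: &Client, resource_arn: &str) -> Result<(), Box<dyn std::error::Error>> {
    client
        .stop_resource()
        .resource_arn(resource_arn) // e.g. a PredictorArn or ForecastExportJobArn
        .send()
        .await?;
    Ok(())
}
```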
pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the TagResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) that identifies the resource to which the tags are added.
  - tags(Tag) / set_tags(Option<Vec<Tag>>): The tags to add to the resource. A tag is an array of key-value pairs.
    The following basic restrictions apply to tags:
    - Maximum number of tags per resource: 50.
    - For each resource, each tag key must be unique, and each tag key can have only one value.
    - Maximum key length: 128 Unicode characters in UTF-8.
    - Maximum value length: 256 Unicode characters in UTF-8.
    - If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
    - Tag keys and values are case sensitive.
    - Do not use aws:, AWS:, or any upper- or lowercase combination of such as a prefix for keys, as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and it will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags-per-resource limit.
- On success, responds with TagResourceOutput
- On failure, responds with SdkError<TagResourceError>
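Tags are supplied through the Tag builder. This is a sketch assuming a recent aws-sdk-forecast release: Tag lives under aws_sdk_forecast::types in current releases, and Tag::builder().build() is fallible there because key and value are required members; in older releases drop the ? and import from aws_sdk_forecast::model instead. The tag key and value are hypothetical examples.

```rust
use aws_sdk_forecast::types::Tag;
use aws_sdk_forecast::Client;

// Attach a single key-value tag to a resource. Per the restrictions above,
// keys are case sensitive and must be unique per resource.
async fn tag(client: &Client, resource_arn: &str) -> Result<(), Box<dyn std::error::Error>> {
    let tag = Tag::builder()
        .key("project")       // hypothetical tag key
        .value("demand-plan") // hypothetical tag value
        .build()?;            // fails if a required member was not set
    client
        .tag_resource()
        .resource_arn(resource_arn)
        .tags(tag) // tags() appends one Tag per call
        .send()
        .await?;
    Ok(())
}
```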
pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the UntagResource operation.
- The fluent builder is configurable:
  - resource_arn(impl Into<String>) / set_resource_arn(Option<String>): The Amazon Resource Name (ARN) that identifies the resource from which the tags are removed.
  - tag_keys(impl Into<String>) / set_tag_keys(Option<Vec<String>>): The keys of the tags to be removed.
- On success, responds with UntagResourceOutput
- On failure, responds with SdkError<UntagResourceError>
pub fn update_dataset_group(&self) -> UpdateDatasetGroupFluentBuilder
Constructs a fluent builder for the UpdateDatasetGroup operation.
- The fluent builder is configurable:
  - dataset_group_arn(impl Into<String>) / set_dataset_group_arn(Option<String>): The ARN of the dataset group.
  - dataset_arns(impl Into<String>) / set_dataset_arns(Option<Vec<String>>): An array of the Amazon Resource Names (ARNs) of the datasets to add to the dataset group.
- On success, responds with UpdateDatasetGroupOutput
- On failure, responds with SdkError<UpdateDatasetGroupError>
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SdkConfig.
Panics
- This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
- This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.