Struct aws_sdk_forecast::Client
pub struct Client { /* private fields */ }
Client for Amazon Forecast Service
Client for invoking operations on Amazon Forecast Service. Each operation on Amazon Forecast Service is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
Examples
Constructing a client and invoking an operation
// create a shared configuration. This can be used & shared between multiple service clients.
let shared_config = aws_config::load_from_env().await;
let client = aws_sdk_forecast::Client::new(&shared_config);
// invoke an operation
/* let rsp = client
.<operation_name>().
.<param>("some value")
.send().await; */
Constructing a client with custom configuration
use aws_config::retry::RetryConfig;
let shared_config = aws_config::load_from_env().await;
let config = aws_sdk_forecast::config::Builder::from(&shared_config)
.retry_config(RetryConfig::disabled())
.build();
let client = aws_sdk_forecast::Client::from_conf(config);
Implementations
impl Client
pub fn with_config(
client: Client<DynConnector, DynMiddleware<DynConnector>>,
conf: Config
) -> Self
Creates a client with the given service configuration.
impl Client
pub fn create_auto_predictor(&self) -> CreateAutoPredictor
Constructs a fluent builder for the CreateAutoPredictor
operation.
- The fluent builder is configurable:
predictor_name(impl Into<String>)
/set_predictor_name(Option<String>)
:A unique name for the predictor
forecast_horizon(i32)
/set_forecast_horizon(Option<i32>)
:The number of time-steps that the model predicts. The forecast horizon is also called the prediction length.
The maximum forecast horizon is the lesser of 500 time-steps or 1/4 of the TARGET_TIME_SERIES dataset length. If you are retraining an existing AutoPredictor, then the maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.
If you are upgrading to an AutoPredictor or retraining an existing AutoPredictor, you cannot update the forecast horizon parameter. You can meet this requirement by providing longer time-series in the dataset.
forecast_types(Vec<String>)
/set_forecast_types(Option<Vec<String>>)
:The forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean.
forecast_dimensions(Vec<String>)
/set_forecast_dimensions(Option<Vec<String>>)
:An array of dimension (field) names that specify how to group the generated forecast.
For example, if you are generating forecasts for item sales across all your stores, and your dataset contains a store_id field, you would specify store_id as a dimension to group sales forecasts for each store.
forecast_frequency(impl Into<String>)
/set_forecast_frequency(Option<String>)
:The frequency of predictions in a forecast.
Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, “Y” indicates every year and “5min” indicates every five minutes.
The frequency must be greater than or equal to the TARGET_TIME_SERIES dataset frequency.
When a RELATED_TIME_SERIES dataset is provided, the frequency must be equal to the RELATED_TIME_SERIES dataset frequency.
data_config(DataConfig)
/set_data_config(Option<DataConfig>)
:The data configuration for your dataset group and any additional datasets.
encryption_config(EncryptionConfig)
/set_encryption_config(Option<EncryptionConfig>)
:An AWS Key Management Service (KMS) key and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key. You can specify this optional object in the CreateDataset and CreatePredictor requests.
reference_predictor_arn(impl Into<String>)
/set_reference_predictor_arn(Option<String>)
:The ARN of the predictor to retrain or upgrade. This parameter is only used when retraining or upgrading a predictor. When creating a new predictor, do not specify a value for this parameter.
When upgrading or retraining a predictor, only specify values for the ReferencePredictorArn and PredictorName. The value for PredictorName must be a unique predictor name.
optimization_metric(OptimizationMetric)
/set_optimization_metric(Option<OptimizationMetric>)
:The accuracy metric used to optimize the predictor.
explain_predictor(bool)
/set_explain_predictor(Option<bool>)
:Create an Explainability resource for the predictor.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:Optional metadata to help you categorize and organize your predictors. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
-
For each resource, each tag key must be unique and each tag key must have one value.
-
Maximum number of tags per resource: 50.
-
Maximum key length: 128 Unicode characters in UTF-8.
-
Maximum value length: 256 Unicode characters in UTF-8.
-
Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
-
Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
-
monitor_config(MonitorConfig)
/set_monitor_config(Option<MonitorConfig>)
:The configuration details for predictor monitoring. Provide a name for the monitor resource to enable predictor monitoring.
Predictor monitoring allows you to see how your predictor’s performance changes over time. For more information, see Predictor Monitoring.
time_alignment_boundary(TimeAlignmentBoundary)
/set_time_alignment_boundary(Option<TimeAlignmentBoundary>)
:The time boundary Forecast uses to align and aggregate any data that doesn’t align with your forecast frequency. Provide the unit of time and the time boundary as a key value pair. For more information on specifying a time boundary, see Specifying a Time Boundary. If you don’t provide a time boundary, Forecast uses a set of Default Time Boundaries.
- On success, responds with
CreateAutoPredictorOutput
with field(s):predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor.
- On failure, responds with
SdkError<CreateAutoPredictorError>
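As an illustrative sketch only (the predictor name, horizon, frequency, and data_config value below are placeholder assumptions, not values from this documentation):
// Build and send a CreateAutoPredictor request, then read the returned ARN.
let resp = client
    .create_auto_predictor()
    .predictor_name("my_auto_predictor")  // placeholder name
    .forecast_horizon(14)                 // predict 14 time-steps ahead
    .forecast_frequency("D")              // daily predictions
    .data_config(data_config)             // a DataConfig built for your dataset group
    .send()
    .await?;
println!("predictor ARN: {:?}", resp.predictor_arn());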
pub fn create_dataset(&self) -> CreateDataset
Constructs a fluent builder for the CreateDataset
operation.
- The fluent builder is configurable:
dataset_name(impl Into<String>)
/set_dataset_name(Option<String>)
:A name for the dataset.
domain(Domain)
/set_domain(Option<Domain>)
:The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match.
The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see Importing datasets.
dataset_type(DatasetType)
/set_dataset_type(Option<DatasetType>)
:The dataset type. Valid values depend on the chosen Domain.
data_frequency(impl Into<String>)
/set_data_frequency(Option<String>)
:The frequency of data collection. This parameter is required for RELATED_TIME_SERIES datasets.
Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, “D” indicates every day and “15min” indicates every 15 minutes.
schema(Schema)
/set_schema(Option<Schema>)
:The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see Dataset Domains and Dataset Types.
encryption_config(EncryptionConfig)
/set_encryption_config(Option<EncryptionConfig>)
:An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:The optional metadata that you apply to the dataset to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
- On success, responds with
CreateDatasetOutput
with field(s):dataset_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset.
- On failure, responds with
SdkError<CreateDatasetError>
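A minimal sketch, assuming the schema and enum types (Schema, SchemaAttribute, AttributeType, Domain, DatasetType) live in aws_sdk_forecast::model for this SDK version; the dataset name is a placeholder:
use aws_sdk_forecast::model::{AttributeType, DatasetType, Domain, Schema, SchemaAttribute};

// A RETAIL target time series needs item_id, timestamp, and demand fields.
let schema = Schema::builder()
    .attributes(SchemaAttribute::builder()
        .attribute_name("item_id")
        .attribute_type(AttributeType::String)
        .build())
    .attributes(SchemaAttribute::builder()
        .attribute_name("timestamp")
        .attribute_type(AttributeType::Timestamp)
        .build())
    .attributes(SchemaAttribute::builder()
        .attribute_name("demand")
        .attribute_type(AttributeType::Float)
        .build())
    .build();

let resp = client
    .create_dataset()
    .dataset_name("retail_demand")  // placeholder name
    .domain(Domain::Retail)
    .dataset_type(DatasetType::TargetTimeSeries)
    .data_frequency("D")
    .schema(schema)
    .send()
    .await?;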
pub fn create_dataset_group(&self) -> CreateDatasetGroup
Constructs a fluent builder for the CreateDatasetGroup
operation.
- The fluent builder is configurable:
dataset_group_name(impl Into<String>)
/set_dataset_group_name(Option<String>)
:A name for the dataset group.
domain(Domain)
/set_domain(Option<Domain>)
:The domain associated with the dataset group. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDataset operation must match.
The Domain and DatasetType that you choose determine the fields that must be present in training data that you import to a dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires that item_id, timestamp, and demand fields are present in your data. For more information, see Dataset groups.
dataset_arns(Vec<String>)
/set_dataset_arns(Option<Vec<String>>)
:An array of Amazon Resource Names (ARNs) of the datasets that you want to include in the dataset group.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:The optional metadata that you apply to the dataset group to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
- On success, responds with
CreateDatasetGroupOutput
with field(s):dataset_group_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset group.
- On failure, responds with
SdkError<CreateDatasetGroupError>
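A hedged sketch (the dataset ARN and group name are placeholders; Domain is assumed to come from aws_sdk_forecast::model):
use aws_sdk_forecast::model::Domain;

// The domain must match the Domain of the datasets added to the group.
let resp = client
    .create_dataset_group()
    .dataset_group_name("retail_demand_group")
    .domain(Domain::Retail)
    .dataset_arns("arn:aws:forecast:us-east-1:111122223333:dataset/retail_demand")
    .send()
    .await?;
println!("dataset group ARN: {:?}", resp.dataset_group_arn());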
pub fn create_dataset_import_job(&self) -> CreateDatasetImportJob
Constructs a fluent builder for the CreateDatasetImportJob
operation.
- The fluent builder is configurable:
dataset_import_job_name(impl Into<String>)
/set_dataset_import_job_name(Option<String>)
:The name for the dataset import job. We recommend including the current timestamp in the name, for example, 20190721DatasetImport. This can help you avoid getting a ResourceAlreadyExistsException exception.
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Amazon Forecast dataset that you want to import data to.
data_source(DataSource)
/set_data_source(Option<DataSource>)
:The location of the training data to import and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data. The training data must be stored in an Amazon S3 bucket.
If encryption is used, DataSource must include an AWS Key Management Service (KMS) key and the IAM role must allow Amazon Forecast permission to access the key. The KMS key and IAM role must match those specified in the EncryptionConfig parameter of the CreateDataset operation.
timestamp_format(impl Into<String>)
/set_timestamp_format(Option<String>)
:The format of timestamps in the dataset. The format that you specify depends on the DataFrequency specified when the dataset was created. The following formats are supported
-
“yyyy-MM-dd”
For the following data frequencies: Y, M, W, and D
-
“yyyy-MM-dd HH:mm:ss”
For the following data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D
If the format isn’t specified, Amazon Forecast expects the format to be “yyyy-MM-dd HH:mm:ss”.
-
time_zone(impl Into<String>)
/set_time_zone(Option<String>)
:A single time zone for every item in your dataset. This option is ideal for datasets with all timestamps within a single time zone, or if all timestamps are normalized to a single time zone.
Refer to the Joda-Time API for a complete list of valid time zone names.
use_geolocation_for_time_zone(bool)
/set_use_geolocation_for_time_zone(bool)
:Automatically derive time zone information from the geolocation attribute. This option is ideal for datasets that contain timestamps in multiple time zones and those timestamps are expressed in local time.
geolocation_format(impl Into<String>)
/set_geolocation_format(Option<String>)
:The format of the geolocation attribute. The geolocation attribute can be formatted in one of two ways:
-
LAT_LONG - the latitude and longitude in decimal format (Example: 47.61_-122.33).
-
CC_POSTALCODE (US Only) - the country code (US), followed by the 5-digit ZIP code (Example: US_98121).
-
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:The optional metadata that you apply to the dataset import job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
format(impl Into<String>)
/set_format(Option<String>)
:The format of the imported data, CSV or PARQUET. The default value is CSV.
- On success, responds with
CreateDatasetImportJobOutput
with field(s):dataset_import_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset import job.
- On failure, responds with
SdkError<CreateDatasetImportJobError>
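A sketch of importing CSV data from Amazon S3; the bucket path, role ARN, and dataset ARN are placeholders, and DataSource/S3Config are assumed to come from aws_sdk_forecast::model:
use aws_sdk_forecast::model::{DataSource, S3Config};

// Point Forecast at the training data and the IAM role it may assume.
let data_source = DataSource::builder()
    .s3_config(S3Config::builder()
        .path("s3://my-bucket/retail/demand.csv")
        .role_arn("arn:aws:iam::111122223333:role/ForecastS3AccessRole")
        .build())
    .build();

let resp = client
    .create_dataset_import_job()
    .dataset_import_job_name("20190721DatasetImport")
    .dataset_arn("arn:aws:forecast:us-east-1:111122223333:dataset/retail_demand")
    .data_source(data_source)
    .timestamp_format("yyyy-MM-dd")
    .send()
    .await?;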
pub fn create_explainability(&self) -> CreateExplainability
Constructs a fluent builder for the CreateExplainability
operation.
- The fluent builder is configurable:
explainability_name(impl Into<String>)
/set_explainability_name(Option<String>)
:A unique name for the Explainability.
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Predictor or Forecast used to create the Explainability.
explainability_config(ExplainabilityConfig)
/set_explainability_config(Option<ExplainabilityConfig>)
:The configuration settings that define the granularity of time series and time points for the Explainability.
data_source(DataSource)
/set_data_source(Option<DataSource>)
:The source of your data, an AWS Identity and Access Management (IAM) role that allows Amazon Forecast to access the data and, optionally, an AWS Key Management Service (KMS) key.
schema(Schema)
/set_schema(Option<Schema>)
:Defines the fields of a dataset.
enable_visualization(bool)
/set_enable_visualization(Option<bool>)
:Create an Explainability visualization that is viewable within the AWS console.
start_date_time(impl Into<String>)
/set_start_date_time(Option<String>)
:If TimePointGranularity is set to SPECIFIC, define the first point for the Explainability.
Use the following timestamp format: yyyy-MM-ddTHH:mm:ss (example: 2015-01-01T20:00:00)
end_date_time(impl Into<String>)
/set_end_date_time(Option<String>)
:If TimePointGranularity is set to SPECIFIC, define the last time point for the Explainability.
Use the following timestamp format: yyyy-MM-ddTHH:mm:ss (example: 2015-01-01T20:00:00)
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:Optional metadata to help you categorize and organize your resources. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
-
For each resource, each tag key must be unique and each tag key must have one value.
-
Maximum number of tags per resource: 50.
-
Maximum key length: 128 Unicode characters in UTF-8.
-
Maximum value length: 256 Unicode characters in UTF-8.
-
Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
-
Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
-
- On success, responds with
CreateExplainabilityOutput
with field(s):explainability_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability.
- On failure, responds with
SdkError<CreateExplainabilityError>
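A sketch that requests Explainability across all time series and all time points; the names and ARN are placeholders, and ExplainabilityConfig with its granularity enums is assumed to come from aws_sdk_forecast::model:
use aws_sdk_forecast::model::{ExplainabilityConfig, TimePointGranularity, TimeSeriesGranularity};

let resp = client
    .create_explainability()
    .explainability_name("my_predictor_explainability")
    .resource_arn("arn:aws:forecast:us-east-1:111122223333:predictor/my_auto_predictor")
    .explainability_config(ExplainabilityConfig::builder()
        .time_series_granularity(TimeSeriesGranularity::All)
        .time_point_granularity(TimePointGranularity::All)
        .build())
    .send()
    .await?;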
pub fn create_explainability_export(&self) -> CreateExplainabilityExport
Constructs a fluent builder for the CreateExplainabilityExport
operation.
- The fluent builder is configurable:
explainability_export_name(impl Into<String>)
/set_explainability_export_name(Option<String>)
:A unique name for the Explainability export.
explainability_arn(impl Into<String>)
/set_explainability_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability to export.
destination(DataDestination)
/set_destination(Option<DataDestination>)
:The destination for an export job. Provide an S3 path, an AWS Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and an AWS Key Management Service (KMS) key (optional).
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:Optional metadata to help you categorize and organize your resources. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
-
For each resource, each tag key must be unique and each tag key must have one value.
-
Maximum number of tags per resource: 50.
-
Maximum key length: 128 Unicode characters in UTF-8.
-
Maximum value length: 256 Unicode characters in UTF-8.
-
Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
-
Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
-
format(impl Into<String>)
/set_format(Option<String>)
:The format of the exported data, CSV or PARQUET.
- On success, responds with
CreateExplainabilityExportOutput
with field(s):explainability_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the export.
- On failure, responds with
SdkError<CreateExplainabilityExportError>
pub fn create_forecast(&self) -> CreateForecast
Constructs a fluent builder for the CreateForecast
operation.
- The fluent builder is configurable:
forecast_name(impl Into<String>)
/set_forecast_name(Option<String>)
:A name for the forecast.
predictor_arn(impl Into<String>)
/set_predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor to use to generate the forecast.
forecast_types(Vec<String>)
/set_forecast_types(Option<Vec<String>>)
:The quantiles at which probabilistic forecasts are generated. You can currently specify up to 5 quantiles per forecast. Accepted values include 0.01 to 0.99 (increments of .01 only) and mean. The mean forecast is different from the median (0.50) when the distribution is not symmetric (for example, Beta and Negative Binomial).
The default quantiles are the quantiles you specified during predictor creation. If you didn’t specify quantiles, the default values are [“0.1”, “0.5”, “0.9”].
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:The optional metadata that you apply to the forecast to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
time_series_selector(TimeSeriesSelector)
/set_time_series_selector(Option<TimeSeriesSelector>)
:Defines the set of time series that are used to create the forecasts in a TimeSeriesIdentifiers object.
The TimeSeriesIdentifiers object needs the following information:
-
DataSource
-
Format
-
Schema
-
- On success, responds with
CreateForecastOutput
with field(s):forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the forecast.
- On failure, responds with
SdkError<CreateForecastError>
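A sketch of generating a forecast at three quantiles from an existing predictor (the ARN and name are placeholders; each forecast_types call appends one value):
let resp = client
    .create_forecast()
    .forecast_name("retail_demand_forecast")
    .predictor_arn("arn:aws:forecast:us-east-1:111122223333:predictor/my_auto_predictor")
    .forecast_types("0.1")
    .forecast_types("0.5")
    .forecast_types("0.9")
    .send()
    .await?;
println!("forecast ARN: {:?}", resp.forecast_arn());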
pub fn create_forecast_export_job(&self) -> CreateForecastExportJob
Constructs a fluent builder for the CreateForecastExportJob
operation.
- The fluent builder is configurable:
forecast_export_job_name(impl Into<String>)
/set_forecast_export_job_name(Option<String>)
:The name for the forecast export job.
forecast_arn(impl Into<String>)
/set_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the forecast that you want to export.
destination(DataDestination)
/set_destination(Option<DataDestination>)
:The location where you want to save the forecast and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location. The forecast must be exported to an Amazon S3 bucket.
If encryption is used, Destination must include an AWS Key Management Service (KMS) key. The IAM role must allow Amazon Forecast permission to access the key.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:The optional metadata that you apply to the forecast export job to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
format(impl Into<String>)
/set_format(Option<String>)
:The format of the exported data, CSV or PARQUET. The default value is CSV.
- On success, responds with
CreateForecastExportJobOutput
with field(s):forecast_export_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the export job.
- On failure, responds with
SdkError<CreateForecastExportJobError>
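A sketch of exporting a forecast to S3; the S3 path, role ARN, and forecast ARN are placeholders, and DataDestination/S3Config are assumed to come from aws_sdk_forecast::model:
use aws_sdk_forecast::model::{DataDestination, S3Config};

let destination = DataDestination::builder()
    .s3_config(S3Config::builder()
        .path("s3://my-bucket/exports/")
        .role_arn("arn:aws:iam::111122223333:role/ForecastS3AccessRole")
        .build())
    .build();

let resp = client
    .create_forecast_export_job()
    .forecast_export_job_name("retail_demand_export")
    .forecast_arn("arn:aws:forecast:us-east-1:111122223333:forecast/retail_demand_forecast")
    .destination(destination)
    .format("CSV")
    .send()
    .await?;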
pub fn create_monitor(&self) -> CreateMonitor
Constructs a fluent builder for the CreateMonitor
operation.
- The fluent builder is configurable:
monitor_name(impl Into<String>)
/set_monitor_name(Option<String>)
:The name of the monitor resource.
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor to monitor.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:A list of tags to apply to the monitor resource.
- On success, responds with
CreateMonitorOutput
with field(s):monitor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the monitor resource.
- On failure, responds with
SdkError<CreateMonitorError>
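A sketch of enabling monitoring for an existing auto predictor (both values are placeholders):
let resp = client
    .create_monitor()
    .monitor_name("my_auto_predictor_monitor")
    .resource_arn("arn:aws:forecast:us-east-1:111122223333:predictor/my_auto_predictor")
    .send()
    .await?;
println!("monitor ARN: {:?}", resp.monitor_arn());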
pub fn create_predictor(&self) -> CreatePredictor
Constructs a fluent builder for the CreatePredictor
operation.
- The fluent builder is configurable:
predictor_name(impl Into<String>)
/set_predictor_name(Option<String>)
:A name for the predictor.
algorithm_arn(impl Into<String>)
/set_algorithm_arn(Option<String>)
:The Amazon Resource Name (ARN) of the algorithm to use for model training. Required if PerformAutoML is not set to true.
Supported algorithms:
-
arn:aws:forecast:::algorithm/ARIMA
-
arn:aws:forecast:::algorithm/CNN-QR
-
arn:aws:forecast:::algorithm/Deep_AR_Plus
-
arn:aws:forecast:::algorithm/ETS
-
arn:aws:forecast:::algorithm/NPTS
-
arn:aws:forecast:::algorithm/Prophet
-
forecast_horizon(i32)
/set_forecast_horizon(Option<i32>)
:Specifies the number of time-steps that the model is trained to predict. The forecast horizon is also called the prediction length.
For example, if you configure a dataset for daily data collection (using the DataFrequency parameter of the CreateDataset operation) and set the forecast horizon to 10, the model returns predictions for 10 days.
The maximum forecast horizon is the lesser of 500 time-steps or 1/3 of the TARGET_TIME_SERIES dataset length.
forecast_types(Vec<String>)
/set_forecast_types(Option<Vec<String>>)
:Specifies the forecast types used to train a predictor. You can specify up to five forecast types. Forecast types can be quantiles from 0.01 to 0.99, by increments of 0.01 or higher. You can also specify the mean forecast with mean.
The default value is [“0.10”, “0.50”, “0.9”].
perform_auto_ml(bool)
/set_perform_auto_ml(Option<bool>)
:Whether to perform AutoML. When Amazon Forecast performs AutoML, it evaluates the algorithms it provides and chooses the best algorithm and configuration for your training dataset.
The default value is false. In this case, you are required to specify an algorithm.
Set PerformAutoML to true to have Amazon Forecast perform AutoML. This is a good option if you aren’t sure which algorithm is suitable for your training data. In this case, PerformHPO must be false.
auto_ml_override_strategy(AutoMlOverrideStrategy)
/set_auto_ml_override_strategy(Option<AutoMlOverrideStrategy>)
:The LatencyOptimized AutoML override strategy is only available in private beta. Contact AWS Support or your account manager to learn more about access privileges.
Used to override the default AutoML strategy, which is to optimize predictor accuracy. To apply an AutoML strategy that minimizes training time, use LatencyOptimized.
This parameter is only valid for predictors trained using AutoML.
perform_hpo(bool)
/set_perform_hpo(Option<bool>)
:Whether to perform hyperparameter optimization (HPO). HPO finds optimal hyperparameter values for your training data. The process of performing HPO is known as running a hyperparameter tuning job.
The default value is false. In this case, Amazon Forecast uses default hyperparameter values from the chosen algorithm.
To override the default values, set PerformHPO to true and, optionally, supply the HyperParameterTuningJobConfig object. The tuning job specifies a metric to optimize, which hyperparameters participate in tuning, and the valid range for each tunable hyperparameter. In this case, you are required to specify an algorithm and PerformAutoML must be false.
The following algorithms support HPO:
-
DeepAR+
-
CNN-QR
-
training_parameters(HashMap<String, String>)
/set_training_parameters(Option<HashMap<String, String>>)
:The hyperparameters to override for model training. The hyperparameters that you can override are listed in the individual algorithms. For the list of supported algorithms, see aws-forecast-choosing-recipes.
evaluation_parameters(EvaluationParameters)
/set_evaluation_parameters(Option<EvaluationParameters>)
:Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.
hpo_config(HyperParameterTuningJobConfig)
/set_hpo_config(Option<HyperParameterTuningJobConfig>)
:Provides hyperparameter override values for the algorithm. If you don’t provide this parameter, Amazon Forecast uses default values. The individual algorithms specify which hyperparameters support hyperparameter optimization (HPO). For more information, see aws-forecast-choosing-recipes.
If you included the HPOConfig object, you must set PerformHPO to true.
input_data_config(InputDataConfig)
/set_input_data_config(Option<InputDataConfig>)
:Describes the dataset group that contains the data to use to train the predictor.
featurization_config(FeaturizationConfig)
/set_featurization_config(Option<FeaturizationConfig>)
:The featurization configuration.
encryption_config(EncryptionConfig)
/set_encryption_config(Option<EncryptionConfig>)
:An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:The optional metadata that you apply to the predictor to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
-
optimization_metric(OptimizationMetric)
/set_optimization_metric(Option<OptimizationMetric>)
:The accuracy metric used to optimize the predictor.
- On success, responds with
CreatePredictorOutput
with field(s):predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor.
- On failure, responds with
SdkError<CreatePredictorError>
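A sketch of training a legacy predictor with AutoML rather than a fixed algorithm; the dataset group ARN and frequency are placeholders, and InputDataConfig/FeaturizationConfig are assumed to come from aws_sdk_forecast::model:
use aws_sdk_forecast::model::{FeaturizationConfig, InputDataConfig};

// With perform_auto_ml(true), algorithm_arn is omitted and PerformHPO must stay false.
let resp = client
    .create_predictor()
    .predictor_name("my_legacy_predictor")
    .forecast_horizon(10)
    .perform_auto_ml(true)
    .input_data_config(InputDataConfig::builder()
        .dataset_group_arn("arn:aws:forecast:us-east-1:111122223333:dataset-group/retail_demand_group")
        .build())
    .featurization_config(FeaturizationConfig::builder()
        .forecast_frequency("D")
        .build())
    .send()
    .await?;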
pub fn create_predictor_backtest_export_job(
&self
) -> CreatePredictorBacktestExportJob
Constructs a fluent builder for the CreatePredictorBacktestExportJob
operation.
- The fluent builder is configurable:
predictor_backtest_export_job_name(impl Into<String>)
/set_predictor_backtest_export_job_name(Option<String>)
:The name for the backtest export job.
predictor_arn(impl Into<String>)
/set_predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor that you want to export.
destination(DataDestination)
/set_destination(Option<DataDestination>)
:The destination for an export job. Provide an S3 path, an AWS Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and an AWS Key Management Service (KMS) key (optional).
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:Optional metadata to help you categorize and organize your backtests. Each tag consists of a key and an optional value, both of which you define. Tag keys and values are case sensitive.
The following restrictions apply to tags:
-
For each resource, each tag key must be unique and each tag key must have one value.
-
Maximum number of tags per resource: 50.
-
Maximum key length: 128 Unicode characters in UTF-8.
-
Maximum value length: 256 Unicode characters in UTF-8.
-
Accepted characters: all letters and numbers, spaces representable in UTF-8, and + - = . _ : / @. If your tagging schema is used across other services and resources, the character restrictions of those services also apply.
-
Key prefixes cannot include any upper or lowercase combination of aws: or AWS:. Values can have this prefix. If a tag value has aws as its prefix but the key does not, Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit. You cannot edit or delete tag keys with this prefix.
-
format(impl Into<String>)
/set_format(Option<String>)
:The format of the exported data, CSV or PARQUET. The default value is CSV.
- On success, responds with
CreatePredictorBacktestExportJobOutput
with field(s):predictor_backtest_export_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor backtest export job that you want to export.
- On failure, responds with
SdkError<CreatePredictorBacktestExportJobError>
pub fn create_what_if_analysis(&self) -> CreateWhatIfAnalysis
Constructs a fluent builder for the CreateWhatIfAnalysis
operation.
- The fluent builder is configurable:
what_if_analysis_name(impl Into<String>)
/set_what_if_analysis_name(Option<String>)
:The name of the what-if analysis. Each name must be unique.
forecast_arn(impl Into<String>)
/set_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the baseline forecast.
time_series_selector(TimeSeriesSelector)
/set_time_series_selector(Option<TimeSeriesSelector>)
:Defines the set of time series that are used in the what-if analysis with a TimeSeriesIdentifiers object. What-if analyses are performed only for the time series in this object.
The TimeSeriesIdentifiers object needs the following information:
-
DataSource
-
Format
-
Schema
-
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:A list of tags to apply to the what if forecast.
- On success, responds with
CreateWhatIfAnalysisOutput
with field(s):what_if_analysis_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if analysis.
- On failure, responds with
SdkError<CreateWhatIfAnalysisError>
pub fn create_what_if_forecast(&self) -> CreateWhatIfForecast
Constructs a fluent builder for the CreateWhatIfForecast
operation.
- The fluent builder is configurable:
what_if_forecast_name(impl Into<String>)
/set_what_if_forecast_name(Option<String>)
:The name of the what-if forecast. Names must be unique within each what-if analysis.
what_if_analysis_arn(impl Into<String>)
/set_what_if_analysis_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if analysis.
time_series_transformations(Vec<TimeSeriesTransformation>)
/set_time_series_transformations(Option<Vec<TimeSeriesTransformation>>)
:The transformations that are applied to the baseline time series. Each transformation contains an action and a set of conditions. An action is applied only when all conditions are met. If no conditions are provided, the action is applied to all items.
time_series_replacements_data_source(TimeSeriesReplacementsDataSource)
/set_time_series_replacements_data_source(Option<TimeSeriesReplacementsDataSource>)
:The replacement time series dataset, which contains the rows that you want to change in the related time series dataset. A replacement time series does not need to contain all rows that are in the baseline related time series. Include only the rows (measure-dimension combinations) that you want to include in the what-if forecast. This dataset is merged with the original time series to create a transformed dataset that is used for the what-if analysis.
This dataset should contain the items to modify (such as item_id or workforce_type), any relevant dimensions, the timestamp column, and at least one of the related time series columns. This file should not contain duplicate timestamps for the same time series.
Timestamps and item_ids not included in this dataset are not included in the what-if analysis.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:A list of tags to apply to the what if forecast.
- On success, responds with
CreateWhatIfForecastOutput
with field(s):what_if_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast.
- On failure, responds with
SdkError<CreateWhatIfForecastError>
pub fn create_what_if_forecast_export(&self) -> CreateWhatIfForecastExport
Constructs a fluent builder for the CreateWhatIfForecastExport
operation.
- The fluent builder is configurable:
what_if_forecast_export_name(impl Into<String>)
/set_what_if_forecast_export_name(Option<String>)
:The name of the what-if forecast to export.
what_if_forecast_arns(Vec<String>)
/set_what_if_forecast_arns(Option<Vec<String>>)
:The list of what-if forecast Amazon Resource Names (ARNs) to export.
destination(DataDestination)
/set_destination(Option<DataDestination>)
:The location where you want to save the forecast and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the location. The forecast must be exported to an Amazon S3 bucket.
If encryption is used, Destination must include an AWS Key Management Service (KMS) key. The IAM role must allow Amazon Forecast permission to access the key.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:A list of tags to apply to the what if forecast.
format(impl Into<String>)
/set_format(Option<String>)
:The format of the exported data, CSV or PARQUET.
- On success, responds with
CreateWhatIfForecastExportOutput
with field(s):what_if_forecast_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast.
- On failure, responds with
SdkError<CreateWhatIfForecastExportError>
pub fn delete_dataset(&self) -> DeleteDataset
Constructs a fluent builder for the DeleteDataset
operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset to delete.
- On success, responds with
DeleteDatasetOutput
- On failure, responds with
SdkError<DeleteDatasetError>
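A sketch of deleting a dataset and handling the SdkError by hand (the ARN is a placeholder):
match client
    .delete_dataset()
    .dataset_arn("arn:aws:forecast:us-east-1:111122223333:dataset/retail_demand")
    .send()
    .await
{
    Ok(_) => println!("delete request accepted"),
    Err(err) => eprintln!("DeleteDataset failed: {err}"),
}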
pub fn delete_dataset_group(&self) -> DeleteDatasetGroup
Constructs a fluent builder for the DeleteDatasetGroup
operation.
- The fluent builder is configurable:
dataset_group_arn(impl Into<String>)
/set_dataset_group_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset group to delete.
- On success, responds with
DeleteDatasetGroupOutput
- On failure, responds with
SdkError<DeleteDatasetGroupError>
pub fn delete_dataset_import_job(&self) -> DeleteDatasetImportJob
Constructs a fluent builder for the DeleteDatasetImportJob
operation.
- The fluent builder is configurable:
dataset_import_job_arn(impl Into<String>)
/set_dataset_import_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset import job to delete.
- On success, responds with
DeleteDatasetImportJobOutput
- On failure, responds with
SdkError<DeleteDatasetImportJobError>
pub fn delete_explainability(&self) -> DeleteExplainability
Constructs a fluent builder for the DeleteExplainability
operation.
- The fluent builder is configurable:
explainability_arn(impl Into<String>)
/set_explainability_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability resource to delete.
- On success, responds with
DeleteExplainabilityOutput
- On failure, responds with
SdkError<DeleteExplainabilityError>
pub fn delete_explainability_export(&self) -> DeleteExplainabilityExport
Constructs a fluent builder for the DeleteExplainabilityExport
operation.
- The fluent builder is configurable:
explainability_export_arn(impl Into<String>)
/set_explainability_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability export to delete.
- On success, responds with
DeleteExplainabilityExportOutput
- On failure, responds with
SdkError<DeleteExplainabilityExportError>
pub fn delete_forecast(&self) -> DeleteForecast
Constructs a fluent builder for the DeleteForecast
operation.
- The fluent builder is configurable:
forecast_arn(impl Into<String>)
/set_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the forecast to delete.
- On success, responds with
DeleteForecastOutput
- On failure, responds with
SdkError<DeleteForecastError>
pub fn delete_forecast_export_job(&self) -> DeleteForecastExportJob
Constructs a fluent builder for the DeleteForecastExportJob
operation.
- The fluent builder is configurable:
forecast_export_job_arn(impl Into<String>)
/set_forecast_export_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the forecast export job to delete.
- On success, responds with
DeleteForecastExportJobOutput
- On failure, responds with
SdkError<DeleteForecastExportJobError>
pub fn delete_monitor(&self) -> DeleteMonitor
Constructs a fluent builder for the DeleteMonitor
operation.
- The fluent builder is configurable:
monitor_arn(impl Into<String>)
/set_monitor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the monitor resource to delete.
- On success, responds with
DeleteMonitorOutput
- On failure, responds with
SdkError<DeleteMonitorError>
pub fn delete_predictor(&self) -> DeletePredictor
Constructs a fluent builder for the DeletePredictor
operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>)
/set_predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor to delete.
- On success, responds with
DeletePredictorOutput
- On failure, responds with
SdkError<DeletePredictorError>
pub fn delete_predictor_backtest_export_job(
&self
) -> DeletePredictorBacktestExportJob
Constructs a fluent builder for the DeletePredictorBacktestExportJob
operation.
- The fluent builder is configurable:
predictor_backtest_export_job_arn(impl Into<String>)
/set_predictor_backtest_export_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor backtest export job to delete.
- On success, responds with
DeletePredictorBacktestExportJobOutput
- On failure, responds with
SdkError<DeletePredictorBacktestExportJobError>
pub fn delete_resource_tree(&self) -> DeleteResourceTree
Constructs a fluent builder for the DeleteResourceTree
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) of the parent resource to delete. All child resources of the parent resource will also be deleted.
- On success, responds with
DeleteResourceTreeOutput
- On failure, responds with
SdkError<DeleteResourceTreeError>
pub fn delete_what_if_analysis(&self) -> DeleteWhatIfAnalysis
Constructs a fluent builder for the DeleteWhatIfAnalysis
operation.
- The fluent builder is configurable:
what_if_analysis_arn(impl Into<String>)
/set_what_if_analysis_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if analysis that you want to delete.
- On success, responds with
DeleteWhatIfAnalysisOutput
- On failure, responds with
SdkError<DeleteWhatIfAnalysisError>
pub fn delete_what_if_forecast(&self) -> DeleteWhatIfForecast
Constructs a fluent builder for the DeleteWhatIfForecast
operation.
- The fluent builder is configurable:
what_if_forecast_arn(impl Into<String>)
/set_what_if_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast that you want to delete.
- On success, responds with
DeleteWhatIfForecastOutput
- On failure, responds with
SdkError<DeleteWhatIfForecastError>
pub fn delete_what_if_forecast_export(&self) -> DeleteWhatIfForecastExport
Constructs a fluent builder for the DeleteWhatIfForecastExport
operation.
- The fluent builder is configurable:
what_if_forecast_export_arn(impl Into<String>)
/set_what_if_forecast_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast export that you want to delete.
- On success, responds with
DeleteWhatIfForecastExportOutput
- On failure, responds with
SdkError<DeleteWhatIfForecastExportError>
pub fn describe_auto_predictor(&self) -> DescribeAutoPredictor
Constructs a fluent builder for the DescribeAutoPredictor
operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>)
/set_predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor.
- On success, responds with
DescribeAutoPredictorOutput
with field(s):predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor
predictor_name(Option<String>)
:The name of the predictor.
forecast_horizon(Option<i32>)
:The number of time-steps that the model predicts. The forecast horizon is also called the prediction length.
forecast_types(Option<Vec<String>>)
:The forecast types used during predictor training. Default value is [“0.1”,“0.5”,“0.9”].
forecast_frequency(Option<String>)
:The frequency of predictions in a forecast.
Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, “Y” indicates every year and “5min” indicates every five minutes.
forecast_dimensions(Option<Vec<String>>)
:An array of dimension (field) names that specify the attributes used to group your time series.
dataset_import_job_arns(Option<Vec<String>>)
:An array of the ARNs of the dataset import jobs used to import training data for the predictor.
data_config(Option<DataConfig>)
:The data configuration for your dataset group and any additional datasets.
encryption_config(Option<EncryptionConfig>)
:An AWS Key Management Service (KMS) key and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key. You can specify this optional object in the CreateDataset and CreatePredictor requests.
reference_predictor_summary(Option<ReferencePredictorSummary>)
:The ARN and state of the reference predictor. This parameter is only valid for retrained or upgraded predictors.
estimated_time_remaining_in_minutes(Option<i64>)
:The estimated time remaining in minutes for the predictor training job to complete.
status(Option<String>)
:The status of the predictor. States include:
-
ACTIVE
-
CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
-
CREATE_STOPPING, CREATE_STOPPED
-
DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
-
message(Option<String>)
:In the event of an error, a message detailing the cause of the error.
creation_time(Option<DateTime>)
:The timestamp of the CreateAutoPredictor request.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING - The CreationTime.
-
CREATE_IN_PROGRESS - The current timestamp.
-
CREATE_STOPPING - The current timestamp.
-
CREATE_STOPPED - When the job stopped.
-
ACTIVE or CREATE_FAILED - When the job finished or failed.
-
optimization_metric(Option<OptimizationMetric>)
:The accuracy metric used to optimize the predictor.
explainability_info(Option<ExplainabilityInfo>)
:Provides the status and ARN of the Predictor Explainability.
monitor_info(Option<MonitorInfo>)
:An object with the Amazon Resource Name (ARN) and status of the monitor resource.
time_alignment_boundary(Option<TimeAlignmentBoundary>)
:The time boundary Forecast uses when aggregating data.
- On failure, responds with
SdkError<DescribeAutoPredictorError>
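A sketch of polling the predictor status until training finishes; the ARN is a placeholder and the sleep assumes a Tokio runtime:
use std::time::Duration;

loop {
    let resp = client
        .describe_auto_predictor()
        .predictor_arn("arn:aws:forecast:us-east-1:111122223333:predictor/my_auto_predictor")
        .send()
        .await?;
    // status() exposes the Option<String> status field as Option<&str>.
    match resp.status() {
        Some("ACTIVE") | Some("CREATE_FAILED") => break,
        _ => tokio::time::sleep(Duration::from_secs(60)).await,
    }
}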
pub fn describe_dataset(&self) -> DescribeDataset
Constructs a fluent builder for the DescribeDataset
operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset.
- On success, responds with
DescribeDatasetOutput
with field(s):dataset_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset.
dataset_name(Option<String>)
:The name of the dataset.
domain(Option<Domain>)
:The domain associated with the dataset.
dataset_type(Option<DatasetType>)
:The dataset type.
data_frequency(Option<String>)
:The frequency of data collection.
Valid intervals are Y (Year), M (Month), W (Week), D (Day), H (Hour), 30min (30 minutes), 15min (15 minutes), 10min (10 minutes), 5min (5 minutes), and 1min (1 minute). For example, “M” indicates every month and “30min” indicates every 30 minutes.
schema(Option<Schema>)
:An array of SchemaAttribute objects that specify the dataset fields. Each SchemaAttribute specifies the name and data type of a field.
encryption_config(Option<EncryptionConfig>)
:The AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
status(Option<String>)
:The status of the dataset. States include:
-
ACTIVE
-
CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
-
DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
-
UPDATE_PENDING, UPDATE_IN_PROGRESS, UPDATE_FAILED
The UPDATE states apply while data is imported to the dataset from a call to the CreateDatasetImportJob operation and reflect the status of the dataset import job. For example, when the import job status is CREATE_IN_PROGRESS, the status of the dataset is UPDATE_IN_PROGRESS.
The Status of the dataset must be ACTIVE before you can import training data.
creation_time(Option<DateTime>)
:When the dataset was created.
last_modification_time(Option<DateTime>)
:When you create a dataset, LastModificationTime is the same as CreationTime. While data is being imported to the dataset, LastModificationTime is the current time of the DescribeDataset call. After a CreateDatasetImportJob operation has finished, LastModificationTime is when the import job completed or failed.
- On failure, responds with
SdkError<DescribeDatasetError>
pub fn describe_dataset_group(&self) -> DescribeDatasetGroup
Constructs a fluent builder for the DescribeDatasetGroup
operation.
- The fluent builder is configurable:
dataset_group_arn(impl Into<String>)
/set_dataset_group_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset group.
- On success, responds with
DescribeDatasetGroupOutput
with field(s):dataset_group_name(Option<String>)
:The name of the dataset group.
dataset_group_arn(Option<String>)
:The ARN of the dataset group.
dataset_arns(Option<Vec<String>>)
:An array of Amazon Resource Names (ARNs) of the datasets contained in the dataset group.
domain(Option<Domain>)
:The domain associated with the dataset group.
status(Option<String>)
:The status of the dataset group. States include:
-
ACTIVE
-
CREATE_PENDING, CREATE_IN_PROGRESS, CREATE_FAILED
-
DELETE_PENDING, DELETE_IN_PROGRESS, DELETE_FAILED
-
UPDATE_PENDING, UPDATE_IN_PROGRESS, UPDATE_FAILED
The UPDATE states apply when you call the UpdateDatasetGroup operation.
The Status of the dataset group must be ACTIVE before you can use the dataset group to create a predictor.
creation_time(Option<DateTime>)
:When the dataset group was created.
last_modification_time(Option<DateTime>)
:When the dataset group was created or last updated from a call to the UpdateDatasetGroup operation. While the dataset group is being updated,
LastModificationTime
is the current time of theDescribeDatasetGroup
call.
- On failure, responds with
SdkError<DescribeDatasetGroupError>
sourcepub fn describe_dataset_import_job(&self) -> DescribeDatasetImportJob
pub fn describe_dataset_import_job(&self) -> DescribeDatasetImportJob
Constructs a fluent builder for the DescribeDatasetImportJob
operation.
- The fluent builder is configurable:
dataset_import_job_arn(impl Into<String>)
/set_dataset_import_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset import job.
- On success, responds with
DescribeDatasetImportJobOutput
with field(s):dataset_import_job_name(Option<String>)
:The name of the dataset import job.
dataset_import_job_arn(Option<String>)
:The ARN of the dataset import job.
dataset_arn(Option<String>)
:The Amazon Resource Name (ARN) of the dataset that the training data was imported to.
timestamp_format(Option<String>)
:The format of timestamps in the dataset. The format that you specify depends on the
DataFrequency
specified when the dataset was created. The following formats are supported-
“yyyy-MM-dd”
For the following data frequencies: Y, M, W, and D
-
“yyyy-MM-dd HH:mm:ss”
For the following data frequencies: H, 30min, 15min, and 1min; and optionally, for: Y, M, W, and D
-
time_zone(Option<String>)
:The single time zone applied to every item in the dataset
use_geolocation_for_time_zone(bool)
:Whether
TimeZone
is automatically derived from the geolocation attribute.geolocation_format(Option<String>)
:The format of the geolocation attribute. Valid Values:
“LAT_LONG”
and“CC_POSTALCODE”
.data_source(Option<DataSource>)
:The location of the training data to import and an AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the data.
If encryption is used,
DataSource
includes an AWS Key Management Service (KMS) key.estimated_time_remaining_in_minutes(Option<i64>)
:The estimated time remaining in minutes for the dataset import job to complete.
field_statistics(Option<HashMap<String, Statistics>>)
:Statistical information about each field in the input data.
data_size(Option<f64>)
:The size of the dataset in gigabytes (GB) after the import job has finished.
status(Option<String>)
:The status of the dataset import job. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
message(Option<String>)
:If an error occurred, an informational message about the error.
creation_time(Option<DateTime>)
:When the dataset import job was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
format(Option<String>)
:The format of the imported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribeDatasetImportJobError>
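Because the dataset's UPDATE states track the import job while data loads, a common pattern is to poll the import job until it settles. A rough sketch, assuming a Tokio runtime and a hypothetical ARN (the polling interval is arbitrary):
use std::time::Duration;

// Poll the import job until it reaches a terminal state.
let arn = "arn:aws:forecast:us-west-2:123456789012:dataset-import-job/my_dataset/my_import";
loop {
    let job = client
        .describe_dataset_import_job()
        .dataset_import_job_arn(arn)
        .send()
        .await?;
    match job.status() {
        Some("ACTIVE") | Some("CREATE_FAILED") | Some("CREATE_STOPPED") => {
            println!("finished: {:?}, message: {:?}", job.status(), job.message());
            break;
        }
        other => println!("still importing: {:?}", other),
    }
    tokio::time::sleep(Duration::from_secs(60)).await;
}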
sourcepub fn describe_explainability(&self) -> DescribeExplainability
pub fn describe_explainability(&self) -> DescribeExplainability
Constructs a fluent builder for the DescribeExplainability
operation.
- The fluent builder is configurable:
explainability_arn(impl Into<String>)
/set_explainability_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability to describe.
- On success, responds with
DescribeExplainabilityOutput
with field(s):explainability_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability.
explainability_name(Option<String>)
:The name of the Explainability.
resource_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Predictor or Forecast used to create the Explainability resource.
explainability_config(Option<ExplainabilityConfig>)
:The configuration settings that define the granularity of time series and time points for the Explainability.
enable_visualization(Option<bool>)
:Whether the visualization was enabled for the Explainability resource.
data_source(Option<DataSource>)
:The source of your data, an AWS Identity and Access Management (IAM) role that allows Amazon Forecast to access the data and, optionally, an AWS Key Management Service (KMS) key.
schema(Option<Schema>)
:Defines the fields of a dataset.
start_date_time(Option<String>)
:If
TimePointGranularity
is set toSPECIFIC
, the first time point in the Explainability.end_date_time(Option<String>)
:If
TimePointGranularity
is set toSPECIFIC
, the last time point in the Explainability.estimated_time_remaining_in_minutes(Option<i64>)
:The estimated time remaining in minutes for the
CreateExplainability
job to complete.message(Option<String>)
:If an error occurred, a message about the error.
status(Option<String>)
:The status of the Explainability resource. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
-
creation_time(Option<DateTime>)
:When the Explainability resource was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
- On failure, responds with
SdkError<DescribeExplainabilityError>
sourcepub fn describe_explainability_export(&self) -> DescribeExplainabilityExport
pub fn describe_explainability_export(&self) -> DescribeExplainabilityExport
Constructs a fluent builder for the DescribeExplainabilityExport
operation.
- The fluent builder is configurable:
explainability_export_arn(impl Into<String>)
/set_explainability_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability export.
- On success, responds with
DescribeExplainabilityExportOutput
with field(s):explainability_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability export.
explainability_export_name(Option<String>)
:The name of the Explainability export.
explainability_arn(Option<String>)
:The Amazon Resource Name (ARN) of the Explainability that was exported.
destination(Option<DataDestination>)
:The destination for an export job. Provide an S3 path, an AWS Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and an AWS Key Management Service (KMS) key (optional).
message(Option<String>)
:Information about any errors that occurred during the export.
status(Option<String>)
:The status of the Explainability export. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
-
creation_time(Option<DateTime>)
:When the Explainability export was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
format(Option<String>)
:The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribeExplainabilityExportError>
sourcepub fn describe_forecast(&self) -> DescribeForecast
pub fn describe_forecast(&self) -> DescribeForecast
Constructs a fluent builder for the DescribeForecast
operation.
- The fluent builder is configurable:
forecast_arn(impl Into<String>)
/set_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the forecast.
- On success, responds with
DescribeForecastOutput
with field(s):forecast_arn(Option<String>)
:The forecast ARN as specified in the request.
forecast_name(Option<String>)
:The name of the forecast.
forecast_types(Option<Vec<String>>)
:The quantiles at which probabilistic forecasts were generated.
predictor_arn(Option<String>)
:The ARN of the predictor used to generate the forecast.
dataset_group_arn(Option<String>)
:The ARN of the dataset group that provided the data used to train the predictor.
estimated_time_remaining_in_minutes(Option<i64>)
:The estimated time remaining in minutes for the forecast job to complete.
status(Option<String>)
:The status of the forecast. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
The
Status
of the forecast must beACTIVE
before you can query or export the forecast.-
message(Option<String>)
:If an error occurred, an informational message about the error.
creation_time(Option<DateTime>)
:When the forecast creation task was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
time_series_selector(Option<TimeSeriesSelector>)
:The time series to include in the forecast.
- On failure, responds with
SdkError<DescribeForecastError>
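Since the forecast must be ACTIVE before it can be queried or exported, a short status check is typical. A minimal sketch with a hypothetical ARN:
// Check whether a forecast is ready to query or export.
let fc = client
    .describe_forecast()
    .forecast_arn("arn:aws:forecast:us-west-2:123456789012:forecast/my_forecast")
    .send()
    .await?;
if fc.status() == Some("ACTIVE") {
    println!("forecast {:?} is ready", fc.forecast_name());
} else {
    println!("not ready yet: {:?} ({:?})", fc.status(), fc.message());
}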
sourcepub fn describe_forecast_export_job(&self) -> DescribeForecastExportJob
pub fn describe_forecast_export_job(&self) -> DescribeForecastExportJob
Constructs a fluent builder for the DescribeForecastExportJob
operation.
- The fluent builder is configurable:
forecast_export_job_arn(impl Into<String>)
/set_forecast_export_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the forecast export job.
- On success, responds with
DescribeForecastExportJobOutput
with field(s):forecast_export_job_arn(Option<String>)
:The ARN of the forecast export job.
forecast_export_job_name(Option<String>)
:The name of the forecast export job.
forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the exported forecast.
destination(Option<DataDestination>)
:The path to the Amazon Simple Storage Service (Amazon S3) bucket where the forecast is exported.
message(Option<String>)
:If an error occurred, an informational message about the error.
status(Option<String>)
:The status of the forecast export job. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
The
Status
of the forecast export job must beACTIVE
before you can access the forecast in your S3 bucket.-
creation_time(Option<DateTime>)
:When the forecast export job was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
format(Option<String>)
:The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribeForecastExportJobError>
sourcepub fn describe_monitor(&self) -> DescribeMonitor
pub fn describe_monitor(&self) -> DescribeMonitor
Constructs a fluent builder for the DescribeMonitor
operation.
- The fluent builder is configurable:
monitor_arn(impl Into<String>)
/set_monitor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the monitor resource to describe.
- On success, responds with
DescribeMonitorOutput
with field(s):monitor_name(Option<String>)
:The name of the monitor.
monitor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the monitor resource described.
resource_arn(Option<String>)
:The Amazon Resource Name (ARN) of the auto predictor being monitored.
status(Option<String>)
:The status of the monitor resource.
last_evaluation_time(Option<DateTime>)
:The timestamp of the latest evaluation completed by the monitor.
last_evaluation_state(Option<String>)
:The state of the monitor’s latest evaluation.
baseline(Option<Baseline>)
:Metrics you can use as a baseline for comparison purposes. Use these values when you interpret monitoring results for an auto predictor.
message(Option<String>)
:An error message, if any, for the monitor.
creation_time(Option<DateTime>)
:The timestamp for when the monitor resource was created.
last_modification_time(Option<DateTime>)
:The timestamp of the latest modification to the monitor.
estimated_evaluation_time_remaining_in_minutes(Option<i64>)
:The estimated number of minutes remaining before the monitor resource finishes its current evaluation.
- On failure, responds with
SdkError<DescribeMonitorError>
sourcepub fn describe_predictor(&self) -> DescribePredictor
pub fn describe_predictor(&self) -> DescribePredictor
Constructs a fluent builder for the DescribePredictor
operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>)
/set_predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor that you want information about.
- On success, responds with
DescribePredictorOutput
with field(s):predictor_arn(Option<String>)
:The ARN of the predictor.
predictor_name(Option<String>)
:The name of the predictor.
algorithm_arn(Option<String>)
:The Amazon Resource Name (ARN) of the algorithm used for model training.
auto_ml_algorithm_arns(Option<Vec<String>>)
:When
PerformAutoML
is specified, the ARN of the chosen algorithm.forecast_horizon(Option<i32>)
:The number of time-steps of the forecast. The forecast horizon is also called the prediction length.
forecast_types(Option<Vec<String>>)
:The forecast types used during predictor training. Default value is
[“0.1”,“0.5”,“0.9”]
perform_auto_ml(Option<bool>)
:Whether the predictor is set to perform AutoML.
auto_ml_override_strategy(Option<AutoMlOverrideStrategy>)
:The
LatencyOptimized
AutoML override strategy is only available in private beta. Contact AWS Support or your account manager to learn more about access privileges.The AutoML strategy used to train the predictor. Unless
LatencyOptimized
is specified, the AutoML strategy optimizes predictor accuracy.This parameter is only valid for predictors trained using AutoML.
perform_hpo(Option<bool>)
:Whether the predictor is set to perform hyperparameter optimization (HPO).
training_parameters(Option<HashMap<String, String>>)
:The default training parameters or overrides selected during model training. When running AutoML or choosing HPO with CNN-QR or DeepAR+, the optimized values for the chosen hyperparameters are returned. For more information, see
aws-forecast-choosing-recipes
.evaluation_parameters(Option<EvaluationParameters>)
:Used to override the default evaluation parameters of the specified algorithm. Amazon Forecast evaluates a predictor by splitting a dataset into training data and testing data. The evaluation parameters define how to perform the split and the number of iterations.
hpo_config(Option<HyperParameterTuningJobConfig>)
:The hyperparameter override values for the algorithm.
input_data_config(Option<InputDataConfig>)
:Describes the dataset group that contains the data to use to train the predictor.
featurization_config(Option<FeaturizationConfig>)
:The featurization configuration.
encryption_config(Option<EncryptionConfig>)
:An AWS Key Management Service (KMS) key and the AWS Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
predictor_execution_details(Option<PredictorExecutionDetails>)
:Details on the status and results of the backtests performed to evaluate the accuracy of the predictor. You specify the number of backtests to perform when you call the operation.
estimated_time_remaining_in_minutes(Option<i64>)
:The estimated time remaining in minutes for the predictor training job to complete.
is_auto_predictor(Option<bool>)
:Whether the predictor was created with
CreateAutoPredictor
.dataset_import_job_arns(Option<Vec<String>>)
:An array of the ARNs of the dataset import jobs used to import training data for the predictor.
status(Option<String>)
:The status of the predictor. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
The
Status
of the predictor must beACTIVE
before you can use the predictor to create a forecast.-
message(Option<String>)
:If an error occurred, an informational message about the error.
creation_time(Option<DateTime>)
:When the model training task was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
optimization_metric(Option<OptimizationMetric>)
:The accuracy metric used to optimize the predictor.
- On failure, responds with
SdkError<DescribePredictorError>
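A minimal sketch of inspecting a predictor's training progress (hypothetical ARN):
// Print the training status and the estimated time remaining.
let pred = client
    .describe_predictor()
    .predictor_arn("arn:aws:forecast:us-west-2:123456789012:predictor/my_predictor")
    .send()
    .await?;
println!(
    "status: {:?}, auto predictor: {:?}, minutes remaining: {:?}",
    pred.status(),
    pred.is_auto_predictor(),
    pred.estimated_time_remaining_in_minutes(),
);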
sourcepub fn describe_predictor_backtest_export_job(
&self
) -> DescribePredictorBacktestExportJob
pub fn describe_predictor_backtest_export_job(
&self
) -> DescribePredictorBacktestExportJob
Constructs a fluent builder for the DescribePredictorBacktestExportJob
operation.
- The fluent builder is configurable:
predictor_backtest_export_job_arn(impl Into<String>)
/set_predictor_backtest_export_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor backtest export job.
- On success, responds with
DescribePredictorBacktestExportJobOutput
with field(s):predictor_backtest_export_job_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor backtest export job.
predictor_backtest_export_job_name(Option<String>)
:The name of the predictor backtest export job.
predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor.
destination(Option<DataDestination>)
:The destination for an export job. Provide an S3 path, an AWS Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and an AWS Key Management Service (KMS) key (optional).
message(Option<String>)
:Information about any errors that may have occurred during the backtest export.
status(Option<String>)
:The status of the predictor backtest export job. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
-
creation_time(Option<DateTime>)
:When the predictor backtest export job was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
format(Option<String>)
:The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribePredictorBacktestExportJobError>
sourcepub fn describe_what_if_analysis(&self) -> DescribeWhatIfAnalysis
pub fn describe_what_if_analysis(&self) -> DescribeWhatIfAnalysis
Constructs a fluent builder for the DescribeWhatIfAnalysis
operation.
- The fluent builder is configurable:
what_if_analysis_arn(impl Into<String>)
/set_what_if_analysis_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if analysis that you are interested in.
- On success, responds with
DescribeWhatIfAnalysisOutput
with field(s):what_if_analysis_name(Option<String>)
:The name of the what-if analysis.
what_if_analysis_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if analysis.
forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast.
estimated_time_remaining_in_minutes(Option<i64>)
:The approximate time remaining to complete the what-if analysis, in minutes.
status(Option<String>)
:The status of the what-if analysis. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
The
Status
of the what-if analysis must beACTIVE
before you can access the analysis.-
message(Option<String>)
:If an error occurred, an informational message about the error.
creation_time(Option<DateTime>)
:When the what-if analysis was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
time_series_selector(Option<TimeSeriesSelector>)
:Defines the set of time series that are used to create the forecasts in a
TimeSeriesIdentifiers
object.The
TimeSeriesIdentifiers
object needs the following information:-
DataSource
-
Format
-
Schema
-
- On failure, responds with
SdkError<DescribeWhatIfAnalysisError>
sourcepub fn describe_what_if_forecast(&self) -> DescribeWhatIfForecast
pub fn describe_what_if_forecast(&self) -> DescribeWhatIfForecast
Constructs a fluent builder for the DescribeWhatIfForecast
operation.
- The fluent builder is configurable:
what_if_forecast_arn(impl Into<String>)
/set_what_if_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast that you are interested in.
- On success, responds with
DescribeWhatIfForecastOutput
with field(s):what_if_forecast_name(Option<String>)
:The name of the what-if forecast.
what_if_forecast_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast.
what_if_analysis_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if analysis that contains this forecast.
estimated_time_remaining_in_minutes(Option<i64>)
:The approximate time remaining to complete the what-if forecast, in minutes.
status(Option<String>)
:The status of the what-if forecast. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
The
Status
of the what-if forecast must beACTIVE
before you can access the forecast.-
message(Option<String>)
:If an error occurred, an informational message about the error.
creation_time(Option<DateTime>)
:When the what-if forecast was created.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
time_series_transformations(Option<Vec<TimeSeriesTransformation>>)
:An array of
Action
andTimeSeriesConditions
elements that describe what transformations were applied to which time series.time_series_replacements_data_source(Option<TimeSeriesReplacementsDataSource>)
:An array of
S3Config
,Schema
, andFormat
elements that describe the replacement time series.forecast_types(Option<Vec<String>>)
:The quantiles at which probabilistic forecasts are generated. You can specify up to 5 quantiles per what-if forecast in the
CreateWhatIfForecast
operation. If you didn’t specify quantiles, the default values are[“0.1”, “0.5”, “0.9”]
.
- On failure, responds with
SdkError<DescribeWhatIfForecastError>
sourcepub fn describe_what_if_forecast_export(&self) -> DescribeWhatIfForecastExport
pub fn describe_what_if_forecast_export(&self) -> DescribeWhatIfForecastExport
Constructs a fluent builder for the DescribeWhatIfForecastExport
operation.
- The fluent builder is configurable:
what_if_forecast_export_arn(impl Into<String>)
/set_what_if_forecast_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast export that you are interested in.
- On success, responds with
DescribeWhatIfForecastExportOutput
with field(s):what_if_forecast_export_arn(Option<String>)
:The Amazon Resource Name (ARN) of the what-if forecast export.
what_if_forecast_export_name(Option<String>)
:The name of the what-if forecast export.
what_if_forecast_arns(Option<Vec<String>>)
:An array of Amazon Resource Names (ARNs) that represent all of the what-if forecasts exported in this resource.
destination(Option<DataDestination>)
:The destination for an export job. Provide an S3 path, an AWS Identity and Access Management (IAM) role that allows Amazon Forecast to access the location, and an AWS Key Management Service (KMS) key (optional).
message(Option<String>)
:If an error occurred, an informational message about the error.
status(Option<String>)
:The status of the what-if forecast export. States include:
-
ACTIVE
-
CREATE_PENDING
,CREATE_IN_PROGRESS
,CREATE_FAILED
-
CREATE_STOPPING
,CREATE_STOPPED
-
DELETE_PENDING
,DELETE_IN_PROGRESS
,DELETE_FAILED
The
Status
of the what-if forecast export must beACTIVE
before you can access the forecast export.-
creation_time(Option<DateTime>)
:When the what-if forecast export was created.
estimated_time_remaining_in_minutes(Option<i64>)
:The approximate time remaining to complete the what-if forecast export, in minutes.
last_modification_time(Option<DateTime>)
:The last time the resource was modified. The timestamp depends on the status of the job:
-
CREATE_PENDING
- TheCreationTime
. -
CREATE_IN_PROGRESS
- The current timestamp. -
CREATE_STOPPING
- The current timestamp. -
CREATE_STOPPED
- When the job stopped. -
ACTIVE
orCREATE_FAILED
- When the job finished or failed.
-
format(Option<String>)
:The format of the exported data, CSV or PARQUET.
- On failure, responds with
SdkError<DescribeWhatIfForecastExportError>
sourcepub fn get_accuracy_metrics(&self) -> GetAccuracyMetrics
pub fn get_accuracy_metrics(&self) -> GetAccuracyMetrics
Constructs a fluent builder for the GetAccuracyMetrics
operation.
- The fluent builder is configurable:
predictor_arn(impl Into<String>)
/set_predictor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the predictor to get metrics for.
- On success, responds with
GetAccuracyMetricsOutput
with field(s):predictor_evaluation_results(Option<Vec<EvaluationResult>>)
:An array of results from evaluating the predictor.
is_auto_predictor(Option<bool>)
:Whether the predictor was created with
CreateAutoPredictor
.auto_ml_override_strategy(Option<AutoMlOverrideStrategy>)
:The
LatencyOptimized
AutoML override strategy is only available in private beta. Contact AWS Support or your account manager to learn more about access privileges.The AutoML strategy used to train the predictor. Unless
LatencyOptimized
is specified, the AutoML strategy optimizes predictor accuracy.This parameter is only valid for predictors trained using AutoML.
optimization_metric(Option<OptimizationMetric>)
:The accuracy metric used to optimize the predictor.
- On failure, responds with
SdkError<GetAccuracyMetricsError>
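A minimal sketch of fetching backtest metrics for a predictor (hypothetical ARN):
// Iterate over the evaluation results returned for the predictor.
let metrics = client
    .get_accuracy_metrics()
    .predictor_arn("arn:aws:forecast:us-west-2:123456789012:predictor/my_predictor")
    .send()
    .await?;
for result in metrics.predictor_evaluation_results().unwrap_or_default() {
    println!("algorithm: {:?}", result.algorithm_arn());
}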
sourcepub fn list_dataset_groups(&self) -> ListDatasetGroups
pub fn list_dataset_groups(&self) -> ListDatasetGroups
Constructs a fluent builder for the ListDatasetGroups
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
- On success, responds with
ListDatasetGroupsOutput
with field(s):dataset_groups(Option<Vec<DatasetGroupSummary>>)
:An array of objects that summarize each dataset group’s properties.
next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListDatasetGroupsError>
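into_paginator() automates the NextToken handling; the sketch below shows a rough manual equivalent of what the paginator does, assuming an async context where errors propagate with ?:
// Follow NextToken until all dataset groups have been listed.
let mut next_token: Option<String> = None;
loop {
    let mut req = client.list_dataset_groups().max_results(25);
    if let Some(token) = next_token.take() {
        req = req.next_token(token);
    }
    let page = req.send().await?;
    for group in page.dataset_groups().unwrap_or_default() {
        println!("{:?}", group.dataset_group_name());
    }
    match page.next_token() {
        Some(token) => next_token = Some(token.to_string()),
        None => break,
    }
}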
sourcepub fn list_dataset_import_jobs(&self) -> ListDatasetImportJobs
pub fn list_dataset_import_jobs(&self) -> ListDatasetImportJobs
Constructs a fluent builder for the ListDatasetImportJobs
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, you provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the datasets that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the datasets that match the statement, specifyIS
. To exclude matching datasets, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areDatasetArn
andStatus
. -
Value
- The value to match.
For example, to list all dataset import jobs whose status is ACTIVE, you specify the following filter:
“Filters”: [ { “Condition”: “IS”, “Key”: “Status”, “Value”: “ACTIVE” } ]
-
- On success, responds with
ListDatasetImportJobsOutput
with field(s):dataset_import_jobs(Option<Vec<DatasetImportJobSummary>>)
:An array of objects that summarize each dataset import job’s properties.
next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListDatasetImportJobsError>
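A sketch of the ACTIVE-status filter described above, assuming the Filter and FilterConditionString types are exported under aws_sdk_forecast::model (module paths and builder signatures can differ between SDK versions):
use aws_sdk_forecast::model::{Filter, FilterConditionString};

// Build the "Status IS ACTIVE" filter and list matching import jobs.
let active_only = Filter::builder()
    .condition(FilterConditionString::Is)
    .key("Status")
    .value("ACTIVE")
    .build();
let jobs = client
    .list_dataset_import_jobs()
    .filters(active_only) // appends one Filter to the request
    .send()
    .await?;
for job in jobs.dataset_import_jobs().unwrap_or_default() {
    println!("{:?}", job.dataset_import_job_name());
}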
sourcepub fn list_datasets(&self) -> ListDatasets
pub fn list_datasets(&self) -> ListDatasets
Constructs a fluent builder for the ListDatasets
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
- On success, responds with
ListDatasetsOutput
with field(s):datasets(Option<Vec<DatasetSummary>>)
:An array of objects that summarize each dataset’s properties.
next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListDatasetsError>
sourcepub fn list_explainabilities(&self) -> ListExplainabilities
pub fn list_explainabilities(&self) -> ListExplainabilities
Constructs a fluent builder for the ListExplainabilities
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)
/set_max_results(Option<i32>)
:The number of items returned in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the resources that match the statement from the list. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areResourceArn
andStatus
. -
Value
- The value to match.
-
- On success, responds with
ListExplainabilitiesOutput
with field(s):explainabilities(Option<Vec<ExplainabilitySummary>>)
:An array of objects that summarize the properties of each Explainability resource.
next_token(Option<String>)
:Returns this token if the response is truncated. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListExplainabilitiesError>
sourcepub fn list_explainability_exports(&self) -> ListExplainabilityExports
pub fn list_explainability_exports(&self) -> ListExplainabilityExports
Constructs a fluent builder for the ListExplainabilityExports
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude resources that match the statement from the list. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areResourceArn
andStatus
. -
Value
- The value to match.
-
- On success, responds with
ListExplainabilityExportsOutput
with field(s):explainability_exports(Option<Vec<ExplainabilityExportSummary>>)
:An array of objects that summarize the properties of each Explainability export.
next_token(Option<String>)
:Returns this token if the response is truncated. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListExplainabilityExportsError>
sourcepub fn list_forecast_export_jobs(&self) -> ListForecastExportJobs
pub fn list_forecast_export_jobs(&self) -> ListForecastExportJobs
Constructs a fluent builder for the ListForecastExportJobs
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, you provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the forecast export jobs that match the statement, specifyIS
. To exclude matching forecast export jobs, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areForecastArn
andStatus
. -
Value
- The value to match.
For example, to list all jobs that export a forecast named electricityforecast, specify the following filter:
“Filters”: [ { “Condition”: “IS”, “Key”: “ForecastArn”, “Value”: “arn:aws:forecast:us-west-2:<acct-id>:forecast/electricityforecast” } ]
-
- On success, responds with
ListForecastExportJobsOutput
with field(s):forecast_export_jobs(Option<Vec<ForecastExportJobSummary>>)
:An array of objects that summarize each export job’s properties.
next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListForecastExportJobsError>
sourcepub fn list_forecasts(&self) -> ListForecasts
pub fn list_forecasts(&self) -> ListForecasts
Constructs a fluent builder for the ListForecasts
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, you provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the forecasts that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the forecasts that match the statement, specifyIS
. To exclude matching forecasts, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areDatasetGroupArn
,PredictorArn
, andStatus
. -
Value
- The value to match.
For example, to list all forecasts whose status is not ACTIVE, you would specify:
“Filters”: [ { “Condition”: “IS_NOT”, “Key”: “Status”, “Value”: “ACTIVE” } ]
-
- On success, responds with
ListForecastsOutput
with field(s):forecasts(Option<Vec<ForecastSummary>>)
:An array of objects that summarize each forecast’s properties.
next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListForecastsError>
sourcepub fn list_monitor_evaluations(&self) -> ListMonitorEvaluations
pub fn list_monitor_evaluations(&self) -> ListMonitorEvaluations
Constructs a fluent builder for the ListMonitorEvaluations
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The maximum number of monitoring results to return.
monitor_arn(impl Into<String>)
/set_monitor_arn(Option<String>)
:The Amazon Resource Name (ARN) of the monitor resource to get results from.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the resources that match the statement from the list. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. -
Key
- The name of the parameter to filter on. The only valid value isEvaluationState
. -
Value
- The value to match. Valid values are onlySUCCESS
orFAILURE
.
For example, to list only successful monitor evaluations, you would specify:
“Filters”: [ { “Condition”: “IS”, “Key”: “EvaluationState”, “Value”: “SUCCESS” } ]
-
- On success, responds with
ListMonitorEvaluationsOutput
with field(s):next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
predictor_monitor_evaluations(Option<Vec<PredictorMonitorEvaluation>>)
:The monitoring results and predictor events collected by the monitor resource during different windows of time.
For more information about viewing and retrieving monitoring results, see Viewing Monitoring Results.
- On failure, responds with
SdkError<ListMonitorEvaluationsError>
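A sketch of listing only successful evaluations for one monitor, again assuming the model types under aws_sdk_forecast::model and a hypothetical ARN:
use aws_sdk_forecast::model::{Filter, FilterConditionString};

// Restrict results to evaluations whose EvaluationState is SUCCESS.
let success_only = Filter::builder()
    .condition(FilterConditionString::Is)
    .key("EvaluationState")
    .value("SUCCESS")
    .build();
let evals = client
    .list_monitor_evaluations()
    .monitor_arn("arn:aws:forecast:us-west-2:123456789012:monitor/my_monitor")
    .filters(success_only)
    .send()
    .await?;
for evaluation in evals.predictor_monitor_evaluations().unwrap_or_default() {
    println!("{:?} at {:?}", evaluation.evaluation_state(), evaluation.evaluation_time());
}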
sourcepub fn list_monitors(&self) -> ListMonitors
pub fn list_monitors(&self) -> ListMonitors
Constructs a fluent builder for the ListMonitors
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The maximum number of monitors to include in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the resources that match the statement from the list. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. -
Key
- The name of the parameter to filter on. The only valid value isStatus
. -
Value
- The value to match.
For example, to list all monitors whose status is ACTIVE, you would specify:
“Filters”: [ { “Condition”: “IS”, “Key”: “Status”, “Value”: “ACTIVE” } ]
-
- On success, responds with
ListMonitorsOutput
with field(s):monitors(Option<Vec<MonitorSummary>>)
:An array of objects that summarize each monitor’s properties.
next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListMonitorsError>
sourcepub fn list_predictor_backtest_export_jobs(
&self
) -> ListPredictorBacktestExportJobs
pub fn list_predictor_backtest_export_jobs(
&self
) -> ListPredictorBacktestExportJobs
Constructs a fluent builder for the ListPredictorBacktestExportJobs
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a NextToken. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the predictor backtest export jobs that match the statement from the list. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the predictor backtest export jobs that match the statement, specifyIS
. To exclude matching predictor backtest export jobs, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values arePredictorArn
andStatus
. -
Value
- The value to match.
-
- On success, responds with
ListPredictorBacktestExportJobsOutput
with field(s):predictor_backtest_export_jobs(Option<Vec<PredictorBacktestExportJobSummary>>)
:An array of objects that summarize the properties of each predictor backtest export job.
next_token(Option<String>)
:Returns this token if the response is truncated. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListPredictorBacktestExportJobsError>
sourcepub fn list_predictors(&self) -> ListPredictors
pub fn list_predictors(&self) -> ListPredictors
Constructs a fluent builder for the ListPredictors
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, you provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the predictors that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the predictors that match the statement, specifyIS
. To exclude matching predictors, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areDatasetGroupArn
andStatus
. -
Value
- The value to match.
For example, to list all predictors whose status is ACTIVE, you would specify:
“Filters”: [ { “Condition”: “IS”, “Key”: “Status”, “Value”: “ACTIVE” } ]
-
- On success, responds with
ListPredictorsOutput
with field(s):predictors(Option<Vec<PredictorSummary>>)
:An array of objects that summarize each predictor’s properties.
next_token(Option<String>)
:If the response is truncated, Amazon Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListPredictorsError>
sourcepub fn list_tags_for_resource(&self) -> ListTagsForResource
pub fn list_tags_for_resource(&self) -> ListTagsForResource
Constructs a fluent builder for the ListTagsForResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) that identifies the resource for which to list the tags.
- On success, responds with
ListTagsForResourceOutput
with field(s):tags(Option<Vec<Tag>>)
:The tags for the resource.
- On failure, responds with
SdkError<ListTagsForResourceError>
sourcepub fn list_what_if_analyses(&self) -> ListWhatIfAnalyses
pub fn list_what_if_analyses(&self) -> ListWhatIfAnalyses
Constructs a fluent builder for the ListWhatIfAnalyses
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, you provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the what-if analysis jobs that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the what-if analysis jobs that match the statement, specifyIS
. To exclude matching what-if analysis jobs, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areWhatIfAnalysisArn
andStatus
. -
Value
- The value to match.
For example, to list all jobs that export a forecast named electricityWhatIf, specify the following filter:
“Filters”: [ { “Condition”: “IS”, “Key”: “WhatIfAnalysisArn”, “Value”: “arn:aws:forecast:us-west-2:<acct-id>:forecast/electricityWhatIf” } ]
-
- On success, responds with
ListWhatIfAnalysesOutput
with field(s):what_if_analyses(Option<Vec<WhatIfAnalysisSummary>>)
:An array of
WhatIfAnalysisSummary
objects that describe the matched analyses.next_token(Option<String>)
:If the response is truncated, Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListWhatIfAnalysesError>
sourcepub fn list_what_if_forecast_exports(&self) -> ListWhatIfForecastExports
pub fn list_what_if_forecast_exports(&self) -> ListWhatIfForecastExports
Constructs a fluent builder for the ListWhatIfForecastExports
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, you provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the what-if forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the forecast export jobs that match the statement, specifyIS
. To exclude matching forecast export jobs, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areWhatIfForecastExportArn
andStatus
. -
Value
- The value to match.
For example, to list all jobs that export a forecast named electricityWIFExport, specify the following filter:
“Filters”: [ { “Condition”: “IS”, “Key”: “WhatIfForecastExportArn”, “Value”: “arn:aws:forecast:us-west-2:<acct-id>:forecast/electricityWIFExport” } ]
-
- On success, responds with
ListWhatIfForecastExportsOutput
with field(s):what_if_forecast_exports(Option<Vec<WhatIfForecastExportSummary>>)
:An array of
WhatIfForecastExports
objects that describe the matched forecast exports.next_token(Option<String>)
:If the response is truncated, Forecast returns this token. To retrieve the next set of results, use the token in the next request.
- On failure, responds with
SdkError<ListWhatIfForecastExportsError>
sourcepub fn list_what_if_forecasts(&self) -> ListWhatIfForecasts
pub fn list_what_if_forecasts(&self) -> ListWhatIfForecasts
Constructs a fluent builder for the ListWhatIfForecasts
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.max_results(i32)
/set_max_results(Option<i32>)
:The number of items to return in the response.
filters(Vec<Filter>)
/set_filters(Option<Vec<Filter>>)
:An array of filters. For each filter, you provide a condition and a match statement. The condition is either
IS
orIS_NOT
, which specifies whether to include or exclude the what-if forecast export jobs that match the statement from the list, respectively. The match statement consists of a key and a value.Filter properties
-
Condition
- The condition to apply. Valid values areIS
andIS_NOT
. To include the forecast export jobs that match the statement, specifyIS
. To exclude matching forecast export jobs, specifyIS_NOT
. -
Key
- The name of the parameter to filter on. Valid values areWhatIfForecastArn
andStatus
. -
Value
- The value to match.
For example, to list all jobs that export a forecast named electricityWhatIfForecast, specify the following filter:
“Filters”: [ { “Condition”: “IS”, “Key”: “WhatIfForecastArn”, “Value”: “arn:aws:forecast:us-west-2:<acct-id>:forecast/electricityWhatIfForecast” } ]
-
- On success, responds with
ListWhatIfForecastsOutput
with field(s):what_if_forecasts(Option<Vec<WhatIfForecastSummary>>)
:An array of
WhatIfForecasts
objects that describe the matched forecasts.next_token(Option<String>)
:If the result of the previous request was truncated, the response includes a
NextToken
. To retrieve the next set of results, use the token in the next request. Tokens expire after 24 hours.
- On failure, responds with
SdkError<ListWhatIfForecastsError>
sourcepub fn resume_resource(&self) -> ResumeResource
pub fn resume_resource(&self) -> ResumeResource
Constructs a fluent builder for the ResumeResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) of the monitor resource to resume.
- On success, responds with
ResumeResourceOutput
- On failure, responds with
SdkError<ResumeResourceError>
sourcepub fn stop_resource(&self) -> StopResource
pub fn stop_resource(&self) -> StopResource
Constructs a fluent builder for the StopResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) that identifies the resource to stop. The supported ARNs are
DatasetImportJobArn
,PredictorArn
,PredictorBacktestExportJobArn
,ForecastArn
,ForecastExportJobArn
,ExplainabilityArn
, andExplainabilityExportArn
.
- On success, responds with
StopResourceOutput
- On failure, responds with
SdkError<StopResourceError>
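A minimal sketch of stopping an in-progress job (hypothetical ARN):
// Stop a running dataset import job.
client
    .stop_resource()
    .resource_arn("arn:aws:forecast:us-west-2:123456789012:dataset-import-job/my_dataset/my_import")
    .send()
    .await?;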
sourcepub fn tag_resource(&self) -> TagResource
pub fn tag_resource(&self) -> TagResource
Constructs a fluent builder for the TagResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) that identifies the resource to tag.
tags(Vec<Tag>)
/set_tags(Option<Vec<Tag>>)
:The tags to add to the resource. A tag is an array of key-value pairs.
The following basic restrictions apply to tags:
-
Maximum number of tags per resource - 50.
-
For each resource, each tag key must be unique, and each tag key can have only one value.
-
Maximum key length - 128 Unicode characters in UTF-8.
-
Maximum value length - 256 Unicode characters in UTF-8.
-
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
-
Tag keys and values are case sensitive.
-
Do not use
aws:
,AWS:
, or any upper or lowercase combination of such as a prefix for keys as it is reserved for AWS use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value hasaws
as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix ofaws
do not count against your tags per resource limit.
-
- On success, responds with
TagResourceOutput
- On failure, responds with
SdkError<TagResourceError>
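A minimal sketch of tagging a resource, assuming the Tag type is exported under aws_sdk_forecast::model and that its builder returns the struct directly (hypothetical ARN and tag values):
use aws_sdk_forecast::model::Tag;

// Attach a single key-value tag to a dataset.
let tag = Tag::builder().key("project").value("demo").build();
client
    .tag_resource()
    .resource_arn("arn:aws:forecast:us-west-2:123456789012:dataset/my_dataset")
    .tags(tag) // appends one Tag to the request
    .send()
    .await?;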
sourcepub fn untag_resource(&self) -> UntagResource
pub fn untag_resource(&self) -> UntagResource
Constructs a fluent builder for the UntagResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:The Amazon Resource Name (ARN) that identifies the resource from which to remove tags.
tag_keys(Vec<String>)
/set_tag_keys(Option<Vec<String>>)
:The keys of the tags to be removed.
- On success, responds with
UntagResourceOutput
- On failure, responds with
SdkError<UntagResourceError>
sourcepub fn update_dataset_group(&self) -> UpdateDatasetGroup
pub fn update_dataset_group(&self) -> UpdateDatasetGroup
Constructs a fluent builder for the UpdateDatasetGroup
operation.
- The fluent builder is configurable:
dataset_group_arn(impl Into<String>)
/set_dataset_group_arn(Option<String>)
:The ARN of the dataset group.
dataset_arns(Vec<String>)
/set_dataset_arns(Option<Vec<String>>)
:An array of the Amazon Resource Names (ARNs) of the datasets to add to the dataset group.
- On success, responds with
UpdateDatasetGroupOutput
- On failure, responds with
SdkError<UpdateDatasetGroupError>
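A minimal sketch of updating the datasets in a dataset group (hypothetical ARNs; each call to dataset_arns appends one ARN to the list sent in the request):
// Set the datasets that the dataset group should contain.
client
    .update_dataset_group()
    .dataset_group_arn("arn:aws:forecast:us-west-2:123456789012:dataset-group/my_group")
    .dataset_arns("arn:aws:forecast:us-west-2:123456789012:dataset/my_dataset")
    .send()
    .await?;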
sourceimpl Client
impl Client
sourcepub fn from_conf_conn<C, E>(conf: Config, conn: C) -> Self
where
    C: SmithyConnector<Error = E> + Send + 'static,
    E: Into<ConnectorError>,
pub fn from_conf_conn<C, E>(conf: Config, conn: C) -> Self
where
    C: SmithyConnector<Error = E> + Send + 'static,
    E: Into<ConnectorError>,
Creates a client with the given service config and connector override.