pub struct CreateDatasetFluentBuilder { /* private fields */ }
Fluent builder constructing a request to CreateDataset.

Creates an Amazon Forecast dataset. The information about the dataset that you provide helps Forecast understand how to consume the data for model training. This includes the following:
- DataFrequency - How frequently your historical time-series data is collected.
- Domain and DatasetType - Each dataset has an associated dataset domain and a type within the domain. Amazon Forecast provides a list of predefined domains and types within each domain. For each unique dataset domain and type within the domain, Amazon Forecast requires your data to include a minimum set of predefined fields.
- Schema - A schema specifies the fields in the dataset, including the field name and data type.
After creating a dataset, you import your training data into it and add the dataset to a dataset group. You use the dataset group to create a predictor. For more information, see Importing datasets.
To get a list of all your datasets, use the ListDatasets operation.
For example Forecast datasets, see the Amazon Forecast Sample GitHub repository.
The Status of a dataset must be ACTIVE before you can import training data. Use the DescribeDataset operation to get the status.
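A minimal sketch of the typical call sequence with this builder, assuming a configured aws_sdk_forecast::Client. The dataset name and schema fields are illustrative; only the methods documented on this page plus the Schema, SchemaAttribute, and AttributeType builders from aws_sdk_forecast::types are assumed.

use aws_sdk_forecast::types::{AttributeType, DatasetType, Domain, Schema, SchemaAttribute};
use aws_sdk_forecast::Client;

async fn create_retail_dataset(client: &Client) -> Result<(), aws_sdk_forecast::Error> {
    // Minimal schema for the RETAIL domain with the TARGET_TIME_SERIES type: item_id, timestamp, demand.
    let schema = Schema::builder()
        .attributes(SchemaAttribute::builder().attribute_name("item_id").attribute_type(AttributeType::String).build())
        .attributes(SchemaAttribute::builder().attribute_name("timestamp").attribute_type(AttributeType::Timestamp).build())
        .attributes(SchemaAttribute::builder().attribute_name("demand").attribute_type(AttributeType::Float).build())
        .build();

    let output = client
        .create_dataset()
        .dataset_name("retail_demand_ts") // hypothetical name
        .domain(Domain::Retail)
        .dataset_type(DatasetType::TargetTimeSeries)
        .data_frequency("D")
        .schema(schema)
        .send()
        .await?;

    // The dataset must reach ACTIVE (see DescribeDataset) before importing training data.
    println!("created dataset: {:?}", output.dataset_arn());
    Ok(())
}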
Implementations

impl CreateDatasetFluentBuilder

pub fn as_input(&self) -> &CreateDatasetInputBuilder
Access the CreateDataset as a reference.
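A small sketch of using as_input() to inspect the accumulated input before sending; the dataset name is illustrative, and the get_dataset_name() accessor on the input builder is assumed to follow the usual input-builder accessor pattern.

use aws_sdk_forecast::Client;

fn inspect_pending_request(client: &Client) {
    let builder = client.create_dataset().dataset_name("electricity_usage"); // hypothetical name
    // as_input() borrows the underlying CreateDatasetInputBuilder without consuming the fluent builder.
    let input = builder.as_input();
    assert_eq!(input.get_dataset_name().as_deref(), Some("electricity_usage"));
}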
pub async fn send(self) -> Result<CreateDatasetOutput, SdkError<CreateDatasetError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
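A hedged sketch of matching the returned SdkError and raising the retry limit when constructing the client. into_service_error(), RetryConfig, and the is_invalid_input_exception() predicate follow the usual smithy-rs patterns; the client setup and dataset name are illustrative.

use aws_config::retry::RetryConfig;
use aws_config::BehaviorVersion;
use aws_sdk_forecast::Client;

async fn create_with_more_retries() {
    // Raise the default of two retries to five total attempts (illustrative).
    let config = aws_config::defaults(BehaviorVersion::latest())
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    let client = Client::new(&config);

    // Other required fields omitted for brevity; the service may reject this request.
    match client.create_dataset().dataset_name("demo").send().await {
        Ok(output) => println!("created: {:?}", output.dataset_arn()),
        // Unwrap the transport-level SdkError into the service error so variants can be matched.
        Err(sdk_err) => match sdk_err.into_service_error() {
            err if err.is_invalid_input_exception() => eprintln!("invalid input: {err}"),
            err => eprintln!("create_dataset failed: {err}"),
        },
    }
}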
pub fn customize(self) -> CustomizableOperation<CreateDatasetOutput, CreateDatasetError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
pub fn dataset_name(self, input: impl Into<String>) -> Self
A name for the dataset.

pub fn set_dataset_name(self, input: Option<String>) -> Self
A name for the dataset.

pub fn get_dataset_name(&self) -> &Option<String>
A name for the dataset.
pub fn domain(self, input: Domain) -> Self
The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match.
The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see Importing datasets.
pub fn set_domain(self, input: Option<Domain>) -> Self
The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match.
The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see Importing datasets.
pub fn get_domain(&self) -> &Option<Domain>
The domain associated with the dataset. When you add a dataset to a dataset group, this value and the value specified for the Domain parameter of the CreateDatasetGroup operation must match.
The Domain and DatasetType that you choose determine the fields that must be present in the training data that you import to the dataset. For example, if you choose the RETAIL domain and TARGET_TIME_SERIES as the DatasetType, Amazon Forecast requires item_id, timestamp, and demand fields to be present in your data. For more information, see Importing datasets.
pub fn dataset_type(self, input: DatasetType) -> Self
The dataset type. Valid values depend on the chosen Domain.

pub fn set_dataset_type(self, input: Option<DatasetType>) -> Self
The dataset type. Valid values depend on the chosen Domain.

pub fn get_dataset_type(&self) -> &Option<DatasetType>
The dataset type. Valid values depend on the chosen Domain.
pub fn data_frequency(self, input: impl Into<String>) -> Self
The frequency of data collection. This parameter is required for RELATED_TIME_SERIES datasets.
Valid intervals are an integer followed by Y (Year), M (Month), W (Week), D (Day), H (Hour), and min (Minute). For example, "1D" indicates every day and "15min" indicates every 15 minutes. You cannot specify a value that would overlap with the next larger frequency. That means, for example, you cannot specify a frequency of 60 minutes, because that is equivalent to 1 hour. The valid values for each frequency are the following:
- Minute - 1-59
- Hour - 1-23
- Day - 1-6
- Week - 1-4
- Month - 1-11
- Year - 1
Thus, if you want forecasts every other week, specify "2W". Or, if you want quarterly forecasts, specify "3M". A few of these frequency strings appear in the sketch below.
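A minimal sketch of setting a few valid frequency strings, assuming only the fluent-builder methods on this page; the dataset names are placeholders and none of the requests are sent.

use aws_sdk_forecast::Client;

// None of these builders are sent; call .send().await on a fully configured builder to submit it.
fn frequency_examples(client: &Client) {
    let _daily = client.create_dataset().dataset_name("daily_demand").data_frequency("D"); // every day
    let _biweekly = client.create_dataset().dataset_name("biweekly_demand").data_frequency("2W"); // every other week
    let _quarterly = client.create_dataset().dataset_name("quarterly_demand").data_frequency("3M"); // quarterly
    let _quarter_hourly = client.create_dataset().dataset_name("metered_usage").data_frequency("15min"); // every 15 minutes
}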
pub fn set_data_frequency(self, input: Option<String>) -> Self
The frequency of data collection. This parameter is required for RELATED_TIME_SERIES datasets.
Valid intervals are an integer followed by Y (Year), M (Month), W (Week), D (Day), H (Hour), and min (Minute). For example, "1D" indicates every day and "15min" indicates every 15 minutes. You cannot specify a value that would overlap with the next larger frequency. That means, for example, you cannot specify a frequency of 60 minutes, because that is equivalent to 1 hour. The valid values for each frequency are the following:
- Minute - 1-59
- Hour - 1-23
- Day - 1-6
- Week - 1-4
- Month - 1-11
- Year - 1
Thus, if you want forecasts every other week, specify "2W". Or, if you want quarterly forecasts, specify "3M".
pub fn get_data_frequency(&self) -> &Option<String>
The frequency of data collection. This parameter is required for RELATED_TIME_SERIES datasets.
Valid intervals are an integer followed by Y (Year), M (Month), W (Week), D (Day), H (Hour), and min (Minute). For example, "1D" indicates every day and "15min" indicates every 15 minutes. You cannot specify a value that would overlap with the next larger frequency. That means, for example, you cannot specify a frequency of 60 minutes, because that is equivalent to 1 hour. The valid values for each frequency are the following:
- Minute - 1-59
- Hour - 1-23
- Day - 1-6
- Week - 1-4
- Month - 1-11
- Year - 1
Thus, if you want forecasts every other week, specify "2W". Or, if you want quarterly forecasts, specify "3M".
pub fn schema(self, input: Schema) -> Self
The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see Dataset Domains and Dataset Types.
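A short sketch of assembling the Schema value passed to this method, assuming the Schema, SchemaAttribute, and AttributeType builders in aws_sdk_forecast::types; the attribute names mirror the RETAIL / TARGET_TIME_SERIES example earlier on this page.

use aws_sdk_forecast::types::{AttributeType, Schema, SchemaAttribute};

// Attribute order must match the column order of the data you will import.
fn demand_schema() -> Schema {
    let attr = |name: &str, ty: AttributeType| {
        SchemaAttribute::builder()
            .attribute_name(name)
            .attribute_type(ty)
            .build()
    };
    Schema::builder()
        .attributes(attr("item_id", AttributeType::String))
        .attributes(attr("timestamp", AttributeType::Timestamp))
        .attributes(attr("demand", AttributeType::Float))
        .build()
}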
pub fn set_schema(self, input: Option<Schema>) -> Self
The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see Dataset Domains and Dataset Types.

pub fn get_schema(&self) -> &Option<Schema>
The schema for the dataset. The schema attributes and their order must match the fields in your data. The dataset Domain and DatasetType that you choose determine the minimum required fields in your training data. For information about the required fields for a specific dataset domain and type, see Dataset Domains and Dataset Types.
pub fn encryption_config(self, input: EncryptionConfig) -> Self
A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.

pub fn set_encryption_config(self, input: Option<EncryptionConfig>) -> Self
A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.

pub fn get_encryption_config(&self) -> &Option<EncryptionConfig>
A Key Management Service (KMS) key and the Identity and Access Management (IAM) role that Amazon Forecast can assume to access the key.
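A hedged sketch of supplying an EncryptionConfig, assuming the EncryptionConfig builder in aws_sdk_forecast::types. The ARNs are placeholders, and build() is treated as fallible on the assumption that role_arn and kms_key_arn are modeled as required fields.

use aws_sdk_forecast::types::EncryptionConfig;
use aws_sdk_forecast::Client;

async fn create_encrypted_dataset(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder ARNs; substitute a real IAM role that Forecast can assume and a real KMS key.
    let encryption = EncryptionConfig::builder()
        .role_arn("arn:aws:iam::111122223333:role/ForecastExecutionRole")
        .kms_key_arn("arn:aws:kms:us-east-1:111122223333:key/example-key-id")
        .build()?; // assumed fallible because both fields are modeled as required

    client
        .create_dataset()
        .dataset_name("encrypted_dataset") // hypothetical name; other required fields omitted
        .encryption_config(encryption)
        .send()
        .await?;
    Ok(())
}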
pub fn tags(self, input: Tag) -> Self
Appends an item to Tags.
To override the contents of this collection use set_tags.
The optional metadata that you apply to the dataset to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define (a sketch of appending tags follows after this list).
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
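A hedged sketch of appending tags, assuming the Tag builder in aws_sdk_forecast::types; the keys and values are illustrative, and Tag::builder().build() is treated as fallible on the assumption that both key and value are modeled as required.

use aws_sdk_forecast::types::Tag;
use aws_sdk_forecast::Client;

async fn create_tagged_dataset(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    // Illustrative tags; keys must not use the reserved aws: prefix.
    let team = Tag::builder().key("team").value("demand-planning").build()?; // assumed fallible build
    let stage = Tag::builder().key("stage").value("dev").build()?;

    client
        .create_dataset()
        .dataset_name("tagged_dataset") // hypothetical name; other required fields omitted
        .tags(team) // each call appends one Tag; use set_tags to replace the whole collection
        .tags(stage)
        .send()
        .await?;
    Ok(())
}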
pub fn set_tags(self, input: Option<Vec<Tag>>) -> Self
The optional metadata that you apply to the dataset to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
pub fn get_tags(&self) -> &Option<Vec<Tag>>
The optional metadata that you apply to the dataset to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
- Maximum number of tags per resource - 50.
- For each resource, each tag key must be unique, and each tag key can have only one value.
- Maximum key length - 128 Unicode characters in UTF-8.
- Maximum value length - 256 Unicode characters in UTF-8.
- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
- Tag keys and values are case sensitive.
- Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for keys as it is reserved for Amazon Web Services use. You cannot edit or delete tag keys with this prefix. Values can have this prefix. If a tag value has aws as its prefix but the key does not, then Forecast considers it to be a user tag and will count against the limit of 50 tags. Tags with only the key prefix of aws do not count against your tags per resource limit.
Trait Implementations

impl Clone for CreateDatasetFluentBuilder

fn clone(&self) -> CreateDatasetFluentBuilder

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

Auto Trait Implementations
impl Freeze for CreateDatasetFluentBuilder
impl !RefUnwindSafe for CreateDatasetFluentBuilder
impl Send for CreateDatasetFluentBuilder
impl Sync for CreateDatasetFluentBuilder
impl Unpin for CreateDatasetFluentBuilder
impl !UnwindSafe for CreateDatasetFluentBuilder
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> Paint for T where T: ?Sized
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the foreground set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like red() and green(), which have the same functionality but are pithier.
Example
Set foreground color to white using fg():
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white().
use yansi::Paint;
painted.white();
fn bright_black(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self with the background set to value.
This method should be used rarely. Instead, prefer to use color-specific builder methods like on_red() and on_green(), which have the same functionality but are pithier.
Example
Set background color to red using bg():
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red().
use yansi::Paint;
painted.on_red();
fn on_primary(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute value.
This method should be used rarely. Instead, prefer to use attribute-specific builder methods like bold() and underline(), which have the same functionality but are pithier.
Example
Make text bold using attr():
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold().
use yansi::Paint;
painted.bold();
fn rapid_blink(&self) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi Quirk value.
This method should be used rarely. Instead, prefer to use quirk-specific builder methods like mask() and wrap(), which have the same functionality but are pithier.
Example
Enable wrapping using quirk():
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap().
use yansi::Paint;
painted.wrap();
fn clear(&self) -> Painted<&T>
Deprecated since 1.0.1: renamed to resetting() due to conflicts with Vec::clear(). The clear() method will be removed in a future release.

fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition value applies. Replaces any previous condition.
See the crate level docs for more details.
Example
Enable styling painted only when both stdout and stderr are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);