pub struct CreateDataSourceFromRedshiftFluentBuilder { /* private fields */ }

Fluent builder constructing a request to CreateDataSourceFromRedshift.

Creates a DataSource from a database hosted on an Amazon Redshift cluster. A DataSource references data that can be used to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

CreateDataSourceFromRedshift is an asynchronous operation. In response to CreateDataSourceFromRedshift, Amazon Machine Learning (Amazon ML) immediately returns and sets the DataSource status to PENDING. After the DataSource is created and ready for use, Amazon ML sets the Status parameter to COMPLETED. A DataSource in the COMPLETED or PENDING state can be used only to perform CreateMLModel, CreateEvaluation, or CreateBatchPrediction operations.

If Amazon ML can't accept the input source, it sets the Status parameter to FAILED and includes an error message in the Message attribute of the GetDataSource operation response.

The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified by a SelectSqlQuery query. Amazon ML executes an UNLOAD command in Amazon Redshift to transfer the result set of the SelectSqlQuery query to S3StagingLocation.

After the DataSource has been created, it's ready for use in evaluations and batch predictions. If you plan to use the DataSource to train an MLModel, the DataSource also requires a recipe. A recipe describes how each input variable will be used in training an MLModel. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.

You can't change an existing datasource, but you can copy and modify the settings from an existing Amazon Redshift datasource to create a new datasource. To do so, call GetDataSource for an existing datasource and copy the values to a CreateDataSource call. Change the settings that you want to change and make sure that all required fields have the appropriate values.
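
Putting the pieces together, a minimal usage sketch of this builder follows. It assumes an already-configured aws_sdk_machinelearning::Client bound to client and a RedshiftDataSpec bound to data_spec (a construction sketch follows the data_spec method below); the IDs and the role ARN are placeholders introduced here for illustration, not values from this documentation.

```rust
// Hedged usage sketch: `client` is an already-configured
// aws_sdk_machinelearning::Client and `data_spec` is a RedshiftDataSpec
// assembled elsewhere. Identifiers and the role ARN are placeholders.
let output = client
    .create_data_source_from_redshift()
    .data_source_id("my-redshift-datasource")   // user-supplied, unique ID
    .data_source_name("Redshift observations")  // optional description
    .data_spec(data_spec)                        // see the data_spec method below
    .role_arn("arn:aws:iam::123456789012:role/AmazonMLRole")
    .compute_statistics(true)                    // required if training an MLModel
    .send()
    .await?;

// The response echoes the DataSourceId; creation continues asynchronously
// and the DataSource starts in the PENDING state.
println!("created: {:?}", output.data_source_id());
```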

Implementations

impl CreateDataSourceFromRedshiftFluentBuilder

pub fn as_input(&self) -> &CreateDataSourceFromRedshiftInputBuilder

Access the CreateDataSourceFromRedshift request input as a reference.

pub async fn send( self ) -> Result<CreateDataSourceFromRedshiftOutput, SdkError<CreateDataSourceFromRedshiftError, HttpResponse>>

Sends the request and returns the response.

If an error occurs, an SdkError will be returned with additional details that can be matched against.

By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
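
A hedged sketch of both points follows: retry behavior is configured on the client (via aws_config) rather than on this builder, and a failed send() can be converted into the operation's error type for inspection. The aws_config calls, BehaviorVersion, and into_service_error() reflect recent aws-sdk-rust releases and may differ in yours; data_spec and role_arn are assumed to be bound as in the earlier sketch.

```rust
use aws_config::retry::RetryConfig;
use aws_config::BehaviorVersion;

// Retry behavior is configured when the client is built, not per request.
// The BehaviorVersion and retry APIs are assumptions about the SDK release in use.
let config = aws_config::defaults(BehaviorVersion::latest())
    .retry_config(RetryConfig::standard().with_max_attempts(5))
    .load()
    .await;
let client = aws_sdk_machinelearning::Client::new(&config);

match client
    .create_data_source_from_redshift()
    .data_source_id("my-redshift-datasource")
    .data_spec(data_spec)
    .role_arn(role_arn)
    .compute_statistics(true)
    .send()
    .await
{
    Ok(output) => println!("pending: {:?}", output.data_source_id()),
    // into_service_error() converts the SdkError into the operation's
    // CreateDataSourceFromRedshiftError for further matching.
    Err(err) => eprintln!("request failed: {}", err.into_service_error()),
}
```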

pub fn customize( self ) -> CustomizableOperation<CreateDataSourceFromRedshiftOutput, CreateDataSourceFromRedshiftError, Self>

Consumes this builder, creating a customizable operation that can be modified before being sent.
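
For illustration, a hedged sketch of customizing the request before dispatch is shown below. The mutate_request method and headers_mut() accessor are assumptions about the CustomizableOperation API in recent SDK releases, and the header is purely illustrative; client, data_spec, and role_arn are assumed to be bound as in the earlier sketches.

```rust
// Hedged sketch: customize() yields a CustomizableOperation that can be
// adjusted before the request is sent. Header name/value are illustrative only.
let output = client
    .create_data_source_from_redshift()
    .data_source_id("my-redshift-datasource")
    .data_spec(data_spec)
    .role_arn(role_arn)
    .compute_statistics(true)
    .customize()
    .mutate_request(|req| {
        // Mutate the outgoing HTTP request, e.g. attach a tracing header.
        req.headers_mut().insert("x-example-trace-id", "demo-123");
    })
    .send()
    .await?;
```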

pub fn data_source_id(self, input: impl Into<String>) -> Self

A user-supplied ID that uniquely identifies the DataSource.

pub fn set_data_source_id(self, input: Option<String>) -> Self

A user-supplied ID that uniquely identifies the DataSource.

pub fn get_data_source_id(&self) -> &Option<String>

A user-supplied ID that uniquely identifies the DataSource.

pub fn data_source_name(self, input: impl Into<String>) -> Self

A user-supplied name or description of the DataSource.

pub fn set_data_source_name(self, input: Option<String>) -> Self

A user-supplied name or description of the DataSource.

pub fn get_data_source_name(&self) -> &Option<String>

A user-supplied name or description of the DataSource.

pub fn data_spec(self, input: RedshiftDataSpec) -> Self

The data specification of an Amazon Redshift DataSource:

  • DatabaseInformation -

    • DatabaseName - The name of the Amazon Redshift database.

    • ClusterIdentifier - The unique ID for the Amazon Redshift cluster.

  • DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

  • SelectSqlQuery - The query that is used to retrieve the observation data for the DataSource.

  • S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.

  • DataSchemaUri - The Amazon S3 location of the DataSchema.

  • DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.

  • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource.

    Sample - "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"
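
A hedged sketch of assembling this specification from the generated type builders is shown below. The aws_sdk_machinelearning::types module path and the fallibility of build() follow the usual aws-sdk-rust pattern and may differ by release; every literal value is a placeholder.

```rust
use aws_sdk_machinelearning::types::{
    RedshiftDataSpec, RedshiftDatabase, RedshiftDatabaseCredentials,
};

// Hedged sketch: builder shapes follow the usual aws-sdk-rust pattern.
// `build()` is assumed to be fallible for shapes with required members,
// hence the `?`. All literal values are placeholders.
let database = RedshiftDatabase::builder()
    .database_name("dev")                       // Amazon Redshift database name
    .cluster_identifier("my-redshift-cluster")  // unique ID of the cluster
    .build()?;

let credentials = RedshiftDatabaseCredentials::builder()
    .username("ml_user")
    .password("placeholder-password")
    .build()?;

let data_spec = RedshiftDataSpec::builder()
    .database_information(database)
    .database_credentials(credentials)
    .select_sql_query("SELECT * FROM observations")
    .s3_staging_location("s3://amzn-s3-demo-bucket/staging/")
    .data_schema_uri("s3://amzn-s3-demo-bucket/observations.schema")
    .data_rearrangement("{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}")
    .build()?;
```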

pub fn set_data_spec(self, input: Option<RedshiftDataSpec>) -> Self

The data specification of an Amazon Redshift DataSource:

  • DatabaseInformation -

    • DatabaseName - The name of the Amazon Redshift database.

    • ClusterIdentifier - The unique ID for the Amazon Redshift cluster.

  • DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

  • SelectSqlQuery - The query that is used to retrieve the observation data for the DataSource.

  • S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.

  • DataSchemaUri - The Amazon S3 location of the DataSchema.

  • DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.

  • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource.

    Sample - "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

pub fn get_data_spec(&self) -> &Option<RedshiftDataSpec>

The data specification of an Amazon Redshift DataSource:

  • DatabaseInformation -

    • DatabaseName - The name of the Amazon Redshift database.

    • ClusterIdentifier - The unique ID for the Amazon Redshift cluster.

  • DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

  • SelectSqlQuery - The query that is used to retrieve the observation data for the DataSource.

  • S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the SelectSqlQuery query is stored in this location.

  • DataSchemaUri - The Amazon S3 location of the DataSchema.

  • DataSchema - A JSON string representing the schema. This is not required if DataSchemaUri is specified.

  • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the DataSource.

    Sample - "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"

pub fn role_arn(self, input: impl Into<String>) -> Self

A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:

  • A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster

  • An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation

pub fn set_role_arn(self, input: Option<String>) -> Self

A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:

  • A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster

  • An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation

pub fn get_role_arn(&self) -> &Option<String>

A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:

  • A security group to allow Amazon ML to execute the SelectSqlQuery query on an Amazon Redshift cluster

  • An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the S3StagingLocation

pub fn compute_statistics(self, input: bool) -> Self

Indicates whether Amazon ML should compute statistics for the DataSource. The statistics are generated from the observation data referenced by the DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource is to be used for MLModel training.

pub fn set_compute_statistics(self, input: Option<bool>) -> Self

Indicates whether Amazon ML should compute statistics for the DataSource. The statistics are generated from the observation data referenced by the DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource is to be used for MLModel training.

pub fn get_compute_statistics(&self) -> &Option<bool>

Indicates whether Amazon ML should compute statistics for the DataSource. The statistics are generated from the observation data referenced by the DataSource. Amazon ML uses the statistics internally during MLModel training. This parameter must be set to true if the DataSource is to be used for MLModel training.

Trait Implementations

impl Clone for CreateDataSourceFromRedshiftFluentBuilder

fn clone(&self) -> CreateDataSourceFromRedshiftFluentBuilder

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for CreateDataSourceFromRedshiftFluentBuilder

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.