pub struct CreateTransformJobFluentBuilder { /* private fields */ }

Fluent builder constructing a request to CreateTransformJob.

Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.

To perform batch transformations, you create a transform job and use the data that you have readily available.

In the request body, you provide the following:

  • TransformJobName - Identifies the transform job. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

  • ModelName - Identifies the model to use. ModelName must be the name of an existing Amazon SageMaker model in the same Amazon Web Services Region and Amazon Web Services account. For information on creating a model, see CreateModel.

  • TransformInput - Describes the dataset to be transformed and the Amazon S3 location where it is stored.

  • TransformOutput - Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.

  • TransformResources - Identifies the ML compute instances for the transform job.

For more information about how batch transformation works, see Batch Transform.
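
The fluent builder mirrors that request body. Below is a minimal, hedged sketch of creating a transform job with this builder; every value (bucket names, job and model names, instance type) is a placeholder, and the `model` module path follows the pre-1.0 SDK layout documented here (newer releases expose the same structs under `types`, and their `build()` methods may return a `Result`).

```rust
use aws_sdk_sagemaker::model::{
    S3DataType, TransformDataSource, TransformInput, TransformInstanceType,
    TransformOutput, TransformResources, TransformS3DataSource,
};
use aws_sdk_sagemaker::Client;

async fn create_job(client: &Client) -> Result<(), aws_sdk_sagemaker::Error> {
    // TransformInput: where the dataset lives and how to read it.
    let input = TransformInput::builder()
        .data_source(
            TransformDataSource::builder()
                .s3_data_source(
                    TransformS3DataSource::builder()
                        .s3_data_type(S3DataType::S3Prefix)
                        .s3_uri("s3://amzn-s3-demo-bucket/input/") // placeholder URI
                        .build(),
                )
                .build(),
        )
        .content_type("text/csv")
        .build();

    // TransformOutput: where SageMaker writes the results.
    let output = TransformOutput::builder()
        .s3_output_path("s3://amzn-s3-demo-bucket/output/") // placeholder URI
        .build();

    // TransformResources: the ML compute instances for the job.
    let resources = TransformResources::builder()
        .instance_type(TransformInstanceType::MlM5Xlarge) // e.g. ml.m5.xlarge
        .instance_count(1)
        .build();

    client
        .create_transform_job()
        .transform_job_name("my-transform-job") // unique per Region and account
        .model_name("my-existing-model")        // an existing SageMaker model
        .transform_input(input)
        .transform_output(output)
        .transform_resources(resources)
        .send()
        .await?;
    Ok(())
}
```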

Implementations

impl CreateTransformJobFluentBuilder

pub async fn customize(self) -> Result<CustomizableOperation<CreateTransformJob, AwsResponseRetryClassifier>, SdkError<CreateTransformJobError>>

Consume this builder, creating a customizable operation that can be modified before being sent. The operation’s inner http::Request can be modified as well.
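
A hedged sketch of the customize flow. `mutate_request` is one of the helpers `CustomizableOperation` has offered for editing the inner `http::Request`, but the exact helper set varies between SDK releases, so treat that call as an assumption; the job fields are placeholders.

```rust
use aws_sdk_sagemaker::Client;

async fn send_with_header(client: &Client) -> Result<(), aws_sdk_sagemaker::Error> {
    client
        .create_transform_job()
        .transform_job_name("my-transform-job") // placeholder; other required fields elided
        .customize()
        .await?
        .mutate_request(|req| {
            // Attach a custom header to the underlying http::Request before it is sent.
            req.headers_mut()
                .insert("x-example-trace-id", "demo".parse().unwrap());
        })
        .send()
        .await?;
    Ok(())
}
```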

pub async fn send(self) -> Result<CreateTransformJobOutput, SdkError<CreateTransformJobError>>

Sends the request and returns the response.

If an error occurs, an SdkError will be returned with additional details that can be matched against.

By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
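
A minimal sketch of raising the retry limit at client-construction time and matching on the returned Result; names follow the aws_config crate, and the job fields are placeholders.

```rust
use aws_config::retry::RetryConfig;
use aws_sdk_sagemaker::Client;

async fn create_with_retries() {
    // Allow up to 5 total attempts instead of the default.
    let shared_config = aws_config::from_env()
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    let client = Client::new(&shared_config);

    let result = client
        .create_transform_job()
        .transform_job_name("my-transform-job") // placeholder; other required fields elided
        .send()
        .await;

    match result {
        Ok(_output) => println!("transform job created"),
        // SdkError carries dispatch, timeout, and service-error details that can be matched on.
        Err(err) => eprintln!("CreateTransformJob failed: {err}"),
    }
}
```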

pub fn transform_job_name(self, input: impl Into<String>) -> Self

The name of the transform job. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

pub fn set_transform_job_name(self, input: Option<String>) -> Self

The name of the transform job. The name must be unique within an Amazon Web Services Region in an Amazon Web Services account.

pub fn model_name(self, input: impl Into<String>) -> Self

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an Amazon Web Services Region in an Amazon Web Services account.

pub fn set_model_name(self, input: Option<String>) -> Self

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an Amazon Web Services Region in an Amazon Web Services account.

pub fn max_concurrent_transforms(self, input: i32) -> Self

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

pub fn set_max_concurrent_transforms(self, input: Option<i32>) -> Self

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

pub fn model_client_config(self, input: ModelClientConfig) -> Self

Configures the timeout and maximum number of retries for processing a transform job invocation.

pub fn set_model_client_config(self, input: Option<ModelClientConfig>) -> Self

Configures the timeout and maximum number of retries for processing a transform job invocation.
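
A short sketch of supplying a ModelClientConfig; the values are illustrative, and the `model` module path follows the pre-1.0 layout.

```rust
use aws_sdk_sagemaker::model::ModelClientConfig;
use aws_sdk_sagemaker::Client;

fn with_model_client_config(client: &Client) {
    let config = ModelClientConfig::builder()
        .invocations_timeout_in_seconds(3600) // per-invocation timeout
        .invocations_max_retries(1)           // retries for a failed invocation
        .build();

    // Attach it to the request; remaining fields and send() are elided.
    let _builder = client.create_transform_job().model_client_config(config);
}
```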

pub fn max_payload_in_mb(self, input: i32) -> Self

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.

The value of MaxPayloadInMB cannot be greater than 100 MB. If you specify the MaxConcurrentTransforms parameter, the value of (MaxConcurrentTransforms * MaxPayloadInMB) also cannot exceed 100 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.

pub fn set_max_payload_in_mb(self, input: Option<i32>) -> Self

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.

The value of MaxPayloadInMB cannot be greater than 100 MB. If you specify the MaxConcurrentTransforms parameter, the value of (MaxConcurrentTransforms * MaxPayloadInMB) also cannot exceed 100 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.
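
A short sketch of how the two limits interact; the values below are illustrative and sit exactly at the 100 MB ceiling described above.

```rust
use aws_sdk_sagemaker::Client;

fn with_payload_limits(client: &Client) {
    // 10 concurrent requests * 10 MB payloads = 100 MB, the maximum allowed product.
    let _builder = client
        .create_transform_job()
        .max_concurrent_transforms(10)
        .max_payload_in_mb(10);
}
```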

pub fn batch_strategy(self, input: BatchStrategy) -> Self

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.

To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.

To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.

To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.

pub fn set_batch_strategy(self, input: Option<BatchStrategy>) -> Self

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data that inference can be made on. For example, a single line in a CSV file is a record.

To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.

To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.

To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.
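
A sketch of pairing BatchStrategy with the SplitType set on the input; the data source is elided and the `model` module path follows the pre-1.0 layout.

```rust
use aws_sdk_sagemaker::model::{BatchStrategy, SplitType, TransformInput};
use aws_sdk_sagemaker::Client;

fn with_multi_record_batches(client: &Client) {
    // SplitType must be Line, RecordIO, or TFRecord for batching to apply.
    let input = TransformInput::builder()
        .split_type(SplitType::Line)
        // ...data_source and content_type elided...
        .build();

    let _builder = client
        .create_transform_job()
        .batch_strategy(BatchStrategy::MultiRecord) // pack records up to MaxPayloadInMB
        .transform_input(input);
}
```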

pub fn environment(self, k: impl Into<String>, v: impl Into<String>) -> Self

Adds a key-value pair to Environment.

To override the contents of this collection use set_environment.

The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.

pub fn set_environment(self, input: Option<HashMap<String, String>>) -> Self

The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.
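
A sketch of both ways to populate the map: appending pairs with environment(), or replacing the whole collection with set_environment(); keys and values are placeholders.

```rust
use std::collections::HashMap;

use aws_sdk_sagemaker::Client;

fn with_environment(client: &Client) {
    // Append entries one at a time (the map accepts up to 16 entries).
    let _appended = client
        .create_transform_job()
        .environment("LOG_LEVEL", "info")
        .environment("BATCH_MODE", "true");

    // Or replace the collection wholesale.
    let mut vars = HashMap::new();
    vars.insert("LOG_LEVEL".to_string(), "info".to_string());
    let _replaced = client.create_transform_job().set_environment(Some(vars));
}
```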

pub fn transform_input(self, input: TransformInput) -> Self

Describes the input source and the way the transform job consumes it.

pub fn set_transform_input(self, input: Option<TransformInput>) -> Self

Describes the input source and the way the transform job consumes it.

pub fn transform_output(self, input: TransformOutput) -> Self

Describes the results of the transform job.

pub fn set_transform_output(self, input: Option<TransformOutput>) -> Self

Describes the results of the transform job.

pub fn data_capture_config(self, input: BatchDataCaptureConfig) -> Self

Configuration to control how SageMaker captures inference data.

pub fn set_data_capture_config(self, input: Option<BatchDataCaptureConfig>) -> Self

Configuration to control how SageMaker captures inference data.
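
A hedged sketch of enabling batch data capture; the destination URI is a placeholder and the `model` module path follows the pre-1.0 layout.

```rust
use aws_sdk_sagemaker::model::BatchDataCaptureConfig;
use aws_sdk_sagemaker::Client;

fn with_data_capture(client: &Client) {
    let capture = BatchDataCaptureConfig::builder()
        .destination_s3_uri("s3://amzn-s3-demo-bucket/captured/") // placeholder URI
        .generate_inference_id(true)
        .build();

    let _builder = client.create_transform_job().data_capture_config(capture);
}
```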

pub fn transform_resources(self, input: TransformResources) -> Self

Describes the resources, including ML instance types and ML instance count, to use for the transform job.

pub fn set_transform_resources(self, input: Option<TransformResources>) -> Self

Describes the resources, including ML instance types and ML instance count, to use for the transform job.

pub fn data_processing(self, input: DataProcessing) -> Self

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.

pub fn set_data_processing(self, input: Option<DataProcessing>) -> Self

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
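
A sketch of joining predictions back to their input records; the JSONPath filters are illustrative and the `model` module path follows the pre-1.0 layout.

```rust
use aws_sdk_sagemaker::model::{DataProcessing, JoinSource};
use aws_sdk_sagemaker::Client;

fn with_data_processing(client: &Client) {
    let processing = DataProcessing::builder()
        .input_filter("$.features")                 // send only the features to the model
        .join_source(JoinSource::Input)             // join each prediction with its input record
        .output_filter("$['id','SageMakerOutput']") // keep the record id plus the prediction
        .build();

    let _builder = client.create_transform_job().data_processing(processing);
}
```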

pub fn tags(self, input: Tag) -> Self

Appends an item to Tags.

To override the contents of this collection use set_tags.

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.

pub fn set_tags(self, input: Option<Vec<Tag>>) -> Self

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the Amazon Web Services Billing and Cost Management User Guide.
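
A sketch of appending tags one at a time; keys and values are placeholders.

```rust
use aws_sdk_sagemaker::model::Tag;
use aws_sdk_sagemaker::Client;

fn with_tags(client: &Client) {
    let _builder = client
        .create_transform_job()
        .tags(Tag::builder().key("project").value("churn-scoring").build())
        .tags(Tag::builder().key("cost-center").value("ml-platform").build());
}
```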

pub fn experiment_config(self, input: ExperimentConfig) -> Self

Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:

  • CreateProcessingJob

  • CreateTrainingJob

  • CreateTransformJob

pub fn set_experiment_config(self, input: Option<ExperimentConfig>) -> Self

Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs:

  • CreateProcessingJob

  • CreateTrainingJob

  • CreateTransformJob
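
A sketch of attaching the job to an experiment as a trial component; all names are placeholders and the `model` module path follows the pre-1.0 layout.

```rust
use aws_sdk_sagemaker::model::ExperimentConfig;
use aws_sdk_sagemaker::Client;

fn with_experiment_config(client: &Client) {
    let experiment = ExperimentConfig::builder()
        .experiment_name("churn-experiment")
        .trial_name("batch-transform-trial")
        .trial_component_display_name("transform-run-1")
        .build();

    let _builder = client.create_transform_job().experiment_config(experiment);
}
```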

Trait Implementations

impl Clone for CreateTransformJobFluentBuilder

fn clone(&self) -> CreateTransformJobFluentBuilder

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for CreateTransformJobFluentBuilder

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T where T: ?Sized

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T where U: From<T>

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> Same<T> for T

type Output = T

Should always be Self.

impl<T> ToOwned for T where T: Clone

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T where U: Into<T>

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T where U: TryFrom<T>

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.