Struct rusoto_sagemaker::CreateTransformJobRequest

pub struct CreateTransformJobRequest {
    pub batch_strategy: Option<String>,
    pub data_processing: Option<DataProcessing>,
    pub environment: Option<HashMap<String, String>>,
    pub experiment_config: Option<ExperimentConfig>,
    pub max_concurrent_transforms: Option<i64>,
    pub max_payload_in_mb: Option<i64>,
    pub model_client_config: Option<ModelClientConfig>,
    pub model_name: String,
    pub tags: Option<Vec<Tag>>,
    pub transform_input: TransformInput,
    pub transform_job_name: String,
    pub transform_output: TransformOutput,
    pub transform_resources: TransformResources,
}

Fields

batch_strategy: Option<String>

Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data on which inference can be made. For example, a single line in a CSV file is a record.

To enable the batch strategy, you must set the SplitType property to Line, RecordIO, or TFRecord.

To use only one record when making an HTTP invocation request to a container, set BatchStrategy to SingleRecord and SplitType to Line.

To fit as many records in a mini-batch as can fit within the MaxPayloadInMB limit, set BatchStrategy to MultiRecord and SplitType to Line.
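
A minimal sketch of pairing these two settings, assuming the sibling TransformInput shape exposes a split_type: Option<String> field and derives Default, as rusoto's generated request shapes typically do:

// Sketch: send as many line-split records per request as fit in MaxPayloadInMB.
// Assumes TransformInput { split_type, .. } and a Default impl; verify against
// your rusoto_sagemaker version.
let request = CreateTransformJobRequest {
    batch_strategy: Some("MultiRecord".to_string()),
    transform_input: TransformInput {
        split_type: Some("Line".to_string()),
        ..Default::default()
    },
    ..Default::default()
};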

data_processing: Option<DataProcessing>

The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
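
A minimal sketch of such a filter-and-join configuration, assuming DataProcessing exposes input_filter, join_source, and output_filter as Option<String> (the snake_case forms of the API's InputFilter, JoinSource, and OutputFilter); the JSONPath expressions below are illustrative placeholders:

// Sketch: run inference on only the "features" attribute, join each input
// record back onto its prediction, and keep just the id and the model output.
let data_processing = DataProcessing {
    input_filter: Some("$.features".to_string()),
    join_source: Some("Input".to_string()),
    output_filter: Some("$['id','SageMakerOutput']".to_string()),
};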

environment: Option<HashMap<String, String>>

The environment variables to set in the Docker container. Up to 16 key-value entries in the map are supported.
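
A short sketch of populating this field; the variable names are hypothetical:

// Sketch: pass container environment variables (at most 16 entries).
use std::collections::HashMap;

let mut environment = HashMap::new();
environment.insert("LOG_LEVEL".to_string(), "INFO".to_string());
environment.insert("BATCH_MODE".to_string(), "true".to_string());

let request = CreateTransformJobRequest {
    environment: Some(environment),
    ..Default::default()
};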

experiment_config: Option<ExperimentConfig>

max_concurrent_transforms: Option<i64>

The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, Amazon SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For more information on execution-parameters, see How Containers Serve Requests. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.

max_payload_in_mb: Option<i64>

The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than, or equal to, the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB.

For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, Amazon SageMaker built-in algorithms do not support HTTP chunked encoding.
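
The estimate described above amounts to a small back-of-envelope calculation; the dataset size and record count below are hypothetical inputs:

// Sketch: average record size = dataset size / record count, then pad slightly
// so every record fits within the payload limit.
let dataset_size_mb: f64 = 1_200.0;
let record_count: f64 = 400_000.0;
let avg_record_mb = dataset_size_mb / record_count;          // ~0.003 MB per record
let max_payload_in_mb = (avg_record_mb * 1.2).ceil() as i64;  // rounds up to 1 MB here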

model_client_config: Option<ModelClientConfig>

Configures the timeout and maximum number of retries for processing a transform job invocation.
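
A minimal sketch, assuming ModelClientConfig carries invocations_timeout_in_seconds and invocations_max_retries as Option<i64> (the snake_case forms of the API's InvocationsTimeoutInSeconds and InvocationsMaxRetries); the values are placeholders:

// Sketch: allow a longer per-invocation timeout and a single retry.
let model_client_config = ModelClientConfig {
    invocations_timeout_in_seconds: Some(3600),
    invocations_max_retries: Some(1),
};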

model_name: String

The name of the model that you want to use for the transform job. ModelName must be the name of an existing Amazon SageMaker model within an AWS Region in an AWS account.

tags: Option<Vec<Tag>>

(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.

transform_input: TransformInput

Describes the input source and the way the transform job consumes it.

transform_job_name: String

The name of the transform job. The name must be unique within an AWS Region in an AWS account.

transform_output: TransformOutput

Describes the results of the transform job.

transform_resources: TransformResources

Describes the resources, including ML instance types and ML instance count, to use for the transform job.
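
Putting the required fields together, a hedged end-to-end sketch of a request over CSV data in S3. The nested field names (data_source, s3_data_source, s3_data_type, s3_uri, s3_output_path, instance_type, instance_count) are assumed from rusoto's snake_case rendering of the SageMaker API shapes, and the model name, job name, S3 URIs, and instance type are placeholders:

// Sketch: minimal batch transform request; verify nested field names and
// Default impls against the rusoto_sagemaker version you use.
let request = CreateTransformJobRequest {
    model_name: "my-model".to_string(),                       // placeholder
    transform_job_name: "my-transform-job-001".to_string(),   // placeholder, unique per Region/account
    transform_input: TransformInput {
        content_type: Some("text/csv".to_string()),
        split_type: Some("Line".to_string()),
        data_source: TransformDataSource {
            s3_data_source: TransformS3DataSource {
                s3_data_type: "S3Prefix".to_string(),
                s3_uri: "s3://my-bucket/input/".to_string(),  // placeholder
                ..Default::default()
            },
        },
        ..Default::default()
    },
    transform_output: TransformOutput {
        s3_output_path: "s3://my-bucket/output/".to_string(), // placeholder
        assemble_with: Some("Line".to_string()),
        ..Default::default()
    },
    transform_resources: TransformResources {
        instance_type: "ml.m5.xlarge".to_string(),
        instance_count: 1,
        ..Default::default()
    },
    ..Default::default()
};

The populated request would then typically be passed to the crate's SageMaker::create_transform_job method on a SageMakerClient to submit the job.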

Trait Implementations

impl Clone for CreateTransformJobRequest[src]

impl Debug for CreateTransformJobRequest[src]

impl Default for CreateTransformJobRequest[src]

impl PartialEq<CreateTransformJobRequest> for CreateTransformJobRequest[src]

impl Serialize for CreateTransformJobRequest[src]

impl StructuralPartialEq for CreateTransformJobRequest[src]

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T> Instrument for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T> Same<T> for T

type Output = T

Should always be Self

impl<T> ToOwned for T where
    T: Clone
[src]

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.