Struct aws_sdk_sagemaker::model::transform_job::Builder
#[non_exhaustive]
pub struct Builder { /* fields omitted */ }
A builder for TransformJob
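A minimal usage sketch, assuming the builder conventions of this crate's generated model types (all field values are illustrative):

```rust
use aws_sdk_sagemaker::model::{TransformJob, TransformJobStatus};

// Every setter returns the builder, so calls chain; `build()` consumes the
// builder and produces the finished TransformJob value.
let job: TransformJob = TransformJob::builder()
    .transform_job_name("my-transform-job")
    .model_name("my-model")
    .transform_job_status(TransformJobStatus::Completed)
    .build();
```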
Implementations
The name of the transform job.
The Amazon Resource Name (ARN) of the transform job.
The status of the transform job.
Transform job statuses are:
- InProgress - The job is in progress.
- Completed - The job has completed.
- Failed - The transform job has failed. To see the reason for the failure, see the FailureReason field in the response to a DescribeTransformJob call.
- Stopping - The transform job is stopping.
- Stopped - The transform job has stopped.
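A sketch of inspecting the status on a built value, assuming the TransformJobStatus enum in this crate's model module (the failure reason is illustrative):

```rust
use aws_sdk_sagemaker::model::{TransformJob, TransformJobStatus};

let job = TransformJob::builder()
    .transform_job_status(TransformJobStatus::Failed)
    .failure_reason("illustrative reason")
    .build();

// On failure, the FailureReason field explains why.
if job.transform_job_status() == Some(&TransformJobStatus::Failed) {
    eprintln!("job failed: {:?}", job.failure_reason());
}
```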
If the transform job failed, the reason it failed.
The name of the model associated with the transform job.
The maximum number of parallel requests that can be sent to each instance in a transform job. If MaxConcurrentTransforms is set to 0 or left unset, SageMaker checks the optional execution-parameters to determine the settings for your chosen algorithm. If the execution-parameters endpoint is not enabled, the default value is 1. For built-in algorithms, you don't need to set a value for MaxConcurrentTransforms.
Configures the timeout and maximum number of retries for processing a transform job invocation.
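A sketch of building this configuration, assuming the ModelClientConfig model type and its generated builder (values are illustrative):

```rust
use aws_sdk_sagemaker::model::ModelClientConfig;

// A 600-second invocation timeout with up to 3 retries.
let client_config = ModelClientConfig::builder()
    .invocations_timeout_in_seconds(600)
    .invocations_max_retries(3)
    .build();
```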
The maximum allowed size of the payload, in MB. A payload is the data portion of a record (without metadata). The value in MaxPayloadInMB must be greater than or equal to the size of a single record. To estimate the size of a record in MB, divide the size of your dataset by the number of records. To ensure that the records fit within the maximum payload size, we recommend using a slightly larger value. The default value is 6 MB. For cases where the payload might be arbitrarily large and is transmitted using HTTP chunked encoding, set the value to 0. This feature works only in supported algorithms. Currently, SageMaker built-in algorithms do not support HTTP chunked encoding.
Specifies the number of records to include in a mini-batch for an HTTP inference request. A record is a single unit of input data on which inference can be made. For example, a single line in a CSV file is a record.
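A combined sketch of the batching-related setters, assuming the BatchStrategy enum in this crate (values are illustrative):

```rust
use aws_sdk_sagemaker::model::{BatchStrategy, TransformJob};

let job = TransformJob::builder()
    .max_concurrent_transforms(2)               // parallel requests per instance
    .max_payload_in_mb(6)                       // 6 MB is the documented default
    .batch_strategy(BatchStrategy::MultiRecord) // pack multiple records per request
    .build();
```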
Adds a key-value pair to environment. To override the contents of this collection, use set_environment.
The environment variables to set in the Docker container. We support up to 16 key-value entries in the map.
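A sketch of both ways to populate the map, assuming `environment` takes a key and a value while `set_environment` replaces the whole map:

```rust
use std::collections::HashMap;
use aws_sdk_sagemaker::model::TransformJob;

// Add entries one pair at a time...
let job = TransformJob::builder()
    .environment("LOG_LEVEL", "info")
    .environment("MODE", "batch")
    .build();

// ...or replace the entire collection (at most 16 entries).
let mut vars = HashMap::new();
vars.insert("LOG_LEVEL".to_string(), "debug".to_string());
let job = TransformJob::builder()
    .set_environment(Some(vars))
    .build();
```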
Describes the input source of a transform job and the way the transform job consumes it.
Describes the results of a transform job.
Describes the resources, including ML instance types and ML instance count, to use for the transform job.
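A sketch of specifying resources, assuming the TransformResources model builder and the TransformInstanceType enum (the variant name follows the SDK's naming convention for ml.m5.xlarge and is illustrative):

```rust
use aws_sdk_sagemaker::model::{TransformInstanceType, TransformJob, TransformResources};

// One ml.m5.xlarge instance for the job.
let resources = TransformResources::builder()
    .instance_type(TransformInstanceType::MlM5Xlarge)
    .instance_count(1)
    .build();

let job = TransformJob::builder()
    .transform_resources(resources)
    .build();
```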
A timestamp that shows when the transform job was created.
Indicates when the transform job starts on ML instances. You are billed for the time interval between this time and the value of TransformEndTime.
Indicates when the transform job has been completed, or has stopped or failed. You are billed for the time interval between this time and the value of TransformStartTime.
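A sketch of deriving the billable window from these two timestamps, assuming the aws_smithy_types::DateTime type used for this SDK's date-time fields (epoch values are hypothetical):

```rust
use aws_sdk_sagemaker::model::TransformJob;
use aws_smithy_types::DateTime;

// Hypothetical epoch-second timestamps ten minutes apart.
let job = TransformJob::builder()
    .transform_start_time(DateTime::from_secs(1_600_000_000))
    .transform_end_time(DateTime::from_secs(1_600_000_600))
    .build();

if let (Some(start), Some(end)) = (job.transform_start_time(), job.transform_end_time()) {
    println!("billed for {} seconds", end.secs() - start.secs());
}
```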
The Amazon Resource Name (ARN) of the labeling job that created the transform job.
The Amazon Resource Name (ARN) of the AutoML job that created the transform job.
The data structure used to specify the data to be used for inference in a batch transform job and to associate the data that is relevant to the prediction results in the output. The input filter provided allows you to exclude input data that is not needed for inference in a batch transform job. The output filter provided allows you to include input data relevant to interpreting the predictions in the output from the job. For more information, see Associate Prediction Results with their Corresponding Input Records.
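A sketch of an input/output filter pair, assuming the DataProcessing model builder (the JSONPath expressions are illustrative):

```rust
use aws_sdk_sagemaker::model::{DataProcessing, JoinSource, TransformJob};

// Send only the `features` field to the model, and keep the record id
// alongside the prediction in the job output.
let data_processing = DataProcessing::builder()
    .input_filter("$.features")
    .output_filter("$['id','SageMakerOutput']")
    .join_source(JoinSource::Input)
    .build();

let job = TransformJob::builder()
    .data_processing(data_processing)
    .build();
```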
Associates a SageMaker job as a trial component with an experiment and trial. Specified when you call the following APIs: CreateProcessingJob, CreateTrainingJob, and CreateTransformJob.
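A sketch of the association, assuming the ExperimentConfig model builder (names are illustrative):

```rust
use aws_sdk_sagemaker::model::{ExperimentConfig, TransformJob};

let experiment_config = ExperimentConfig::builder()
    .experiment_name("my-experiment")
    .trial_name("trial-1")
    .trial_component_display_name("batch-transform")
    .build();

let job = TransformJob::builder()
    .experiment_config(experiment_config)
    .build();
```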
Appends an item to tags. To override the contents of this collection, use set_tags.
A list of tags associated with the transform job.
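A sketch of appending a tag, assuming the Tag model builder in this crate (key and value are illustrative):

```rust
use aws_sdk_sagemaker::model::{Tag, TransformJob};

// `tags` appends one Tag per call; `set_tags` replaces the whole list.
let job = TransformJob::builder()
    .tags(Tag::builder().key("team").value("ml-platform").build())
    .build();
```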
Consumes the builder and constructs a TransformJob
Trait Implementations
Auto Trait Implementations
impl RefUnwindSafe for Builder
impl Send for Builder
impl Sync for Builder
impl Unpin for Builder
impl UnwindSafe for Builder
Blanket Implementations
Mutably borrows from an owned value.
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.