Struct aws_sdk_glue::client::fluent_builders::StartJobRun
pub struct StartJobRun { /* private fields */ }
Fluent builder constructing a request to StartJobRun.
Starts a job run using a job definition.
Implementations
impl StartJobRun
pub async fn customize(
    self
) -> Result<CustomizableOperation<StartJobRun, AwsResponseRetryClassifier>, SdkError<StartJobRunError>>
Consume this builder, creating a customizable operation that can be modified before being sent. The operation’s inner http::Request can be modified as well.
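For example, a minimal sketch of customizing the request before it is sent. It assumes this SDK generation's CustomizableOperation exposes a mutate_request helper for the inner http::Request, and uses a hypothetical x-my-trace-id header and placeholder job name:

use aws_sdk_glue::Client;

async fn start_with_custom_header(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    let output = client
        .start_job_run()
        .job_name("my-glue-job") // placeholder job definition name
        .customize()
        .await?
        // Assumption: mutate_request gives mutable access to the inner http::Request.
        .mutate_request(|req| {
            req.headers_mut().insert(
                "x-my-trace-id", // hypothetical header
                http::HeaderValue::from_static("example-trace"),
            );
        })
        .send()
        .await?;
    println!("started run {:?}", output.job_run_id());
    Ok(())
}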
pub async fn send(self) -> Result<StartJobRunOutput, SdkError<StartJobRunError>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
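A minimal end-to-end sketch, assuming a Tokio runtime, the aws-config crate, and a placeholder job definition named my-glue-job:

use aws_sdk_glue::Client;

#[tokio::main]
async fn main() -> Result<(), aws_sdk_glue::Error> {
    // Load credentials and region from the environment.
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    let output = client
        .start_job_run()
        .job_name("my-glue-job") // placeholder job definition name
        .send()
        .await?;

    // The service assigns an ID to the new run.
    println!("job run id: {:?}", output.job_run_id());
    Ok(())
}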
pub fn job_name(self, input: impl Into<String>) -> Self
The name of the job definition to use.
pub fn set_job_name(self, input: Option<String>) -> Self
The name of the job definition to use.
pub fn job_run_id(self, input: impl Into<String>) -> Self
The ID of a previous JobRun to retry.
pub fn set_job_run_id(self, input: Option<String>) -> Self
The ID of a previous JobRun to retry.
pub fn arguments(self, k: impl Into<String>, v: impl Into<String>) -> Self
Adds a key-value pair to Arguments.
To override the contents of this collection use set_arguments.
The job arguments specifically for this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
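A short sketch of adding run arguments pair by pair; the argument names and S3 path below are placeholders:

use aws_sdk_glue::Client;

async fn run_with_arguments(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    client
        .start_job_run()
        .job_name("my-glue-job")
        .arguments("--input_path", "s3://my-bucket/input/") // read by the job script
        .arguments("--job-bookmark-option", "job-bookmark-enable") // a Glue special parameter
        .send()
        .await?;
    Ok(())
}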
pub fn set_arguments(self, input: Option<HashMap<String, String>>) -> Self
The job arguments specifically for this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
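A sketch of replacing the entire argument map in one call with set_arguments; keys and paths are placeholders:

use std::collections::HashMap;

use aws_sdk_glue::Client;

async fn run_with_argument_map(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    let mut args = HashMap::new();
    args.insert("--input_path".to_string(), "s3://my-bucket/input/".to_string());
    args.insert("--output_path".to_string(), "s3://my-bucket/output/".to_string());

    client
        .start_job_run()
        .job_name("my-glue-job")
        .set_arguments(Some(args)) // replaces any pairs added with arguments()
        .send()
        .await?;
    Ok(())
}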
pub fn allocated_capacity(self, input: i32) -> Self
👎Deprecated: This property is deprecated, use MaxCapacity instead.
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
pub fn set_allocated_capacity(self, input: Option<i32>) -> Self
👎Deprecated: This property is deprecated, use MaxCapacity instead.
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
pub fn timeout(self, input: i32) -> Self
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
pub fn set_timeout(self, input: Option<i32>) -> Self
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
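A sketch of overriding the parent job's timeout for a single run; 60 minutes is an arbitrary example value:

use aws_sdk_glue::Client;

async fn run_with_timeout(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    client
        .start_job_run()
        .job_name("my-glue-job")
        .timeout(60) // minutes; the run is terminated with TIMEOUT status after this
        .send()
        .await?;
    Ok(())
}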
pub fn max_capacity(self, input: f64) -> Self
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, or an Apache Spark ETL job:
- When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
- When you specify an Apache Spark ETL job (JobCommand.Name="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
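A sketch illustrating the two allocation rules above, using placeholder job names:

use aws_sdk_glue::Client;

async fn run_with_capacity(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    // Apache Spark ETL job (glueetl): whole DPUs only, minimum of 2.
    client
        .start_job_run()
        .job_name("my-spark-etl-job")
        .max_capacity(2.0)
        .send()
        .await?;

    // Python shell job (pythonshell): either 0.0625 or 1 DPU.
    client
        .start_job_run()
        .job_name("my-python-shell-job")
        .max_capacity(0.0625)
        .send()
        .await?;
    Ok(())
}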
pub fn set_max_capacity(self, input: Option<f64>) -> Self
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, or an Apache Spark ETL job:
- When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
- When you specify an Apache Spark ETL job (JobCommand.Name="glueetl"), you can allocate a minimum of 2 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
pub fn security_configuration(self, input: impl Into<String>) -> Self
The name of the SecurityConfiguration structure to be used with this job run.
pub fn set_security_configuration(self, input: Option<String>) -> Self
The name of the SecurityConfiguration structure to be used with this job run.
pub fn notification_property(self, input: NotificationProperty) -> Self
Specifies configuration properties of a job run notification.
pub fn set_notification_property(
    self,
    input: Option<NotificationProperty>
) -> Self
Specifies configuration properties of a job run notification.
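A sketch, assuming the NotificationProperty model builder and its notify_delay_after setting (minutes before a delay notification is sent):

use aws_sdk_glue::model::NotificationProperty;
use aws_sdk_glue::Client;

async fn run_with_notification(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    // Notify if the run is delayed by more than 10 minutes (assumed field).
    let notify = NotificationProperty::builder().notify_delay_after(10).build();

    client
        .start_job_run()
        .job_name("my-glue-job")
        .notification_property(notify)
        .send()
        .await?;
    Ok(())
}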
pub fn worker_type(self, input: WorkerType) -> Self
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
- For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
- For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
- For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
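A sketch of pairing a worker type with a worker count instead of MaxCapacity; WorkerType::G1X is assumed to be the enum variant corresponding to G.1X in this SDK version:

use aws_sdk_glue::model::WorkerType;
use aws_sdk_glue::Client;

async fn run_with_workers(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    client
        .start_job_run()
        .job_name("my-spark-etl-job")
        .worker_type(WorkerType::G1X) // assumed variant name for G.1X
        .number_of_workers(10)        // do not also set max_capacity
        .send()
        .await?;
    Ok(())
}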
pub fn set_worker_type(self, input: Option<WorkerType>) -> Self
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, G.2X, or G.025X.
- For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
- For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
- For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
pub fn number_of_workers(self, input: i32) -> Self
The number of workers of a defined workerType that are allocated when a job runs.
pub fn set_number_of_workers(self, input: Option<i32>) -> Self
The number of workers of a defined workerType that are allocated when a job runs.
pub fn execution_class(self, input: ExecutionClass) -> Self
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
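A sketch of opting a Glue 3.0+ glueetl job into the flexible execution class; ExecutionClass::Flex is assumed to be the enum variant corresponding to FLEX:

use aws_sdk_glue::model::ExecutionClass;
use aws_sdk_glue::Client;

async fn run_flex(client: &Client) -> Result<(), aws_sdk_glue::Error> {
    client
        .start_job_run()
        .job_name("my-spark-etl-job")
        .execution_class(ExecutionClass::Flex) // assumed variant name for FLEX
        .send()
        .await?;
    Ok(())
}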
pub fn set_execution_class(self, input: Option<ExecutionClass>) -> Self
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
Trait Implementations
impl Clone for StartJobRun
fn clone(&self) -> StartJobRun
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.