Struct aws_sdk_glue::operation::create_job::CreateJobInput
#[non_exhaustive]
pub struct CreateJobInput {
pub name: Option<String>,
pub job_mode: Option<JobMode>,
pub description: Option<String>,
pub log_uri: Option<String>,
pub role: Option<String>,
pub execution_property: Option<ExecutionProperty>,
pub command: Option<JobCommand>,
pub default_arguments: Option<HashMap<String, String>>,
pub non_overridable_arguments: Option<HashMap<String, String>>,
pub connections: Option<ConnectionsList>,
pub max_retries: Option<i32>,
pub allocated_capacity: Option<i32>,
pub timeout: Option<i32>,
pub max_capacity: Option<f64>,
pub security_configuration: Option<String>,
pub tags: Option<HashMap<String, String>>,
pub notification_property: Option<NotificationProperty>,
pub glue_version: Option<String>,
pub number_of_workers: Option<i32>,
pub worker_type: Option<WorkerType>,
pub code_gen_configuration_nodes: Option<HashMap<String, CodeGenConfigurationNode>>,
pub execution_class: Option<ExecutionClass>,
pub source_control_details: Option<SourceControlDetails>,
pub maintenance_window: Option<String>,
}
Fields (Non-exhaustive)
This struct is marked as non-exhaustive. Non-exhaustive structs could have additional fields added in future versions. Therefore, it cannot be constructed in external crates using the traditional Struct { .. } syntax; it cannot be matched against without a wildcard ..; and struct update syntax will not work.
name: Option<String>
The name you assign to this job definition. It must be unique in your account.
job_mode: Option<JobMode>
A mode that describes how a job was created. Valid values are:
- SCRIPT - The job was created using the Glue Studio script editor.
- VISUAL - The job was created using the Glue Studio visual editor.
- NOTEBOOK - The job was created using an interactive sessions notebook.
When the JobMode field is missing or null, SCRIPT is assigned as the default value.
description: Option<String>
Description of the job being defined.
log_uri: Option<String>
This field is reserved for future use.
role: Option<String>
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
execution_property: Option<ExecutionProperty>
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
command: Option<JobCommand>
The JobCommand that runs this job.
default_arguments: Option<HashMap<String, String>>
The default arguments for every run of this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
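A minimal sketch of what this map can look like. The argument names are illustrative: --TempDir is one of Glue's documented special parameters, while --source_table stands in for a parameter your own script reads. The map would typically be handed to the generated builder via its set_default_arguments setter (an assumption about the builder API; see builder() below).

use std::collections::HashMap;

// Builds an example default-arguments map; both keys and values are strings.
fn example_default_arguments() -> HashMap<String, String> {
    let mut args = HashMap::new();
    // Consumed by Glue itself (a documented special parameter).
    args.insert("--TempDir".to_string(), "s3://my-bucket/glue-temp/".to_string());
    // Consumed by your own job script (hypothetical argument name).
    args.insert("--source_table".to_string(), "sales_raw".to_string());
    args
}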
non_overridable_arguments: Option<HashMap<String, String>>
Arguments for this job that are not overridden when providing job arguments in a job run, specified as name-value pairs.
connections: Option<ConnectionsList>
The connections used for this job.
max_retries: Option<i32>
The maximum number of times to retry this job if it fails.
allocated_capacity: Option<i32>
This parameter is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this Job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
timeout: Option<i32>
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours) for batch jobs.
Streaming jobs must have timeout values less than 7 days or 10,080 minutes. When the value is left blank, the job will be restarted after 7 days if you have not set up a maintenance window. If you have set up a maintenance window, the job will be restarted during the maintenance window after 7 days.
max_capacity: Option<f64>
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
- When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
- When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or an Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
security_configuration: Option<String>
The name of the SecurityConfiguration structure to be used with this job.
tags: Option<HashMap<String, String>>
The tags to use with this job. You may use tags to limit access to the job. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
notification_property: Option<NotificationProperty>
Specifies configuration properties of a job notification.
glue_version: Option<String>
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python, and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
number_of_workers: Option<i32>
The number of workers of a defined workerType that are allocated when a job runs.
worker_type: Option<WorkerType>
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128 GB disk (approximately 77 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256 GB disk (approximately 235 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
- For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512 GB disk (approximately 487 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for low-volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
- For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120 GB free), and provides up to 8 Ray workers based on the autoscaler.
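A minimal sketch of pairing this field with number_of_workers on the builder. The module paths and the WorkerType::G1X variant name are assumptions based on the SDK's usual code generation for the G.1X value; check the variants exported by your crate version.

use aws_sdk_glue::operation::create_job::builders::CreateJobInputBuilder;
use aws_sdk_glue::types::WorkerType;

// Configures a small Spark worker fleet. Do not also set max_capacity when
// worker_type and number_of_workers are provided.
fn with_spark_workers(builder: CreateJobInputBuilder) -> CreateJobInputBuilder {
    builder
        .worker_type(WorkerType::G1X)
        .number_of_workers(10)
}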
code_gen_configuration_nodes: Option<HashMap<String, CodeGenConfigurationNode>>
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation are based.
execution_class: Option<ExecutionClass>
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
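As a sketch, and assuming the generated enum exposes a Flex variant for the FLEX value, a flexible execution class could be requested on the builder like this (only meaningful for glueetl jobs on Glue 3.0 or later):

use aws_sdk_glue::operation::create_job::builders::CreateJobInputBuilder;
use aws_sdk_glue::types::ExecutionClass;

// Opts a time-insensitive Spark ETL job into the flexible execution class.
fn with_flex_execution(builder: CreateJobInputBuilder) -> CreateJobInputBuilder {
    builder.execution_class(ExecutionClass::Flex)
}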
source_control_details: Option<SourceControlDetails>
The details for a source control configuration for a job, allowing synchronization of job artifacts to or from a remote repository.
maintenance_window: Option<String>
This field specifies a day of the week and hour for a maintenance window for streaming jobs. Glue periodically performs maintenance activities. During these maintenance windows, Glue will need to restart your streaming jobs.
Glue will restart the job within 3 hours of the specified maintenance window. For instance, if you set up the maintenance window for Monday at 10:00AM GMT, your jobs will be restarted between 10:00AM GMT and 1:00PM GMT.
Implementations
impl CreateJobInput
pub fn name(&self) -> Option<&str>
The name you assign to this job definition. It must be unique in your account.
pub fn job_mode(&self) -> Option<&JobMode>
A mode that describes how a job was created. Valid values are:
- SCRIPT - The job was created using the Glue Studio script editor.
- VISUAL - The job was created using the Glue Studio visual editor.
- NOTEBOOK - The job was created using an interactive sessions notebook.
When the JobMode field is missing or null, SCRIPT is assigned as the default value.
pub fn description(&self) -> Option<&str>
Description of the job being defined.
pub fn role(&self) -> Option<&str>
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
pub fn execution_property(&self) -> Option<&ExecutionProperty>
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
pub fn command(&self) -> Option<&JobCommand>
The JobCommand that runs this job.
pub fn default_arguments(&self) -> Option<&HashMap<String, String>>
The default arguments for every run of this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
pub fn non_overridable_arguments(&self) -> Option<&HashMap<String, String>>
Arguments for this job that are not overridden when providing job arguments in a job run, specified as name-value pairs.
pub fn connections(&self) -> Option<&ConnectionsList>
The connections used for this job.
pub fn max_retries(&self) -> Option<i32>
The maximum number of times to retry this job if it fails.
pub fn allocated_capacity(&self) -> Option<i32>
👎Deprecated: This property is deprecated, use MaxCapacity instead.
This parameter is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this Job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
pub fn timeout(&self) -> Option<i32>
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours) for batch jobs.
Streaming jobs must have timeout values less than 7 days or 10,080 minutes. When the value is left blank, the job will be restarted after 7 days if you have not set up a maintenance window. If you have set up a maintenance window, the job will be restarted during the maintenance window after 7 days.
pub fn max_capacity(&self) -> Option<f64>
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
- When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
- When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or an Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
pub fn security_configuration(&self) -> Option<&str>
The name of the SecurityConfiguration structure to be used with this job.
pub fn tags(&self) -> Option<&HashMap<String, String>>
The tags to use with this job. You may use tags to limit access to the job. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
pub fn notification_property(&self) -> Option<&NotificationProperty>
Specifies configuration properties of a job notification.
pub fn glue_version(&self) -> Option<&str>
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray, Python, and additional libraries available in your Ray job are determined by the Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
pub fn number_of_workers(&self) -> Option<i32>
The number of workers of a defined workerType that are allocated when a job runs.
pub fn worker_type(&self) -> Option<&WorkerType>
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
- For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128 GB disk (approximately 77 GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
- For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256 GB disk (approximately 235 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
- For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512 GB disk (approximately 487 GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
- For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84 GB disk (approximately 34 GB free), and provides 1 executor per worker. We recommend this worker type for low-volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
- For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120 GB free), and provides up to 8 Ray workers based on the autoscaler.
pub fn code_gen_configuration_nodes(&self) -> Option<&HashMap<String, CodeGenConfigurationNode>>
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation are based.
pub fn execution_class(&self) -> Option<&ExecutionClass>
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
pub fn source_control_details(&self) -> Option<&SourceControlDetails>
The details for a source control configuration for a job, allowing synchronization of job artifacts to or from a remote repository.
pub fn maintenance_window(&self) -> Option<&str>
This field specifies a day of the week and hour for a maintenance window for streaming jobs. Glue periodically performs maintenance activities. During these maintenance windows, Glue will need to restart your streaming jobs.
Glue will restart the job within 3 hours of the specified maintenance window. For instance, if you set up the maintenance window for Monday at 10:00AM GMT, your jobs will be restarted between 10:00AM GMT and 1:00PM GMT.
impl CreateJobInput
pub fn builder() -> CreateJobInputBuilder
Creates a new builder-style object to manufacture CreateJobInput.
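A hedged end-to-end sketch of assembling a CreateJobInput with this builder. Setter names mirror the fields documented above; the JobCommand builder calls, the glueetl command name, the BuildError re-export path, and build() returning a Result are assumptions about the generated API in your crate version, and the job name, role ARN, and script location are placeholders.

use aws_sdk_glue::error::BuildError;
use aws_sdk_glue::operation::create_job::CreateJobInput;
use aws_sdk_glue::types::JobCommand;

fn example_create_job_input() -> Result<CreateJobInput, BuildError> {
    // Describe the script that the job runs (a Spark ETL job in this sketch).
    let command = JobCommand::builder()
        .name("glueetl")
        .script_location("s3://my-bucket/scripts/nightly_etl.py")
        .python_version("3")
        .build();

    CreateJobInput::builder()
        .name("nightly-etl")                                   // unique per account
        .role("arn:aws:iam::123456789012:role/MyGlueJobRole")  // IAM role for the job
        .command(command)
        .glue_version("4.0")
        .max_retries(1)
        .timeout(60)                                           // minutes
        .build()
}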
Trait Implementations
impl Clone for CreateJobInput
fn clone(&self) -> CreateJobInput
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for CreateJobInput
impl PartialEq for CreateJobInput
fn eq(&self, other: &CreateJobInput) -> bool
Tests for self and other values to be equal, and is used by ==.
impl StructuralPartialEq for CreateJobInput
Auto Trait Implementations
impl Freeze for CreateJobInput
impl RefUnwindSafe for CreateJobInput
impl Send for CreateJobInput
impl Sync for CreateJobInput
impl Unpin for CreateJobInput
impl UnwindSafe for CreateJobInput
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.