Struct google_dataproc1::Job
pub struct Job {
    pub status: Option<JobStatus>,
    pub spark_sql_job: Option<SparkSqlJob>,
    pub labels: Option<HashMap<String, String>>,
    pub placement: Option<JobPlacement>,
    pub reference: Option<JobReference>,
    pub hadoop_job: Option<HadoopJob>,
    pub pig_job: Option<PigJob>,
    pub driver_output_resource_uri: Option<String>,
    pub driver_control_files_uri: Option<String>,
    pub spark_job: Option<SparkJob>,
    pub yarn_applications: Option<Vec<YarnApplication>>,
    pub scheduling: Option<JobScheduling>,
    pub status_history: Option<Vec<JobStatus>>,
    pub pyspark_job: Option<PySparkJob>,
    pub hive_job: Option<HiveJob>,
}
A Cloud Dataproc job resource.
Activities
This type is used in activities, which are methods you may call on this type or in which this type is involved. The list links each activity name with information about where this type is used (as a request, as a response, or both).
- regions jobs submit projects (response)
- regions jobs get projects (response)
- regions jobs patch projects (request|response)
- regions jobs cancel projects (response)
Fields
status: Option<JobStatus>
Output-only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
spark_sql_job: Option<SparkSqlJob>
Job is a SparkSql job.
labels: Option<HashMap<String, String>>
Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
placement: Option<JobPlacement>
Required. Job information, including how, when, and where to run the job.
reference: Option<JobReference>
Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
hadoop_job: Option<HadoopJob>
Job is a Hadoop job.
pig_job: Option<PigJob>
Job is a Pig job.
driver_output_resource_uri: Option<String>
Output-only. A URI pointing to the location of the stdout of the job's driver program.
driver_control_files_uri: Option<String>
Output-only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
spark_job: Option<SparkJob>
Job is a Spark job.
yarn_applications: Option<Vec<YarnApplication>>
Output-only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
scheduling: Option<JobScheduling>
Optional. Job scheduling configuration.
status_history: Option<Vec<JobStatus>>
Output-only. The previous job status.
pyspark_job: Option<PySparkJob>
Job is a Pyspark job.
hive_job: Option<HiveJob>
Job is a Hive job.
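The label constraints documented above (keys of 1 to 63 characters conforming to RFC 1035, values that may be empty but otherwise follow the same rules, and at most 32 labels per job) can be sketched as a standalone validator. This is an illustrative sketch, not part of this crate: the function name and the simplified RFC 1035 check are assumptions, and the server performs its own validation.

```rust
use std::collections::HashMap;

/// Illustrative check of the documented label constraints. The RFC 1035
/// check here is simplified: a lowercase letter first, then lowercase
/// letters, digits, or hyphens, not ending in a hyphen, 1-63 bytes total.
fn labels_are_valid(labels: &HashMap<String, String>) -> bool {
    fn rfc1035_like(s: &str) -> bool {
        let bytes = s.as_bytes();
        (1..=63).contains(&bytes.len())
            && bytes[0].is_ascii_lowercase()
            && *bytes.last().unwrap() != b'-'
            && bytes
                .iter()
                .all(|&b| b.is_ascii_lowercase() || b.is_ascii_digit() || b == b'-')
    }
    // No more than 32 labels; keys must conform, values may be empty.
    labels.len() <= 32
        && labels
            .iter()
            .all(|(k, v)| rfc1035_like(k) && (v.is_empty() || rfc1035_like(v)))
}

fn main() {
    let mut labels = HashMap::new();
    labels.insert("env".to_string(), "prod-1".to_string());
    assert!(labels_are_valid(&labels));
    labels.insert("Env".to_string(), "prod".to_string()); // uppercase key fails
    assert!(!labels_are_valid(&labels));
}
```

Running such a check before submitting a job can surface label errors locally instead of in a rejected request.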
Trait Implementations
impl Default for Job
impl Clone for Job
fn clone(&self) -> Job
Returns a copy of the value.
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for Job
fn fmt(&self, f: &mut Formatter) -> Result
Formats the value using the given formatter.
impl RequestValue for Job
impl ResponseResult for Job
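Because every field is an Option and Job implements Default and Clone, callers typically fill only the fields they need via struct-update syntax and leave the rest as None. The sketch below demonstrates that pattern on a trimmed-down, locally defined stand-in (two fields instead of fifteen, with PartialEq added purely so the example can compare values), not the real crate type:

```rust
use std::collections::HashMap;

// Trimmed-down stand-in for google_dataproc1::Job, defined locally for
// illustration. The real struct has many more Option fields; PartialEq
// is derived here only so this example can compare values.
#[derive(Default, Clone, Debug, PartialEq)]
struct Job {
    labels: Option<HashMap<String, String>>,
    driver_output_resource_uri: Option<String>,
}

fn main() {
    // Fill only the fields you care about; the rest default to None.
    let job = Job {
        labels: Some(HashMap::from([("env".to_string(), "dev".to_string())])),
        ..Default::default()
    };
    // Clone yields an independent copy of the value.
    let copy = job.clone();
    assert_eq!(job, copy);
    assert!(job.driver_output_resource_uri.is_none());
}
```

The same `..Default::default()` pattern applies to the real Job when building a request body, since RequestValue implementations in this crate are plain data structs.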