Struct google_dataproc1::Job

pub struct Job {
    pub status: Option<JobStatus>,
    pub spark_sql_job: Option<SparkSqlJob>,
    pub labels: Option<HashMap<String, String>>,
    pub placement: Option<JobPlacement>,
    pub reference: Option<JobReference>,
    pub hadoop_job: Option<HadoopJob>,
    pub pig_job: Option<PigJob>,
    pub driver_output_resource_uri: Option<String>,
    pub driver_control_files_uri: Option<String>,
    pub spark_job: Option<SparkJob>,
    pub yarn_applications: Option<Vec<YarnApplication>>,
    pub scheduling: Option<JobScheduling>,
    pub status_history: Option<Vec<JobStatus>>,
    pub pyspark_job: Option<PySparkJob>,
    pub hive_job: Option<HiveJob>,
}

A Cloud Dataproc job resource.

Activities

This type is used in activities, which are methods you may call on this type or in which this type is involved. The list links each activity name with information about where this type is used (one of request or response).
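As a rough sketch of how such a request value might be assembled before being handed to one of those activities (the activity call itself is omitted here, since its exact method name is not shown on this page), one can start from the Default implementation and set only the fields of interest:

use std::collections::HashMap;

use google_dataproc1::Job;

fn build_job_request() -> Job {
    // Attach a couple of labels; every other field keeps its Default value (None).
    let mut labels = HashMap::new();
    labels.insert("env".to_string(), "staging".to_string());
    labels.insert("team".to_string(), "analytics".to_string());

    Job {
        labels: Some(labels),
        // In a real submission, one of the *_job fields (spark_job, pyspark_job,
        // hive_job, ...) and `placement` would also be populated.
        ..Default::default()
    }
}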

Fields

status: Option<JobStatus>

Output-only. The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.

spark_sql_job: Option<SparkSqlJob>

Job is a SparkSql job.

labels: Option<HashMap<String, String>>

Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.

placement: Option<JobPlacement>

Required. Job information, including how, when, and where to run the job.

reference: Option<JobReference>

Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.

hadoop_job: Option<HadoopJob>

Job is a Hadoop job.

pig_job: Option<PigJob>

Job is a Pig job.

driver_output_resource_uri: Option<String>

Output-only. A URI pointing to the location of the stdout of the job's driver program.

driver_control_files_uri: Option<String>

Output-only. If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.

spark_job: Option<SparkJob>

Job is a Spark job.

yarn_applications: Option<Vec<YarnApplication>>

Output-only. The collection of YARN applications spun up by this job. Beta Feature: This report is available for testing purposes only. It may be changed before final release.

scheduling: Option<JobScheduling>

Optional. Job scheduling configuration.

status_history: Option<Vec<JobStatus>>

Output-only. The previous job status.

pyspark_job: Option<PySparkJob>

Job is a Pyspark job.

hive_job: Option<HiveJob>

Job is a Hive job.
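Because Job also implements ResponseResult (see the trait list below), the output-only fields above are what callers typically inspect on a returned job. A small hypothetical helper sketching that, using only the fields declared in the struct; the Debug-printing of JobStatus assumes that type derives Debug as well, which this page does not confirm:

use google_dataproc1::Job;

// Hypothetical helper: summarize the output-only parts of a Job returned by the API.
fn summarize_job(job: &Job) {
    if let Some(uri) = &job.driver_output_resource_uri {
        println!("driver stdout: {}", uri);
    }
    if let Some(status) = &job.status {
        // Assumes JobStatus implements Debug, as generated types in this crate usually do.
        println!("current status: {:?}", status);
    }
    if let Some(history) = &job.status_history {
        println!("{} earlier status record(s)", history.len());
    }
    if let Some(apps) = &job.yarn_applications {
        println!("{} YARN application(s) reported (beta)", apps.len());
    }
}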

Trait Implementations

impl Default for Job

Returns the "default value" for a type.

impl Clone for Job

Returns a copy of the value.

Performs copy-assignment from source.

impl Debug for Job

Formats the value using the given formatter.

impl RequestValue for Job

impl ResponseResult for Job
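
Taken together, the Default, Clone, and Debug implementations above support a simple build-copy-log pattern; a minimal sketch:

use google_dataproc1::Job;

fn main() {
    // Default: every field starts out as None.
    let template = Job::default();

    // Clone: take an independent copy that can be customized per submission.
    let request = template.clone();

    // Debug: convenient for logging the request value before it is sent.
    println!("{:?}", request);
}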

Auto Trait Implementations

impl Send for Job

impl Sync for Job
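
Since Job is Send and Sync, a value can be moved into or shared across threads, for example when preparing requests off the main thread; a tiny sketch relying only on the impls listed on this page:

use std::thread;

use google_dataproc1::Job;

fn main() {
    let job = Job::default();
    // Send: the Job value can be moved into another thread.
    let handle = thread::spawn(move || format!("{:?}", job));
    println!("{}", handle.join().unwrap());
}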