Struct google_dataproc1::SparkSqlJob

pub struct SparkSqlJob {
    pub query_file_uri: Option<String>,
    pub script_variables: Option<HashMap<String, String>>,
    pub logging_config: Option<LoggingConfig>,
    pub jar_file_uris: Option<Vec<String>>,
    pub query_list: Option<QueryList>,
    pub properties: Option<HashMap<String, String>>,
}

A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries.

This type is not used in any activity; it is only used as part of another schema.

Fields

query_file_uri: Option<String>
The HCFS URI of the script that contains SQL queries.

script_variables: Option<HashMap<String, String>>
Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).

logging_config: Option<LoggingConfig>
Optional. The runtime log config for job execution.

jar_file_uris: Option<Vec<String>>
Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.

query_list: Option<QueryList>
A list of queries.

properties: Option<HashMap<String, String>>
Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
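
Because the struct derives Default, a job can be built by filling only the fields that matter and using struct update syntax for the rest. The snippet below is a minimal sketch, assuming the schema types are re-exported at the crate root and that QueryList exposes a queries: Option<Vec<String>> field; verify the exact paths against this crate's docs.

use std::collections::HashMap;
use google_dataproc1::{QueryList, SparkSqlJob};

// Variables referenced as ${name} inside the queries
// (equivalent to Spark SQL's SET name="value";).
let mut vars = HashMap::new();
vars.insert("env".to_string(), "prod".to_string());

let spark_sql_job = SparkSqlJob {
    // Inline queries; alternatively, point query_file_uri at a gs:// script.
    query_list: Some(QueryList {
        queries: Some(vec![
            "SELECT * FROM events WHERE env = '${env}'".to_string(),
        ]),
        ..Default::default()
    }),
    script_variables: Some(vars),
    // Extra jars added to the Spark CLASSPATH, e.g. custom UDFs (hypothetical URI).
    jar_file_uris: Some(vec!["gs://my-bucket/udfs.jar".to_string()]),
    ..Default::default()
};

A value like this is not submitted on its own; it is attached to the enclosing job schema (the Dataproc API's sparkSqlJob property on the Job resource) before the job is submitted.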

Trait Implementations

impl Default for SparkSqlJob

fn default() -> SparkSqlJob
Returns the "default value" for a type.

impl Clone for SparkSqlJob

fn clone(&self) -> SparkSqlJob
Returns a copy of the value.

fn clone_from(&mut self, source: &SparkSqlJob)
Performs copy-assignment from source.

impl Debug for SparkSqlJob

fn fmt(&self, f: &mut Formatter) -> Result
Formats the value using the given formatter.

impl Part for SparkSqlJob

Auto Trait Implementations

impl Send for SparkSqlJob

impl Sync for SparkSqlJob