pub struct SparkJob {
pub archive_uris: Option<Vec<String>>,
pub args: Option<Vec<String>>,
pub file_uris: Option<Vec<String>>,
pub jar_file_uris: Option<Vec<String>>,
pub logging_config: Option<LoggingConfig>,
pub main_class: Option<String>,
pub main_jar_file_uri: Option<String>,
pub properties: Option<HashMap<String, String>>,
}
A Dataproc job for running Apache Spark (https://spark.apache.org/) applications on YARN.
This type is not used in any activity, and is only used as part of another schema.
Fields

archive_uris: Option<Vec<String>>
Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.

args: Option<Vec<String>>
Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.

file_uris: Option<Vec<String>>
Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.

jar_file_uris: Option<Vec<String>>
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.

logging_config: Option<LoggingConfig>
Optional. The runtime log config for job execution.

main_class: Option<String>
The name of the driver’s main class. The jar file that contains the class must be in the default CLASSPATH or specified in SparkJob.jar_file_uris.

main_jar_file_uri: Option<String>
The HCFS URI of the jar file that contains the main class.

properties: Option<HashMap<String, String>>
Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API might be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
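
A minimal sketch of constructing a SparkJob value, assuming SparkJob is in scope from this crate. The bucket paths, property names, and argument values are hypothetical; typically either main_class or main_jar_file_uri is set, not both.

use std::collections::HashMap;

// Hypothetical example: a jar-based Spark application with one Spark property.
let mut properties = HashMap::new();
properties.insert("spark.executor.memory".to_string(), "4g".to_string());

let spark_job = SparkJob {
    // The jar containing the driver's main class.
    main_jar_file_uri: Some("gs://my-bucket/jars/word-count.jar".to_string()),
    // Set main_class instead when the jar is already on the default CLASSPATH
    // or listed in jar_file_uris.
    main_class: None,
    // Arguments passed to the driver (do not pass --conf style options here).
    args: Some(vec![
        "gs://my-bucket/input/".to_string(),
        "gs://my-bucket/output/".to_string(),
    ]),
    jar_file_uris: None,
    file_uris: None,
    archive_uris: None,
    logging_config: None,
    properties: Some(properties),
};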