pub struct ClusterConfig {
pub autoscaling_config: Option<AutoscalingConfig>,
pub auxiliary_node_groups: Option<Vec<AuxiliaryNodeGroup>>,
pub cluster_tier: Option<String>,
pub cluster_type: Option<String>,
pub config_bucket: Option<String>,
pub dataproc_metric_config: Option<DataprocMetricConfig>,
pub diagnostic_bucket: Option<String>,
pub encryption_config: Option<EncryptionConfig>,
pub endpoint_config: Option<EndpointConfig>,
pub gce_cluster_config: Option<GceClusterConfig>,
pub gke_cluster_config: Option<GkeClusterConfig>,
pub initialization_actions: Option<Vec<NodeInitializationAction>>,
pub lifecycle_config: Option<LifecycleConfig>,
pub master_config: Option<InstanceGroupConfig>,
pub metastore_config: Option<MetastoreConfig>,
pub secondary_worker_config: Option<InstanceGroupConfig>,
pub security_config: Option<SecurityConfig>,
pub software_config: Option<SoftwareConfig>,
pub temp_bucket: Option<String>,
pub worker_config: Option<InstanceGroupConfig>,
}

The cluster config.

This type is not used in any activity; it is only used as part of another schema.
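Every field is an Option, so a value can be built incrementally, setting only what you need. A minimal construction sketch, assuming the struct also implements Default (only Clone appears in the trait listing below, so treat Default as an assumption; the field value is a placeholder, not a documented one):

    // Minimal sketch: set one field, leave the rest unset.
    // Assumes ClusterConfig: Default; "example-type" is a placeholder.
    let config = ClusterConfig {
        cluster_type: Some("example-type".to_string()),
        ..Default::default()
    };
    let copy = config.clone(); // Clone is implemented (see Trait Implementations)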
Fields

autoscaling_config: Option<AutoscalingConfig>
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
auxiliary_node_groups: Option<Vec<AuxiliaryNodeGroup>>
Optional. The node group settings.

cluster_tier: Option<String>
Optional. The cluster tier.

cluster_type: Option<String>
Optional. The type of the cluster.

config_bucket: Option<String>
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://… URI to a Cloud Storage bucket.
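Note the naming rule at the end of that description: the field takes a bare bucket name, not a gs:// URI. A short sketch under the same Default assumption as above (the bucket name is a placeholder; the same rule applies to diagnostic_bucket and temp_bucket below):

    // Correct: bare bucket name.
    let cfg = ClusterConfig {
        config_bucket: Some("my-staging-bucket".to_string()),
        ..Default::default()
    };
    // Incorrect: a gs:// URI is not accepted here.
    // config_bucket: Some("gs://my-staging-bucket".to_string())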
dataproc_metric_config: Option<DataprocMetricConfig>
Optional. The config for Dataproc metrics.

diagnostic_bucket: Option<String>
Optional. A Cloud Storage bucket used to collect checkpoint diagnostic data (https://cloud.google.com/dataproc/docs/support/diagnose-clusters#checkpoint_diagnostic_data). If you do not specify a diagnostic bucket, Cloud Dataproc will use the Dataproc temp bucket to collect the checkpoint diagnostic data. This field requires a Cloud Storage bucket name, not a gs://… URI to a Cloud Storage bucket.

encryption_config: Option<EncryptionConfig>
Optional. Encryption settings for the cluster.

endpoint_config: Option<EndpointConfig>
Optional. Port/endpoint configuration for this cluster.

gce_cluster_config: Option<GceClusterConfig>
Optional. The shared Compute Engine config settings for all instances in a cluster.

gke_cluster_config: Option<GkeClusterConfig>
Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. These config settings are mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
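Because of that mutual exclusivity, a Kubernetes-based config leaves the Compute Engine fields unset. A sketch under the same Default assumption as above (GkeClusterConfig is also assumed to implement Default):

    // GKE-based cluster: gce_cluster_config, master_config, worker_config,
    // secondary_worker_config, and autoscaling_config all stay None.
    let gke_config = ClusterConfig {
        gke_cluster_config: Some(GkeClusterConfig::default()),
        ..Default::default()
    };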
initialization_actions: Option<Vec<NodeInitializationAction>>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node’s role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

    ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
    if [[ "${ROLE}" == 'Master' ]]; then
      ... master specific actions ...
    else
      ... worker specific actions ...
    fi
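A hedged Rust counterpart: attaching an initialization action that runs such a script. The executable_file field name is assumed from the underlying Dataproc REST API (executableFile), and the script path is a placeholder:

    // Sketch: run a staged script on every node after config completes.
    // Assumes NodeInitializationAction exposes `executable_file` and
    // implements Default; the gs:// path is a placeholder.
    let with_init = ClusterConfig {
        initialization_actions: Some(vec![NodeInitializationAction {
            executable_file: Some("gs://my-bucket/my-init.sh".to_string()),
            ..Default::default()
        }]),
        ..Default::default()
    };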
lifecycle_config: Option<LifecycleConfig>
Optional. Lifecycle setting for the cluster.

master_config: Option<InstanceGroupConfig>
Optional. The Compute Engine config settings for the cluster’s master instance.

metastore_config: Option<MetastoreConfig>
Optional. Metastore configuration.

secondary_worker_config: Option<InstanceGroupConfig>
Optional. The Compute Engine config settings for a cluster’s secondary worker instances.

security_config: Option<SecurityConfig>
Optional. Security settings for the cluster.

software_config: Option<SoftwareConfig>
Optional. The config settings for cluster software.

temp_bucket: Option<String>
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket (see Dataproc staging and temp buckets (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a gs://… URI to a Cloud Storage bucket.

worker_config: Option<InstanceGroupConfig>
Optional. The Compute Engine config settings for the cluster’s worker instances.
Trait Implementations

impl Clone for ClusterConfig

    fn clone(&self) -> ClusterConfig

    fn clone_from(&mut self, source: &Self)
    Performs copy-assignment from source.