Struct google_dataproc1::api::ClusterConfig

pub struct ClusterConfig {
    pub autoscaling_config: Option<AutoscalingConfig>,
    pub config_bucket: Option<String>,
    pub encryption_config: Option<EncryptionConfig>,
    pub endpoint_config: Option<EndpointConfig>,
    pub gce_cluster_config: Option<GceClusterConfig>,
    pub gke_cluster_config: Option<GkeClusterConfig>,
    pub initialization_actions: Option<Vec<NodeInitializationAction>>,
    pub lifecycle_config: Option<LifecycleConfig>,
    pub master_config: Option<InstanceGroupConfig>,
    pub metastore_config: Option<MetastoreConfig>,
    pub secondary_worker_config: Option<InstanceGroupConfig>,
    pub security_config: Option<SecurityConfig>,
    pub software_config: Option<SoftwareConfig>,
    pub temp_bucket: Option<String>,
    pub worker_config: Option<InstanceGroupConfig>,
}

The cluster config.

This type is not used in any activity, and only used as part of another schema.
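Because every field is an `Option` and the struct implements `Default`, a config is typically built by setting only the fields you need and filling the rest with struct-update syntax. A minimal sketch of that pattern, using self-contained stand-in types rather than the crate's real ones (field values are illustrative only):

```rust
// Stand-in types that mirror the shape of ClusterConfig: every field is
// optional and the struct derives Default, so you only set what you need.
#[derive(Debug, Default, Clone)]
struct SoftwareConfig {
    image_version: Option<String>,
}

#[derive(Debug, Default, Clone)]
struct ClusterConfig {
    config_bucket: Option<String>,
    temp_bucket: Option<String>,
    software_config: Option<SoftwareConfig>,
}

fn example_config() -> ClusterConfig {
    // Struct-update syntax fills every unnamed field from Default (None).
    ClusterConfig {
        config_bucket: Some("my-staging-bucket".to_string()), // bucket NAME, not a gs:// URI
        software_config: Some(SoftwareConfig {
            image_version: Some("2.0-debian10".to_string()),
        }),
        ..Default::default()
    }
}

fn main() {
    let cfg = example_config();
    // Fields left unset stay None; Dataproc then picks defaults server-side.
    assert!(cfg.temp_bucket.is_none());
    println!("{:?}", cfg);
}
```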

Fields

autoscaling_config: Option<AutoscalingConfig>

Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.

config_bucket: Option<String>

Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
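Since this field (and `temp_bucket` below) takes a bucket name rather than a URI, a caller holding a `gs://` URI has to strip the scheme and any object path first. A hedged sketch of such a helper; `bucket_name` is a hypothetical function for illustration, not part of the `google_dataproc1` crate:

```rust
/// Hypothetical helper (not part of google_dataproc1): normalize a value
/// intended for `config_bucket`/`temp_bucket`, which must be a bucket NAME.
/// Strips a leading "gs://" scheme and any trailing object path, if present.
fn bucket_name(value: &str) -> String {
    let stripped = value.strip_prefix("gs://").unwrap_or(value);
    // Keep only the bucket component, dropping any object path.
    stripped.split('/').next().unwrap_or("").to_string()
}

fn main() {
    assert_eq!(bucket_name("my-bucket"), "my-bucket");
    assert_eq!(bucket_name("gs://my-bucket/"), "my-bucket");
    assert_eq!(bucket_name("gs://my-bucket/some/path"), "my-bucket");
    println!("ok");
}
```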

encryption_config: Option<EncryptionConfig>

Optional. Encryption settings for the cluster.

endpoint_config: Option<EndpointConfig>

Optional. Port/endpoint configuration for this cluster.

gce_cluster_config: Option<GceClusterConfig>

Optional. The shared Compute Engine config settings for all instances in a cluster.

gke_cluster_config: Option<GkeClusterConfig>

Optional. BETA. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
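The mutual-exclusivity constraint above can be checked client-side before submitting a request. A hedged sketch under stand-in types (`check_gke_exclusivity` is a hypothetical pre-check, not a crate API; the service enforces the real rule server-side):

```rust
// Stand-in shapes (not the crate's real types) to illustrate the documented
// constraint: gke_cluster_config is mutually exclusive with the Compute
// Engine-based fields.
#[derive(Default)]
struct ClusterConfig {
    gke_cluster_config: Option<String>, // stand-in for GkeClusterConfig
    gce_cluster_config: Option<String>, // stand-in for GceClusterConfig
    master_config: Option<String>,
    worker_config: Option<String>,
    secondary_worker_config: Option<String>,
    autoscaling_config: Option<String>,
}

/// Hypothetical client-side pre-check mirroring the documented constraint.
fn check_gke_exclusivity(cfg: &ClusterConfig) -> Result<(), &'static str> {
    if cfg.gke_cluster_config.is_some()
        && (cfg.gce_cluster_config.is_some()
            || cfg.master_config.is_some()
            || cfg.worker_config.is_some()
            || cfg.secondary_worker_config.is_some()
            || cfg.autoscaling_config.is_some())
    {
        return Err("gke_cluster_config is mutually exclusive with Compute Engine-based fields");
    }
    Ok(())
}

fn main() {
    let gke_only = ClusterConfig {
        gke_cluster_config: Some("gke".into()),
        ..Default::default()
    };
    assert!(check_gke_exclusivity(&gke_only).is_ok());

    let mixed = ClusterConfig {
        gke_cluster_config: Some("gke".into()),
        master_config: Some("master".into()),
        ..Default::default()
    };
    assert!(check_gke_exclusivity(&mixed).is_err());
}
```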

initialization_actions: Option<Vec<NodeInitializationAction>>

Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget):

    ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
    if [[ "${ROLE}" == 'Master' ]]; then
      ... master specific actions ...
    else
      ... worker specific actions ...
    fi
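The role check from the snippet above can be factored into a function so the branch logic is testable off-cluster. A hedged sketch (the metadata endpoint only resolves on a cluster VM, so the fetch is shown in a comment and the role is passed in explicitly here):

```shell
#!/usr/bin/env bash
# Sketch: select per-role actions given the dataproc-role value.
# On a real node you would fetch the role first, e.g.:
#   ROLE=$(curl -H Metadata-Flavor:Google \
#     http://metadata/computeMetadata/v1/instance/attributes/dataproc-role)
run_role_actions() {
  local role="$1"
  if [[ "${role}" == 'Master' ]]; then
    echo "master-specific actions"
  else
    echo "worker-specific actions"
  fi
}

run_role_actions "Master"   # prints: master-specific actions
run_role_actions "Worker"   # prints: worker-specific actions
```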

lifecycle_config: Option<LifecycleConfig>

Optional. Lifecycle setting for the cluster.

master_config: Option<InstanceGroupConfig>

Optional. The Compute Engine config settings for the master instance in a cluster.

metastore_config: Option<MetastoreConfig>

Optional. Metastore configuration.

secondary_worker_config: Option<InstanceGroupConfig>

Optional. The Compute Engine config settings for additional worker instances in a cluster.

security_config: Option<SecurityConfig>

Optional. Security settings for the cluster.

software_config: Option<SoftwareConfig>

Optional. The config settings for software inside the cluster.

temp_bucket: Option<String>

Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster’s temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.

worker_config: Option<InstanceGroupConfig>

Optional. The Compute Engine config settings for worker instances in a cluster.

Trait Implementations

impl Clone for ClusterConfig

impl Debug for ClusterConfig

impl Default for ClusterConfig

impl<'de> Deserialize<'de> for ClusterConfig

impl Part for ClusterConfig

impl Serialize for ClusterConfig

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>,

impl<T> From<T> for T

impl<T> Instrument for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T> ToOwned for T where
    T: Clone

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.