Struct google_ml1::GoogleCloudMlV1__TrainingInput

pub struct GoogleCloudMlV1__TrainingInput {
    pub runtime_version: Option<String>,
    pub master_type: Option<String>,
    pub hyperparameters: Option<GoogleCloudMlV1__HyperparameterSpec>,
    pub args: Option<Vec<String>>,
    pub python_module: Option<String>,
    pub job_dir: Option<String>,
    pub worker_count: Option<String>,
    pub max_running_time: Option<String>,
    pub parameter_server_count: Option<String>,
    pub worker_type: Option<String>,
    pub parameter_server_config: Option<GoogleCloudMlV1__ReplicaConfig>,
    pub scale_tier: Option<String>,
    pub region: Option<String>,
    pub python_version: Option<String>,
    pub package_uris: Option<Vec<String>>,
    pub worker_config: Option<GoogleCloudMlV1__ReplicaConfig>,
    pub parameter_server_type: Option<String>,
    pub master_config: Option<GoogleCloudMlV1__ReplicaConfig>,
}

Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.

This type is not used directly in any activity; it is used only as part of another schema.
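
For illustration, a minimal sketch of building this type in Rust. The struct implements Default, so unset fields remain None; the bucket paths, module name, and region below are placeholder values, not defaults from the API.

use google_ml1::GoogleCloudMlV1__TrainingInput;

let input = GoogleCloudMlV1__TrainingInput {
    scale_tier: Some("BASIC".to_string()),
    region: Some("us-central1".to_string()),
    python_module: Some("trainer.task".to_string()),
    package_uris: Some(vec!["gs://my-bucket/trainer-0.1.tar.gz".to_string()]),
    job_dir: Some("gs://my-bucket/output".to_string()),
    // All remaining fields stay None.
    ..Default::default()
};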

Fields

runtime_version: Option<String>

Optional. The AI Platform runtime version to use for training. If not set, AI Platform uses the default stable version, 1.0. For more information, see the runtime version list and how to manage runtime versions.

master_type: Option<String>

Optional. Specifies the type of virtual machine to use for your training job's master worker.

The following types are supported:

  • standard: A basic machine configuration suitable for training simple models with small to moderate datasets.
  • large_model: A machine with a lot of memory, especially suited for parameter servers when your model is large (having many hidden layers or layers with very large numbers of nodes).
  • complex_model_s: A machine suitable for the master and workers of the cluster when your model requires more computation than the standard machine can handle satisfactorily.
  • complex_model_m: A machine with roughly twice the number of cores and roughly double the memory of complex_model_s.
  • complex_model_l: A machine with roughly twice the number of cores and roughly double the memory of complex_model_m.
  • standard_gpu: A machine equivalent to standard that also includes a single NVIDIA Tesla K80 GPU. See more about using GPUs to train your model.
  • complex_model_m_gpu: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla K80 GPUs.
  • complex_model_l_gpu: A machine equivalent to complex_model_l that also includes eight NVIDIA Tesla K80 GPUs.
  • standard_p100: A machine equivalent to standard that also includes a single NVIDIA Tesla P100 GPU.
  • complex_model_m_p100: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla P100 GPUs.
  • standard_v100: A machine equivalent to standard that also includes a single NVIDIA Tesla V100 GPU.
  • large_model_v100: A machine equivalent to large_model that also includes a single NVIDIA Tesla V100 GPU.
  • complex_model_m_v100: A machine equivalent to complex_model_m that also includes four NVIDIA Tesla V100 GPUs.
  • complex_model_l_v100: A machine equivalent to complex_model_l that also includes eight NVIDIA Tesla V100 GPUs.
  • cloud_tpu: A TPU VM including one Cloud TPU. See more about using TPUs to train your model.

You may also use certain Compute Engine machine types directly in this field. The following types are supported:

  • n1-standard-4
  • n1-standard-8
  • n1-standard-16
  • n1-standard-32
  • n1-standard-64
  • n1-standard-96
  • n1-highmem-2
  • n1-highmem-4
  • n1-highmem-8
  • n1-highmem-16
  • n1-highmem-32
  • n1-highmem-64
  • n1-highmem-96
  • n1-highcpu-16
  • n1-highcpu-32
  • n1-highcpu-64
  • n1-highcpu-96

See more about using Compute Engine machine types.

You must set this value when scaleTier is set to CUSTOM.
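
For example, a custom-tier job names its master machine explicitly. A sketch pairing the CUSTOM tier with one of the Compute Engine types listed above (the choice of n1-highmem-8 is arbitrary):

use google_ml1::GoogleCloudMlV1__TrainingInput;

let custom = GoogleCloudMlV1__TrainingInput {
    scale_tier: Some("CUSTOM".to_string()),
    // Required whenever scale_tier is CUSTOM.
    master_type: Some("n1-highmem-8".to_string()),
    ..Default::default()
};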

hyperparameters: Option<GoogleCloudMlV1__HyperparameterSpec>

Optional. The set of Hyperparameters to tune.

args: Option<Vec<String>>

Optional. Command line arguments to pass to the program.

python_module: Option<String>

Required. The Python module name to run after installing the packages.

job_dir: Option<String>

Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the '--job-dir' command-line argument. The benefit of specifying this field is that AI Platform validates the path for use in training.

worker_count: Option<String>

Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type.

This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type.

The default value is zero.

max_running_time: Option<String>

Optional. The maximum job running time. The default is 7 days.
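
The field is carried as a string; a value such as "86400s" (24 hours) would follow the JSON encoding of a protobuf Duration, i.e. seconds suffixed with 's'. Treat that exact format as an assumption to verify against the API. A sketch:

use google_ml1::GoogleCloudMlV1__TrainingInput;

let mut input = GoogleCloudMlV1__TrainingInput::default();
// Cap the job at 24 hours; the "s"-suffixed seconds format is assumed
// from the JSON encoding of google.protobuf.Duration.
input.max_running_time = Some("86400s".to_string());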

parameter_server_count: Option<String>

Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type.

This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type.

The default value is zero.

worker_type: Option<String>

Optional. Specifies the type of virtual machine to use for your training job's worker nodes.

The supported values are the same as those described in the entry for masterType.

This value must be consistent with the category of machine type that masterType uses. In other words, both must be AI Platform machine types or both must be Compute Engine machine types.

If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine.

This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.
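
Taken together with the count fields above, a distributed custom-tier configuration might look like the sketch below. Note that the replica counts are string-encoded, matching the Option<String> field types, and all machine types are drawn from the same (AI Platform) category:

use google_ml1::GoogleCloudMlV1__TrainingInput;

let distributed = GoogleCloudMlV1__TrainingInput {
    scale_tier: Some("CUSTOM".to_string()),
    master_type: Some("complex_model_m".to_string()),
    // Worker type must match the master's machine-type category.
    worker_type: Some("complex_model_m".to_string()),
    worker_count: Some("4".to_string()),
    parameter_server_type: Some("large_model".to_string()),
    parameter_server_count: Some("2".to_string()),
    ..Default::default()
};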

parameter_server_config: Option<GoogleCloudMlV1__ReplicaConfig>

Optional. The configuration for parameter servers.

You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training.

Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.

scale_tier: Option<String>

Required. Specifies the machine types and the number of replicas for workers and parameter servers.

region: Option<String>

Required. The Google Compute Engine region to run the training job in. See the available regions for AI Platform services.

python_version: Option<String>

Optional. The version of Python used in training. If not set, the default version is '2.7'. Python '3.5' is available when runtime_version is set to '1.4' and above. Python '2.7' works with all supported runtime versions.

package_uris: Option<Vec<String>>

Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.

worker_config: Option<GoogleCloudMlV1__ReplicaConfig>

Optional. The configuration for workers.

You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training.

Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.

parameter_server_type: Option<String>

Optional. Specifies the type of virtual machine to use for your training job's parameter server.

The supported values are the same as those described in the entry for master_type.

This value must be consistent with the category of machine type that masterType uses. In other words, both must be AI Platform machine types or both must be Compute Engine machine types.

This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.

master_config: Option<GoogleCloudMlV1__ReplicaConfig>

Optional. The configuration for your master worker.

You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training.

Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.
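
A custom-container sketch follows, assuming GoogleCloudMlV1__ReplicaConfig exposes the imageUri property as an image_uri field and also implements Default (both assumptions about the generated crate); the image path is a placeholder. Because masterConfig.imageUri and runtimeVersion are mutually exclusive, runtime_version is left unset:

use google_ml1::{GoogleCloudMlV1__ReplicaConfig, GoogleCloudMlV1__TrainingInput};

let containerized = GoogleCloudMlV1__TrainingInput {
    scale_tier: Some("CUSTOM".to_string()),
    master_type: Some("n1-standard-8".to_string()),
    master_config: Some(GoogleCloudMlV1__ReplicaConfig {
        // image_uri is assumed to be the Rust name of imageUri.
        image_uri: Some("gcr.io/my-project/my-trainer:latest".to_string()),
        ..Default::default()
    }),
    // runtime_version intentionally stays None: it conflicts with imageUri.
    ..Default::default()
};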

Trait Implementations

impl Part for GoogleCloudMlV1__TrainingInput

impl Default for GoogleCloudMlV1__TrainingInput

impl Clone for GoogleCloudMlV1__TrainingInput

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for GoogleCloudMlV1__TrainingInput

impl Serialize for GoogleCloudMlV1__TrainingInput

impl<'de> Deserialize<'de> for GoogleCloudMlV1__TrainingInput
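
Since the type implements Serialize and Deserialize, it round-trips through serde; the sketch below uses serde_json as the serializer. It assumes the generated crate renames fields to the API's camelCase spelling on the wire (e.g. scaleTier), which is worth verifying against the crate source:

use google_ml1::GoogleCloudMlV1__TrainingInput;

let input = GoogleCloudMlV1__TrainingInput {
    scale_tier: Some("BASIC".to_string()),
    region: Some("us-central1".to_string()),
    ..Default::default()
};

// Serialize to a JSON string, then parse it back.
let json = serde_json::to_string(&input).expect("serialize");
let back: GoogleCloudMlV1__TrainingInput =
    serde_json::from_str(&json).expect("deserialize");
assert_eq!(back.region.as_deref(), Some("us-central1"));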

Blanket Implementations

impl<T> ToOwned for T where
    T: Clone,

type Owned = T

The resulting type after obtaining ownership.

impl<T> From<T> for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T, U> TryFrom<U> for T where
    U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<T> BorrowMut<T> for T where
    T: ?Sized,

impl<T> Borrow<T> for T where
    T: ?Sized,

impl<T> Any for T where
    T: 'static + ?Sized,

impl<T> Typeable for T where
    T: Any,

fn get_type(&self) -> TypeId

Get the TypeId of this object.

impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>,