pub struct GoogleCloudMlV1__TrainingInput {
    pub args: Option<Vec<String>>,
    pub enable_web_access: Option<bool>,
    pub encryption_config: Option<GoogleCloudMlV1__EncryptionConfig>,
    pub evaluator_config: Option<GoogleCloudMlV1__ReplicaConfig>,
    pub evaluator_count: Option<i64>,
    pub evaluator_type: Option<String>,
    pub hyperparameters: Option<GoogleCloudMlV1__HyperparameterSpec>,
    pub job_dir: Option<String>,
    pub master_config: Option<GoogleCloudMlV1__ReplicaConfig>,
    pub master_type: Option<String>,
    pub network: Option<String>,
    pub package_uris: Option<Vec<String>>,
    pub parameter_server_config: Option<GoogleCloudMlV1__ReplicaConfig>,
    pub parameter_server_count: Option<i64>,
    pub parameter_server_type: Option<String>,
    pub python_module: Option<String>,
    pub python_version: Option<String>,
    pub region: Option<String>,
    pub runtime_version: Option<String>,
    pub scale_tier: Option<String>,
    pub scheduling: Option<GoogleCloudMlV1__Scheduling>,
    pub service_account: Option<String>,
    pub use_chief_in_tf_config: Option<bool>,
    pub worker_config: Option<GoogleCloudMlV1__ReplicaConfig>,
    pub worker_count: Option<i64>,
    pub worker_type: Option<String>,
}

Represents input parameters for a training job. When using the gcloud command to submit your training job, you can specify the input parameters as command-line arguments and/or in a YAML configuration file referenced from the --config command-line argument. For details, see the guide to submitting a training job.

This type is not used directly in any activity; it is used only as part of another schema.
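Because every field is `Option`-wrapped and the type derives `Default`, a common pattern is to set only the fields you need and fill the rest with `..Default::default()`. The sketch below uses a hypothetical local mirror of a few of the 26 fields so it is self-contained; with the real generated type the same pattern applies:

```rust
// Hypothetical subset of the generated struct, for illustration only.
// The real GoogleCloudMlV1__TrainingInput has 26 Option-wrapped fields.
#[derive(Default, Debug, Clone)]
pub struct TrainingInput {
    pub package_uris: Option<Vec<String>>,
    pub python_module: Option<String>,
    pub region: Option<String>,
    pub scale_tier: Option<String>,
}

fn main() {
    // Set only the required fields; everything else stays None and is
    // omitted from the serialized request body.
    let input = TrainingInput {
        package_uris: Some(vec!["gs://my-bucket/trainer-0.1.tar.gz".into()]),
        python_module: Some("trainer.task".into()),
        region: Some("us-central1".into()),
        scale_tier: Some("BASIC".into()),
        ..Default::default()
    };
    println!("{:?}", input.scale_tier);
}
```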

Fields

args: Option<Vec<String>>

Optional. Command-line arguments passed to the training application when it starts. If your job uses a custom container, then the arguments are passed to the container’s ENTRYPOINT command.

enable_web_access: Option<bool>

Optional. Whether you want AI Platform Training to enable interactive shell access to training containers. If set to true, you can access interactive shells at the URIs given by TrainingOutput.web_access_uris or HyperparameterOutput.web_access_uris (within TrainingOutput.trials).

encryption_config: Option<GoogleCloudMlV1__EncryptionConfig>

Optional. Options for using customer-managed encryption keys (CMEK) to protect resources created by a training job, instead of using Google’s default encryption. If this is set, then all resources created by the training job will be encrypted with the customer-managed encryption key that you specify. Learn how and when to use CMEK with AI Platform Training.

evaluator_config: Option<GoogleCloudMlV1__ReplicaConfig>

Optional. The configuration for evaluators. You should only set evaluatorConfig.acceleratorConfig if evaluatorType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set evaluatorConfig.imageUri only if you build a custom image for your evaluator. If evaluatorConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.

evaluator_count: Option<i64>

Optional. The number of evaluator replicas to use for the training job. Each replica in the cluster will be of the type specified in evaluator_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set evaluator_type. The default value is zero.

evaluator_type: Option<String>

Optional. Specifies the type of virtual machine to use for your training job’s evaluator nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and evaluatorCount is greater than zero.

hyperparameters: Option<GoogleCloudMlV1__HyperparameterSpec>

Optional. The set of Hyperparameters to tune.

job_dir: Option<String>

Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the `--job-dir` command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.

master_config: Option<GoogleCloudMlV1__ReplicaConfig>

Optional. The configuration for your master worker. You should only set masterConfig.acceleratorConfig if masterType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set masterConfig.imageUri only if you build a custom image. Only one of masterConfig.imageUri and runtimeVersion should be set. Learn more about configuring custom containers.

master_type: Option<String>

Optional. Specifies the type of virtual machine to use for your training job’s master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field; see the list of compatible Compute Engine machine types. Alternatively, you can use certain legacy machine types in this field; see the list of legacy machine types. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.

network: Option<String>

Optional. The full name of the Compute Engine network to which the Job is peered, for example projects/12345/global/networks/myVPC. The format of this field is projects/{project}/global/networks/{network}, where {project} is a project number (like 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the Job is not peered with any network. Learn about using VPC Network Peering.
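The documented projects/{project}/global/networks/{network} format can be sketched with a small helper (network_name is hypothetical, not part of the crate):

```rust
// Hypothetical helper that builds the network name in the documented
// projects/{project}/global/networks/{network} format, where the project
// component is a project number rather than a project ID.
fn network_name(project_number: u64, network: &str) -> String {
    format!("projects/{}/global/networks/{}", project_number, network)
}

fn main() {
    // Matches the example given above.
    println!("{}", network_name(12345, "myVPC"));
}
```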

package_uris: Option<Vec<String>>

Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies. The maximum number of package URIs is 100.

parameter_server_config: Option<GoogleCloudMlV1__ReplicaConfig>

Optional. The configuration for parameter servers. You should only set parameterServerConfig.acceleratorConfig if parameterServerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set parameterServerConfig.imageUri only if you build a custom image for your parameter server. If parameterServerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.

parameter_server_count: Option<i64>

Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in parameter_server_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set parameter_server_type. The default value is zero.

parameter_server_type: Option<String>

Optional. Specifies the type of virtual machine to use for your training job’s parameter server. The supported values are the same as those described in the entry for master_type. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. This value must be present when scaleTier is set to CUSTOM and parameter_server_count is greater than zero.

python_module: Option<String>

Required. The Python module name to run after installing the packages.

python_version: Option<String>

Optional. The version of Python used in training. You must either specify this field or specify masterConfig.imageUri. The following Python versions are available:

* Python ‘3.7’ is available when runtime_version is set to ‘1.15’ or later.
* Python ‘3.5’ is available when runtime_version is set to a version from ‘1.4’ to ‘1.14’.
* Python ‘2.7’ is available when runtime_version is set to ‘1.15’ or earlier.

Read more about the Python versions available for each runtime version.
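The compatibility rules above can be encoded as a small check. This is a hypothetical helper (parse and python_allowed are illustrative, not part of the crate) that compares runtime versions as (major, minor) pairs:

```rust
// Parse a runtime version string like "1.15" into a (major, minor) pair.
fn parse(v: &str) -> Option<(u32, u32)> {
    let mut parts = v.splitn(2, '.');
    Some((parts.next()?.parse().ok()?, parts.next()?.parse().ok()?))
}

// Hypothetical encoding of the documented runtime/Python compatibility rules.
fn python_allowed(python: &str, runtime: &str) -> bool {
    let rt = match parse(runtime) {
        Some(rt) => rt,
        None => return false,
    };
    match python {
        "3.7" => rt >= (1, 15),                 // 1.15 or later
        "3.5" => rt >= (1, 4) && rt <= (1, 14), // 1.4 through 1.14
        "2.7" => rt <= (1, 15),                 // 1.15 or earlier
        _ => false,
    }
}

fn main() {
    assert!(python_allowed("3.7", "1.15"));
    assert!(!python_allowed("3.5", "1.15"));
    println!("ok");
}
```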

region: Option<String>

Required. The region to run the training job in. See the available regions for AI Platform Training.

runtime_version: Option<String>

Optional. The AI Platform runtime version to use for training. You must either specify this field or specify masterConfig.imageUri. For more information, see the runtime version list and learn how to manage runtime versions.

scale_tier: Option<String>

Required. Specifies the machine types and the number of replicas to use for workers and parameter servers.
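The constraints stated under masterType, workerCount, and workerType for the CUSTOM tier can be sketched as a validation helper (validate_custom_tier is hypothetical, not part of the crate; the same pattern extends to the parameter server and evaluator fields):

```rust
// Hypothetical check of the documented CUSTOM scale-tier rules:
// masterType must be set when scaleTier is CUSTOM, and workerType must be
// set whenever workerCount is greater than zero.
fn validate_custom_tier(
    master_type: Option<&str>,
    worker_count: i64,
    worker_type: Option<&str>,
) -> Result<(), String> {
    if master_type.is_none() {
        return Err("masterType is required when scaleTier is CUSTOM".into());
    }
    if worker_count > 0 && worker_type.is_none() {
        return Err("workerType is required when workerCount > 0".into());
    }
    Ok(())
}

fn main() {
    assert!(validate_custom_tier(Some("n1-standard-4"), 2, Some("n1-standard-4")).is_ok());
    assert!(validate_custom_tier(None, 0, None).is_err());
    println!("ok");
}
```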

scheduling: Option<GoogleCloudMlV1__Scheduling>

Optional. Scheduling options for a training job.

service_account: Option<String>

Optional. The email address of a service account to use when running the training application. You must have the iam.serviceAccounts.actAs permission for the specified service account. In addition, the AI Platform Training Google-managed service account must have the roles/iam.serviceAccountAdmin role for the specified service account. Learn more about configuring a service account. If not specified, the AI Platform Training Google-managed service account is used by default.

use_chief_in_tf_config: Option<bool>

Optional. Use chief instead of master in the TF_CONFIG environment variable when training with a custom container. Defaults to false. Learn more about this field. This field has no effect for training jobs that don’t use a custom container.

worker_config: Option<GoogleCloudMlV1__ReplicaConfig>

Optional. The configuration for workers. You should only set workerConfig.acceleratorConfig if workerType is set to a Compute Engine machine type. Learn about restrictions on accelerator configurations for training. Set workerConfig.imageUri only if you build a custom image for your worker. If workerConfig.imageUri has not been set, AI Platform uses the value of masterConfig.imageUri. Learn more about configuring custom containers.

worker_count: Option<i64>

Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in worker_type. This value can only be used when scale_tier is set to CUSTOM. If you set this value, you must also set worker_type. The default value is zero.

worker_type: Option<String>

Optional. Specifies the type of virtual machine to use for your training job’s worker nodes. The supported values are the same as those described in the entry for masterType. This value must be consistent with the category of machine type that masterType uses. In other words, both must be Compute Engine machine types or both must be legacy machine types. If you use cloud_tpu for this value, see special instructions for configuring a custom TPU machine. This value must be present when scaleTier is set to CUSTOM and workerCount is greater than zero.

Trait Implementations

impl Clone for GoogleCloudMlV1__TrainingInput

fn clone(&self) -> GoogleCloudMlV1__TrainingInput
Returns a copy of the value.

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl Debug for GoogleCloudMlV1__TrainingInput

fn fmt(&self, f: &mut Formatter<'_>) -> Result
Formats the value using the given formatter.

impl Default for GoogleCloudMlV1__TrainingInput

fn default() -> GoogleCloudMlV1__TrainingInput
Returns the “default value” for a type.

impl<'de> Deserialize<'de> for GoogleCloudMlV1__TrainingInput

fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where __D: Deserializer<'de>,
Deserialize this value from the given Serde deserializer.

impl Serialize for GoogleCloudMlV1__TrainingInput

fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>
where __S: Serializer,
Serialize this value into the given Serde serializer.

impl Part for GoogleCloudMlV1__TrainingInput

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId
Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T
Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T
Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U
Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

type Owned = T
The resulting type after obtaining ownership.

fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible
The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.

impl<T> DeserializeOwned for T
where T: for<'de> Deserialize<'de>,