Enum gcp_client::google::cloud::ml::v1::training_input::ScaleTier

#[repr(i32)]
pub enum ScaleTier {
    Basic,
    Standard1,
    Premium1,
    BasicGpu,
    Custom,
}

A scale tier is an abstract representation of the resources Cloud ML will allocate to a training job. When selecting a scale tier for your training job, consider the size of your training dataset and the complexity of your model.

As the tiers increase, virtual machines are added to handle your job, and the individual machines in the cluster generally have more memory and greater processing power than they do at lower tiers. The number of training units charged per hour of processing also increases as tiers get more advanced. Refer to the pricing guide for details.

Note that in addition to incurring costs, your use of training resources is constrained by the quota policy.

Variants

Basic

A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.

Standard1

Many workers and a few parameter servers.

Premium1

A large number of workers with many parameter servers.

BasicGpu

A single worker instance with a GPU.

Custom

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines:

  • You must set TrainingInput.masterType to specify the type of machine to use for your master node. This is the only required setting.

  • You may set TrainingInput.workerCount to specify the number of workers to use. If you specify one or more workers, you must also set TrainingInput.workerType to specify the type of machine to use for your worker nodes.

  • You may set TrainingInput.parameterServerCount to specify the number of parameter servers to use. If you specify one or more parameter servers, you must also set TrainingInput.parameterServerType to specify the type of machine to use for your parameter servers.

Note that all of your workers must use the same machine type, which can differ from your parameter server type and master type. Likewise, all of your parameter servers must use the same machine type, which can differ from your worker type and master type.
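Putting the guidelines above together, a CUSTOM-tier job might be configured with a `TrainingInput` JSON fragment like the following sketch. The machine type names (`complex_model_m`, `large_model`) are illustrative assumptions; consult the machine-type documentation for the values available to you. Note that the count fields are int64 values, which the proto3 JSON mapping serializes as strings.

```json
{
  "scaleTier": "CUSTOM",
  "masterType": "complex_model_m",
  "workerType": "complex_model_m",
  "workerCount": "9",
  "parameterServerType": "large_model",
  "parameterServerCount": "3"
}
```

All workers share one machine type (`workerType`), and all parameter servers share another (`parameterServerType`), as the note above requires; only `masterType` is mandatory.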

Implementations

impl ScaleTier[src]

pub fn is_valid(value: i32) -> bool[src]

Returns true if value is a variant of ScaleTier.

pub fn from_i32(value: i32) -> Option<ScaleTier>[src]

Converts an i32 to a ScaleTier, or None if value is not a valid variant.
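A minimal sketch of how these two helpers behave, using a stand-in enum rather than the generated one (the discriminant values below are illustrative; the real values come from the `google.cloud.ml.v1` proto definition):

```rust
// Stand-in for the prost-generated enum; the discriminants here are
// illustrative, not necessarily those of the .proto definition.
#[repr(i32)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum ScaleTier {
    Basic = 0,
    Standard1 = 1,
    Premium1 = 2,
    BasicGpu = 3,
    Custom = 4,
}

impl ScaleTier {
    /// Returns true if `value` maps to a variant of `ScaleTier`.
    pub fn is_valid(value: i32) -> bool {
        Self::from_i32(value).is_some()
    }

    /// Converts an `i32` to a `ScaleTier`, or `None` if `value` is not a
    /// valid variant.
    pub fn from_i32(value: i32) -> Option<ScaleTier> {
        match value {
            0 => Some(ScaleTier::Basic),
            1 => Some(ScaleTier::Standard1),
            2 => Some(ScaleTier::Premium1),
            3 => Some(ScaleTier::BasicGpu),
            4 => Some(ScaleTier::Custom),
            _ => None,
        }
    }
}

fn main() {
    // Round-trip: enum -> i32 (the generated From<ScaleTier> impl performs
    // the same cast) -> enum.
    let raw = ScaleTier::BasicGpu as i32;
    assert!(ScaleTier::is_valid(raw));
    assert_eq!(ScaleTier::from_i32(raw), Some(ScaleTier::BasicGpu));
    // Out-of-range values are rejected rather than panicking.
    assert_eq!(ScaleTier::from_i32(99), None);
    println!("ok");
}
```

This decode-to-`Option` shape is how prost-generated enums guard against unknown wire values: an `i32` arriving off the wire is checked with `from_i32` before being treated as a variant.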

Trait Implementations

impl Clone for ScaleTier[src]

impl Copy for ScaleTier[src]

impl Debug for ScaleTier[src]

impl Default for ScaleTier[src]

impl Eq for ScaleTier[src]

impl From<ScaleTier> for i32[src]

impl Hash for ScaleTier[src]

impl Ord for ScaleTier[src]

impl PartialEq<ScaleTier> for ScaleTier[src]

impl PartialOrd<ScaleTier> for ScaleTier[src]

impl StructuralEq for ScaleTier[src]

impl StructuralPartialEq for ScaleTier[src]

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<Q, K> Equivalent<K> for Q where
    K: Borrow<Q> + ?Sized,
    Q: Eq + ?Sized
[src]

impl<T> From<T> for T[src]

impl<T> Instrument for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T> IntoRequest<T> for T[src]

impl<T> ToOwned for T where
    T: Clone
[src]

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>, 

impl<T> WithSubscriber for T[src]