pub struct GoogleCloudMlV1__Version {
    pub accelerator_config: Option<GoogleCloudMlV1__AcceleratorConfig>,
    pub auto_scaling: Option<GoogleCloudMlV1__AutoScaling>,
    pub container: Option<GoogleCloudMlV1__ContainerSpec>,
    pub create_time: Option<DateTime<Utc>>,
    pub deployment_uri: Option<String>,
    pub description: Option<String>,
    pub error_message: Option<String>,
    pub etag: Option<Vec<u8>>,
    pub explanation_config: Option<GoogleCloudMlV1__ExplanationConfig>,
    pub framework: Option<String>,
    pub is_default: Option<bool>,
    pub labels: Option<HashMap<String, String>>,
    pub last_migration_model_id: Option<String>,
    pub last_migration_time: Option<DateTime<Utc>>,
    pub last_use_time: Option<DateTime<Utc>>,
    pub machine_type: Option<String>,
    pub manual_scaling: Option<GoogleCloudMlV1__ManualScaling>,
    pub name: Option<String>,
    pub package_uris: Option<Vec<String>>,
    pub prediction_class: Option<String>,
    pub python_version: Option<String>,
    pub request_logging_config: Option<GoogleCloudMlV1__RequestLoggingConfig>,
    pub routes: Option<GoogleCloudMlV1__RouteMap>,
    pub runtime_version: Option<String>,
    pub service_account: Option<String>,
    pub state: Option<String>,
}

Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling projects.models.versions.list.

§Activities

This type is used in activities, which are methods you may call on this type or in which this type is involved. The list links each activity name with information about where the type is used (as part of the request or the response).
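
As a quick orientation, a Version used as a request body can be assembled by overriding only the fields you need and leaving the rest at their defaults (the struct implements Default and every field is an Option). The sketch below is illustrative only; the bucket path, version numbers, machine type, and label values are placeholders, not recommendations.

use std::collections::HashMap;

// A minimal sketch: fill a handful of common fields and leave everything else
// as None via ..Default::default(). All concrete values are placeholders.
let mut labels = HashMap::new();
labels.insert("stage".to_string(), "test".to_string());

let version = GoogleCloudMlV1__Version {
    name: Some("v1".to_string()),
    deployment_uri: Some("gs://my-bucket/model/".to_string()),
    runtime_version: Some("1.15".to_string()),
    python_version: Some("3.7".to_string()),
    machine_type: Some("n1-standard-2".to_string()),
    labels: Some(labels),
    ..Default::default()
};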

Fields§

§accelerator_config: Option<GoogleCloudMlV1__AcceleratorConfig>

Optional. Accelerator config for using GPUs for online prediction (beta). Only specify this field if you have specified a Compute Engine (N1) machine type in the machineType field. Learn more about using GPUs for online prediction.

§auto_scaling: Option<GoogleCloudMlV1__AutoScaling>

Automatically scale the number of nodes used to serve the model in response to increases and decreases in traffic. Care should be taken to ramp up traffic according to the model’s ability to scale or you will start seeing increases in latency and 429 response codes.
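
A hedged sketch of the Rust side follows. It assumes GoogleCloudMlV1__AutoScaling exposes a min_nodes field mirroring the API's minNodes setting, which keeps a floor of warm nodes so traffic ramps do not start from zero; verify the field against that type's definition.

// Sketch under an assumption: GoogleCloudMlV1__AutoScaling is presumed to
// have a `min_nodes` field (the API's minNodes). Keeping one node warm
// softens latency spikes while autoscaling catches up with traffic.
let version = GoogleCloudMlV1__Version {
    auto_scaling: Some(GoogleCloudMlV1__AutoScaling {
        min_nodes: Some(1),
        ..Default::default()
    }),
    ..Default::default()
};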

§container: Option<GoogleCloudMlV1__ContainerSpec>

Optional. Specifies a custom container to use for serving predictions. If you specify this field, then machineType is required. If you specify this field, then deploymentUri is optional. If you specify this field, then you must not specify runtimeVersion, packageUris, framework, pythonVersion, or predictionClass.

§create_time: Option<DateTime<Utc>>

Output only. The time the version was created.

§deployment_uri: Option<String>

The Cloud Storage URI of a directory containing trained model artifacts to be used to create the model version. See the guide to deploying models for more information. The total number of files under this directory must not exceed 1000. During projects.models.versions.create, AI Platform Prediction copies all files from the specified directory to a location managed by the service. From then on, AI Platform Prediction uses these copies of the model artifacts to serve predictions, not the original files in Cloud Storage, so this location is useful only as a historical record. If you specify container, then this field is optional. Otherwise, it is required. Learn how to use this field with a custom container.

§description: Option<String>

Optional. The description specified for the version when it was created.

§error_message: Option<String>

Output only. The details of a failure or a cancellation.

§etag: Option<Vec<u8>>

etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a model from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform model updates in order to avoid race conditions: An etag is returned in the response to GetVersion, and systems are expected to put that etag in the request to UpdateVersion to ensure that their change will be applied to the model as intended.
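
To make the read-modify-write cycle concrete, the sketch below assumes `current` is a GoogleCloudMlV1__Version previously returned by a get call; the update simply echoes the etag it read, so a concurrent modification causes the write to be rejected instead of silently overwritten.

// Minimal read-modify-write sketch. `current` is assumed to come from a prior
// GetVersion response; carrying its etag into the update lets the service
// detect that the version changed underneath us.
let update = GoogleCloudMlV1__Version {
    description: Some("retrained on latest snapshot".to_string()),
    etag: current.etag.clone(),
    ..Default::default()
};
// `update` is then sent as the body of an UpdateVersion (patch) request.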

§explanation_config: Option<GoogleCloudMlV1__ExplanationConfig>

Optional. Configures explainability features on the model’s version. Some explanation features require additional metadata to be loaded as part of the model payload.

§framework: Option<String>

Optional. The machine learning framework AI Platform uses to train this version of the model. Valid values are TENSORFLOW, SCIKIT_LEARN, XGBOOST. If you do not specify a framework, AI Platform will analyze files in the deployment_uri to determine a framework. If you choose SCIKIT_LEARN or XGBOOST, you must also set the runtime version of the model to 1.4 or greater. Do not specify a framework if you’re deploying a custom prediction routine or if you’re using a custom container.

§is_default: Option<bool>

Output only. If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling projects.models.versions.setDefault.

§labels: Option<HashMap<String, String>>

Optional. One or more labels that you can add to organize your model versions. Each label is a key-value pair, where both the key and the value are arbitrary strings that you supply. For more information, see the documentation on using labels. Note that this field is not updatable for mls1* models.

§last_migration_model_id: Option<String>

Output only. The AI Platform (Unified) Model ID for the last model migration.

§last_migration_time: Option<DateTime<Utc>>

Output only. The last time this version was successfully migrated to AI Platform (Unified).

§last_use_time: Option<DateTime<Utc>>

Output only. The time the version was last used for prediction.

§machine_type: Option<String>

Optional. The type of machine on which to serve the model. Currently only applies to online prediction service. To learn about valid values for this field, read Choosing a machine type for online prediction. If this field is not specified and you are using a regional endpoint, then the machine type defaults to n1-standard-2. If this field is not specified and you are using the global endpoint (ml.googleapis.com), then the machine type defaults to mls1-c1-m2.

§manual_scaling: Option<GoogleCloudMlV1__ManualScaling>

Manually select the number of nodes to use for serving the model. You should generally use auto_scaling with an appropriate min_nodes instead, but this option is available if you want more predictable billing. Beware that latency and error rates will increase if the traffic exceeds the capacity of the system to serve it, given the selected number of nodes.

§name: Option<String>

Required. The name specified for the version when it was created. The version name must be unique within the model it is created in.

§package_uris: Option<Vec<String>>

Optional. Cloud Storage paths (gs://…) of packages for custom prediction routines or scikit-learn pipelines with custom code. For a custom prediction routine, one of these packages must contain your Predictor class (see predictionClass). Additionally, include any dependencies that your Predictor or scikit-learn pipeline uses which are not already included in your selected runtime version. If you specify this field, you must also set runtimeVersion to 1.4 or greater.

§prediction_class: Option<String>

Optional. The fully qualified name (module_name.class_name) of a class that implements the Predictor interface described in this reference field. The module containing this class should be included in a package provided to the packageUris field. Specify this field if and only if you are deploying a custom prediction routine (beta). If you specify this field, you must set runtimeVersion to 1.4 or greater and you must set machineType to a legacy (MLS1) machine type. The following code sample provides the Predictor interface:

class Predictor(object):
    """Interface for constructing custom predictors."""

    def predict(self, instances, **kwargs):
        """Performs custom prediction.

        Instances are the decoded values from the request. They have already
        been deserialized from JSON.

        Args:
            instances: A list of prediction input instances.
            **kwargs: A dictionary of keyword args provided as additional
                fields on the predict request body.

        Returns:
            A list of outputs containing the prediction results. This list
            must be JSON serializable.
        """
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        """Creates an instance of Predictor using the given path.

        Loading of the predictor should be done in this method.

        Args:
            model_dir: The local directory that contains the exported model
                file along with any additional files uploaded when creating
                the version resource.

        Returns:
            An instance implementing this Predictor class.
        """
        raise NotImplementedError()

Learn more about the Predictor interface and custom prediction routines.
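
On the Rust side, a custom prediction routine deployment ties several of these fields together. The sketch below is illustrative only; the package URI, class name, and version numbers are placeholders, chosen to respect the constraints above (runtimeVersion of 1.4 or greater, a legacy MLS1 machine type).

// Illustrative sketch of a custom prediction routine (beta) configuration.
// Package URI, class name, and versions are placeholders.
let version = GoogleCloudMlV1__Version {
    prediction_class: Some("my_package.MyPredictor".to_string()),
    package_uris: Some(vec!["gs://my-bucket/my_package-0.1.tar.gz".to_string()]),
    runtime_version: Some("1.15".to_string()),
    python_version: Some("3.7".to_string()),
    machine_type: Some("mls1-c1-m2".to_string()),
    deployment_uri: Some("gs://my-bucket/model/".to_string()),
    ..Default::default()
};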

§python_version: Option<String>

Required. The version of Python used in prediction. The following Python versions are available:

* Python ‘3.7’ is available when runtime_version is set to ‘1.15’ or later.
* Python ‘3.5’ is available when runtime_version is set to a version from ‘1.4’ to ‘1.14’.
* Python ‘2.7’ is available when runtime_version is set to ‘1.15’ or earlier.

Read more about the Python versions available for each runtime version.

§request_logging_config: Option<GoogleCloudMlV1__RequestLoggingConfig>

Optional. Only specify this field in a projects.models.versions.patch request. Specifying it in a projects.models.versions.create request has no effect. Configures the request-response pair logging on predictions from this Version.

§routes: Option<GoogleCloudMlV1__RouteMap>

Optional. Specifies paths on a custom container’s HTTP server where AI Platform Prediction sends certain requests. If you specify this field, then you must also specify the container field. If you specify the container field and do not specify this field, it defaults to the following:

{
  "predict": "/v1/models/MODEL/versions/VERSION:predict",
  "health": "/v1/models/MODEL/versions/VERSION"
}

See RouteMap for more details about these default values.
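
For a custom container, overriding these defaults might look roughly like the sketch below. It rests on assumptions: GoogleCloudMlV1__ContainerSpec is presumed to expose an image field and GoogleCloudMlV1__RouteMap predict and health fields mirroring their JSON names, so check those generated types before relying on it.

// Sketch under assumptions: the `image`, `predict`, and `health` fields are
// presumed to mirror the API's JSON names on ContainerSpec and RouteMap.
// The image and paths are placeholders.
let version = GoogleCloudMlV1__Version {
    container: Some(GoogleCloudMlV1__ContainerSpec {
        image: Some("gcr.io/my-project/my-server:latest".to_string()),
        ..Default::default()
    }),
    routes: Some(GoogleCloudMlV1__RouteMap {
        predict: Some("/custom/predict".to_string()),
        health: Some("/custom/health".to_string()),
        ..Default::default()
    }),
    // machineType is required whenever a custom container is specified.
    machine_type: Some("n1-standard-2".to_string()),
    ..Default::default()
};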

§runtime_version: Option<String>

Required. The AI Platform runtime version to use for this deployment. For more information, see the runtime version list and how to manage runtime versions.

§service_account: Option<String>

Optional. Specifies the service account for resource access control. If you specify this field, then you must also specify either the containerSpec or the predictionClass field. Learn more about using a custom service account.

§state: Option<String>

Output only. The state of a version.
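
Because this type implements both RequestValue and ResponseResult (see the trait implementations below), it is sent as the body of version-creation and patch calls and returned by get and list calls. The sketch below is a rough, unverified example of a create call: the builder method name follows the crate's usual flattened naming (projects.models.versions.create becoming projects().models_versions_create) and the async doit().await shape of recent crate releases, and `hub` stands for an authenticated client built elsewhere; verify both against the generated ProjectMethods docs.

// Hedged sketch of sending a Version as a request body. The method name and
// async `doit().await` call shape are assumptions based on the crate's usual
// conventions; `hub` is an authenticated client and the parent path is a
// placeholder.
let result = hub
    .projects()
    .models_versions_create(version, "projects/my-project/models/my-model")
    .doit()
    .await;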

Trait Implementations§

impl Clone for GoogleCloudMlV1__Version

fn clone(&self) -> GoogleCloudMlV1__Version

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for GoogleCloudMlV1__Version

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for GoogleCloudMlV1__Version

fn default() -> GoogleCloudMlV1__Version

Returns the “default value” for a type.

impl<'de> Deserialize<'de> for GoogleCloudMlV1__Version

fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where __D: Deserializer<'de>,

Deserialize this value from the given Serde deserializer.

impl Serialize for GoogleCloudMlV1__Version

fn serialize<__S>(&self, __serializer: __S) -> Result<__S::Ok, __S::Error>
where __S: Serializer,

Serialize this value into the given Serde serializer.

impl RequestValue for GoogleCloudMlV1__Version

impl ResponseResult for GoogleCloudMlV1__Version

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.

impl<T> DeserializeOwned for T
where T: for<'de> Deserialize<'de>,