#[non_exhaustive]
pub struct StartModelInputBuilder { /* private fields */ }
A builder for StartModelInput.
Implementations
impl StartModelInputBuilder
pub fn project_name(self, input: impl Into<String>) -> Self
The name of the project that contains the model that you want to start.
This field is required.
pub fn set_project_name(self, input: Option<String>) -> Self
The name of the project that contains the model that you want to start.
pub fn get_project_name(&self) -> &Option<String>
The name of the project that contains the model that you want to start.
pub fn model_version(self, input: impl Into<String>) -> Self
The version of the model that you want to start.
This field is required.
pub fn set_model_version(self, input: Option<String>) -> Self
The version of the model that you want to start.
pub fn get_model_version(&self) -> &Option<String>
The version of the model that you want to start.
pub fn min_inference_units(self, input: i32) -> Self
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
This field is required.
pub fn set_min_inference_units(self, input: Option<i32>) -> Self
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn get_min_inference_units(&self) -> &Option<i32>
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn client_token(self, input: impl Into<String>) -> Self
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
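A minimal sketch of supplying your own idempotency token so that a manual retry reuses the same value; the project name, model version, and token string below are placeholders, not values defined by this API:

// Sketch: reuse one ClientToken across an initial call and a manual retry.
// `my_token` is an arbitrary caller-chosen value; any stable unique string works.
let my_token = "start-model-attempt-1".to_string();

let first_attempt = StartModelInputBuilder::default()
    .project_name("my-project")
    .model_version("1")
    .min_inference_units(1)
    .client_token(my_token.clone());

// If the first call times out without a response, retrying with the same
// token lets the service treat it as the same StartModel request rather
// than a new one.
let retry_attempt = StartModelInputBuilder::default()
    .project_name("my-project")
    .model_version("1")
    .min_inference_units(1)
    .client_token(my_token);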
pub fn set_client_token(self, input: Option<String>) -> Self
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
pub fn get_client_token(&self) -> &Option<String>
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
pub fn max_inference_units(self, input: i32) -> Self
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
pub fn set_max_inference_units(self, input: Option<i32>) -> Self
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
pub fn get_max_inference_units(&self) -> &Option<i32>
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
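As a sketch, the minimum and maximum inference units together define the capacity range the model may scale within; the set_* variant is handy when the limit comes from optional configuration. The project and version values are placeholders:

// Sketch: request a baseline of 1 inference unit and allow auto-scaling up to 4.
// Passing None (or never calling max_inference_units) leaves auto-scaling disabled.
let max_units: Option<i32> = Some(4);
let builder = StartModelInputBuilder::default()
    .project_name("my-project")
    .model_version("1")
    .min_inference_units(1)
    .set_max_inference_units(max_units);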
pub fn build(self) -> Result<StartModelInput, BuildError>
Consumes the builder and constructs a StartModelInput.
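A minimal sketch of constructing the input; the values shown are placeholders, and the expectation here is that a BuildError typically signals a missing required field such as the project name:

// Sketch: build() consumes the builder and validates the input.
let input: StartModelInput = StartModelInputBuilder::default()
    .project_name("my-project")   // placeholder values
    .model_version("1")
    .min_inference_units(1)
    .build()
    .expect("all required fields were set");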
impl StartModelInputBuilder
pub async fn send_with(
    self,
    client: &Client,
) -> Result<StartModelOutput, SdkError<StartModelError, HttpResponse>>
Sends a request with this input using the given client.
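A sketch of wiring the builder to a client. The crate path aws_sdk_lookoutvision::Client and the overall setup are assumptions about the surrounding AWS SDK for Rust, not something this page confirms:

// Sketch, assuming the usual AWS SDK for Rust setup; module paths are not
// confirmed by this page.
async fn start_model(
    client: &aws_sdk_lookoutvision::Client,
) -> Result<(), Box<dyn std::error::Error>> {
    // send_with builds the input and sends the StartModel request in one step,
    // returning a StartModelOutput on success.
    let _output = StartModelInputBuilder::default()
        .project_name("my-project")   // placeholder values
        .model_version("1")
        .min_inference_units(1)
        .send_with(client)
        .await?;
    Ok(())
}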
Trait Implementations
impl Clone for StartModelInputBuilder
fn clone(&self) -> StartModelInputBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for StartModelInputBuilder
impl Default for StartModelInputBuilder
fn default() -> StartModelInputBuilder
impl PartialEq for StartModelInputBuilder
fn eq(&self, other: &StartModelInputBuilder) -> bool
This method tests for self and other values to be equal, and is used by ==.