pub struct StartModelFluentBuilder { /* private fields */ }
Fluent builder constructing a request to StartModel.
Starts the specified version of an Amazon Lookout for Vision model. Starting a model takes a while to complete. To check the current state of the model, use DescribeModel.
A model is ready to use when its status is HOSTED.
Once the model is running, you can detect anomalies in new images by calling DetectAnomalies.
You are charged for the amount of time that the model is running. To stop a running model, call StopModel.
This operation requires permissions to perform the lookoutvision:StartModel operation.
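A minimal usage sketch, assuming a configured aws_sdk_lookoutvision::Client named client; the project name, model version, and inference-unit count are illustrative placeholders, not values from this page.

// Sketch: start hosting a model with the fluent builder.
// Assumes `client` was built elsewhere, for example:
//   let config = aws_config::load_from_env().await;
//   let client = aws_sdk_lookoutvision::Client::new(&config);
let response = client
    .start_model()
    .project_name("my-project")   // illustrative project name
    .model_version("1")           // illustrative model version
    .min_inference_units(1)
    .send()
    .await?;
// StartModel is asynchronous on the service side; poll DescribeModel
// until the reported status is HOSTED before calling DetectAnomalies.
println!("hosting status: {:?}", response.status());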
Implementations
impl StartModelFluentBuilder
pub fn as_input(&self) -> &StartModelInputBuilder
Access the StartModel input as a reference.
pub async fn send( self ) -> Result<StartModelOutput, SdkError<StartModelError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
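A sketch of handling the result of send(), assuming the same illustrative client and parameters as above; into_service_error() converts the SdkError into the operation's StartModelError when the failure came from the service rather than from dispatch.

match client
    .start_model()
    .project_name("my-project")   // illustrative values
    .model_version("1")
    .min_inference_units(1)
    .send()
    .await
{
    Ok(output) => println!("started, status: {:?}", output.status()),
    Err(sdk_err) => {
        // Modeled service errors (conflict, throttling, resource not found, ...)
        // can be inspected after conversion; other failures become Unhandled.
        let service_err = sdk_err.into_service_error();
        eprintln!("StartModel failed: {}", service_err);
    }
}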
pub fn customize( self ) -> CustomizableOperation<StartModelOutput, StartModelError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
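A hedged sketch of one way to use the customizable operation: overriding the client configuration for a single request. The config_override helper and the region override shown are assumptions about the CustomizableOperation API in recent SDK versions, not something documented on this page.

use aws_sdk_lookoutvision::config::{Config, Region};

// Sketch: send this one StartModel call against an overridden region.
let _output = client
    .start_model()
    .project_name("my-project")   // illustrative values
    .model_version("1")
    .min_inference_units(1)
    .customize()
    .config_override(Config::builder().region(Region::new("us-east-1")))
    .send()
    .await?;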
pub fn project_name(self, input: impl Into<String>) -> Self
The name of the project that contains the model that you want to start.
pub fn set_project_name(self, input: Option<String>) -> Self
The name of the project that contains the model that you want to start.
pub fn get_project_name(&self) -> &Option<String>
The name of the project that contains the model that you want to start.
pub fn model_version(self, input: impl Into<String>) -> Self
The version of the model that you want to start.
pub fn set_model_version(self, input: Option<String>) -> Self
The version of the model that you want to start.
pub fn get_model_version(&self) -> &Option<String>
The version of the model that you want to start.
pub fn min_inference_units(self, input: i32) -> Self
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn set_min_inference_units(self, input: Option<i32>) -> Self
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn get_min_inference_units(&self) -> &Option<i32>
The minimum number of inference units to use. A single inference unit represents 1 hour of processing. Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn client_token(self, input: impl Into<String>) -> Self
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
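A sketch of an idempotent retry using an explicit ClientToken; the token value and request parameters are illustrative, and the retry must repeat the same parameters.

// Pick one token per logical "start this model" request (value is illustrative).
let token = "start-my-project-v1-0001";
let first_attempt = client
    .start_model()
    .project_name("my-project")
    .model_version("1")
    .min_inference_units(1)
    .client_token(token)
    .send()
    .await;
if first_attempt.is_err() {
    // Retrying with the same ClientToken and identical parameters is safe:
    // the service treats it as the same StartModel call, not a second start.
    let _retry = client
        .start_model()
        .project_name("my-project")
        .model_version("1")
        .min_inference_units(1)
        .client_token(token)
        .send()
        .await;
}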
pub fn set_client_token(self, input: Option<String>) -> Self
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
pub fn get_client_token(&self) -> &Option<String>
ClientToken is an idempotency token that ensures a call to StartModel completes only once. You choose the value to pass. For example, an issue might prevent you from getting a response from StartModel. In this case, safely retry your call to StartModel by using the same ClientToken parameter value.
If you don't supply a value for ClientToken, the AWS SDK you are using inserts a value for you. This prevents retries after a network error from making multiple start requests. You'll need to provide your own value for other use cases.
An error occurs if the other input parameters are not the same as in the first request. Using a different value for ClientToken is considered a new call to StartModel. An idempotency token is active for 8 hours.
pub fn max_inference_units(self, input: i32) -> Self
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
pub fn set_max_inference_units(self, input: Option<i32>) -> Self
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
pub fn get_max_inference_units(&self) -> &Option<i32>
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Lookout for Vision doesn't auto-scale the model.
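A sketch of requesting auto-scaling by setting both a minimum and a maximum number of inference units; the values shown are illustrative.

// Host with at least 1 inference unit and allow scaling up to 3 under load.
let _output = client
    .start_model()
    .project_name("my-project")   // illustrative values
    .model_version("1")
    .min_inference_units(1)
    .max_inference_units(3)
    .send()
    .await?;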
Trait Implementations
impl Clone for StartModelFluentBuilder
fn clone(&self) -> StartModelFluentBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more