Struct aws_sdk_rekognition::operation::start_project_version::builders::StartProjectVersionFluentBuilder
pub struct StartProjectVersionFluentBuilder { /* private fields */ }
Fluent builder constructing a request to StartProjectVersion.
This operation applies only to Amazon Rekognition Custom Labels.
Starts running a version of a model. Starting a model takes a while to complete. To check the current state of the model, use DescribeProjectVersions.
Once the model is running, you can detect custom labels in new images by calling DetectCustomLabels.
You are charged for the amount of time that the model is running. To stop a running model, call StopProjectVersion.
This operation requires permissions to perform the rekognition:StartProjectVersion action.
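A minimal usage sketch, assuming the aws_config and tokio crates are available and the client is built from environment configuration; the ARN below is a placeholder, not a value defined by this API:

```rust
use aws_sdk_rekognition::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load region and credentials from the environment (assumes defaults suffice).
    let config = aws_config::load_from_env().await;
    let client = Client::new(&config);

    // Hypothetical ARN of the model version to start.
    let model_arn = "arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/my-model/1";

    let output = client
        .start_project_version()
        .project_version_arn(model_arn)
        .min_inference_units(1)
        .send()
        .await?;

    println!("Model status after start request: {:?}", output.status());
    Ok(())
}
```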
Implementations§
impl StartProjectVersionFluentBuilder
pub fn as_input(&self) -> &StartProjectVersionInputBuilder
Access the StartProjectVersion input as a reference.
pub async fn send(self) -> Result<StartProjectVersionOutput, SdkError<StartProjectVersionError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
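A hedged sketch of changing that retry behavior when building the shared configuration, assuming the aws_config crate; RetryConfig is re-exported by aws_config::retry:

```rust
use aws_config::retry::RetryConfig;
use aws_sdk_rekognition::Client;

// Build a client whose retryable failures are attempted up to 5 times in total
// (standard mode defaults to 3 attempts: the initial try plus 2 retries).
async fn client_with_retries() -> Client {
    let shared_config = aws_config::from_env()
        .retry_config(RetryConfig::standard().with_max_attempts(5))
        .load()
        .await;
    Client::new(&shared_config)
}
```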
pub fn customize(self) -> CustomizableOperation<StartProjectVersionOutput, StartProjectVersionError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
pub fn project_version_arn(self, input: impl Into<String>) -> Self
The Amazon Resource Name (ARN) of the model version that you want to start.
pub fn set_project_version_arn(self, input: Option<String>) -> Self
The Amazon Resource Name (ARN) of the model version that you want to start.
pub fn get_project_version_arn(&self) -> &Option<String>
The Amazon Resource Name (ARN) of the model version that you want to start.
pub fn min_inference_units(self, input: i32) -> Self
The minimum number of inference units to use. A single inference unit represents 1 hour of processing.
Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn set_min_inference_units(self, input: Option<i32>) -> Self
The minimum number of inference units to use. A single inference unit represents 1 hour of processing.
Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn get_min_inference_units(&self) -> &Option<i32>
The minimum number of inference units to use. A single inference unit represents 1 hour of processing.
Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
pub fn max_inference_units(self, input: i32) -> Self
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.
pub fn set_max_inference_units(self, input: Option<i32>) -> Self
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.
pub fn get_max_inference_units(&self) -> &Option<i32>
The maximum number of inference units to use for auto-scaling the model. If you don't specify a value, Amazon Rekognition Custom Labels doesn't auto-scale the model.
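A hedged sketch combining the capacity settings above: a fixed minimum plus a cap so Amazon Rekognition Custom Labels can auto-scale between them. The function name and ARN argument are placeholders; error conversion relies on the crate-level aws_sdk_rekognition::Error From impls:

```rust
use aws_sdk_rekognition::Client;

// Start a model version with a floor of one inference unit and let the
// service auto-scale up to four units under load.
async fn start_with_autoscaling(
    client: &Client,
    model_arn: &str,
) -> Result<(), aws_sdk_rekognition::Error> {
    client
        .start_project_version()
        .project_version_arn(model_arn)
        .min_inference_units(1)
        .max_inference_units(4)
        .send()
        .await?;
    Ok(())
}
```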
Trait Implementations§
impl Clone for StartProjectVersionFluentBuilder
fn clone(&self) -> StartProjectVersionFluentBuilder
1.0.0 · fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.