// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
pub use crate::operation::detect_custom_labels::_detect_custom_labels_output::DetectCustomLabelsOutputBuilder;
pub use crate::operation::detect_custom_labels::_detect_custom_labels_input::DetectCustomLabelsInputBuilder;
impl DetectCustomLabelsInputBuilder {
/// Sends a request with this input using the given client.
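///
/// A minimal, illustrative sketch (not generated documentation): the ARN, bucket, and object key below are placeholders, and the output accessor shape assumes a current SDK release.
/// ```no_run
/// # async fn example(client: &aws_sdk_rekognition::Client) -> Result<(), aws_sdk_rekognition::Error> {
/// use aws_sdk_rekognition::operation::detect_custom_labels::builders::DetectCustomLabelsInputBuilder;
/// use aws_sdk_rekognition::types::{Image, S3Object};
///
/// // Build the input ahead of time, then send it with a shared client.
/// let output = DetectCustomLabelsInputBuilder::default()
///     .project_version_arn("arn:aws:rekognition:us-east-1:111122223333:project/my-project/version/my-version/1234567890123") // placeholder ARN
///     .image(
///         Image::builder()
///             .s3_object(S3Object::builder().bucket("amzn-s3-demo-bucket").name("photo.jpg").build())
///             .build(),
///     )
///     .send_with(client)
///     .await?;
/// println!("{:?}", output.custom_labels());
/// # Ok(())
/// # }
/// ```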
pub async fn send_with(
self,
client: &crate::Client,
) -> ::std::result::Result<
crate::operation::detect_custom_labels::DetectCustomLabelsOutput,
::aws_smithy_runtime_api::client::result::SdkError<
crate::operation::detect_custom_labels::DetectCustomLabelsError,
::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
>,
> {
let mut fluent_builder = client.detect_custom_labels();
fluent_builder.inner = self;
fluent_builder.send().await
}
}
/// Fluent builder constructing a request to `DetectCustomLabels`.
///
/// <note>
/// <p>This operation applies only to Amazon Rekognition Custom Labels.</p>
/// </note>
/// <p>Detects custom labels in a supplied image by using an Amazon Rekognition Custom Labels model. </p>
/// <p>You specify which version of the model to use by using the <code>ProjectVersionArn</code> input parameter. </p>
/// <p>You pass the input image as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. The image must be either a PNG or JPEG formatted file. </p>
/// <p> For each object that the model version detects on an image, the API returns a (<code>CustomLabel</code>) object in an array (<code>CustomLabels</code>). Each <code>CustomLabel</code> object provides the label name (<code>Name</code>), the level of confidence that the image contains the object (<code>Confidence</code>), and object location information, if it exists, for the label on the image (<code>Geometry</code>). </p>
/// <p>To filter labels that are returned, specify a value for <code>MinConfidence</code>. <code>DetectCustomLabels</code> only returns labels with a confidence that's higher than the specified value. The value of <code>MinConfidence</code> maps to the assumed threshold values created during training. For more information, see <i>Assumed threshold</i> in the Amazon Rekognition Custom Labels Developer Guide. Amazon Rekognition Custom Labels metrics express an assumed threshold as a floating point value between 0-1. The range of <code>MinConfidence</code> normalizes the threshold value to a percentage value (0-100). Confidence responses from <code>DetectCustomLabels</code> are also returned as a percentage. You can use <code>MinConfidence</code> to change the precision and recall of your model. For more information, see <i>Analyzing an image</i> in the Amazon Rekognition Custom Labels Developer Guide. </p>
/// <p>If you don't specify a value for <code>MinConfidence</code>, <code>DetectCustomLabels</code> returns labels based on the assumed threshold of each label.</p>
/// <p>This is a stateless API operation. That is, the operation does not persist any data.</p>
/// <p>This operation requires permissions to perform the <code>rekognition:DetectCustomLabels</code> action. </p>
/// <p>For more information, see <i>Analyzing an image</i> in the Amazon Rekognition Custom Labels Developer Guide. </p>
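///
/// A typical end-to-end call is sketched below (illustrative only; the ARN, bucket, and key are placeholders, and the label accessors assume a current SDK release):
/// ```no_run
/// # async fn example(client: &aws_sdk_rekognition::Client) -> Result<(), aws_sdk_rekognition::Error> {
/// use aws_sdk_rekognition::types::{Image, S3Object};
///
/// let output = client
///     .detect_custom_labels()
///     .project_version_arn("arn:aws:rekognition:us-east-1:111122223333:project/my-project/version/my-version/1234567890123") // placeholder ARN
///     .image(
///         Image::builder()
///             .s3_object(S3Object::builder().bucket("amzn-s3-demo-bucket").name("photo.jpg").build())
///             .build(),
///     )
///     .min_confidence(70.0) // percentage, 0-100
///     .max_results(10)
///     .send()
///     .await?;
///
/// for label in output.custom_labels() {
///     println!("{:?}: {:?}", label.name(), label.confidence());
/// }
/// # Ok(())
/// # }
/// ```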
#[derive(::std::clone::Clone, ::std::fmt::Debug)]
pub struct DetectCustomLabelsFluentBuilder {
handle: ::std::sync::Arc<crate::client::Handle>,
inner: crate::operation::detect_custom_labels::builders::DetectCustomLabelsInputBuilder,
config_override: ::std::option::Option<crate::config::Builder>,
}
impl
crate::client::customize::internal::CustomizableSend<
crate::operation::detect_custom_labels::DetectCustomLabelsOutput,
crate::operation::detect_custom_labels::DetectCustomLabelsError,
> for DetectCustomLabelsFluentBuilder
{
fn send(
self,
config_override: crate::config::Builder,
) -> crate::client::customize::internal::BoxFuture<
crate::client::customize::internal::SendResult<
crate::operation::detect_custom_labels::DetectCustomLabelsOutput,
crate::operation::detect_custom_labels::DetectCustomLabelsError,
>,
> {
::std::boxed::Box::pin(async move { self.config_override(config_override).send().await })
}
}
impl DetectCustomLabelsFluentBuilder {
/// Creates a new `DetectCustomLabelsFluentBuilder`.
pub(crate) fn new(handle: ::std::sync::Arc<crate::client::Handle>) -> Self {
Self {
handle,
inner: ::std::default::Default::default(),
config_override: ::std::option::Option::None,
}
}
/// Access the DetectCustomLabels as a reference.
pub fn as_input(&self) -> &crate::operation::detect_custom_labels::builders::DetectCustomLabelsInputBuilder {
&self.inner
}
/// Sends the request and returns the response.
///
/// If an error occurs, an `SdkError` will be returned with additional details that
/// can be matched against.
///
/// By default, any retryable failures will be retried twice. Retry behavior
/// is configurable with the [RetryConfig](aws_smithy_types::retry::RetryConfig), which can be
/// set when configuring the client.
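///
/// A sketch of separating modeled service errors from other failures (illustrative; input fields are elided and the error re-exports assume a current SDK release):
/// ```no_run
/// # async fn example(client: &aws_sdk_rekognition::Client) {
/// use aws_sdk_rekognition::error::SdkError;
///
/// match client.detect_custom_labels().send().await {
///     Ok(output) => println!("{:?}", output.custom_labels()),
///     // A modeled service error (for example, an invalid parameter) can be
///     // extracted with `into_service_error` and inspected or displayed.
///     Err(err @ SdkError::ServiceError { .. }) => eprintln!("service error: {}", err.into_service_error()),
///     // Construction, dispatch, timeout, and response-parsing failures.
///     Err(err) => eprintln!("request failed: {err}"),
/// }
/// # }
/// ```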
pub async fn send(
self,
) -> ::std::result::Result<
crate::operation::detect_custom_labels::DetectCustomLabelsOutput,
::aws_smithy_runtime_api::client::result::SdkError<
crate::operation::detect_custom_labels::DetectCustomLabelsError,
::aws_smithy_runtime_api::client::orchestrator::HttpResponse,
>,
> {
let input = self
.inner
.build()
.map_err(::aws_smithy_runtime_api::client::result::SdkError::construction_failure)?;
let runtime_plugins = crate::operation::detect_custom_labels::DetectCustomLabels::operation_runtime_plugins(
self.handle.runtime_plugins.clone(),
&self.handle.conf,
self.config_override,
);
crate::operation::detect_custom_labels::DetectCustomLabels::orchestrate(&runtime_plugins, input).await
}
/// Consumes this builder, creating a customizable operation that can be modified before being sent.
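///
/// For example, the configuration can be overridden for just this request (an illustrative sketch; it assumes the `config_override` helper on the customizable operation and elides the input fields):
/// ```no_run
/// # async fn example(client: &aws_sdk_rekognition::Client) -> Result<(), aws_sdk_rekognition::Error> {
/// use aws_sdk_rekognition::config::Region;
///
/// // Apply a per-request configuration override, then send as usual.
/// let output = client
///     .detect_custom_labels()
///     .customize()
///     .config_override(aws_sdk_rekognition::config::Config::builder().region(Region::new("us-west-2")))
///     .send()
///     .await?;
/// # let _ = output;
/// # Ok(())
/// # }
/// ```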
pub fn customize(
self,
) -> crate::client::customize::CustomizableOperation<
crate::operation::detect_custom_labels::DetectCustomLabelsOutput,
crate::operation::detect_custom_labels::DetectCustomLabelsError,
Self,
> {
crate::client::customize::CustomizableOperation::new(self)
}
pub(crate) fn config_override(mut self, config_override: impl Into<crate::config::Builder>) -> Self {
self.set_config_override(Some(config_override.into()));
self
}
pub(crate) fn set_config_override(&mut self, config_override: Option<crate::config::Builder>) -> &mut Self {
self.config_override = config_override;
self
}
/// <p>The ARN of the model version that you want to use. Only models associated with Custom Labels projects are accepted by the operation. If a provided ARN refers to a model version associated with a project for a different feature type, then an InvalidParameterException is returned.</p>
pub fn project_version_arn(mut self, input: impl ::std::convert::Into<::std::string::String>) -> Self {
self.inner = self.inner.project_version_arn(input.into());
self
}
/// <p>The ARN of the model version that you want to use. Only models associated with Custom Labels projects are accepted by the operation. If a provided ARN refers to a model version associated with a project for a different feature type, then an InvalidParameterException is returned.</p>
pub fn set_project_version_arn(mut self, input: ::std::option::Option<::std::string::String>) -> Self {
self.inner = self.inner.set_project_version_arn(input);
self
}
/// <p>The ARN of the model version that you want to use. Only models associated with Custom Labels projects are accepted by the operation. If a provided ARN refers to a model version associated with a project for a different feature type, then an InvalidParameterException is returned.</p>
pub fn get_project_version_arn(&self) -> &::std::option::Option<::std::string::String> {
self.inner.get_project_version_arn()
}
/// <p>Provides the input image either as bytes or an S3 object.</p>
/// <p>You pass image bytes to an Amazon Rekognition API operation by using the <code>Bytes</code> property. For example, you would use the <code>Bytes</code> property to pass an image loaded from a local file system. Image bytes passed by using the <code>Bytes</code> property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. </p>
/// <p>For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.</p>
/// <p> You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the <code>S3Object</code> property. Images stored in an S3 bucket do not need to be base64-encoded.</p>
/// <p>The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.</p>
/// <p>If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.</p>
/// <p>For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. </p>
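///
/// The two construction paths are sketched below (illustrative; the file name and bucket are placeholders, and the builder shapes assume a current SDK release):
/// ```no_run
/// # fn example() -> std::io::Result<()> {
/// use aws_sdk_rekognition::primitives::Blob;
/// use aws_sdk_rekognition::types::{Image, S3Object};
///
/// // Raw image bytes read from the local file system; the SDK base64-encodes
/// // the `Bytes` property on the wire.
/// let bytes = std::fs::read("photo.jpg")?; // placeholder path
/// let from_bytes = Image::builder().bytes(Blob::new(bytes)).build();
///
/// // A reference to an object already stored in an Amazon S3 bucket.
/// let from_s3 = Image::builder()
///     .s3_object(S3Object::builder().bucket("amzn-s3-demo-bucket").name("photo.jpg").build())
///     .build();
/// # let _ = (from_bytes, from_s3);
/// # Ok(())
/// # }
/// ```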
pub fn image(mut self, input: crate::types::Image) -> Self {
self.inner = self.inner.image(input);
self
}
/// <p>Provides the input image either as bytes or an S3 object.</p>
/// <p>You pass image bytes to an Amazon Rekognition API operation by using the <code>Bytes</code> property. For example, you would use the <code>Bytes</code> property to pass an image loaded from a local file system. Image bytes passed by using the <code>Bytes</code> property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. </p>
/// <p>For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.</p>
/// <p> You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the <code>S3Object</code> property. Images stored in an S3 bucket do not need to be base64-encoded.</p>
/// <p>The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.</p>
/// <p>If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.</p>
/// <p>For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. </p>
pub fn set_image(mut self, input: ::std::option::Option<crate::types::Image>) -> Self {
self.inner = self.inner.set_image(input);
self
}
/// <p>Provides the input image either as bytes or an S3 object.</p>
/// <p>You pass image bytes to an Amazon Rekognition API operation by using the <code>Bytes</code> property. For example, you would use the <code>Bytes</code> property to pass an image loaded from a local file system. Image bytes passed by using the <code>Bytes</code> property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. </p>
/// <p>For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.</p>
/// <p> You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the <code>S3Object</code> property. Images stored in an S3 bucket do not need to be base64-encoded.</p>
/// <p>The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.</p>
/// <p>If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.</p>
/// <p>For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide. </p>
pub fn get_image(&self) -> &::std::option::Option<crate::types::Image> {
self.inner.get_image()
}
/// <p>Maximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest.</p>
pub fn max_results(mut self, input: i32) -> Self {
self.inner = self.inner.max_results(input);
self
}
/// <p>Maximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest.</p>
pub fn set_max_results(mut self, input: ::std::option::Option<i32>) -> Self {
self.inner = self.inner.set_max_results(input);
self
}
/// <p>Maximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest.</p>
pub fn get_max_results(&self) -> &::std::option::Option<i32> {
self.inner.get_max_results()
}
/// <p>Specifies the minimum confidence level for the labels to return. <code>DetectCustomLabels</code> doesn't return any labels with a confidence value that's lower than this specified value. If you specify a value of 0, <code>DetectCustomLabels</code> returns all labels, regardless of the assumed threshold applied to each label. If you don't specify a value for <code>MinConfidence</code>, <code>DetectCustomLabels</code> returns labels based on the assumed threshold of each label.</p>
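///
/// A short sketch of the two ends of the range (illustrative; the builder is assumed to come from `client.detect_custom_labels()`, and values are percentages from 0 to 100):
/// ```no_run
/// # fn example(builder: aws_sdk_rekognition::operation::detect_custom_labels::builders::DetectCustomLabelsFluentBuilder) {
/// // Return every label, ignoring the per-label assumed thresholds.
/// let all_labels = builder.clone().min_confidence(0.0);
/// // Return only labels with at least 90% confidence.
/// let high_confidence = builder.min_confidence(90.0);
/// # let _ = (all_labels, high_confidence);
/// # }
/// ```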
pub fn min_confidence(mut self, input: f32) -> Self {
self.inner = self.inner.min_confidence(input);
self
}
/// <p>Specifies the minimum confidence level for the labels to return. <code>DetectCustomLabels</code> doesn't return any labels with a confidence value that's lower than this specified value. If you specify a value of 0, <code>DetectCustomLabels</code> returns all labels, regardless of the assumed threshold applied to each label. If you don't specify a value for <code>MinConfidence</code>, <code>DetectCustomLabels</code> returns labels based on the assumed threshold of each label.</p>
pub fn set_min_confidence(mut self, input: ::std::option::Option<f32>) -> Self {
self.inner = self.inner.set_min_confidence(input);
self
}
/// <p>Specifies the minimum confidence level for the labels to return. <code>DetectCustomLabels</code> doesn't return any labels with a confidence value that's lower than this specified value. If you specify a value of 0, <code>DetectCustomLabels</code> returns all labels, regardless of the assumed threshold applied to each label. If you don't specify a value for <code>MinConfidence</code>, <code>DetectCustomLabels</code> returns labels based on the assumed threshold of each label.</p>
pub fn get_min_confidence(&self) -> &::std::option::Option<f32> {
self.inner.get_min_confidence()
}
}