// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
impl super::Client {
    /// Constructs a fluent builder for the [`DetectLabels`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder) operation.
    ///
    /// - The fluent builder is configurable:
    ///   - [`image(Image)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::image) / [`set_image(Option<Image>)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::set_image):<br>required: **true**<br><p>The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 Bucket do not need to be base64-encoded.</p>  <p>If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the <code>Bytes</code> field. For more information, see Images in the Amazon Rekognition developer guide.</p><br>
    ///   - [`max_labels(i32)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::max_labels) / [`set_max_labels(Option<i32>)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::set_max_labels):<br>required: **false**<br><p>Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels. Only valid when GENERAL_LABELS is specified as a feature type in the Feature input parameter.</p><br>
    ///   - [`min_confidence(f32)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::min_confidence) / [`set_min_confidence(Option<f32>)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::set_min_confidence):<br>required: **false**<br><p>Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value.</p>  <p>If <code>MinConfidence</code> is not specified, the operation returns labels with confidence values greater than or equal to 55 percent. Only valid when GENERAL_LABELS is specified as a feature type in the Feature input parameter.</p><br>
    ///   - [`features(DetectLabelsFeatureName)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::features) / [`set_features(Option<Vec::<DetectLabelsFeatureName>>)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::set_features):<br>required: **false**<br><p>A list of the types of analysis to perform. Specifying GENERAL_LABELS uses the label detection feature, while specifying IMAGE_PROPERTIES returns information regarding image color and quality. If no option is specified, GENERAL_LABELS is used by default.</p><br>
    ///   - [`settings(DetectLabelsSettings)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::settings) / [`set_settings(Option<DetectLabelsSettings>)`](crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::set_settings):<br>required: **false**<br><p>A list of the filters to be applied to returned detected labels and image properties. Specified filters can be inclusive, exclusive, or a combination of both. Filters can be used for individual labels or label categories. The exact label names or label categories must be supplied. For a full list of labels and label categories, see <a href="https://docs.aws.amazon.com/rekognition/latest/dg/labels.html">Detecting labels</a>.</p><br>
    /// - On success, responds with [`DetectLabelsOutput`](crate::operation::detect_labels::DetectLabelsOutput) with field(s):
    ///   - [`labels(Option<Vec::<Label>>)`](crate::operation::detect_labels::DetectLabelsOutput::labels): <p>An array of labels for the real-world objects detected. </p>
    ///   - [`orientation_correction(Option<OrientationCorrection>)`](crate::operation::detect_labels::DetectLabelsOutput::orientation_correction): <p>The value of <code>OrientationCorrection</code> is always null.</p>  <p>If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.</p>  <p>Amazon Rekognition doesn't perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.</p>
    ///   - [`label_model_version(Option<String>)`](crate::operation::detect_labels::DetectLabelsOutput::label_model_version): <p>Version number of the label detection model that was used to detect labels.</p>
    ///   - [`image_properties(Option<DetectLabelsImageProperties>)`](crate::operation::detect_labels::DetectLabelsOutput::image_properties): <p>Information about the properties of the input image, such as brightness, sharpness, contrast, and dominant colors.</p>
    /// - On failure, responds with [`SdkError<DetectLabelsError>`](crate::operation::detect_labels::DetectLabelsError)
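    ///
    /// # Example
    ///
    /// A minimal sketch of calling this operation on an image stored in S3, assuming a
    /// client that has already been constructed; the bucket name and object key below
    /// are placeholders:
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_rekognition::Client) -> Result<(), aws_sdk_rekognition::Error> {
    /// use aws_sdk_rekognition::types::{Image, S3Object};
    ///
    /// let response = client
    ///     .detect_labels()
    ///     .image(
    ///         Image::builder()
    ///             .s3_object(
    ///                 S3Object::builder()
    ///                     .bucket("amzn-s3-demo-bucket") // placeholder bucket name
    ///                     .name("photo.jpg") // placeholder object key
    ///                     .build(),
    ///             )
    ///             .build(),
    ///     )
    ///     .max_labels(10)
    ///     .min_confidence(75.0)
    ///     .send()
    ///     .await?;
    ///
    /// // Each returned label carries an optional name and confidence score.
    /// for label in response.labels() {
    ///     println!("{:?}: {:?}", label.name(), label.confidence());
    /// }
    /// # Ok(())
    /// # }
    /// ```
    ///
    /// To send raw image bytes instead of an S3 reference, the `Image` can be built with
    /// `Image::builder().bytes(...)` and a `Blob`; the SDK handles encoding, so the bytes
    /// do not need to be base64-encoded.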
    pub fn detect_labels(&self) -> crate::operation::detect_labels::builders::DetectLabelsFluentBuilder {
        crate::operation::detect_labels::builders::DetectLabelsFluentBuilder::new(self.handle.clone())
    }
}