// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
impl super::Client {
    /// Constructs a fluent builder for the [`DetectFaces`](crate::operation::detect_faces::builders::DetectFacesFluentBuilder) operation.
    ///
    /// - The fluent builder is configurable:
    ///   - [`image(Image)`](crate::operation::detect_faces::builders::DetectFacesFluentBuilder::image) / [`set_image(Option<Image>)`](crate::operation::detect_faces::builders::DetectFacesFluentBuilder::set_image):<br>required: **true**<br><p>The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported. </p>  <p>If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the <code>Bytes</code> field. For more information, see Images in the Amazon Rekognition developer guide.</p><br>
    ///   - [`attributes(Attribute)`](crate::operation::detect_faces::builders::DetectFacesFluentBuilder::attributes) / [`set_attributes(Option<Vec::<Attribute>>)`](crate::operation::detect_faces::builders::DetectFacesFluentBuilder::set_attributes):<br>required: **false**<br><p>An array of facial attributes you want returned. A <code>DEFAULT</code> subset of facial attributes - <code>BoundingBox</code>, <code>Confidence</code>, <code>Pose</code>, <code>Quality</code>, and <code>Landmarks</code> - is always returned. You can request specific facial attributes (in addition to the default list) by using [<code>"DEFAULT", "FACE_OCCLUDED"</code>] or just [<code>"FACE_OCCLUDED"</code>]. You can request all facial attributes by using [<code>"ALL"</code>]. Requesting more attributes may increase response time.</p>  <p>If you provide both, as in <code>["ALL", "DEFAULT"]</code>, the service uses a logical "AND" operator to determine which attributes to return (in this case, all attributes). </p>  <p>Note that while the <code>FaceOccluded</code> and <code>EyeDirection</code> attributes are supported when using <code>DetectFaces</code>, they aren't supported when analyzing videos with <code>StartFaceDetection</code> and <code>GetFaceDetection</code>.</p><br>
    /// - On success, responds with [`DetectFacesOutput`](crate::operation::detect_faces::DetectFacesOutput) with field(s):
    ///   - [`face_details(Option<Vec::<FaceDetail>>)`](crate::operation::detect_faces::DetectFacesOutput::face_details): <p>Details of each face found in the image. </p>
    ///   - [`orientation_correction(Option<OrientationCorrection>)`](crate::operation::detect_faces::DetectFacesOutput::orientation_correction): <p>The value of <code>OrientationCorrection</code> is always null.</p>  <p>If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata.</p>  <p>Amazon Rekognition doesn't perform image correction for .png images or for .jpeg images without orientation information in the Exif metadata. In those cases, the bounding box coordinates aren't translated and represent the object locations before the image is rotated. </p>
    /// - On failure, responds with [`SdkError<DetectFacesError>`](crate::operation::detect_faces::DetectFacesError)
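    ///
    /// # Example
    ///
    /// A minimal usage sketch, assuming an already-configured `Client` and a
    /// hypothetical S3 bucket and object key; error handling is elided for brevity:
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_rekognition::Client) -> Result<(), aws_sdk_rekognition::Error> {
    /// use aws_sdk_rekognition::types::{Attribute, Image, S3Object};
    ///
    /// let output = client
    ///     .detect_faces()
    ///     .image(
    ///         Image::builder()
    ///             .s3_object(
    ///                 S3Object::builder()
    ///                     .bucket("amzn-s3-demo-bucket") // hypothetical bucket name
    ///                     .name("photo.jpg")             // hypothetical object key
    ///                     .build(),
    ///             )
    ///             .build(),
    ///     )
    ///     .attributes(Attribute::All)
    ///     .send()
    ///     .await?;
    ///
    /// // `face_details` is an `Option<Vec<FaceDetail>>` on the output struct.
    /// for face in output.face_details.unwrap_or_default() {
    ///     println!("confidence: {:?}", face.confidence);
    /// }
    /// # Ok(())
    /// # }
    /// ```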
    pub fn detect_faces(&self) -> crate::operation::detect_faces::builders::DetectFacesFluentBuilder {
        crate::operation::detect_faces::builders::DetectFacesFluentBuilder::new(self.handle.clone())
    }
}