// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
impl super::Client {
/// Constructs a fluent builder for the [`IndexFaces`](crate::operation::index_faces::builders::IndexFacesFluentBuilder) operation.
///
/// - The fluent builder is configurable:
/// - [`collection_id(impl Into<String>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::collection_id) / [`set_collection_id(Option<String>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::set_collection_id):<br>required: **true**<br><p>The ID of an existing collection to which you want to add the faces that are detected in the input images.</p><br>
/// - [`image(Image)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::image) / [`set_image(Option<Image>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::set_image):<br>required: **true**<br><p>The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes isn't supported. </p> <p>If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the <code>Bytes</code> field. For more information, see Images in the Amazon Rekognition developer guide.</p><br>
/// - [`external_image_id(impl Into<String>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::external_image_id) / [`set_external_image_id(Option<String>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::set_external_image_id):<br>required: **false**<br><p>The ID you want to assign to all the faces detected in the image.</p><br>
/// - [`detection_attributes(Attribute)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::detection_attributes) / [`set_detection_attributes(Option<Vec::<Attribute>>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::set_detection_attributes):<br>required: **false**<br><p>An array of facial attributes you want returned. A <code>DEFAULT</code> subset of facial attributes - <code>BoundingBox</code>, <code>Confidence</code>, <code>Pose</code>, <code>Quality</code>, and <code>Landmarks</code> - is always returned. You can request specific facial attributes (in addition to the default list) by using <code>["DEFAULT", "FACE_OCCLUDED"]</code> or just <code>["FACE_OCCLUDED"]</code>, and you can request all facial attributes by using <code>["ALL"]</code>. Requesting more attributes may increase response time.</p> <p>If you provide both, as in <code>["ALL", "DEFAULT"]</code>, the service uses a logical AND to determine which attributes to return (in this case, all attributes).</p><br>
/// - [`max_faces(i32)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::max_faces) / [`set_max_faces(Option<i32>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::set_max_faces):<br>required: **false**<br><p>The maximum number of faces to index. The value of <code>MaxFaces</code> must be greater than or equal to 1. <code>IndexFaces</code> returns no more than 100 detected faces in an image, even if you specify a larger value for <code>MaxFaces</code>.</p> <p>If <code>IndexFaces</code> detects more faces than the value of <code>MaxFaces</code>, the faces with the lowest quality are filtered out first. If there are still more faces than the value of <code>MaxFaces</code>, the faces with the smallest bounding boxes are filtered out (up to the number needed to satisfy the value of <code>MaxFaces</code>). Information about the unindexed faces is available in the <code>UnindexedFaces</code> array.</p> <p>The faces returned by <code>IndexFaces</code> are sorted by face bounding box size, from largest to smallest.</p> <p><code>MaxFaces</code> can be used with a collection associated with any version of the face model.</p><br>
/// - [`quality_filter(QualityFilter)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::quality_filter) / [`set_quality_filter(Option<QualityFilter>)`](crate::operation::index_faces::builders::IndexFacesFluentBuilder::set_quality_filter):<br>required: **false**<br><p>A filter that sets the quality bar used to decide which detected faces are filtered out. Filtered faces aren't indexed. If you specify <code>AUTO</code>, Amazon Rekognition chooses the quality bar. If you specify <code>LOW</code>, <code>MEDIUM</code>, or <code>HIGH</code>, filtering removes all faces that don't meet the chosen quality bar. The default value is <code>AUTO</code>. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that's misidentified as a face, a face that's too blurry, or a face with a pose that's too extreme to use. If you specify <code>NONE</code>, no filtering is performed.</p> <p>To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.</p><br>
/// - On success, responds with [`IndexFacesOutput`](crate::operation::index_faces::IndexFacesOutput) with field(s):
/// - [`face_records(Option<Vec::<FaceRecord>>)`](crate::operation::index_faces::IndexFacesOutput::face_records): <p>An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide. </p>
/// - [`orientation_correction(Option<OrientationCorrection>)`](crate::operation::index_faces::IndexFacesOutput::orientation_correction): <p>If your collection is associated with a face detection model that's later than version 3.0, the value of <code>OrientationCorrection</code> is always null and no orientation information is returned.</p> <p>If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:</p> <ul> <li> <p>If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata. The value of <code>OrientationCorrection</code> is null.</p> </li> <li> <p>If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case; the bounding box coordinates aren't translated and represent the object locations before the image is rotated.</p> </li> </ul> <p>Bounding box information is returned in the <code>FaceRecords</code> array. You can get the version of the face detection model by calling <code>DescribeCollection</code>.</p>
/// - [`face_model_version(Option<String>)`](crate::operation::index_faces::IndexFacesOutput::face_model_version): <p>The version number of the face detection model that's associated with the input collection (<code>CollectionId</code>).</p>
/// - [`unindexed_faces(Option<Vec::<UnindexedFace>>)`](crate::operation::index_faces::IndexFacesOutput::unindexed_faces): <p>An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the <code>MaxFaces</code> request parameter filtered them out. To use the quality filter, you specify the <code>QualityFilter</code> request parameter.</p>
/// - On failure, responds with [`SdkError<IndexFacesError>`](crate::operation::index_faces::IndexFacesError)
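/// # Example
///
/// A minimal sketch of invoking this operation; the collection ID, bucket, object key, and
/// external image ID below are placeholder values, and error handling is simplified to the
/// crate-level [`Error`](crate::Error):
///
/// ```no_run
/// # async fn example(client: &aws_sdk_rekognition::Client) -> Result<(), aws_sdk_rekognition::Error> {
/// use aws_sdk_rekognition::types::{Attribute, Image, QualityFilter, S3Object};
///
/// let resp = client
///     .index_faces()
///     .collection_id("my-collection") // placeholder: an existing collection
///     .image(
///         Image::builder()
///             .s3_object(
///                 S3Object::builder()
///                     .bucket("my-bucket")        // placeholder bucket
///                     .name("photos/team.jpg")    // placeholder object key
///                     .build(),
///             )
///             .build(),
///     )
///     .external_image_id("team-photo-1")
///     .detection_attributes(Attribute::Default)
///     .max_faces(10)
///     .quality_filter(QualityFilter::Auto)
///     .send()
///     .await?;
///
/// // Faces that were detected and added to the collection.
/// for record in resp.face_records.unwrap_or_default() {
///     if let Some(face) = record.face() {
///         println!("indexed face: {:?}", face.face_id());
///     }
/// }
///
/// // Faces that were detected but filtered out by the quality filter or MaxFaces.
/// for unindexed in resp.unindexed_faces.unwrap_or_default() {
///     println!("not indexed, reasons: {:?}", unindexed.reasons());
/// }
/// # Ok(())
/// # }
/// ```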
pub fn index_faces(&self) -> crate::operation::index_faces::builders::IndexFacesFluentBuilder {
crate::operation::index_faces::builders::IndexFacesFluentBuilder::new(self.handle.clone())
}
}