Struct rusoto_rekognition::IndexFacesResponse

pub struct IndexFacesResponse {
    pub face_model_version: Option<String>,
    pub face_records: Option<Vec<FaceRecord>>,
    pub orientation_correction: Option<String>,
    pub unindexed_faces: Option<Vec<UnindexedFace>>,
}
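
As a quick orientation, here is a minimal sketch of how this response type is typically obtained, assuming the Rekognition trait and RekognitionClient from this crate and a Tokio runtime; the collection, bucket, and object names are hypothetical placeholders:

use rusoto_core::Region;
use rusoto_rekognition::{
    Image, IndexFacesRequest, Rekognition, RekognitionClient, S3Object,
};

#[tokio::main]
async fn main() {
    let client = RekognitionClient::new(Region::UsEast1);

    // Index faces from an image stored in S3; all identifiers below are
    // placeholders for illustration only.
    let request = IndexFacesRequest {
        collection_id: "my-collection".to_string(),
        image: Image {
            s3_object: Some(S3Object {
                bucket: Some("my-bucket".to_string()),
                name: Some("photo.jpg".to_string()),
                ..Default::default()
            }),
            ..Default::default()
        },
        ..Default::default()
    };

    match client.index_faces(request).await {
        Ok(response) => {
            // `response` is the IndexFacesResponse defined above.
            println!("face model version: {:?}", response.face_model_version);
        }
        Err(err) => eprintln!("IndexFaces failed: {}", err),
    }
}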

Fields

face_model_version: Option<String>

The version number of the face detection model that's associated with the input collection (CollectionId).

face_records: Option<Vec<FaceRecord>>

An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

orientation_correction: Option<String>

If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:

  • If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction: the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. In this case, the value of OrientationCorrection is null. Images in .png format don't contain Exif metadata.

  • If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.
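
Because OrientationCorrection is only populated for older face model versions, callers generally have to treat it as optional. A minimal sketch of that check, using the field names from the struct above:

fn log_orientation(response: &rusoto_rekognition::IndexFacesResponse) {
    match response.orientation_correction.as_deref() {
        // Returned only when the collection uses face model version 3.0 or earlier.
        Some(rotation) => println!("estimated orientation: {}", rotation),
        // Null for later model versions; bounding boxes in FaceRecords already
        // reflect any Exif-based correction.
        None => println!("no orientation correction reported"),
    }
}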

unindexed_faces: Option<Vec<UnindexedFace>>

An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.
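
A short sketch of walking both arrays once the response is in hand, assuming the FaceRecord and UnindexedFace shapes exposed by this crate (an optional Face carrying an optional face_id, and an optional list of reasons, respectively):

use rusoto_rekognition::IndexFacesResponse;

fn summarize(response: &IndexFacesResponse) {
    // Faces that were detected and added to the collection.
    for record in response.face_records.iter().flatten() {
        if let Some(face_id) = record.face.as_ref().and_then(|f| f.face_id.as_ref()) {
            println!("indexed face: {}", face_id);
        }
    }

    // Faces that were detected but not indexed, with the service-reported reasons.
    for unindexed in response.unindexed_faces.iter().flatten() {
        println!("face not indexed, reasons: {:?}", unindexed.reasons);
    }
}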

Trait Implementations

impl Clone for IndexFacesResponse

impl Debug for IndexFacesResponse

impl Default for IndexFacesResponse

impl<'de> Deserialize<'de> for IndexFacesResponse

impl PartialEq<IndexFacesResponse> for IndexFacesResponse

impl StructuralPartialEq for IndexFacesResponse

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized

impl<T> Borrow<T> for T where
    T: ?Sized

impl<T> BorrowMut<T> for T where
    T: ?Sized

impl<T> DeserializeOwned for T where
    T: for<'de> Deserialize<'de>,

impl<T> From<T> for T

impl<T> Instrument for T

impl<T, U> Into<U> for T where
    U: From<T>,

impl<T> Same<T> for T

type Output = T

Should always be Self

impl<T> ToOwned for T where
    T: Clone

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.