Module types

Data structures used by operation inputs/outputs.

Modules§

builders
Builders
error
Error types that Amazon Rekognition can respond with.

Structs§

AgeRange

Structure containing the estimated age range, in years, for a face.

Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.

Asset

Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.

AssociatedFace

Provides face metadata for the faces that are associated with a specific UserID.

AudioMetadata

Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.

AuditImage

An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.

Beard

Indicates whether or not the face has a beard, and the confidence level in the determination.

BlackFrame

A filter that allows you to control the black frame detection by specifying the black levels and pixel coverage of black pixels in a frame. As videos can come from multiple sources, formats, and time periods, they may contain different standards and varying noise levels for black frames that need to be accounted for. For more information, see StartSegmentDetection.

BoundingBox

Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).

The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.

The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
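
Since the ratio-to-pixel arithmetic above is easy to invert by mistake, here is a minimal sketch of the conversion as a plain Rust helper (the `to_pixels` function and its signature are illustrative, not part of the SDK):

```rust
/// Convert ratio-based bounding box values to pixel coordinates.
/// `left`, `top`, `width`, and `height` are the ratios described above;
/// `img_w` and `img_h` are the input image dimensions in pixels.
fn to_pixels(left: f32, top: f32, width: f32, height: f32, img_w: u32, img_h: u32) -> (i64, i64, i64, i64) {
    // Values can be negative or exceed 1.0 for faces at the image edge,
    // so pixel coordinates are allowed to fall outside the image bounds.
    (
        (left * img_w as f32).round() as i64,
        (top * img_h as f32).round() as i64,
        (width * img_w as f32).round() as i64,
        (height * img_h as f32).round() as i64,
    )
}

fn main() {
    // The 700x200 example from the description above.
    let (x, y, w, h) = to_pixels(0.5, 0.25, 0.1, 0.4, 700, 200);
    assert_eq!((x, y, w, h), (350, 50, 70, 80));
    println!("box at ({x}, {y}), {w}x{h} px");
}
```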

Celebrity

Provides information about a celebrity recognized by the RecognizeCelebrities operation.

CelebrityDetail

Information about a recognized celebrity.

CelebrityRecognition

Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.

Challenge

Describes the type and version of the challenge being used for the Face Liveness session.

ChallengePreference

An ordered list of preferred challenge type and versions.

CompareFacesMatch

Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.

ComparedFace

Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.

ComparedSourceImageFace

Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.

ConnectedHomeSettings

Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for CreateStreamProcessor. Including this setting in the CreateStreamProcessor request enables you to use the stream processor for label detection. You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary.

ConnectedHomeSettingsForUpdate

The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects.

ContentModerationDetection

Information about an inappropriate, unwanted, or offensive content label detection in a stored video.

ContentType

Contains information regarding the confidence and name of a detected content type.

CoversBodyPart

Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.

CreateFaceLivenessSessionRequestSettings

A session settings object. It contains settings for the operation to be performed. It accepts arguments for OutputConfig and AuditImagesLimit.

CustomLabel

A custom label detected in an image by a call to DetectCustomLabels.

CustomizationFeatureConfig

Feature-specific configuration for the training job. The configuration provided for the job must match the feature type associated with the project. If the configuration and feature type do not match, an InvalidParameterException is returned.

CustomizationFeatureContentModerationConfig

Configuration options for Content Moderation training.

DatasetChanges

Describes updates or additions to a dataset. A single update or addition is an entry (JSON Line) that provides information about a single image. To update an existing entry, you match the source-ref field of the update entry with the source-ref field of the entry that you want to update. If the source-ref field doesn't match an existing entry, the entry is added to the dataset as a new entry.

DatasetDescription

A description for a dataset. For more information, see DescribeDataset.

The status fields Status, StatusMessage, and StatusMessageCode reflect the last operation on the dataset.

DatasetLabelDescription

Describes a dataset label. For more information, see ListDatasetLabels.

DatasetLabelStats

Statistics about a label used in a dataset. For more information, see DatasetLabelDescription.

DatasetMetadata

Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see ProjectDescription.

DatasetSource

The source that Amazon Rekognition Custom Labels uses to create a dataset. To use an Amazon Sagemaker format manifest file, specify the S3 bucket location in the GroundTruthManifest field. The S3 bucket must be in your AWS account. To create a copy of an existing dataset, specify the Amazon Resource Name (ARN) of an existing dataset in DatasetArn.

You need to specify a value for DatasetArn or GroundTruthManifest, but not both. If you supply both values, or if you don't specify either, an InvalidParameterException occurs.

For more information, see CreateDataset.

DatasetStats

Provides statistics about a dataset. For more information, see DescribeDataset.

DetectLabelsImageBackground

The background of the image with regard to image quality and dominant colors.

DetectLabelsImageForeground

The foreground of the image with regard to image quality and dominant colors.

DetectLabelsImageProperties

Information about the quality and dominant colors of an input image. Quality and color information is returned for the entire image, foreground, and background.

DetectLabelsImagePropertiesSettings

Settings for the IMAGE_PROPERTIES feature type.

DetectLabelsImageQuality

The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.

DetectLabelsSettings

Settings for the DetectLabels request. Settings can include filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification of a maximum number of dominant colors.
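
As a rough sketch of how these filters are assembled with the crate's builder pattern (method names follow the usual smithy-rs conventions; the category name and color cap are just examples):

```rust
use aws_sdk_rekognition::types::{
    DetectLabelsImagePropertiesSettings, DetectLabelsSettings, GeneralLabelsSettings,
};

// Build settings that include only one label category and cap the number
// of dominant colors returned for IMAGE_PROPERTIES.
fn example_settings() -> DetectLabelsSettings {
    DetectLabelsSettings::builder()
        .general_labels(
            GeneralLabelsSettings::builder()
                // Appender method: call once per category to include.
                .label_category_inclusion_filters("Animals and Pets")
                .build(),
        )
        .image_properties(
            DetectLabelsImagePropertiesSettings::builder()
                .max_dominant_colors(5)
                .build(),
        )
        .build()
}
```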

DetectTextFilters

A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.

DetectionFilter

A set of parameters that allow you to filter out certain results from your returned results.

DisassociatedFace

Provides face metadata for the faces that are disassociated from a specific UserID.

DistributeDataset

A training dataset or a test dataset used in a dataset distribution operation. For more information, see DistributeDatasetEntries.

DominantColor

A description of the dominant colors in an image.

Emotion

The API returns a prediction of an emotion based on a person's facial expressions, along with the confidence level for the predicted emotion. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally. The API is not intended to be used, and you may not use it, in a manner that violates the EU Artificial Intelligence Act or any other applicable law.

EquipmentDetection

Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.

EvaluationResult

The evaluation results for the training of a model.

EyeDirection

Indicates the direction the eyes are gazing in (independent of the head pose), as determined by pitch and yaw.

EyeOpen

Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

Eyeglasses

Indicates whether or not the face is wearing eyeglasses, and the confidence level in the determination.

Face

Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

FaceDetail

Structure containing attributes of the face that the algorithm detected.

A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality.

GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don't have a FaceAttributes input parameter:

  • GetCelebrityRecognition

  • GetPersonTracking

  • GetFaceSearch

The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.
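
Below is a minimal sketch of requesting all facial attributes with this crate's DetectFaces fluent builder, assuming a recent SDK version where list accessors return slices; the bucket and key are placeholders:

```rust
use aws_sdk_rekognition::types::{Attribute, Image, S3Object};
use aws_sdk_rekognition::Client;

// Ask DetectFaces for every facial attribute instead of the default set.
async fn detect_all_attributes(
    client: &Client,
    bucket: &str,
    key: &str,
) -> Result<(), aws_sdk_rekognition::Error> {
    let image = Image::builder()
        .s3_object(S3Object::builder().bucket(bucket).name(key).build())
        .build();

    let resp = client
        .detect_faces()
        .image(image)
        .attributes(Attribute::All) // omit this line for the default attribute set
        .send()
        .await?;

    for face in resp.face_details() {
        // Non-default attributes such as AgeRange are present with Attribute::All.
        println!("age range: {:?}", face.age_range());
    }
    Ok(())
}
```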

FaceDetection

Information about a face detected in a video analysis request and the time the face was detected in the video.

FaceMatch

Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.

FaceOccluded

FaceOccluded should return "true" with a high confidence score if a detected face's eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. FaceOccluded should return "false" with a high confidence score if common occurrences that do not impact face verification are detected, such as eyeglasses, lightly tinted sunglasses, and strands of hair.

You can use FaceOccluded to determine if an obstruction on a face negatively impacts using the image for face matching.

FaceRecord

Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.

FaceSearchSettings

Input face recognition parameters for an Amazon Rekognition stream processor. Includes the collection to use for face recognition and the face attributes to detect. Defining the settings is required in the request parameter for CreateStreamProcessor.

Gender

The predicted gender of a detected face.

Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.

Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.

We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.

GeneralLabelsSettings

Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories. To see a list of label categories, see Detecting Labels.

Geometry

Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.

GetContentModerationRequestMetadata

Contains metadata about a content moderation request, including the SortBy and AggregateBy options.

GetLabelDetectionRequestMetadata

Contains metadata about a label detection request, including the SortBy and AggregateBy options.

GroundTruthManifest

The S3 bucket that contains an Amazon Sagemaker Ground Truth format manifest file.

HumanLoopActivationOutput

Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

HumanLoopConfig

Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.

HumanLoopDataAttributes

Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.

Image

Provides the input image either as bytes or an S3 object.

You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
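
For example, a minimal sketch of building an Image from a local file with this crate; the Rust SDK takes the raw bytes in a Blob and performs any base64 encoding for you (the path is a placeholder):

```rust
use aws_sdk_rekognition::primitives::Blob;
use aws_sdk_rekognition::types::Image;

// Build an Image from a file on the local file system. The Blob holds
// the raw file contents; no manual base64 encoding is needed in Rust.
fn image_from_file(path: &str) -> std::io::Result<Image> {
    let bytes = std::fs::read(path)?;
    Ok(Image::builder().bytes(Blob::new(bytes)).build())
}
```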

ImageQuality

Identifies face image brightness and sharpness.

Instance

An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).

KinesisDataStream

The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

KinesisVideoStream

The Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

KinesisVideoStreamStartSelector

Specifies the starting point in a Kinesis stream to start processing. You can use either the producer timestamp or the fragment number; exactly one of the two is required. If you use the producer timestamp, you must specify the time in milliseconds. For more information about fragment numbers, see Fragment.
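
A small sketch of building a start selector from a producer timestamp, assuming the crate's usual builder conventions (the timestamp value is a placeholder):

```rust
use aws_sdk_rekognition::types::KinesisVideoStreamStartSelector;

// Start processing from a producer timestamp, given in milliseconds.
// Alternatively, set the fragment number instead; exactly one is required.
fn start_at(epoch_ms: i64) -> KinesisVideoStreamStartSelector {
    KinesisVideoStreamStartSelector::builder()
        .producer_timestamp(epoch_ms)
        .build()
}
```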

KnownGender

The known gender identity for the celebrity that matches the provided ID. The known gender identity can be Male, Female, Nonbinary, or Unlisted.

Label

Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.

LabelAlias

A potential alias for a given label.

LabelCategory

The category that applies to a given label.

LabelDetection

Information about a label detected in a video analysis request and the time the label was detected in the video.

LabelDetectionSettings

Contains the specified filters that should be applied to a list of returned GENERAL_LABELS.

Landmark

Indicates the location of the landmark on the face.

LivenessOutputConfig

Contains settings that specify the location of an Amazon S3 bucket used to store the output of a Face Liveness session. Note that the S3 bucket must be located in the caller's AWS account and in the same region as the Face Liveness endpoint. Additionally, the Amazon S3 object keys are auto-generated by the Face Liveness system.

MatchedUser

Contains metadata for a UserID matched with a given face.

MediaAnalysisDetectModerationLabelsConfig

Configuration for Moderation Labels Detection.

MediaAnalysisInput

Contains input information for a media analysis job.

MediaAnalysisJobDescription

Description for a media analysis job.

MediaAnalysisJobFailureDetails

Details about the error that resulted in failure of the job.

MediaAnalysisManifestSummary

Summary that provides statistics on input manifest and errors identified in the input manifest.

MediaAnalysisModelVersions

Object containing information about the model versions of selected features in a given job.

MediaAnalysisOperationsConfig

Configuration options for a media analysis job. Configuration is operation-specific.

MediaAnalysisOutputConfig

Output configuration provided in the job creation request.

MediaAnalysisResults

Contains the results for a media analysis job created with StartMediaAnalysisJob.

ModerationLabel

Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.

MouthOpen

Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

Mustache

Indicates whether or not the face has a mustache, and the confidence level in the determination.

NotificationChannel

The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.

OutputConfig

The S3 bucket and folder location where training output is placed.

Parent

A parent label for a label. A label can have 0, 1, or more parents.

PersonDetail

Details about a person detected in a video analysis request.

PersonDetection

Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.

For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.

PersonMatch

Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.

Point

The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.

An array of Point objects makes up a Polygon. A Polygon is returned by DetectText and by DetectCustomLabels. A Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
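
As a sketch of scaling a polygon's ratio coordinates up to pixels (the helper function is illustrative, not part of the SDK):

```rust
use aws_sdk_rekognition::types::Point;

// Scale a polygon's ratio coordinates to pixel coordinates for an image
// of the given dimensions, e.g. 700.0 x 200.0 for the example above.
fn polygon_to_pixels(polygon: &[Point], img_w: f32, img_h: f32) -> Vec<(f32, f32)> {
    polygon
        .iter()
        .map(|p| (p.x().unwrap_or(0.0) * img_w, p.y().unwrap_or(0.0) * img_h))
        .collect()
}
```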

Pose

Indicates the pose of the face as determined by its pitch, roll, and yaw.

ProjectDescription

A description of an Amazon Rekognition Custom Labels project. For more information, see DescribeProjects.

ProjectPolicy

Describes a project policy in the response from ListProjectPolicies.

ProjectVersionDescription

A description of a version of an Amazon Rekognition project.

ProtectiveEquipmentBodyPart

Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.

ProtectiveEquipmentPerson

A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.

ProtectiveEquipmentSummarizationAttributes

Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.

ProtectiveEquipmentSummary

Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).

To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location (BoundingBox) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment.
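
For instance, a sketch of deriving the per-category totals from the array sizes, assuming the usual generated accessors:

```rust
use aws_sdk_rekognition::types::ProtectiveEquipmentSummary;

// Totals for each category come from the size of the corresponding ID array.
fn print_ppe_totals(summary: &ProtectiveEquipmentSummary) {
    println!(
        "with PPE: {}, without PPE: {}, indeterminate: {}",
        summary.persons_with_required_equipment().len(),
        summary.persons_without_required_equipment().len(),
        summary.persons_indeterminate().len(),
    );
}
```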

RegionOfInterest

Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox or Polygon to set a region of the screen.

A word, face, or label is included in the region if it is more than half in that region. If there is more than one region, the word, face, or label is compared with all regions of the screen. Any object of interest that is more than half in a region is kept in the results.
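
A minimal sketch of restricting detection to the top-left quadrant of the frame with a BoundingBox, assuming the crate's usual builder conventions:

```rust
use aws_sdk_rekognition::types::{BoundingBox, RegionOfInterest};

// Restrict detection to the top-left quadrant of the frame. The values
// are ratios of the overall frame size, as with any BoundingBox.
fn top_left_quadrant() -> RegionOfInterest {
    RegionOfInterest::builder()
        .bounding_box(
            BoundingBox::builder()
                .left(0.0)
                .top(0.0)
                .width(0.5)
                .height(0.5)
                .build(),
        )
        .build()
}
```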

S3Destination

The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels.

S3Object

Provides the S3 bucket name and object name.

The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.

SearchedFace

Provides face metadata, such as the FaceId, BoundingBox, and Confidence, of the input face used for search.

SearchedFaceDetails

Contains data regarding the input face used for a search.

SearchedUser

Contains metadata about a User searched for within a collection.

SegmentDetection

A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.

SegmentTypeInfo

Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.

ShotSegment

Information about a shot detection segment detected in a video. For more information, see SegmentDetection.

Smile

Indicates whether or not the face is smiling, and the confidence level in the determination.

StartSegmentDetectionFilters

Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.

StartShotDetectionFilter

Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.

StartTechnicalCueDetectionFilter

Filters for the technical segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.

StartTextDetectionFilters

A set of optional parameters that lets you set the criteria text must meet to be included in your response. WordFilter looks at a word's height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.

StreamProcessingStartSelector

This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.

StreamProcessingStopSelector

Specifies when to stop processing the stream. You can specify a maximum amount of time to process the video.

StreamProcessor

An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.

StreamProcessorDataSharingPreference

Allows you to opt in to or opt out of sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams.

StreamProcessorInput

Information about the source streaming video.

StreamProcessorNotificationChannel

The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.

Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition detects a person at second 2, a pet at second 4, and a person again at second 5, Amazon Rekognition sends 2 object class detected notifications, one for a person at second 2 and one for a pet at second 4.

Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete.

StreamProcessorOutput

Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

StreamProcessorSettings

Input parameters used in a streaming video analyzed by an Amazon Rekognition stream processor. You can use FaceSearch to recognize faces in a streaming video, or you can use ConnectedHome to detect labels.

StreamProcessorSettingsForUpdate

The stream processor settings that you want to update. ConnectedHome settings can be updated to detect different labels with a different minimum confidence.

Summary

The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.

You get the training summary S3 bucket location by calling DescribeProjectVersions.

Sunglasses

Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

TechnicalCueSegment

Information about a technical cue segment. For more information, see SegmentDetection.

TestingData

The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition uses the training dataset to create a test dataset with a temporary split of the training dataset.

TestingDataResult

SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.

TextDetection

Information about a word or line of text detected by DetectText.

The DetectedText field contains the text that Amazon Rekognition detected in the image.

Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.

For more information, see Detecting text in the Amazon Rekognition Developer Guide.
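
As a sketch of using ParentId to group detected words back into their lines (accessor names follow the crate's usual conventions):

```rust
use aws_sdk_rekognition::types::{TextDetection, TextTypes};
use std::collections::BTreeMap;

// Group detected words under the line they belong to, keyed by the line's Id.
fn words_by_line(detections: &[TextDetection]) -> BTreeMap<i32, Vec<&str>> {
    let mut lines: BTreeMap<i32, Vec<&str>> = BTreeMap::new();
    for d in detections {
        // Words carry a ParentId pointing at their line; lines do not.
        if d.r#type() == Some(&TextTypes::Word) {
            if let (Some(parent), Some(text)) = (d.parent_id(), d.detected_text()) {
                lines.entry(parent).or_default().push(text);
            }
        }
    }
    lines
}
```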

TextDetectionResult

Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.

TrainingData

The dataset used for training.

TrainingDataResult

The data validation manifest created for the training dataset during model training.

UnindexedFace

A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.

UnsearchedFace

Face details inferred from the image but not used for search. The response attribute contains the reasons why a face wasn't used for search.

UnsuccessfulFaceAssociation

Contains metadata such as FaceId, UserID, and Reasons for a face that was unsuccessfully associated.

UnsuccessfulFaceDeletion

Contains metadata such as FaceId, UserID, and Reasons for a face that was unsuccessfully deleted.

UnsuccessfulFaceDisassociation

Contains metadata such as FaceId, UserID, and Reasons for a face that was unsuccessfully disassociated.

User

Metadata of the user stored in a collection.

UserMatch

Provides UserID metadata along with the confidence in the match of this UserID with the input face.

ValidationData

Contains the Amazon S3 bucket location of the validation data for a model training job.

The validation data includes error information for individual JSON Lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.

You get the ValidationData object for the training dataset (TrainingDataResult) and the test dataset (TestingDataResult) by calling DescribeProjectVersions.

The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.

Versions

Object specifying the acceptable range of challenge versions.

Video

Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.

VideoMetadata

Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.

Enums§

All of the enums below are non-exhaustive. When writing a match expression against any of them, it is important to ensure your code is forward-compatible: if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade to a future SDK version in which the enum does include a variant for that feature. A sketch of such a match follows the list.

Attribute
BodyPart
CelebrityRecognitionSortBy
ChallengeType
ContentClassifier
ContentModerationAggregateBy
ContentModerationSortBy
CustomizationFeature
DatasetStatus
DatasetStatusMessageCode
DatasetType
DetectLabelsFeatureName
EmotionName
FaceAttributes
FaceSearchSortBy
GenderType
KnownGenderType
LabelDetectionAggregateBy
LabelDetectionFeatureName
LabelDetectionSortBy
LandmarkType
LivenessSessionStatus
MediaAnalysisJobFailureCode
MediaAnalysisJobStatus
OrientationCorrection
PersonTrackingSortBy
ProjectAutoUpdate
ProjectStatus
ProjectVersionStatus
ProtectiveEquipmentType
QualityFilter
Reason
SegmentType
StreamProcessorParameterToDelete
StreamProcessorStatus
TechnicalCueType
TextTypes
UnsearchedFaceReason
UnsuccessfulFaceAssociationReason
UnsuccessfulFaceDeletionReason
UnsuccessfulFaceDisassociationReason
UserStatus
VideoColorRange
VideoJobStatus
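
As a minimal sketch of a forward-compatible match, using EmotionName as an example (the string labels are illustrative; as_str is the accessor the generated enums provide):

```rust
use aws_sdk_rekognition::types::EmotionName;

// The catch-all arm keeps this compiling and working when a future SDK
// version, or the service itself, introduces a variant we don't know about.
fn label_for(emotion: &EmotionName) -> &str {
    match emotion {
        EmotionName::Happy => "happy",
        EmotionName::Sad => "sad",
        other => other.as_str(),
    }
}
```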