Data structures used by operation inputs/outputs.
Modules§
Structs§
- AgeRange
Structure containing the estimated age range, in years, for a face.
Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.
- Asset
Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.
- AssociatedFace
Provides face metadata for the faces that are associated to a specific UserID.
- AudioMetadata
Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.
- AuditImage
An image that is picked from the Face Liveness video and returned for audit trail purposes, returned as Base64-encoded bytes.
- Beard
Indicates whether or not the face has a beard, and the confidence level in the determination.
- BlackFrame
A filter that allows you to control the black frame detection by specifying the black levels and pixel coverage of black pixels in a frame. As videos can come from multiple sources, formats, and time periods, they may contain different standards and varying noise levels for black frames that need to be accounted for. For more information, see StartSegmentDetection.
- BoundingBox
Identifies the bounding box around the label, face, text, object of interest, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).
The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
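Because the returned values are ratios, converting them back into pixel coordinates is plain arithmetic. A minimal sketch, assuming the generated accessors left, top, width, and height each return Option<f32>; the helper name and the zero-defaulting are illustrative only:

```rust
use aws_sdk_rekognition::types::BoundingBox;

/// Convert a ratio-based BoundingBox into pixel coordinates for an image of the
/// given dimensions. Missing fields default to 0.0; values are not clamped, so
/// partially visible faces can still yield negative or out-of-range pixels.
fn to_pixel_rect(bbox: &BoundingBox, image_width: f32, image_height: f32) -> (f32, f32, f32, f32) {
    let left = bbox.left().unwrap_or(0.0) * image_width;
    let top = bbox.top().unwrap_or(0.0) * image_height;
    let width = bbox.width().unwrap_or(0.0) * image_width;
    let height = bbox.height().unwrap_or(0.0) * image_height;
    (left, top, width, height)
}
```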
- Celebrity
Provides information about a celebrity recognized by the RecognizeCelebrities operation.
- CelebrityDetail
Information about a recognized celebrity.
- CelebrityRecognition
Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
- Challenge
Describes the type and version of the challenge being used for the Face Liveness session.
- ChallengePreference
An ordered list of preferred challenge type and versions.
- CompareFacesMatch
Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
- ComparedFace
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
- ComparedSourceImageFace
Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
- ConnectedHomeSettings
Label detection settings to use on a streaming video. Defining the settings is required in the request parameter for CreateStreamProcessor. Including this setting in the CreateStreamProcessor request enables you to use the stream processor for label detection. You can then select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected, as well as an end-of-session summary.
- ConnectedHomeSettingsForUpdate
The label detection settings you want to use in your stream processor. This includes the labels you want the stream processor to detect and the minimum confidence level allowed to label objects.
- ContentModerationDetection
Information about an inappropriate, unwanted, or offensive content label detection in a stored video.
- ContentType
Contains information regarding the confidence and name of a detected content type.
- CoversBodyPart
Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.
- CreateFaceLivenessSessionRequestSettings
A session settings object. It contains settings for the operation to be performed. It accepts arguments for OutputConfig and AuditImagesLimit.
- CustomLabel
A custom label detected in an image by a call to DetectCustomLabels.
- CustomizationFeatureConfig
Feature-specific configuration for the training job. Configuration provided for the job must match the feature type parameter associated with the project. If the configuration and feature type do not match, an InvalidParameterException is returned.
- CustomizationFeatureContentModerationConfig
Configuration options for Content Moderation training.
- DatasetChanges
Describes updates or additions to a dataset. A single update or addition is an entry (JSON Line) that provides information about a single image. To update an existing entry, you match the source-ref field of the update entry with the source-ref field of the entry that you want to update. If the source-ref field doesn't match an existing entry, the entry is added to the dataset as a new entry.
- DatasetDescription
A description for a dataset. For more information, see DescribeDataset.
The status fields Status, StatusMessage, and StatusMessageCode reflect the last operation on the dataset.
- DatasetLabelDescription
Describes a dataset label. For more information, see ListDatasetLabels.
- DatasetLabelStats
Statistics about a label used in a dataset. For more information, see DatasetLabelDescription.
- DatasetMetadata
Summary information for an Amazon Rekognition Custom Labels dataset. For more information, see ProjectDescription.
- DatasetSource
The source that Amazon Rekognition Custom Labels uses to create a dataset. To use an Amazon SageMaker format manifest file, specify the S3 bucket location in the GroundTruthManifest field. The S3 bucket must be in your AWS account. To create a copy of an existing dataset, specify the Amazon Resource Name (ARN) of an existing dataset in DatasetArn.
You need to specify a value for DatasetArn or GroundTruthManifest, but not both. If you supply both values, or if you don't specify any values, an InvalidParameterException occurs.
For more information, see CreateDataset.
- DatasetStats
Provides statistics about a dataset. For more information, see DescribeDataset.
- DetectLabelsImageBackground
The background of the image with regard to image quality and dominant colors.
- DetectLabelsImageForeground
The foreground of the image with regard to image quality and dominant colors.
- DetectLabelsImageProperties
Information about the quality and dominant colors of an input image. Quality and color information is returned for the entire image, foreground, and background.
- DetectLabelsImagePropertiesSettings
Settings for the IMAGE_PROPERTIES feature type.
- DetectLabelsImageQuality
The quality of an image provided for label detection, with regard to brightness, sharpness, and contrast.
- DetectLabelsSettings
Settings for the DetectLabels request. Settings can include filters for both GENERAL_LABELS and IMAGE_PROPERTIES. GENERAL_LABELS filters can be inclusive or exclusive and applied to individual labels or label categories. IMAGE_PROPERTIES filters allow specification of a maximum number of dominant colors.
- DetectTextFilters
A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word's height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
- DetectionFilter
A set of parameters that allow you to filter out certain results from your returned results.
- DisassociatedFace
Provides face metadata for the faces that are disassociated from a specific UserID.
- DistributeDataset
A training dataset or a test dataset used in a dataset distribution operation. For more information, see DistributeDatasetEntries.
- DominantColor
A description of the dominant colors in an image.
- Emotion
The API returns a prediction of an emotion based on a person's facial expressions, along with the confidence level for the predicted emotion. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally. The API is not intended to be used, and you may not use it, in a manner that violates the EU Artificial Intelligence Act or any other applicable law.
- EquipmentDetection
Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.
- EvaluationResult
The evaluation results for the training of a model.
- EyeDirection
Indicates the direction the eyes are gazing in (independent of the head pose) as determined by its pitch and yaw.
- EyeOpen
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
- Eyeglasses
Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
- Face
Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
- FaceDetail
Structure containing attributes of the face that the algorithm detected.
A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality. GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don't have a FaceAttributes input parameter:
- GetCelebrityRecognition
- GetPersonTracking
- GetFaceSearch
The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.
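As a rough illustration of requesting the full attribute set with the crate's fluent client, the sketch below calls DetectFaces with Attribute::All. The bucket and key are placeholders, and the builder and accessor names follow the usual generated patterns for this crate; they may differ slightly between SDK versions.

```rust
use aws_sdk_rekognition::types::{Attribute, Image, S3Object};

async fn detect_all_attributes(
    client: &aws_sdk_rekognition::Client,
) -> Result<(), aws_sdk_rekognition::Error> {
    // Point the request at an image stored in S3 (placeholder bucket and key).
    let image = Image::builder()
        .s3_object(
            S3Object::builder()
                .bucket("my-input-bucket")
                .name("photos/group.jpg")
                .build(),
        )
        .build();

    let resp = client
        .detect_faces()
        .image(image)
        // Request all facial attributes instead of only the defaults.
        .attributes(Attribute::All)
        .send()
        .await?;

    // With Attribute::All, fields such as AgeRange, Emotions, and FaceOccluded are populated.
    println!("detected faces: {:?}", resp.face_details());
    Ok(())
}
```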
- FaceDetection
Information about a face detected in a video analysis request and the time the face was detected in the video.
- FaceMatch
Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.
- FaceOccluded
FaceOccluded should return "true" with a high confidence score if a detected face's eyes, nose, and mouth are partially captured or if they are covered by masks, dark sunglasses, cell phones, hands, or other objects. FaceOccluded should return "false" with a high confidence score if common occurrences that do not impact face verification are detected, such as eye glasses, lightly tinted sunglasses, strands of hair, and others.
You can use FaceOccluded to determine if an obstruction on a face negatively impacts using the image for face matching.
- FaceRecord
Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.
- FaceSearchSettings
Input face recognition parameters for an Amazon Rekognition stream processor. Includes the collection to use for face recognition and the face attributes to detect. Defining the settings is required in the request parameter for CreateStreamProcessor.
- Gender
The predicted gender of a detected face.
Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.
Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.
We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.
- GeneralLabelsSettings
Contains filters for the object labels returned by DetectLabels. Filters can be inclusive, exclusive, or a combination of both and can be applied to individual labels or entire label categories. To see a list of label categories, see Detecting Labels.
- Geometry
Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.
- GetContentModerationRequestMetadata
Contains metadata about a content moderation request, including the SortBy and AggregateBy options.
- GetLabelDetectionRequestMetadata
Contains metadata about a label detection request, including the SortBy and AggregateBy options.
- GroundTruthManifest
The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.
- HumanLoopActivationOutput
Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.
- HumanLoopConfig
Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
- HumanLoopDataAttributes
Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.
- Image
Provides the input image either as bytes or an S3 object.
You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.
You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
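A minimal sketch of building an Image from local bytes, assuming the Blob re-export under primitives and the generated Image builder shown here; when you go through the SDK you do not need to base64-encode the bytes yourself.

```rust
use aws_sdk_rekognition::primitives::Blob;
use aws_sdk_rekognition::types::Image;

// Build an Image from raw bytes loaded from the local file system.
fn image_from_file(path: &str) -> std::io::Result<Image> {
    let bytes = std::fs::read(path)?;
    Ok(Image::builder().bytes(Blob::new(bytes)).build())
}
```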
- ImageQuality
Identifies face image brightness and sharpness.
- Instance
An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).
- KinesisDataStream
The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
- KinesisVideoStream
Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
- KinesisVideoStreamStartSelector
Specifies the starting point in a Kinesis stream to start processing. You can use the producer timestamp or the fragment number. One of either producer timestamp or fragment number is required. If you use the producer timestamp, you must put the time in milliseconds. For more information about fragment numbers, see Fragment.
- KnownGender
The known gender identity for the celebrity that matches the provided ID. The known gender identity can be Male, Female, Nonbinary, or Unlisted.
- Label
Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.
- LabelAlias
A potential alias for a given label.
- LabelCategory
The category that applies to a given label.
- LabelDetection
Information about a label detected in a video analysis request and the time the label was detected in the video.
- LabelDetectionSettings
Contains the specified filters that should be applied to a list of returned GENERAL_LABELS.
- Landmark
Indicates the location of the landmark on the face.
- LivenessOutputConfig
Contains settings that specify the location of an Amazon S3 bucket used to store the output of a Face Liveness session. Note that the S3 bucket must be located in the caller's AWS account and in the same region as the Face Liveness end-point. Additionally, the Amazon S3 object keys are auto-generated by the Face Liveness system.
- MatchedUser
Contains metadata for a UserID matched with a given face.
- MediaAnalysisDetectModerationLabelsConfig
Configuration for Moderation Labels Detection.
- MediaAnalysisInput
Contains input information for a media analysis job.
- MediaAnalysisJobDescription
Description for a media analysis job.
- MediaAnalysisJobFailureDetails
Details about the error that resulted in failure of the job.
- MediaAnalysisManifestSummary
Summary that provides statistics on input manifest and errors identified in the input manifest.
- MediaAnalysisModelVersions
Object containing information about the model versions of selected features in a given job.
- MediaAnalysisOperationsConfig
Configuration options for a media analysis job. Configuration is operation-specific.
- MediaAnalysisOutputConfig
Output configuration provided in the job creation request.
- MediaAnalysisResults
Contains the results for a media analysis job created with StartMediaAnalysisJob.
- ModerationLabel
Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.
- MouthOpen
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
- Mustache
Indicates whether or not the face has a mustache, and the confidence level in the determination.
- NotificationChannel
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.
- OutputConfig
The S3 bucket and folder location where training output is placed.
- Parent
A parent label for a label. A label can have 0, 1, or more parents.
- PersonDetail
Details about a person detected in a video analysis request.
- PersonDetection
Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.
For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.
- PersonMatch
Information about a person whose face matches a face(s) in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.
- Point
The X and Y coordinates of a point on an image or video frame. The X and Y values are ratios of the overall image size or video resolution. For example, if an input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
An array of Point objects makes up a Polygon. A Polygon is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
- Pose
Indicates the pose of the face as determined by its pitch, roll, and yaw.
- ProjectDescription
A description of an Amazon Rekognition Custom Labels project. For more information, see DescribeProjects.
- ProjectPolicy
Describes a project policy in the response from ListProjectPolicies.
- ProjectVersionDescription
A description of an Amazon Rekognition project version.
- ProtectiveEquipmentBodyPart
Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.
- ProtectiveEquipmentPerson
A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.
- ProtectiveEquipmentSummarizationAttributes
Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons in which a determination could not be made. For more information, see ProtectiveEquipmentSummary.
- ProtectiveEquipmentSummary
Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons in which a determination could not be made (PersonsIndeterminate).
To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location (BoundingBox) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment.
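Following the guidance above about using array sizes as totals, a sketch like the following reports a count per category. The accessor names assume the usual generated getters returning slices of person IDs; adjust if your SDK version wraps them in Option.

```rust
use aws_sdk_rekognition::types::ProtectiveEquipmentSummary;

// Report how many persons fell into each summary category by using the size
// of each ID array, as described above.
fn print_ppe_totals(summary: &ProtectiveEquipmentSummary) {
    println!("with required PPE: {}", summary.persons_with_required_equipment().len());
    println!("without required PPE: {}", summary.persons_without_required_equipment().len());
    println!("indeterminate: {}", summary.persons_indeterminate().len());
}
```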
- RegionOfInterest
Specifies a location within the frame that Rekognition checks for objects of interest such as text, labels, or faces. It uses a BoundingBox or Polygon to set a region of the screen.
A word, face, or label is included in the region if it is more than half in that region. If there is more than one region, the word, face, or label is compared with all regions of the screen. Any object of interest that is more than half in a region is kept in the results.
- S3Destination
The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels.
- S3Object
Provides the S3 bucket name and object name.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
- SearchedFace
Provides face metadata such as FaceId, BoundingBox, Confidence of the input face used for search.
- SearchedFaceDetails
Contains data regarding the input face used for a search.
- SearchedUser
Contains metadata about a User searched for within a collection.
- SegmentDetection
A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.
- SegmentTypeInfo
Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.
- ShotSegment
Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
- Smile
Indicates whether or not the face is smiling, and the confidence level in the determination.
- StartSegmentDetectionFilters
Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
- StartShotDetectionFilter
Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
- StartTechnicalCueDetectionFilter
Filters for the technical segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
- StartTextDetectionFilters
Set of optional parameters that let you set the criteria text must meet to be included in your response. WordFilter looks at a word's height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.
- StreamProcessingStartSelector
This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.
- StreamProcessingStopSelector
Specifies when to stop processing the stream. You can specify a maximum amount of time to process the video.
- StreamProcessor
An object that recognizes faces or labels in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
- StreamProcessorDataSharingPreference
Allows you to opt in or opt out to share data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level this setting is ignored on individual streams.
- StreamProcessorInput
Information about the source streaming video.
- StreamProcessorNotificationChannel
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition detects a person at second 2, a pet at second 4, and a person again at second 5, Amazon Rekognition sends 2 object class detected notifications, one for a person at second 2 and one for a pet at second 4.
Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete.
- StreamProcessorOutput
Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
- StreamProcessorSettings
Input parameters used in a streaming video analyzed by an Amazon Rekognition stream processor. You can use FaceSearch to recognize faces in a streaming video, or you can use ConnectedHome to detect labels.
- StreamProcessorSettingsForUpdate
The stream processor settings that you want to update. ConnectedHome settings can be updated to detect different labels with a different minimum confidence.
- Summary
The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.
You get the training summary S3 bucket location by calling DescribeProjectVersions.
- Sunglasses
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
- TechnicalCueSegment
Information about a technical cue segment. For more information, see SegmentDetection.
- TestingData
The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition uses the training dataset to create a test dataset with a temporary split of the training dataset.
- TestingDataResult
SageMaker Ground Truth format manifest files for the input, output and validation datasets that are used and created during testing.
- TextDetection
Information about a word or line of text detected by DetectText.
The DetectedText field contains the text that Amazon Rekognition detected in the image.
Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.
For more information, see Detecting text in the Amazon Rekognition Developer Guide.
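As an illustration of the Id/ParentId relationship, the sketch below groups WORD detections under their parent LINE. It assumes the generated accessors r#type, id, parent_id, and detected_text on TextDetection.

```rust
use aws_sdk_rekognition::types::{TextDetection, TextTypes};

// Print each detected LINE followed by the WORD entries that belong to it,
// matching each word's ParentId to the line's Id.
fn print_lines_with_words(detections: &[TextDetection]) {
    for line in detections
        .iter()
        .filter(|d| d.r#type() == Some(&TextTypes::Line))
    {
        println!("line {:?}: {:?}", line.id(), line.detected_text());
        for word in detections
            .iter()
            .filter(|d| d.r#type() == Some(&TextTypes::Word) && d.parent_id() == line.id())
        {
            println!("  word: {:?}", word.detected_text());
        }
    }
}
```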
- TextDetectionResult
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
- TrainingData
The dataset used for training.
- TrainingDataResult
The data validation manifest created for the training dataset during model training.
- UnindexedFace
A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.
- UnsearchedFace
Face details inferred from the image but not used for search. The response attribute contains reasons for why a face wasn't used for Search.
- UnsuccessfulFaceAssociation
Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully associated.
- UnsuccessfulFaceDeletion
Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully deleted.
- UnsuccessfulFaceDisassociation
Contains metadata like FaceId, UserID, and Reasons, for a face that was unsuccessfully disassociated.
- User
Metadata of the user stored in a collection.
- UserMatch
Provides UserID metadata along with the confidence in the match of this UserID with the input face.
- ValidationData
Contains the Amazon S3 bucket location of the validation data for a model training job.
The validation data includes error information for individual JSON Lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.
You get the ValidationData object for the training dataset (TrainingDataResult) and the test dataset (TestingDataResult) by calling DescribeProjectVersions.
The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.
- Versions
Object specifying the acceptable range of challenge versions.
- Video
Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.
- VideoMetadata
Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
Enums§
- Attribute
When writing a match expression against Attribute, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
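A minimal sketch of such a forward-compatible match, using a catch-all arm for variants this SDK version doesn't model (the as_str method on generated enums is assumed):

```rust
use aws_sdk_rekognition::types::Attribute;

fn describe(attr: &Attribute) -> String {
    match attr {
        Attribute::Default => "default attributes".to_string(),
        Attribute::All => "all attributes".to_string(),
        // The catch-all arm keeps this code compiling and working when the
        // service adds new attribute values that this SDK version doesn't model.
        other => format!("unrecognized attribute: {}", other.as_str()),
    }
}
```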
- BodyPart
When writing a match expression against BodyPart, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- CelebrityRecognitionSortBy
When writing a match expression against CelebrityRecognitionSortBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ChallengeType
When writing a match expression against ChallengeType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ContentClassifier
When writing a match expression against ContentClassifier, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ContentModerationAggregateBy
When writing a match expression against ContentModerationAggregateBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ContentModerationSortBy
When writing a match expression against ContentModerationSortBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- CustomizationFeature
When writing a match expression against CustomizationFeature, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- DatasetStatus
When writing a match expression against DatasetStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- DatasetStatusMessageCode
When writing a match expression against DatasetStatusMessageCode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- DatasetType
When writing a match expression against DatasetType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- DetectLabelsFeatureName
When writing a match expression against DetectLabelsFeatureName, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- EmotionName
When writing a match expression against EmotionName, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- FaceAttributes
When writing a match expression against FaceAttributes, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- FaceSearchSortBy
When writing a match expression against FaceSearchSortBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- GenderType
When writing a match expression against GenderType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- KnownGenderType
When writing a match expression against KnownGenderType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- LabelDetectionAggregateBy
When writing a match expression against LabelDetectionAggregateBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- LabelDetectionFeatureName
When writing a match expression against LabelDetectionFeatureName, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- LabelDetectionSortBy
When writing a match expression against LabelDetectionSortBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- LandmarkType
When writing a match expression against LandmarkType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- LivenessSessionStatus
When writing a match expression against LivenessSessionStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- MediaAnalysisJobFailureCode
When writing a match expression against MediaAnalysisJobFailureCode, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- MediaAnalysisJobStatus
When writing a match expression against MediaAnalysisJobStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- OrientationCorrection
When writing a match expression against OrientationCorrection, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- PersonTrackingSortBy
When writing a match expression against PersonTrackingSortBy, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ProjectAutoUpdate
When writing a match expression against ProjectAutoUpdate, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ProjectStatus
When writing a match expression against ProjectStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ProjectVersionStatus
When writing a match expression against ProjectVersionStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- ProtectiveEquipmentType
When writing a match expression against ProtectiveEquipmentType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- QualityFilter
When writing a match expression against QualityFilter, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- Reason
When writing a match expression against Reason, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- SegmentType
When writing a match expression against SegmentType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- StreamProcessorParameterToDelete
When writing a match expression against StreamProcessorParameterToDelete, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- StreamProcessorStatus
When writing a match expression against StreamProcessorStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- TechnicalCueType
When writing a match expression against TechnicalCueType, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- TextTypes
When writing a match expression against TextTypes, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- UnsearchedFaceReason
When writing a match expression against UnsearchedFaceReason, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- UnsuccessfulFaceAssociationReason
When writing a match expression against UnsuccessfulFaceAssociationReason, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- UnsuccessfulFaceDeletionReason
When writing a match expression against UnsuccessfulFaceDeletionReason, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- UnsuccessfulFaceDisassociationReason
When writing a match expression against UnsuccessfulFaceDisassociationReason, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- UserStatus
When writing a match expression against UserStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- VideoColorRange
When writing a match expression against VideoColorRange, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.
- VideoJobStatus
When writing a match expression against VideoJobStatus, it is important to ensure your code is forward-compatible. That is, if a match arm handles a case for a feature that is supported by the service but has not been represented as an enum variant in the current version of the SDK, your code should continue to work when you upgrade the SDK to a future version in which the enum does include a variant for that feature.