This is the Amazon Rekognition API reference.
If you’re using the service, you’re probably looking for RekognitionClient and Rekognition.
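For example, a minimal end-to-end call could look like the sketch below. This is illustrative rather than authoritative: it assumes the rusoto_core and rusoto_rekognition crates with a tokio 1.x runtime, and the bucket and object names are placeholders.

```rust
// Detect labels in an image stored in S3 and print them.
use rusoto_core::Region;
use rusoto_rekognition::{
    DetectLabelsRequest, Image, Rekognition, RekognitionClient, S3Object,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Credentials are resolved from the default provider chain.
    let client = RekognitionClient::new(Region::UsEast1);

    let request = DetectLabelsRequest {
        image: Image {
            s3_object: Some(S3Object {
                bucket: Some("my-bucket".to_string()),        // placeholder
                name: Some("photos/example.jpg".to_string()), // placeholder
                ..Default::default()
            }),
            ..Default::default()
        },
        max_labels: Some(10),
        min_confidence: Some(75.0),
        ..Default::default()
    };

    let response = client.detect_labels(request).await?;
    for label in response.labels.unwrap_or_default() {
        println!("{:?}: {:?}", label.name, label.confidence);
    }
    Ok(())
}
```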
Structs
- AgeRange
Structure containing the estimated age range, in years, for a face.
Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.
- Asset
Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.
- AudioMetadata
Metadata information about an audio stream. An array of `AudioMetadata` objects for the audio streams found in a stored video is returned by GetSegmentDetection.
- Beard
Indicates whether or not the face has a beard, and the confidence level in the determination.
- BoundingBox
Identifies the bounding box around the label, face, text, or personal protective equipment. The `left` (x-coordinate) and `top` (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The `top` and `left` values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a `left` value of 0.5 (350/700) and a `top` value of 0.25 (50/200).
The `width` and `height` values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the `left` or `top` values.
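The ratio-to-pixel arithmetic above is easy to get wrong in both directions, so here is a small sketch (the function name is illustrative, not part of this crate):

```rust
// Convert ratio-based BoundingBox values to pixel coordinates for a
// known image size. Values can be negative or exceed 1.0 for partially
// visible faces at the image edge, so keep signed arithmetic.
fn bounding_box_to_pixels(
    left: f32,   // BoundingBox.left, ratio of image width
    top: f32,    // BoundingBox.top, ratio of image height
    width: f32,  // BoundingBox.width, ratio of image width
    height: f32, // BoundingBox.height, ratio of image height
    image_width: u32,
    image_height: u32,
) -> (i64, i64, i64, i64) {
    let x = (left * image_width as f32).round() as i64;
    let y = (top * image_height as f32).round() as i64;
    let w = (width * image_width as f32).round() as i64;
    let h = (height * image_height as f32).round() as i64;
    (x, y, w, h)
}

// For the 700x200 example above (height 0.3 added for illustration):
// bounding_box_to_pixels(0.5, 0.25, 0.1, 0.3, 700, 200) == (350, 50, 70, 60)
```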
- Celebrity
Provides information about a celebrity recognized by the RecognizeCelebrities operation.
- CelebrityDetail
Information about a recognized celebrity.
- CelebrityRecognition
Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
- CompareFacesMatch
Provides information about a face in a target image that matches the source image face analyzed by `CompareFaces`. The `Face` property contains the bounding box of the face in the target image. The `Similarity` property is the confidence that the source image face matches the face in the bounding box.
- CompareFacesRequest
- CompareFacesResponse
- ComparedFace
Provides face metadata for target image faces that are analyzed by `CompareFaces` and `RecognizeCelebrities`.
- ComparedSourceImageFace
Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and the confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
- ContentModerationDetection
Information about an unsafe content label detection in a stored video.
- CoversBodyPart
Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.
- CreateCollectionRequest
- CreateCollectionResponse
- CreateProjectRequest
- CreateProjectResponse
- CreateProjectVersionRequest
- CreateProjectVersionResponse
- CreateStreamProcessorRequest
- CreateStreamProcessorResponse
- CustomLabel
A custom label detected in an image by a call to DetectCustomLabels.
- DeleteCollectionRequest
- DeleteCollectionResponse
- DeleteFacesRequest
- DeleteFacesResponse
- DeleteProjectRequest
- DeleteProjectResponse
- DeleteProjectVersionRequest
- DeleteProjectVersionResponse
- DeleteStreamProcessorRequest
- DeleteStreamProcessorResponse
- DescribeCollectionRequest
- DescribeCollectionResponse
- DescribeProjectVersionsRequest
- DescribeProjectVersionsResponse
- DescribeProjectsRequest
- DescribeProjectsResponse
- DescribeStreamProcessorRequest
- DescribeStreamProcessorResponse
- DetectCustomLabelsRequest
- DetectCustomLabelsResponse
- DetectFacesRequest
- DetectFacesResponse
- DetectLabelsRequest
- DetectLabelsResponse
- DetectModerationLabelsRequest
- DetectModerationLabelsResponse
- DetectProtectiveEquipmentRequest
- DetectProtectiveEquipmentResponse
- DetectTextFilters
A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. `WordFilter` looks at a word's height, width, and minimum confidence. `RegionOfInterest` lets you set a specific region of the image to look for text in.
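As a hedged illustration of how these filter parameters compose (the thresholds are arbitrary, not recommendations):

```rust
// Restrict DetectText results to reasonably large, confident words
// inside the top half of the image.
use rusoto_rekognition::{BoundingBox, DetectTextFilters, DetectionFilter, RegionOfInterest};

fn text_filters() -> DetectTextFilters {
    DetectTextFilters {
        word_filter: Some(DetectionFilter {
            min_confidence: Some(80.0),          // percent
            min_bounding_box_height: Some(0.05), // ratio of image height
            min_bounding_box_width: Some(0.02),  // ratio of image width
            ..Default::default()
        }),
        regions_of_interest: Some(vec![RegionOfInterest {
            bounding_box: Some(BoundingBox {
                left: Some(0.0),
                top: Some(0.0),
                width: Some(1.0),
                height: Some(0.5), // top half of the image
                ..Default::default()
            }),
            ..Default::default()
        }]),
        ..Default::default()
    }
}
```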
- DetectTextRequest
- DetectTextResponse
- DetectionFilter
A set of parameters that allow you to filter out certain results from your returned results.
- Emotion
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
- EquipmentDetection
Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.
- EvaluationResult
The evaluation results for the training of a model.
- EyeOpen
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
- Eyeglasses
Indicates whether or not the face is wearing eyeglasses, and the confidence level in the determination.
- Face
Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
- FaceDetail
Structure containing attributes of the face that the algorithm detected.
A `FaceDetail` object contains either the default facial attributes or all facial attributes. The default attributes are `BoundingBox`, `Confidence`, `Landmarks`, `Pose`, and `Quality`.
GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a `FaceDetail` object with all attributes. To specify which attributes to return, use the `FaceAttributes` input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don't have a `FaceAttributes` input parameter.
- GetCelebrityRecognition
- GetPersonTracking
- GetFaceSearch
The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the `Attributes` input parameter for `DetectFaces`. For `IndexFaces`, use the `DetectAttributes` input parameter.
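A hedged sketch of requesting all attributes from DetectFaces (the S3 location is a placeholder):

```rust
use rusoto_rekognition::{DetectFacesRequest, Image, S3Object};

fn detect_all_attributes_request() -> DetectFacesRequest {
    DetectFacesRequest {
        image: Image {
            s3_object: Some(S3Object {
                bucket: Some("my-bucket".to_string()),      // placeholder
                name: Some("photos/group.jpg".to_string()), // placeholder
                ..Default::default()
            }),
            ..Default::default()
        },
        // "DEFAULT" returns only BoundingBox, Confidence, Landmarks,
        // Pose, and Quality; "ALL" returns every facial attribute.
        attributes: Some(vec!["ALL".to_string()]),
        ..Default::default()
    }
}
```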
- FaceDetection
Information about a face detected in a video analysis request and the time the face was detected in the video.
- FaceMatch
Provides face metadata. It also provides the confidence in the match of this face with the input face.
- FaceRecord
Object containing both the face metadata (stored in the backend database) and the facial attributes that are detected but aren't stored in the database.
- FaceSearchSettings
Input face recognition parameters for an Amazon Rekognition stream processor. `FaceRecognitionSettings` is a request parameter for CreateStreamProcessor.
- Gender
The predicted gender of a detected face.
Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.
Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.
We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.
- Geometry
Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.
- GetCelebrityInfoRequest
- GetCelebrityInfoResponse
- GetCelebrityRecognitionRequest
- GetCelebrityRecognitionResponse
- GetContentModerationRequest
- GetContentModerationResponse
- GetFaceDetectionRequest
- GetFaceDetectionResponse
- GetFaceSearchRequest
- GetFaceSearchResponse
- GetLabelDetectionRequest
- GetLabelDetectionResponse
- GetPersonTrackingRequest
- GetPersonTrackingResponse
- GetSegmentDetectionRequest
- GetSegmentDetectionResponse
- GetTextDetectionRequest
- GetTextDetectionResponse
- GroundTruthManifest
The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.
- HumanLoopActivationOutput
Shows the results of the human-in-the-loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.
- HumanLoopConfig
Sets up the flow definition that the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
- HumanLoopDataAttributes
Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.
- Image
Provides the input image either as bytes or an S3 object.
You pass image bytes to an Amazon Rekognition API operation by using the `Bytes` property. For example, you would use the `Bytes` property to pass an image loaded from a local file system. Image bytes passed by using the `Bytes` property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.
You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the `S3Object` property. Images stored in an S3 bucket do not need to be base64-encoded. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the `Bytes` property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the `S3Object` property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.
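For the `Bytes` path, a minimal sketch: it assumes this crate's `Image` struct, whose `bytes` field accepts a raw byte buffer (the SDK handles base64 encoding on the wire), and the file path is a placeholder.

```rust
use rusoto_rekognition::Image;

// Build an Image from a file on the local file system.
fn image_from_file(path: &str) -> std::io::Result<Image> {
    let data = std::fs::read(path)?; // raw bytes, not base64
    Ok(Image {
        bytes: Some(data.into()), // Vec<u8> -> bytes::Bytes
        ..Default::default()
    })
}
```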
- ImageQuality
Identifies face image brightness and sharpness.
- IndexFacesRequest
- IndexFacesResponse
- Instance
An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).
- KinesisDataStream
The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
- KinesisVideoStream
The Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
- Label
Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.
- LabelDetection
Information about a label detected in a video analysis request and the time the label was detected in the video.
- Landmark
Indicates the location of the landmark on the face.
- ListCollectionsRequest
- ListCollectionsResponse
- ListFacesRequest
- ListFacesResponse
- ListStreamProcessorsRequest
- ListStreamProcessorsResponse
- ListTagsForResourceRequest
- ListTagsForResourceResponse
- ModerationLabel
Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
- MouthOpen
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
- Mustache
Indicates whether or not the face has a mustache, and the confidence level in the determination.
- NotificationChannel
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see api-video.
- OutputConfig
The S3 bucket and folder location where training output is placed.
- Parent
A parent label for a label. A label can have 0, 1, or more parents.
- PersonDetail
Details about a person detected in a video analysis request.
- PersonDetection
Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of `PersonDetection` objects with elements for each time a person's path is tracked in a video. For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.
- PersonMatch
Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (`FaceMatch`), information about the person (`PersonDetail`), and the time stamp for when the person was detected in a video. An array of `PersonMatch` objects is returned by GetFaceSearch.
- Point
The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
An array of `Point` objects, `Polygon`, is returned by DetectText and by DetectCustomLabels. `Polygon` represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
- Pose
Indicates the pose of the face as determined by its pitch, roll, and yaw.
- ProjectDescription
A description of an Amazon Rekognition Custom Labels project.
- ProjectVersionDescription
The description of a version of a model.
- ProtectiveEquipmentBodyPart
Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of `ProtectiveEquipmentBodyPart` objects is returned for each person detected by `DetectProtectiveEquipment`.
- ProtectiveEquipmentPerson
A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of `ProtectiveEquipmentPerson` objects.
- ProtectiveEquipmentSummarizationAttributes
Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the `Summary` (`ProtectiveEquipmentSummary`) field of the response from `DetectProtectiveEquipment`. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.
- ProtectiveEquipmentSummary
Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the `SummarizationAttributes` (`ProtectiveEquipmentSummarizationAttributes`) input parameter. The summary includes which persons were detected wearing the required PPE (`PersonsWithRequiredEquipment`), which persons were detected as not wearing the required PPE (`PersonsWithoutRequiredEquipment`), and the persons for whom a determination could not be made (`PersonsIndeterminate`).
To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the `PersonsWithRequiredEquipment` array. If you want to find out more about a person, such as the location (`BoundingBox`) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the `Persons` array by `DetectProtectiveEquipment`.
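A hedged sketch of the counting pattern described above, using the snake_case field names this crate generates for `ProtectiveEquipmentSummary`:

```rust
use rusoto_rekognition::ProtectiveEquipmentSummary;

// Per-category totals: (with required PPE, without, indeterminate).
fn ppe_totals(summary: &ProtectiveEquipmentSummary) -> (usize, usize, usize) {
    let with = summary
        .persons_with_required_equipment
        .as_ref()
        .map_or(0, Vec::len);
    let without = summary
        .persons_without_required_equipment
        .as_ref()
        .map_or(0, Vec::len);
    let indeterminate = summary
        .persons_indeterminate
        .as_ref()
        .map_or(0, Vec::len);
    (with, without, indeterminate)
}
```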
- RecognizeCelebritiesRequest
- RecognizeCelebritiesResponse
- RegionOfInterest
Specifies a location within the frame that Rekognition checks for text. Uses a `BoundingBox` object to set a region of the screen. A word is included in the region if more than half of the word falls within that region. If there is more than one region, the word is compared with all regions of the screen; any word that is more than half within a region is kept in the results.
- RekognitionClient
A client for the Amazon Rekognition API.
- S3Object
Provides the S3 bucket name and object name.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.
- SearchFacesByImageRequest
- SearchFacesByImageResponse
- SearchFacesRequest
- SearchFacesResponse
- SegmentDetection
A technical cue or shot detection segment detected in a video. An array of `SegmentDetection` objects containing all segments detected in a stored video is returned by GetSegmentDetection.
- SegmentTypeInfo
Information about the type of a segment requested in a call to StartSegmentDetection. An array of `SegmentTypeInfo` objects is returned by the response from GetSegmentDetection.
- ShotSegment
Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
- Smile
Indicates whether or not the face is smiling, and the confidence level in the determination.
- StartCelebrityRecognitionRequest
- StartCelebrityRecognitionResponse
- StartContentModerationRequest
- StartContentModerationResponse
- StartFaceDetectionRequest
- StartFaceDetectionResponse
- StartFaceSearchRequest
- StartFaceSearchResponse
- StartLabelDetectionRequest
- StartLabelDetectionResponse
- StartPersonTrackingRequest
- StartPersonTrackingResponse
- StartProjectVersionRequest
- StartProjectVersionResponse
- StartSegmentDetectionFilters
Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
- StartSegmentDetectionRequest
- StartSegmentDetectionResponse
- StartShotDetectionFilter
Filters for the shot detection segments returned by `GetSegmentDetection`. For more information, see StartSegmentDetectionFilters.
- StartStreamProcessorRequest
- StartStreamProcessorResponse
- StartTechnicalCueDetectionFilter
Filters for the technical cue segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
- StartTextDetectionFilters
A set of optional parameters that let you set the criteria text must meet to be included in your response. `WordFilter` looks at a word's height, width, and minimum confidence. `RegionOfInterest` lets you set a specific region of the screen to look for text in.
- StartTextDetectionRequest
- StartTextDetectionResponse
- StopProjectVersionRequest
- StopProjectVersionResponse
- StopStreamProcessorRequest
- StopStreamProcessorResponse
- StreamProcessor
An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for `CreateStreamProcessor` describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
- StreamProcessorInput
Information about the source streaming video.
- StreamProcessorOutput
Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
- StreamProcessorSettings
Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.
- Summary
The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.
You get the training summary S3 bucket location by calling DescribeProjectVersions.
- Sunglasses
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
- TagResourceRequest
- TagResourceResponse
- TechnicalCueSegment
Information about a technical cue segment. For more information, see SegmentDetection.
- TestingData
The dataset used for testing. Optionally, if `AutoCreate` is set, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset.
- TestingDataResult
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.
- TextDetection
Information about a word or line of text detected by DetectText.
The `DetectedText` field contains the text that Amazon Rekognition detected in the image.
Every word and line has an identifier (`Id`). Each word belongs to a line and has a parent identifier (`ParentId`) that identifies the line of text in which the word appears. The word `Id` is also an index for the word within a line of words.
For more information, see Detecting Text in the Amazon Rekognition Developer Guide.
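A hedged sketch of that Id/ParentId relationship: group detected words under their parent lines (field names follow this crate's snake_case conventions; `type_` carries the WORD/LINE discriminator):

```rust
use rusoto_rekognition::TextDetection;
use std::collections::HashMap;

// Map each line Id to the words detected on that line.
fn words_by_line(detections: &[TextDetection]) -> HashMap<i64, Vec<&str>> {
    let mut lines: HashMap<i64, Vec<&str>> = HashMap::new();
    for d in detections {
        // Words carry the Id of their parent line; lines have no parent.
        if d.type_.as_deref() == Some("WORD") {
            if let (Some(parent), Some(text)) = (d.parent_id, d.detected_text.as_deref()) {
                lines.entry(parent).or_default().push(text);
            }
        }
    }
    lines
}
```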
- TextDetectionResult
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
- TrainingData
The dataset used for training.
- TrainingDataResult
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during training.
- UnindexedFace
A face that IndexFaces detected, but didn't index. Use the `Reasons` response attribute to determine why a face wasn't indexed.
- UntagResourceRequest
- UntagResourceResponse
- ValidationData
Contains the Amazon S3 bucket location of the validation data for a model training job.
The validation data includes error information for individual JSON lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.
You get the `ValidationData` object for the training dataset (`TrainingDataResult`) and the test dataset (`TestingDataResult`) by calling DescribeProjectVersions.
The assets array contains a single Asset object. The `GroundTruthManifest` field of the Asset object contains the S3 bucket location of the validation data.
- Video
Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use `Video` to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.
- VideoMetadata
Information about a video that Amazon Rekognition analyzed. `VideoMetadata` is returned in every page of paginated responses from an Amazon Rekognition video operation.
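The stored-video types above (Video, NotificationChannel, VideoMetadata, and the Start*/Get* request pairs) follow a start-then-poll pattern. A hedged sketch, assuming a tokio 1.x runtime; the bucket and key are placeholders, and production code would typically react to the SNS NotificationChannel instead of sleeping in a loop:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{
    GetLabelDetectionRequest, Rekognition, RekognitionClient, S3Object,
    StartLabelDetectionRequest, Video,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);

    // Start an asynchronous label-detection job on a stored video.
    let start = client
        .start_label_detection(StartLabelDetectionRequest {
            video: Video {
                s3_object: Some(S3Object {
                    bucket: Some("my-bucket".to_string()),     // placeholder
                    name: Some("videos/clip.mp4".to_string()), // placeholder
                    ..Default::default()
                }),
                ..Default::default()
            },
            ..Default::default()
        })
        .await?;
    let job_id = start.job_id.expect("StartLabelDetection returns a JobId");

    // Poll until the job leaves IN_PROGRESS, then print the labels.
    loop {
        let results = client
            .get_label_detection(GetLabelDetectionRequest {
                job_id: job_id.clone(),
                ..Default::default()
            })
            .await?;
        if results.job_status.as_deref() == Some("IN_PROGRESS") {
            tokio::time::sleep(std::time::Duration::from_secs(5)).await;
            continue;
        }
        for detection in results.labels.unwrap_or_default() {
            let name = detection.label.and_then(|l| l.name);
            println!("{:?} at {:?} ms", name, detection.timestamp);
        }
        break;
    }
    Ok(())
}
```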
Enums
- CompareFacesError
Errors returned by CompareFaces
- CreateCollectionError
Errors returned by CreateCollection
- CreateProjectError
Errors returned by CreateProject
- CreateProjectVersionError
Errors returned by CreateProjectVersion
- CreateStreamProcessorError
Errors returned by CreateStreamProcessor
- DeleteCollectionError
Errors returned by DeleteCollection
- DeleteFacesError
Errors returned by DeleteFaces
- DeleteProjectError
Errors returned by DeleteProject
- DeleteProjectVersionError
Errors returned by DeleteProjectVersion
- DeleteStreamProcessorError
Errors returned by DeleteStreamProcessor
- DescribeCollectionError
Errors returned by DescribeCollection
- DescribeProjectVersionsError
Errors returned by DescribeProjectVersions
- DescribeProjectsError
Errors returned by DescribeProjects
- DescribeStreamProcessorError
Errors returned by DescribeStreamProcessor
- DetectCustomLabelsError
Errors returned by DetectCustomLabels
- DetectFacesError
Errors returned by DetectFaces
- DetectLabelsError
Errors returned by DetectLabels
- DetectModerationLabelsError
Errors returned by DetectModerationLabels
- DetectProtectiveEquipmentError
Errors returned by DetectProtectiveEquipment
- DetectTextError
Errors returned by DetectText
- GetCelebrityInfoError
Errors returned by GetCelebrityInfo
- GetCelebrityRecognitionError
Errors returned by GetCelebrityRecognition
- GetContentModerationError
Errors returned by GetContentModeration
- GetFaceDetectionError
Errors returned by GetFaceDetection
- GetFaceSearchError
Errors returned by GetFaceSearch
- GetLabelDetectionError
Errors returned by GetLabelDetection
- GetPersonTrackingError
Errors returned by GetPersonTracking
- GetSegmentDetectionError
Errors returned by GetSegmentDetection
- GetTextDetectionError
Errors returned by GetTextDetection
- IndexFacesError
Errors returned by IndexFaces
- ListCollectionsError
Errors returned by ListCollections
- ListFacesError
Errors returned by ListFaces
- ListStreamProcessorsError
Errors returned by ListStreamProcessors
- ListTagsForResourceError
Errors returned by ListTagsForResource
- RecognizeCelebritiesError
Errors returned by RecognizeCelebrities
- SearchFacesByImageError
Errors returned by SearchFacesByImage
- SearchFacesError
Errors returned by SearchFaces
- StartCelebrityRecognitionError
Errors returned by StartCelebrityRecognition
- StartContentModerationError
Errors returned by StartContentModeration
- StartFaceDetectionError
Errors returned by StartFaceDetection
- StartFaceSearchError
Errors returned by StartFaceSearch
- StartLabelDetectionError
Errors returned by StartLabelDetection
- StartPersonTrackingError
Errors returned by StartPersonTracking
- StartProjectVersionError
Errors returned by StartProjectVersion
- StartSegmentDetectionError
Errors returned by StartSegmentDetection
- StartStreamProcessorError
Errors returned by StartStreamProcessor
- StartTextDetectionError
Errors returned by StartTextDetection
- StopProjectVersionError
Errors returned by StopProjectVersion
- StopStreamProcessorError
Errors returned by StopStreamProcessor
- TagResourceError
Errors returned by TagResource
- UntagResourceError
Errors returned by UntagResource
Traits
- Rekognition
Trait representing the capabilities of the Amazon Rekognition API. Amazon Rekognition clients implement this trait.