This is the Amazon Rekognition API reference.
Structure containing the estimated age range, in years, for a face.
Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.
Assets are the images that you use to train and evaluate a model version. Assets are referenced by Amazon SageMaker Ground Truth manifest files.
Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.
Indicates whether or not the face has a beard, and the confidence level in the determination.
Identifies the bounding box around the label, face, or text. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
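Because the coordinates are ratios and can fall outside the [0, 1] range for partially visible items, callers typically convert them to pixels and clamp. A minimal sketch against the crate's BoundingBox type; the clamping policy here is an assumption, not documented service behavior:

```rust
use rusoto_rekognition::BoundingBox;

/// Convert a ratio-based BoundingBox to pixel coordinates, clamped to the
/// image bounds. Partially visible faces can yield ratios outside [0, 1].
fn to_pixel_rect(bb: &BoundingBox, img_w: f32, img_h: f32) -> (f32, f32, f32, f32) {
    let left = bb.left.unwrap_or(0.0).clamp(0.0, 1.0);
    let top = bb.top.unwrap_or(0.0).clamp(0.0, 1.0);
    let right = (bb.left.unwrap_or(0.0) + bb.width.unwrap_or(0.0)).clamp(0.0, 1.0);
    let bottom = (bb.top.unwrap_or(0.0) + bb.height.unwrap_or(0.0)).clamp(0.0, 1.0);
    // (x, y, width, height) in pixels
    (left * img_w, top * img_h, (right - left) * img_w, (bottom - top) * img_h)
}
```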
Provides information about a celebrity recognized by the RecognizeCelebrities operation.
Information about a recognized celebrity.
Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
Provides information about a face in a target image that matches the source image face analyzed by CompareFaces.
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
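A minimal sketch of a CompareFaces call with the async rusoto_rekognition client; the bucket, keys, and threshold below are placeholders, not values the API requires:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{CompareFacesRequest, Image, Rekognition, RekognitionClient, S3Object};

// Helper: build an Image that references an S3 object (bucket/key are placeholders).
fn s3_image(bucket: &str, key: &str) -> Image {
    Image {
        s3_object: Some(S3Object {
            bucket: Some(bucket.to_string()),
            name: Some(key.to_string()),
            ..Default::default()
        }),
        ..Default::default()
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .compare_faces(CompareFacesRequest {
            source_image: s3_image("my-bucket", "source.jpg"), // hypothetical objects
            target_image: s3_image("my-bucket", "target.jpg"),
            similarity_threshold: Some(80.0), // only return matches at >= 80% similarity
            ..Default::default()
        })
        .await?;
    for m in response.face_matches.unwrap_or_default() {
        println!(
            "match at similarity {:?}, bounding box {:?}",
            m.similarity,
            m.face.and_then(|f| f.bounding_box)
        );
    }
    Ok(())
}
```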
Information about an unsafe content label detection in a stored video.
A custom label detected in an image by a call to DetectCustomLabels.
A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response.
A set of parameters that allow you to filter out certain results from your returned results.
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
The evaluation results for the training of a model.
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
Structure containing attributes of the face that the algorithm detected.
GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes.
The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectionAttributes input parameter.
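A hedged sketch of requesting all facial attributes with DetectFaces via rusoto_rekognition; the bucket and key are placeholders:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{DetectFacesRequest, Image, Rekognition, RekognitionClient, S3Object};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .detect_faces(DetectFacesRequest {
            image: Image {
                s3_object: Some(S3Object {
                    bucket: Some("my-bucket".to_string()), // hypothetical bucket/key
                    name: Some("group-photo.jpg".to_string()),
                    ..Default::default()
                }),
                ..Default::default()
            },
            // "DEFAULT" returns only the default attribute subset; "ALL" adds the
            // remaining attributes such as AgeRange, Emotions, and Beard.
            attributes: Some(vec!["ALL".to_string()]),
        })
        .await?;
    for face in response.face_details.unwrap_or_default() {
        if let (Some(range), Some(smile)) = (face.age_range, face.smile) {
            println!("age {:?}-{:?}, smiling: {:?}", range.low, range.high, smile.value);
        }
    }
    Ok(())
}
```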
Information about a face detected in a video analysis request and the time the face was detected in the video.
Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.
Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.
Input face recognition parameters for an Amazon Rekognition stream processor.
The predicted gender of a detected face.
Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.
Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.
We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.
The S3 bucket that contains the Ground Truth manifest file.
Shows the results of the human-in-the-loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.
Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.
Provides the input image either as bytes or an S3 object.
You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system.
For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.
You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.
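A minimal sketch of the Bytes path, loading an image from the local file system and calling DetectLabels with the async rusoto_rekognition client; the file name is a placeholder:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{DetectLabelsRequest, Image, Rekognition, RekognitionClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read raw image bytes from the local file system; the SDK handles the
    // base64 encoding that the underlying API requires.
    let bytes = std::fs::read("photo.jpg")?; // hypothetical local file

    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .detect_labels(DetectLabelsRequest {
            image: Image {
                bytes: Some(bytes.into()),
                ..Default::default()
            },
            ..Default::default()
        })
        .await?;
    for label in response.labels.unwrap_or_default() {
        println!("{:?} ({:?}%)", label.name, label.confidence);
    }
    Ok(())
}
```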
Identifies face image brightness and sharpness.
The Kinesis data stream to which Amazon Rekognition streams the analysis results of an Amazon Rekognition stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.
Information about a label detected in a video analysis request and the time the label was detected in the video.
Indicates the location of the landmark on the face.
Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
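A brief sketch of reading the taxonomy from a DetectModerationLabels response with rusoto_rekognition; the bucket, key, and confidence threshold are placeholders:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{
    DetectModerationLabelsRequest, Image, Rekognition, RekognitionClient, S3Object,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .detect_moderation_labels(DetectModerationLabelsRequest {
            image: Image {
                s3_object: Some(S3Object {
                    bucket: Some("my-bucket".to_string()), // hypothetical bucket/key
                    name: Some("upload.jpg".to_string()),
                    ..Default::default()
                }),
                ..Default::default()
            },
            min_confidence: Some(60.0),
            ..Default::default()
        })
        .await?;
    // Each label carries its position in the hierarchical taxonomy: a top-level
    // label has an empty ParentName; second-level labels name their parent.
    for label in response.moderation_labels.unwrap_or_default() {
        println!("{:?} (parent: {:?}, {:?}%)", label.name, label.parent_name, label.confidence);
    }
    Ok(())
}
```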
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
Indicates whether or not the face has a mustache, and the confidence level in the determination.
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video Operations in the Amazon Rekognition Developer Guide.
The S3 bucket and folder location where training output is placed.
A parent label for a label. A label can have 0, 1, or more parents.
Details about a person detected in a video analysis request.
Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.
For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.
Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.
The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
An array of Point objects, Polygon, is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
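A small helper illustrating the ratio-to-pixel mapping for Point; the function is illustrative, not part of the API:

```rust
use rusoto_rekognition::Point;

/// Map a ratio-based Point onto a concrete image size. For a 700x200 image,
/// X=0.5 and Y=0.25 land on pixel (350, 50), matching the example above.
fn to_pixel(p: &Point, img_w: f32, img_h: f32) -> (f32, f32) {
    (p.x.unwrap_or(0.0) * img_w, p.y.unwrap_or(0.0) * img_h)
}
```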
Indicates the pose of the face as determined by its pitch, roll, and yaw.
A description of an Amazon Rekognition Custom Labels project.
The description of a version of a model.
Specifies a location within the frame that Rekognition checks for text. Uses a BoundingBox object to set a region of the screen.
A word is included in the region if more than half of the word falls within that region. If there is more than one region, the word is compared with all regions of the screen. Any word that is more than half in a region is kept in the results.
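The service applies this rule itself; the following is only an illustrative approximation of the geometry using axis-aligned ratio boxes, not Amazon Rekognition's actual algorithm:

```rust
use rusoto_rekognition::BoundingBox;

/// Fraction of `word`'s area that falls inside `region` (both are ratio-based,
/// axis-aligned boxes). A word is kept when this exceeds 0.5 for some region.
fn overlap_fraction(word: &BoundingBox, region: &BoundingBox) -> f32 {
    let (wl, wt) = (word.left.unwrap_or(0.0), word.top.unwrap_or(0.0));
    let (ww, wh) = (word.width.unwrap_or(0.0), word.height.unwrap_or(0.0));
    let (rl, rt) = (region.left.unwrap_or(0.0), region.top.unwrap_or(0.0));
    let (rw, rh) = (region.width.unwrap_or(0.0), region.height.unwrap_or(0.0));

    let ix = (wl + ww).min(rl + rw) - wl.max(rl); // intersection width
    let iy = (wt + wh).min(rt + rh) - wt.max(rt); // intersection height
    if ix <= 0.0 || iy <= 0.0 || ww * wh == 0.0 {
        return 0.0;
    }
    (ix * iy) / (ww * wh)
}
```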
A client for the Amazon Rekognition API.
Provides the S3 bucket name and object name.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.
A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.
Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
Indicates whether or not the face is smiling, and the confidence level in the determination.
Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
Filters for the shot detection segments returned by GetSegmentDetection.
Set of optional parameters that let you set the criteria that text must meet to be included in your response.
An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
Information about the source streaming video.
Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.
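A hedged sketch of wiring these pieces together with CreateStreamProcessor via rusoto_rekognition; all ARNs, the processor name, and the collection ID are placeholders:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{
    CreateStreamProcessorRequest, FaceSearchSettings, KinesisDataStream, KinesisVideoStream,
    Rekognition, RekognitionClient, StreamProcessorInput, StreamProcessorOutput,
    StreamProcessorSettings,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .create_stream_processor(CreateStreamProcessorRequest {
            name: "my-stream-processor".to_string(),
            // Source: a Kinesis video stream (placeholder ARN).
            input: StreamProcessorInput {
                kinesis_video_stream: Some(KinesisVideoStream {
                    arn: Some("arn:aws:kinesisvideo:...".to_string()),
                }),
            },
            // Sink: a Kinesis data stream for analysis results (placeholder ARN).
            output: StreamProcessorOutput {
                kinesis_data_stream: Some(KinesisDataStream {
                    arn: Some("arn:aws:kinesis:...".to_string()),
                }),
            },
            // Face search settings: which collection to match against.
            settings: StreamProcessorSettings {
                face_search: Some(FaceSearchSettings {
                    collection_id: Some("my-collection".to_string()),
                    face_match_threshold: Some(85.0),
                }),
            },
            role_arn: "arn:aws:iam::...".to_string(),
        })
        .await?;
    println!("created: {:?}", response.stream_processor_arn);
    Ok(())
}
```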
The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.
You get the training summary S3 bucket location by calling DescribeProjectVersions.
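A minimal sketch of retrieving that location with rusoto_rekognition; the project ARN is a placeholder:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{DescribeProjectVersionsRequest, Rekognition, RekognitionClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .describe_project_versions(DescribeProjectVersionsRequest {
            project_arn: "arn:aws:rekognition:...".to_string(), // placeholder ARN
            ..Default::default()
        })
        .await?;
    for version in response.project_version_descriptions.unwrap_or_default() {
        if let Some(eval) = version.evaluation_result {
            // Summary.s3_object points at the training summary in S3.
            println!(
                "F1: {:?}, summary at: {:?}",
                eval.f1_score,
                eval.summary.and_then(|s| s.s3_object)
            );
        }
    }
    Ok(())
}
```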
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
Information about a technical cue segment. For more information, see SegmentDetection.
The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset.
A SageMaker Ground Truth format manifest file representing the dataset used for testing.
Information about a word or line of text detected by DetectText.
Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.
For more information, see Detecting Text in the Amazon Rekognition Developer Guide.
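A small helper showing how the Id/ParentId relationship can be used to group WORD detections under their LINE; the function is illustrative, not part of the API:

```rust
use rusoto_rekognition::TextDetection;
use std::collections::HashMap;

/// Group detected words under the line they belong to, keyed by the line's Id.
/// Each word's ParentId identifies its line; lines themselves have no parent.
fn group_words_by_line(detections: &[TextDetection]) -> HashMap<i64, Vec<&TextDetection>> {
    let mut lines: HashMap<i64, Vec<&TextDetection>> = HashMap::new();
    for det in detections {
        if det.type_.as_deref() == Some("WORD") {
            if let Some(parent) = det.parent_id {
                lines.entry(parent).or_default().push(det);
            }
        }
    }
    lines
}
```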
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
The dataset used for training.
A SageMaker Ground Truth format manifest file that represents the dataset used for training.
A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.
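A hedged sketch of inspecting UnindexedFace reasons after an IndexFaces call with rusoto_rekognition; the collection, bucket, and key are placeholders:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{Image, IndexFacesRequest, Rekognition, RekognitionClient, S3Object};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .index_faces(IndexFacesRequest {
            collection_id: "my-collection".to_string(), // hypothetical collection
            image: Image {
                s3_object: Some(S3Object {
                    bucket: Some("my-bucket".to_string()),
                    name: Some("crowd.jpg".to_string()),
                    ..Default::default()
                }),
                ..Default::default()
            },
            max_faces: Some(10),
            ..Default::default()
        })
        .await?;
    // Reasons include values such as EXCEEDS_MAX_FACES, LOW_CONFIDENCE,
    // and SMALL_BOUNDING_BOX.
    for unindexed in response.unindexed_faces.unwrap_or_default() {
        println!("not indexed: {:?}", unindexed.reasons);
    }
    Ok(())
}
```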
Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.
Information about a video that Amazon Rekognition analyzed.
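A minimal sketch of starting label detection on a stored video with rusoto_rekognition, including the SNS notification channel; the bucket, key, and ARNs are placeholders:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{
    NotificationChannel, Rekognition, RekognitionClient, S3Object, StartLabelDetectionRequest,
    Video,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = RekognitionClient::new(Region::UsEast1);
    let response = client
        .start_label_detection(StartLabelDetectionRequest {
            video: Video {
                s3_object: Some(S3Object {
                    bucket: Some("my-bucket".to_string()), // hypothetical bucket/key
                    name: Some("clip.mp4".to_string()),
                    ..Default::default()
                }),
            },
            // Completion status is published to this SNS topic; watch it (for
            // example via an SQS subscription) before calling GetLabelDetection
            // with the returned JobId.
            notification_channel: Some(NotificationChannel {
                sns_topic_arn: "arn:aws:sns:...".to_string(),
                role_arn: "arn:aws:iam::...".to_string(),
            }),
            min_confidence: Some(50.0),
            ..Default::default()
        })
        .await?;
    println!("job id: {:?}", response.job_id);
    Ok(())
}
```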
Errors returned by CompareFaces
Errors returned by CreateCollection
Errors returned by CreateProject
Errors returned by CreateProjectVersion
Errors returned by CreateStreamProcessor
Errors returned by DeleteCollection
Errors returned by DeleteFaces
Errors returned by DeleteProject
Errors returned by DeleteProjectVersion
Errors returned by DeleteStreamProcessor
Errors returned by DescribeCollection
Errors returned by DescribeProjectVersions
Errors returned by DescribeProjects
Errors returned by DescribeStreamProcessor
Errors returned by DetectCustomLabels
Errors returned by DetectFaces
Errors returned by DetectLabels
Errors returned by DetectModerationLabels
Errors returned by DetectText
Errors returned by GetCelebrityInfo
Errors returned by GetCelebrityRecognition
Errors returned by GetContentModeration
Errors returned by GetFaceDetection
Errors returned by GetFaceSearch
Errors returned by GetLabelDetection
Errors returned by GetPersonTracking
Errors returned by GetSegmentDetection
Errors returned by GetTextDetection
Errors returned by IndexFaces
Errors returned by ListCollections
Errors returned by ListFaces
Errors returned by ListStreamProcessors
Errors returned by RecognizeCelebrities
Errors returned by SearchFacesByImage
Errors returned by SearchFaces
Errors returned by StartCelebrityRecognition
Errors returned by StartContentModeration
Errors returned by StartFaceDetection
Errors returned by StartFaceSearch
Errors returned by StartLabelDetection
Errors returned by StartPersonTracking
Errors returned by StartProjectVersion
Errors returned by StartSegmentDetection
Errors returned by StartStreamProcessor
Errors returned by StartTextDetection
Errors returned by StopProjectVersion
Errors returned by StopStreamProcessor
Trait representing the capabilities of the Amazon Rekognition API. Amazon Rekognition clients implement this trait.