Crate rusoto_rekognition

This is the Amazon Rekognition API reference.

If you're using the service, you're probably looking for RekognitionClient and Rekognition.
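For example, here is a minimal sketch of constructing a client and calling DetectLabels. It assumes an async rusoto release (0.43 or later) with Tokio as the executor; the bucket and object names are placeholders:

```rust
use rusoto_core::Region;
use rusoto_rekognition::{DetectLabelsRequest, Image, Rekognition, RekognitionClient, S3Object};

#[tokio::main]
async fn main() {
    // Credentials come from the default provider chain (env vars, profile, etc.).
    let client = RekognitionClient::new(Region::UsEast1);

    let request = DetectLabelsRequest {
        image: Image {
            s_3_object: Some(S3Object {
                bucket: Some("my-bucket".to_string()), // placeholder
                name: Some("photo.jpg".to_string()),   // placeholder
                ..Default::default()
            }),
            ..Default::default()
        },
        max_labels: Some(10),
        min_confidence: Some(75.0),
    };

    match client.detect_labels(request).await {
        Ok(response) => {
            for label in response.labels.unwrap_or_default() {
                println!("{:?} ({:?})", label.name, label.confidence);
            }
        }
        Err(err) => eprintln!("DetectLabels failed: {}", err),
    }
}
```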

Structs

AgeRange

Structure containing the estimated age range, in years, for a face.

Amazon Rekognition estimates an age range for faces detected in the input image. Estimated age ranges can overlap. A face of a 5-year-old might have an estimated range of 4-6, while the face of a 6-year-old might have an estimated range of 4-8.

Asset

Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.

AudioMetadata

Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.

Beard

Indicates whether or not the face has a beard, and the confidence level in the determination.

BoundingBox

Identifies the bounding box around the label, face, text or personal protective equipment. The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).

The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).

The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1.

The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
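As a sketch of how these ratios map back to pixels, the helper below converts a rusoto BoundingBox (whose fields are optional f32 ratios) into pixel coordinates; the image dimensions are supplied by the caller:

```rust
use rusoto_rekognition::BoundingBox;

/// Convert a ratio-based BoundingBox into (left, top, width, height) in pixels.
/// Values are not clamped, since Rekognition can legitimately return negative
/// ratios or ratios greater than 1 for faces at the image edge.
fn to_pixels(bbox: &BoundingBox, img_w: f32, img_h: f32) -> (f32, f32, f32, f32) {
    (
        bbox.left.unwrap_or(0.0) * img_w,
        bbox.top.unwrap_or(0.0) * img_h,
        bbox.width.unwrap_or(0.0) * img_w,
        bbox.height.unwrap_or(0.0) * img_h,
    )
}

// For the 700x200 example above: left=0.5 -> 350 px, top=0.25 -> 50 px, width=0.1 -> 70 px.
```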

Celebrity

Provides information about a celebrity recognized by the RecognizeCelebrities operation.

CelebrityDetail

Information about a recognized celebrity.

CelebrityRecognition

Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.

CompareFacesMatch

Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.

CompareFacesRequest
CompareFacesResponse
ComparedFace

Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.

ComparedSourceImageFace

Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.

ContentModerationDetection

Information about an unsafe content label detection in a stored video.

CoversBodyPart

Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.

CreateCollectionRequest
CreateCollectionResponse
CreateProjectRequest
CreateProjectResponse
CreateProjectVersionRequest
CreateProjectVersionResponse
CreateStreamProcessorRequest
CreateStreamProcessorResponse
CustomLabel

A custom label detected in an image by a call to DetectCustomLabels.

DeleteCollectionRequest
DeleteCollectionResponse
DeleteFacesRequest
DeleteFacesResponse
DeleteProjectRequest
DeleteProjectResponse
DeleteProjectVersionRequest
DeleteProjectVersionResponse
DeleteStreamProcessorRequest
DeleteStreamProcessorResponse
DescribeCollectionRequest
DescribeCollectionResponse
DescribeProjectVersionsRequest
DescribeProjectVersionsResponse
DescribeProjectsRequest
DescribeProjectsResponse
DescribeStreamProcessorRequest
DescribeStreamProcessorResponse
DetectCustomLabelsRequest
DetectCustomLabelsResponse
DetectFacesRequest
DetectFacesResponse
DetectLabelsRequest
DetectLabelsResponse
DetectModerationLabelsRequest
DetectModerationLabelsResponse
DetectProtectiveEquipmentRequest
DetectProtectiveEquipmentResponse
DetectTextFilters

A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
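A hedged sketch of building these filters with rusoto's generated structs (field names follow rusoto's snake_case conversion of the API shapes): only words that are at least 50% confident, at least 5% of the image in height, and inside the top half of the image are returned.

```rust
use rusoto_rekognition::{BoundingBox, DetectTextFilters, DetectionFilter, RegionOfInterest};

fn top_half_text_filters() -> DetectTextFilters {
    DetectTextFilters {
        // WordFilter: minimum confidence and minimum bounding-box height.
        word_filter: Some(DetectionFilter {
            min_confidence: Some(50.0),
            min_bounding_box_height: Some(0.05),
            min_bounding_box_width: None,
        }),
        // RegionOfInterest: the top half of the image, in ratio coordinates.
        regions_of_interest: Some(vec![RegionOfInterest {
            bounding_box: Some(BoundingBox {
                left: Some(0.0),
                top: Some(0.0),
                width: Some(1.0),
                height: Some(0.5),
            }),
        }]),
    }
}
```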

DetectTextRequest
DetectTextResponse
DetectionFilter

A set of parameters that allow you to filter out certain results from your returned results.

Emotion

The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.

EquipmentDetection

Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.

EvaluationResult

The evaluation results for the training of a model.

EyeOpen

Indicates whether or not the eyes on the face are open, and the confidence level in the determination.

Eyeglasses

Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.

Face

Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.

FaceDetail

Structure containing attributes of the face that the algorithm detected.

A FaceDetail object contains either the default facial attributes or all facial attributes. The default attributes are BoundingBox, Confidence, Landmarks, Pose, and Quality.

GetFaceDetection is the only Amazon Rekognition Video stored video operation that can return a FaceDetail object with all attributes. To specify which attributes to return, use the FaceAttributes input parameter for StartFaceDetection. The following Amazon Rekognition Video operations return only the default attributes. The corresponding Start operations don't have a FaceAttributes input parameter.

  • GetCelebrityRecognition

  • GetPersonTracking

  • GetFaceSearch

The Amazon Rekognition Image DetectFaces and IndexFaces operations can return all facial attributes. To specify which attributes to return, use the Attributes input parameter for DetectFaces. For IndexFaces, use the DetectAttributes input parameter.
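For example, a sketch of a DetectFaces request that asks for all facial attributes rather than the default set (bucket and object names are placeholders):

```rust
use rusoto_rekognition::{DetectFacesRequest, Image, S3Object};

fn detect_all_attributes() -> DetectFacesRequest {
    DetectFacesRequest {
        // "ALL" requests every facial attribute; "DEFAULT" (or None) returns
        // only BoundingBox, Confidence, Landmarks, Pose, and Quality.
        attributes: Some(vec!["ALL".to_string()]),
        image: Image {
            s_3_object: Some(S3Object {
                bucket: Some("my-bucket".to_string()), // placeholder
                name: Some("face.jpg".to_string()),    // placeholder
                version: None,
            }),
            bytes: None,
        },
    }
}
```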

FaceDetection

Information about a face detected in a video analysis request and the time the face was detected in the video.

FaceMatch

Provides face metadata. In addition, it also provides the confidence in the match of this face with the input face.

FaceRecord

Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.

FaceSearchSettings

Input face recognition parameters for an Amazon Rekognition stream processor. FaceSearchSettings is a request parameter for CreateStreamProcessor.

Gender

The predicted gender of a detected face.

Amazon Rekognition makes gender binary (male/female) predictions based on the physical appearance of a face in a particular image. This kind of prediction is not designed to categorize a person’s gender identity, and you shouldn't use Amazon Rekognition to make such a determination. For example, a male actor wearing a long-haired wig and earrings for a role might be predicted as female.

Using Amazon Rekognition to make gender binary predictions is best suited for use cases where aggregate gender distribution statistics need to be analyzed without identifying specific users. For example, the percentage of female users compared to male users on a social media platform.

We don't recommend using gender binary predictions to make decisions that impact an individual's rights, privacy, or access to services.

Geometry

Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.

GetCelebrityInfoRequest
GetCelebrityInfoResponse
GetCelebrityRecognitionRequest
GetCelebrityRecognitionResponse
GetContentModerationRequest
GetContentModerationResponse
GetFaceDetectionRequest
GetFaceDetectionResponse
GetFaceSearchRequest
GetFaceSearchResponse
GetLabelDetectionRequest
GetLabelDetectionResponse
GetPersonTrackingRequest
GetPersonTrackingResponse
GetSegmentDetectionRequest
GetSegmentDetectionResponse
GetTextDetectionRequest
GetTextDetectionResponse
GroundTruthManifest

The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.

HumanLoopActivationOutput

Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.

HumanLoopConfig

Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.

HumanLoopDataAttributes

Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.

Image

Provides the input image either as bytes or an S3 object.

You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource Based Policies in the Amazon Rekognition Developer Guide.
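The two ways of supplying an image look roughly like this in rusoto (a sketch, assuming a release that represents blobs as bytes::Bytes; the SDK base64-encodes the bytes on the wire for you, and the bucket and object names are placeholders):

```rust
use bytes::Bytes;
use rusoto_rekognition::{Image, S3Object};

// Option 1: pass raw image bytes, e.g. loaded from a local file.
fn image_from_bytes(data: Vec<u8>) -> Image {
    Image {
        bytes: Some(Bytes::from(data)),
        s_3_object: None,
    }
}

// Option 2: reference an object already stored in S3. The bucket must be in
// the same region as the Rekognition endpoint you call.
fn image_from_s3() -> Image {
    Image {
        bytes: None,
        s_3_object: Some(S3Object {
            bucket: Some("my-bucket".to_string()),         // placeholder
            name: Some("photos/portrait.jpg".to_string()), // placeholder
            version: None,
        }),
    }
}
```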

ImageQuality

Identifies face image brightness and sharpness.

IndexFacesRequest
IndexFacesResponse
Instance

An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).

KinesisDataStream

The Kinesis data stream to which the analysis results of an Amazon Rekognition stream processor are streamed. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

KinesisVideoStream

Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

Label

Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.

LabelDetection

Information about a label detected in a video analysis request and the time the label was detected in the video.

Landmark

Indicates the location of the landmark on the face.

ListCollectionsRequest
ListCollectionsResponse
ListFacesRequest
ListFacesResponse
ListStreamProcessorsRequest
ListStreamProcessorsResponse
ModerationLabel

Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.

MouthOpen

Indicates whether or not the mouth on the face is open, and the confidence level in the determination.

Mustache

Indicates whether or not the face has a mustache, and the confidence level in the determination.

NotificationChannel

The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see api-video.

OutputConfig

The S3 bucket and folder location where training output is placed.

Parent

A parent label for a label. A label can have 0, 1, or more parents.

PersonDetail

Details about a person detected in a video analysis request.

PersonDetection

Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.

For more information, see GetPersonTracking in the Amazon Rekognition Developer Guide.

PersonMatch

Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.

Point

The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.

An array of Point objects, Polygon, is returned by DetectText and by DetectCustomLabels. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry in the Amazon Rekognition Developer Guide.
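A small sketch of converting a Polygon's ratio-based points to pixel coordinates (the image dimensions are supplied by the caller):

```rust
use rusoto_rekognition::Point;

/// Map each ratio-based Point of a Polygon to an (x, y) pixel coordinate.
fn polygon_to_pixels(polygon: &[Point], img_w: f32, img_h: f32) -> Vec<(f32, f32)> {
    polygon
        .iter()
        .map(|p| (p.x.unwrap_or(0.0) * img_w, p.y.unwrap_or(0.0) * img_h))
        .collect()
}

// From the example above: X=0.5, Y=0.25 on a 700x200 image -> pixel (350, 50).
```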

Pose

Indicates the pose of the face as determined by its pitch, roll, and yaw.

ProjectDescription

A description of an Amazon Rekognition Custom Labels project.

ProjectVersionDescription

The description of a version of a model.

ProtectiveEquipmentBodyPart

Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.

ProtectiveEquipmentPerson

A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.

ProtectiveEquipmentSummarizationAttributes

Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.

ProtectiveEquipmentSummary

Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required personal protective equipment (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).

To get a total for each category, use the size of the field array. For example, to find out how many people were detected as wearing the specified PPE, use the size of the PersonsWithRequiredEquipment array. If you want to find out more about a person, such as the location (BoundingBox) of the person on the image, use the person ID in each array element. Each person ID matches the ID field of a ProtectiveEquipmentPerson object returned in the Persons array by DetectProtectiveEquipment.
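As a sketch, tallying each category from the summary amounts to taking the length of the corresponding ID array:

```rust
use rusoto_rekognition::ProtectiveEquipmentSummary;

/// Count persons per PPE category; each array holds person IDs that match
/// the ID field of a ProtectiveEquipmentPerson in the Persons array.
fn ppe_counts(summary: &ProtectiveEquipmentSummary) -> (usize, usize, usize) {
    let count = |ids: &Option<Vec<i64>>| ids.as_ref().map_or(0, |v| v.len());
    (
        count(&summary.persons_with_required_equipment),
        count(&summary.persons_without_required_equipment),
        count(&summary.persons_indeterminate),
    )
}
```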

RecognizeCelebritiesRequest
RecognizeCelebritiesResponse
RegionOfInterest

Specifies a location within the frame that Rekognition checks for text. Uses a BoundingBox object to set a region of the screen.

A word is included in the region if the word is more than half in that region. If there is more than one region, the word will be compared with all regions of the screen. Any word more than half in a region is kept in the results.

RekognitionClient

A client for the Amazon Rekognition API.

S3Object

Provides the S3 bucket name and object name.

The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource-Based Policies in the Amazon Rekognition Developer Guide.

SearchFacesByImageRequest
SearchFacesByImageResponse
SearchFacesRequest
SearchFacesResponse
SegmentDetection

A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.

SegmentTypeInfo

Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.

ShotSegment

Information about a shot detection segment detected in a video. For more information, see SegmentDetection.

Smile

Indicates whether or not the face is smiling, and the confidence level in the determination.

StartCelebrityRecognitionRequest
StartCelebrityRecognitionResponse
StartContentModerationRequest
StartContentModerationResponse
StartFaceDetectionRequest
StartFaceDetectionResponse
StartFaceSearchRequest
StartFaceSearchResponse
StartLabelDetectionRequest
StartLabelDetectionResponse
StartPersonTrackingRequest
StartPersonTrackingResponse
StartProjectVersionRequest
StartProjectVersionResponse
StartSegmentDetectionFilters

Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.

StartSegmentDetectionRequest
StartSegmentDetectionResponse
StartShotDetectionFilter

Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.

StartStreamProcessorRequest
StartStreamProcessorResponse
StartTechnicalCueDetectionFilter

Filters for the technical segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.

StartTextDetectionFilters

Set of optional parameters that let you set the criteria text must meet to be included in your response. WordFilter looks at a word's height, width and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.

StartTextDetectionRequest
StartTextDetectionResponse
StopProjectVersionRequest
StopProjectVersionResponse
StopStreamProcessorRequest
StopStreamProcessorResponse
StreamProcessor

An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.

StreamProcessorInput

Information about the source streaming video.

StreamProcessorOutput

Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.

StreamProcessorSettings

Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.

Summary

The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.

You get the training summary S3 bucket location by calling DescribeProjectVersions.

Sunglasses

Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.

TechnicalCueSegment

Information about a technical cue segment. For more information, see SegmentDetection.

TestingData

The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset.

TestingDataResult

SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.

TextDetection

Information about a word or line of text detected by DetectText.

The DetectedText field contains the text that Amazon Rekognition detected in the image.

Every word and line has an identifier (Id). Each word belongs to a line and has a parent identifier (ParentId) that identifies the line of text in which the word appears. The word Id is also an index for the word within a line of words.

For more information, see Detecting Text in the Amazon Rekognition Developer Guide.
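For instance, a sketch that reassembles lines from a DetectText response by grouping WORD detections under their ParentId (rusoto renames the Type field to `type_` because `type` is a Rust keyword):

```rust
use std::collections::HashMap;

use rusoto_rekognition::TextDetection;

/// Group detected WORDs by the Id of the LINE they belong to.
fn words_by_line(detections: &[TextDetection]) -> HashMap<i64, Vec<&str>> {
    let mut lines: HashMap<i64, Vec<&str>> = HashMap::new();
    for d in detections {
        if d.type_.as_deref() == Some("WORD") {
            if let (Some(parent), Some(text)) = (d.parent_id, d.detected_text.as_deref()) {
                lines.entry(parent).or_default().push(text);
            }
        }
    }
    lines
}
```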

TextDetectionResult

Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.

TrainingData

The dataset used for training.

TrainingDataResult

SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during training.

UnindexedFace

A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.

ValidationData

Contains the Amazon S3 bucket location of the validation data for a model training job.

The validation data includes error information for individual JSON lines in the dataset. For more information, see Debugging a Failed Model Training in the Amazon Rekognition Custom Labels Developer Guide.

You get the ValidationData object for the training dataset (TrainingDataResult) and the test dataset (TestingDataResult) by calling DescribeProjectVersions.

The assets array contains a single Asset object. The GroundTruthManifest field of the Asset object contains the S3 bucket location of the validation data.
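Navigating that structure in rusoto is a chain of optional fields; a sketch (assuming the generated field name s_3_object for the S3Object member):

```rust
use rusoto_rekognition::{S3Object, ValidationData};

/// Pull the S3 location of the validation manifest out of a ValidationData
/// returned by DescribeProjectVersions, if every link in the chain is present.
fn validation_manifest(data: &ValidationData) -> Option<&S3Object> {
    data.assets
        .as_ref()?
        .first()?
        .ground_truth_manifest
        .as_ref()?
        .s_3_object
        .as_ref()
}
```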

Video

Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.
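For example, a sketch of a StartLabelDetection request pointing at a stored video (bucket and key are placeholders):

```rust
use rusoto_rekognition::{S3Object, StartLabelDetectionRequest, Video};

fn start_label_detection_request() -> StartLabelDetectionRequest {
    StartLabelDetectionRequest {
        video: Video {
            s_3_object: Some(S3Object {
                bucket: Some("my-bucket".to_string()),     // placeholder
                name: Some("videos/clip.mp4".to_string()), // placeholder
                version: None,
            }),
        },
        min_confidence: Some(50.0),
        // Remaining fields (ClientRequestToken, JobTag, NotificationChannel)
        // are left at their defaults.
        ..Default::default()
    }
}
```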

VideoMetadata

Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition video operation.

Enums

CompareFacesError

Errors returned by CompareFaces

CreateCollectionError

Errors returned by CreateCollection

CreateProjectError

Errors returned by CreateProject

CreateProjectVersionError

Errors returned by CreateProjectVersion

CreateStreamProcessorError

Errors returned by CreateStreamProcessor

DeleteCollectionError

Errors returned by DeleteCollection

DeleteFacesError

Errors returned by DeleteFaces

DeleteProjectError

Errors returned by DeleteProject

DeleteProjectVersionError

Errors returned by DeleteProjectVersion

DeleteStreamProcessorError

Errors returned by DeleteStreamProcessor

DescribeCollectionError

Errors returned by DescribeCollection

DescribeProjectVersionsError

Errors returned by DescribeProjectVersions

DescribeProjectsError

Errors returned by DescribeProjects

DescribeStreamProcessorError

Errors returned by DescribeStreamProcessor

DetectCustomLabelsError

Errors returned by DetectCustomLabels

DetectFacesError

Errors returned by DetectFaces

DetectLabelsError

Errors returned by DetectLabels

DetectModerationLabelsError

Errors returned by DetectModerationLabels

DetectProtectiveEquipmentError

Errors returned by DetectProtectiveEquipment

DetectTextError

Errors returned by DetectText

GetCelebrityInfoError

Errors returned by GetCelebrityInfo

GetCelebrityRecognitionError

Errors returned by GetCelebrityRecognition

GetContentModerationError

Errors returned by GetContentModeration

GetFaceDetectionError

Errors returned by GetFaceDetection

GetFaceSearchError

Errors returned by GetFaceSearch

GetLabelDetectionError

Errors returned by GetLabelDetection

GetPersonTrackingError

Errors returned by GetPersonTracking

GetSegmentDetectionError

Errors returned by GetSegmentDetection

GetTextDetectionError

Errors returned by GetTextDetection

IndexFacesError

Errors returned by IndexFaces

ListCollectionsError

Errors returned by ListCollections

ListFacesError

Errors returned by ListFaces

ListStreamProcessorsError

Errors returned by ListStreamProcessors

RecognizeCelebritiesError

Errors returned by RecognizeCelebrities

SearchFacesByImageError

Errors returned by SearchFacesByImage

SearchFacesError

Errors returned by SearchFaces

StartCelebrityRecognitionError

Errors returned by StartCelebrityRecognition

StartContentModerationError

Errors returned by StartContentModeration

StartFaceDetectionError

Errors returned by StartFaceDetection

StartFaceSearchError

Errors returned by StartFaceSearch

StartLabelDetectionError

Errors returned by StartLabelDetection

StartPersonTrackingError

Errors returned by StartPersonTracking

StartProjectVersionError

Errors returned by StartProjectVersion

StartSegmentDetectionError

Errors returned by StartSegmentDetection

StartStreamProcessorError

Errors returned by StartStreamProcessor

StartTextDetectionError

Errors returned by StartTextDetection

StopProjectVersionError

Errors returned by StopProjectVersion

StopStreamProcessorError

Errors returned by StopStreamProcessor

Traits

Rekognition

Trait representing the capabilities of the Amazon Rekognition API. Amazon Rekognition clients implement this trait.