Struct aws_sdk_rekognition::Client
pub struct Client { /* private fields */ }
Client for Amazon Rekognition
Client for invoking operations on Amazon Rekognition. Each operation on Amazon Rekognition is a method on this
struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
Examples
Constructing a client and invoking an operation
// create a shared configuration. This can be used & shared between multiple service clients.
let shared_config = aws_config::load_from_env().await;
let client = aws_sdk_rekognition::Client::new(&shared_config);
// invoke an operation
/* let rsp = client
    .<operation_name>()
    .<param>("some value")
    .send().await; */
Constructing a client with custom configuration
use aws_config::RetryConfig;
let shared_config = aws_config::load_from_env().await;
let config = aws_sdk_rekognition::config::Builder::from(&shared_config)
.retry_config(RetryConfig::disabled())
.build();
let client = aws_sdk_rekognition::Client::from_conf(config);
Implementations
impl Client
pub fn with_config(
    client: Client<DynConnector, DynMiddleware<DynConnector>>,
    conf: Config
) -> Self
Creates a client with the given service configuration.
impl Client
pub fn compare_faces(&self) -> CompareFaces
Constructs a fluent builder for the CompareFaces operation.
- The fluent builder is configurable:
source_image(Image) / set_source_image(Option<Image>): The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
target_image(Image) / set_target_image(Option<Image>): The target image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
similarity_threshold(f32) / set_similarity_threshold(Option<f32>): The minimum level of confidence in the face matches that a match must meet to be included in the FaceMatches array.
quality_filter(QualityFilter) / set_quality_filter(Option<QualityFilter>): A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t compared. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE. To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
- On success, responds with CompareFacesOutput with field(s):
source_image_face(Option<ComparedSourceImageFace>): The face in the source image that was used for comparison.
face_matches(Option<Vec<CompareFacesMatch>>): An array of faces in the target image that match the source image face. Each CompareFacesMatch object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity score for the face in the bounding box and the face in the source image.
unmatched_faces(Option<Vec<ComparedFace>>): An array of faces in the target image that did not match the source image face.
source_image_orientation_correction(Option<OrientationCorrection>): The value of SourceImageOrientationCorrection is always null. If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
target_image_orientation_correction(Option<OrientationCorrection>): The value of TargetImageOrientationCorrection is always null. If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
- On failure, responds with
SdkError<CompareFacesError>
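Putting the parameters above together, a CompareFaces call might look like the following minimal sketch. It assumes the Image and S3Object builders from this crate’s model module; the bucket and object names are hypothetical.
use aws_sdk_rekognition::model::{Image, S3Object};

// Reference the (hypothetical) source and target images stored in S3.
let source = Image::builder()
    .s3_object(S3Object::builder().bucket("my-bucket").name("source.jpg").build())
    .build();
let target = Image::builder()
    .s3_object(S3Object::builder().bucket("my-bucket").name("target.jpg").build())
    .build();

let resp = client
    .compare_faces()
    .source_image(source)
    .target_image(target)
    .similarity_threshold(80.0)
    .send()
    .await?;
println!("compare_faces response: {:?}", resp);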
pub fn create_collection(&self) -> CreateCollection
Constructs a fluent builder for the CreateCollection operation.
- The fluent builder is configurable:
collection_id(impl Into<String>) / set_collection_id(Option<String>): ID for the collection that you are creating.
tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>): A set of tags (key-value pairs) that you want to attach to the collection.
- On success, responds with CreateCollectionOutput with field(s):
status_code(Option<i32>): HTTP status code indicating the result of the operation.
collection_arn(Option<String>): Amazon Resource Name (ARN) of the collection. You can use this to manage permissions on your resources.
face_model_version(Option<String>): Latest face model being used with the collection. For more information, see Model versioning.
- On failure, responds with
SdkError<CreateCollectionError>
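A minimal CreateCollection sketch, assuming a client built as shown above and a hypothetical collection ID:
let resp = client
    .create_collection()
    .collection_id("my-faces")
    .send()
    .await?;
// The output derives Debug, so it can be inspected directly.
println!("create_collection response: {:?}", resp);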
pub fn create_dataset(&self) -> CreateDataset
Constructs a fluent builder for the CreateDataset operation.
- The fluent builder is configurable:
dataset_source(DatasetSource) / set_dataset_source(Option<DatasetSource>): The source files for the dataset. You can specify the ARN of an existing dataset or specify the Amazon S3 bucket location of an Amazon SageMaker format manifest file. If you don’t specify datasetSource, an empty dataset is created. To add labeled images to the dataset, you can use the console or call UpdateDatasetEntries.
dataset_type(DatasetType) / set_dataset_type(Option<DatasetType>): The type of the dataset. Specify train to create a training dataset. Specify test to create a test dataset.
project_arn(impl Into<String>) / set_project_arn(Option<String>): The ARN of the Amazon Rekognition Custom Labels project to which you want to assign the dataset.
- On success, responds with CreateDatasetOutput with field(s):
dataset_arn(Option<String>): The ARN of the created Amazon Rekognition Custom Labels dataset.
- On failure, responds with
SdkError<CreateDatasetError>
pub fn create_project(&self) -> CreateProject
Constructs a fluent builder for the CreateProject operation.
- The fluent builder is configurable:
project_name(impl Into<String>) / set_project_name(Option<String>): The name of the project to create.
- On success, responds with CreateProjectOutput with field(s):
project_arn(Option<String>): The Amazon Resource Name (ARN) of the new project. You can use the ARN to configure IAM access to the project.
- On failure, responds with
SdkError<CreateProjectError>
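CreateProject only needs a project name; a sketch with a hypothetical name:
let resp = client
    .create_project()
    .project_name("my-custom-labels-project")
    .send()
    .await?;
println!("create_project response: {:?}", resp);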
pub fn create_project_version(&self) -> CreateProjectVersion
Constructs a fluent builder for the CreateProjectVersion operation.
- The fluent builder is configurable:
project_arn(impl Into<String>) / set_project_arn(Option<String>): The ARN of the Amazon Rekognition Custom Labels project that manages the model that you want to train.
version_name(impl Into<String>) / set_version_name(Option<String>): A name for the version of the model. This value must be unique.
output_config(OutputConfig) / set_output_config(Option<OutputConfig>): The Amazon S3 bucket location to store the results of training. The S3 bucket can be in any AWS account as long as the caller has s3:PutObject permissions on the S3 bucket.
training_data(TrainingData) / set_training_data(Option<TrainingData>): Specifies an external manifest that the service uses to train the model. If you specify TrainingData you must also specify TestingData. The project must not have any associated datasets.
testing_data(TestingData) / set_testing_data(Option<TestingData>): Specifies an external manifest that the service uses to test the model. If you specify TestingData you must also specify TrainingData. The project must not have any associated datasets.
tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>): A set of tags (key-value pairs) that you want to attach to the model.
kms_key_id(impl Into<String>) / set_kms_key_id(Option<String>): The identifier for your AWS Key Management Service key (AWS KMS key). You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of your KMS key, an alias for your KMS key, or an alias ARN. The key is used to encrypt training and test images copied into the service for model training. Your source images are unaffected. The key is also used to encrypt training results and manifest files written to the output Amazon S3 bucket (OutputConfig). If you choose to use your own KMS key, you need the following permissions on the KMS key.
- kms:CreateGrant
- kms:DescribeKey
- kms:GenerateDataKey
- kms:Decrypt
If you don’t specify a value for KmsKeyId, images copied into the service are encrypted using a key that AWS owns and manages.
- On success, responds with CreateProjectVersionOutput with field(s):
project_version_arn(Option<String>): The ARN of the model version that was created. Use DescribeProjectVersion to get the current status of the training operation.
- On failure, responds with
SdkError<CreateProjectVersionError>
pub fn create_stream_processor(&self) -> CreateStreamProcessor
Constructs a fluent builder for the CreateStreamProcessor operation.
- The fluent builder is configurable:
input(StreamProcessorInput) / set_input(Option<StreamProcessorInput>): Kinesis video stream that provides the source streaming video. If you are using the AWS CLI, the parameter name is StreamProcessorInput.
output(StreamProcessorOutput) / set_output(Option<StreamProcessorOutput>): Kinesis data stream to which Amazon Rekognition Video puts the analysis results. If you are using the AWS CLI, the parameter name is StreamProcessorOutput.
name(impl Into<String>) / set_name(Option<String>): An identifier you assign to the stream processor. You can use Name to manage the stream processor. For example, you can get the current status of the stream processor by calling DescribeStreamProcessor. Name is idempotent.
settings(StreamProcessorSettings) / set_settings(Option<StreamProcessorSettings>): Face recognition input parameters to be used by the stream processor. Includes the collection to use for face recognition and the face attributes to detect.
role_arn(impl Into<String>) / set_role_arn(Option<String>): ARN of the IAM role that allows access to the stream processor.
tags(HashMap<String, String>) / set_tags(Option<HashMap<String, String>>): A set of tags (key-value pairs) that you want to attach to the stream processor.
- On success, responds with CreateStreamProcessorOutput with field(s):
stream_processor_arn(Option<String>): ARN for the newly created stream processor.
- On failure, responds with
SdkError<CreateStreamProcessorError>
pub fn delete_collection(&self) -> DeleteCollection
Constructs a fluent builder for the DeleteCollection operation.
- The fluent builder is configurable:
collection_id(impl Into<String>) / set_collection_id(Option<String>): ID of the collection to delete.
- On success, responds with DeleteCollectionOutput with field(s):
status_code(Option<i32>): HTTP status code that indicates the result of the operation.
- On failure, responds with
SdkError<DeleteCollectionError>
pub fn delete_dataset(&self) -> DeleteDataset
Constructs a fluent builder for the DeleteDataset operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>) / set_dataset_arn(Option<String>): The ARN of the Amazon Rekognition Custom Labels dataset that you want to delete.
- On success, responds with DeleteDatasetOutput
- On failure, responds with
SdkError<DeleteDatasetError>
pub fn delete_faces(&self) -> DeleteFaces
Constructs a fluent builder for the DeleteFaces operation.
- The fluent builder is configurable:
collection_id(impl Into<String>) / set_collection_id(Option<String>): Collection from which to remove the specific faces.
face_ids(Vec<String>) / set_face_ids(Option<Vec<String>>): An array of face IDs to delete.
- On success, responds with DeleteFacesOutput with field(s):
deleted_faces(Option<Vec<String>>): An array of strings (face IDs) of the faces that were deleted.
- On failure, responds with
SdkError<DeleteFacesError>
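A DeleteFaces sketch; the collection ID and face ID below are hypothetical placeholders, and set_face_ids is used to pass the whole list at once:
let resp = client
    .delete_faces()
    .collection_id("my-faces")
    .set_face_ids(Some(vec![
        "11111111-2222-3333-4444-555555555555".to_string(), // placeholder face ID
    ]))
    .send()
    .await?;
println!("delete_faces response: {:?}", resp);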
pub fn delete_project(&self) -> DeleteProject
Constructs a fluent builder for the DeleteProject operation.
- The fluent builder is configurable:
project_arn(impl Into<String>) / set_project_arn(Option<String>): The Amazon Resource Name (ARN) of the project that you want to delete.
- On success, responds with DeleteProjectOutput with field(s):
status(Option<ProjectStatus>): The current status of the delete project operation.
- On failure, responds with
SdkError<DeleteProjectError>
pub fn delete_project_version(&self) -> DeleteProjectVersion
Constructs a fluent builder for the DeleteProjectVersion operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>) / set_project_version_arn(Option<String>): The Amazon Resource Name (ARN) of the model version that you want to delete.
- On success, responds with DeleteProjectVersionOutput with field(s):
status(Option<ProjectVersionStatus>): The status of the deletion operation.
- On failure, responds with
SdkError<DeleteProjectVersionError>
pub fn delete_stream_processor(&self) -> DeleteStreamProcessor
Constructs a fluent builder for the DeleteStreamProcessor operation.
- The fluent builder is configurable:
name(impl Into<String>) / set_name(Option<String>): The name of the stream processor you want to delete.
- On success, responds with DeleteStreamProcessorOutput
- On failure, responds with
SdkError<DeleteStreamProcessorError>
pub fn describe_collection(&self) -> DescribeCollection
Constructs a fluent builder for the DescribeCollection operation.
- The fluent builder is configurable:
collection_id(impl Into<String>) / set_collection_id(Option<String>): The ID of the collection to describe.
- On success, responds with DescribeCollectionOutput with field(s):
face_count(Option<i64>): The number of faces that are indexed into the collection. To index faces into a collection, use IndexFaces.
face_model_version(Option<String>): The version of the face model that’s used by the collection for face detection. For more information, see Model Versioning in the Amazon Rekognition Developer Guide.
collection_arn(Option<String>): The Amazon Resource Name (ARN) of the collection.
creation_timestamp(Option<DateTime>): The number of milliseconds since the Unix epoch time until the creation of the collection. The Unix epoch time is 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970.
- On failure, responds with
SdkError<DescribeCollectionError>
pub fn describe_dataset(&self) -> DescribeDataset
Constructs a fluent builder for the DescribeDataset operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>) / set_dataset_arn(Option<String>): The Amazon Resource Name (ARN) of the dataset that you want to describe.
- On success, responds with DescribeDatasetOutput with field(s):
dataset_description(Option<DatasetDescription>): The description for the dataset.
- On failure, responds with
SdkError<DescribeDatasetError>
pub fn describe_projects(&self) -> DescribeProjects
Constructs a fluent builder for the DescribeProjects operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32) / set_max_results(Option<i32>): The maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
project_names(Vec<String>) / set_project_names(Option<Vec<String>>): A list of the projects that you want Amazon Rekognition Custom Labels to describe. If you don’t specify a value, the response includes descriptions for all the projects in your AWS account.
- On success, responds with DescribeProjectsOutput with field(s):
project_descriptions(Option<Vec<ProjectDescription>>): A list of project descriptions. The list is sorted by the date and time the projects are created.
next_token(Option<String>): If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<DescribeProjectsError>
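Because DescribeProjects supports pagination, a sketch that walks every page via into_paginator() follows. It assumes the stream returned by the paginator’s send() exposes an async next() method; depending on the SDK version, a StreamExt trait (for example from tokio_stream) may need to be in scope instead.
let mut pages = client
    .describe_projects()
    .max_results(10)
    .into_paginator()
    .send();
// Each page is a Result<DescribeProjectsOutput, SdkError<DescribeProjectsError>>.
while let Some(page) = pages.next().await {
    println!("describe_projects page: {:?}", page?);
}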
pub fn describe_project_versions(&self) -> DescribeProjectVersions
Constructs a fluent builder for the DescribeProjectVersions operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
project_arn(impl Into<String>) / set_project_arn(Option<String>): The Amazon Resource Name (ARN) of the project that contains the models you want to describe.
version_names(Vec<String>) / set_version_names(Option<Vec<String>>): A list of model version names that you want to describe. You can add up to 10 model version names to the list. If you don’t specify a value, all model descriptions are returned. A version name is part of a model (ProjectVersion) ARN. For example, my-model.2020-01-21T09.10.15 is the version name in the following ARN: arn:aws:rekognition:us-east-1:123456789012:project/getting-started/version/my-model.2020-01-21T09.10.15/1234567890123.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32) / set_max_results(Option<i32>): The maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
- On success, responds with DescribeProjectVersionsOutput with field(s):
project_version_descriptions(Option<Vec<ProjectVersionDescription>>): A list of model descriptions. The list is sorted by the creation date and time of the model versions, latest to earliest.
next_token(Option<String>): If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<DescribeProjectVersionsError>
pub fn describe_stream_processor(&self) -> DescribeStreamProcessor
Constructs a fluent builder for the DescribeStreamProcessor operation.
- The fluent builder is configurable:
name(impl Into<String>) / set_name(Option<String>): Name of the stream processor for which you want information.
- On success, responds with DescribeStreamProcessorOutput with field(s):
name(Option<String>): Name of the stream processor.
stream_processor_arn(Option<String>):ARN of the stream processor.
status(Option<StreamProcessorStatus>):Current status of the stream processor.
status_message(Option<String>):Detailed status message about the stream processor.
creation_timestamp(Option<DateTime>):Date and time the stream processor was created
last_update_timestamp(Option<DateTime>):The time, in Unix format, the stream processor was last updated. For example, when the stream processor moves from a running state to a failed state, or when the user starts or stops the stream processor.
input(Option<StreamProcessorInput>):Kinesis video stream that provides the source streaming video.
output(Option<StreamProcessorOutput>):Kinesis data stream to which Amazon Rekognition Video puts the analysis results.
role_arn(Option<String>):ARN of the IAM role that allows access to the stream processor.
settings(Option<StreamProcessorSettings>):Face recognition input parameters that are being used by the stream processor. Includes the collection to use for face recognition and the face attributes to detect.
- On failure, responds with
SdkError<DescribeStreamProcessorError>
pub fn detect_custom_labels(&self) -> DetectCustomLabels
Constructs a fluent builder for the DetectCustomLabels operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>) / set_project_version_arn(Option<String>): The ARN of the model version that you want to use.
image(Image) / set_image(Option<Image>): Provides the input image either as bytes or an S3 object. You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide. You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property. For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource Based Policies in the Amazon Rekognition Developer Guide.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest.
min_confidence(f32) / set_min_confidence(Option<f32>): Specifies the minimum confidence level for the labels to return. DetectCustomLabels doesn’t return any labels with a confidence value that’s lower than this specified value. If you specify a value of 0, DetectCustomLabels returns all labels, regardless of the assumed threshold applied to each label. If you don’t specify a value for MinConfidence, DetectCustomLabels returns labels based on the assumed threshold of each label.
- On success, responds with DetectCustomLabelsOutput with field(s):
custom_labels(Option<Vec<CustomLabel>>): An array of custom labels detected in the input image.
- On failure, responds with
SdkError<DetectCustomLabelsError>
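A DetectCustomLabels sketch, assuming the Image and S3Object builders from the model module; the model version ARN, bucket, and key are hypothetical:
use aws_sdk_rekognition::model::{Image, S3Object};

let resp = client
    .detect_custom_labels()
    // Hypothetical model version ARN from a trained Custom Labels project.
    .project_version_arn("arn:aws:rekognition:us-east-1:111122223333:project/my-project/version/my-model/1234567890123")
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("photo.jpg").build())
        .build())
    .min_confidence(70.0)
    .send()
    .await?;
println!("detect_custom_labels response: {:?}", resp);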
pub fn detect_faces(&self) -> DetectFaces
Constructs a fluent builder for the DetectFaces operation.
- The fluent builder is configurable:
image(Image) / set_image(Option<Image>): The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
attributes(Vec<Attribute>) / set_attributes(Option<Vec<Attribute>>): An array of facial attributes you want to be returned. This can be the default list of attributes or all attributes. If you don’t specify a value for Attributes or if you specify [“DEFAULT”], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide [“ALL”], all facial attributes are returned, but the operation takes longer to complete. If you provide both, [“ALL”, “DEFAULT”], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).
- On success, responds with DetectFacesOutput with field(s):
face_details(Option<Vec<FaceDetail>>): Details of each face found in the image.
orientation_correction(Option<OrientationCorrection>): The value of OrientationCorrection is always null. If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
- On failure, responds with
SdkError<DetectFacesError>
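A DetectFaces sketch that requests all facial attributes; it assumes the Attribute enum from the model module and uses set_attributes to pass the list, with hypothetical S3 names:
use aws_sdk_rekognition::model::{Attribute, Image, S3Object};

let resp = client
    .detect_faces()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("group-photo.jpg").build())
        .build())
    .set_attributes(Some(vec![Attribute::All]))
    .send()
    .await?;
println!("detect_faces response: {:?}", resp);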
pub fn detect_labels(&self) -> DetectLabels
Constructs a fluent builder for the DetectLabels operation.
- The fluent builder is configurable:
image(Image) / set_image(Option<Image>): The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 Bucket do not need to be base64-encoded. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
max_labels(i32) / set_max_labels(Option<i32>): Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.
min_confidence(f32) / set_min_confidence(Option<f32>): Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with confidence lower than this specified value. If MinConfidence is not specified, the operation returns labels with confidence values greater than or equal to 55 percent.
- On success, responds with DetectLabelsOutput with field(s):
labels(Option<Vec<Label>>): An array of labels for the real-world objects detected.
orientation_correction(Option<OrientationCorrection>): The value of OrientationCorrection is always null. If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
label_model_version(Option<String>): Version number of the label detection model that was used to detect labels.
- On failure, responds with
SdkError<DetectLabelsError>
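A DetectLabels sketch with hypothetical S3 names and both optional limits set:
use aws_sdk_rekognition::model::{Image, S3Object};

let resp = client
    .detect_labels()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("street-scene.jpg").build())
        .build())
    .max_labels(10)
    .min_confidence(75.0)
    .send()
    .await?;
println!("detect_labels response: {:?}", resp);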
pub fn detect_moderation_labels(&self) -> DetectModerationLabels
Constructs a fluent builder for the DetectModerationLabels operation.
- The fluent builder is configurable:
image(Image) / set_image(Option<Image>): The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
min_confidence(f32) / set_min_confidence(Option<f32>): Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with a confidence level lower than this specified value. If you don’t specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.
human_loop_config(HumanLoopConfig) / set_human_loop_config(Option<HumanLoopConfig>): Sets up the configuration for human evaluation, including the FlowDefinition the image will be sent to.
- On success, responds with DetectModerationLabelsOutput with field(s):
moderation_labels(Option<Vec<ModerationLabel>>): Array of detected Moderation labels and the time, in milliseconds from the start of the video, they were detected.
moderation_model_version(Option<String>):Version number of the moderation detection model that was used to detect unsafe content.
human_loop_activation_output(Option<HumanLoopActivationOutput>):Shows the results of the human in the loop evaluation.
- On failure, responds with
SdkError<DetectModerationLabelsError>
pub fn detect_protective_equipment(&self) -> DetectProtectiveEquipment
Constructs a fluent builder for the DetectProtectiveEquipment operation.
- The fluent builder is configurable:
image(Image)/set_image(Option<Image>):The image in which you want to detect PPE on detected persons. The image can be passed as image bytes or you can reference an image stored in an Amazon S3 bucket.
summarization_attributes(ProtectiveEquipmentSummarizationAttributes)/set_summarization_attributes(Option<ProtectiveEquipmentSummarizationAttributes>):An array of PPE types that you want to summarize.
- On success, responds with DetectProtectiveEquipmentOutput with field(s):
protective_equipment_model_version(Option<String>): The version number of the PPE detection model used to detect PPE in the image.
persons(Option<Vec<ProtectiveEquipmentPerson>>): An array of persons detected in the image (including persons not wearing PPE).
summary(Option<ProtectiveEquipmentSummary>): Summary information for the types of PPE specified in the SummarizationAttributes input parameter.
- On failure, responds with
SdkError<DetectProtectiveEquipmentError>
pub fn detect_text(&self) -> DetectText
Constructs a fluent builder for the DetectText operation.
- The fluent builder is configurable:
image(Image) / set_image(Option<Image>): The input image as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Rekognition operations, you can’t pass image bytes. If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.
filters(DetectTextFilters) / set_filters(Option<DetectTextFilters>): Optional parameters that let you set the criteria that the text must meet to be included in your response.
- On success, responds with DetectTextOutput with field(s):
text_detections(Option<Vec<TextDetection>>): An array of text that was detected in the input image.
text_model_version(Option<String>):The model version used to detect text.
- On failure, responds with
SdkError<DetectTextError>
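A DetectText sketch; the bucket and key are hypothetical, and the optional filters are omitted:
use aws_sdk_rekognition::model::{Image, S3Object};

let resp = client
    .detect_text()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("sign.jpg").build())
        .build())
    .send()
    .await?;
println!("detect_text response: {:?}", resp);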
pub fn distribute_dataset_entries(&self) -> DistributeDatasetEntries
Constructs a fluent builder for the DistributeDatasetEntries operation.
- The fluent builder is configurable:
datasets(Vec<DistributeDataset>) / set_datasets(Option<Vec<DistributeDataset>>): The ARNs for the training dataset and test dataset that you want to use. The datasets must belong to the same project. The test dataset must be empty.
- On success, responds with DistributeDatasetEntriesOutput
- On failure, responds with
SdkError<DistributeDatasetEntriesError>
pub fn get_celebrity_info(&self) -> GetCelebrityInfo
Constructs a fluent builder for the GetCelebrityInfo operation.
- The fluent builder is configurable:
id(impl Into<String>) / set_id(Option<String>): The ID for the celebrity. You get the celebrity ID from a call to the RecognizeCelebrities operation, which recognizes celebrities in an image.
- On success, responds with GetCelebrityInfoOutput with field(s):
urls(Option<Vec<String>>): An array of URLs pointing to additional celebrity information.
name(Option<String>):The name of the celebrity.
known_gender(Option<KnownGender>):Retrieves the known gender for the celebrity.
- On failure, responds with
SdkError<GetCelebrityInfoError>
pub fn get_celebrity_recognition(&self) -> GetCelebrityRecognition
Constructs a fluent builder for the GetCelebrityRecognition operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): Job identifier for the required celebrity recognition analysis. You can get the job identifier from a call to StartCelebrityRecognition.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more recognized celebrities to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of celebrities.
sort_by(CelebrityRecognitionSortBy) / set_sort_by(Option<CelebrityRecognitionSortBy>): Sort to use for celebrities returned in the Celebrities field. Specify ID to sort by the celebrity identifier, specify TIMESTAMP to sort by the time the celebrity was recognized.
- On success, responds with GetCelebrityRecognitionOutput with field(s):
job_status(Option<VideoJobStatus>): The current status of the celebrity recognition job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
video_metadata(Option<VideoMetadata>): Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
next_token(Option<String>): If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of celebrities.
celebrities(Option<Vec<CelebrityRecognition>>):Array of celebrities recognized in the video.
- On failure, responds with
SdkError<GetCelebrityRecognitionError>
pub fn get_content_moderation(&self) -> GetContentModeration
Constructs a fluent builder for the GetContentModeration operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): The identifier for the inappropriate, unwanted, or offensive content moderation job. Use JobId to identify the job in a subsequent call to GetContentModeration.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of content moderation labels.
sort_by(ContentModerationSortBy) / set_sort_by(Option<ContentModerationSortBy>): Sort to use for elements in the ModerationLabelDetections array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
- On success, responds with GetContentModerationOutput with field(s):
job_status(Option<VideoJobStatus>): The current status of the content moderation analysis job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
video_metadata(Option<VideoMetadata>): Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from GetContentModeration.
moderation_labels(Option<Vec<ContentModerationDetection>>): The detected inappropriate, unwanted, or offensive content moderation labels and the time(s) they were detected.
next_token(Option<String>):If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of content moderation labels.
moderation_model_version(Option<String>):Version number of the moderation detection model that was used to detect inappropriate, unwanted, or offensive content.
- On failure, responds with
SdkError<GetContentModerationError>
pub fn get_face_detection(&self) -> GetFaceDetection
Constructs a fluent builder for the GetFaceDetection operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): Unique identifier for the face detection job. The JobId is returned from StartFaceDetection.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more faces to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
- On success, responds with GetFaceDetectionOutput with field(s):
job_status(Option<VideoJobStatus>): The current status of the face detection job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
video_metadata(Option<VideoMetadata>): Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
next_token(Option<String>): If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.
faces(Option<Vec<FaceDetection>>):An array of faces detected in the video. Each element contains a detected face’s details and the time, in milliseconds from the start of the video, the face was detected.
- On failure, responds with
SdkError<GetFaceDetectionError>
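A GetFaceDetection sketch; the job ID is a hypothetical placeholder that would come from an earlier StartFaceDetection call:
let resp = client
    .get_face_detection()
    .job_id("<job-id-from-StartFaceDetection>") // hypothetical job identifier
    .max_results(1000)
    .send()
    .await?;
println!("get_face_detection response: {:?}", resp);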
pub fn get_face_search(&self) -> GetFaceSearch
Constructs a fluent builder for the GetFaceSearch operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): The job identifier for the search request. You get the job identifier from an initial call to StartFaceSearch.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.
sort_by(FaceSearchSortBy) / set_sort_by(Option<FaceSearchSortBy>): Sort to use for grouping faces in the response. Use TIMESTAMP to group faces by the time that they are recognized. Use INDEX to sort by recognized faces.
- On success, responds with GetFaceSearchOutput with field(s):
job_status(Option<VideoJobStatus>): The current status of the face search job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
next_token(Option<String>): If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.
video_metadata(Option<VideoMetadata>): Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
persons(Option<Vec<PersonMatch>>): An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.
- On failure, responds with
SdkError<GetFaceSearchError>
pub fn get_label_detection(&self) -> GetLabelDetection
Constructs a fluent builder for the GetLabelDetection operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.
sort_by(LabelDetectionSortBy) / set_sort_by(Option<LabelDetectionSortBy>): Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
- On success, responds with GetLabelDetectionOutput with field(s):
job_status(Option<VideoJobStatus>): The current status of the label detection job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
video_metadata(Option<VideoMetadata>): Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
next_token(Option<String>): If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of labels.
labels(Option<Vec<LabelDetection>>):An array of labels detected in the video. Each element contains the detected label and the time, in milliseconds from the start of the video, that the label was detected.
label_model_version(Option<String>):Version number of the label detection model that was used to detect labels.
- On failure, responds with
SdkError<GetLabelDetectionError>
pub fn get_person_tracking(&self) -> GetPersonTracking
Constructs a fluent builder for the GetPersonTracking operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): The identifier for a job that tracks persons in a video. You get the JobId from a call to StartPersonTracking.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more persons to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of persons.
sort_by(PersonTrackingSortBy) / set_sort_by(Option<PersonTrackingSortBy>): Sort to use for elements in the Persons array. Use TIMESTAMP to sort array elements by the time persons are detected. Use INDEX to sort by the tracked persons. If you sort by INDEX, the array elements for each person are sorted by detection confidence. The default sort is by TIMESTAMP.
- On success, responds with GetPersonTrackingOutput with field(s):
job_status(Option<VideoJobStatus>): The current status of the person tracking job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
video_metadata(Option<VideoMetadata>): Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.
next_token(Option<String>): If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of persons.
persons(Option<Vec<PersonDetection>>):An array of the persons detected in the video and the time(s) their path was tracked throughout the video. An array element will exist for each time a person’s path is tracked.
- On failure, responds with
SdkError<GetPersonTrackingError>
pub fn get_segment_detection(&self) -> GetSegmentDetection
Constructs a fluent builder for the GetSegmentDetection operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): Job identifier for the text detection operation for which you want results returned. You get the job identifier from an initial call to StartSegmentDetection.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of text.
- On success, responds with GetSegmentDetectionOutput with field(s):
job_status(Option<VideoJobStatus>): Current status of the segment detection job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
video_metadata(Option<Vec<VideoMetadata>>): Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.
audio_metadata(Option<Vec<AudioMetadata>>): An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
next_token(Option<String>): If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of text.
segments(Option<Vec<SegmentDetection>>): An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.
selected_segment_types(Option<Vec<SegmentTypeInfo>>): An array containing the segment types requested in the call to StartSegmentDetection.
- On failure, responds with
SdkError<GetSegmentDetectionError>
pub fn get_text_detection(&self) -> GetTextDetection
Constructs a fluent builder for the GetTextDetection operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
job_id(impl Into<String>) / set_job_id(Option<String>): Job identifier for the text detection operation for which you want results returned. You get the job identifier from an initial call to StartTextDetection.
max_results(i32) / set_max_results(Option<i32>): Maximum number of results to return per paginated call. The largest value you can specify is 1000.
next_token(impl Into<String>) / set_next_token(Option<String>): If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of text.
- On success, responds with GetTextDetectionOutput with field(s):
job_status(Option<VideoJobStatus>): Current status of the text detection job.
status_message(Option<String>): If the job fails, StatusMessage provides a descriptive error message.
video_metadata(Option<VideoMetadata>): Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition video operation.
text_detections(Option<Vec<TextDetectionResult>>): An array of text detected in the video. Each element contains the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
next_token(Option<String>):If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of text.
text_model_version(Option<String>):Version number of the text detection model that was used to detect text.
- On failure, responds with
SdkError<GetTextDetectionError>
pub fn index_faces(&self) -> IndexFaces
Constructs a fluent builder for the IndexFaces operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)/set_collection_id(Option<String>):The ID of an existing collection to which you want to add the faces that are detected in the input images.
image(Image)/set_image(Option<Image>):The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes isn’t supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytesfield. For more information, see Images in the Amazon Rekognition developer guide.external_image_id(impl Into<String>)/set_external_image_id(Option<String>):The ID you want to assign to all the faces detected in the image.
detection_attributes(Vec<Attribute>)/set_detection_attributes(Option<Vec<Attribute>>):An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don’t specify a value for
Attributesor if you specify[“DEFAULT”], the API returns the following subset of facial attributes:BoundingBox,Confidence,Pose,Quality, andLandmarks. If you provide[“ALL”], all facial attributes are returned, but the operation takes longer to complete.If you provide both,
[“ALL”, “DEFAULT”], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).max_faces(i32)/set_max_faces(Option<i32>):The maximum number of faces to index. The value of
MaxFacesmust be greater than or equal to 1.IndexFacesreturns no more than 100 detected faces in an image, even if you specify a larger value forMaxFaces.If
IndexFacesdetects more faces than the value ofMaxFaces, the faces with the lowest quality are filtered out first. If there are still more faces than the value ofMaxFaces, the faces with the smallest bounding boxes are filtered out (up to the number that’s needed to satisfy the value ofMaxFaces). Information about the unindexed faces is available in theUnindexedFacesarray.The faces that are returned by
IndexFacesare sorted by the largest face bounding box size to the smallest size, in descending order.MaxFacescan be used with a collection associated with any version of the face model.quality_filter(QualityFilter)/set_quality_filter(Option<QualityFilter>):A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t indexed. If you specify
AUTO, Amazon Rekognition chooses the quality bar. If you specifyLOW,MEDIUM, orHIGH, filtering removes all faces that don’t meet the chosen quality bar. The default value isAUTO. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specifyNONE, no filtering is performed.To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
- On success, responds with
IndexFacesOutputwith field(s):face_records(Option<Vec<FaceRecord>>):An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.
orientation_correction(Option<OrientationCorrection>):If your collection is associated with a face detection model that’s later than version 3.0, the value of
OrientationCorrectionis always null and no orientation information is returned.If your collection is associated with a face detection model that’s version 3.0 or earlier, the following applies:
- If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. The value of OrientationCorrection is null.
- If the image doesn’t contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn’t perform image correction in this case. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.
face_model_version(Option<String>):Latest face model being used with the collection. For more information, see Model versioning.
unindexed_faces(Option<Vec<UnindexedFace>>):An array of faces that were detected in the image but weren’t indexed. They weren’t indexed because the quality filter identified them as low quality, or the
MaxFacesrequest parameter filtered them out. To use the quality filter, you specify theQualityFilterrequest parameter.
- On failure, responds with
SdkError<IndexFacesError>
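As a quick illustration, the sketch below indexes faces from an image stored in Amazon S3. It is not taken from the crate's own examples: it assumes an async function that returns a Result (so ? can propagate SdkError), a client built as shown earlier, Image/S3Object builders under aws_sdk_rekognition::model, and output accessors named after the fields listed above. The collection name, bucket, key, and external image ID are placeholders.
use aws_sdk_rekognition::model::{Attribute, Image, QualityFilter, S3Object};

// Index faces from s3://my-bucket/photos/group.jpg into the "my-collection" collection.
let resp = client
    .index_faces()
    .collection_id("my-collection")
    .image(
        Image::builder()
            .s3_object(
                S3Object::builder()
                    .bucket("my-bucket")
                    .name("photos/group.jpg")
                    .build(),
            )
            .build(),
    )
    .external_image_id("group-photo-001")
    .max_faces(10)
    .quality_filter(QualityFilter::Auto)
    .set_detection_attributes(Some(vec![Attribute::Default]))
    .send()
    .await?;

// Faces that were added versus faces the quality filter or MaxFaces excluded.
for record in resp.face_records().unwrap_or_default() {
    println!("indexed face id: {:?}", record.face().and_then(|f| f.face_id()));
}
for skipped in resp.unindexed_faces().unwrap_or_default() {
    println!("not indexed, reasons: {:?}", skipped.reasons());
}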
sourcepub fn list_collections(&self) -> ListCollections
pub fn list_collections(&self) -> ListCollections
Constructs a fluent builder for the ListCollections operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):Pagination token from the previous response.
max_results(i32)/set_max_results(Option<i32>):Maximum number of collection IDs to return.
- On success, responds with
ListCollectionsOutputwith field(s):collection_ids(Option<Vec<String>>):An array of collection IDs.
next_token(Option<String>):If the result is truncated, the response provides a
NextTokenthat you can use in the subsequent request to fetch the next set of collection IDs.face_model_versions(Option<Vec<String>>):Latest face models being used with the corresponding collections in the array. For more information, see Model versioning. For example, the value of
FaceModelVersions[2]is the version number for the face detection model used by the collection inCollectionId[2].
- On failure, responds with
SdkError<ListCollectionsError>
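A minimal sketch of paging through every collection ID by hand with NextToken; into_paginator() (noted above) can do the same thing as a stream. The page size of 10 is arbitrary and the snippet assumes an async context.
// Walk all pages of ListCollections manually.
let mut next_token: Option<String> = None;
loop {
    let resp = client
        .list_collections()
        .set_next_token(next_token)
        .max_results(10)
        .send()
        .await?;
    for id in resp.collection_ids().unwrap_or_default() {
        println!("collection: {}", id);
    }
    next_token = resp.next_token().map(String::from);
    if next_token.is_none() {
        break;
    }
}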
sourcepub fn list_dataset_entries(&self) -> ListDatasetEntries
pub fn list_dataset_entries(&self) -> ListDatasetEntries
Constructs a fluent builder for the ListDatasetEntries operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
dataset_arn(impl Into<String>)/set_dataset_arn(Option<String>):The Amazon Resource Name (ARN) for the dataset that you want to use.
contains_labels(Vec<String>)/set_contains_labels(Option<Vec<String>>):Specifies a label filter for the response. The response includes an entry only if one or more of the labels in
ContainsLabelsexist in the entry.labeled(bool)/set_labeled(Option<bool>):Specify
trueto get only the JSON Lines where the image is labeled. Specifyfalseto get only the JSON Lines where the image isn’t labeled. If you don’t specifyLabeled,ListDatasetEntriesreturns JSON Lines for labeled and unlabeled images.source_ref_contains(impl Into<String>)/set_source_ref_contains(Option<String>):If specified,
ListDatasetEntriesonly returns JSON Lines where the value ofSourceRefContainsis part of thesource-reffield. Thesource-reffield contains the Amazon S3 location of the image. You can useSourceRefContainsfor tasks such as getting the JSON Line for a single image, or getting JSON Lines for all images within a specific folder.has_errors(bool)/set_has_errors(Option<bool>):Specifies an error filter for the response. Specify
Trueto only include entries that have errors.next_token(impl Into<String>)/set_next_token(Option<String>):If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32)/set_max_results(Option<i32>):The maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
- On success, responds with
ListDatasetEntriesOutputwith field(s):dataset_entries(Option<Vec<String>>):A list of entries (images) in the dataset.
next_token(Option<String>):If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<ListDatasetEntriesError>
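For example, the following sketch lists only the labeled JSON Lines that have errors in a Custom Labels training dataset. The dataset ARN is a placeholder and the snippet assumes an async context.
// List labeled entries with errors for a training dataset.
let resp = client
    .list_dataset_entries()
    .dataset_arn("arn:aws:rekognition:us-east-1:111122223333:project/my-project/dataset/train/1234567890123")
    .labeled(true)
    .has_errors(true)
    .max_results(100)
    .send()
    .await?;
for json_line in resp.dataset_entries().unwrap_or_default() {
    println!("{}", json_line);
}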
sourcepub fn list_dataset_labels(&self) -> ListDatasetLabels
pub fn list_dataset_labels(&self) -> ListDatasetLabels
Constructs a fluent builder for the ListDatasetLabels operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
dataset_arn(impl Into<String>)/set_dataset_arn(Option<String>):The Amazon Resource Name (ARN) of the dataset that you want to use.
next_token(impl Into<String>)/set_next_token(Option<String>):If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32)/set_max_results(Option<i32>):The maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
- On success, responds with
ListDatasetLabelsOutputwith field(s):dataset_label_descriptions(Option<Vec<DatasetLabelDescription>>):A list of the labels in the dataset.
next_token(Option<String>):If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<ListDatasetLabelsError>
sourcepub fn list_faces(&self) -> ListFaces
pub fn list_faces(&self) -> ListFaces
Constructs a fluent builder for the ListFaces operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
collection_id(impl Into<String>)/set_collection_id(Option<String>):ID of the collection from which to list the faces.
next_token(impl Into<String>)/set_next_token(Option<String>):If the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
max_results(i32)/set_max_results(Option<i32>):Maximum number of faces to return.
- On success, responds with
ListFacesOutputwith field(s):faces(Option<Vec<Face>>):An array of
Faceobjects.next_token(Option<String>):If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.
face_model_version(Option<String>):Latest face model being used with the collection. For more information, see Model versioning.
- On failure, responds with
SdkError<ListFacesError>
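One way to consume the paginator that into_paginator() returns, sketched under the assumption that a collection named "my-collection" exists; depending on your SDK version you may need tokio_stream::StreamExt in scope for next().
// Stream every page of faces in the collection instead of threading NextToken yourself.
let mut pages = client
    .list_faces()
    .collection_id("my-collection")
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    let page = page?;
    for face in page.faces().unwrap_or_default() {
        println!("face id: {:?}", face.face_id());
    }
}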
sourcepub fn list_stream_processors(&self) -> ListStreamProcessors
pub fn list_stream_processors(&self) -> ListStreamProcessors
Constructs a fluent builder for the ListStreamProcessors operation.
This operation supports pagination; See into_paginator().
- The fluent builder is configurable:
next_token(impl Into<String>)/set_next_token(Option<String>):If the previous response was incomplete (because there are more stream processors to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of stream processors.
max_results(i32)/set_max_results(Option<i32>):Maximum number of stream processors you want Amazon Rekognition Video to return in the response. The default is 1000.
- On success, responds with
ListStreamProcessorsOutputwith field(s):next_token(Option<String>):If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of stream processors.
stream_processors(Option<Vec<StreamProcessor>>):List of stream processors that you have created.
- On failure, responds with
SdkError<ListStreamProcessorsError>
sourcepub fn list_tags_for_resource(&self) -> ListTagsForResource
pub fn list_tags_for_resource(&self) -> ListTagsForResource
Constructs a fluent builder for the ListTagsForResource operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)/set_resource_arn(Option<String>):Amazon Resource Name (ARN) of the model, collection, or stream processor that contains the tags that you want a list of.
- On success, responds with
ListTagsForResourceOutputwith field(s):tags(Option<HashMap<String, String>>):A list of key-value tags assigned to the resource.
- On failure, responds with
SdkError<ListTagsForResourceError>
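A short sketch of reading the tag map back; the collection ARN is a placeholder and the snippet assumes an async context.
// Print every tag attached to a collection (or model, or stream processor).
let resp = client
    .list_tags_for_resource()
    .resource_arn("arn:aws:rekognition:us-east-1:111122223333:collection/my-collection")
    .send()
    .await?;
if let Some(tags) = resp.tags() {
    for (key, value) in tags {
        println!("{key} = {value}");
    }
}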
sourcepub fn recognize_celebrities(&self) -> RecognizeCelebrities
pub fn recognize_celebrities(&self) -> RecognizeCelebrities
Constructs a fluent builder for the RecognizeCelebrities operation.
- The fluent builder is configurable:
image(Image)/set_image(Option<Image>):The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytesfield. For more information, see Images in the Amazon Rekognition developer guide.
- On success, responds with
RecognizeCelebritiesOutputwith field(s):celebrity_faces(Option<Vec<Celebrity>>):Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 64 celebrities in an image. Each celebrity object includes the following attributes:
Face,Confidence,Emotions,Landmarks,Pose,Quality,Smile,Id,KnownGender,MatchConfidence,Name,Urls.unrecognized_faces(Option<Vec<ComparedFace>>):Details about each unrecognized face in the image.
orientation_correction(Option<OrientationCorrection>):Support for estimating image orientation using the OrientationCorrection field has ceased as of August 2021. Any returned values for this field included in an API response will always be NULL.
The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct the orientation. The bounding box coordinates returned in
CelebrityFacesandUnrecognizedFacesrepresent face locations before the image orientation is corrected.If the input image is in .jpeg format, it might contain exchangeable image (Exif) metadata that includes the image’s orientation. If so, and the Exif metadata for the input image populates the orientation field, the value of
OrientationCorrectionis null. TheCelebrityFacesandUnrecognizedFacesbounding box coordinates represent face locations after Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.
- On failure, responds with
SdkError<RecognizeCelebritiesError>
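A sketch that runs celebrity recognition against an S3 object and prints the recognized names; the bucket and key are placeholders, and the Image/S3Object builders are assumed to live under aws_sdk_rekognition::model.
use aws_sdk_rekognition::model::{Image, S3Object};

// Recognize celebrities in s3://my-bucket/red-carpet.jpg.
let resp = client
    .recognize_celebrities()
    .image(
        Image::builder()
            .s3_object(S3Object::builder().bucket("my-bucket").name("red-carpet.jpg").build())
            .build(),
    )
    .send()
    .await?;
for celebrity in resp.celebrity_faces().unwrap_or_default() {
    println!("{:?} (match confidence {:?})", celebrity.name(), celebrity.match_confidence());
}
println!("unrecognized faces: {}", resp.unrecognized_faces().unwrap_or_default().len());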
sourcepub fn search_faces(&self) -> SearchFaces
pub fn search_faces(&self) -> SearchFaces
Constructs a fluent builder for the SearchFaces operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)/set_collection_id(Option<String>):ID of the collection the face belongs to.
face_id(impl Into<String>)/set_face_id(Option<String>):ID of a face to find matches for in the collection.
max_faces(i32)/set_max_faces(Option<i32>):Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
face_match_threshold(f32)/set_face_match_threshold(Option<f32>):Optional value specifying the minimum confidence in the face match to return. For example, don’t return any matches where confidence in matches is less than 70%. The default value is 80%.
- On success, responds with
SearchFacesOutputwith field(s):searched_face_id(Option<String>):ID of the face that was searched for matches in a collection.
face_matches(Option<Vec<FaceMatch>>):An array of faces that matched the input face, along with the confidence in the match.
face_model_version(Option<String>):Latest face model being used with the collection. For more information, see Model versioning.
- On failure, responds with
SdkError<SearchFacesError>
sourcepub fn search_faces_by_image(&self) -> SearchFacesByImage
pub fn search_faces_by_image(&self) -> SearchFacesByImage
Constructs a fluent builder for the SearchFacesByImage operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)/set_collection_id(Option<String>):ID of the collection to search.
image(Image)/set_image(Option<Image>):The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytesfield. For more information, see Images in the Amazon Rekognition developer guide.max_faces(i32)/set_max_faces(Option<i32>):Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
face_match_threshold(f32)/set_face_match_threshold(Option<f32>):(Optional) Specifies the minimum confidence in the face match to return. For example, don’t return any matches where confidence in matches is less than 70%. The default value is 80%.
quality_filter(QualityFilter)/set_quality_filter(Option<QualityFilter>):A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t searched for in the collection. If you specify
AUTO, Amazon Rekognition chooses the quality bar. If you specifyLOW,MEDIUM, orHIGH, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specifyNONE, no filtering is performed. The default value isNONE.To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
- On success, responds with
SearchFacesByImageOutputwith field(s):searched_face_bounding_box(Option<BoundingBox>):The bounding box around the face in the input image that Amazon Rekognition used for the search.
searched_face_confidence(Option<f32>):The level of confidence that the
searchedFaceBoundingBox contains a face.face_matches(Option<Vec<FaceMatch>>):An array of faces that match the input face, along with the confidence in the match.
face_model_version(Option<String>):Latest face model being used with the collection. For more information, see Model versioning.
- On failure, responds with
SdkError<SearchFacesByImageError>
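As an illustration, this sketch searches a collection for faces that match the largest face in an S3 image, keeping only matches at 90% similarity or better; the collection, bucket, and key are placeholders.
use aws_sdk_rekognition::model::{Image, S3Object};

// Search "my-collection" using the largest face detected in the input image.
let resp = client
    .search_faces_by_image()
    .collection_id("my-collection")
    .image(
        Image::builder()
            .s3_object(S3Object::builder().bucket("my-bucket").name("visitor.jpg").build())
            .build(),
    )
    .max_faces(5)
    .face_match_threshold(90.0)
    .send()
    .await?;
for candidate in resp.face_matches().unwrap_or_default() {
    println!(
        "matched face {:?} with similarity {:?}",
        candidate.face().and_then(|f| f.face_id()),
        candidate.similarity()
    );
}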
sourcepub fn start_celebrity_recognition(&self) -> StartCelebrityRecognition
pub fn start_celebrity_recognition(&self) -> StartCelebrityRecognition
Constructs a fluent builder for the StartCelebrityRecognition operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):The video in which you want to recognize celebrities. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartCelebrityRecognitionrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the celebrity recognition analysis to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.
- On success, responds with
StartCelebrityRecognitionOutputwith field(s):job_id(Option<String>):The identifier for the celebrity recognition analysis job. Use
JobIdto identify the job in a subsequent call toGetCelebrityRecognition.
- On failure, responds with
SdkError<StartCelebrityRecognitionError>
sourcepub fn start_content_moderation(&self) -> StartContentModeration
pub fn start_content_moderation(&self) -> StartContentModeration
Constructs a fluent builder for the StartContentModeration operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):The video in which you want to detect inappropriate, unwanted, or offensive content. The video must be stored in an Amazon S3 bucket.
min_confidence(f32)/set_min_confidence(Option<f32>):Specifies the minimum confidence that Amazon Rekognition must have in order to return a moderated content label. Confidence represents how certain Amazon Rekognition is that the moderated content is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition doesn’t return any moderated content labels with a confidence level lower than this specified value. If you don’t specify
MinConfidence,GetContentModerationreturns labels with confidence values greater than or equal to 50 percent.client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartContentModerationrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the content analysis to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.
job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.
- On success, responds with
StartContentModerationOutputwith field(s):job_id(Option<String>):The identifier for the content analysis job. Use
JobIdto identify the job in a subsequent call toGetContentModeration.
- On failure, responds with
SdkError<StartContentModerationError>
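The asynchronous video operations all follow the same start-then-poll pattern; the sketch below starts a content moderation job, assuming an async context and using placeholder values for the bucket, key, SNS topic ARN, and IAM role ARN.
use aws_sdk_rekognition::model::{NotificationChannel, S3Object, Video};

// Start a content moderation job and have completion status published to SNS.
let resp = client
    .start_content_moderation()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("my-bucket").name("videos/clip.mp4").build())
            .build(),
    )
    .min_confidence(60.0)
    .client_request_token("moderation-job-001")
    .notification_channel(
        NotificationChannel::builder()
            .sns_topic_arn("arn:aws:sns:us-east-1:111122223333:AmazonRekognitionModeration")
            .role_arn("arn:aws:iam::111122223333:role/RekognitionVideoRole")
            .build(),
    )
    .job_tag("moderation")
    .send()
    .await?;

// Keep the JobId; results come from GetContentModeration once the job finishes.
println!("content moderation job id: {:?}", resp.job_id());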
sourcepub fn start_face_detection(&self) -> StartFaceDetection
pub fn start_face_detection(&self) -> StartFaceDetection
Constructs a fluent builder for the StartFaceDetection operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):The video in which you want to detect faces. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartFaceDetectionrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The ARN of the Amazon SNS topic to which you want Amazon Rekognition Video to publish the completion status of the face detection operation. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
face_attributes(FaceAttributes)/set_face_attributes(Option<FaceAttributes>):The face attributes you want returned.
DEFAULT- The following subset of facial attributes are returned: BoundingBox, Confidence, Pose, Quality and Landmarks.ALL- All facial attributes are returned.job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.
- On success, responds with
StartFaceDetectionOutputwith field(s):job_id(Option<String>):The identifier for the face detection job. Use
JobIdto identify the job in a subsequent call toGetFaceDetection.
- On failure, responds with
SdkError<StartFaceDetectionError>
sourcepub fn start_face_search(&self) -> StartFaceSearch
pub fn start_face_search(&self) -> StartFaceSearch
Constructs a fluent builder for the StartFaceSearch operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):The video you want to search. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartFaceSearchrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.face_match_threshold(f32)/set_face_match_threshold(Option<f32>):The minimum confidence in the person match to return. For example, don’t return any matches where confidence in matches is less than 70%. The default value is 80%.
collection_id(impl Into<String>)/set_collection_id(Option<String>):ID of the collection that contains the faces you want to search for.
notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The ARN of the Amazon SNS topic to which you want Amazon Rekognition Video to publish the completion status of the search. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.
job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.
- On success, responds with
StartFaceSearchOutputwith field(s):job_id(Option<String>):The identifier for the search job. Use
JobIdto identify the job in a subsequent call toGetFaceSearch.
- On failure, responds with
SdkError<StartFaceSearchError>
sourcepub fn start_label_detection(&self) -> StartLabelDetection
pub fn start_label_detection(&self) -> StartLabelDetection
Constructs a fluent builder for the StartLabelDetection operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):The video in which you want to detect labels. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartLabelDetectionrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.min_confidence(f32)/set_min_confidence(Option<f32>):Specifies the minimum confidence that Amazon Rekognition Video must have in order to return a detected label. Confidence represents how certain Amazon Rekognition is that a label is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition Video doesn’t return any labels with a confidence level lower than this specified value.
If you don’t specify
MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The Amazon SNS topic ARN you want Amazon Rekognition Video to publish the completion status of the label detection operation to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.
- On success, responds with
StartLabelDetectionOutputwith field(s):job_id(Option<String>):The identifier for the label detection job. Use
JobIdto identify the job in a subsequent call toGetLabelDetection.
- On failure, responds with
SdkError<StartLabelDetectionError>
sourcepub fn start_person_tracking(&self) -> StartPersonTracking
pub fn start_person_tracking(&self) -> StartPersonTracking
Constructs a fluent builder for the StartPersonTracking operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):The video in which you want to detect people. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartPersonTrackingrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The Amazon SNS topic ARN you want Amazon Rekognition Video to publish the completion status of the people detection operation to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.
- On success, responds with
StartPersonTrackingOutputwith field(s):job_id(Option<String>):The identifier for the person detection job. Use
JobIdto identify the job in a subsequent call toGetPersonTracking.
- On failure, responds with
SdkError<StartPersonTrackingError>
sourcepub fn start_project_version(&self) -> StartProjectVersion
pub fn start_project_version(&self) -> StartProjectVersion
Constructs a fluent builder for the StartProjectVersion operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>)/set_project_version_arn(Option<String>):The Amazon Resource Name (ARN) of the model version that you want to start.
min_inference_units(i32)/set_min_inference_units(Option<i32>):The minimum number of inference units to use. A single inference unit represents 1 hour of processing and can support up to 5 transactions per second (TPS). Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
- On success, responds with
StartProjectVersionOutputwith field(s):status(Option<ProjectVersionStatus>):The current running status of the model.
- On failure, responds with
SdkError<StartProjectVersionError>
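A minimal sketch of starting a Custom Labels model with a single inference unit; the model version ARN is a placeholder, and billing starts once the model is running.
// Start hosting the trained model version with one inference unit.
let resp = client
    .start_project_version()
    .project_version_arn("arn:aws:rekognition:us-east-1:111122223333:project/my-project/version/my-model.2021-01-01T00.00.00/1234567890123")
    .min_inference_units(1)
    .send()
    .await?;
println!("model status: {:?}", resp.status());
// Use DescribeProjectVersions to wait for the RUNNING state before calling DetectCustomLabels.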
sourcepub fn start_segment_detection(&self) -> StartSegmentDetection
pub fn start_segment_detection(&self) -> StartSegmentDetection
Constructs a fluent builder for the StartSegmentDetection operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetectionuseVideoto specify a video for analysis. The supported file formats are .mp4, .mov and .avi.client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartSegmentDetectionrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The ARN of the Amazon SNS topic to which you want Amazon Rekognition Video to publish the completion status of the segment detection operation. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.
job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.filters(StartSegmentDetectionFilters)/set_filters(Option<StartSegmentDetectionFilters>):Filters for technical cue or shot detection.
segment_types(Vec<SegmentType>)/set_segment_types(Option<Vec<SegmentType>>):An array of segment types to detect in the video. Valid values are TECHNICAL_CUE and SHOT.
- On success, responds with
StartSegmentDetectionOutputwith field(s):job_id(Option<String>):Unique identifier for the segment detection job. The
JobIdis returned fromStartSegmentDetection.
- On failure, responds with
SdkError<StartSegmentDetectionError>
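A sketch that requests shot detection only for a stored video; the bucket and key are placeholders, and filters are omitted so the service defaults apply.
use aws_sdk_rekognition::model::{S3Object, SegmentType, Video};

// Detect shot boundaries in s3://my-bucket/videos/episode.mp4.
let resp = client
    .start_segment_detection()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("my-bucket").name("videos/episode.mp4").build())
            .build(),
    )
    .set_segment_types(Some(vec![SegmentType::Shot]))
    .send()
    .await?;
println!("segment detection job id: {:?}", resp.job_id());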
sourcepub fn start_stream_processor(&self) -> StartStreamProcessor
pub fn start_stream_processor(&self) -> StartStreamProcessor
Constructs a fluent builder for the StartStreamProcessor operation.
- The fluent builder is configurable:
name(impl Into<String>)/set_name(Option<String>):The name of the stream processor to start processing.
- On success, responds with
StartStreamProcessorOutput - On failure, responds with
SdkError<StartStreamProcessorError>
sourcepub fn start_text_detection(&self) -> StartTextDetection
pub fn start_text_detection(&self) -> StartTextDetection
Constructs a fluent builder for the StartTextDetection operation.
- The fluent builder is configurable:
video(Video)/set_video(Option<Video>):Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetectionuseVideoto specify a video for analysis. The supported file formats are .mp4, .mov and .avi.client_request_token(impl Into<String>)/set_client_request_token(Option<String>):Idempotent token used to identify the start request. If you use the same token with multiple
StartTextDetectionrequests, the sameJobIdis returned. UseClientRequestTokento prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)/set_notification_channel(Option<NotificationChannel>):The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see
api-video. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.job_tag(impl Into<String>)/set_job_tag(Option<String>):An identifier returned in the completion status published by your Amazon Simple Notification Service topic. For example, you can use
JobTagto group related jobs and identify them in the completion notification.filters(StartTextDetectionFilters)/set_filters(Option<StartTextDetectionFilters>):Optional parameters that let you set criteria the text must meet to be included in your response.
- On success, responds with
StartTextDetectionOutputwith field(s):job_id(Option<String>):Identifier for the text detection job. Use
JobIdto identify the job in a subsequent call toGetTextDetection.
- On failure, responds with
SdkError<StartTextDetectionError>
sourcepub fn stop_project_version(&self) -> StopProjectVersion
pub fn stop_project_version(&self) -> StopProjectVersion
Constructs a fluent builder for the StopProjectVersion operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>)/set_project_version_arn(Option<String>):The Amazon Resource Name (ARN) of the model version that you want to stop.
This operation requires permissions to perform the
rekognition:StopProjectVersionaction.
- On success, responds with
StopProjectVersionOutputwith field(s):status(Option<ProjectVersionStatus>):The current status of the stop operation.
- On failure, responds with
SdkError<StopProjectVersionError>
sourcepub fn stop_stream_processor(&self) -> StopStreamProcessor
pub fn stop_stream_processor(&self) -> StopStreamProcessor
Constructs a fluent builder for the StopStreamProcessor operation.
- The fluent builder is configurable:
name(impl Into<String>)/set_name(Option<String>):The name of a stream processor created by
CreateStreamProcessor.
- On success, responds with
StopStreamProcessorOutput - On failure, responds with
SdkError<StopStreamProcessorError>
sourcepub fn tag_resource(&self) -> TagResource
pub fn tag_resource(&self) -> TagResource
Constructs a fluent builder for the TagResource operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)/set_resource_arn(Option<String>):Amazon Resource Name (ARN) of the model, collection, or stream processor that you want to assign the tags to.
tags(HashMap<String, String>)/set_tags(Option<HashMap<String, String>>):The key-value tags to assign to the resource.
- On success, responds with
TagResourceOutput - On failure, responds with
SdkError<TagResourceError>
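A sketch of assigning a couple of tags to a collection; the ARN and the tag keys and values are placeholders, and set_tags is used so the whole map can be passed at once.
use std::collections::HashMap;

// Tag a collection with project and owner metadata.
let mut tags = HashMap::new();
tags.insert("project".to_string(), "demo".to_string());
tags.insert("owner".to_string(), "media-team".to_string());

client
    .tag_resource()
    .resource_arn("arn:aws:rekognition:us-east-1:111122223333:collection/my-collection")
    .set_tags(Some(tags))
    .send()
    .await?;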
sourcepub fn untag_resource(&self) -> UntagResource
pub fn untag_resource(&self) -> UntagResource
Constructs a fluent builder for the UntagResource operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)/set_resource_arn(Option<String>):Amazon Resource Name (ARN) of the model, collection, or stream processor that you want to remove the tags from.
tag_keys(Vec<String>)/set_tag_keys(Option<Vec<String>>):A list of the tags that you want to remove.
- On success, responds with
UntagResourceOutput - On failure, responds with
SdkError<UntagResourceError>
sourcepub fn update_dataset_entries(&self) -> UpdateDatasetEntries
pub fn update_dataset_entries(&self) -> UpdateDatasetEntries
Constructs a fluent builder for the UpdateDatasetEntries operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)/set_dataset_arn(Option<String>):The Amazon Resource Name (ARN) of the dataset that you want to update.
changes(DatasetChanges)/set_changes(Option<DatasetChanges>):The changes that you want to make to the dataset.
- On success, responds with
UpdateDatasetEntriesOutput - On failure, responds with
SdkError<UpdateDatasetEntriesError>
sourceimpl Client
impl Client
sourcepub fn from_conf_conn<C, E>(conf: Config, conn: C) -> Self where
C: SmithyConnector<Error = E> + Send + 'static,
E: Into<ConnectorError>,
pub fn from_conf_conn<C, E>(conf: Config, conn: C) -> Self where
C: SmithyConnector<Error = E> + Send + 'static,
E: Into<ConnectorError>,
Creates a client with the given service config and connector override.
Trait Implementations
sourceimpl From<Client<DynConnector, DynMiddleware<DynConnector>, Standard>> for Client
impl From<Client<DynConnector, DynMiddleware<DynConnector>, Standard>> for Client
sourcefn from(client: Client<DynConnector, DynMiddleware<DynConnector>>) -> Self
fn from(client: Client<DynConnector, DynMiddleware<DynConnector>>) -> Self
Performs the conversion.
Auto Trait Implementations
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations
sourceimpl<T> BorrowMut<T> for T where
T: ?Sized,
impl<T> BorrowMut<T> for T where
T: ?Sized,
const: unstable · sourcefn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
sourceimpl<T> Instrument for T
impl<T> Instrument for T
sourcefn instrument(self, span: Span) -> Instrumented<Self>
fn instrument(self, span: Span) -> Instrumented<Self>
sourcefn in_current_span(self) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
sourceimpl<T> ToOwned for T where
T: Clone,
impl<T> ToOwned for T where
T: Clone,
type Owned = T
type Owned = T
The resulting type after obtaining ownership.
sourcefn clone_into(&self, target: &mut T)
fn clone_into(&self, target: &mut T)
Uses borrowed data to replace owned data, usually by cloning. Read more
sourceimpl<T> WithSubscriber for T
impl<T> WithSubscriber for T
sourcefn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
S: Into<Dispatch>,
Attaches the provided Subscriber to this type, returning a
WithDispatch wrapper. Read more
sourcefn with_current_subscriber(self) -> WithDispatch<Self>
fn with_current_subscriber(self) -> WithDispatch<Self>
Attaches the current default Subscriber to this type, returning a
WithDispatch wrapper. Read more