pub struct Client { /* private fields */ }
Client for Amazon Rekognition
Client for invoking operations on Amazon Rekognition. Each operation on Amazon Rekognition is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.
§Constructing a Client
A Config
is required to construct a client. For most use cases, the aws-config
crate should be used to automatically resolve this config using
aws_config::load_from_env()
, since this will resolve an SdkConfig
which can be shared
across multiple different AWS SDK clients. This config resolution process can be customized
by calling aws_config::from_env()
instead, which returns a ConfigLoader
that uses
the builder pattern to customize the default config.
In the simplest case, creating a client looks as follows:
let config = aws_config::load_from_env().await;
let client = aws_sdk_rekognition::Client::new(&config);
Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Builder struct implements From<&SdkConfig>, so these service-specific settings can be set as follows:
let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_rekognition::config::Builder::from(&sdk_config)
.some_service_specific_setting("value")
.build();
See the aws-config docs and Config for more information on customizing configuration.
Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
§Using the Client
A client has a function for every operation that can be performed by the service.
For example, the AssociateFaces operation has a Client::associate_faces function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:
let result = client.associate_faces()
.collection_id("example")
.send()
.await;
The underlying HTTP requests that get made by this can be modified with the customize_operation function on the fluent builder. See the customize module for more information.
§Waiters
This client provides wait_until methods behind the Waiters trait. To use them, simply import the trait, and then call one of the wait_until methods. This will return a waiter fluent builder that takes various parameters, which are documented on the builder type. Once parameters have been provided, the wait method can be called to initiate waiting.
For example, if there was a wait_until_thing method, it could look like:
let result = client.wait_until_thing()
.thing_id("someId")
.wait(Duration::from_secs(120))
.await;
Implementations§
impl Client
pub fn associate_faces(&self) -> AssociateFacesFluentBuilder
Constructs a fluent builder for the AssociateFaces
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection containing the UserID.
user_id(impl Into<String>)
/set_user_id(Option<String>)
:
required: trueThe ID for the existing UserID.
face_ids(impl Into<String>)
/set_face_ids(Option<Vec::<String>>)
:
required: trueAn array of FaceIDs to associate with the UserID.
user_match_threshold(f32)
/set_user_match_threshold(Option<f32>)
:
required: falseAn optional value specifying the minimum confidence in the UserID match to return. The default value is 75.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the request to
AssociateFaces
. If you use the same token with multiple AssociateFaces
requests, the same response is returned. Use ClientRequestToken to prevent the same request from being processed more than once.
- On success, responds with
AssociateFacesOutput
with field(s):associated_faces(Option<Vec::<AssociatedFace>>)
:An array of AssociatedFace objects containing FaceIDs that have been successfully associated with the UserID. Returned if the AssociateFaces action is successful.
unsuccessful_face_associations(Option<Vec::<UnsuccessfulFaceAssociation>>)
:An array of UnsuccessfulAssociation objects containing FaceIDs that are not successfully associated along with the reasons. Returned if the AssociateFaces action is successful.
user_status(Option<UserStatus>)
:The status of an update made to a UserID. Reflects if the UserID has been updated for every requested change.
- On failure, responds with
SdkError<AssociateFacesError>
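As a rough usage sketch (assuming a client constructed as shown above; the collection ID, UserID, and face ID below are placeholder values, and output fields are read through same-named accessor methods), an AssociateFaces call might look like:
let result = client.associate_faces()
    .collection_id("my-collection")                      // placeholder collection ID
    .user_id("my-user")                                  // placeholder UserID
    .face_ids("11111111-2222-3333-4444-555555555555")    // repeat .face_ids(...) to append more face IDs
    .user_match_threshold(80.0)
    .send()
    .await;
match result {
    Ok(output) => println!("user status: {:?}", output.user_status()),
    Err(err) => eprintln!("AssociateFaces failed: {err}"),
}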
impl Client
pub fn compare_faces(&self) -> CompareFacesFluentBuilder
Constructs a fluent builder for the CompareFaces
operation.
- The fluent builder is configurable:
source_image(Image)
/set_source_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.target_image(Image)
/set_target_image(Option<Image>)
:
required: trueThe target image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.similarity_threshold(f32)
/set_similarity_threshold(Option<f32>)
:
required: falseThe minimum level of confidence in the face matches that a match must meet to be included in the
FaceMatches
array.quality_filter(QualityFilter)
/set_quality_filter(Option<QualityFilter>)
:
required: falseA filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t compared. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE. To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
- On success, responds with
CompareFacesOutput
with field(s):source_image_face(Option<ComparedSourceImageFace>)
:The face in the source image that was used for comparison.
face_matches(Option<Vec::<CompareFacesMatch>>)
:An array of faces in the target image that match the source image face. Each
CompareFacesMatch
object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity score for the face in the bounding box and the face in the source image.unmatched_faces(Option<Vec::<ComparedFace>>)
:An array of faces in the target image that did not match the source image face.
source_image_orientation_correction(Option<OrientationCorrection>)
:The value of
SourceImageOrientationCorrection
is always null.If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
target_image_orientation_correction(Option<OrientationCorrection>)
:The value of
TargetImageOrientationCorrection
is always null.If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
- On failure, responds with
SdkError<CompareFacesError>
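A minimal sketch of calling CompareFaces with two S3-hosted images follows; the bucket and object keys are placeholders, and it assumes the Image and S3Object builders from this crate's types module:
use aws_sdk_rekognition::types::{Image, S3Object};

let source = Image::builder()
    .s3_object(S3Object::builder().bucket("my-bucket").name("source.jpg").build())
    .build();
let target = Image::builder()
    .s3_object(S3Object::builder().bucket("my-bucket").name("target.jpg").build())
    .build();
let result = client.compare_faces()
    .source_image(source)
    .target_image(target)
    .similarity_threshold(90.0)
    .send()
    .await;
if let Ok(output) = &result {
    println!("matches: {:?}", output.face_matches());
}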
impl Client
pub fn copy_project_version(&self) -> CopyProjectVersionFluentBuilder
Constructs a fluent builder for the CopyProjectVersion
operation.
- The fluent builder is configurable:
source_project_arn(impl Into<String>)
/set_source_project_arn(Option<String>)
:
required: trueThe ARN of the source project in the trusting AWS account.
source_project_version_arn(impl Into<String>)
/set_source_project_version_arn(Option<String>)
:
required: trueThe ARN of the model version in the source project that you want to copy to a destination project.
destination_project_arn(impl Into<String>)
/set_destination_project_arn(Option<String>)
:
required: trueThe ARN of the project in the trusted AWS account that you want to copy the model version to.
version_name(impl Into<String>)
/set_version_name(Option<String>)
:
required: trueA name for the version of the model that’s copied to the destination project.
output_config(OutputConfig)
/set_output_config(Option<OutputConfig>)
:
required: trueThe S3 bucket and folder location where the training output for the source model version is placed.
tags(impl Into<String>, impl Into<String>)
/set_tags(Option<HashMap::<String, String>>)
:
required: falseThe key-value tags to assign to the model version.
kms_key_id(impl Into<String>)
/set_kms_key_id(Option<String>)
:
required: falseThe identifier for your AWS Key Management Service key (AWS KMS key). You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of your KMS key, an alias for your KMS key, or an alias ARN. The key is used to encrypt training results and manifest files written to the output Amazon S3 bucket (
OutputConfig
).If you choose to use your own KMS key, you need the following permissions on the KMS key.
-
kms:CreateGrant
-
kms:DescribeKey
-
kms:GenerateDataKey
-
kms:Decrypt
If you don’t specify a value for
KmsKeyId
, images copied into the service are encrypted using a key that AWS owns and manages.-
- On success, responds with
CopyProjectVersionOutput
with field(s):project_version_arn(Option<String>)
:The ARN of the copied model version in the destination project.
- On failure, responds with
SdkError<CopyProjectVersionError>
impl Client
pub fn create_collection(&self) -> CreateCollectionFluentBuilder
Constructs a fluent builder for the CreateCollection
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueID for the collection that you are creating.
tags(impl Into<String>, impl Into<String>)
/set_tags(Option<HashMap::<String, String>>)
:
required: falseA set of tags (key-value pairs) that you want to attach to the collection.
- On success, responds with
CreateCollectionOutput
with field(s):status_code(Option<i32>)
:HTTP status code indicating the result of the operation.
collection_arn(Option<String>)
:Amazon Resource Name (ARN) of the collection. You can use this to manage permissions on your resources.
face_model_version(Option<String>)
:Version number of the face detection model associated with the collection you are creating.
- On failure, responds with
SdkError<CreateCollectionError>
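For illustration only (the collection ID and tag values are placeholders), creating a collection and reading back its ARN could be sketched as:
let result = client.create_collection()
    .collection_id("my-collection")   // placeholder collection ID
    .tags("project", "demo")          // optional key-value tag
    .send()
    .await;
if let Ok(output) = &result {
    println!("collection ARN: {:?}", output.collection_arn());
}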
impl Client
pub fn create_dataset(&self) -> CreateDatasetFluentBuilder
Constructs a fluent builder for the CreateDataset
operation.
- The fluent builder is configurable:
dataset_source(DatasetSource)
/set_dataset_source(Option<DatasetSource>)
:
required: falseThe source files for the dataset. You can specify the ARN of an existing dataset or specify the Amazon S3 bucket location of an Amazon Sagemaker format manifest file. If you don’t specify
datasetSource
, an empty dataset is created. To add labeled images to the dataset, you can use the console or call UpdateDatasetEntries
.dataset_type(DatasetType)
/set_dataset_type(Option<DatasetType>)
:
required: trueThe type of the dataset. Specify
TRAIN
to create a training dataset. Specify TEST
to create a test dataset.project_arn(impl Into<String>)
/set_project_arn(Option<String>)
:
required: trueThe ARN of the Amazon Rekognition Custom Labels project to which you want to assign the dataset.
tags(impl Into<String>, impl Into<String>)
/set_tags(Option<HashMap::<String, String>>)
:
required: falseA set of tags (key-value pairs) that you want to attach to the dataset.
- On success, responds with
CreateDatasetOutput
with field(s):dataset_arn(Option<String>)
:The ARN of the created Amazon Rekognition Custom Labels dataset.
- On failure, responds with
SdkError<CreateDatasetError>
impl Client
pub fn create_face_liveness_session(&self) -> CreateFaceLivenessSessionFluentBuilder
Constructs a fluent builder for the CreateFaceLivenessSession
operation.
- The fluent builder is configurable:
kms_key_id(impl Into<String>)
/set_kms_key_id(Option<String>)
:
required: falseThe identifier for your AWS Key Management Service key (AWS KMS key). Used to encrypt audit images and reference images.
settings(CreateFaceLivenessSessionRequestSettings)
/set_settings(Option<CreateFaceLivenessSessionRequestSettings>)
:
required: falseA session settings object. It contains settings for the operation to be performed. For Face Liveness, it accepts
OutputConfig
and AuditImagesLimit
.client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token is used to recognize the Face Liveness request. If the same token is used with multiple
CreateFaceLivenessSession
requests, the same session is returned. This token is employed to avoid unintentionally creating the same session multiple times.
- On success, responds with
CreateFaceLivenessSessionOutput
with field(s):session_id(String)
:A unique 128-bit UUID identifying a Face Liveness session. A new sessionID must be used for every Face Liveness check. If a given sessionID is used for subsequent Face Liveness checks, the checks will fail. Additionally, a SessionId expires 3 minutes after it’s sent, making all Liveness data associated with the session (e.g., sessionID, reference image, audit images, etc.) unavailable.
- On failure, responds with
SdkError<CreateFaceLivenessSessionError>
impl Client
pub fn create_project(&self) -> CreateProjectFluentBuilder
Constructs a fluent builder for the CreateProject
operation.
- The fluent builder is configurable:
project_name(impl Into<String>)
/set_project_name(Option<String>)
:
required: trueThe name of the project to create.
feature(CustomizationFeature)
/set_feature(Option<CustomizationFeature>)
:
required: falseSpecifies the feature that is being customized. If no value is provided, CUSTOM_LABELS is used as a default.
auto_update(ProjectAutoUpdate)
/set_auto_update(Option<ProjectAutoUpdate>)
:
required: falseSpecifies whether automatic retraining should be attempted for the versions of the project. Automatic retraining is done as a best effort. Required argument for Content Moderation. Applicable only to adapters.
tags(impl Into<String>, impl Into<String>)
/set_tags(Option<HashMap::<String, String>>)
:
required: falseA set of tags (key-value pairs) that you want to attach to the project.
- On success, responds with
CreateProjectOutput
with field(s):project_arn(Option<String>)
:The Amazon Resource Name (ARN) of the new project. You can use the ARN to configure IAM access to the project.
- On failure, responds with
SdkError<CreateProjectError>
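A hedged sketch of the simplest CreateProject call, supplying only the required project name (a placeholder here), might be:
let result = client.create_project()
    .project_name("my-project")   // placeholder project name
    .send()
    .await;
if let Ok(output) = &result {
    println!("project ARN: {:?}", output.project_arn());
}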
impl Client
pub fn create_project_version(&self) -> CreateProjectVersionFluentBuilder
Constructs a fluent builder for the CreateProjectVersion
operation.
- The fluent builder is configurable:
project_arn(impl Into<String>)
/set_project_arn(Option<String>)
:
required: trueThe ARN of the Amazon Rekognition project that will manage the project version you want to train.
version_name(impl Into<String>)
/set_version_name(Option<String>)
:
required: trueA name for the version of the project version. This value must be unique.
output_config(OutputConfig)
/set_output_config(Option<OutputConfig>)
:
required: trueThe Amazon S3 bucket location to store the results of training. The bucket can be any S3 bucket in your AWS account. You need
s3:PutObject
permission on the bucket.training_data(TrainingData)
/set_training_data(Option<TrainingData>)
:
required: falseSpecifies an external manifest that the service uses to train the project version. If you specify
TrainingData
you must also specifyTestingData
. The project must not have any associated datasets.testing_data(TestingData)
/set_testing_data(Option<TestingData>)
:
required: falseSpecifies an external manifest that the service uses to test the project version. If you specify
TestingData
you must also specifyTrainingData
. The project must not have any associated datasets.tags(impl Into<String>, impl Into<String>)
/set_tags(Option<HashMap::<String, String>>)
:
required: falseA set of tags (key-value pairs) that you want to attach to the project version.
kms_key_id(impl Into<String>)
/set_kms_key_id(Option<String>)
:
required: falseThe identifier for your AWS Key Management Service key (AWS KMS key). You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of your KMS key, an alias for your KMS key, or an alias ARN. The key is used to encrypt training images, test images, and manifest files copied into the service for the project version. Your source images are unaffected. The key is also used to encrypt training results and manifest files written to the output Amazon S3 bucket (
OutputConfig
).If you choose to use your own KMS key, you need the following permissions on the KMS key.
-
kms:CreateGrant
-
kms:DescribeKey
-
kms:GenerateDataKey
-
kms:Decrypt
If you don’t specify a value for
KmsKeyId
, images copied into the service are encrypted using a key that AWS owns and manages.-
version_description(impl Into<String>)
/set_version_description(Option<String>)
:
required: falseA description applied to the project version being created.
feature_config(CustomizationFeatureConfig)
/set_feature_config(Option<CustomizationFeatureConfig>)
:
required: falseFeature-specific configuration of the training job. If the job configuration does not match the feature type associated with the project, an InvalidParameterException is returned.
- On success, responds with
CreateProjectVersionOutput
with field(s):project_version_arn(Option<String>)
:The ARN of the model or the project version that was created. Use
DescribeProjectVersion
to get the current status of the training operation.
- On failure, responds with
SdkError<CreateProjectVersionError>
impl Client
pub fn create_stream_processor(&self) -> CreateStreamProcessorFluentBuilder
Constructs a fluent builder for the CreateStreamProcessor
operation.
- The fluent builder is configurable:
input(StreamProcessorInput)
/set_input(Option<StreamProcessorInput>)
:
required: trueKinesis video stream that provides the source streaming video. If you are using the AWS CLI, the parameter name is
StreamProcessorInput
. This is required for both face search and label detection stream processors.output(StreamProcessorOutput)
/set_output(Option<StreamProcessorOutput>)
:
required: trueKinesis data stream or Amazon S3 bucket location to which Amazon Rekognition Video puts the analysis results. If you are using the AWS CLI, the parameter name is
StreamProcessorOutput
. This must be aS3Destination
of an Amazon S3 bucket that you own for a label detection stream processor or a Kinesis data stream ARN for a face search stream processor.name(impl Into<String>)
/set_name(Option<String>)
:
required: trueAn identifier you assign to the stream processor. You can use
Name
to manage the stream processor. For example, you can get the current status of the stream processor by callingDescribeStreamProcessor
.Name
is idempotent. This is required for both face search and label detection stream processors.settings(StreamProcessorSettings)
/set_settings(Option<StreamProcessorSettings>)
:
required: trueInput parameters used in a streaming video analyzed by a stream processor. You can use
FaceSearch
to recognize faces in a streaming video, or you can useConnectedHome
to detect labels.role_arn(impl Into<String>)
/set_role_arn(Option<String>)
:
required: trueThe Amazon Resource Number (ARN) of the IAM role that allows access to the stream processor. The IAM role provides Rekognition read permissions for a Kinesis stream. It also provides write permissions to an Amazon S3 bucket and Amazon Simple Notification Service topic for a label detection stream processor. This is required for both face search and label detection stream processors.
tags(impl Into<String>, impl Into<String>)
/set_tags(Option<HashMap::<String, String>>)
:
required: falseA set of tags (key-value pairs) that you want to attach to the stream processor.
notification_channel(StreamProcessorNotificationChannel)
/set_notification_channel(Option<StreamProcessorNotificationChannel>)
:
required: falseThe Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition detects a person at second 2, a pet at second 4, and a person again at second 5, Amazon Rekognition sends 2 object class detected notifications, one for a person at second 2 and one for a pet at second 4.
Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete.
kms_key_id(impl Into<String>)
/set_kms_key_id(Option<String>)
:
required: falseThe identifier for your AWS Key Management Service key (AWS KMS key). This is an optional parameter for label detection stream processors and should not be used to create a face search stream processor. You can supply the Amazon Resource Name (ARN) of your KMS key, the ID of your KMS key, an alias for your KMS key, or an alias ARN. The key is used to encrypt results and data published to your Amazon S3 bucket, which includes image frames and hero images. Your source images are unaffected.
regions_of_interest(RegionOfInterest)
/set_regions_of_interest(Option<Vec::<RegionOfInterest>>)
:
required: falseSpecifies locations in the frames where Amazon Rekognition checks for objects or people. You can specify up to 10 regions of interest, and each region has either a polygon or a bounding box. This is an optional parameter for label detection stream processors and should not be used to create a face search stream processor.
data_sharing_preference(StreamProcessorDataSharingPreference)
/set_data_sharing_preference(Option<StreamProcessorDataSharingPreference>)
:
required: falseShows whether you are sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level this setting is ignored on individual streams.
- On success, responds with
CreateStreamProcessorOutput
with field(s):stream_processor_arn(Option<String>)
:Amazon Resource Number for the newly created stream processor.
- On failure, responds with
SdkError<CreateStreamProcessorError>
impl Client
pub fn create_user(&self) -> CreateUserFluentBuilder
Constructs a fluent builder for the CreateUser
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection to which the new UserID needs to be created.
user_id(impl Into<String>)
/set_user_id(Option<String>)
:
required: trueID for the UserID to be created. This ID needs to be unique within the collection.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the request to
CreateUser
. If you use the same token with multiple CreateUser
requests, the same response is returned. Use ClientRequestToken to prevent the same request from being processed more than once.
- On success, responds with
CreateUserOutput
- On failure, responds with
SdkError<CreateUserError>
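As an illustrative sketch (the IDs and the idempotency token are placeholders), a CreateUser call could look like:
let result = client.create_user()
    .collection_id("my-collection")
    .user_id("my-user")
    .client_request_token("11111111-2222-3333-4444-555555555555")   // optional idempotency token
    .send()
    .await;
if let Err(err) = result {
    eprintln!("CreateUser failed: {err}");
}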
impl Client
pub fn delete_collection(&self) -> DeleteCollectionFluentBuilder
Constructs a fluent builder for the DeleteCollection
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueID of the collection to delete.
- On success, responds with
DeleteCollectionOutput
with field(s):status_code(Option<i32>)
:HTTP status code that indicates the result of the operation.
- On failure, responds with
SdkError<DeleteCollectionError>
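A minimal sketch, with a placeholder collection ID:
let result = client.delete_collection()
    .collection_id("my-collection")
    .send()
    .await;
if let Ok(output) = &result {
    println!("status code: {:?}", output.status_code());
}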
impl Client
pub fn delete_dataset(&self) -> DeleteDatasetFluentBuilder
Constructs a fluent builder for the DeleteDataset
operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:
required: trueThe ARN of the Amazon Rekognition Custom Labels dataset that you want to delete.
- On success, responds with
DeleteDatasetOutput
- On failure, responds with
SdkError<DeleteDatasetError>
impl Client
pub fn delete_faces(&self) -> DeleteFacesFluentBuilder
Constructs a fluent builder for the DeleteFaces
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueCollection from which to remove the specific faces.
face_ids(impl Into<String>)
/set_face_ids(Option<Vec::<String>>)
:
required: trueAn array of face IDs to delete.
- On success, responds with
DeleteFacesOutput
with field(s):deleted_faces(Option<Vec::<String>>)
:An array of strings (face IDs) of the faces that were deleted.
unsuccessful_face_deletions(Option<Vec::<UnsuccessfulFaceDeletion>>)
:An array of any faces that weren’t deleted.
- On failure, responds with
SdkError<DeleteFacesError>
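For example (the collection ID and face IDs are placeholders; .face_ids(...) is called once per ID to append to the list), a DeleteFaces call might be sketched as:
let result = client.delete_faces()
    .collection_id("my-collection")
    .face_ids("11111111-2222-3333-4444-555555555555")
    .face_ids("66666666-7777-8888-9999-000000000000")
    .send()
    .await;
if let Ok(output) = &result {
    println!("deleted: {:?}", output.deleted_faces());
}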
impl Client
pub fn delete_project(&self) -> DeleteProjectFluentBuilder
Constructs a fluent builder for the DeleteProject
operation.
- The fluent builder is configurable:
project_arn(impl Into<String>)
/set_project_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the project that you want to delete.
- On success, responds with
DeleteProjectOutput
with field(s):status(Option<ProjectStatus>)
:The current status of the delete project operation.
- On failure, responds with
SdkError<DeleteProjectError>
impl Client
pub fn delete_project_policy(&self) -> DeleteProjectPolicyFluentBuilder
Constructs a fluent builder for the DeleteProjectPolicy
operation.
- The fluent builder is configurable:
project_arn(impl Into<String>)
/set_project_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the project that the project policy you want to delete is attached to.
policy_name(impl Into<String>)
/set_policy_name(Option<String>)
:
required: trueThe name of the policy that you want to delete.
policy_revision_id(impl Into<String>)
/set_policy_revision_id(Option<String>)
:
required: falseThe ID of the project policy revision that you want to delete.
- On success, responds with
DeleteProjectPolicyOutput
- On failure, responds with
SdkError<DeleteProjectPolicyError>
impl Client
pub fn delete_project_version(&self) -> DeleteProjectVersionFluentBuilder
Constructs a fluent builder for the DeleteProjectVersion
operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>)
/set_project_version_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the project version that you want to delete.
- On success, responds with
DeleteProjectVersionOutput
with field(s):status(Option<ProjectVersionStatus>)
:The status of the deletion operation.
- On failure, responds with
SdkError<DeleteProjectVersionError>
impl Client
pub fn delete_stream_processor(&self) -> DeleteStreamProcessorFluentBuilder
Constructs a fluent builder for the DeleteStreamProcessor
operation.
- The fluent builder is configurable:
name(impl Into<String>)
/set_name(Option<String>)
:
required: trueThe name of the stream processor you want to delete.
- On success, responds with
DeleteStreamProcessorOutput
- On failure, responds with
SdkError<DeleteStreamProcessorError>
impl Client
pub fn delete_user(&self) -> DeleteUserFluentBuilder
Constructs a fluent builder for the DeleteUser
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection from which the UserID needs to be deleted.
user_id(impl Into<String>)
/set_user_id(Option<String>)
:
required: trueID for the UserID to be deleted.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the request to
DeleteUser
. If you use the same token with multiple DeleteUser
requests, the same response is returned. Use ClientRequestToken to prevent the same request from being processed more than once.
- On success, responds with
DeleteUserOutput
- On failure, responds with
SdkError<DeleteUserError>
impl Client
pub fn describe_collection(&self) -> DescribeCollectionFluentBuilder
Constructs a fluent builder for the DescribeCollection
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of the collection to describe.
- On success, responds with
DescribeCollectionOutput
with field(s):face_count(Option<i64>)
:The number of faces that are indexed into the collection. To index faces into a collection, use
IndexFaces
.face_model_version(Option<String>)
:The version of the face model that’s used by the collection for face detection.
For more information, see Model versioning in the Amazon Rekognition Developer Guide.
collection_arn(Option<String>)
:The Amazon Resource Name (ARN) of the collection.
creation_timestamp(Option<DateTime>)
:The number of milliseconds since the Unix epoch time until the creation of the collection. The Unix epoch time is 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970.
user_count(Option<i64>)
:The number of UserIDs assigned to the specified collection.
- On failure, responds with
SdkError<DescribeCollectionError>
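A sketch of describing a collection and printing a few of the returned fields (the collection ID is a placeholder, and output fields are assumed to be exposed through same-named accessor methods):
let result = client.describe_collection()
    .collection_id("my-collection")
    .send()
    .await;
if let Ok(output) = &result {
    println!(
        "faces: {:?}, users: {:?}, model: {:?}",
        output.face_count(),
        output.user_count(),
        output.face_model_version()
    );
}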
impl Client
pub fn describe_dataset(&self) -> DescribeDatasetFluentBuilder
Constructs a fluent builder for the DescribeDataset
operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the dataset that you want to describe.
- On success, responds with
DescribeDatasetOutput
with field(s):dataset_description(Option<DatasetDescription>)
:The description for the dataset.
- On failure, responds with
SdkError<DescribeDatasetError>
impl Client
pub fn describe_project_versions(&self) -> DescribeProjectVersionsFluentBuilder
Constructs a fluent builder for the DescribeProjectVersions
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
project_arn(impl Into<String>)
/set_project_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the project that contains the model/adapter you want to describe.
version_names(impl Into<String>)
/set_version_names(Option<Vec::<String>>)
:
required: falseA list of model or project version names that you want to describe. You can add up to 10 model or project version names to the list. If you don’t specify a value, all project version descriptions are returned. A version name is part of a project version ARN. For example,
my-model.2020-01-21T09.10.15
is the version name in the following ARN.arn:aws:rekognition:us-east-1:123456789012:project/getting-started/version/my-model.2020-01-21T09.10.15/1234567890123
.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
- On success, responds with
DescribeProjectVersionsOutput
with field(s):project_version_descriptions(Option<Vec::<ProjectVersionDescription>>)
:A list of project version descriptions. The list is sorted by the creation date and time of the project versions, latest to earliest.
next_token(Option<String>)
:If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<DescribeProjectVersionsError>
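Since this operation supports pagination, a sketch using into_paginator() might look like the following; the project ARN is a placeholder, and it assumes the paginator's send() returns a stream whose next() yields one page result at a time:
let mut pages = client.describe_project_versions()
    .project_arn("arn:aws:rekognition:us-east-1:111122223333:project/my-project/1234567890123")
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    match page {
        Ok(output) => println!("versions in page: {:?}", output.project_version_descriptions()),
        Err(err) => {
            eprintln!("DescribeProjectVersions failed: {err}");
            break;
        }
    }
}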
impl Client
pub fn describe_projects(&self) -> DescribeProjectsFluentBuilder
Constructs a fluent builder for the DescribeProjects
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more results to retrieve), Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
project_names(impl Into<String>)
/set_project_names(Option<Vec::<String>>)
:
required: falseA list of the projects that you want Rekognition to describe. If you don’t specify a value, the response includes descriptions for all the projects in your AWS account.
features(CustomizationFeature)
/set_features(Option<Vec::<CustomizationFeature>>)
:
required: falseSpecifies the type of customization to filter projects by. If no value is specified, CUSTOM_LABELS is used as a default.
- On success, responds with
DescribeProjectsOutput
with field(s):project_descriptions(Option<Vec::<ProjectDescription>>)
:A list of project descriptions. The list is sorted by the date and time the projects are created.
next_token(Option<String>)
:If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<DescribeProjectsError>
impl Client
pub fn describe_stream_processor(&self) -> DescribeStreamProcessorFluentBuilder
Constructs a fluent builder for the DescribeStreamProcessor
operation.
- The fluent builder is configurable:
name(impl Into<String>)
/set_name(Option<String>)
:
required: trueName of the stream processor for which you want information.
- On success, responds with
DescribeStreamProcessorOutput
with field(s):name(Option<String>)
:Name of the stream processor.
stream_processor_arn(Option<String>)
:ARN of the stream processor.
status(Option<StreamProcessorStatus>)
:Current status of the stream processor.
status_message(Option<String>)
:Detailed status message about the stream processor.
creation_timestamp(Option<DateTime>)
:Date and time the stream processor was created
last_update_timestamp(Option<DateTime>)
:The time, in Unix format, the stream processor was last updated. For example, when the stream processor moves from a running state to a failed state, or when the user starts or stops the stream processor.
input(Option<StreamProcessorInput>)
:Kinesis video stream that provides the source streaming video.
output(Option<StreamProcessorOutput>)
:Kinesis data stream to which Amazon Rekognition Video puts the analysis results.
role_arn(Option<String>)
:ARN of the IAM role that allows access to the stream processor.
settings(Option<StreamProcessorSettings>)
:Input parameters used in a streaming video analyzed by a stream processor. You can use
FaceSearch
to recognize faces in a streaming video, or you can useConnectedHome
to detect labels.notification_channel(Option<StreamProcessorNotificationChannel>)
:The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. For example, if Amazon Rekognition detects a person at second 2, a pet at second 4, and a person again at second 5, Amazon Rekognition sends 2 object class detected notifications, one for a person at second 2 and one for a pet at second 4.
Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete.
kms_key_id(Option<String>)
:The identifier for your AWS Key Management Service key (AWS KMS key). This is an optional parameter for label detection stream processors.
regions_of_interest(Option<Vec::<RegionOfInterest>>)
:Specifies locations in the frames where Amazon Rekognition checks for objects or people. This is an optional parameter for label detection stream processors.
data_sharing_preference(Option<StreamProcessorDataSharingPreference>)
:Shows whether you are sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level this setting is ignored on individual streams.
- On failure, responds with
SdkError<DescribeStreamProcessorError>
impl Client
pub fn detect_custom_labels(&self) -> DetectCustomLabelsFluentBuilder
Constructs a fluent builder for the DetectCustomLabels
operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>)
/set_project_version_arn(Option<String>)
:
required: trueThe ARN of the model version that you want to use. Only models associated with Custom Labels projects are accepted by the operation. If a provided ARN refers to a model version associated with a project for a different feature type, then an InvalidParameterException is returned.
image(Image)
/set_image(Option<Image>)
:
required: trueProvides the input image either as bytes or an S3 object.
You pass image bytes to an Amazon Rekognition API operation by using the
Bytes
property. For example, you would use theBytes
property to pass an image loaded from a local file system. Image bytes passed by using theBytes
property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.
You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the
S3Object
property. Images stored in an S3 bucket do not need to be base64-encoded.The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest.
min_confidence(f32)
/set_min_confidence(Option<f32>)
:
required: falseSpecifies the minimum confidence level for the labels to return.
DetectCustomLabels
doesn’t return any labels with a confidence value that’s lower than this specified value. If you specify a value of 0, DetectCustomLabels returns all labels, regardless of the assumed threshold applied to each label. If you don’t specify a value for MinConfidence, DetectCustomLabels returns labels based on the assumed threshold of each label.
- On success, responds with
DetectCustomLabelsOutput
with field(s):custom_labels(Option<Vec::<CustomLabel>>)
:An array of custom labels detected in the input image.
- On failure, responds with
SdkError<DetectCustomLabelsError>
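A hedged sketch of a DetectCustomLabels call against an S3-hosted image; the model version ARN, bucket, and key are placeholders, and the Image and S3Object builders are assumed to come from this crate's types module:
use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.detect_custom_labels()
    .project_version_arn("arn:aws:rekognition:us-east-1:111122223333:project/my-project/version/my-model.2020-01-21T09.10.15/1234567890123")
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("photo.jpg").build())
        .build())
    .min_confidence(70.0)
    .send()
    .await;
if let Ok(output) = &result {
    println!("custom labels: {:?}", output.custom_labels());
}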
impl Client
pub fn detect_faces(&self) -> DetectFacesFluentBuilder
Constructs a fluent builder for the DetectFaces
operation.
- The fluent builder is configurable:
image(Image)
/set_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.attributes(Attribute)
/set_attributes(Option<Vec::<Attribute>>)
:
required: falseAn array of facial attributes you want to be returned. A DEFAULT subset of facial attributes - BoundingBox, Confidence, Pose, Quality, and Landmarks - will always be returned. You can request for specific facial attributes (in addition to the default list) - by using [“DEFAULT”, “FACE_OCCLUDED”] or just [“FACE_OCCLUDED”]. You can request for all facial attributes by using [“ALL”]. Requesting more attributes may increase response time.
If you provide both,
[“ALL”, “DEFAULT”]
, the service uses a logical “AND” operator to determine which attributes to return (in this case, all attributes). Note that while the FaceOccluded and EyeDirection attributes are supported when using
DetectFaces
, they aren’t supported when analyzing videos withStartFaceDetection
andGetFaceDetection
.
- On success, responds with
DetectFacesOutput
with field(s):face_details(Option<Vec::<FaceDetail>>)
:Details of each face found in the image.
orientation_correction(Option<OrientationCorrection>)
:The value of
OrientationCorrection
is always null.If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
- On failure, responds with
SdkError<DetectFacesError>
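As an illustrative sketch (the bucket and key are placeholders, and it assumes the Attribute enum exposes an All variant for requesting all facial attributes):
use aws_sdk_rekognition::types::{Attribute, Image, S3Object};

let result = client.detect_faces()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("group-photo.jpg").build())
        .build())
    .attributes(Attribute::All)   // request all facial attributes instead of the DEFAULT subset
    .send()
    .await;
if let Ok(output) = &result {
    println!("faces found: {:?}", output.face_details());
}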
impl Client
pub fn detect_labels(&self) -> DetectLabelsFluentBuilder
Constructs a fluent builder for the DetectLabels
operation.
- The fluent builder is configurable:
image(Image)
/set_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 Bucket do not need to be base64-encoded.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.max_labels(i32)
/set_max_labels(Option<i32>)
:
required: falseMaximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels. Only valid when GENERAL_LABELS is specified as a feature type in the Feature input parameter.
min_confidence(f32)
/set_min_confidence(Option<f32>)
:
required: falseSpecifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with confidence lower than this specified value.
If
MinConfidence
is not specified, the operation returns labels with confidence values greater than or equal to 55 percent. Only valid when GENERAL_LABELS is specified as a feature type in the Feature input parameter.features(DetectLabelsFeatureName)
/set_features(Option<Vec::<DetectLabelsFeatureName>>)
:
required: falseA list of the types of analysis to perform. Specifying GENERAL_LABELS uses the label detection feature, while specifying IMAGE_PROPERTIES returns information regarding image color and quality. If no option is specified GENERAL_LABELS is used by default.
settings(DetectLabelsSettings)
/set_settings(Option<DetectLabelsSettings>)
:
required: falseA list of the filters to be applied to returned detected labels and image properties. Specified filters can be inclusive, exclusive, or a combination of both. Filters can be used for individual labels or label categories. The exact label names or label categories must be supplied. For a full list of labels and label categories, see Detecting labels.
- On success, responds with
DetectLabelsOutput
with field(s):labels(Option<Vec::<Label>>)
:An array of labels for the real-world objects detected.
orientation_correction(Option<OrientationCorrection>)
:The value of
OrientationCorrection
is always null.If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.
Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
label_model_version(Option<String>)
:Version number of the label detection model that was used to detect labels.
image_properties(Option<DetectLabelsImageProperties>)
:Information about the properties of the input image, such as brightness, sharpness, contrast, and dominant colors.
- On failure, responds with
SdkError<DetectLabelsError>
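A minimal sketch of DetectLabels on an S3-hosted image (the bucket and key are placeholders):
use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.detect_labels()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("photo.jpg").build())
        .build())
    .max_labels(10)
    .min_confidence(75.0)
    .send()
    .await;
if let Ok(output) = &result {
    println!("labels: {:?}", output.labels());
}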
impl Client
pub fn detect_moderation_labels(&self) -> DetectModerationLabelsFluentBuilder
Constructs a fluent builder for the DetectModerationLabels
operation.
- The fluent builder is configurable:
image(Image)
/set_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.min_confidence(f32)
/set_min_confidence(Option<f32>)
:
required: falseSpecifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with a confidence level lower than this specified value.
If you don’t specify
MinConfidence
, the operation returns labels with confidence values greater than or equal to 50 percent.human_loop_config(HumanLoopConfig)
/set_human_loop_config(Option<HumanLoopConfig>)
:
required: falseSets up the configuration for human evaluation, including the FlowDefinition the image will be sent to.
project_version(impl Into<String>)
/set_project_version(Option<String>)
:
required: falseIdentifier for the custom adapter. Expects the ProjectVersionArn as a value. Use the CreateProject or CreateProjectVersion APIs to create a custom adapter.
- On success, responds with
DetectModerationLabelsOutput
with field(s):moderation_labels(Option<Vec::<ModerationLabel>>)
:Array of detected Moderation labels. For video operations, this includes the time, in milliseconds from the start of the video, they were detected.
moderation_model_version(Option<String>)
:Version number of the base moderation detection model that was used to detect unsafe content.
human_loop_activation_output(Option<HumanLoopActivationOutput>)
:Shows the results of the human in the loop evaluation.
project_version(Option<String>)
:Identifier of the custom adapter that was used during inference. If during inference the adapter was EXPIRED, then the parameter will not be returned, indicating that a base moderation detection project version was used.
content_types(Option<Vec::<ContentType>>)
:A list of predicted results for the type of content an image contains. For example, the image content might be from animation, sports, or a video game.
- On failure, responds with
SdkError<DetectModerationLabelsError>
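For illustration (the bucket and key are placeholders), a DetectModerationLabels call could be sketched as:
use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.detect_moderation_labels()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("photo.jpg").build())
        .build())
    .min_confidence(60.0)
    .send()
    .await;
if let Ok(output) = &result {
    println!("moderation labels: {:?}", output.moderation_labels());
}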
impl Client
pub fn detect_protective_equipment(&self) -> DetectProtectiveEquipmentFluentBuilder
Constructs a fluent builder for the DetectProtectiveEquipment
operation.
- The fluent builder is configurable:
image(Image)
/set_image(Option<Image>)
:
required: trueThe image in which you want to detect PPE on detected persons. The image can be passed as image bytes or you can reference an image stored in an Amazon S3 bucket.
summarization_attributes(ProtectiveEquipmentSummarizationAttributes)
/set_summarization_attributes(Option<ProtectiveEquipmentSummarizationAttributes>)
:
required: falseAn array of PPE types that you want to summarize.
- On success, responds with
DetectProtectiveEquipmentOutput
with field(s):protective_equipment_model_version(Option<String>)
:The version number of the PPE detection model used to detect PPE in the image.
persons(Option<Vec::<ProtectiveEquipmentPerson>>)
:An array of persons detected in the image (including persons not wearing PPE).
summary(Option<ProtectiveEquipmentSummary>)
:Summary information for the types of PPE specified in the
SummarizationAttributes
input parameter.
- On failure, responds with
SdkError<DetectProtectiveEquipmentError>
impl Client
pub fn detect_text(&self) -> DetectTextFluentBuilder
Constructs a fluent builder for the DetectText
operation.
- The fluent builder is configurable:
image(Image)
/set_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an Amazon S3 object. If you use the AWS CLI to call Amazon Rekognition operations, you can’t pass image bytes.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.filters(DetectTextFilters)
/set_filters(Option<DetectTextFilters>)
:
required: falseOptional parameters that let you set the criteria that the text must meet to be included in your response.
- On success, responds with
DetectTextOutput
with field(s):text_detections(Option<Vec::<TextDetection>>)
:An array of text that was detected in the input image.
text_model_version(Option<String>)
:The model version used to detect text.
- On failure, responds with
SdkError<DetectTextError>
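A sketch of DetectText against an S3-hosted image (the bucket and key are placeholders):
use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.detect_text()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("street-sign.jpg").build())
        .build())
    .send()
    .await;
if let Ok(output) = &result {
    println!("text detections: {:?}", output.text_detections());
}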
impl Client
pub fn disassociate_faces(&self) -> DisassociateFacesFluentBuilder
Constructs a fluent builder for the DisassociateFaces
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection containing the UserID.
user_id(impl Into<String>)
/set_user_id(Option<String>)
:
required: trueID for the existing UserID.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the request to
DisassociateFaces
. If you use the same token with multiple DisassociateFaces
requests, the same response is returned. Use ClientRequestToken to prevent the same request from being processed more than once.face_ids(impl Into<String>)
/set_face_ids(Option<Vec::<String>>)
:
required: trueAn array of face IDs to disassociate from the UserID.
- On success, responds with
DisassociateFacesOutput
with field(s):disassociated_faces(Option<Vec::<DisassociatedFace>>)
:An array of DisassociatedFace objects containing FaceIds that were successfully disassociated from the UserID. Returned if the DisassociateFaces action is successful.
unsuccessful_face_disassociations(Option<Vec::<UnsuccessfulFaceDisassociation>>)
:An array of UnsuccessfulDisassociation objects containing FaceIds that are not successfully associated, along with the reasons for the failure to associate. Returned if the DisassociateFaces action is successful.
user_status(Option<UserStatus>)
:The status of an update made to a User. Reflects if the User has been updated for every requested change.
- On failure, responds with
SdkError<DisassociateFacesError>
impl Client
pub fn distribute_dataset_entries(&self) -> DistributeDatasetEntriesFluentBuilder
Constructs a fluent builder for the DistributeDatasetEntries
operation.
- The fluent builder is configurable:
datasets(DistributeDataset)
/set_datasets(Option<Vec::<DistributeDataset>>)
:
required: trueThe ARNs for the training dataset and test dataset that you want to use. The datasets must belong to the same project. The test dataset must be empty.
- On success, responds with
DistributeDatasetEntriesOutput
- On failure, responds with
SdkError<DistributeDatasetEntriesError>
impl Client
pub fn get_celebrity_info(&self) -> GetCelebrityInfoFluentBuilder
Constructs a fluent builder for the GetCelebrityInfo
operation.
- The fluent builder is configurable:
id(impl Into<String>)
/set_id(Option<String>)
:
required: trueThe ID for the celebrity. You get the celebrity ID from a call to the
RecognizeCelebrities
operation, which recognizes celebrities in an image.
- On success, responds with
GetCelebrityInfoOutput
with field(s):urls(Option<Vec::<String>>)
:An array of URLs pointing to additional celebrity information.
name(Option<String>)
:The name of the celebrity.
known_gender(Option<KnownGender>)
:Retrieves the known gender for the celebrity.
- On failure, responds with
SdkError<GetCelebrityInfoError>
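For example (the celebrity ID shown is a placeholder obtained from a prior RecognizeCelebrities call):
let result = client.get_celebrity_info()
    .id("1SK278s9") // placeholder celebrity ID
    .send()
    .await;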
impl Client
pub fn get_celebrity_recognition(&self) -> GetCelebrityRecognitionFluentBuilder
Constructs a fluent builder for the GetCelebrityRecognition
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueJob identifier for the required celebrity recognition analysis. You can get the job identifier from a call to
StartCelebrityRecognition
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more recognized celebrities to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of celebrities.
sort_by(CelebrityRecognitionSortBy)
/set_sort_by(Option<CelebrityRecognitionSortBy>)
:
required: falseSort to use for celebrities returned in
Celebrities
field. SpecifyID
to sort by the celebrity identifier, specifyTIMESTAMP
to sort by the time the celebrity was recognized.
- On success, responds with
GetCelebrityRecognitionOutput
with field(s):job_status(Option<VideoJobStatus>)
:The current status of the celebrity recognition job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.video_metadata(Option<VideoMetadata>)
:Information about a video that Amazon Rekognition Video analyzed.
Videometadata
is returned in every page of paginated responses from an Amazon Rekognition Video operation.next_token(Option<String>)
:If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of celebrities.
celebrities(Option<Vec::<CelebrityRecognition>>)
:Array of celebrities recognized in the video.
job_id(Option<String>)
:Job identifier for the celebrity recognition operation for which you want to obtain results. The job identifier is returned by an initial call to StartCelebrityRecognition.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartCelebrityRecognition and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
- On failure, responds with
SdkError<GetCelebrityRecognitionError>
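A sketch of retrieving results page by page with the paginator; the job ID is a placeholder, and the exact PaginationStream surface may differ slightly between SDK versions.
let mut pages = client.get_celebrity_recognition()
    .job_id("example-job-id")
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    // each `page` is a Result<GetCelebrityRecognitionOutput, SdkError<GetCelebrityRecognitionError>>
}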
impl Client
pub fn get_content_moderation(&self) -> GetContentModerationFluentBuilder
Constructs a fluent builder for the GetContentModeration
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueThe identifier for the inappropriate, unwanted, or offensive content moderation job. Use
JobId
to identify the job in a subsequent call toGetContentModeration
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of content moderation labels.
sort_by(ContentModerationSortBy)
/set_sort_by(Option<ContentModerationSortBy>)
:
required: falseSort to use for elements in the
ModerationLabelDetections
array. UseTIMESTAMP
to sort array elements by the time labels are detected. UseNAME
to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by
.aggregate_by(ContentModerationAggregateBy)
/set_aggregate_by(Option<ContentModerationAggregateBy>)
:
required: falseDefines how to aggregate results of the StartContentModeration request. Default aggregation option is TIMESTAMPS. SEGMENTS mode aggregates moderation labels over time.
- On success, responds with
GetContentModerationOutput
with field(s):job_status(Option<VideoJobStatus>)
:The current status of the content moderation analysis job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.video_metadata(Option<VideoMetadata>)
:Information about a video that Amazon Rekognition analyzed.
Videometadata
is returned in every page of paginated responses fromGetContentModeration
.moderation_labels(Option<Vec::<ContentModerationDetection>>)
:The detected inappropriate, unwanted, or offensive content moderation labels and the time(s) they were detected.
next_token(Option<String>)
:If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of content moderation labels.
moderation_model_version(Option<String>)
:Version number of the moderation detection model that was used to detect inappropriate, unwanted, or offensive content.
job_id(Option<String>)
:Job identifier for the content moderation operation for which you want to obtain results. The job identifier is returned by an initial call to StartContentModeration.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartContentModeration and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
get_request_metadata(Option<GetContentModerationRequestMetadata>)
:Information about the parameters used when getting a response. Includes information on aggregation and sorting methods.
- On failure, responds with
SdkError<GetContentModerationError>
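For example, fetching a single page of results for a completed job (the job ID is a placeholder):
let result = client.get_content_moderation()
    .job_id("example-job-id")
    .max_results(500)
    .send()
    .await;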
impl Client
pub fn get_face_detection(&self) -> GetFaceDetectionFluentBuilder
Constructs a fluent builder for the GetFaceDetection
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueUnique identifier for the face detection job. The
JobId
is returned fromStartFaceDetection
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more faces to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
- On success, responds with
GetFaceDetectionOutput
with field(s):job_status(Option<VideoJobStatus>)
:The current status of the face detection job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.video_metadata(Option<VideoMetadata>)
:Information about a video that Amazon Rekognition Video analyzed.
Videometadata
is returned in every page of paginated responses from an Amazon Rekognition Video operation.next_token(Option<String>)
:If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.
faces(Option<Vec::<FaceDetection>>)
:An array of faces detected in the video. Each element contains a detected face’s details and the time, in milliseconds from the start of the video, the face was detected.
job_id(Option<String>)
:Job identifier for the face detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartFaceDetection.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartFaceDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
- On failure, responds with
SdkError<GetFaceDetectionError>
impl Client
pub fn get_face_liveness_session_results( &self, ) -> GetFaceLivenessSessionResultsFluentBuilder
Constructs a fluent builder for the GetFaceLivenessSessionResults
operation.
- The fluent builder is configurable:
session_id(impl Into<String>)
/set_session_id(Option<String>)
:
required: trueA unique 128-bit UUID. This is used to uniquely identify the session and also acts as an idempotency token for all operations associated with the session.
- On success, responds with
GetFaceLivenessSessionResultsOutput
with field(s):session_id(String)
:The sessionId for which this request was called.
status(LivenessSessionStatus)
:Represents a status corresponding to the state of the session. Possible statuses are: CREATED, IN_PROGRESS, SUCCEEDED, FAILED, EXPIRED.
confidence(Option<f32>)
:Probabilistic confidence score indicating whether the person in the given video was live, represented as a float value between 0 and 100.
reference_image(Option<AuditImage>)
:A high-quality image from the Face Liveness video that can be used for face comparison or search. It includes a bounding box of the face and the Base64-encoded bytes that return an image. If the CreateFaceLivenessSession request included an OutputConfig argument, the image will be uploaded to an S3Object specified in the output configuration. In case the reference image is not returned, it’s recommended to retry the Liveness check.
audit_images(Option<Vec::<AuditImage>>)
:A set of images from the Face Liveness video that can be used for audit purposes. It includes a bounding box of the face and the Base64-encoded bytes that return an image. If the CreateFaceLivenessSession request included an OutputConfig argument, the image will be uploaded to an S3Object specified in the output configuration. If no Amazon S3 bucket is defined, raw bytes are sent instead.
- On failure, responds with
SdkError<GetFaceLivenessSessionResultsError>
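For example (the session ID is a placeholder UUID returned by CreateFaceLivenessSession):
let result = client.get_face_liveness_session_results()
    .session_id("11111111-2222-3333-4444-555555555555")
    .send()
    .await;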
impl Client
pub fn get_face_search(&self) -> GetFaceSearchFluentBuilder
Constructs a fluent builder for the GetFaceSearch
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueThe job identifier for the search request. You get the job identifier from an initial call to
StartFaceSearch
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more search results to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of search results.
sort_by(FaceSearchSortBy)
/set_sort_by(Option<FaceSearchSortBy>)
:
required: falseSort to use for grouping faces in the response. Use
TIMESTAMP
to group faces by the time that they are recognized. UseINDEX
to sort by recognized faces.
- On success, responds with
GetFaceSearchOutput
with field(s):job_status(Option<VideoJobStatus>)
:The current status of the face search job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.next_token(Option<String>)
:If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.
video_metadata(Option<VideoMetadata>)
:Information about a video that Amazon Rekognition analyzed.
Videometadata
is returned in every page of paginated responses from an Amazon Rekognition Video operation.persons(Option<Vec::<PersonMatch>>)
:An array of persons,
PersonMatch
, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call toStartFaceSearch
. EachPersons
element includes a time the person was matched, face match details (FaceMatches
) for matching faces in the collection, and person information (Person
) for the matched person.job_id(Option<String>)
:Job identifier for the face search operation for which you want to obtain results. The job identifier is returned by an initial call to StartFaceSearch.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartFaceSearch and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
- On failure, responds with
SdkError<GetFaceSearchError>
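A minimal sketch, assuming the FaceSearchSortBy enum is exported from the crate's types module; the job ID is a placeholder.
use aws_sdk_rekognition::types::FaceSearchSortBy;

let result = client.get_face_search()
    .job_id("example-job-id")
    .sort_by(FaceSearchSortBy::Index)
    .send()
    .await;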
impl Client
pub fn get_label_detection(&self) -> GetLabelDetectionFluentBuilder
Constructs a fluent builder for the GetLabelDetection
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueJob identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to
StartLabelDetection
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.
sort_by(LabelDetectionSortBy)
/set_sort_by(Option<LabelDetectionSortBy>)
:
required: falseSort to use for elements in the
Labels
array. UseTIMESTAMP
to sort array elements by the time labels are detected. UseNAME
to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by
.aggregate_by(LabelDetectionAggregateBy)
/set_aggregate_by(Option<LabelDetectionAggregateBy>)
:
required: falseDefines how to aggregate the returned results. Results can be aggregated by timestamps or segments.
- On success, responds with
GetLabelDetectionOutput
with field(s):job_status(Option<VideoJobStatus>)
:The current status of the label detection job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.video_metadata(Option<VideoMetadata>)
:Information about a video that Amazon Rekognition Video analyzed.
Videometadata
is returned in every page of paginated responses from an Amazon Rekognition Video operation.next_token(Option<String>)
:If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of labels.
labels(Option<Vec::<LabelDetection>>)
:An array of labels detected in the video. Each element contains the detected label and the time, in milliseconds from the start of the video, that the label was detected.
label_model_version(Option<String>)
:Version number of the label detection model that was used to detect labels.
job_id(Option<String>)
:Job identifier for the label detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartLabelDetection.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartLabelDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
get_request_metadata(Option<GetLabelDetectionRequestMetadata>)
:Information about the parameters used when getting a response. Includes information on aggregation and sorting methods.
- On failure, responds with
SdkError<GetLabelDetectionError>
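A minimal sketch, assuming the LabelDetectionSortBy and LabelDetectionAggregateBy enums are exported from the crate's types module; the job ID is a placeholder.
use aws_sdk_rekognition::types::{LabelDetectionAggregateBy, LabelDetectionSortBy};

let result = client.get_label_detection()
    .job_id("example-job-id")
    .sort_by(LabelDetectionSortBy::Timestamp)
    .aggregate_by(LabelDetectionAggregateBy::Segments)
    .send()
    .await;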
impl Client
pub fn get_media_analysis_job(&self) -> GetMediaAnalysisJobFluentBuilder
Constructs a fluent builder for the GetMediaAnalysisJob
operation.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueUnique identifier for the media analysis job for which you want to retrieve results.
- On success, responds with
GetMediaAnalysisJobOutput
with field(s):job_id(String)
:The identifier for the media analysis job.
job_name(Option<String>)
:The name of the media analysis job.
operations_config(Option<MediaAnalysisOperationsConfig>)
:Operation configurations that were provided during job creation.
status(MediaAnalysisJobStatus)
:The current status of the media analysis job.
failure_details(Option<MediaAnalysisJobFailureDetails>)
:Details about the error that resulted in failure of the job.
creation_timestamp(DateTime)
:The Unix date and time when the job was started.
completion_timestamp(Option<DateTime>)
:The Unix date and time when the job finished.
input(Option<MediaAnalysisInput>)
:Reference to the input manifest that was provided in the job creation request.
output_config(Option<MediaAnalysisOutputConfig>)
:Output configuration that was provided in the creation request.
kms_key_id(Option<String>)
:KMS Key that was provided in the creation request.
results(Option<MediaAnalysisResults>)
:Output manifest that contains prediction results.
manifest_summary(Option<MediaAnalysisManifestSummary>)
:The summary manifest provides statistics on input manifest and errors identified in the input manifest.
- On failure, responds with
SdkError<GetMediaAnalysisJobError>
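For example (the job ID is a placeholder returned by StartMediaAnalysisJob):
let result = client.get_media_analysis_job()
    .job_id("example-media-analysis-job-id")
    .send()
    .await;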
impl Client
pub fn get_person_tracking(&self) -> GetPersonTrackingFluentBuilder
Constructs a fluent builder for the GetPersonTracking
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueThe identifier for a job that tracks persons in a video. You get the
JobId
from a call toStartPersonTracking
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more persons to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of persons.
sort_by(PersonTrackingSortBy)
/set_sort_by(Option<PersonTrackingSortBy>)
:
required: falseSort to use for elements in the
Persons
array. UseTIMESTAMP
to sort array elements by the time persons are detected. UseINDEX
to sort by the tracked persons. If you sort byINDEX
, the array elements for each person are sorted by detection confidence. The default sort is byTIMESTAMP
.
- On success, responds with
GetPersonTrackingOutput
with field(s):job_status(Option<VideoJobStatus>)
:The current status of the person tracking job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.video_metadata(Option<VideoMetadata>)
:Information about a video that Amazon Rekognition Video analyzed.
Videometadata
is returned in every page of paginated responses from an Amazon Rekognition Video operation.next_token(Option<String>)
:If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of persons.
persons(Option<Vec::<PersonDetection>>)
:An array of the persons detected in the video and the time(s) their path was tracked throughout the video. An array element will exist for each time a person’s path is tracked.
job_id(Option<String>)
:Job identifier for the person tracking operation for which you want to obtain results. The job identifier is returned by an initial call to StartPersonTracking.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartPersonTracking and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
- On failure, responds with
SdkError<GetPersonTrackingError>
impl Client
pub fn get_segment_detection(&self) -> GetSegmentDetectionFluentBuilder
Constructs a fluent builder for the GetSegmentDetection
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueJob identifier for the segment detection operation for which you want results returned. You get the job identifier from an initial call to
StartSegmentDetection
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of segments.
- On success, responds with
GetSegmentDetectionOutput
with field(s):job_status(Option<VideoJobStatus>)
:Current status of the segment detection job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.video_metadata(Option<Vec::<VideoMetadata>>)
:Currently, Amazon Rekognition Video returns a single object in the
VideoMetadata
array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. TheVideoMetadata
object includes the video codec, video format and other information. Video metadata is returned in each page of information returned byGetSegmentDetection
.audio_metadata(Option<Vec::<AudioMetadata>>)
:An array of objects. There can be multiple audio streams. Each
AudioMetadata
object contains metadata for a single audio stream. Audio information in anAudioMetadata
objects includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned byGetSegmentDetection
.next_token(Option<String>)
:If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of segments.
segments(Option<Vec::<SegmentDetection>>)
:An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the
SegmentTypes
input parameter ofStartSegmentDetection
. Within each segment type the array is sorted by timestamp values.selected_segment_types(Option<Vec::<SegmentTypeInfo>>)
:An array containing the segment types requested in the call to
StartSegmentDetection
.job_id(Option<String>)
:Job identifier for the segment detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartSegmentDetection.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartSegmentDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
- On failure, responds with
SdkError<GetSegmentDetectionError>
impl Client
pub fn get_text_detection(&self) -> GetTextDetectionFluentBuilder
Constructs a fluent builder for the GetTextDetection
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
job_id(impl Into<String>)
/set_job_id(Option<String>)
:
required: trueJob identifier for the text detection operation for which you want results returned. You get the job identifier from an initial call to
StartTextDetection
.max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of results to return per paginated call. The largest value you can specify is 1000.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there is more text to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of text.
- On success, responds with
GetTextDetectionOutput
with field(s):job_status(Option<VideoJobStatus>)
:Current status of the text detection job.
status_message(Option<String>)
:If the job fails,
StatusMessage
provides a descriptive error message.video_metadata(Option<VideoMetadata>)
:Information about a video that Amazon Rekognition analyzed.
Videometadata
is returned in every page of paginated responses from an Amazon Rekognition Video operation.text_detections(Option<Vec::<TextDetectionResult>>)
:An array of text detected in the video. Each element contains the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
next_token(Option<String>)
:If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of text.
text_model_version(Option<String>)
:Version number of the text detection model that was used to detect text.
job_id(Option<String>)
:Job identifier for the text detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartTextDetection.
video(Option<Video>)
:Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.job_tag(Option<String>)
:A job identifier specified in the call to StartTextDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
- On failure, responds with
SdkError<GetTextDetectionError>
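For example (the job ID is a placeholder returned by StartTextDetection):
let result = client.get_text_detection()
    .job_id("example-job-id")
    .max_results(1000)
    .send()
    .await;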
impl Client
pub fn index_faces(&self) -> IndexFacesFluentBuilder
Constructs a fluent builder for the IndexFaces
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection to which you want to add the faces that are detected in the input images.
image(Image)
/set_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes isn’t supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.external_image_id(impl Into<String>)
/set_external_image_id(Option<String>)
:
required: falseThe ID you want to assign to all the faces detected in the image.
detection_attributes(Attribute)
/set_detection_attributes(Option<Vec::<Attribute>>)
:
required: falseAn array of facial attributes you want to be returned. A
DEFAULT
subset of facial attributes -BoundingBox
,Confidence
,Pose
,Quality
, andLandmarks
- will always be returned. You can request for specific facial attributes (in addition to the default list) - by using[“DEFAULT”, “FACE_OCCLUDED”]
or just[“FACE_OCCLUDED”]
. You can request for all facial attributes by using[“ALL”]
. Requesting more attributes may increase response time.If you provide both,
[“ALL”, “DEFAULT”]
, the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).max_faces(i32)
/set_max_faces(Option<i32>)
:
required: falseThe maximum number of faces to index. The value of
MaxFaces
must be greater than or equal to 1.IndexFaces
returns no more than 100 detected faces in an image, even if you specify a larger value forMaxFaces
.If
IndexFaces
detects more faces than the value ofMaxFaces
, the faces with the lowest quality are filtered out first. If there are still more faces than the value ofMaxFaces
, the faces with the smallest bounding boxes are filtered out (up to the number that’s needed to satisfy the value ofMaxFaces
). Information about the unindexed faces is available in theUnindexedFaces
array.The faces that are returned by
IndexFaces
are sorted by the largest face bounding box size to the smallest size, in descending order.MaxFaces
can be used with a collection associated with any version of the face model.quality_filter(QualityFilter)
/set_quality_filter(Option<QualityFilter>)
:
required: falseA filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t indexed. If you specify
AUTO
, Amazon Rekognition chooses the quality bar. If you specifyLOW
,MEDIUM
, orHIGH
, filtering removes all faces that don’t meet the chosen quality bar. The default value isAUTO
. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specifyNONE
, no filtering is performed.To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
- On success, responds with
IndexFacesOutput
with field(s):face_records(Option<Vec::<FaceRecord>>)
:An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.
orientation_correction(Option<OrientationCorrection>)
:If your collection is associated with a face detection model that’s later than version 3.0, the value of
OrientationCorrection
is always null and no orientation information is returned.If your collection is associated with a face detection model that’s version 3.0 or earlier, the following applies:
-
If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. The value of
OrientationCorrection
is null. -
If the image doesn’t contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn’t perform image correction for images. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.
Bounding box information is returned in the
FaceRecords
array. You can get the version of the face detection model by callingDescribeCollection
.-
face_model_version(Option<String>)
:The version number of the face detection model that’s associated with the input collection (
CollectionId
).unindexed_faces(Option<Vec::<UnindexedFace>>)
:An array of faces that were detected in the image but weren’t indexed. They weren’t indexed because the quality filter identified them as low quality, or the
MaxFaces
request parameter filtered them out. To use the quality filter, you specify theQualityFilter
request parameter.
- On failure, responds with
SdkError<IndexFacesError>
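A sketch of indexing faces from an S3 image into a collection; the collection ID, bucket, and object names are placeholders, and the Image, S3Object, and QualityFilter types are assumed to be exported from the crate's types module.
use aws_sdk_rekognition::types::{Image, QualityFilter, S3Object};

let result = client.index_faces()
    .collection_id("my-collection")
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("amzn-s3-demo-bucket").name("group-photo.jpg").build())
        .build())
    .external_image_id("group-photo")
    .max_faces(10)
    .quality_filter(QualityFilter::Auto)
    .send()
    .await;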
impl Client
pub fn list_collections(&self) -> ListCollectionsFluentBuilder
Constructs a fluent builder for the ListCollections
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falsePagination token from the previous response.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of collection IDs to return.
- On success, responds with
ListCollectionsOutput
with field(s):collection_ids(Option<Vec::<String>>)
:An array of collection IDs.
next_token(Option<String>)
:If the result is truncated, the response provides a
NextToken
that you can use in the subsequent request to fetch the next set of collection IDs.face_model_versions(Option<Vec::<String>>)
:Version numbers of the face detection models associated with the collections in the array
CollectionIds
. For example, the value ofFaceModelVersions[2]
is the version number for the face detection model used by the collection inCollectionId[2]
.
- On failure, responds with
SdkError<ListCollectionsError>
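A sketch of listing collection IDs page by page with the paginator; the PaginationStream surface may differ slightly between SDK versions.
let mut pages = client.list_collections()
    .max_results(10)
    .into_paginator()
    .send();
while let Some(page) = pages.next().await {
    // each `page` is a Result<ListCollectionsOutput, SdkError<ListCollectionsError>>
}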
impl Client
pub fn list_dataset_entries(&self) -> ListDatasetEntriesFluentBuilder
Constructs a fluent builder for the ListDatasetEntries
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) for the dataset that you want to use.
contains_labels(impl Into<String>)
/set_contains_labels(Option<Vec::<String>>)
:
required: falseSpecifies a label filter for the response. The response includes an entry only if one or more of the labels in
ContainsLabels
exist in the entry.labeled(bool)
/set_labeled(Option<bool>)
:
required: falseSpecify
true
to get only the JSON Lines where the image is labeled. Specifyfalse
to get only the JSON Lines where the image isn’t labeled. If you don’t specifyLabeled
,ListDatasetEntries
returns JSON Lines for labeled and unlabeled images.source_ref_contains(impl Into<String>)
/set_source_ref_contains(Option<String>)
:
required: falseIf specified,
ListDatasetEntries
only returns JSON Lines where the value ofSourceRefContains
is part of thesource-ref
field. Thesource-ref
field contains the Amazon S3 location of the image. You can useSouceRefContains
for tasks such as getting the JSON Line for a single image, or gettting JSON Lines for all images within a specific folder.has_errors(bool)
/set_has_errors(Option<bool>)
:
required: falseSpecifies an error filter for the response. Specify
True
to only include entries that have errors.next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
- On success, responds with
ListDatasetEntriesOutput
with field(s):dataset_entries(Option<Vec::<String>>)
:A list of entries (images) in the dataset.
next_token(Option<String>)
:If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<ListDatasetEntriesError>
impl Client
pub fn list_dataset_labels(&self) -> ListDatasetLabelsFluentBuilder
Constructs a fluent builder for the ListDatasetLabels
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the dataset that you want to use.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, a ValidationException error occurs. The default value is 100.
- On success, responds with
ListDatasetLabelsOutput
with field(s):dataset_label_descriptions(Option<Vec::<DatasetLabelDescription>>)
:A list of the labels in the dataset.
next_token(Option<String>)
:If the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
- On failure, responds with
SdkError<ListDatasetLabelsError>
impl Client
pub fn list_faces(&self) -> ListFacesFluentBuilder
Constructs a fluent builder for the ListFaces
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueID of the collection from which to list the faces.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of faces to return.
user_id(impl Into<String>)
/set_user_id(Option<String>)
:
required: falseAn array of user IDs to filter results with when listing faces in a collection.
face_ids(impl Into<String>)
/set_face_ids(Option<Vec::<String>>)
:
required: falseAn array of face IDs to filter results with when listing faces in a collection.
- On success, responds with
ListFacesOutput
with field(s):faces(Option<Vec::<Face>>)
:An array of
Face
objects.next_token(Option<String>)
:If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.
face_model_version(Option<String>)
:Version number of the face detection model associated with the input collection (
CollectionId
).
- On failure, responds with
SdkError<ListFacesError>
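For example (the collection ID is a placeholder):
let result = client.list_faces()
    .collection_id("my-collection")
    .max_results(100)
    .send()
    .await;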
impl Client
pub fn list_media_analysis_jobs(&self) -> ListMediaAnalysisJobsFluentBuilder
Constructs a fluent builder for the ListMediaAnalysisJobs
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falsePagination token, if the previous response was incomplete.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return per paginated call. The largest value you can specify is 100. If you specify a value greater than 100, an
InvalidParameterException
error occurs. The default value is 100.
- On success, responds with
ListMediaAnalysisJobsOutput
with field(s):next_token(Option<String>)
:Pagination token, if the previous response was incomplete.
media_analysis_jobs(Vec::<MediaAnalysisJobDescription>)
:Contains a list of all media analysis jobs.
- On failure, responds with
SdkError<ListMediaAnalysisJobsError>
impl Client
pub fn list_project_policies(&self) -> ListProjectPoliciesFluentBuilder
Constructs a fluent builder for the ListProjectPolicies
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
project_arn(impl Into<String>)
/set_project_arn(Option<String>)
:
required: trueThe ARN of the project for which you want to list the project policies.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more results to retrieve), Amazon Rekognition Custom Labels returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseThe maximum number of results to return per paginated call. The largest value you can specify is 5. If you specify a value greater than 5, a ValidationException error occurs. The default value is 5.
- On success, responds with
ListProjectPoliciesOutput
with field(s):project_policies(Option<Vec::<ProjectPolicy>>)
:A list of project policies attached to the project.
next_token(Option<String>)
:If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of project policies.
- On failure, responds with
SdkError<ListProjectPoliciesError>
impl Client
pub fn list_stream_processors(&self) -> ListStreamProcessorsFluentBuilder
Constructs a fluent builder for the ListStreamProcessors
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falseIf the previous response was incomplete (because there are more stream processors to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of stream processors.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of stream processors you want Amazon Rekognition Video to return in the response. The default is 1000.
- On success, responds with
ListStreamProcessorsOutput
with field(s):next_token(Option<String>)
:If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of stream processors.
stream_processors(Option<Vec::<StreamProcessor>>)
:List of stream processors that you have created.
- On failure, responds with
SdkError<ListStreamProcessorsError>
impl Client
pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder
Constructs a fluent builder for the ListTagsForResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:
required: trueAmazon Resource Name (ARN) of the model, collection, or stream processor that contains the tags that you want a list of.
- On success, responds with
ListTagsForResourceOutput
with field(s):tags(Option<HashMap::<String, String>>)
:A list of key-value tags assigned to the resource.
- On failure, responds with
SdkError<ListTagsForResourceError>
impl Client
pub fn list_users(&self) -> ListUsersFluentBuilder
Constructs a fluent builder for the ListUsers
operation.
This operation supports pagination; See into_paginator()
.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection.
max_results(i32)
/set_max_results(Option<i32>)
:
required: falseMaximum number of UserIDs to return.
next_token(impl Into<String>)
/set_next_token(Option<String>)
:
required: falsePagination token to receive the next set of UserIDs.
- On success, responds with
ListUsersOutput
with field(s):users(Option<Vec::<User>>)
:List of UserIDs associated with the specified collection.
next_token(Option<String>)
:A pagination token to be used with the subsequent request if the response is truncated.
- On failure, responds with
SdkError<ListUsersError>
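For example (the collection ID is a placeholder):
let result = client.list_users()
    .collection_id("my-collection")
    .max_results(50)
    .send()
    .await;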
impl Client
pub fn put_project_policy(&self) -> PutProjectPolicyFluentBuilder
Constructs a fluent builder for the PutProjectPolicy
operation.
- The fluent builder is configurable:
project_arn(impl Into<String>)
/set_project_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the project that the project policy is attached to.
policy_name(impl Into<String>)
/set_policy_name(Option<String>)
:
required: trueA name for the policy.
policy_revision_id(impl Into<String>)
/set_policy_revision_id(Option<String>)
:
required: falseThe revision ID for the Project Policy. Each time you modify a policy, Amazon Rekognition Custom Labels generates and assigns a new
PolicyRevisionId
and then deletes the previous version of the policy.policy_document(impl Into<String>)
/set_policy_document(Option<String>)
:
required: trueA resource policy to add to the model. The policy is a JSON structure that contains one or more statements that define the policy. The policy must follow the IAM syntax. For more information about the contents of a JSON policy document, see IAM JSON policy reference.
- On success, responds with
PutProjectPolicyOutput
with field(s):policy_revision_id(Option<String>)
:The ID of the project policy.
- On failure, responds with
SdkError<PutProjectPolicyError>
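A minimal sketch; the project ARN, policy name, and policy document are placeholders and must be replaced with a real project ARN and a valid IAM-style resource policy.
// Placeholder policy document; supply real statements in practice.
let policy_document = r#"{"Version":"2012-10-17","Statement":[]}"#;
let result = client.put_project_policy()
    .project_arn("arn:aws:rekognition:us-east-1:111122223333:project/example-project/1234567890123")
    .policy_name("example-policy")
    .policy_document(policy_document)
    .send()
    .await;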
impl Client
pub fn recognize_celebrities(&self) -> RecognizeCelebritiesFluentBuilder
Constructs a fluent builder for the RecognizeCelebrities
operation.
- The fluent builder is configurable:
image(Image)
/set_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.
- On success, responds with
RecognizeCelebritiesOutput
with field(s):celebrity_faces(Option<Vec::<Celebrity>>)
:Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 64 celebrities in an image. Each celebrity object includes the following attributes:
Face
,Confidence
,Emotions
,Landmarks
,Pose
,Quality
,Smile
,Id
,KnownGender
,MatchConfidence
,Name
,Urls
.unrecognized_faces(Option<Vec::<ComparedFace>>)
:Details about each unrecognized face in the image.
orientation_correction(Option<OrientationCorrection>)
:Support for estimating image orientation using the OrientationCorrection field has ceased as of August 2021. Any returned values for this field included in an API response will always be NULL.
The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct the orientation. The bounding box coordinates returned in
CelebrityFaces
andUnrecognizedFaces
represent face locations before the image orientation is corrected.If the input image is in .jpeg format, it might contain exchangeable image (Exif) metadata that includes the image’s orientation. If so, and the Exif metadata for the input image populates the orientation field, the value of
OrientationCorrection
is null. TheCelebrityFaces
andUnrecognizedFaces
bounding box coordinates represent face locations after Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.
- On failure, responds with
SdkError<RecognizeCelebritiesError>
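For example, against an image stored in Amazon S3 (the bucket and object names are placeholders; the Image and S3Object builders are assumed to be exported from the crate's types module):
use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.recognize_celebrities()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("amzn-s3-demo-bucket").name("red-carpet.jpg").build())
        .build())
    .send()
    .await;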
impl Client
pub fn search_faces(&self) -> SearchFacesFluentBuilder
Constructs a fluent builder for the SearchFaces
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueID of the collection the face belongs to.
face_id(impl Into<String>)
/set_face_id(Option<String>)
:
required: trueID of a face to find matches for in the collection.
max_faces(i32)
/set_max_faces(Option<i32>)
:
required: falseMaximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
face_match_threshold(f32)
/set_face_match_threshold(Option<f32>)
:
required: falseOptional value specifying the minimum confidence in the face match to return. For example, don’t return any matches where confidence in matches is less than 70%. The default value is 80%.
- On success, responds with
SearchFacesOutput
with field(s):searched_face_id(Option<String>)
:ID of the face that was searched for matches in a collection.
face_matches(Option<Vec::<FaceMatch>>)
:An array of faces that matched the input face, along with the confidence in the match.
face_model_version(Option<String>)
:Version number of the face detection model associated with the input collection (
CollectionId
).
- On failure, responds with
SdkError<SearchFacesError>
impl Client
pub fn search_faces_by_image(&self) -> SearchFacesByImageFluentBuilder
Constructs a fluent builder for the SearchFacesByImage
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueID of the collection to search.
image(Image)
/set_image(Option<Image>)
:
required: trueThe input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.
If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the
Bytes
field. For more information, see Images in the Amazon Rekognition developer guide.max_faces(i32)
/set_max_faces(Option<i32>)
:
required: falseMaximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
face_match_threshold(f32)
/set_face_match_threshold(Option<f32>)
:
required: false(Optional) Specifies the minimum confidence in the face match to return. For example, don’t return any matches where confidence in matches is less than 70%. The default value is 80%.
quality_filter(QualityFilter)
/set_quality_filter(Option<QualityFilter>)
:
required: falseA filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t searched for in the collection. If you specify
AUTO
, Amazon Rekognition chooses the quality bar. If you specifyLOW
,MEDIUM
, orHIGH
, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specifyNONE
, no filtering is performed. The default value isNONE
.To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.
- On success, responds with
SearchFacesByImageOutput
with field(s):searched_face_bounding_box(Option<BoundingBox>)
:The bounding box around the face in the input image that Amazon Rekognition used for the search.
searched_face_confidence(Option<f32>)
:The level of confidence that the
searchedFaceBoundingBox
contains a face.face_matches(Option<Vec::<FaceMatch>>)
:An array of faces that match the input face, along with the confidence in the match.
face_model_version(Option<String>)
:Version number of the face detection model associated with the input collection (
CollectionId
).
- On failure, responds with
SdkError<SearchFacesByImageError>
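A sketch of searching a collection with the largest face found in an S3 image; the collection ID, bucket, and object names are placeholders, and the Image and S3Object builders are assumed to be exported from the crate's types module.
use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.search_faces_by_image()
    .collection_id("my-collection")
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("amzn-s3-demo-bucket").name("query-face.jpg").build())
        .build())
    .max_faces(5)
    .face_match_threshold(90.0)
    .send()
    .await;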
impl Client
pub fn search_users(&self) -> SearchUsersFluentBuilder
Constructs a fluent builder for the SearchUsers
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection containing the UserID, used with a UserId or FaceId. If a FaceId is provided, UserId isn’t required to be present in the Collection.
user_id(impl Into<String>)
/set_user_id(Option<String>)
:
required: falseID for the existing User.
face_id(impl Into<String>)
/set_face_id(Option<String>)
:
required: falseID for the existing face.
user_match_threshold(f32)
/set_user_match_threshold(Option<f32>)
:
required: falseOptional value that specifies the minimum confidence in the matched UserID to return. Default value of 80.
max_users(i32)
/set_max_users(Option<i32>)
:
required: falseMaximum number of identities to return.
- On success, responds with
SearchUsersOutput
with field(s):user_matches(Option<Vec::<UserMatch>>)
:An array of UserMatch objects that matched the input face along with the confidence in the match. Array will be empty if there are no matches.
face_model_version(Option<String>)
:Version number of the face detection model associated with the input CollectionId.
searched_face(Option<SearchedFace>)
:Contains the ID of a face that was used to search for matches in a collection.
searched_user(Option<SearchedUser>)
:Contains the ID of the UserID that was used to search for matches in a collection.
- On failure, responds with
SdkError<SearchUsersError>
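For example, a sketch that searches a collection by an existing UserID, assuming a configured client; the collection and UserID are placeholders:
// Placeholder collection and UserID; return at most 10 matches at 85% confidence or higher.
let result = client.search_users()
    .collection_id("example-collection")
    .user_id("example-user")
    .user_match_threshold(85.0)
    .max_users(10)
    .send()
    .await;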
Source§impl Client
impl Client
Sourcepub fn search_users_by_image(&self) -> SearchUsersByImageFluentBuilder
pub fn search_users_by_image(&self) -> SearchUsersByImageFluentBuilder
Constructs a fluent builder for the SearchUsersByImage
operation.
- The fluent builder is configurable:
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueThe ID of an existing collection containing the UserID.
image(Image)
/set_image(Option<Image>)
:
required: trueProvides the input image either as bytes or an S3 object.
You pass image bytes to an Amazon Rekognition API operation by using the
Bytes
property. For example, you would use theBytes
property to pass an image loaded from a local file system. Image bytes passed by using theBytes
property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations. For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.
You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the
S3Object
property. Images stored in an S3 bucket do not need to be base64-encoded. The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.
user_match_threshold(f32)
/set_user_match_threshold(Option<f32>)
:
required: falseSpecifies the minimum confidence in the UserID match to return. Default value is 80.
max_users(i32)
/set_max_users(Option<i32>)
:
required: falseMaximum number of UserIDs to return.
quality_filter(QualityFilter)
/set_quality_filter(Option<QualityFilter>)
:
required: falseA filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t searched for in the collection. The default value is NONE.
- On success, responds with
SearchUsersByImageOutput
with field(s):user_matches(Option<Vec::<UserMatch>>)
:An array of UserID objects that matched the input face, along with the confidence in the match. The returned structure will be empty if there are no matches. Returned if the SearchUsersByImageResponse action is successful.
face_model_version(Option<String>)
:Version number of the face detection model associated with the input collection CollectionId.
searched_face(Option<SearchedFaceDetails>)
:A list of FaceDetail objects containing the BoundingBox for the largest face in the image, as well as the confidence in the bounding box, that was searched for matches. If no valid face is detected in the image, the response will contain no SearchedFace object.
unsearched_faces(Option<Vec::<UnsearchedFace>>)
:List of UnsearchedFace objects. Contains the face details inferred from the specified image but not used for search. Contains reasons that describe why a face wasn’t used for search.
- On failure, responds with
SdkError<SearchUsersByImageError>
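For example, a sketch that searches for UserIDs matching the largest face in an S3-hosted image, assuming a configured client; the bucket, key, and collection ID are placeholders, and QualityFilter::Auto lets the service choose the quality bar:
use aws_sdk_rekognition::types::{Image, QualityFilter, S3Object};

// Placeholder bucket, key, and collection ID.
let result = client.search_users_by_image()
    .collection_id("example-collection")
    .image(
        Image::builder()
            .s3_object(S3Object::builder().bucket("example-bucket").name("group-photo.jpg").build())
            .build(),
    )
    .quality_filter(QualityFilter::Auto)
    .max_users(5)
    .send()
    .await;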
Source§impl Client
impl Client
Sourcepub fn start_celebrity_recognition(
&self,
) -> StartCelebrityRecognitionFluentBuilder
pub fn start_celebrity_recognition( &self, ) -> StartCelebrityRecognitionFluentBuilder
Constructs a fluent builder for the StartCelebrityRecognition
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueThe video in which you want to recognize celebrities. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartCelebrityRecognition
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the celebrity recognition analysis to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.
- On success, responds with
StartCelebrityRecognitionOutput
with field(s):job_id(Option<String>)
:The identifier for the celebrity recognition analysis job. Use
JobId
to identify the job in a subsequent call toGetCelebrityRecognition
.
- On failure, responds with
SdkError<StartCelebrityRecognitionError>
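For example, a sketch that starts celebrity recognition on an S3-hosted video, assuming a configured client; the bucket, key, request token, and job tag are placeholders:
use aws_sdk_rekognition::types::{S3Object, Video};

// Placeholder bucket, key, idempotency token, and job tag.
let result = client.start_celebrity_recognition()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("example-bucket").name("example-video.mp4").build())
            .build(),
    )
    .client_request_token("celebrity-job-001")
    .job_tag("celebrity-demo")
    .send()
    .await;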
Source§impl Client
impl Client
Sourcepub fn start_content_moderation(&self) -> StartContentModerationFluentBuilder
pub fn start_content_moderation(&self) -> StartContentModerationFluentBuilder
Constructs a fluent builder for the StartContentModeration
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueThe video in which you want to detect inappropriate, unwanted, or offensive content. The video must be stored in an Amazon S3 bucket.
min_confidence(f32)
/set_min_confidence(Option<f32>)
:
required: falseSpecifies the minimum confidence that Amazon Rekognition must have in order to return a moderated content label. Confidence represents how certain Amazon Rekognition is that the moderated content is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition doesn’t return any moderated content labels with a confidence level lower than this specified value. If you don’t specify
MinConfidence
,GetContentModeration
returns labels with confidence values greater than or equal to 50 percent.client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartContentModeration
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the content analysis to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.
job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.
- On success, responds with
StartContentModerationOutput
with field(s):job_id(Option<String>)
:The identifier for the content analysis job. Use
JobId
to identify the job in a subsequent call toGetContentModeration
.
- On failure, responds with
SdkError<StartContentModerationError>
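For example, a sketch that starts content moderation on an S3-hosted video with a 75 percent minimum confidence, assuming a configured client; the bucket and key are placeholders:
use aws_sdk_rekognition::types::{S3Object, Video};

// Placeholder bucket and key; only return moderation labels at 75% confidence or higher.
let result = client.start_content_moderation()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("example-bucket").name("example-video.mp4").build())
            .build(),
    )
    .min_confidence(75.0)
    .send()
    .await;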
Source§impl Client
impl Client
Sourcepub fn start_face_detection(&self) -> StartFaceDetectionFluentBuilder
pub fn start_face_detection(&self) -> StartFaceDetectionFluentBuilder
Constructs a fluent builder for the StartFaceDetection
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueThe video in which you want to detect faces. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartFaceDetection
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe ARN of the Amazon SNS topic to which you want Amazon Rekognition Video to publish the completion status of the face detection operation. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
face_attributes(FaceAttributes)
/set_face_attributes(Option<FaceAttributes>)
:
required: falseThe face attributes you want returned.
DEFAULT
- The following subset of facial attributes are returned: BoundingBox, Confidence, Pose, Quality and Landmarks.ALL
- All facial attributes are returned.job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.
- On success, responds with
StartFaceDetectionOutput
with field(s):job_id(Option<String>)
:The identifier for the face detection job. Use
JobId
to identify the job in a subsequent call toGetFaceDetection
.
- On failure, responds with
SdkError<StartFaceDetectionError>
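For example, a sketch that starts face detection and requests all facial attributes, assuming a configured client; the bucket and key are placeholders:
use aws_sdk_rekognition::types::{FaceAttributes, S3Object, Video};

// Placeholder bucket and key; ALL returns every facial attribute rather than the default subset.
let result = client.start_face_detection()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("example-bucket").name("example-video.mp4").build())
            .build(),
    )
    .face_attributes(FaceAttributes::All)
    .send()
    .await;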
Source§impl Client
impl Client
Sourcepub fn start_face_search(&self) -> StartFaceSearchFluentBuilder
pub fn start_face_search(&self) -> StartFaceSearchFluentBuilder
Constructs a fluent builder for the StartFaceSearch
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueThe video you want to search. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartFaceSearch
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.face_match_threshold(f32)
/set_face_match_threshold(Option<f32>)
:
required: falseThe minimum confidence in the person match to return. For example, don’t return any matches where confidence in matches is less than 70%. The default value is 80%.
collection_id(impl Into<String>)
/set_collection_id(Option<String>)
:
required: trueID of the collection that contains the faces you want to search for.
notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe ARN of the Amazon SNS topic to which you want Amazon Rekognition Video to publish the completion status of the search. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.
job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.
- On success, responds with
StartFaceSearchOutput
with field(s):job_id(Option<String>)
:The identifier for the search job. Use
JobId
to identify the job in a subsequent call toGetFaceSearch
.
- On failure, responds with
SdkError<StartFaceSearchError>
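For example, a sketch that starts a face search of a stored video against an existing collection, assuming a configured client; the bucket, key, and collection ID are placeholders:
use aws_sdk_rekognition::types::{S3Object, Video};

// Placeholder bucket, key, and collection ID.
let result = client.start_face_search()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("example-bucket").name("example-video.mp4").build())
            .build(),
    )
    .collection_id("example-collection")
    .face_match_threshold(80.0)
    .send()
    .await;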
Source§impl Client
impl Client
Sourcepub fn start_label_detection(&self) -> StartLabelDetectionFluentBuilder
pub fn start_label_detection(&self) -> StartLabelDetectionFluentBuilder
Constructs a fluent builder for the StartLabelDetection
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueThe video in which you want to detect labels. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartLabelDetection
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.min_confidence(f32)
/set_min_confidence(Option<f32>)
:
required: falseSpecifies the minimum confidence that Amazon Rekognition Video must have in order to return a detected label. Confidence represents how certain Amazon Rekognition is that a label is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition Video doesn’t return any labels with a confidence level lower than this specified value.
If you don’t specify
MinConfidence
, the operation returns labels and bounding boxes (if detected) with confidence values greater than or equal to 50 percent.notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe Amazon SNS topic ARN you want Amazon Rekognition Video to publish the completion status of the label detection operation to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.features(LabelDetectionFeatureName)
/set_features(Option<Vec::<LabelDetectionFeatureName>>)
:
required: falseThe features to return after video analysis. You can specify that GENERAL_LABELS are returned.
settings(LabelDetectionSettings)
/set_settings(Option<LabelDetectionSettings>)
:
required: falseThe settings for a StartLabelDetection request. Contains the specified parameters for the label detection request of an asynchronous label analysis operation. Settings can include filters for GENERAL_LABELS.
- On success, responds with
StartLabelDetectionOutput
with field(s):job_id(Option<String>)
:The identifier for the label detection job. Use
JobId
to identify the job in a subsequent call toGetLabelDetection
.
- On failure, responds with
SdkError<StartLabelDetectionError>
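For example, a sketch that starts label detection with a 60 percent minimum confidence and requests GENERAL_LABELS, assuming a configured client; the bucket and key are placeholders:
use aws_sdk_rekognition::types::{LabelDetectionFeatureName, S3Object, Video};

// Placeholder bucket and key; GENERAL_LABELS is the feature documented above.
let result = client.start_label_detection()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("example-bucket").name("example-video.mp4").build())
            .build(),
    )
    .min_confidence(60.0)
    .features(LabelDetectionFeatureName::GeneralLabels)
    .send()
    .await;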
Source§impl Client
impl Client
Sourcepub fn start_media_analysis_job(&self) -> StartMediaAnalysisJobFluentBuilder
pub fn start_media_analysis_job(&self) -> StartMediaAnalysisJobFluentBuilder
Constructs a fluent builder for the StartMediaAnalysisJob
operation.
- The fluent builder is configurable:
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotency token used to prevent the accidental creation of duplicate versions. If you use the same token with multiple
StartMediaAnalysisJobRequest
requests, the same response is returned. UseClientRequestToken
to prevent the same request from being processed more than once.job_name(impl Into<String>)
/set_job_name(Option<String>)
:
required: falseThe name of the job. Does not have to be unique.
operations_config(MediaAnalysisOperationsConfig)
/set_operations_config(Option<MediaAnalysisOperationsConfig>)
:
required: trueConfiguration options for the media analysis job to be created.
input(MediaAnalysisInput)
/set_input(Option<MediaAnalysisInput>)
:
required: trueInput data to be analyzed by the job.
output_config(MediaAnalysisOutputConfig)
/set_output_config(Option<MediaAnalysisOutputConfig>)
:
required: trueThe Amazon S3 bucket location to store the results.
kms_key_id(impl Into<String>)
/set_kms_key_id(Option<String>)
:
required: falseThe identifier of the customer managed AWS KMS key (name or ARN). The key is used to encrypt images copied into the service. The key is also used to encrypt results and manifest files written to the output Amazon S3 bucket.
- On success, responds with
StartMediaAnalysisJobOutput
with field(s):job_id(String)
:Identifier for the created job.
- On failure, responds with
SdkError<StartMediaAnalysisJobError>
Source§impl Client
impl Client
Sourcepub fn start_person_tracking(&self) -> StartPersonTrackingFluentBuilder
pub fn start_person_tracking(&self) -> StartPersonTrackingFluentBuilder
Constructs a fluent builder for the StartPersonTracking
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueThe video in which you want to detect people. The video must be stored in an Amazon S3 bucket.
client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartPersonTracking
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe Amazon SNS topic ARN you want Amazon Rekognition Video to publish the completion status of the people detection operation to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.
job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.
- On success, responds with
StartPersonTrackingOutput
with field(s):job_id(Option<String>)
:The identifier for the person detection job. Use
JobId
to identify the job in a subsequent call toGetPersonTracking
.
- On failure, responds with
SdkError<StartPersonTrackingError>
Source§impl Client
impl Client
Sourcepub fn start_project_version(&self) -> StartProjectVersionFluentBuilder
pub fn start_project_version(&self) -> StartProjectVersionFluentBuilder
Constructs a fluent builder for the StartProjectVersion
operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>)
/set_project_version_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the model version that you want to start.
min_inference_units(i32)
/set_min_inference_units(Option<i32>)
:
required: trueThe minimum number of inference units to use. A single inference unit represents 1 hour of processing.
Use a higher number to increase the TPS throughput of your model. You are charged for the number of inference units that you use.
max_inference_units(i32)
/set_max_inference_units(Option<i32>)
:
required: falseThe maximum number of inference units to use for auto-scaling the model. If you don’t specify a value, Amazon Rekognition Custom Labels doesn’t auto-scale the model.
- On success, responds with
StartProjectVersionOutput
with field(s):status(Option<ProjectVersionStatus>)
:The current running status of the model.
- On failure, responds with
SdkError<StartProjectVersionError>
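For example, a sketch that starts a Custom Labels model with one inference unit and auto-scaling up to two, assuming a configured client; the model-version ARN is a placeholder:
// Placeholder model-version ARN; substitute the ARN of your own project version.
let result = client.start_project_version()
    .project_version_arn("arn:aws:rekognition:us-east-1:111122223333:project/example-project/version/1/1234567890123")
    .min_inference_units(1)
    .max_inference_units(2)
    .send()
    .await;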
Source§impl Client
impl Client
Sourcepub fn start_segment_detection(&self) -> StartSegmentDetectionFluentBuilder
pub fn start_segment_detection(&self) -> StartSegmentDetectionFluentBuilder
Constructs a fluent builder for the StartSegmentDetection
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueVideo file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartSegmentDetection
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe ARN of the Amazon SNS topic to which you want Amazon Rekognition Video to publish the completion status of the segment detection operation. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.
job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.filters(StartSegmentDetectionFilters)
/set_filters(Option<StartSegmentDetectionFilters>)
:
required: falseFilters for technical cue or shot detection.
segment_types(SegmentType)
/set_segment_types(Option<Vec::<SegmentType>>)
:
required: trueAn array of segment types to detect in the video. Valid values are TECHNICAL_CUE and SHOT.
- On success, responds with
StartSegmentDetectionOutput
with field(s):job_id(Option<String>)
:Unique identifier for the segment detection job. The
JobId
is returned fromStartSegmentDetection
.
- On failure, responds with
SdkError<StartSegmentDetectionError>
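For example, a sketch that starts segment detection for both technical cues and shots, assuming a configured client; the bucket and key are placeholders:
use aws_sdk_rekognition::types::{S3Object, SegmentType, Video};

// Placeholder bucket and key; segment_types is an appender, so call it once per segment type.
let result = client.start_segment_detection()
    .video(
        Video::builder()
            .s3_object(S3Object::builder().bucket("example-bucket").name("example-video.mp4").build())
            .build(),
    )
    .segment_types(SegmentType::TechnicalCue)
    .segment_types(SegmentType::Shot)
    .send()
    .await;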
Source§impl Client
impl Client
Sourcepub fn start_stream_processor(&self) -> StartStreamProcessorFluentBuilder
pub fn start_stream_processor(&self) -> StartStreamProcessorFluentBuilder
Constructs a fluent builder for the StartStreamProcessor
operation.
- The fluent builder is configurable:
name(impl Into<String>)
/set_name(Option<String>)
:
required: trueThe name of the stream processor to start processing.
start_selector(StreamProcessingStartSelector)
/set_start_selector(Option<StreamProcessingStartSelector>)
:
required: falseSpecifies the starting point in the Kinesis stream to start processing. You can use the producer timestamp or the fragment number. If you use the producer timestamp, you must put the time in milliseconds. For more information about fragment numbers, see Fragment.
This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.
stop_selector(StreamProcessingStopSelector)
/set_stop_selector(Option<StreamProcessingStopSelector>)
:
required: falseSpecifies when to stop processing the stream. You can specify a maximum amount of time to process the video.
This is a required parameter for label detection stream processors and should not be used to start a face search stream processor.
- On success, responds with
StartStreamProcessorOutput
with field(s):session_id(Option<String>)
:A unique identifier for the stream processing session.
- On failure, responds with
SdkError<StartStreamProcessorError>
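For example, a sketch that starts an existing face search stream processor, assuming a configured client; the processor name is a placeholder, and the start and stop selectors are omitted because they apply only to label detection stream processors:
// Placeholder processor name created earlier with CreateStreamProcessor.
let result = client.start_stream_processor()
    .name("example-stream-processor")
    .send()
    .await;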
Source§impl Client
impl Client
Sourcepub fn start_text_detection(&self) -> StartTextDetectionFluentBuilder
pub fn start_text_detection(&self) -> StartTextDetectionFluentBuilder
Constructs a fluent builder for the StartTextDetection
operation.
- The fluent builder is configurable:
video(Video)
/set_video(Option<Video>)
:
required: trueVideo file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as
StartLabelDetection
useVideo
to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.client_request_token(impl Into<String>)
/set_client_request_token(Option<String>)
:
required: falseIdempotent token used to identify the start request. If you use the same token with multiple
StartTextDetection
requests, the sameJobId
is returned. UseClientRequestToken
to prevent the same job from being accidentally started more than once.notification_channel(NotificationChannel)
/set_notification_channel(Option<NotificationChannel>)
:
required: falseThe Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see Calling Amazon Rekognition Video operations. Note that the Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic. For more information, see Giving access to multiple Amazon SNS topics.
job_tag(impl Into<String>)
/set_job_tag(Option<String>)
:
required: falseAn identifier returned in the completion status published by your Amazon Simple Notification Service topic. For example, you can use
JobTag
to group related jobs and identify them in the completion notification.filters(StartTextDetectionFilters)
/set_filters(Option<StartTextDetectionFilters>)
:
required: falseOptional parameters that let you set criteria the text must meet to be included in your response.
- On success, responds with
StartTextDetectionOutput
with field(s):job_id(Option<String>)
:Identifier for the text detection job. Use
JobId
to identify the job in a subsequent call toGetTextDetection
.
- On failure, responds with
SdkError<StartTextDetectionError>
Source§impl Client
impl Client
Sourcepub fn stop_project_version(&self) -> StopProjectVersionFluentBuilder
pub fn stop_project_version(&self) -> StopProjectVersionFluentBuilder
Constructs a fluent builder for the StopProjectVersion
operation.
- The fluent builder is configurable:
project_version_arn(impl Into<String>)
/set_project_version_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the model version that you want to stop.
This operation requires permissions to perform the
rekognition:StopProjectVersion
action.
- On success, responds with
StopProjectVersionOutput
with field(s):status(Option<ProjectVersionStatus>)
:The current status of the stop operation.
- On failure, responds with
SdkError<StopProjectVersionError>
Source§impl Client
impl Client
Sourcepub fn stop_stream_processor(&self) -> StopStreamProcessorFluentBuilder
pub fn stop_stream_processor(&self) -> StopStreamProcessorFluentBuilder
Constructs a fluent builder for the StopStreamProcessor
operation.
- The fluent builder is configurable:
name(impl Into<String>)
/set_name(Option<String>)
:
required: trueThe name of a stream processor created by
CreateStreamProcessor
.
- On success, responds with
StopStreamProcessorOutput
- On failure, responds with
SdkError<StopStreamProcessorError>
Source§impl Client
impl Client
Sourcepub fn tag_resource(&self) -> TagResourceFluentBuilder
pub fn tag_resource(&self) -> TagResourceFluentBuilder
Constructs a fluent builder for the TagResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:
required: trueAmazon Resource Name (ARN) of the model, collection, or stream processor that you want to assign the tags to.
tags(impl Into<String>, impl Into<String>)
/set_tags(Option<HashMap::<String, String>>)
:
required: trueThe key-value tags to assign to the resource.
- On success, responds with
TagResourceOutput
- On failure, responds with
SdkError<TagResourceError>
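For example, a sketch that assigns two tags to a collection, assuming a configured client; the resource ARN and tag keys/values are placeholders, and tags is a map appender that takes one key-value pair per call:
// Placeholder collection ARN and tags.
let result = client.tag_resource()
    .resource_arn("arn:aws:rekognition:us-east-1:111122223333:collection/example-collection")
    .tags("project", "demo")
    .tags("stage", "test")
    .send()
    .await;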
Source§impl Client
impl Client
Sourcepub fn untag_resource(&self) -> UntagResourceFluentBuilder
pub fn untag_resource(&self) -> UntagResourceFluentBuilder
Constructs a fluent builder for the UntagResource
operation.
- The fluent builder is configurable:
resource_arn(impl Into<String>)
/set_resource_arn(Option<String>)
:
required: trueAmazon Resource Name (ARN) of the model, collection, or stream processor that you want to remove the tags from.
tag_keys(impl Into<String>)
/set_tag_keys(Option<Vec::<String>>)
:
required: trueA list of the tags that you want to remove.
- On success, responds with
UntagResourceOutput
- On failure, responds with
SdkError<UntagResourceError>
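For example, a sketch that removes a single tag from the same collection, assuming a configured client; the resource ARN and tag key are placeholders, and tag_keys is a list appender:
// Placeholder collection ARN; removes only the "stage" tag.
let result = client.untag_resource()
    .resource_arn("arn:aws:rekognition:us-east-1:111122223333:collection/example-collection")
    .tag_keys("stage")
    .send()
    .await;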
Source§impl Client
impl Client
Sourcepub fn update_dataset_entries(&self) -> UpdateDatasetEntriesFluentBuilder
pub fn update_dataset_entries(&self) -> UpdateDatasetEntriesFluentBuilder
Constructs a fluent builder for the UpdateDatasetEntries
operation.
- The fluent builder is configurable:
dataset_arn(impl Into<String>)
/set_dataset_arn(Option<String>)
:
required: trueThe Amazon Resource Name (ARN) of the dataset that you want to update.
changes(DatasetChanges)
/set_changes(Option<DatasetChanges>)
:
required: trueThe changes that you want to make to the dataset.
- On success, responds with
UpdateDatasetEntriesOutput
- On failure, responds with
SdkError<UpdateDatasetEntriesError>
Source§impl Client
impl Client
Sourcepub fn update_stream_processor(&self) -> UpdateStreamProcessorFluentBuilder
pub fn update_stream_processor(&self) -> UpdateStreamProcessorFluentBuilder
Constructs a fluent builder for the UpdateStreamProcessor
operation.
- The fluent builder is configurable:
name(impl Into<String>)
/set_name(Option<String>)
:
required: trueName of the stream processor that you want to update.
settings_for_update(StreamProcessorSettingsForUpdate)
/set_settings_for_update(Option<StreamProcessorSettingsForUpdate>)
:
required: falseThe stream processor settings that you want to update. Label detection settings can be updated to detect different labels with a different minimum confidence.
regions_of_interest_for_update(RegionOfInterest)
/set_regions_of_interest_for_update(Option<Vec::<RegionOfInterest>>)
:
required: falseSpecifies locations in the frames where Amazon Rekognition checks for objects or people. This is an optional parameter for label detection stream processors.
data_sharing_preference_for_update(StreamProcessorDataSharingPreference)
/set_data_sharing_preference_for_update(Option<StreamProcessorDataSharingPreference>)
:
required: falseShows whether you are sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams.
parameters_to_delete(StreamProcessorParameterToDelete)
/set_parameters_to_delete(Option<Vec::<StreamProcessorParameterToDelete>>)
:
required: falseA list of parameters you want to delete from the stream processor.
- On success, responds with
UpdateStreamProcessorOutput
- On failure, responds with
SdkError<UpdateStreamProcessorError>
Source§impl Client
impl Client
Sourcepub fn from_conf(conf: Config) -> Self
pub fn from_conf(conf: Config) -> Self
Creates a new client from the service Config
.
§Panics
This method will panic in the following cases:
- Retries or timeouts are enabled without a
sleep_impl
configured. - Identity caching is enabled without a
sleep_impl
andtime_source
configured. - No
behavior_version
is provided.
The panic message for each of these will have instructions on how to resolve them.
Source§impl Client
impl Client
Sourcepub fn new(sdk_config: &SdkConfig) -> Self
pub fn new(sdk_config: &SdkConfig) -> Self
Creates a new client from an SDK Config.
§Panics
- This method will panic if the
sdk_config
is missing an async sleep implementation. If you experience this panic, set thesleep_impl
on the Config passed into this function to fix it. - This method will panic if the
sdk_config
is missing an HTTP connector. If you experience this panic, set thehttp_connector
on the Config passed into this function to fix it. - This method will panic if no
BehaviorVersion
is provided. If you experience this panic, setbehavior_version
on the Config or enable thebehavior-version-latest
Cargo feature.
Trait Implementations§
Source§impl Waiters for Client
impl Waiters for Client
Source§fn wait_until_project_version_running(
&self,
) -> ProjectVersionRunningFluentBuilder
fn wait_until_project_version_running( &self, ) -> ProjectVersionRunningFluentBuilder
Source§fn wait_until_project_version_training_completed(
&self,
) -> ProjectVersionTrainingCompletedFluentBuilder
fn wait_until_project_version_training_completed( &self, ) -> ProjectVersionTrainingCompletedFluentBuilder
Auto Trait Implementations§
impl Freeze for Client
impl !RefUnwindSafe for Client
impl Send for Client
impl Sync for Client
impl Unpin for Client
impl !UnwindSafe for Client
Blanket Implementations§
Source§impl<T> BorrowMut<T> for Twhere
T: ?Sized,
impl<T> BorrowMut<T> for Twhere
T: ?Sized,
Source§fn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
Source§impl<T> CloneToUninit for Twhere
T: Clone,
impl<T> CloneToUninit for Twhere
T: Clone,
Source§impl<T> Instrument for T
impl<T> Instrument for T
Source§fn instrument(self, span: Span) -> Instrumented<Self>
fn instrument(self, span: Span) -> Instrumented<Self>
Source§fn in_current_span(self) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
Source§impl<T> IntoEither for T
impl<T> IntoEither for T
Source§fn into_either(self, into_left: bool) -> Either<Self, Self>
fn into_either(self, into_left: bool) -> Either<Self, Self>
self
into a Left
variant of Either<Self, Self>
if into_left
is true
.
Converts self
into a Right
variant of Either<Self, Self>
otherwise. Read moreSource§fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
self
into a Left
variant of Either<Self, Self>
if into_left(&self)
returns true
.
Converts self
into a Right
variant of Either<Self, Self>
otherwise. Read moreSource§impl<T> Paint for Twhere
T: ?Sized,
impl<T> Paint for Twhere
T: ?Sized,
Source§fn fg(&self, value: Color) -> Painted<&T>
fn fg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self
with the foreground set to
value
.
This method should be used rarely. Instead, prefer to use color-specific
builder methods like red()
and
green()
, which have the same functionality but are
pithier.
§Example
Set foreground color to white using fg()
:
use yansi::{Paint, Color};
painted.fg(Color::White);
Set foreground color to white using white()
.
use yansi::Paint;
painted.white();
Source§fn bright_black(&self) -> Painted<&T>
fn bright_black(&self) -> Painted<&T>
Source§fn bright_red(&self) -> Painted<&T>
fn bright_red(&self) -> Painted<&T>
Source§fn bright_green(&self) -> Painted<&T>
fn bright_green(&self) -> Painted<&T>
Source§fn bright_yellow(&self) -> Painted<&T>
fn bright_yellow(&self) -> Painted<&T>
Source§fn bright_blue(&self) -> Painted<&T>
fn bright_blue(&self) -> Painted<&T>
Source§fn bright_magenta(&self) -> Painted<&T>
fn bright_magenta(&self) -> Painted<&T>
Source§fn bright_cyan(&self) -> Painted<&T>
fn bright_cyan(&self) -> Painted<&T>
Source§fn bright_white(&self) -> Painted<&T>
fn bright_white(&self) -> Painted<&T>
Source§fn bg(&self, value: Color) -> Painted<&T>
fn bg(&self, value: Color) -> Painted<&T>
Returns a styled value derived from self
with the background set to
value
.
This method should be used rarely. Instead, prefer to use color-specific
builder methods like on_red()
and
on_green()
, which have the same functionality but
are pithier.
§Example
Set background color to red using bg()
:
use yansi::{Paint, Color};
painted.bg(Color::Red);
Set background color to red using on_red()
.
use yansi::Paint;
painted.on_red();
Source§fn on_primary(&self) -> Painted<&T>
fn on_primary(&self) -> Painted<&T>
Source§fn on_magenta(&self) -> Painted<&T>
fn on_magenta(&self) -> Painted<&T>
Source§fn on_bright_black(&self) -> Painted<&T>
fn on_bright_black(&self) -> Painted<&T>
Source§fn on_bright_red(&self) -> Painted<&T>
fn on_bright_red(&self) -> Painted<&T>
Source§fn on_bright_green(&self) -> Painted<&T>
fn on_bright_green(&self) -> Painted<&T>
Source§fn on_bright_yellow(&self) -> Painted<&T>
fn on_bright_yellow(&self) -> Painted<&T>
Source§fn on_bright_blue(&self) -> Painted<&T>
fn on_bright_blue(&self) -> Painted<&T>
Source§fn on_bright_magenta(&self) -> Painted<&T>
fn on_bright_magenta(&self) -> Painted<&T>
Source§fn on_bright_cyan(&self) -> Painted<&T>
fn on_bright_cyan(&self) -> Painted<&T>
Source§fn on_bright_white(&self) -> Painted<&T>
fn on_bright_white(&self) -> Painted<&T>
Source§fn attr(&self, value: Attribute) -> Painted<&T>
fn attr(&self, value: Attribute) -> Painted<&T>
Enables the styling Attribute
value
.
This method should be used rarely. Instead, prefer to use
attribute-specific builder methods like bold()
and
underline()
, which have the same functionality
but are pithier.
§Example
Make text bold using attr()
:
use yansi::{Paint, Attribute};
painted.attr(Attribute::Bold);
Make text bold using bold()
.
use yansi::Paint;
painted.bold();
Source§fn rapid_blink(&self) -> Painted<&T>
fn rapid_blink(&self) -> Painted<&T>
Source§fn quirk(&self, value: Quirk) -> Painted<&T>
fn quirk(&self, value: Quirk) -> Painted<&T>
Enables the yansi
Quirk
value
.
This method should be used rarely. Instead, prefer to use quirk-specific
builder methods like mask()
and
wrap()
, which have the same functionality but are
pithier.
§Example
Enable wrapping using .quirk()
:
use yansi::{Paint, Quirk};
painted.quirk(Quirk::Wrap);
Enable wrapping using wrap()
.
use yansi::Paint;
painted.wrap();
Source§fn clear(&self) -> Painted<&T>
👎Deprecated since 1.0.1: renamed to resetting()
due to conflicts with Vec::clear()
.
The clear()
method will be removed in a future release.
fn clear(&self) -> Painted<&T>
resetting()
due to conflicts with Vec::clear()
.
The clear()
method will be removed in a future release.Source§fn whenever(&self, value: Condition) -> Painted<&T>
fn whenever(&self, value: Condition) -> Painted<&T>
Conditionally enable styling based on whether the Condition
value
applies. Replaces any previous condition.
See the crate level docs for more details.
§Example
Enable styling painted
only when both stdout
and stderr
are TTYs:
use yansi::{Paint, Condition};
painted.red().on_yellow().whenever(Condition::STDOUTERR_ARE_TTY);