pub struct Client { /* private fields */ }

Client for Amazon Rekognition

Client for invoking operations on Amazon Rekognition. Each operation on Amazon Rekognition is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

Examples

Constructing a client and invoking an operation

    // create a shared configuration. This can be used & shared between multiple service clients.
    let shared_config = aws_config::load_from_env().await;
    let client = aws_sdk_rekognition::Client::new(&shared_config);
    // invoke an operation
    /* let rsp = client
        .<operation_name>()
        .<param>("some value")
        .send().await; */

Constructing a client with custom configuration

    use aws_config::RetryConfig;
    let shared_config = aws_config::load_from_env().await;
    let config = aws_sdk_rekognition::config::Builder::from(&shared_config)
        .retry_config(RetryConfig::disabled())
        .build();
    let client = aws_sdk_rekognition::Client::from_conf(config);

Implementations

Creates a client with the given service configuration.

Returns the client’s configuration.

Constructs a fluent builder for the CompareFaces operation.

  • The fluent builder is configurable:
    • source_image(Image) / set_source_image(Option<Image>):

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

    • target_image(Image) / set_target_image(Option<Image>):

      The target image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

    • similarity_threshold(f32) / set_similarity_threshold(Option<f32>):

      The minimum level of confidence in the face matches that a match must meet to be included in the FaceMatches array.

    • quality_filter(QualityFilter) / set_quality_filter(Option<QualityFilter>):

      A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t compared. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE.

      To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.

  • On success, responds with CompareFacesOutput with field(s):
    • source_image_face(Option<ComparedSourceImageFace>):

      The face in the source image that was used for comparison.

    • face_matches(Option<Vec<CompareFacesMatch>>):

      An array of faces in the target image that match the source image face. Each CompareFacesMatch object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity score for the face in the bounding box and the face in the source image.

    • unmatched_faces(Option<Vec<ComparedFace>>):

      An array of faces in the target image that did not match the source image face.

    • source_image_orientation_correction(Option<OrientationCorrection>):

      The value of SourceImageOrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

    • target_image_orientation_correction(Option<OrientationCorrection>):

      The value of TargetImageOrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

  • On failure, responds with SdkError<CompareFacesError>
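
As an illustrative sketch (not taken from the crate's own examples), the snippet below compares a source face against a target image stored in S3. The bucket and object keys are placeholders, and it assumes the model types live under aws_sdk_rekognition::model and that output accessors return Option-wrapped references, as the field listing above suggests.

    use aws_sdk_rekognition::model::{Image, QualityFilter, S3Object};

    async fn compare_faces_example(
        client: &aws_sdk_rekognition::Client,
    ) -> Result<(), aws_sdk_rekognition::Error> {
        // Placeholder bucket/keys; both images must be reachable by Rekognition.
        let s3_image = |key: &str| {
            Image::builder()
                .s3_object(S3Object::builder().bucket("my-bucket").name(key).build())
                .build()
        };
        let rsp = client
            .compare_faces()
            .source_image(s3_image("source.jpg"))
            .target_image(s3_image("target.jpg"))
            .similarity_threshold(80.0)
            .quality_filter(QualityFilter::Auto)
            .send()
            .await?;
        // Accessors are assumed to return Option<&[_]> in this SDK version.
        for m in rsp.face_matches().unwrap_or_default() {
            println!("match similarity: {:?}", m.similarity());
        }
        for f in rsp.unmatched_faces().unwrap_or_default() {
            println!("unmatched face: {:?}", f.bounding_box());
        }
        Ok(())
    }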

Constructs a fluent builder for the CreateCollection operation.

Constructs a fluent builder for the CreateDataset operation.

Constructs a fluent builder for the CreateProject operation.

Constructs a fluent builder for the CreateProjectVersion operation.

Constructs a fluent builder for the CreateStreamProcessor operation.

Constructs a fluent builder for the DeleteCollection operation.

Constructs a fluent builder for the DeleteDataset operation.

Constructs a fluent builder for the DeleteFaces operation.

Constructs a fluent builder for the DeleteProject operation.

Constructs a fluent builder for the DeleteProjectVersion operation.

Constructs a fluent builder for the DeleteStreamProcessor operation.

Constructs a fluent builder for the DescribeCollection operation.

Constructs a fluent builder for the DescribeDataset operation.

Constructs a fluent builder for the DescribeProjects operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the DescribeProjectVersions operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the DescribeStreamProcessor operation.

Constructs a fluent builder for the DetectCustomLabels operation.

  • The fluent builder is configurable:
    • project_version_arn(impl Into<String>) / set_project_version_arn(Option<String>):

      The ARN of the model version that you want to use.

    • image(Image) / set_image(Option<Image>):

      Provides the input image either as bytes or an S3 object.

      You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

      For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

      You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

      The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

      If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

      For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see Resource Based Policies in the Amazon Rekognition Developer Guide.

    • max_results(i32) / set_max_results(Option<i32>):

      Maximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest.

    • min_confidence(f32) / set_min_confidence(Option<f32>):

      Specifies the minimum confidence level for the labels to return. DetectCustomLabels doesn’t return any labels with a confidence value that’s lower than this specified value. If you specify a value of 0, DetectCustomLabels returns all labels, regardless of the assumed threshold applied to each label. If you don’t specify a value for MinConfidence, DetectCustomLabels returns labels based on the assumed threshold of each label.

  • On success, responds with DetectCustomLabelsOutput with field(s):
  • On failure, responds with SdkError<DetectCustomLabelsError>
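
A sketch of analyzing an S3-hosted image with a custom model; the model ARN, bucket, and key are placeholders, and custom_labels() is assumed to follow the same Option-returning accessor convention as the other outputs on this page.

    use aws_sdk_rekognition::model::{Image, S3Object};

    async fn detect_custom_labels_example(
        client: &aws_sdk_rekognition::Client,
        model_arn: &str, // placeholder: ARN of a running model version
    ) -> Result<(), aws_sdk_rekognition::Error> {
        let rsp = client
            .detect_custom_labels()
            .project_version_arn(model_arn)
            .image(
                Image::builder()
                    .s3_object(S3Object::builder().bucket("my-bucket").name("photo.jpg").build())
                    .build(),
            )
            .max_results(10)
            .min_confidence(70.0)
            .send()
            .await?;
        for label in rsp.custom_labels().unwrap_or_default() {
            println!("{:?} ({:?})", label.name(), label.confidence());
        }
        Ok(())
    }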

Constructs a fluent builder for the DetectFaces operation.

  • The fluent builder is configurable:
    • image(Image) / set_image(Option<Image>):

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

    • attributes(Vec<Attribute>) / set_attributes(Option<Vec<Attribute>>):

      An array of facial attributes you want to be returned. This can be the default list of attributes or all attributes. If you don’t specify a value for Attributes or if you specify [“DEFAULT”], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide [“ALL”], all facial attributes are returned, but the operation takes longer to complete.

      If you provide both, [“ALL”, “DEFAULT”], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

  • On success, responds with DetectFacesOutput with field(s):
    • face_details(Option<Vec<FaceDetail>>):

      Details of each face found in the image.

    • orientation_correction(Option<OrientationCorrection>):

      The value of OrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

  • On failure, responds with SdkError<DetectFacesError>
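
A sketch of detecting faces with the full attribute set; the bucket and key are placeholders.

    use aws_sdk_rekognition::model::{Attribute, Image, S3Object};

    async fn detect_faces_example(
        client: &aws_sdk_rekognition::Client,
    ) -> Result<(), aws_sdk_rekognition::Error> {
        let rsp = client
            .detect_faces()
            .image(
                Image::builder()
                    .s3_object(S3Object::builder().bucket("my-bucket").name("group-photo.jpg").build())
                    .build(),
            )
            // Request all facial attributes; omit this line for the DEFAULT subset.
            .attributes(Attribute::All)
            .send()
            .await?;
        for face in rsp.face_details().unwrap_or_default() {
            println!("face at {:?}, confidence {:?}", face.bounding_box(), face.confidence());
        }
        Ok(())
    }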

Constructs a fluent builder for the DetectLabels operation.

  • The fluent builder is configurable:
    • image(Image) / set_image(Option<Image>):

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 Bucket do not need to be base64-encoded.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

    • max_labels(i32) / set_max_labels(Option<i32>):

      Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.

    • min_confidence(f32) / set_min_confidence(Option<f32>):

      Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with confidence lower than this specified value.

      If MinConfidence is not specified, the operation returns labels with confidence values greater than or equal to 55 percent.

  • On success, responds with DetectLabelsOutput with field(s):
    • labels(Option<Vec<Label>>):

      An array of labels for the real-world objects detected.

    • orientation_correction(Option<OrientationCorrection>):

      The value of OrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

    • label_model_version(Option<String>):

      Version number of the label detection model that was used to detect labels.

  • On failure, responds with SdkError<DetectLabelsError>
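
A sketch of detecting labels in an S3-hosted image with a cap on results and a confidence floor; the bucket and key are placeholders.

    use aws_sdk_rekognition::model::{Image, S3Object};

    async fn detect_labels_example(
        client: &aws_sdk_rekognition::Client,
    ) -> Result<(), aws_sdk_rekognition::Error> {
        let rsp = client
            .detect_labels()
            .image(
                Image::builder()
                    .s3_object(S3Object::builder().bucket("my-bucket").name("street.jpg").build())
                    .build(),
            )
            .max_labels(10)
            .min_confidence(75.0)
            .send()
            .await?;
        for label in rsp.labels().unwrap_or_default() {
            println!("{:?}: {:?}", label.name(), label.confidence());
        }
        Ok(())
    }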

Constructs a fluent builder for the DetectModerationLabels operation.

Constructs a fluent builder for the DetectProtectiveEquipment operation.

Constructs a fluent builder for the DetectText operation.

Constructs a fluent builder for the DistributeDatasetEntries operation.

Constructs a fluent builder for the GetCelebrityInfo operation.

Constructs a fluent builder for the GetCelebrityRecognition operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the GetContentModeration operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the GetFaceDetection operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the GetFaceSearch operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
  • On success, responds with GetFaceSearchOutput with field(s):
    • job_status(Option<VideoJobStatus>):

      The current status of the face search job.

    • status_message(Option<String>):

      If the job fails, StatusMessage provides a descriptive error message.

    • next_token(Option<String>):

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.

    • video_metadata(Option<VideoMetadata>):

      Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

    • persons(Option<Vec<PersonMatch>>):

      An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.

  • On failure, responds with SdkError<GetFaceSearchError>
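
A hedged sketch of retrieving the result of a face search job; the JobId comes from a prior StartFaceSearch call, and the accessor shapes follow the field listing above.

    use aws_sdk_rekognition::model::VideoJobStatus;

    async fn get_face_search_example(
        client: &aws_sdk_rekognition::Client,
        job_id: &str, // JobId returned by a prior StartFaceSearch call
    ) -> Result<(), aws_sdk_rekognition::Error> {
        let rsp = client.get_face_search().job_id(job_id).send().await?;
        match rsp.job_status() {
            Some(VideoJobStatus::Succeeded) => {
                for person in rsp.persons().unwrap_or_default() {
                    println!("matched at {:?} ms: {:?}", person.timestamp(), person.person());
                }
            }
            other => println!("job not finished: {:?} ({:?})", other, rsp.status_message()),
        }
        Ok(())
    }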

Constructs a fluent builder for the GetLabelDetection operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the GetPersonTracking operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the GetSegmentDetection operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
  • On success, responds with GetSegmentDetectionOutput with field(s):
    • job_status(Option<VideoJobStatus>):

      Current status of the segment detection job.

    • status_message(Option<String>):

      If the job fails, StatusMessage provides a descriptive error message.

    • video_metadata(Option<Vec<VideoMetadata>>):

      Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.

    • audio_metadata(Option<Vec<AudioMetadata>>):

      An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.

    • next_token(Option<String>):

      If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of segments.

    • segments(Option<Vec<SegmentDetection>>):

      An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.

    • selected_segment_types(Option<Vec<SegmentTypeInfo>>):

      An array containing the segment types requested in the call to StartSegmentDetection.

  • On failure, responds with SdkError<GetSegmentDetectionError>
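
GetSegmentDetection responses are paginated; besides into_paginator(), NextToken can be followed by hand. A sketch under the assumption that segments() and next_token() return Option-wrapped references, as the field listing above indicates.

    use aws_sdk_rekognition::model::SegmentDetection;

    async fn all_segments(
        client: &aws_sdk_rekognition::Client,
        job_id: &str, // JobId returned by a prior StartSegmentDetection call
    ) -> Result<Vec<SegmentDetection>, aws_sdk_rekognition::Error> {
        let mut segments = Vec::new();
        let mut next_token: Option<String> = None;
        loop {
            let rsp = client
                .get_segment_detection()
                .job_id(job_id)
                .set_next_token(next_token.clone())
                .send()
                .await?;
            segments.extend_from_slice(rsp.segments().unwrap_or_default());
            match rsp.next_token() {
                Some(token) => next_token = Some(token.to_string()),
                None => break,
            }
        }
        Ok(segments)
    }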

Constructs a fluent builder for the GetTextDetection operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the IndexFaces operation.

  • The fluent builder is configurable:
    • collection_id(impl Into<String>) / set_collection_id(Option<String>):

      The ID of an existing collection to which you want to add the faces that are detected in the input images.

    • image(Image) / set_image(Option<Image>):

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes isn’t supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

    • external_image_id(impl Into<String>) / set_external_image_id(Option<String>):

      The ID you want to assign to all the faces detected in the image.

    • detection_attributes(Vec<Attribute>) / set_detection_attributes(Option<Vec<Attribute>>):

      An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don’t specify a value for Attributes or if you specify [“DEFAULT”], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide [“ALL”], all facial attributes are returned, but the operation takes longer to complete.

      If you provide both, [“ALL”, “DEFAULT”], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

    • max_faces(i32) / set_max_faces(Option<i32>):

      The maximum number of faces to index. The value of MaxFaces must be greater than or equal to 1. IndexFaces returns no more than 100 detected faces in an image, even if you specify a larger value for MaxFaces.

      If IndexFaces detects more faces than the value of MaxFaces, the faces with the lowest quality are filtered out first. If there are still more faces than the value of MaxFaces, the faces with the smallest bounding boxes are filtered out (up to the number that’s needed to satisfy the value of MaxFaces). Information about the unindexed faces is available in the UnindexedFaces array.

      The faces that are returned by IndexFaces are sorted by the largest face bounding box size to the smallest size, in descending order.

      MaxFaces can be used with a collection associated with any version of the face model.

    • quality_filter(QualityFilter) / set_quality_filter(Option<QualityFilter>):

      A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t indexed. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The default value is AUTO. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specify NONE, no filtering is performed.

      To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.

  • On success, responds with IndexFacesOutput with field(s):
    • face_records(Option<Vec<FaceRecord>>):

      An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

    • orientation_correction(Option<OrientationCorrection>):

      If your collection is associated with a face detection model that’s later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

      If your collection is associated with a face detection model that’s version 3.0 or earlier, the following applies:

      • If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. The value of OrientationCorrection is null.

      • If the image doesn’t contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn’t perform image correction for images. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

      Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

    • face_model_version(Option<String>):

      Latest face model being used with the collection. For more information, see Model versioning.

    • unindexed_faces(Option<Vec<UnindexedFace>>):

      An array of faces that were detected in the image but weren’t indexed. They weren’t indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

  • On failure, responds with SdkError<IndexFacesError>
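
A sketch of adding the faces from one S3 image to an existing collection; the collection, bucket, and key names are placeholders.

    use aws_sdk_rekognition::model::{Image, QualityFilter, S3Object};

    async fn index_faces_example(
        client: &aws_sdk_rekognition::Client,
    ) -> Result<(), aws_sdk_rekognition::Error> {
        let rsp = client
            .index_faces()
            .collection_id("my-collection") // placeholder: an existing collection
            .image(
                Image::builder()
                    .s3_object(S3Object::builder().bucket("my-bucket").name("team.jpg").build())
                    .build(),
            )
            .external_image_id("team.jpg")
            .max_faces(5)
            .quality_filter(QualityFilter::Auto)
            .send()
            .await?;
        println!("indexed {} face(s)", rsp.face_records().unwrap_or_default().len());
        for unindexed in rsp.unindexed_faces().unwrap_or_default() {
            println!("skipped: {:?}", unindexed.reasons());
        }
        Ok(())
    }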

Constructs a fluent builder for the ListCollections operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the ListDatasetEntries operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the ListDatasetLabels operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the ListFaces operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the ListStreamProcessors operation. This operation supports pagination; see into_paginator().
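
Each of the List* operations above can also be driven through its paginator. A minimal sketch for ListCollections, assuming the stream returned by send() is consumed with tokio_stream::StreamExt (newer SDK releases expose an inherent next() on the pagination stream instead):

    use tokio_stream::StreamExt; // assumption: older paginator streams; newer ones have an inherent next()

    async fn list_all_collections(
        client: &aws_sdk_rekognition::Client,
    ) -> Result<Vec<String>, aws_sdk_rekognition::Error> {
        let mut ids = Vec::new();
        // into_paginator() follows NextToken automatically, yielding one output page at a time.
        let mut pages = client.list_collections().into_paginator().send();
        while let Some(page) = pages.next().await {
            let page = page?;
            ids.extend(page.collection_ids().unwrap_or_default().iter().cloned());
        }
        Ok(ids)
    }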

Constructs a fluent builder for the ListTagsForResource operation.

Constructs a fluent builder for the RecognizeCelebrities operation.

  • The fluent builder is configurable:
    • image(Image) / set_image(Option<Image>):

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.

  • On success, responds with RecognizeCelebritiesOutput with field(s):
    • celebrity_faces(Option<Vec<Celebrity>>):

      Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 64 celebrities in an image. Each celebrity object includes the following attributes: Face, Confidence, Emotions, Landmarks, Pose, Quality, Smile, Id, KnownGender, MatchConfidence, Name, Urls.

    • unrecognized_faces(Option<Vec<ComparedFace>>):

      Details about each unrecognized face in the image.

    • orientation_correction(Option<OrientationCorrection>):

      Support for estimating image orientation using the OrientationCorrection field has ceased as of August 2021. Any returned values for this field included in an API response will always be NULL.

      The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct the orientation. The bounding box coordinates returned in CelebrityFaces and UnrecognizedFaces represent face locations before the image orientation is corrected.

      If the input image is in .jpeg format, it might contain exchangeable image (Exif) metadata that includes the image’s orientation. If so, and the Exif metadata for the input image populates the orientation field, the value of OrientationCorrection is null. The CelebrityFaces and UnrecognizedFaces bounding box coordinates represent face locations after Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

  • On failure, responds with SdkError<RecognizeCelebritiesError>
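
A sketch of recognizing celebrities in an S3-hosted image; the bucket and key are placeholders.

    use aws_sdk_rekognition::model::{Image, S3Object};

    async fn recognize_celebrities_example(
        client: &aws_sdk_rekognition::Client,
    ) -> Result<(), aws_sdk_rekognition::Error> {
        let rsp = client
            .recognize_celebrities()
            .image(
                Image::builder()
                    .s3_object(S3Object::builder().bucket("my-bucket").name("red-carpet.jpg").build())
                    .build(),
            )
            .send()
            .await?;
        for celeb in rsp.celebrity_faces().unwrap_or_default() {
            println!("{:?} (match confidence {:?})", celeb.name(), celeb.match_confidence());
        }
        println!(
            "unrecognized faces: {}",
            rsp.unrecognized_faces().unwrap_or_default().len()
        );
        Ok(())
    }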

Constructs a fluent builder for the SearchFaces operation.

Constructs a fluent builder for the SearchFacesByImage operation.

Constructs a fluent builder for the StartCelebrityRecognition operation.

Constructs a fluent builder for the StartContentModeration operation.

  • The fluent builder is configurable:
    • video(Video) / set_video(Option<Video>):

      The video in which you want to detect inappropriate, unwanted, or offensive content. The video must be stored in an Amazon S3 bucket.

    • min_confidence(f32) / set_min_confidence(Option<f32>):

      Specifies the minimum confidence that Amazon Rekognition must have in order to return a moderated content label. Confidence represents how certain Amazon Rekognition is that the moderated content is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition doesn’t return any moderated content labels with a confidence level lower than this specified value. If you don’t specify MinConfidence, GetContentModeration returns labels with confidence values greater than or equal to 50 percent.

    • client_request_token(impl Into<String>) / set_client_request_token(Option<String>):

      Idempotent token used to identify the start request. If you use the same token with multiple StartContentModeration requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once.

    • notification_channel(NotificationChannel) / set_notification_channel(Option<NotificationChannel>):

      The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the content analysis to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.

    • job_tag(impl Into<String>) / set_job_tag(Option<String>):

      An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use JobTag to group related jobs and identify them in the completion notification.

  • On success, responds with StartContentModerationOutput with field(s):
    • job_id(Option<String>):

      The identifier for the content analysis job. Use JobId to identify the job in a subsequent call to GetContentModeration.

  • On failure, responds with SdkError<StartContentModerationError>
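
A sketch of starting a moderation job for a video in S3; the bucket, key, token, and tag values are placeholders, and the module path aws_sdk_rekognition::model matches the SDK version this page documents.

    use aws_sdk_rekognition::model::{S3Object, Video};

    async fn start_content_moderation_example(
        client: &aws_sdk_rekognition::Client,
    ) -> Result<Option<String>, aws_sdk_rekognition::Error> {
        let rsp = client
            .start_content_moderation()
            .video(
                Video::builder()
                    .s3_object(S3Object::builder().bucket("my-bucket").name("clip.mp4").build())
                    .build(),
            )
            .min_confidence(60.0)
            // Reusing the same token returns the same JobId instead of starting a new job.
            .client_request_token("moderate-clip-mp4-0001")
            .job_tag("nightly-moderation")
            .send()
            .await?;
        // Poll GetContentModeration with this JobId (or subscribe via an SNS notification channel).
        Ok(rsp.job_id().map(|s| s.to_string()))
    }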

Constructs a fluent builder for the StartFaceDetection operation.

Constructs a fluent builder for the StartFaceSearch operation.

Constructs a fluent builder for the StartLabelDetection operation.

  • The fluent builder is configurable:
    • video(Video) / set_video(Option<Video>):

      The video in which you want to detect labels. The video must be stored in an Amazon S3 bucket.

    • client_request_token(impl Into<String>) / set_client_request_token(Option<String>):

      Idempotent token used to identify the start request. If you use the same token with multiple StartLabelDetection requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once.

    • min_confidence(f32) / set_min_confidence(Option<f32>):

      Specifies the minimum confidence that Amazon Rekognition Video must have in order to return a detected label. Confidence represents how certain Amazon Rekognition is that a label is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition Video doesn’t return any labels with a confidence level lower than this specified value.

      If you don’t specify MinConfidence, the operation returns labels with confidence values greater than or equal to 50 percent.

    • notification_channel(NotificationChannel) / set_notification_channel(Option<NotificationChannel>):

      The Amazon SNS topic ARN you want Amazon Rekognition Video to publish the completion status of the label detection operation to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy.

    • job_tag(impl Into<String>) / set_job_tag(Option<String>):

      An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use JobTag to group related jobs and identify them in the completion notification.

  • On success, responds with StartLabelDetectionOutput with field(s):
    • job_id(Option<String>):

      The identifier for the label detection job. Use JobId to identify the job in a subsequent call to GetLabelDetection.

  • On failure, responds with SdkError<StartLabelDetectionError>
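
A sketch of starting a label detection job with an SNS completion notification; the bucket, key, topic ARN, and role ARN are placeholders.

    use aws_sdk_rekognition::model::{NotificationChannel, S3Object, Video};

    async fn start_label_detection_example(
        client: &aws_sdk_rekognition::Client,
        topic_arn: &str, // placeholder: SNS topic whose name begins with AmazonRekognition
        role_arn: &str,  // placeholder: IAM role Rekognition Video can assume to publish to the topic
    ) -> Result<Option<String>, aws_sdk_rekognition::Error> {
        let rsp = client
            .start_label_detection()
            .video(
                Video::builder()
                    .s3_object(S3Object::builder().bucket("my-bucket").name("clip.mp4").build())
                    .build(),
            )
            .min_confidence(50.0)
            .client_request_token("label-clip-mp4-0001")
            .notification_channel(
                NotificationChannel::builder()
                    .sns_topic_arn(topic_arn)
                    .role_arn(role_arn)
                    .build(),
            )
            .job_tag("nightly-labels")
            .send()
            .await?;
        // Use the JobId with GetLabelDetection once the completion notification arrives.
        Ok(rsp.job_id().map(|s| s.to_string()))
    }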

Constructs a fluent builder for the StartPersonTracking operation.

Constructs a fluent builder for the StartProjectVersion operation.

Constructs a fluent builder for the StartSegmentDetection operation.

Constructs a fluent builder for the StartStreamProcessor operation.

Constructs a fluent builder for the StartTextDetection operation.

Constructs a fluent builder for the StopProjectVersion operation.

Constructs a fluent builder for the StopStreamProcessor operation.

Constructs a fluent builder for the TagResource operation.

Constructs a fluent builder for the UntagResource operation.

Constructs a fluent builder for the UpdateDatasetEntries operation.

Creates a client with the given service config and connector override.

Creates a new client from a shared config.

Creates a new client from the service Config.
