Struct aws_sdk_rekognition::client::Client

pub struct Client { /* private fields */ }

Client for Amazon Rekognition

Client for invoking operations on Amazon Rekognition. Each operation on Amazon Rekognition is a method on this struct. .send() MUST be invoked on the generated operations to dispatch the request to the service.

§Constructing a Client

A Config is required to construct a client. For most use cases, the aws-config crate should be used to automatically resolve this config using aws_config::load_from_env(), since this will resolve an SdkConfig which can be shared across multiple different AWS SDK clients. This config resolution process can be customized by calling aws_config::from_env() instead, which returns a ConfigLoader that uses the builder pattern to customize the default config.

In the simplest case, creating a client looks as follows:

let config = aws_config::load_from_env().await;
let client = aws_sdk_rekognition::Client::new(&config);

Occasionally, SDKs may have additional service-specific values that can be set on the Config that are absent from SdkConfig, or slightly different settings for a specific client may be desired. The Config struct implements From<&SdkConfig>, so setting these specific settings can be done as follows:

let sdk_config = ::aws_config::load_from_env().await;
let config = aws_sdk_rekognition::config::Builder::from(&sdk_config)
    .some_service_specific_setting("value")
    .build();

See the aws-config docs and Config for more information on customizing configuration.

Note: Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
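
Since Client implements Clone (see the trait implementations below), a common pattern is to construct the client once and hand clones to tasks; in the current SDK design a clone shares the underlying handle rather than re-initializing connections. A minimal sketch, assuming a Tokio runtime:

use aws_config;

let config = aws_config::load_from_env().await;
let client = aws_sdk_rekognition::Client::new(&config);

// Cloning the client is intended to be cheap; the clone shares configuration
// and connections with the original instead of building new ones.
let task_client = client.clone();
tokio::spawn(async move {
    let _collections = task_client.list_collections().send().await;
});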

§Using the Client

A client has a function for every operation that can be performed by the service. For example, the AssociateFaces operation has a Client::associate_faces function, which returns a builder for that operation. The fluent builder ultimately has a send() function that returns an async future that returns a result, as illustrated below:

let result = client.associate_faces()
    .collection_id("example")
    .send()
    .await;

The underlying HTTP requests made by this client can be modified with the customize_operation function on the fluent builder. See the customize module for more information.

§Waiters

This client provides wait_until methods behind the Waiters trait. To use them, simply import the trait, and then call one of the wait_until methods. This will return a waiter fluent builder that takes various parameters, which are documented on the builder type. Once parameters have been provided, the wait method can be called to initiate waiting.

For example, if there were a wait_until_thing method, it could look like:

let result = client.wait_until_thing()
    .thing_id("someId")
    .wait(Duration::from_secs(120))
    .await;
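
For a concrete example with this client, the Waiters trait (listed under the trait implementations below) exposes wait_until_project_version_running. The sketch below assumes the trait lives in the client module and that the waiter builder mirrors the DescribeProjectVersions inputs (project_arn, version_names); the ARN and version name are placeholders:

use std::time::Duration;
use aws_sdk_rekognition::client::Waiters;

// Wait up to ten minutes for a Custom Labels model version to reach RUNNING.
let result = client.wait_until_project_version_running()
    .project_arn("arn:aws:rekognition:us-east-1:111122223333:project/example-project/1234567890123")
    .version_names("example-project.2024-01-01T00.00.00")
    .wait(Duration::from_secs(600))
    .await;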

Implementations§

source§

impl Client

source

pub fn associate_faces(&self) -> AssociateFacesFluentBuilder

Constructs a fluent builder for the AssociateFaces operation.

source§

impl Client

source

pub fn compare_faces(&self) -> CompareFacesFluentBuilder

Constructs a fluent builder for the CompareFaces operation.

  • The fluent builder is configurable:
    • source_image(Image) / set_source_image(Option<Image>):
      required: true

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.


    • target_image(Image) / set_target_image(Option<Image>):
      required: true

      The target image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.


    • similarity_threshold(f32) / set_similarity_threshold(Option<f32>):
      required: false

      The minimum level of confidence in the face matches that a match must meet to be included in the FaceMatches array.


    • quality_filter(QualityFilter) / set_quality_filter(Option<QualityFilter>):
      required: false

      A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t compared. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE.

      To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.


  • On success, responds with CompareFacesOutput with field(s):
    • source_image_face(Option<ComparedSourceImageFace>):

      The face in the source image that was used for comparison.

    • face_matches(Option<Vec::<CompareFacesMatch>>):

      An array of faces in the target image that match the source image face. Each CompareFacesMatch object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity score for the face in the bounding box and the face in the source image.

    • unmatched_faces(Option<Vec::<ComparedFace>>):

      An array of faces in the target image that did not match the source image face.

    • source_image_orientation_correction(Option<OrientationCorrection>):

      The value of SourceImageOrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

    • target_image_orientation_correction(Option<OrientationCorrection>):

      The value of TargetImageOrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

  • On failure, responds with SdkError<CompareFacesError>
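
For example, a minimal CompareFaces call with both images stored in S3 might look like the sketch below. The bucket and object keys are placeholders, and the Image and S3Object builders come from this crate's types module:

use aws_sdk_rekognition::types::{Image, S3Object};

// Both objects must live in the same region as the Rekognition client.
let source = Image::builder()
    .s3_object(S3Object::builder().bucket("my-bucket").name("source.jpg").build())
    .build();
let target = Image::builder()
    .s3_object(S3Object::builder().bucket("my-bucket").name("target.jpg").build())
    .build();

let result = client.compare_faces()
    .source_image(source)
    .target_image(target)
    .similarity_threshold(90.0)
    .send()
    .await;
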
source§

impl Client

source

pub fn copy_project_version(&self) -> CopyProjectVersionFluentBuilder

Constructs a fluent builder for the CopyProjectVersion operation.

source§

impl Client

source

pub fn create_collection(&self) -> CreateCollectionFluentBuilder

Constructs a fluent builder for the CreateCollection operation.

source§

impl Client

source

pub fn create_dataset(&self) -> CreateDatasetFluentBuilder

Constructs a fluent builder for the CreateDataset operation.

source§

impl Client

source

pub fn create_face_liveness_session( &self ) -> CreateFaceLivenessSessionFluentBuilder

Constructs a fluent builder for the CreateFaceLivenessSession operation.

source§

impl Client

source

pub fn create_project(&self) -> CreateProjectFluentBuilder

Constructs a fluent builder for the CreateProject operation.

source§

impl Client

source

pub fn create_project_version(&self) -> CreateProjectVersionFluentBuilder

Constructs a fluent builder for the CreateProjectVersion operation.

source§

impl Client

source

pub fn create_stream_processor(&self) -> CreateStreamProcessorFluentBuilder

Constructs a fluent builder for the CreateStreamProcessor operation.

source§

impl Client

source

pub fn create_user(&self) -> CreateUserFluentBuilder

Constructs a fluent builder for the CreateUser operation.

source§

impl Client

source

pub fn delete_collection(&self) -> DeleteCollectionFluentBuilder

Constructs a fluent builder for the DeleteCollection operation.

source§

impl Client

source

pub fn delete_dataset(&self) -> DeleteDatasetFluentBuilder

Constructs a fluent builder for the DeleteDataset operation.

source§

impl Client

source

pub fn delete_faces(&self) -> DeleteFacesFluentBuilder

Constructs a fluent builder for the DeleteFaces operation.

source§

impl Client

source

pub fn delete_project(&self) -> DeleteProjectFluentBuilder

Constructs a fluent builder for the DeleteProject operation.

source§

impl Client

source

pub fn delete_project_policy(&self) -> DeleteProjectPolicyFluentBuilder

Constructs a fluent builder for the DeleteProjectPolicy operation.

source§

impl Client

source

pub fn delete_project_version(&self) -> DeleteProjectVersionFluentBuilder

Constructs a fluent builder for the DeleteProjectVersion operation.

source§

impl Client

source

pub fn delete_stream_processor(&self) -> DeleteStreamProcessorFluentBuilder

Constructs a fluent builder for the DeleteStreamProcessor operation.

source§

impl Client

source

pub fn delete_user(&self) -> DeleteUserFluentBuilder

Constructs a fluent builder for the DeleteUser operation.

source§

impl Client

source

pub fn describe_collection(&self) -> DescribeCollectionFluentBuilder

Constructs a fluent builder for the DescribeCollection operation.

source§

impl Client

source

pub fn describe_dataset(&self) -> DescribeDatasetFluentBuilder

Constructs a fluent builder for the DescribeDataset operation.

source§

impl Client

source

pub fn describe_project_versions(&self) -> DescribeProjectVersionsFluentBuilder

Constructs a fluent builder for the DescribeProjectVersions operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn describe_projects(&self) -> DescribeProjectsFluentBuilder

Constructs a fluent builder for the DescribeProjects operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn describe_stream_processor(&self) -> DescribeStreamProcessorFluentBuilder

Constructs a fluent builder for the DescribeStreamProcessor operation.

source§

impl Client

source

pub fn detect_custom_labels(&self) -> DetectCustomLabelsFluentBuilder

Constructs a fluent builder for the DetectCustomLabels operation.

  • The fluent builder is configurable:
    • project_version_arn(impl Into<String>) / set_project_version_arn(Option<String>):
      required: true

      The ARN of the model version that you want to use. Only models associated with Custom Labels projects are accepted by the operation. If a provided ARN refers to a model version associated with a project for a different feature type, then an InvalidParameterException is returned.


    • image(Image) / set_image(Option<Image>):
      required: true

      Provides the input image either as bytes or an S3 object.

      You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

      For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

      You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

      The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

      If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

      For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      Maximum number of results you want the service to return in the response. The service returns the specified number of highest confidence labels ranked from highest confidence to lowest.


    • min_confidence(f32) / set_min_confidence(Option<f32>):
      required: false

      Specifies the minimum confidence level for the labels to return. DetectCustomLabels doesn’t return any labels with a confidence value that’s lower than this specified value. If you specify a value of 0, DetectCustomLabels returns all labels, regardless of the assumed threshold applied to each label. If you don’t specify a value for MinConfidence, DetectCustomLabels returns labels based on the assumed threshold of each label.


  • On success, responds with DetectCustomLabelsOutput with field(s):
  • On failure, responds with SdkError<DetectCustomLabelsError>
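
As a sketch of the Bytes path described above, the snippet below loads a local file and passes it to DetectCustomLabels; the file path and model-version ARN are placeholders:

use aws_sdk_rekognition::primitives::Blob;
use aws_sdk_rekognition::types::Image;

// Read the image from the local file system and pass it as bytes; the SDK
// handles any encoding required on the wire.
let bytes = std::fs::read("photo.jpg").expect("failed to read photo.jpg");
let result = client.detect_custom_labels()
    .project_version_arn("arn:aws:rekognition:us-east-1:111122223333:project/example-project/version/example.1/1234567890123")
    .image(Image::builder().bytes(Blob::new(bytes)).build())
    .min_confidence(70.0)
    .max_results(10)
    .send()
    .await;
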
source§

impl Client

source

pub fn detect_faces(&self) -> DetectFacesFluentBuilder

Constructs a fluent builder for the DetectFaces operation.

  • The fluent builder is configurable:
    • image(Image) / set_image(Option<Image>):
      required: true

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.


    • attributes(Attribute) / set_attributes(Option<Vec::<Attribute>>):
      required: false

      An array of facial attributes you want to be returned. A DEFAULT subset of facial attributes - BoundingBox, Confidence, Pose, Quality, and Landmarks - is always returned. You can request specific facial attributes (in addition to the default list) by using [“DEFAULT”, “FACE_OCCLUDED”] or just [“FACE_OCCLUDED”]. You can request all facial attributes by using [“ALL”]. Requesting more attributes may increase response time.

      If you provide both, [“ALL”, “DEFAULT”], the service uses a logical “AND” operator to determine which attributes to return (in this case, all attributes).

      Note that while the FaceOccluded and EyeDirection attributes are supported when using DetectFaces, they aren’t supported when analyzing videos with StartFaceDetection and GetFaceDetection.


  • On success, responds with DetectFacesOutput with field(s):
    • face_details(Option<Vec::<FaceDetail>>):

      Details of each face found in the image.

    • orientation_correction(Option<OrientationCorrection>):

      The value of OrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

  • On failure, responds with SdkError<DetectFacesError>
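
For example, requesting all facial attributes for an S3-hosted image might look like the sketch below (bucket and key are placeholders):

use aws_sdk_rekognition::types::{Attribute, Image, S3Object};

let result = client.detect_faces()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("group-photo.jpg").build())
        .build())
    .attributes(Attribute::All)
    .send()
    .await;
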
source§

impl Client

source

pub fn detect_labels(&self) -> DetectLabelsFluentBuilder

Constructs a fluent builder for the DetectLabels operation.

  • The fluent builder is configurable:
    • image(Image) / set_image(Option<Image>):
      required: true

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes is not supported. Images stored in an S3 Bucket do not need to be base64-encoded.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.


    • max_labels(i32) / set_max_labels(Option<i32>):
      required: false

      Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels. Only valid when GENERAL_LABELS is specified as a feature type in the Feature input parameter.


    • min_confidence(f32) / set_min_confidence(Option<f32>):
      required: false

      Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn’t return any labels with confidence lower than this specified value.

      If MinConfidence is not specified, the operation returns labels with confidence values greater than or equal to 55 percent. Only valid when GENERAL_LABELS is specified as a feature type in the Feature input parameter.


    • features(DetectLabelsFeatureName) / set_features(Option<Vec::<DetectLabelsFeatureName>>):
      required: false

      A list of the types of analysis to perform. Specifying GENERAL_LABELS uses the label detection feature, while specifying IMAGE_PROPERTIES returns information regarding image color and quality. If no option is specified GENERAL_LABELS is used by default.


    • settings(DetectLabelsSettings) / set_settings(Option<DetectLabelsSettings>):
      required: false

      A list of the filters to be applied to returned detected labels and image properties. Specified filters can be inclusive, exclusive, or a combination of both. Filters can be used for individual labels or label categories. The exact label names or label categories must be supplied. For a full list of labels and label categories, see Detecting labels.


  • On success, responds with DetectLabelsOutput with field(s):
    • labels(Option<Vec::<Label>>):

      An array of labels for the real-world objects detected.

    • orientation_correction(Option<OrientationCorrection>):

      The value of OrientationCorrection is always null.

      If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction. The bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

      Amazon Rekognition doesn’t perform image correction for images in .png format and .jpeg images without orientation information in the image Exif metadata. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

    • label_model_version(Option<String>):

      Version number of the label detection model that was used to detect labels.

    • image_properties(Option<DetectLabelsImageProperties>):

      Information about the properties of the input image, such as brightness, sharpness, contrast, and dominant colors.

  • On failure, responds with SdkError<DetectLabelsError>
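
As a sketch of combining these inputs, the snippet below asks for up to ten general labels with at least 75 percent confidence plus image properties; the bucket, key, and enum variant names are assumptions based on the parameter types listed above:

use aws_sdk_rekognition::types::{DetectLabelsFeatureName, Image, S3Object};

let result = client.detect_labels()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("scene.jpg").build())
        .build())
    .max_labels(10)
    .min_confidence(75.0)
    .features(DetectLabelsFeatureName::GeneralLabels)
    .features(DetectLabelsFeatureName::ImageProperties)
    .send()
    .await;
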
source§

impl Client

source

pub fn detect_moderation_labels(&self) -> DetectModerationLabelsFluentBuilder

Constructs a fluent builder for the DetectModerationLabels operation.

source§

impl Client

source

pub fn detect_protective_equipment( &self ) -> DetectProtectiveEquipmentFluentBuilder

Constructs a fluent builder for the DetectProtectiveEquipment operation.

source§

impl Client

source

pub fn detect_text(&self) -> DetectTextFluentBuilder

Constructs a fluent builder for the DetectText operation.

source§

impl Client

source

pub fn disassociate_faces(&self) -> DisassociateFacesFluentBuilder

Constructs a fluent builder for the DisassociateFaces operation.

source§

impl Client

source

pub fn distribute_dataset_entries( &self ) -> DistributeDatasetEntriesFluentBuilder

Constructs a fluent builder for the DistributeDatasetEntries operation.

source§

impl Client

source

pub fn get_celebrity_info(&self) -> GetCelebrityInfoFluentBuilder

Constructs a fluent builder for the GetCelebrityInfo operation.

source§

impl Client

source

pub fn get_celebrity_recognition(&self) -> GetCelebrityRecognitionFluentBuilder

Constructs a fluent builder for the GetCelebrityRecognition operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn get_content_moderation(&self) -> GetContentModerationFluentBuilder

Constructs a fluent builder for the GetContentModeration operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn get_face_detection(&self) -> GetFaceDetectionFluentBuilder

Constructs a fluent builder for the GetFaceDetection operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
  • On success, responds with GetFaceDetectionOutput with field(s):
    • job_status(Option<VideoJobStatus>):

      The current status of the face detection job.

    • status_message(Option<String>):

      If the job fails, StatusMessage provides a descriptive error message.

    • video_metadata(Option<VideoMetadata>):

      Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

    • next_token(Option<String>):

      If the response is truncated, Amazon Rekognition returns this token that you can use in the subsequent request to retrieve the next set of faces.

    • faces(Option<Vec::<FaceDetection>>):

      An array of faces detected in the video. Each element contains a detected face’s details and the time, in milliseconds from the start of the video, the face was detected.

    • job_id(Option<String>):

      Job identifier for the face detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartFaceDetection.

    • video(Option<Video>):

      Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.

    • job_tag(Option<String>):

      A job identifier specified in the call to StartFaceDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.

  • On failure, responds with SdkError<GetFaceDetectionError>
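
Because this operation is paginated, a common pattern is to drain all pages through into_paginator(). The sketch below assumes a job already started with StartFaceDetection; the job ID is a placeholder:

// into_paginator().send() yields a stream of result pages.
let mut pages = client.get_face_detection()
    .job_id("example-job-id")
    .into_paginator()
    .send();

while let Some(page) = pages.next().await {
    if let Ok(page) = page {
        // Each page carries the job status, video metadata, and a batch of faces.
        println!("job status: {:?}", page.job_status());
    }
}
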
source§

impl Client

source

pub fn get_face_liveness_session_results( &self ) -> GetFaceLivenessSessionResultsFluentBuilder

Constructs a fluent builder for the GetFaceLivenessSessionResults operation.

  • The fluent builder is configurable:
  • On success, responds with GetFaceLivenessSessionResultsOutput with field(s):
    • session_id(String):

      The sessionId for which this request was called.

    • status(LivenessSessionStatus):

      Represents a status corresponding to the state of the session. Possible statuses are: CREATED, IN_PROGRESS, SUCCEEDED, FAILED, EXPIRED.

    • confidence(Option<f32>):

      Probabilistic confidence score indicating whether the person in the given video was live, represented as a float value between 0 and 100.

    • reference_image(Option<AuditImage>):

      A high-quality image from the Face Liveness video that can be used for face comparison or search. It includes a bounding box of the face and the Base64-encoded bytes that return an image. If the CreateFaceLivenessSession request included an OutputConfig argument, the image will be uploaded to an S3Object specified in the output configuration. In case the reference image is not returned, it’s recommended to retry the Liveness check.

    • audit_images(Option<Vec::<AuditImage>>):

      A set of images from the Face Liveness video that can be used for audit purposes. It includes a bounding box of the face and the Base64-encoded bytes that return an image. If the CreateFaceLivenessSession request included an OutputConfig argument, the image will be uploaded to an S3Object specified in the output configuration. If no Amazon S3 bucket is defined, raw bytes are sent instead.

  • On failure, responds with SdkError<GetFaceLivenessSessionResultsError>
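
For example, fetching the outcome of a session created earlier with CreateFaceLivenessSession might look like the sketch below; the session ID is a placeholder:

let result = client.get_face_liveness_session_results()
    .session_id("f7ce0f9a-0000-0000-0000-example")
    .send()
    .await;

if let Ok(output) = &result {
    // SUCCEEDED sessions include a confidence score and usually a reference image.
    println!("status: {:?}, confidence: {:?}", output.status(), output.confidence());
}
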
source§

impl Client

pub fn get_face_search(&self) -> GetFaceSearchFluentBuilder

Constructs a fluent builder for the GetFaceSearch operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
  • On success, responds with GetFaceSearchOutput with field(s):
    • job_status(Option<VideoJobStatus>):

      The current status of the face search job.

    • status_message(Option<String>):

      If the job fails, StatusMessage provides a descriptive error message.

    • next_token(Option<String>):

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of search results.

    • video_metadata(Option<VideoMetadata>):

      Information about a video that Amazon Rekognition analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

    • persons(Option<Vec::<PersonMatch>>):

      An array of persons, PersonMatch, in the video whose face(s) match the face(s) in an Amazon Rekognition collection. It also includes time information for when persons are matched in the video. You specify the input collection in an initial call to StartFaceSearch. Each Persons element includes a time the person was matched, face match details (FaceMatches) for matching faces in the collection, and person information (Person) for the matched person.

    • job_id(Option<String>):

      Job identifier for the face search operation for which you want to obtain results. The job identifier is returned by an initial call to StartFaceSearch.

    • video(Option<Video>):

      Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.

    • job_tag(Option<String>):

      A job identifier specified in the call to StartFaceSearch and returned in the job completion notification sent to your Amazon Simple Notification Service topic.

  • On failure, responds with SdkError<GetFaceSearchError>
source§

impl Client

source

pub fn get_label_detection(&self) -> GetLabelDetectionFluentBuilder

Constructs a fluent builder for the GetLabelDetection operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn get_media_analysis_job(&self) -> GetMediaAnalysisJobFluentBuilder

Constructs a fluent builder for the GetMediaAnalysisJob operation.

source§

impl Client

source

pub fn get_person_tracking(&self) -> GetPersonTrackingFluentBuilder

Constructs a fluent builder for the GetPersonTracking operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
    • job_id(impl Into<String>) / set_job_id(Option<String>):
      required: true

      The identifier for a job that tracks persons in a video. You get the JobId from a call to StartPersonTracking.


    • max_results(i32) / set_max_results(Option<i32>):
      required: false

      Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.


    • next_token(impl Into<String>) / set_next_token(Option<String>):
      required: false

      If the previous response was incomplete (because there are more persons to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of persons.


    • sort_by(PersonTrackingSortBy) / set_sort_by(Option<PersonTrackingSortBy>):
      required: false

      Sort to use for elements in the Persons array. Use TIMESTAMP to sort array elements by the time persons are detected. Use INDEX to sort by the tracked persons. If you sort by INDEX, the array elements for each person are sorted by detection confidence. The default sort is by TIMESTAMP.


  • On success, responds with GetPersonTrackingOutput with field(s):
    • job_status(Option<VideoJobStatus>):

      The current status of the person tracking job.

    • status_message(Option<String>):

      If the job fails, StatusMessage provides a descriptive error message.

    • video_metadata(Option<VideoMetadata>):

      Information about a video that Amazon Rekognition Video analyzed. Videometadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

    • next_token(Option<String>):

      If the response is truncated, Amazon Rekognition Video returns this token that you can use in the subsequent request to retrieve the next set of persons.

    • persons(Option<Vec::<PersonDetection>>):

      An array of the persons detected in the video and the time(s) their path was tracked throughout the video. An array element will exist for each time a person’s path is tracked.

    • job_id(Option<String>):

      Job identifier for the person tracking operation for which you want to obtain results. The job identifier is returned by an initial call to StartPersonTracking.

    • video(Option<Video>):

      Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.

    • job_tag(Option<String>):

      A job identifier specified in the call to StartPersonTracking and returned in the job completion notification sent to your Amazon Simple Notification Service topic.

  • On failure, responds with SdkError<GetPersonTrackingError>
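
For example, retrieving results for a job started with StartPersonTracking, sorted by person index, might look like the sketch below; the job ID is a placeholder (the same into_paginator() pattern shown for GetFaceDetection applies here as well):

use aws_sdk_rekognition::types::PersonTrackingSortBy;

let result = client.get_person_tracking()
    .job_id("example-job-id")
    .sort_by(PersonTrackingSortBy::Index)
    .max_results(500)
    .send()
    .await;
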
source§

impl Client

source

pub fn get_segment_detection(&self) -> GetSegmentDetectionFluentBuilder

Constructs a fluent builder for the GetSegmentDetection operation. This operation supports pagination; See into_paginator().

  • The fluent builder is configurable:
  • On success, responds with GetSegmentDetectionOutput with field(s):
    • job_status(Option<VideoJobStatus>):

      Current status of the segment detection job.

    • status_message(Option<String>):

      If the job fails, StatusMessage provides a descriptive error message.

    • video_metadata(Option<Vec::<VideoMetadata>>):

      Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.

    • audio_metadata(Option<Vec::<AudioMetadata>>):

      An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata objects includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.

    • next_token(Option<String>):

      If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of text.

    • segments(Option<Vec::<SegmentDetection>>):

      An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type the array is sorted by timestamp values.

    • selected_segment_types(Option<Vec::<SegmentTypeInfo>>):

      An array containing the segment types requested in the call to StartSegmentDetection.

    • job_id(Option<String>):

      Job identifier for the segment detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartSegmentDetection.

    • video(Option<Video>):

      Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.

    • job_tag(Option<String>):

      A job identifier specified in the call to StartSegmentDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.

  • On failure, responds with SdkError<GetSegmentDetectionError>
source§

impl Client

source

pub fn get_text_detection(&self) -> GetTextDetectionFluentBuilder

Constructs a fluent builder for the GetTextDetection operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn index_faces(&self) -> IndexFacesFluentBuilder

Constructs a fluent builder for the IndexFaces operation.

  • The fluent builder is configurable:
    • collection_id(impl Into<String>) / set_collection_id(Option<String>):
      required: true

      The ID of an existing collection to which you want to add the faces that are detected in the input images.


    • image(Image) / set_image(Option<Image>):
      required: true

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes isn’t supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.


    • external_image_id(impl Into<String>) / set_external_image_id(Option<String>):
      required: false

      The ID you want to assign to all the faces detected in the image.


    • detection_attributes(Attribute) / set_detection_attributes(Option<Vec::<Attribute>>):
      required: false

      An array of facial attributes you want to be returned. A DEFAULT subset of facial attributes - BoundingBox, Confidence, Pose, Quality, and Landmarks - is always returned. You can request specific facial attributes (in addition to the default list) by using [“DEFAULT”, “FACE_OCCLUDED”] or just [“FACE_OCCLUDED”]. You can request all facial attributes by using [“ALL”]. Requesting more attributes may increase response time.

      If you provide both, [“ALL”, “DEFAULT”], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).


    • max_faces(i32) / set_max_faces(Option<i32>):
      required: false

      The maximum number of faces to index. The value of MaxFaces must be greater than or equal to 1. IndexFaces returns no more than 100 detected faces in an image, even if you specify a larger value for MaxFaces.

      If IndexFaces detects more faces than the value of MaxFaces, the faces with the lowest quality are filtered out first. If there are still more faces than the value of MaxFaces, the faces with the smallest bounding boxes are filtered out (up to the number that’s needed to satisfy the value of MaxFaces). Information about the unindexed faces is available in the UnindexedFaces array.

      The faces that are returned by IndexFaces are sorted by the largest face bounding box size to the smallest size, in descending order.

      MaxFaces can be used with a collection associated with any version of the face model.


    • quality_filter(QualityFilter) / set_quality_filter(Option<QualityFilter>):
      required: false

      A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t indexed. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The default value is AUTO. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specify NONE, no filtering is performed.

      To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.


  • On success, responds with IndexFacesOutput with field(s):
    • face_records(Option<Vec::<FaceRecord>>):

      An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

    • orientation_correction(Option<OrientationCorrection>):

      If your collection is associated with a face detection model that’s later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

      If your collection is associated with a face detection model that’s version 3.0 or earlier, the following applies:

      • If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image’s orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata. The value of OrientationCorrection is null.

      • If the image doesn’t contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn’t perform image correction for images. The bounding box coordinates aren’t translated and represent the object locations before the image is rotated.

      Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

    • face_model_version(Option<String>):

      The version number of the face detection model that’s associated with the input collection (CollectionId).

    • unindexed_faces(Option<Vec::<UnindexedFace>>):

      An array of faces that were detected in the image but weren’t indexed. They weren’t indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

  • On failure, responds with SdkError<IndexFacesError>
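
Putting these parameters together, a sketch that indexes up to five high-quality faces from an S3-hosted image into an existing collection might look like this (collection ID, bucket, key, and external image ID are placeholders):

use aws_sdk_rekognition::types::{Image, QualityFilter, S3Object};

let result = client.index_faces()
    .collection_id("my-collection")
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("team.jpg").build())
        .build())
    .external_image_id("team.jpg")
    .quality_filter(QualityFilter::Auto)
    .max_faces(5)
    .send()
    .await;
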
source§

impl Client

source

pub fn list_collections(&self) -> ListCollectionsFluentBuilder

Constructs a fluent builder for the ListCollections operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn list_dataset_entries(&self) -> ListDatasetEntriesFluentBuilder

Constructs a fluent builder for the ListDatasetEntries operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn list_dataset_labels(&self) -> ListDatasetLabelsFluentBuilder

Constructs a fluent builder for the ListDatasetLabels operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn list_faces(&self) -> ListFacesFluentBuilder

Constructs a fluent builder for the ListFaces operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn list_media_analysis_jobs(&self) -> ListMediaAnalysisJobsFluentBuilder

Constructs a fluent builder for the ListMediaAnalysisJobs operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn list_project_policies(&self) -> ListProjectPoliciesFluentBuilder

Constructs a fluent builder for the ListProjectPolicies operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn list_stream_processors(&self) -> ListStreamProcessorsFluentBuilder

Constructs a fluent builder for the ListStreamProcessors operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn list_tags_for_resource(&self) -> ListTagsForResourceFluentBuilder

Constructs a fluent builder for the ListTagsForResource operation.

source§

impl Client

source

pub fn list_users(&self) -> ListUsersFluentBuilder

Constructs a fluent builder for the ListUsers operation. This operation supports pagination; See into_paginator().

source§

impl Client

source

pub fn put_project_policy(&self) -> PutProjectPolicyFluentBuilder

Constructs a fluent builder for the PutProjectPolicy operation.

source§

impl Client

source

pub fn recognize_celebrities(&self) -> RecognizeCelebritiesFluentBuilder

Constructs a fluent builder for the RecognizeCelebrities operation.

  • The fluent builder is configurable:
    • image(Image) / set_image(Option<Image>):
      required: true

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.


  • On success, responds with RecognizeCelebritiesOutput with field(s):
    • celebrity_faces(Option<Vec::<Celebrity>>):

      Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 64 celebrities in an image. Each celebrity object includes the following attributes: Face, Confidence, Emotions, Landmarks, Pose, Quality, Smile, Id, KnownGender, MatchConfidence, Name, Urls.

    • unrecognized_faces(Option<Vec::<ComparedFace>>):

      Details about each unrecognized face in the image.

    • orientation_correction(Option<OrientationCorrection>):

      Support for estimating image orientation using the OrientationCorrection field has ceased as of August 2021. Any returned values for this field included in an API response will always be NULL.

      The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct the orientation. The bounding box coordinates returned in CelebrityFaces and UnrecognizedFaces represent face locations before the image orientation is corrected.

      If the input image is in .jpeg format, it might contain exchangeable image (Exif) metadata that includes the image’s orientation. If so, and the Exif metadata for the input image populates the orientation field, the value of OrientationCorrection is null. The CelebrityFaces and UnrecognizedFaces bounding box coordinates represent face locations after Exif metadata is used to correct the image orientation. Images in .png format don’t contain Exif metadata.

  • On failure, responds with SdkError<RecognizeCelebritiesError>
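
A minimal sketch of this call against an S3-hosted image (bucket and key are placeholders):

use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.recognize_celebrities()
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("red-carpet.jpg").build())
        .build())
    .send()
    .await;

if let Ok(output) = &result {
    println!("celebrities: {:?}", output.celebrity_faces());
}
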
source§

impl Client

source

pub fn search_faces(&self) -> SearchFacesFluentBuilder

Constructs a fluent builder for the SearchFaces operation.

source§

impl Client

source

pub fn search_faces_by_image(&self) -> SearchFacesByImageFluentBuilder

Constructs a fluent builder for the SearchFacesByImage operation.

  • The fluent builder is configurable:
    • collection_id(impl Into<String>) / set_collection_id(Option<String>):
      required: true

      ID of the collection to search.


    • image(Image) / set_image(Option<Image>):
      required: true

      The input image as base64-encoded bytes or an S3 object. If you use the AWS CLI to call Amazon Rekognition operations, passing base64-encoded image bytes is not supported.

      If you are using an AWS SDK to call Amazon Rekognition, you might not need to base64-encode image bytes passed using the Bytes field. For more information, see Images in the Amazon Rekognition developer guide.


    • max_faces(i32) / set_max_faces(Option<i32>):
      required: false

      Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.


    • face_match_threshold(f32) / set_face_match_threshold(Option<f32>):
      required: false

      (Optional) Specifies the minimum confidence in the face match to return. For example, don’t return any matches where confidence in matches is less than 70%. The default value is 80%.


    • quality_filter(QualityFilter) / set_quality_filter(Option<QualityFilter>):
      required: false

      A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t searched for in the collection. If you specify AUTO, Amazon Rekognition chooses the quality bar. If you specify LOW, MEDIUM, or HIGH, filtering removes all faces that don’t meet the chosen quality bar. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that’s misidentified as a face, a face that’s too blurry, or a face with a pose that’s too extreme to use. If you specify NONE, no filtering is performed. The default value is NONE.

      To use quality filtering, the collection you are using must be associated with version 3 of the face model or higher.


  • On success, responds with SearchFacesByImageOutput with field(s):
  • On failure, responds with SdkError<SearchFacesByImageError>
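
For example, searching a collection for faces that match the largest face in an S3-hosted image might look like the sketch below; the collection ID, bucket, and key are placeholders (SearchUsersByImage follows the same shape with user-oriented parameters):

use aws_sdk_rekognition::types::{Image, S3Object};

let result = client.search_faces_by_image()
    .collection_id("my-collection")
    .image(Image::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("visitor.jpg").build())
        .build())
    .face_match_threshold(95.0)
    .max_faces(5)
    .send()
    .await;
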
source§

impl Client

source

pub fn search_users(&self) -> SearchUsersFluentBuilder

Constructs a fluent builder for the SearchUsers operation.

source§

impl Client

source

pub fn search_users_by_image(&self) -> SearchUsersByImageFluentBuilder

Constructs a fluent builder for the SearchUsersByImage operation.

  • The fluent builder is configurable:
    • collection_id(impl Into<String>) / set_collection_id(Option<String>):
      required: true

      The ID of an existing collection containing the UserID.


    • image(Image) / set_image(Option<Image>):
      required: true

      Provides the input image either as bytes or an S3 object.

      You pass image bytes to an Amazon Rekognition API operation by using the Bytes property. For example, you would use the Bytes property to pass an image loaded from a local file system. Image bytes passed by using the Bytes property must be base64-encoded. Your code may not need to encode image bytes if you are using an AWS SDK to call Amazon Rekognition API operations.

      For more information, see Analyzing an Image Loaded from a Local File System in the Amazon Rekognition Developer Guide.

      You pass images stored in an S3 bucket to an Amazon Rekognition API operation by using the S3Object property. Images stored in an S3 bucket do not need to be base64-encoded.

      The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

      If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

      For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see How Amazon Rekognition works with IAM in the Amazon Rekognition Developer Guide.


    • user_match_threshold(f32) / set_user_match_threshold(Option<f32>):
      required: false

      Specifies the minimum confidence in the UserID match to return. Default value is 80.


    • max_users(i32) / set_max_users(Option<i32>):
      required: false

      Maximum number of UserIDs to return.


    • quality_filter(QualityFilter) / set_quality_filter(Option<QualityFilter>):
      required: false

      A filter that specifies a quality bar for how much filtering is done to identify faces. Filtered faces aren’t searched for in the collection. The default value is NONE.


  • On success, responds with SearchUsersByImageOutput with field(s):
    • user_matches(Option<Vec::<UserMatch>>):

      An array of UserID objects that matched the input face, along with the confidence in the match. The returned structure will be empty if there are no matches. Returned if the SearchUsersByImageResponse action is successful.

    • face_model_version(Option<String>):

      Version number of the face detection model associated with the input collection CollectionId.

    • searched_face(Option<SearchedFaceDetails>):

      A list of FaceDetail objects containing the BoundingBox for the largest face in the image, as well as the confidence in the bounding box, that was searched for matches. If no valid face is detected in the image, the response will contain no SearchedFace object.

    • unsearched_faces(Option<Vec::<UnsearchedFace>>):

      List of UnsearchedFace objects. Contains the face details inferred from the specified image but not used for search. Contains reasons that describe why a face wasn’t used for Search.

  • On failure, responds with SdkError<SearchUsersByImageError>
source§

impl Client

source

pub fn start_celebrity_recognition( &self ) -> StartCelebrityRecognitionFluentBuilder

Constructs a fluent builder for the StartCelebrityRecognition operation.

source§

impl Client

source

pub fn start_content_moderation(&self) -> StartContentModerationFluentBuilder

Constructs a fluent builder for the StartContentModeration operation.

  • The fluent builder is configurable:
    • video(Video) / set_video(Option<Video>):
      required: true

      The video in which you want to detect inappropriate, unwanted, or offensive content. The video must be stored in an Amazon S3 bucket.


    • min_confidence(f32) / set_min_confidence(Option<f32>):
      required: false

      Specifies the minimum confidence that Amazon Rekognition must have in order to return a moderated content label. Confidence represents how certain Amazon Rekognition is that the moderated content is correctly identified. 0 is the lowest confidence. 100 is the highest confidence. Amazon Rekognition doesn’t return any moderated content labels with a confidence level lower than this specified value. If you don’t specify MinConfidence, GetContentModeration returns labels with confidence values greater than or equal to 50 percent.


    • client_request_token(impl Into<String>) / set_client_request_token(Option<String>):
      required: false

      Idempotent token used to identify the start request. If you use the same token with multiple StartContentModeration requests, the same JobId is returned. Use ClientRequestToken to prevent the same job from being accidentally started more than once.


    • notification_channel(NotificationChannel) / set_notification_channel(Option<NotificationChannel>):
      required: false

      The Amazon SNS topic ARN that you want Amazon Rekognition Video to publish the completion status of the content analysis to. The Amazon SNS topic must have a topic name that begins with AmazonRekognition if you are using the AmazonRekognitionServiceRole permissions policy to access the topic.


    • job_tag(impl Into<String>) / set_job_tag(Option<String>):
      required: false

      An identifier you specify that’s returned in the completion notification that’s published to your Amazon Simple Notification Service topic. For example, you can use JobTag to group related jobs and identify them in the completion notification.


  • On success, responds with StartContentModerationOutput with field(s):
    • job_id(Option<String>):

      The identifier for the content analysis job. Use JobId to identify the job in a subsequent call to GetContentModeration.

  • On failure, responds with SdkError<StartContentModerationError>
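
As a sketch of starting an asynchronous job and keeping the JobId for a later GetContentModeration call, the snippet below points the operation at a video in S3; the bucket, key, request token, and job tag are placeholders:

use aws_sdk_rekognition::types::{S3Object, Video};

let result = client.start_content_moderation()
    .video(Video::builder()
        .s3_object(S3Object::builder().bucket("my-bucket").name("clip.mp4").build())
        .build())
    .min_confidence(60.0)
    .client_request_token("start-moderation-clip-001")
    .job_tag("clip-mp4")
    .send()
    .await;

if let Ok(output) = &result {
    println!("started job: {:?}", output.job_id());
}
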
source§

impl Client

source

pub fn start_face_detection(&self) -> StartFaceDetectionFluentBuilder

Constructs a fluent builder for the StartFaceDetection operation.

source§

impl Client

pub fn start_face_search(&self) -> StartFaceSearchFluentBuilder

Constructs a fluent builder for the StartFaceSearch operation.

source§

impl Client

source

pub fn start_label_detection(&self) -> StartLabelDetectionFluentBuilder

Constructs a fluent builder for the StartLabelDetection operation.

source§

impl Client

source

pub fn start_media_analysis_job(&self) -> StartMediaAnalysisJobFluentBuilder

Constructs a fluent builder for the StartMediaAnalysisJob operation.

source§

impl Client

source

pub fn start_person_tracking(&self) -> StartPersonTrackingFluentBuilder

Constructs a fluent builder for the StartPersonTracking operation.

source§

impl Client

source

pub fn start_project_version(&self) -> StartProjectVersionFluentBuilder

Constructs a fluent builder for the StartProjectVersion operation.

source§

impl Client

source

pub fn start_segment_detection(&self) -> StartSegmentDetectionFluentBuilder

Constructs a fluent builder for the StartSegmentDetection operation.

source§

impl Client

source

pub fn start_stream_processor(&self) -> StartStreamProcessorFluentBuilder

Constructs a fluent builder for the StartStreamProcessor operation.

source§

impl Client

source

pub fn start_text_detection(&self) -> StartTextDetectionFluentBuilder

Constructs a fluent builder for the StartTextDetection operation.

source§

impl Client

source

pub fn stop_project_version(&self) -> StopProjectVersionFluentBuilder

Constructs a fluent builder for the StopProjectVersion operation.

source§

impl Client

source

pub fn stop_stream_processor(&self) -> StopStreamProcessorFluentBuilder

Constructs a fluent builder for the StopStreamProcessor operation.

source§

impl Client

source

pub fn tag_resource(&self) -> TagResourceFluentBuilder

Constructs a fluent builder for the TagResource operation.

source§

impl Client

source

pub fn untag_resource(&self) -> UntagResourceFluentBuilder

Constructs a fluent builder for the UntagResource operation.

source§

impl Client

source

pub fn update_dataset_entries(&self) -> UpdateDatasetEntriesFluentBuilder

Constructs a fluent builder for the UpdateDatasetEntries operation.

source§

impl Client

source

pub fn update_stream_processor(&self) -> UpdateStreamProcessorFluentBuilder

Constructs a fluent builder for the UpdateStreamProcessor operation.

source§

impl Client

source

pub fn from_conf(conf: Config) -> Self

Creates a new client from the service Config.

§Panics

This method will panic in the following cases:

  • Retries or timeouts are enabled without a sleep_impl configured.
  • Identity caching is enabled without a sleep_impl and time_source configured.
  • No behavior_version is provided.

The panic message for each of these will have instructions on how to resolve them.

source

pub fn config(&self) -> &Config

Returns the client’s configuration.

source§

impl Client

source

pub fn new(sdk_config: &SdkConfig) -> Self

Creates a new client from an SDK Config.

§Panics
  • This method will panic if the sdk_config is missing an async sleep implementation. If you experience this panic, set the sleep_impl on the Config passed into this function to fix it.
  • This method will panic if the sdk_config is missing an HTTP connector. If you experience this panic, set the http_connector on the Config passed into this function to fix it.
  • This method will panic if no BehaviorVersion is provided. If you experience this panic, set behavior_version on the Config or enable the behavior-version-latest Cargo feature.

Trait Implementations§

source§

impl Clone for Client

source§

fn clone(&self) -> Client

Returns a copy of the value. Read more
1.0.0 · source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
source§

impl Debug for Client

source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
source§

impl Waiters for Client

source§

fn wait_until_project_version_running( &self ) -> ProjectVersionRunningFluentBuilder

Wait until the ProjectVersion is running.
source§

fn wait_until_project_version_training_completed( &self ) -> ProjectVersionTrainingCompletedFluentBuilder

Wait until the ProjectVersion training completes.

Auto Trait Implementations§

§

impl Freeze for Client

§

impl !RefUnwindSafe for Client

§

impl Send for Client

§

impl Sync for Client

§

impl Unpin for Client

§

impl !UnwindSafe for Client

Blanket Implementations§

source§

impl<T> Any for T
where T: 'static + ?Sized,

source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
source§

impl<T> Borrow<T> for T
where T: ?Sized,

source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
source§

impl<T> From<T> for T

source§

fn from(t: T) -> T

Returns the argument unchanged.

source§

impl<T> Instrument for T

source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
source§

impl<T, U> Into<U> for T
where U: From<T>,

source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

source§

impl<T> IntoEither for T

source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
source§

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

source§

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.
source§

impl<T> Same for T

§

type Output = T

Should always be Self
source§

impl<T> ToOwned for T
where T: Clone,

§

type Owned = T

The resulting type after obtaining ownership.
source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

§

type Error = Infallible

The type returned in the event of a conversion error.
source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
source§

impl<T> WithSubscriber for T

source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more