Struct aws_sdk_rekognition::operation::get_label_detection::builders::GetLabelDetectionFluentBuilder
pub struct GetLabelDetectionFluentBuilder { /* private fields */ }
Fluent builder constructing a request to GetLabelDetection.
Gets the label detection results of an Amazon Rekognition Video analysis started by StartLabelDetection.
The label detection operation is started by a call to StartLabelDetection which returns a job identifier (JobId). When the label detection operation finishes, Amazon Rekognition publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartLabelDetection.
To get the results of the label detection operation, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetLabelDetection and pass the job identifier (JobId) from the initial call to StartLabelDetection.
GetLabelDetection returns an array of detected labels (Labels) sorted by the time the labels were detected. You can also sort by the label name by specifying NAME for the SortBy input parameter. If there is no NAME specified, the default sort is by timestamp.
You can select how results are aggregated by using the AggregateBy input parameter. The default aggregation method is TIMESTAMPS. You can also aggregate by SEGMENTS, which aggregates all instances of labels detected in a given segment.
The returned Labels array may include the following attributes:
- Name - The name of the detected label.
- Confidence - The level of confidence in the label assigned to a detected object.
- Parents - The ancestor labels for a detected label. GetLabelDetection returns a hierarchical taxonomy of detected labels. For example, a detected car might be assigned the label car. The label car has two parent labels: Vehicle (its parent) and Transportation (its grandparent). The response includes all ancestors for a label, where every ancestor is a unique label. In the previous example, Car, Vehicle, and Transportation are returned as unique labels in the response.
- Aliases - Possible aliases for the label.
- Categories - The label categories that the detected label belongs to.
- BoundingBox - Bounding boxes are described for all instances of detected common object labels, returned in an array of Instance objects. An Instance object contains a BoundingBox object, describing the location of the label on the input image. It also includes the confidence for the accuracy of the detected bounding box.
- Timestamp - Time, in milliseconds from the start of the video, that the label was detected. For aggregation by SEGMENTS, the StartTimestampMillis, EndTimestampMillis, and DurationMillis structures are what define a segment. Although the "Timestamp" structure is still returned with each label, its value is set to be the same as StartTimestampMillis.
Timestamp and Bounding box information are returned for detected Instances, only if aggregation is done by TIMESTAMPS. If aggregating by SEGMENTS, information about detected instances isn’t returned.
The version of the label model used for the detection is also returned.
Note DominantColors isn't returned for Instances, although it is shown as part of the response in the sample seen below.
Use the MaxResults parameter to limit the number of labels returned. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetLabelDetection and populate the NextToken request parameter with the token value returned from the previous call to GetLabelDetection.
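For example, a minimal sketch of retrieving and printing results once the job has completed might look like the following. It assumes an already-configured aws_sdk_rekognition::Client and that job_id is the identifier returned by StartLabelDetection.

use aws_sdk_rekognition::Client;

// Sketch only: assumes the StartLabelDetection job has already completed
// (for example, the Amazon SNS notification reported SUCCEEDED).
async fn print_labels(client: &Client, job_id: &str) -> Result<(), aws_sdk_rekognition::Error> {
    let resp = client
        .get_label_detection()
        .job_id(job_id)
        .max_results(1000)
        .send()
        .await?;

    // `labels` is an Option<Vec<LabelDetection>>; each entry carries the
    // detected Label plus timing information.
    for detection in resp.labels.unwrap_or_default() {
        println!("{detection:?}");
    }
    Ok(())
}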
Implementations§
impl GetLabelDetectionFluentBuilder
pub fn as_input(&self) -> &GetLabelDetectionInputBuilder
Access the GetLabelDetection input as a reference.
pub async fn send(self) -> Result<GetLabelDetectionOutput, SdkError<GetLabelDetectionError, HttpResponse>>
Sends the request and returns the response.
If an error occurs, an SdkError will be returned with additional details that
can be matched against.
By default, any retryable failures will be retried twice. Retry behavior is configurable with the RetryConfig, which can be set when configuring the client.
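A minimal sketch of matching on the returned SdkError, assuming an already-configured client and a placeholder job id:

use aws_sdk_rekognition::operation::get_label_detection::GetLabelDetectionError;
use aws_sdk_rekognition::Client;

// Sketch only: converts the SdkError into the operation's service error for
// inspection; other failure kinds are folded into its unhandled variant.
async fn get_with_error_handling(client: &Client, job_id: &str) {
    match client.get_label_detection().job_id(job_id).send().await {
        Ok(output) => println!("label model version: {:?}", output.label_model_version),
        Err(err) => {
            let service_err: GetLabelDetectionError = err.into_service_error();
            eprintln!("GetLabelDetection failed: {service_err}");
        }
    }
}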
pub fn customize(self) -> CustomizableOperation<GetLabelDetectionOutput, GetLabelDetectionError, Self>
Consumes this builder, creating a customizable operation that can be modified before being sent.
pub fn into_paginator(self) -> GetLabelDetectionPaginator
Create a paginator for this request
Paginators are used by calling send().await which returns a PaginationStream.
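For instance, a minimal sketch of draining the PaginationStream, assuming an already-configured client and a placeholder job id:

use aws_sdk_rekognition::Client;

// Sketch only: pages through all results and counts the returned labels.
async fn count_labels(client: &Client, job_id: &str) -> Result<usize, aws_sdk_rekognition::Error> {
    let mut pages = client
        .get_label_detection()
        .job_id(job_id)
        .into_paginator()
        .send();

    let mut total = 0;
    // Each item yielded by the PaginationStream is a Result wrapping one
    // GetLabelDetectionOutput page.
    while let Some(page) = pages.next().await {
        total += page?.labels.unwrap_or_default().len();
    }
    Ok(total)
}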
pub fn job_id(self, input: impl Into<String>) -> Self
Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection.
pub fn set_job_id(self, input: Option<String>) -> Self
Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection.
pub fn get_job_id(&self) -> &Option<String>
Job identifier for the label detection operation for which you want results returned. You get the job identifier from an initial call to StartLabelDetection.
pub fn max_results(self, input: i32) -> Self
Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
pub fn set_max_results(self, input: Option<i32>) -> Self
Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
pub fn get_max_results(&self) -> &Option<i32>
Maximum number of results to return per paginated call. The largest value you can specify is 1000. If you specify a value greater than 1000, a maximum of 1000 results is returned. The default value is 1000.
pub fn next_token(self, input: impl Into<String>) -> Self
If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.
pub fn set_next_token(self, input: Option<String>) -> Self
If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.
pub fn get_next_token(&self) -> &Option<String>
If the previous response was incomplete (because there are more labels to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of labels.
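As an alternative to into_paginator, a minimal sketch of paging manually with NextToken (the client and job id are assumed placeholders):

use aws_sdk_rekognition::types::LabelDetection;
use aws_sdk_rekognition::Client;

// Sketch only: loops until the response no longer carries a NextToken.
async fn collect_all_labels(
    client: &Client,
    job_id: &str,
) -> Result<Vec<LabelDetection>, aws_sdk_rekognition::Error> {
    let mut labels = Vec::new();
    let mut next_token: Option<String> = None;

    loop {
        let resp = client
            .get_label_detection()
            .job_id(job_id)
            .set_next_token(next_token.take())
            .send()
            .await?;

        labels.extend(resp.labels.unwrap_or_default());

        next_token = resp.next_token;
        if next_token.is_none() {
            break;
        }
    }
    Ok(labels)
}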
pub fn sort_by(self, input: LabelDetectionSortBy) -> Self
Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
pub fn set_sort_by(self, input: Option<LabelDetectionSortBy>) -> Self
Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
pub fn get_sort_by(&self) -> &Option<LabelDetectionSortBy>
Sort to use for elements in the Labels array. Use TIMESTAMP to sort array elements by the time labels are detected. Use NAME to alphabetically group elements for a label together. Within each label group, the array elements are sorted by detection confidence. The default sort is by TIMESTAMP.
pub fn aggregate_by(self, input: LabelDetectionAggregateBy) -> Self
Defines how to aggregate the returned results. Results can be aggregated by timestamps or segments.
pub fn set_aggregate_by(self, input: Option<LabelDetectionAggregateBy>) -> Self
Defines how to aggregate the returned results. Results can be aggregated by timestamps or segments.
pub fn get_aggregate_by(&self) -> &Option<LabelDetectionAggregateBy>
Defines how to aggregate the returned results. Results can be aggregated by timestamps or segments.
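A minimal sketch of combining both options, requesting SEGMENTS aggregation with results sorted by name (the client and job id are assumed placeholders; the segment fields read below are the StartTimestampMillis/EndTimestampMillis values described above):

use aws_sdk_rekognition::types::{LabelDetectionAggregateBy, LabelDetectionSortBy};
use aws_sdk_rekognition::Client;

// Sketch only: with SEGMENTS aggregation, segment boundaries come back in
// StartTimestampMillis/EndTimestampMillis rather than per-instance data.
async fn get_segment_aggregated(
    client: &Client,
    job_id: &str,
) -> Result<(), aws_sdk_rekognition::Error> {
    let resp = client
        .get_label_detection()
        .job_id(job_id)
        .sort_by(LabelDetectionSortBy::Name)
        .aggregate_by(LabelDetectionAggregateBy::Segments)
        .send()
        .await?;

    for detection in resp.labels.unwrap_or_default() {
        println!(
            "{:?}..{:?} ms: {:?}",
            detection.start_timestamp_millis,
            detection.end_timestamp_millis,
            detection.label.and_then(|l| l.name)
        );
    }
    Ok(())
}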
Trait Implementations§
impl Clone for GetLabelDetectionFluentBuilder
fn clone(&self) -> GetLabelDetectionFluentBuilder
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more