aws_sdk_rekognition/client/get_segment_detection.rs

// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
impl super::Client {
    /// Constructs a fluent builder for the [`GetSegmentDetection`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder) operation.
    /// This operation supports pagination; see [`into_paginator()`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::into_paginator) and the usage sketch after this `impl` block.
    ///
    /// - The fluent builder is configurable:
    ///   - [`job_id(impl Into<String>)`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::job_id) / [`set_job_id(Option<String>)`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::set_job_id):<br>required: **true**<br><p>Job identifier for the segment detection operation for which you want results returned. You get the job identifier from an initial call to <code>StartSegmentDetection</code>.</p><br>
    ///   - [`max_results(i32)`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::max_results) / [`set_max_results(Option<i32>)`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::set_max_results):<br>required: **false**<br><p>Maximum number of results to return per paginated call. The largest value you can specify is 1000.</p><br>
    ///   - [`next_token(impl Into<String>)`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::next_token) / [`set_next_token(Option<String>)`](crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::set_next_token):<br>required: **false**<br><p>If the response is truncated, Amazon Rekognition Video returns a token that you can use in the subsequent request to retrieve the next set of results.</p><br>
    /// - On success, responds with [`GetSegmentDetectionOutput`](crate::operation::get_segment_detection::GetSegmentDetectionOutput) with field(s):
    ///   - [`job_status(Option<VideoJobStatus>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::job_status): <p>Current status of the segment detection job.</p>
    ///   - [`status_message(Option<String>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::status_message): <p>If the job fails, <code>StatusMessage</code> provides a descriptive error message.</p>
    ///   - [`video_metadata(Option<Vec::<VideoMetadata>>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::video_metadata): <p>Currently, Amazon Rekognition Video returns a single object in the <code>VideoMetadata</code> array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The <code>VideoMetadata</code> object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by <code>GetSegmentDetection</code>.</p>
    ///   - [`audio_metadata(Option<Vec::<AudioMetadata>>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::audio_metadata): <p>An array of objects. There can be multiple audio streams. Each <code>AudioMetadata</code> object contains metadata for a single audio stream. Audio information in an <code>AudioMetadata</code> object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by <code>GetSegmentDetection</code>.</p>
    ///   - [`next_token(Option<String>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::next_token): <p>If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of results.</p>
    ///   - [`segments(Option<Vec::<SegmentDetection>>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::segments): <p>An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the <code>SegmentTypes</code> input parameter of <code>StartSegmentDetection</code>. Within each segment type the array is sorted by timestamp values.</p>
    ///   - [`selected_segment_types(Option<Vec::<SegmentTypeInfo>>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::selected_segment_types): <p>An array containing the segment types requested in the call to <code>StartSegmentDetection</code>.</p>
    ///   - [`job_id(Option<String>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::job_id): <p>Job identifier for the segment detection operation for which you want to obtain results. The job identifier is returned by an initial call to <code>StartSegmentDetection</code>.</p>
    ///   - [`video(Option<Video>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::video): <p>Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as <code>StartLabelDetection</code> use <code>Video</code> to specify a video for analysis. The supported file formats are .mp4, .mov and .avi.</p>
    ///   - [`job_tag(Option<String>)`](crate::operation::get_segment_detection::GetSegmentDetectionOutput::job_tag): <p>A job identifier specified in the call to <code>StartSegmentDetection</code> and returned in the job completion notification sent to your Amazon Simple Notification Service topic.</p>
    /// - On failure, responds with [`SdkError<GetSegmentDetectionError>`](crate::operation::get_segment_detection::GetSegmentDetectionError)
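    ///
    /// # Examples
    ///
    /// A minimal sketch of a one-shot call, assuming a job ID previously returned by
    /// <code>StartSegmentDetection</code>; the ID shown here is a hypothetical placeholder.
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_rekognition::Client) -> Result<(), aws_sdk_rekognition::Error> {
    /// let output = client
    ///     .get_segment_detection()
    ///     .job_id("hypothetical-job-id")
    ///     .max_results(1000) // the largest page size the service accepts
    ///     .send()
    ///     .await?;
    /// // `segments` is an `Option<Vec<SegmentDetection>>`; print whatever was detected.
    /// for segment in output.segments.unwrap_or_default() {
    ///     println!("{segment:?}");
    /// }
    /// # Ok(())
    /// # }
    /// ```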
    pub fn get_segment_detection(&self) -> crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder {
        crate::operation::get_segment_detection::builders::GetSegmentDetectionFluentBuilder::new(self.handle.clone())
    }
}
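
// A consumer-style sketch, not part of the generated code above: it drains every page of
// results through the paginator mentioned in the docs. The job ID is a hypothetical
// placeholder from an earlier `StartSegmentDetection` call; in a consuming crate the
// `crate::` paths below would be `aws_sdk_rekognition::` instead.
#[allow(dead_code)]
async fn print_all_segments(client: &crate::Client) -> Result<(), crate::Error> {
    // `send()` on the paginator yields a stream of pages rather than a single response.
    let mut pages = client
        .get_segment_detection()
        .job_id("hypothetical-job-id")
        .into_paginator()
        .send();
    // Each item is one `GetSegmentDetectionOutput` page, or a service/transport error.
    while let Some(page) = pages.next().await {
        for segment in page?.segments.unwrap_or_default() {
            println!("{segment:?}");
        }
    }
    Ok(())
}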