#[non_exhaustive]
pub struct GetSegmentDetectionOutput {
pub job_status: Option<VideoJobStatus>,
pub status_message: Option<String>,
pub video_metadata: Option<Vec<VideoMetadata>>,
pub audio_metadata: Option<Vec<AudioMetadata>>,
pub next_token: Option<String>,
pub segments: Option<Vec<SegmentDetection>>,
pub selected_segment_types: Option<Vec<SegmentTypeInfo>>,
pub job_id: Option<String>,
pub video: Option<Video>,
pub job_tag: Option<String>,
/* private fields */
}
Fields (Non-exhaustive)
This struct is marked as non-exhaustive: it cannot be constructed with Struct { .. } syntax, cannot be matched against without a wildcard .., and struct update syntax will not work.
job_status: Option<VideoJobStatus>
Current status of the segment detection job.
status_message: Option<String>
If the job fails, StatusMessage provides a descriptive error message.
video_metadata: Option<Vec<VideoMetadata>>
Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.
audio_metadata: Option<Vec<AudioMetadata>>
An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
next_token: Option<String>
If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of segments.
segments: Option<Vec<SegmentDetection>>
An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type, the array is sorted by timestamp values.
selected_segment_types: Option<Vec<SegmentTypeInfo>>
An array containing the segment types requested in the call to StartSegmentDetection.
job_id: Option<String>
Job identifier for the segment detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartSegmentDetection.
video: Option<Video>
Video file stored in an Amazon S3 bucket. Amazon Rekognition video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.
job_tag: Option<String>
A job identifier specified in the call to StartSegmentDetection and returned in the job completion notification sent to your Amazon Simple Notification Service topic.
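As a quick orientation before the accessor list below, here is a minimal sketch of fetching this output with the fluent client from the aws-sdk-rekognition crate; the client setup is omitted, the helper name is made up, and the job ID is assumed to come from an earlier StartSegmentDetection call.

use aws_sdk_rekognition::Client;

// Hypothetical helper: prints the status fields of a segment detection job.
async fn print_segment_job_status(
    client: &Client,
    job_id: &str,
) -> Result<(), aws_sdk_rekognition::Error> {
    let output = client
        .get_segment_detection()
        .job_id(job_id)
        .send()
        .await?;

    println!("job status: {:?}", output.job_status());
    if let Some(message) = output.status_message() {
        println!("status message: {message}");
    }
    println!("segments in this page: {}", output.segments().len());
    Ok(())
}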
Implementations
impl GetSegmentDetectionOutput
pub fn job_status(&self) -> Option<&VideoJobStatus>
Current status of the segment detection job.
pub fn status_message(&self) -> Option<&str>
If the job fails, StatusMessage provides a descriptive error message.
pub fn video_metadata(&self) -> &[VideoMetadata]
Currently, Amazon Rekognition Video returns a single object in the VideoMetadata array. The object contains information about the video stream in the input file that Amazon Rekognition Video chose to analyze. The VideoMetadata object includes the video codec, video format, and other information. Video metadata is returned in each page of information returned by GetSegmentDetection.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .video_metadata.is_none().
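A brief sketch of the note above, assuming the current module layout where this output lives under operation::get_segment_detection: the accessor hides the Option behind an empty-slice default, while the public field keeps absence observable.

use aws_sdk_rekognition::operation::get_segment_detection::GetSegmentDetectionOutput;

fn describe_video_metadata(output: &GetSegmentDetectionOutput) {
    // Accessor: never None; an unset field reads back as an empty slice.
    for metadata in output.video_metadata() {
        println!("video stream: {metadata:?}");
    }
    // Raw field: still an Option, so "absent" and "empty" stay distinguishable.
    if output.video_metadata.is_none() {
        println!("no video metadata was returned on this page");
    }
}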
pub fn audio_metadata(&self) -> &[AudioMetadata]
An array of objects. There can be multiple audio streams. Each AudioMetadata object contains metadata for a single audio stream. Audio information in an AudioMetadata object includes the audio codec, the number of audio channels, the duration of the audio stream, and the sample rate. Audio metadata is returned in each page of information returned by GetSegmentDetection.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .audio_metadata.is_none().
pub fn next_token(&self) -> Option<&str>
If the previous response was incomplete (because there are more segments to retrieve), Amazon Rekognition Video returns a pagination token in the response. You can use this pagination token to retrieve the next set of segments.
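Because a single response may be only one of several pages, here is a sketch that drains every page by feeding next_token back into the request; the client and job ID are assumed as in the earlier sketch, and the helper name is made up.

use aws_sdk_rekognition::{types::SegmentDetection, Client};

async fn all_segments(
    client: &Client,
    job_id: &str,
) -> Result<Vec<SegmentDetection>, aws_sdk_rekognition::Error> {
    let mut segments = Vec::new();
    let mut next_token: Option<String> = None;
    loop {
        // set_next_token(None) on the first iteration simply omits the token.
        let page = client
            .get_segment_detection()
            .job_id(job_id)
            .set_next_token(next_token.take())
            .send()
            .await?;
        segments.extend(page.segments().iter().cloned());
        match page.next_token() {
            Some(token) => next_token = Some(token.to_owned()),
            None => break,
        }
    }
    Ok(segments)
}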
pub fn segments(&self) -> &[SegmentDetection]
An array of segments detected in a video. The array is sorted by the segment types (TECHNICAL_CUE or SHOT) specified in the SegmentTypes input parameter of StartSegmentDetection. Within each segment type, the array is sorted by timestamp values.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .segments.is_none().
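A sketch of splitting one page of results by segment type; the r#type() accessor and the SegmentType variants are assumptions inferred from the TECHNICAL_CUE / SHOT note above, so verify them against the SegmentDetection and SegmentType docs.

use aws_sdk_rekognition::types::{SegmentDetection, SegmentType};

// Partitions one page of results into technical cues and shots.
fn split_by_type(
    segments: &[SegmentDetection],
) -> (Vec<&SegmentDetection>, Vec<&SegmentDetection>) {
    let mut cues = Vec::new();
    let mut shots = Vec::new();
    for segment in segments {
        // The array is already sorted by type, then by timestamp.
        match segment.r#type() {
            Some(SegmentType::TechnicalCue) => cues.push(segment),
            Some(SegmentType::Shot) => shots.push(segment),
            _ => {} // SegmentType is non-exhaustive; skip unknown variants.
        }
    }
    (cues, shots)
}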
pub fn selected_segment_types(&self) -> &[SegmentTypeInfo]
An array containing the segment types requested in the call to StartSegmentDetection.
If no value was sent for this field, a default will be set. If you want to determine if no value was sent, use .selected_segment_types.is_none().
pub fn job_id(&self) -> Option<&str>
Job identifier for the segment detection operation for which you want to obtain results. The job identifier is returned by an initial call to StartSegmentDetection.
impl GetSegmentDetectionOutput
pub fn builder() -> GetSegmentDetectionOutputBuilder
Creates a new builder-style object to manufacture GetSegmentDetectionOutput.
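A sketch of driving the builder by hand, which is mostly useful for stubbing responses in tests; since none of the fields are required, build() is assumed to return the struct directly rather than a Result, and the job tag value is made up.

use aws_sdk_rekognition::operation::get_segment_detection::GetSegmentDetectionOutput;
use aws_sdk_rekognition::types::VideoJobStatus;

// Builds a stub output for use in unit tests.
fn stub_output() -> GetSegmentDetectionOutput {
    GetSegmentDetectionOutput::builder()
        .job_status(VideoJobStatus::Succeeded)
        .job_tag("my-segment-job") // tag echoed from StartSegmentDetection
        .build()
}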
Trait Implementations
impl Clone for GetSegmentDetectionOutput
fn clone(&self) -> GetSegmentDetectionOutput
const fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
impl Debug for GetSegmentDetectionOutput
impl PartialEq for GetSegmentDetectionOutput
fn eq(&self, other: &GetSegmentDetectionOutput) -> bool
Tests for self and other values to be equal, and is used by ==.
impl RequestId for GetSegmentDetectionOutput
fn request_id(&self) -> Option<&str>
Returns the request ID, or None if the service could not be reached.
impl StructuralPartialEq for GetSegmentDetectionOutput
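To correlate a response with service-side logs, the RequestId implementation above can be used as sketched here; the exact re-export path of the RequestId trait is an assumption and may differ between SDK versions.

use aws_sdk_rekognition::operation::get_segment_detection::GetSegmentDetectionOutput;
use aws_sdk_rekognition::operation::RequestId; // assumed re-export path

fn log_request_id(output: &GetSegmentDetectionOutput) {
    match output.request_id() {
        Some(id) => println!("Rekognition request ID: {id}"),
        None => println!("no request ID: the service was not reached"),
    }
}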
Auto Trait Implementations
impl Freeze for GetSegmentDetectionOutput
impl RefUnwindSafe for GetSegmentDetectionOutput
impl Send for GetSegmentDetectionOutput
impl Sync for GetSegmentDetectionOutput
impl Unpin for GetSegmentDetectionOutput
impl UnwindSafe for GetSegmentDetectionOutput
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<T> Instrument for T
impl<T> IntoEither for T
impl<T> Paint for T where T: ?Sized