Struct aws_sdk_mediaconvert::types::builders::InputBuilder

#[non_exhaustive]
pub struct InputBuilder { /* private fields */ }

A builder for Input.
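InputBuilder follows the consuming-builder convention used throughout the AWS SDK for Rust: each fluent setter takes `self` by value and returns `Self`, each `set_*` method accepts an `Option` so a field can be assigned or cleared wholesale, each `get_*` method borrows the builder, and `build` consumes it. Below is a minimal self-contained sketch of that convention; the `Mini*` types are illustrative stand-ins, not SDK types.

```rust
/// Illustrative stand-in for the generated `Input` type; not part of the SDK.
#[derive(Debug, Default, PartialEq)]
pub struct MiniInput {
    pub file_input: Option<String>,
    pub program_number: Option<i32>,
}

/// Illustrative stand-in for `InputBuilder`, showing the method conventions.
#[derive(Debug, Default)]
pub struct MiniInputBuilder {
    file_input: Option<String>,
    program_number: Option<i32>,
}

impl MiniInputBuilder {
    /// Fluent setter: consumes the builder and returns it for chaining.
    pub fn file_input(mut self, input: impl Into<String>) -> Self {
        self.file_input = Some(input.into());
        self
    }
    /// `set_` variant: takes an `Option`, so `None` clears the field.
    pub fn set_program_number(mut self, input: Option<i32>) -> Self {
        self.program_number = input;
        self
    }
    /// `get_` variant: borrows the builder and exposes the current value.
    pub fn get_file_input(&self) -> &Option<String> {
        &self.file_input
    }
    /// Consumes the builder and constructs the value.
    pub fn build(self) -> MiniInput {
        MiniInput {
            file_input: self.file_input,
            program_number: self.program_number,
        }
    }
}
```

The real builder is used the same way: chain setters on a default `InputBuilder` (it implements `Default`, as listed under Trait Implementations) and finish with `build()`.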

Implementations

impl InputBuilder

pub fn advanced_input_filter(self, input: AdvancedInputFilter) -> Self

Use to remove noise, blocking, blurriness, or ringing from your input as a pre-filter step before encoding. The Advanced input filter removes more types of compression artifacts and is an improvement when compared to basic Deblock and Denoise filters. To remove video compression artifacts from your input and improve the video quality: Choose Enabled. Additionally, this filter can help increase the video quality of your output relative to its bitrate, since noisy inputs are more complex and require more bits to encode. To help restore loss of detail after applying the filter, you can optionally add texture or sharpening as an additional step. Jobs that use this feature incur pro-tier pricing. To not apply advanced input filtering: Choose Disabled. Note that you can still apply basic filtering with Deblock and Denoise.

pub fn set_advanced_input_filter( self, input: Option<AdvancedInputFilter> ) -> Self

Use to remove noise, blocking, blurriness, or ringing from your input as a pre-filter step before encoding. The Advanced input filter removes more types of compression artifacts and is an improvement when compared to basic Deblock and Denoise filters. To remove video compression artifacts from your input and improve the video quality: Choose Enabled. Additionally, this filter can help increase the video quality of your output relative to its bitrate, since noisy inputs are more complex and require more bits to encode. To help restore loss of detail after applying the filter, you can optionally add texture or sharpening as an additional step. Jobs that use this feature incur pro-tier pricing. To not apply advanced input filtering: Choose Disabled. Note that you can still apply basic filtering with Deblock and Denoise.

pub fn get_advanced_input_filter(&self) -> &Option<AdvancedInputFilter>

Use to remove noise, blocking, blurriness, or ringing from your input as a pre-filter step before encoding. The Advanced input filter removes more types of compression artifacts and is an improvement when compared to basic Deblock and Denoise filters. To remove video compression artifacts from your input and improve the video quality: Choose Enabled. Additionally, this filter can help increase the video quality of your output relative to its bitrate, since noisy inputs are more complex and require more bits to encode. To help restore loss of detail after applying the filter, you can optionally add texture or sharpening as an additional step. Jobs that use this feature incur pro-tier pricing. To not apply advanced input filtering: Choose Disabled. Note that you can still apply basic filtering with Deblock and Denoise.

pub fn advanced_input_filter_settings( self, input: AdvancedInputFilterSettings ) -> Self

Optional settings for Advanced input filter when you set Advanced input filter to Enabled.

pub fn set_advanced_input_filter_settings( self, input: Option<AdvancedInputFilterSettings> ) -> Self

Optional settings for Advanced input filter when you set Advanced input filter to Enabled.

pub fn get_advanced_input_filter_settings( &self ) -> &Option<AdvancedInputFilterSettings>

Optional settings for Advanced input filter when you set Advanced input filter to Enabled.

pub fn audio_selector_groups( self, k: impl Into<String>, v: AudioSelectorGroup ) -> Self

Adds a key-value pair to audio_selector_groups.

To override the contents of this collection, use set_audio_selector_groups.

Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab. Note that, if you’re working with embedded audio, it’s simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.
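For map-valued fields like this one, the keyed method adds a single entry (creating the map on first use), while the `set_` variant replaces the whole collection, as the note above says. A sketch of that accumulation behavior, using plain `String` values as stand-ins for `AudioSelectorGroup`:

```rust
use std::collections::HashMap;

/// Stand-in builder showing the insert-vs-replace behavior of map fields.
#[derive(Debug, Default)]
pub struct GroupsBuilder {
    audio_selector_groups: Option<HashMap<String, String>>,
}

impl GroupsBuilder {
    /// Keyed setter: adds one key-value pair, creating the map if needed.
    pub fn audio_selector_groups(mut self, k: impl Into<String>, v: impl Into<String>) -> Self {
        self.audio_selector_groups
            .get_or_insert_with(HashMap::new)
            .insert(k.into(), v.into());
        self
    }
    /// `set_` variant: overrides the entire collection in one call.
    pub fn set_audio_selector_groups(mut self, input: Option<HashMap<String, String>>) -> Self {
        self.audio_selector_groups = input;
        self
    }
    pub fn get_audio_selector_groups(&self) -> &Option<HashMap<String, String>> {
        &self.audio_selector_groups
    }
}
```

The same insert-vs-replace split applies to the other map-valued fields on this builder, such as audio_selectors and caption_selectors.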

pub fn set_audio_selector_groups( self, input: Option<HashMap<String, AudioSelectorGroup>> ) -> Self

Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab. Note that, if you’re working with embedded audio, it’s simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

pub fn get_audio_selector_groups( &self ) -> &Option<HashMap<String, AudioSelectorGroup>>

Use audio selector groups to combine multiple sidecar audio inputs so that you can assign them to a single output audio tab. Note that, if you’re working with embedded audio, it’s simpler to assign multiple input tracks into a single audio selector rather than use an audio selector group.

pub fn audio_selectors(self, k: impl Into<String>, v: AudioSelector) -> Self

Adds a key-value pair to audio_selectors.

To override the contents of this collection, use set_audio_selectors.

Use Audio selectors to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

pub fn set_audio_selectors( self, input: Option<HashMap<String, AudioSelector>> ) -> Self

Use Audio selectors to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

pub fn get_audio_selectors(&self) -> &Option<HashMap<String, AudioSelector>>

Use Audio selectors to specify a track or set of tracks from the input that you will use in your outputs. You can use multiple Audio selectors per input.

pub fn caption_selectors(self, k: impl Into<String>, v: CaptionSelector) -> Self

Adds a key-value pair to caption_selectors.

To override the contents of this collection, use set_caption_selectors.

Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 100 captions selectors per input.

pub fn set_caption_selectors( self, input: Option<HashMap<String, CaptionSelector>> ) -> Self

Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 100 captions selectors per input.

pub fn get_caption_selectors(&self) -> &Option<HashMap<String, CaptionSelector>>

Use captions selectors to specify the captions data from your input that you use in your outputs. You can use up to 100 captions selectors per input.

pub fn crop(self, input: Rectangle) -> Self

Use Cropping selection to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection.

pub fn set_crop(self, input: Option<Rectangle>) -> Self

Use Cropping selection to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection.

pub fn get_crop(&self) -> &Option<Rectangle>

Use Cropping selection to specify the video area that the service will include in the output video frame. If you specify a value here, it will override any value that you specify in the output setting Cropping selection.

pub fn deblock_filter(self, input: InputDeblockFilter) -> Self

Enable Deblock to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

pub fn set_deblock_filter(self, input: Option<InputDeblockFilter>) -> Self

Enable Deblock to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

pub fn get_deblock_filter(&self) -> &Option<InputDeblockFilter>

Enable Deblock to produce smoother motion in the output. Default is disabled. Only manually controllable for MPEG2 and uncompressed video inputs.

pub fn decryption_settings(self, input: InputDecryptionSettings) -> Self

Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.

pub fn set_decryption_settings( self, input: Option<InputDecryptionSettings> ) -> Self

Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.

pub fn get_decryption_settings(&self) -> &Option<InputDecryptionSettings>

Settings for decrypting any input files that you encrypt before you upload them to Amazon S3. MediaConvert can decrypt files only when you use AWS Key Management Service (KMS) to encrypt the data key that you use to encrypt your content.

pub fn denoise_filter(self, input: InputDenoiseFilter) -> Self

Enable Denoise to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

pub fn set_denoise_filter(self, input: Option<InputDenoiseFilter>) -> Self

Enable Denoise to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

pub fn get_denoise_filter(&self) -> &Option<InputDenoiseFilter>

Enable Denoise to filter noise from the input. Default is disabled. Only applicable to MPEG2, H.264, H.265, and uncompressed video inputs.

pub fn dolby_vision_metadata_xml(self, input: impl Into<String>) -> Self

Use this setting only when your video source has Dolby Vision studio mastering metadata that is carried in a separate XML file. Specify the Amazon S3 location for the metadata XML file. MediaConvert uses this file to provide global and frame-level metadata for Dolby Vision preprocessing. When you specify a file here and your input also has interleaved global and frame-level metadata, MediaConvert ignores the interleaved metadata and uses only the metadata from this external XML file. Note that your IAM service role must grant MediaConvert read permissions to this file. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html.

pub fn set_dolby_vision_metadata_xml(self, input: Option<String>) -> Self

Use this setting only when your video source has Dolby Vision studio mastering metadata that is carried in a separate XML file. Specify the Amazon S3 location for the metadata XML file. MediaConvert uses this file to provide global and frame-level metadata for Dolby Vision preprocessing. When you specify a file here and your input also has interleaved global and frame-level metadata, MediaConvert ignores the interleaved metadata and uses only the metadata from this external XML file. Note that your IAM service role must grant MediaConvert read permissions to this file. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html.

pub fn get_dolby_vision_metadata_xml(&self) -> &Option<String>

Use this setting only when your video source has Dolby Vision studio mastering metadata that is carried in a separate XML file. Specify the Amazon S3 location for the metadata XML file. MediaConvert uses this file to provide global and frame-level metadata for Dolby Vision preprocessing. When you specify a file here and your input also has interleaved global and frame-level metadata, MediaConvert ignores the interleaved metadata and uses only the metadata from this external XML file. Note that your IAM service role must grant MediaConvert read permissions to this file. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/iam-role.html.

pub fn file_input(self, input: impl Into<String>) -> Self

Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. If your input format is IMF, specify your input by providing the path to your CPL. For example, “s3://bucket/vf/cpl.xml”. If the CPL is in an incomplete IMP, make sure to use Supplemental IMPs to specify any supplemental IMPs that contain assets referenced by the CPL.

pub fn set_file_input(self, input: Option<String>) -> Self

Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. If your input format is IMF, specify your input by providing the path to your CPL. For example, “s3://bucket/vf/cpl.xml”. If the CPL is in an incomplete IMP, make sure to use Supplemental IMPs to specify any supplemental IMPs that contain assets referenced by the CPL.

pub fn get_file_input(&self) -> &Option<String>

Specify the source file for your transcoding job. You can use multiple inputs in a single job. The service concatenates these inputs, in the order that you specify them in the job, to create the outputs. If your input format is IMF, specify your input by providing the path to your CPL. For example, “s3://bucket/vf/cpl.xml”. If the CPL is in an incomplete IMP, make sure to use Supplemental IMPs to specify any supplemental IMPs that contain assets referenced by the CPL.

pub fn filter_enable(self, input: InputFilterEnable) -> Self

Specify whether to apply input filtering to improve the video quality of your input. To apply filtering depending on your input type and quality: Choose Auto. To apply no filtering: Choose Disable. To apply filtering regardless of your input type and quality: Choose Force. When you do, you must also specify a value for Filter strength.

pub fn set_filter_enable(self, input: Option<InputFilterEnable>) -> Self

Specify whether to apply input filtering to improve the video quality of your input. To apply filtering depending on your input type and quality: Choose Auto. To apply no filtering: Choose Disable. To apply filtering regardless of your input type and quality: Choose Force. When you do, you must also specify a value for Filter strength.

pub fn get_filter_enable(&self) -> &Option<InputFilterEnable>

Specify whether to apply input filtering to improve the video quality of your input. To apply filtering depending on your input type and quality: Choose Auto. To apply no filtering: Choose Disable. To apply filtering regardless of your input type and quality: Choose Force. When you do, you must also specify a value for Filter strength.

pub fn filter_strength(self, input: i32) -> Self

Specify the strength of the input filter. To apply an automatic amount of filtering based on the compression artifacts measured in your input: We recommend that you leave Filter strength blank and set Filter enable to Auto. To manually apply filtering: Enter a value from 1 to 5, where 1 is the least amount of filtering and 5 is the most. The value that you enter applies to the strength of the Deblock or Denoise filters, or to the strength of the Advanced input filter.

pub fn set_filter_strength(self, input: Option<i32>) -> Self

Specify the strength of the input filter. To apply an automatic amount of filtering based on the compression artifacts measured in your input: We recommend that you leave Filter strength blank and set Filter enable to Auto. To manually apply filtering: Enter a value from 1 to 5, where 1 is the least amount of filtering and 5 is the most. The value that you enter applies to the strength of the Deblock or Denoise filters, or to the strength of the Advanced input filter.

pub fn get_filter_strength(&self) -> &Option<i32>

Specify the strength of the input filter. To apply an automatic amount of filtering based on the compression artifacts measured in your input: We recommend that you leave Filter strength blank and set Filter enable to Auto. To manually apply filtering: Enter a value from 1 to 5, where 1 is the least amount of filtering and 5 is the most. The value that you enter applies to the strength of the Deblock or Denoise filters, or to the strength of the Advanced input filter.
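A hypothetical client-side pre-flight check mirroring the documented range; MediaConvert validates this server-side, and `validate_filter_strength` is not an SDK function:

```rust
/// Accept a blank strength (automatic filtering with Filter enable = Auto)
/// or a manual value in the documented 1..=5 range.
pub fn validate_filter_strength(strength: Option<i32>) -> Result<Option<i32>, String> {
    match strength {
        None => Ok(None),
        Some(s) if (1..=5).contains(&s) => Ok(Some(s)),
        Some(s) => Err(format!("filter strength {s} is outside the allowed range 1..=5")),
    }
}
```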

pub fn image_inserter(self, input: ImageInserter) -> Self

Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

pub fn set_image_inserter(self, input: Option<ImageInserter>) -> Self

Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

pub fn get_image_inserter(&self) -> &Option<ImageInserter>

Enable the image inserter feature to include a graphic overlay on your video. Enable or disable this feature for each input individually. This setting is disabled by default.

pub fn input_clippings(self, input: InputClipping) -> Self

Appends an item to input_clippings.

To override the contents of this collection, use set_input_clippings.

Contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.
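The defaulting rules described above (a start-only clip runs to the end of the input, an end-only clip runs from the beginning, and multiple clips are stitched together in the order given) can be sketched as follows. The `Clip` and `resolve_clips` helpers are hypothetical, for illustration only; MediaConvert expresses clip times as timecodes rather than the bare frame numbers used here.

```rust
/// Hypothetical clip with an optional start and end, in frame numbers.
#[derive(Clone, Copy)]
pub struct Clip {
    pub start: Option<u64>,
    pub end: Option<u64>,
}

/// Resolve each clip against the input length, preserving the given order:
/// a missing start defaults to the beginning, a missing end to the input end.
pub fn resolve_clips(input_len: u64, clips: &[Clip]) -> Vec<(u64, u64)> {
    clips
        .iter()
        .map(|c| (c.start.unwrap_or(0), c.end.unwrap_or(input_len)))
        .collect()
}
```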

pub fn set_input_clippings(self, input: Option<Vec<InputClipping>>) -> Self

Contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

pub fn get_input_clippings(&self) -> &Option<Vec<InputClipping>>

Contains sets of start and end times that together specify a portion of the input to be used in the outputs. If you provide only a start time, the clip will be the entire input from that point to the end. If you provide only an end time, it will be the entire input up to that point. When you specify more than one input clip, the transcoding service creates the job outputs by stringing the clips together in the order you specify them.

pub fn input_scan_type(self, input: InputScanType) -> Self

When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn’t automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don’t specify, the default value is Auto. Auto is the correct setting for all inputs that are not PsF. Don’t set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

pub fn set_input_scan_type(self, input: Option<InputScanType>) -> Self

When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn’t automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don’t specify, the default value is Auto. Auto is the correct setting for all inputs that are not PsF. Don’t set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

pub fn get_input_scan_type(&self) -> &Option<InputScanType>

When you have a progressive segmented frame (PsF) input, use this setting to flag the input as PsF. MediaConvert doesn’t automatically detect PsF. Therefore, flagging your input as PsF results in better preservation of video quality when you do deinterlacing and frame rate conversion. If you don’t specify, the default value is Auto. Auto is the correct setting for all inputs that are not PsF. Don’t set this value to PsF when your input is interlaced. Doing so creates horizontal interlacing artifacts.

pub fn position(self, input: Rectangle) -> Self

Use Selection placement to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement. If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD to Respond. If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior.

pub fn set_position(self, input: Option<Rectangle>) -> Self

Use Selection placement to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement. If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD to Respond. If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior.

pub fn get_position(&self) -> &Option<Rectangle>

Use Selection placement to define the video area in your output frame. The area outside of the rectangle that you specify here is black. If you specify a value here, it will override any value that you specify in the output setting Selection placement. If you specify a value here, this will override any AFD values in your input, even if you set Respond to AFD to Respond. If you specify a value here, this will ignore anything that you specify for the setting Scaling Behavior.

pub fn program_number(self, input: i32) -> Self

Use Program to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn’t exist, the transcoding service will use this default.

pub fn set_program_number(self, input: Option<i32>) -> Self

Use Program to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn’t exist, the transcoding service will use this default.

pub fn get_program_number(&self) -> &Option<i32>

Use Program to select a specific program from within a multi-program transport stream. Note that Quad 4K is not currently supported. Default is the first program within the transport stream. If the program you specify doesn’t exist, the transcoding service will use this default.

pub fn psi_control(self, input: InputPsiControl) -> Self

Set PSI control for transport stream inputs to specify which data the demuxing process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

pub fn set_psi_control(self, input: Option<InputPsiControl>) -> Self

Set PSI control for transport stream inputs to specify which data the demuxing process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

pub fn get_psi_control(&self) -> &Option<InputPsiControl>

Set PSI control for transport stream inputs to specify which data the demuxing process scans. * Ignore PSI - Scan all PIDs for audio and video. * Use PSI - Scan only PSI data.

pub fn supplemental_imps(self, input: impl Into<String>) -> Self

Appends an item to supplemental_imps.

To override the contents of this collection, use set_supplemental_imps.

Provide a list of any necessary supplemental IMPs. You need supplemental IMPs if the CPL that you’re using for your input is in an incomplete IMP. Specify either the supplemental IMP directories with a trailing slash or the ASSETMAP.xml files. For example [“s3://bucket/ov/”, “s3://bucket/vf2/ASSETMAP.xml”]. You don’t need to specify the IMP that contains your input CPL, because the service automatically detects it.
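Each entry must be either an IMP directory with a trailing slash or a path to an ASSETMAP.xml file. A hypothetical shape check for the values you pass to supplemental_imps; this is not an SDK function:

```rust
/// Check the documented shape of a supplemental IMP entry: either a
/// directory ending in '/' or an ASSETMAP.xml path.
pub fn is_valid_supplemental_imp(entry: &str) -> bool {
    entry.ends_with('/') || entry.ends_with("ASSETMAP.xml")
}
```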

pub fn set_supplemental_imps(self, input: Option<Vec<String>>) -> Self

Provide a list of any necessary supplemental IMPs. You need supplemental IMPs if the CPL that you’re using for your input is in an incomplete IMP. Specify either the supplemental IMP directories with a trailing slash or the ASSETMAP.xml files. For example [“s3://bucket/ov/”, “s3://bucket/vf2/ASSETMAP.xml”]. You don’t need to specify the IMP that contains your input CPL, because the service automatically detects it.

pub fn get_supplemental_imps(&self) -> &Option<Vec<String>>

Provide a list of any necessary supplemental IMPs. You need supplemental IMPs if the CPL that you’re using for your input is in an incomplete IMP. Specify either the supplemental IMP directories with a trailing slash or the ASSETMAP.xml files. For example [“s3://bucket/ov/”, “s3://bucket/vf2/ASSETMAP.xml”]. You don’t need to specify the IMP that contains your input CPL, because the service automatically detects it.

pub fn timecode_source(self, input: InputTimecodeSource) -> Self

Use this Timecode source setting, located under the input settings, to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded to use the timecodes in your input video. Choose Start at zero to start the first frame at zero. Choose Specified start to start the first frame at the timecode that you specify in the setting Start timecode. If you don’t specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

pub fn set_timecode_source(self, input: Option<InputTimecodeSource>) -> Self

Use this Timecode source setting, located under the input settings, to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded to use the timecodes in your input video. Choose Start at zero to start the first frame at zero. Choose Specified start to start the first frame at the timecode that you specify in the setting Start timecode. If you don’t specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

pub fn get_timecode_source(&self) -> &Option<InputTimecodeSource>

Use this Timecode source setting, located under the input settings, to specify how the service counts input video frames. This input frame count affects only the behavior of features that apply to a single input at a time, such as input clipping and synchronizing some captions formats. Choose Embedded to use the timecodes in your input video. Choose Start at zero to start the first frame at zero. Choose Specified start to start the first frame at the timecode that you specify in the setting Start timecode. If you don’t specify a value for Timecode source, the service will use Embedded by default. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

pub fn timecode_start(self, input: impl Into<String>) -> Self

Specify the timecode that you want the service to use for this input’s initial frame. To use this setting, you must set the Timecode source setting, located under the input settings, to Specified start. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

pub fn set_timecode_start(self, input: Option<String>) -> Self

Specify the timecode that you want the service to use for this input’s initial frame. To use this setting, you must set the Timecode source setting, located under the input settings, to Specified start. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.

pub fn get_timecode_start(&self) -> &Option<String>

Specify the timecode that you want the service to use for this input’s initial frame. To use this setting, you must set the Timecode source setting, located under the input settings, to Specified start. For more information about timecodes, see https://docs.aws.amazon.com/console/mediaconvert/timecode.
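MediaConvert start timecodes use the HH:MM:SS:FF form (see the linked timecode documentation). A hypothetical parser sketch for pre-validating the string you pass to timecode_start; the function is illustrative, not part of the SDK, and does not check the frame count against your source's frame rate or handle drop-frame (semicolon-separated) timecodes:

```rust
/// Parse an HH:MM:SS:FF timecode into its four numeric fields, rejecting
/// anything that is not exactly four colon-separated numbers.
pub fn parse_timecode(tc: &str) -> Option<(u32, u32, u32, u32)> {
    let mut parts = tc.split(':');
    let h: u32 = parts.next()?.parse().ok()?;
    let m: u32 = parts.next()?.parse().ok()?;
    let s: u32 = parts.next()?.parse().ok()?;
    let f: u32 = parts.next()?.parse().ok()?;
    if parts.next().is_some() || m > 59 || s > 59 {
        return None;
    }
    Some((h, m, s, f))
}
```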

pub fn video_generator(self, input: InputVideoGenerator) -> Self

When you include Video generator, MediaConvert creates a video input with black frames. Use this setting if you do not have a video input or if you want to add black video frames before, or after, other inputs. You can specify Video generator, or you can specify an Input file, but you cannot specify both. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/video-generator.html

pub fn set_video_generator(self, input: Option<InputVideoGenerator>) -> Self

When you include Video generator, MediaConvert creates a video input with black frames. Use this setting if you do not have a video input or if you want to add black video frames before, or after, other inputs. You can specify Video generator, or you can specify an Input file, but you cannot specify both. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/video-generator.html

pub fn get_video_generator(&self) -> &Option<InputVideoGenerator>

When you include Video generator, MediaConvert creates a video input with black frames. Use this setting if you do not have a video input or if you want to add black video frames before, or after, other inputs. You can specify Video generator, or you can specify an Input file, but you cannot specify both. For more information, see https://docs.aws.amazon.com/mediaconvert/latest/ug/video-generator.html
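The mutual exclusivity stated above can be checked up front. A hypothetical helper, not an SDK function, with `()` standing in for InputVideoGenerator:

```rust
/// Reject the documented invalid combination: a Video generator together
/// with an input file on the same input.
pub fn check_video_source(
    file_input: &Option<String>,
    video_generator: &Option<()>,
) -> Result<(), &'static str> {
    if file_input.is_some() && video_generator.is_some() {
        Err("specify Video generator or an input file, but not both")
    } else {
        Ok(())
    }
}
```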

pub fn video_overlays(self, input: VideoOverlay) -> Self

Appends an item to video_overlays.

To override the contents of this collection, use set_video_overlays.

Contains an array of video overlays.

pub fn set_video_overlays(self, input: Option<Vec<VideoOverlay>>) -> Self

Contains an array of video overlays.

pub fn get_video_overlays(&self) -> &Option<Vec<VideoOverlay>>

Contains an array of video overlays.

pub fn video_selector(self, input: VideoSelector) -> Self

Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

pub fn set_video_selector(self, input: Option<VideoSelector>) -> Self

Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

pub fn get_video_selector(&self) -> &Option<VideoSelector>

Input video selectors contain the video settings for the input. Each of your inputs can have up to one video selector.

pub fn build(self) -> Input

Consumes the builder and constructs an Input.

Trait Implementations

impl Clone for InputBuilder

fn clone(&self) -> InputBuilder

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for InputBuilder

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for InputBuilder

fn default() -> InputBuilder

Returns the “default value” for a type.

impl PartialEq for InputBuilder

fn eq(&self, other: &InputBuilder) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for InputBuilder

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.