Struct aws_sdk_chime::types::EngineTranscribeSettings

#[non_exhaustive]
pub struct EngineTranscribeSettings {
    pub language_code: Option<TranscribeLanguageCode>,
    pub vocabulary_filter_method: Option<TranscribeVocabularyFilterMethod>,
    pub vocabulary_filter_name: Option<String>,
    pub vocabulary_name: Option<String>,
    pub region: Option<TranscribeRegion>,
    pub enable_partial_results_stabilization: Option<bool>,
    pub partial_results_stability: Option<TranscribePartialResultsStability>,
    pub content_identification_type: Option<TranscribeContentIdentificationType>,
    pub content_redaction_type: Option<TranscribeContentRedactionType>,
    pub pii_entity_types: Option<String>,
    pub language_model_name: Option<String>,
    pub identify_language: Option<bool>,
    pub language_options: Option<String>,
    pub preferred_language: Option<TranscribeLanguageCode>,
    pub vocabulary_names: Option<String>,
    pub vocabulary_filter_names: Option<String>,
}

Settings specific to Amazon Transcribe as the live transcription engine.

If you specify an invalid combination of parameters, a TranscriptFailed event will be sent with the contents of the BadRequestException generated by Amazon Transcribe. For more information on each parameter and which combinations are valid, refer to the StartStreamTranscription API in the Amazon Transcribe Developer Guide.
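
For illustration, a minimal sketch of one valid combination of parameters, built with the builder documented below. The enum variant names (TranscribeLanguageCode::EnUs, TranscribeRegion::Auto, TranscribePartialResultsStability::High) follow the SDK's usual naming conventions but are assumptions, as is build() returning the struct directly (none of the members are required).

use aws_sdk_chime::types::{
    EngineTranscribeSettings, TranscribeLanguageCode, TranscribePartialResultsStability,
    TranscribeRegion,
};

// Sketch only: a fixed language code, an automatically chosen transcription
// Region, and partial-results stabilization. Variant names are assumed.
let settings = EngineTranscribeSettings::builder()
    .language_code(TranscribeLanguageCode::EnUs)
    .region(TranscribeRegion::Auto)
    .enable_partial_results_stabilization(true)
    .partial_results_stability(TranscribePartialResultsStability::High)
    .build();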

Fields (Non-exhaustive)§

This struct is marked as non-exhaustive
Non-exhaustive structs could have additional fields added in future. Therefore, non-exhaustive structs cannot be constructed in external crates using the traditional Struct { .. } syntax; cannot be matched against without a wildcard ..; and struct update syntax will not work.
§language_code: Option<TranscribeLanguageCode>

Specify the language code that represents the language spoken.

If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

§vocabulary_filter_method: Option<TranscribeVocabularyFilterMethod>

Specify how you want your vocabulary filter applied to your transcript.

To replace words with ***, choose mask.

To delete words, choose remove.

To flag words without changing them, choose tag.

§vocabulary_filter_name: Option<String>

Specify the name of the custom vocabulary filter that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary filter must be available in Amazon Transcribe in each Region.

If you include IdentifyLanguage and want to use one or more vocabulary filters with your transcription, use the VocabularyFilterNames parameter instead.
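
For illustration, a hedged sketch that pairs a filter method with a single custom filter; the filter name is a placeholder and the Mask variant name is an assumption.

use aws_sdk_chime::types::{
    EngineTranscribeSettings, TranscribeLanguageCode, TranscribeVocabularyFilterMethod,
};

// Sketch: mask filtered words with *** using one custom vocabulary filter.
// "my-profanity-filter" is a placeholder; the filter must already exist.
let settings = EngineTranscribeSettings::builder()
    .language_code(TranscribeLanguageCode::EnUs)
    .vocabulary_filter_method(TranscribeVocabularyFilterMethod::Mask)
    .vocabulary_filter_name("my-profanity-filter")
    .build();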

§vocabulary_name: Option<String>

Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary must be available in Amazon Transcribe in each Region.

If you include IdentifyLanguage and want to use one or more custom vocabularies with your transcription, use the VocabularyNames parameter instead.

§region: Option<TranscribeRegion>

The AWS Region in which to use Amazon Transcribe.

If you don't specify a Region, then the MediaRegion parameter of the CreateMeeting API will be used. However, if Amazon Transcribe is not available in the MediaRegion, then a TranscriptFailed event is sent.

Use auto to use Amazon Transcribe in a Region near the meeting’s MediaRegion. For more information, refer to Choosing a transcription Region in the Amazon Chime SDK Developer Guide.

§enable_partial_results_stabilization: Option<bool>

Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

§partial_results_stability: Option<TranscribePartialResultsStability>

Specify the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

§content_identification_type: Option<TranscribeContentIdentificationType>

Labels all personally identifiable information (PII) identified in your transcript. If you don't include PiiEntityTypes, all PII is identified.

You can’t set both ContentIdentificationType and ContentRedactionType.

§content_redaction_type: Option<TranscribeContentRedactionType>

Content redaction is performed at the segment level. If you don't include PiiEntityTypes, all PII is redacted.

You can’t set both ContentIdentificationType and ContentRedactionType.

§pii_entity_types: Option<String>

Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select ALL.

Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

Note that if you include PiiEntityTypes, you must also include ContentIdentificationType or ContentRedactionType.

If you include ContentRedactionType or ContentIdentificationType, but do not include PiiEntityTypes, all PII is redacted or identified.
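
For illustration, a hedged sketch that redacts only a few PII types; the Pii variant name is an assumption and the entity-type list is illustrative.

use aws_sdk_chime::types::{
    EngineTranscribeSettings, TranscribeContentRedactionType, TranscribeLanguageCode,
};

// Sketch: PiiEntityTypes is a comma-separated string and requires
// ContentRedactionType (or ContentIdentificationType) to be set.
let settings = EngineTranscribeSettings::builder()
    .language_code(TranscribeLanguageCode::EnUs)
    .content_redaction_type(TranscribeContentRedactionType::Pii)
    .pii_entity_types("SSN,CREDIT_DEBIT_NUMBER,EMAIL")
    .build();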

§language_model_name: Option<String>

Specify the name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

The language of the specified language model must match the language code. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

If you use Amazon Transcribe in multiple Regions, the custom language model must be available in Amazon Transcribe in each Region.

§identify_language: Option<bool>

Enables automatic language identification for your transcription.

If you include IdentifyLanguage, you can optionally use LanguageOptions to include a list of language codes that you think may be present in your audio stream. Including language options can improve transcription accuracy.

You can also use PreferredLanguage to include a preferred language. Doing so can help Amazon Transcribe identify the language faster.

You must include either LanguageCode or IdentifyLanguage.

Language identification can't be combined with custom language models or redaction.
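
For illustration, a hedged sketch of automatic language identification; the language codes and preferred language are illustrative.

use aws_sdk_chime::types::{EngineTranscribeSettings, TranscribeLanguageCode};

// Sketch: IdentifyLanguage replaces a fixed LanguageCode. LanguageOptions is a
// comma-separated string, and PreferredLanguage must be one of those options.
let settings = EngineTranscribeSettings::builder()
    .identify_language(true)
    .language_options("en-US,es-US")
    .preferred_language(TranscribeLanguageCode::EnUs)
    .build();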

§language_options: Option<String>

Specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. If you're unsure what languages are present, do not include this parameter.

Including language options can improve the accuracy of language identification.

If you include LanguageOptions, you must also include IdentifyLanguage.

You can only include one language dialect per language. For example, you cannot include en-US and en-AU.

§preferred_language: Option<TranscribeLanguageCode>

Specify a preferred language from the subset of language codes you specified in LanguageOptions.

You can only use this parameter if you include IdentifyLanguage and LanguageOptions.

§vocabulary_names: Option<String>

Specify the names of the custom vocabularies that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary must be available in Amazon Transcribe in each Region.

If you don't include IdentifyLanguage and want to use a custom vocabulary with your transcription, use the VocabularyName parameter instead.

§vocabulary_filter_names: Option<String>

Specify the names of the custom vocabulary filters that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary filter must be available in Amazon Transcribe in each Region.

If you're not including IdentifyLanguage and want to use a custom vocabulary filter with your transcription, use the VocabularyFilterName parameter instead.
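
For illustration, a hedged sketch that combines IdentifyLanguage with the plural, comma-separated parameters; all vocabulary and filter names are placeholders.

use aws_sdk_chime::types::EngineTranscribeSettings;

// Sketch: with IdentifyLanguage, use VocabularyNames / VocabularyFilterNames
// rather than the singular parameters. Names must exist in each Region used.
let settings = EngineTranscribeSettings::builder()
    .identify_language(true)
    .language_options("en-US,fr-CA")
    .vocabulary_names("my-en-vocabulary,my-fr-vocabulary")
    .vocabulary_filter_names("my-en-filter,my-fr-filter")
    .build();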

Implementations§

impl EngineTranscribeSettings

pub fn language_code(&self) -> Option<&TranscribeLanguageCode>

Specify the language code that represents the language spoken.

If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

pub fn vocabulary_filter_method( &self ) -> Option<&TranscribeVocabularyFilterMethod>

Specify how you want your vocabulary filter applied to your transcript.

To replace words with ***, choose mask.

To delete words, choose remove.

To flag words without changing them, choose tag.

pub fn vocabulary_filter_name(&self) -> Option<&str>

Specify the name of the custom vocabulary filter that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary filter must be available in Amazon Transcribe in each Region.

If you include IdentifyLanguage and want to use one or more vocabulary filters with your transcription, use the VocabularyFilterNames parameter instead.

pub fn vocabulary_name(&self) -> Option<&str>

Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary must be available in Amazon Transcribe in each Region.

If you include IdentifyLanguage and want to use one or more custom vocabularies with your transcription, use the VocabularyNames parameter instead.

pub fn region(&self) -> Option<&TranscribeRegion>

The AWS Region in which to use Amazon Transcribe.

If you don't specify a Region, then the MediaRegion parameter of the CreateMeeting API will be used. However, if Amazon Transcribe is not available in the MediaRegion, then a TranscriptFailed event is sent.

Use auto to use Amazon Transcribe in a Region near the meeting’s MediaRegion. For more information, refer to Choosing a transcription Region in the Amazon Chime SDK Developer Guide.

pub fn enable_partial_results_stabilization(&self) -> Option<bool>

Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

pub fn partial_results_stability( &self ) -> Option<&TranscribePartialResultsStability>

Specify the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

pub fn content_identification_type( &self ) -> Option<&TranscribeContentIdentificationType>

Labels all personally identifiable information (PII) identified in your transcript. If you don't include PiiEntityTypes, all PII is identified.

You can’t set both ContentIdentificationType and ContentRedactionType.

pub fn content_redaction_type(&self) -> Option<&TranscribeContentRedactionType>

Content redaction is performed at the segment level. If you don't include PiiEntityTypes, all PII is redacted.

You can’t set both ContentIdentificationType and ContentRedactionType.

pub fn pii_entity_types(&self) -> Option<&str>

Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select ALL.

Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

Note that if you include PiiEntityTypes, you must also include ContentIdentificationType or ContentRedactionType.

If you include ContentRedactionType or ContentIdentificationType, but do not include PiiEntityTypes, all PII is redacted or identified.

pub fn language_model_name(&self) -> Option<&str>

Specify the name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

The language of the specified language model must match the language code. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

If you use Amazon Transcribe in multiple Regions, the custom language model must be available in Amazon Transcribe in each Region.

pub fn identify_language(&self) -> Option<bool>

Enables automatic language identification for your transcription.

If you include IdentifyLanguage, you can optionally use LanguageOptions to include a list of language codes that you think may be present in your audio stream. Including language options can improve transcription accuracy.

You can also use PreferredLanguage to include a preferred language. Doing so can help Amazon Transcribe identify the language faster.

You must include either LanguageCode or IdentifyLanguage.

Language identification can't be combined with custom language models or redaction.

pub fn language_options(&self) -> Option<&str>

Specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. If you're unsure what languages are present, do not include this parameter.

Including language options can improve the accuracy of language identification.

If you include LanguageOptions, you must also include IdentifyLanguage.

You can only include one language dialect per language. For example, you cannot include en-US and en-AU.

pub fn preferred_language(&self) -> Option<&TranscribeLanguageCode>

Specify a preferred language from the subset of language codes you specified in LanguageOptions.

You can only use this parameter if you include IdentifyLanguage and LanguageOptions.

pub fn vocabulary_names(&self) -> Option<&str>

Specify the names of the custom vocabularies that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary must be available in Amazon Transcribe in each Region.

If you don't include IdentifyLanguage and want to use a custom vocabulary with your transcription, use the VocabularyName parameter instead.

pub fn vocabulary_filter_names(&self) -> Option<&str>

Specify the names of the custom vocabulary filters that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.

If you use Amazon Transcribe in multiple Regions, the vocabulary filter must be available in Amazon Transcribe in each Region.

If you're not including IdentifyLanguage and want to use a custom vocabulary filter with your transcription, use the VocabularyFilterName parameter instead.

impl EngineTranscribeSettings

pub fn builder() -> EngineTranscribeSettingsBuilder

Creates a new builder-style object to manufacture EngineTranscribeSettings.
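
For illustration, a hedged sketch that round-trips values through the builder and the accessor methods documented above; the vocabulary name is a placeholder and build() is assumed to return the struct directly.

use aws_sdk_chime::types::{EngineTranscribeSettings, TranscribeLanguageCode};

// Sketch: fields set through the builder are readable through the accessors;
// fields that were never set come back as None.
let settings = EngineTranscribeSettings::builder()
    .language_code(TranscribeLanguageCode::EnUs)
    .vocabulary_name("my-custom-vocabulary") // placeholder name
    .build();

assert_eq!(settings.language_code(), Some(&TranscribeLanguageCode::EnUs));
assert_eq!(settings.vocabulary_name(), Some("my-custom-vocabulary"));
assert_eq!(settings.identify_language(), None);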

Trait Implementations§

impl Clone for EngineTranscribeSettings

fn clone(&self) -> EngineTranscribeSettings

Returns a copy of the value. Read more

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more

impl Debug for EngineTranscribeSettings

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

impl PartialEq for EngineTranscribeSettings

fn eq(&self, other: &EngineTranscribeSettings) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl StructuralPartialEq for EngineTranscribeSettings

Auto Trait Implementations§

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more

impl<Unshared, Shared> IntoShared<Shared> for Unshared
where Shared: FromUnshared<Unshared>,

fn into_shared(self) -> Shared

Creates a shared type from an unshared type.

impl<T> Same for T

type Output = T

Should always be Self

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more