pub struct Client { /* private fields */ }

Client for Amazon Transcribe Service

Client for invoking operations on Amazon Transcribe Service. Each operation on Amazon Transcribe Service is a method on this struct. .send() must be invoked on the generated operations to dispatch the request to the service.

Examples

Constructing a client and invoking an operation

    // Create a shared configuration. This can be used and shared between multiple service clients.
    let shared_config = aws_config::load_from_env().await;
    let client = aws_sdk_transcribe::Client::new(&shared_config);
    // invoke an operation
    /* let rsp = client
        .<operation_name>()
        .<param>("some value")
        .send().await; */

Constructing a client with custom configuration

use aws_config::RetryConfig;
let shared_config = aws_config::load_from_env().await;
let config = aws_sdk_transcribe::config::Builder::from(&shared_config)
  .retry_config(RetryConfig::disabled())
  .build();
let client = aws_sdk_transcribe::Client::from_conf(config);

Implementations

Creates a client with the given service configuration.

Returns the client’s configuration.

Constructs a fluent builder for the CreateCallAnalyticsCategory operation.

Constructs a fluent builder for the CreateLanguageModel operation.

  • The fluent builder is configurable:
    • language_code(ClmLanguageCode) / set_language_code(Option<ClmLanguageCode>):

      The language code that represents the language of your model. Each language model must contain terms in only one language, and the language you select for your model must match the language of your training and tuning data.

      For a list of supported languages and their associated language codes, refer to the Supported languages table. Note that U.S. English (en-US) is the only language supported with Amazon Transcribe Medical.

      A custom language model can only be used to transcribe files in the same language as the model. For example, if you create a language model using US English (en-US), you can only apply this model to files that contain English audio.

    • base_model_name(BaseModelName) / set_base_model_name(Option<BaseModelName>):

      The Amazon Transcribe standard language model, or base model, used to create your custom language model. Amazon Transcribe offers two options for base models: Wideband and Narrowband.

      If the audio you want to transcribe has a sample rate of 16,000 Hz or greater, choose WideBand. To transcribe audio with a sample rate less than 16,000 Hz, choose NarrowBand.

    • model_name(impl Into<String>) / set_model_name(Option<String>):

      A unique name, chosen by you, for your custom language model.

      This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new language model with the same name as an existing language model, you get a ConflictException error.

    • input_data_config(InputDataConfig) / set_input_data_config(Option<InputDataConfig>):

      Contains the Amazon S3 location of the training data you want to use to create a new custom language model, and permissions to access this location.

      When using InputDataConfig, you must include these sub-parameters: S3Uri, which is the Amazon S3 location of your training data, and DataAccessRoleArn, which is the Amazon Resource Name (ARN) of the role that has permission to access your specified Amazon S3 location. You can optionally include TuningDataS3Uri, which is the Amazon S3 location of your tuning data. If you specify different Amazon S3 locations for training and tuning data, the ARN you use must have permissions to access both locations.

    • tags(Vec<Tag>) / set_tags(Option<Vec<Tag>>):

      Adds one or more custom tags, each in the form of a key:value pair, to a new custom language model at the time you create this new model.

      To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

  • On success, responds with CreateLanguageModelOutput with field(s):
  • On failure, responds with SdkError<CreateLanguageModelError>
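A minimal sketch of this fluent builder in use. The bucket URI, role ARN, and model name below are placeholders, and the `model` module path may differ between SDK versions:

```rust
use aws_sdk_transcribe::model::{BaseModelName, ClmLanguageCode, InputDataConfig};

async fn create_model(client: &aws_sdk_transcribe::Client) -> Result<(), aws_sdk_transcribe::Error> {
    // S3Uri and DataAccessRoleArn are the required sub-parameters of InputDataConfig.
    let input_data = InputDataConfig::builder()
        .s3_uri("s3://DOC-EXAMPLE-BUCKET/training-data/")
        .data_access_role_arn("arn:aws:iam::111122223333:role/ExampleRole")
        .build();

    client
        .create_language_model()
        .model_name("my-custom-model")
        .language_code(ClmLanguageCode::EnUs)
        // WideBand for audio sampled at 16,000 Hz or greater.
        .base_model_name(BaseModelName::WideBand)
        .input_data_config(input_data)
        .send()
        .await?;
    Ok(())
}
```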

Constructs a fluent builder for the CreateMedicalVocabulary operation.

Constructs a fluent builder for the CreateVocabulary operation.

  • The fluent builder is configurable:
    • vocabulary_name(impl Into<String>) / set_vocabulary_name(Option<String>):

      A unique name, chosen by you, for your new custom vocabulary.

      This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new vocabulary with the same name as an existing vocabulary, you get a ConflictException error.

    • language_code(LanguageCode) / set_language_code(Option<LanguageCode>):

      The language code that represents the language of the entries in your custom vocabulary. Each vocabulary must contain terms in only one language.

      A custom vocabulary can only be used to transcribe files in the same language as the vocabulary. For example, if you create a vocabulary using US English (en-US), you can only apply this vocabulary to files that contain English audio.

      For a list of supported languages and their associated language codes, refer to the Supported languages table.

    • phrases(Vec<String>) / set_phrases(Option<Vec<String>>):

      Use this parameter if you want to create your vocabulary by including all desired terms, as comma-separated values, within your request. The other option for creating your vocabulary is to save your entries in a text file and upload them to an Amazon S3 bucket, then specify the location of your file using the VocabularyFileUri parameter.

      Note that if you include Phrases in your request, you cannot use VocabularyFileUri; you must choose one or the other.

      Each language has a character set that contains all allowed characters for that specific language. If you use unsupported characters, your vocabulary filter request fails. Refer to Character Sets for Custom Vocabularies to get the character set for your language.

    • vocabulary_file_uri(impl Into<String>) / set_vocabulary_file_uri(Option<String>):

      The Amazon S3 location of the text file that contains your custom vocabulary. The URI must be located in the same Amazon Web Services Region as the resource you’re calling.

      Here’s an example URI path: s3://DOC-EXAMPLE-BUCKET/my-vocab-file.txt

      Note that if you include VocabularyFileUri in your request, you cannot use the Phrases flag; you must choose one or the other.

    • tags(Vec<Tag>) / set_tags(Option<Vec<Tag>>):

      Adds one or more custom tags, each in the form of a key:value pair, to a new custom vocabulary at the time you create this new vocabulary.

      To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

  • On success, responds with CreateVocabularyOutput with field(s):
  • On failure, responds with SdkError<CreateVocabularyError>
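As a sketch (the vocabulary name and phrases are placeholders), a request using the Phrases option rather than VocabularyFileUri could look like this; note that the fluent `phrases` setter appends one entry per call:

```rust
use aws_sdk_transcribe::model::LanguageCode;

async fn create_vocab(client: &aws_sdk_transcribe::Client) -> Result<(), aws_sdk_transcribe::Error> {
    client
        .create_vocabulary()
        .vocabulary_name("my-vocabulary")
        .language_code(LanguageCode::EnUs)
        // Repeat `phrases` for each term; do not also set vocabulary_file_uri.
        .phrases("Los-Angeles")
        .phrases("Amazon-dot-com")
        .send()
        .await?;
    Ok(())
}
```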

Constructs a fluent builder for the CreateVocabularyFilter operation.

  • The fluent builder is configurable:
    • vocabulary_filter_name(impl Into<String>) / set_vocabulary_filter_name(Option<String>):

      A unique name, chosen by you, for your new custom vocabulary filter.

      This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new vocabulary filter with the same name as an existing vocabulary filter, you get a ConflictException error.

    • language_code(LanguageCode) / set_language_code(Option<LanguageCode>):

      The language code that represents the language of the entries in your vocabulary filter. Each vocabulary filter must contain terms in only one language.

      A vocabulary filter can only be used to transcribe files in the same language as the filter. For example, if you create a vocabulary filter using US English (en-US), you can only apply this filter to files that contain English audio.

      For a list of supported languages and their associated language codes, refer to the Supported languages table.

    • words(Vec<String>) / set_words(Option<Vec<String>>):

      Use this parameter if you want to create your vocabulary filter by including all desired terms, as comma-separated values, within your request. The other option for creating your vocabulary filter is to save your entries in a text file and upload them to an Amazon S3 bucket, then specify the location of your file using the VocabularyFilterFileUri parameter.

      Note that if you include Words in your request, you cannot use VocabularyFilterFileUri; you must choose one or the other.

      Each language has a character set that contains all allowed characters for that specific language. If you use unsupported characters, your vocabulary filter request fails. Refer to Character Sets for Custom Vocabularies to get the character set for your language.

    • vocabulary_filter_file_uri(impl Into<String>) / set_vocabulary_filter_file_uri(Option<String>):

      The Amazon S3 location of the text file that contains your custom vocabulary filter terms. The URI must be located in the same Amazon Web Services Region as the resource you’re calling.

      Here’s an example URI path: s3://DOC-EXAMPLE-BUCKET/my-vocab-filter-file.txt

      Note that if you include VocabularyFilterFileUri in your request, you cannot use Words; you must choose one or the other.

    • tags(Vec<Tag>) / set_tags(Option<Vec<Tag>>):

      Adds one or more custom tags, each in the form of a key:value pair, to a new custom vocabulary filter at the time you create this new filter.

      To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

  • On success, responds with CreateVocabularyFilterOutput with field(s):
  • On failure, responds with SdkError<CreateVocabularyFilterError>

Constructs a fluent builder for the DeleteCallAnalyticsCategory operation.

Constructs a fluent builder for the DeleteCallAnalyticsJob operation.

Constructs a fluent builder for the DeleteLanguageModel operation.

Constructs a fluent builder for the DeleteMedicalTranscriptionJob operation.

Constructs a fluent builder for the DeleteMedicalVocabulary operation.

Constructs a fluent builder for the DeleteTranscriptionJob operation.

Constructs a fluent builder for the DeleteVocabulary operation.

Constructs a fluent builder for the DeleteVocabularyFilter operation.

Constructs a fluent builder for the DescribeLanguageModel operation.

  • The fluent builder is configurable:
  • On success, responds with DescribeLanguageModelOutput with field(s):
    • language_model(Option<LanguageModel>):

      Provides information about the specified custom language model.

      This parameter also shows if the base language model you used to create your custom language model has been updated. If Amazon Transcribe has updated the base model, you can create a new custom language model using the updated base model.

      If you tried to create a new custom language model and the request wasn’t successful, you can use DescribeLanguageModel to help identify the reason for the failure.

  • On failure, responds with SdkError<DescribeLanguageModelError>
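A sketch of reading the response (the model name is a placeholder; accessor names assume the generated `LanguageModel` type with `model_status` and `upgrade_availability` fields):

```rust
async fn describe(client: &aws_sdk_transcribe::Client) -> Result<(), aws_sdk_transcribe::Error> {
    let resp = client
        .describe_language_model()
        .model_name("my-custom-model")
        .send()
        .await?;

    // language_model is Option<LanguageModel> in the output.
    if let Some(model) = resp.language_model() {
        println!("status: {:?}", model.model_status());
        // Indicates whether the base model has been updated since this model was created.
        println!("upgrade available: {:?}", model.upgrade_availability());
    }
    Ok(())
}
```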

Constructs a fluent builder for the GetCallAnalyticsCategory operation.

Constructs a fluent builder for the GetCallAnalyticsJob operation.

Constructs a fluent builder for the GetMedicalTranscriptionJob operation.

Constructs a fluent builder for the GetMedicalVocabulary operation.

Constructs a fluent builder for the GetTranscriptionJob operation.

Constructs a fluent builder for the GetVocabulary operation.

Constructs a fluent builder for the GetVocabularyFilter operation.

Constructs a fluent builder for the ListCallAnalyticsCategories operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
    • next_token(impl Into<String>) / set_next_token(Option<String>):

      If your ListCallAnalyticsCategories request returns more results than can be displayed, NextToken is displayed in the response with an associated string. To get the next page of results, copy this string and repeat your request, including NextToken with the value of the copied string. Repeat as needed to view all your results.

    • max_results(i32) / set_max_results(Option<i32>):

      The maximum number of Call Analytics categories to return in each page of results. If there are fewer results than the value you specify, only the actual results are returned. If you don’t specify a value, a default of 5 is used.

  • On success, responds with ListCallAnalyticsCategoriesOutput with field(s):
    • next_token(Option<String>):

      If NextToken is present in your response, it indicates that not all results are displayed. To view the next set of results, copy the string associated with the NextToken parameter in your results output, then run your request again including NextToken with the value of the copied string. Repeat as needed to view all your results.

    • categories(Option<Vec<CategoryProperties>>):

      Provides detailed information about your Call Analytics categories, including all the rules associated with each category.

  • On failure, responds with SdkError<ListCallAnalyticsCategoriesError>
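Rather than copying NextToken by hand, you can let the paginator drive the token loop. A sketch (assumes the tokio-stream crate for `StreamExt`; accessor names may differ between SDK versions):

```rust
use tokio_stream::StreamExt;

async fn list_all(client: &aws_sdk_transcribe::Client) -> Result<(), aws_sdk_transcribe::Error> {
    // into_paginator() re-sends the request with NextToken until results are exhausted,
    // yielding one page per iteration.
    let mut pages = client
        .list_call_analytics_categories()
        .max_results(5)
        .into_paginator()
        .send();

    while let Some(page) = pages.next().await {
        let page = page?;
        for category in page.categories().unwrap_or_default() {
            println!("{:?}", category.category_name());
        }
    }
    Ok(())
}
```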

Constructs a fluent builder for the ListCallAnalyticsJobs operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the ListLanguageModels operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
  • On success, responds with ListLanguageModelsOutput with field(s):
    • next_token(Option<String>):

      If NextToken is present in your response, it indicates that not all results are displayed. To view the next set of results, copy the string associated with the NextToken parameter in your results output, then run your request again including NextToken with the value of the copied string. Repeat as needed to view all your results.

    • models(Option<Vec<LanguageModel>>):

      Provides information about the custom language models that match the criteria specified in your request.

  • On failure, responds with SdkError<ListLanguageModelsError>

Constructs a fluent builder for the ListMedicalTranscriptionJobs operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the ListMedicalVocabularies operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
  • On success, responds with ListMedicalVocabulariesOutput with field(s):
    • status(Option<VocabularyState>):

      Lists all custom medical vocabularies that have the status specified in your request. Vocabularies are ordered by creation date, with the newest vocabulary first.

    • next_token(Option<String>):

      If NextToken is present in your response, it indicates that not all results are displayed. To view the next set of results, copy the string associated with the NextToken parameter in your results output, then run your request again including NextToken with the value of the copied string. Repeat as needed to view all your results.

    • vocabularies(Option<Vec<VocabularyInfo>>):

      Provides information about the custom medical vocabularies that match the criteria specified in your request.

  • On failure, responds with SdkError<ListMedicalVocabulariesError>

Constructs a fluent builder for the ListTagsForResource operation.

Constructs a fluent builder for the ListTranscriptionJobs operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the ListVocabularies operation. This operation supports pagination; see into_paginator().

  • The fluent builder is configurable:
  • On success, responds with ListVocabulariesOutput with field(s):
    • status(Option<VocabularyState>):

      Lists all custom vocabularies that have the status specified in your request. Vocabularies are ordered by creation date, with the newest vocabulary first.

    • next_token(Option<String>):

      If NextToken is present in your response, it indicates that not all results are displayed. To view the next set of results, copy the string associated with the NextToken parameter in your results output, then run your request again including NextToken with the value of the copied string. Repeat as needed to view all your results.

    • vocabularies(Option<Vec<VocabularyInfo>>):

      Provides information about the custom vocabularies that match the criteria specified in your request.

  • On failure, responds with SdkError<ListVocabulariesError>

Constructs a fluent builder for the ListVocabularyFilters operation. This operation supports pagination; see into_paginator().

Constructs a fluent builder for the StartCallAnalyticsJob operation.

  • The fluent builder is configurable:
    • call_analytics_job_name(impl Into<String>) / set_call_analytics_job_name(Option<String>):

      A unique name, chosen by you, for your Call Analytics job.

      This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new job with the same name as an existing job, you get a ConflictException error.

    • media(Media) / set_media(Option<Media>):

      Describes the Amazon S3 location of the media file you want to use in your request.

    • output_location(impl Into<String>) / set_output_location(Option<String>):

      The Amazon S3 location where you want your Call Analytics transcription output stored. You can use any of the following formats to specify the output location:

      1. s3://DOC-EXAMPLE-BUCKET

      2. s3://DOC-EXAMPLE-BUCKET/my-output-folder/

      3. s3://DOC-EXAMPLE-BUCKET/my-output-folder/my-call-analytics-job.json

      Unless you specify a file name (option 3), the name of your output file has a default value that matches the name you specified for your transcription job using the CallAnalyticsJobName parameter.

      You can specify a KMS key to encrypt your output using the OutputEncryptionKMSKeyId parameter. If you don’t specify a KMS key, Amazon Transcribe uses the default Amazon S3 key for server-side encryption.

      If you don’t specify OutputLocation, your transcript is placed in a service-managed Amazon S3 bucket and you are provided with a URI to access your transcript.

    • output_encryption_kms_key_id(impl Into<String>) / set_output_encryption_kms_key_id(Option<String>):

      The KMS key you want to use to encrypt your Call Analytics output.

      If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways:

      1. Use the KMS key ID itself. For example, 1234abcd-12ab-34cd-56ef-1234567890ab.

      2. Use an alias for the KMS key ID. For example, alias/ExampleAlias.

      3. Use the Amazon Resource Name (ARN) for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

      4. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

      If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways:

      1. Use the ARN for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

      2. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

      If you don’t specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3).

      If you specify a KMS key to encrypt your output, you must also specify an output location using the OutputLocation parameter.

      Note that the user making the request must have permission to use the specified KMS key.

    • data_access_role_arn(impl Into<String>) / set_data_access_role_arn(Option<String>):

      The Amazon Resource Name (ARN) of an IAM role that has permissions to access the Amazon S3 bucket that contains your input files. If the role you specify doesn’t have the appropriate permissions to access the specified Amazon S3 location, your request fails.

      IAM role ARNs have the format arn:partition:iam::account:role/role-name-with-path. For example: arn:aws:iam::111122223333:role/Admin.

      For more information, see IAM ARNs.

    • settings(CallAnalyticsJobSettings) / set_settings(Option<CallAnalyticsJobSettings>):

      Specify additional optional settings in your request, including content redaction; allows you to apply custom language models, vocabulary filters, and custom vocabularies to your Call Analytics job.

    • channel_definitions(Vec<ChannelDefinition>) / set_channel_definitions(Option<Vec<ChannelDefinition>>):

      Allows you to specify which speaker is on which channel. For example, if your agent is the first participant to speak, you would set ChannelId to 0 (to indicate the first channel) and ParticipantRole to AGENT (to indicate that it’s the agent speaking).

  • On success, responds with StartCallAnalyticsJobOutput with field(s):
  • On failure, responds with SdkError<StartCallAnalyticsJobError>
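A sketch of a two-channel Call Analytics request (job name, media URI, and role ARN are placeholders). The fluent `channel_definitions` setter appends one definition per call:

```rust
use aws_sdk_transcribe::model::{ChannelDefinition, Media, ParticipantRole};

async fn start_job(client: &aws_sdk_transcribe::Client) -> Result<(), aws_sdk_transcribe::Error> {
    let media = Media::builder()
        .media_file_uri("s3://DOC-EXAMPLE-BUCKET/my-call.wav")
        .build();

    client
        .start_call_analytics_job()
        .call_analytics_job_name("my-call-analytics-job")
        .media(media)
        .data_access_role_arn("arn:aws:iam::111122223333:role/ExampleRole")
        // Channel 0 carries the agent, channel 1 the customer.
        .channel_definitions(
            ChannelDefinition::builder()
                .channel_id(0)
                .participant_role(ParticipantRole::Agent)
                .build(),
        )
        .channel_definitions(
            ChannelDefinition::builder()
                .channel_id(1)
                .participant_role(ParticipantRole::Customer)
                .build(),
        )
        .send()
        .await?;
    Ok(())
}
```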

Constructs a fluent builder for the StartMedicalTranscriptionJob operation.

  • The fluent builder is configurable:
    • medical_transcription_job_name(impl Into<String>) / set_medical_transcription_job_name(Option<String>):

      A unique name, chosen by you, for your medical transcription job. The name you specify is also used as the default name of your transcription output file. If you want to specify a different name for your transcription output, use the OutputKey parameter.

      This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new job with the same name as an existing job, you get a ConflictException error.

    • language_code(LanguageCode) / set_language_code(Option<LanguageCode>):

      The language code that represents the language spoken in the input media file. US English (en-US) is the only valid value for medical transcription jobs. Any other value you enter for language code results in a BadRequestException error.

    • media_sample_rate_hertz(i32) / set_media_sample_rate_hertz(Option<i32>):

      The sample rate, in Hertz, of the audio track in your input media file.

      If you don’t specify the media sample rate, Amazon Transcribe Medical determines it for you. If you specify the sample rate, it must match the rate detected by Amazon Transcribe Medical; if there’s a mismatch between the value you specify and the value detected, your job fails. Therefore, in most cases, it’s advised to omit MediaSampleRateHertz and let Amazon Transcribe Medical determine the sample rate.

    • media_format(MediaFormat) / set_media_format(Option<MediaFormat>):

      Specify the format of your input media file.

    • media(Media) / set_media(Option<Media>):

      Describes the Amazon S3 location of the media file you want to use in your request.

    • output_bucket_name(impl Into<String>) / set_output_bucket_name(Option<String>):

      The name of the Amazon S3 bucket where you want your medical transcription output stored. Do not include the S3:// prefix of the specified bucket.

      If you want your output to go to a sub-folder of this bucket, specify it using the OutputKey parameter; OutputBucketName only accepts the name of a bucket.

      For example, if you want your output stored in S3://DOC-EXAMPLE-BUCKET, set OutputBucketName to DOC-EXAMPLE-BUCKET. However, if you want your output stored in S3://DOC-EXAMPLE-BUCKET/test-files/, set OutputBucketName to DOC-EXAMPLE-BUCKET and OutputKey to test-files/.

      Note that Amazon Transcribe must have permission to use the specified location. You can change Amazon S3 permissions using the Amazon Web Services Management Console. See also Permissions Required for IAM User Roles.

      If you don’t specify OutputBucketName, your transcript is placed in a service-managed Amazon S3 bucket and you are provided with a URI to access your transcript.

    • output_key(impl Into<String>) / set_output_key(Option<String>):

      Use in combination with OutputBucketName to specify the output location of your transcript and, optionally, a unique name for your output file. The default name for your transcription output is the same as the name you specified for your medical transcription job (MedicalTranscriptionJobName).

      Here are some examples of how you can use OutputKey:

      • If you specify ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName and ‘my-transcript.json’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript.json.

      • If you specify ‘my-first-transcription’ as the MedicalTranscriptionJobName, ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName, and ‘my-transcript’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json.

      • If you specify ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName and ‘test-files/my-transcript.json’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json.

      • If you specify ‘my-first-transcription’ as the MedicalTranscriptionJobName, ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName, and ‘test-files/my-transcript’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json.

      If you specify the name of an Amazon S3 bucket sub-folder that doesn’t exist, one is created for you.

    • output_encryption_kms_key_id(impl Into<String>) / set_output_encryption_kms_key_id(Option<String>):

      The KMS key you want to use to encrypt your medical transcription output.

      If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways:

      1. Use the KMS key ID itself. For example, 1234abcd-12ab-34cd-56ef-1234567890ab.

      2. Use an alias for the KMS key ID. For example, alias/ExampleAlias.

      3. Use the Amazon Resource Name (ARN) for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

      4. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

      If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways:

      1. Use the ARN for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

      2. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

      If you don’t specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3).

      If you specify a KMS key to encrypt your output, you must also specify an output location using the OutputLocation parameter.

      Note that the user making the request must have permission to use the specified KMS key.

    • kms_encryption_context(HashMap<String, String>) / set_kms_encryption_context(Option<HashMap<String, String>>):

      A map of plain text, non-secret key:value pairs, known as encryption context pairs, that provide an added layer of security for your data. For more information, see KMS encryption context and Asymmetric keys in KMS.

    • settings(MedicalTranscriptionSetting) / set_settings(Option<MedicalTranscriptionSetting>):

      Specify additional optional settings in your request, including channel identification, alternative transcriptions, and speaker labeling; allows you to apply custom vocabularies to your transcription job.

    • content_identification_type(MedicalContentIdentificationType) / set_content_identification_type(Option<MedicalContentIdentificationType>):

      Labels all personal health information (PHI) identified in your transcript. For more information, see Identifying personal health information (PHI) in a transcription.

    • specialty(Specialty) / set_specialty(Option<Specialty>):

      Specify the predominant medical specialty represented in your media. For batch transcriptions, PRIMARYCARE is the only valid value. If you require additional specialties, refer to the Amazon Transcribe Medical documentation.

    • r#type(Type) / set_type(Option<Type>):

      Specify whether your input media contains only one person (DICTATION) or contains a conversation between two people (CONVERSATION).

      For example, DICTATION could be used for a medical professional wanting to transcribe voice memos; CONVERSATION could be used for transcribing the doctor-patient dialogue during the patient’s office visit.

    • tags(Vec<Tag>) / set_tags(Option<Vec<Tag>>):

      Adds one or more custom tags, each in the form of a key:value pair, to a new medical transcription job at the time you start this new job.

      To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

  • On success, responds with StartMedicalTranscriptionJobOutput with field(s):
  • On failure, responds with SdkError<StartMedicalTranscriptionJobError>
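A sketch of a dictation-style request (job name, media URI, and bucket are placeholders); note en-US is the only valid language code and PRIMARYCARE the only batch specialty:

```rust
use aws_sdk_transcribe::model::{LanguageCode, Media, Specialty, Type};

async fn start_medical(client: &aws_sdk_transcribe::Client) -> Result<(), aws_sdk_transcribe::Error> {
    client
        .start_medical_transcription_job()
        .medical_transcription_job_name("my-medical-job")
        .language_code(LanguageCode::EnUs)
        .media(
            Media::builder()
                .media_file_uri("s3://DOC-EXAMPLE-BUCKET/my-dictation.wav")
                .build(),
        )
        // Bucket name only: no s3:// prefix, no sub-folder (use output_key for that).
        .output_bucket_name("DOC-EXAMPLE-BUCKET")
        .specialty(Specialty::Primarycare)
        // Type is a Rust keyword, hence the raw identifier r#type in the builder.
        .r#type(Type::Dictation)
        .send()
        .await?;
    Ok(())
}
```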

Constructs a fluent builder for the StartTranscriptionJob operation.

  • The fluent builder is configurable:
    • transcription_job_name(impl Into<String>) / set_transcription_job_name(Option<String>):

      A unique name, chosen by you, for your transcription job. The name you specify is also used as the default name of your transcription output file. If you want to specify a different name for your transcription output, use the OutputKey parameter.

      This name is case sensitive, cannot contain spaces, and must be unique within an Amazon Web Services account. If you try to create a new job with the same name as an existing job, you get a ConflictException error.

    • language_code(LanguageCode) / set_language_code(Option<LanguageCode>):

      The language code that represents the language spoken in the input media file.

      If you’re unsure of the language spoken in your media file, consider using IdentifyLanguage or IdentifyMultipleLanguages to enable automatic language identification.

      Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.

      For a list of supported languages and their associated language codes, refer to the Supported languages table.

      To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.

    • media_sample_rate_hertz(i32) / set_media_sample_rate_hertz(Option<i32>):

      The sample rate, in Hertz, of the audio track in your input media file.

      If you don’t specify the media sample rate, Amazon Transcribe determines it for you. If you specify the sample rate, it must match the rate detected by Amazon Transcribe; if there’s a mismatch between the value you specify and the value detected, your job fails. Therefore, in most cases, it’s advised to omit MediaSampleRateHertz and let Amazon Transcribe determine the sample rate.

    • media_format(MediaFormat) / set_media_format(Option<MediaFormat>):

      Specify the format of your input media file.

    • media(Media) / set_media(Option<Media>):

      Describes the Amazon S3 location of the media file you want to use in your request.

    • output_bucket_name(impl Into<String>) / set_output_bucket_name(Option<String>):

      The name of the Amazon S3 bucket where you want your transcription output stored. Do not include the S3:// prefix of the specified bucket.

      If you want your output to go to a sub-folder of this bucket, specify it using the OutputKey parameter; OutputBucketName only accepts the name of a bucket.

      For example, if you want your output stored in S3://DOC-EXAMPLE-BUCKET, set OutputBucketName to DOC-EXAMPLE-BUCKET. However, if you want your output stored in S3://DOC-EXAMPLE-BUCKET/test-files/, set OutputBucketName to DOC-EXAMPLE-BUCKET and OutputKey to test-files/.

      Note that Amazon Transcribe must have permission to use the specified location. You can change Amazon S3 permissions using the Amazon Web Services Management Console. See also Permissions Required for IAM User Roles.

      If you don’t specify OutputBucketName, your transcript is placed in a service-managed Amazon S3 bucket and you are provided with a URI to access your transcript.

    • output_key(impl Into<String>) / set_output_key(Option<String>):

      Use in combination with OutputBucketName to specify the output location of your transcript and, optionally, a unique name for your output file. The default name for your transcription output is the same as the name you specified for your transcription job (TranscriptionJobName).

      Here are some examples of how you can use OutputKey:

      • If you specify ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName and ‘my-transcript.json’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript.json.

      • If you specify ‘my-first-transcription’ as the TranscriptionJobName, ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName, and ‘my-transcript’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json.

      • If you specify ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName and ‘test-files/my-transcript.json’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json.

      • If you specify ‘my-first-transcription’ as the TranscriptionJobName, ‘DOC-EXAMPLE-BUCKET’ as the OutputBucketName, and ‘test-files/my-transcript’ as the OutputKey, your transcription output path is s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json.

      If you specify the name of an Amazon S3 bucket sub-folder that doesn’t exist, one is created for you.

    • output_encryption_kms_key_id(impl Into<String>) / set_output_encryption_kms_key_id(Option<String>):

      The KMS key you want to use to encrypt your transcription output.

      If using a key located in the current Amazon Web Services account, you can specify your KMS key in one of four ways:

      1. Use the KMS key ID itself. For example, 1234abcd-12ab-34cd-56ef-1234567890ab.

      2. Use an alias for the KMS key ID. For example, alias/ExampleAlias.

      3. Use the Amazon Resource Name (ARN) for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

      4. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

      If using a key located in a different Amazon Web Services account than the current Amazon Web Services account, you can specify your KMS key in one of two ways:

      1. Use the ARN for the KMS key ID. For example, arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab.

      2. Use the ARN for the KMS key alias. For example, arn:aws:kms:region:account-ID:alias/ExampleAlias.

      If you don’t specify an encryption key, your output is encrypted with the default Amazon S3 key (SSE-S3).

      If you specify a KMS key to encrypt your output, you must also specify an output location using the OutputLocation parameter.

      Note that the user making the request must have permission to use the specified KMS key.

    • kms_encryption_context(HashMap<String, String>) / set_kms_encryption_context(Option<HashMap<String, String>>):

      A map of plain text, non-secret key:value pairs, known as encryption context pairs, that provide an added layer of security for your data. For more information, see KMS encryption context and Asymmetric keys in KMS.

    • settings(Settings) / set_settings(Option<Settings>):

      Specify additional optional settings in your request, including channel identification, alternative transcriptions, and speaker labeling; Settings also allows you to apply custom vocabularies and vocabulary filters.

      If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use Settings with the VocabularyName or VocabularyFilterName (or both) sub-parameter.

      If you’re using automatic language identification with your request and want to include a custom language model, a custom vocabulary, or a custom vocabulary filter, use instead the LanguageIdSettings parameter with the LanguageModelName, VocabularyName, or VocabularyFilterName sub-parameters.

    • model_settings(ModelSettings) / set_model_settings(Option<ModelSettings>):

      Specify the custom language model you want to include with your transcription job. If you include ModelSettings in your request, you must include the LanguageModelName sub-parameter.

      For more information, see Custom language models.

    • job_execution_settings(JobExecutionSettings) / set_job_execution_settings(Option<JobExecutionSettings>):

      Allows you to control how your transcription job is processed. Currently, the only JobExecutionSettings modification you can choose is enabling job queueing using the AllowDeferredExecution sub-parameter.

      If you include JobExecutionSettings in your request, you must also include the sub-parameters: AllowDeferredExecution and DataAccessRoleArn.

    • content_redaction(ContentRedaction) / set_content_redaction(Option<ContentRedaction>):

      Allows you to redact or flag specified personally identifiable information (PII) in your transcript. If you use ContentRedaction, you must also include the sub-parameters: PiiEntityTypes, RedactionOutput, and RedactionType.

    • identify_language(bool) / set_identify_language(Option<bool>):

      Enables automatic language identification in your transcription job request.

      If you include IdentifyLanguage, you can optionally include a list of language codes, using LanguageOptions, that you think may be present in your media file. Including language options can improve transcription accuracy.

      If you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter to your automatic language identification request, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).

      Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.

    • identify_multiple_languages(bool) / set_identify_multiple_languages(Option<bool>):

      Enables automatic multi-language identification in your transcription job request. Use this parameter if your media file contains more than one language.

      If you include IdentifyMultipleLanguages, you can optionally include a list of language codes, using LanguageOptions, that you think may be present in your media file. Including language options can improve transcription accuracy.

      If you want to apply a custom vocabulary or a custom vocabulary filter to your automatic language identification request, include LanguageIdSettings with the relevant sub-parameters (VocabularyName and VocabularyFilterName).

      Note that you must include one of LanguageCode, IdentifyLanguage, or IdentifyMultipleLanguages in your request. If you include more than one of these parameters, your transcription job fails.

    • language_options(Vec<LanguageCode>) / set_language_options(Option<Vec<LanguageCode>>):

      You can specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. If you’re unsure what languages are present, do not include this parameter.

      If you include LanguageOptions in your request, you must also include IdentifyLanguage.

      For more information, refer to Supported languages.

      To transcribe speech in Modern Standard Arabic (ar-SA), your media file must be encoded at a sample rate of 16,000 Hz or higher.

    • subtitles(Subtitles) / set_subtitles(Option<Subtitles>):

      Produces subtitle files for your input media. You can specify WebVTT (.vtt) and SubRip (.srt) formats.

    • tags(Vec<Tag>) / set_tags(Option<Vec<Tag>>):

      Adds one or more custom tags, each in the form of a key:value pair, to a new transcription job at the time you start this new job.

      To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

    • language_id_settings(HashMap<LanguageCode, LanguageIdSettings>) / set_language_id_settings(Option<HashMap<LanguageCode, LanguageIdSettings>>):

      If using automatic language identification (IdentifyLanguage) in your request and you want to apply a custom language model, a custom vocabulary, or a custom vocabulary filter, include LanguageIdSettings with the relevant sub-parameters (VocabularyName, LanguageModelName, and VocabularyFilterName).

      You can specify two or more language codes that represent the languages you think may be present in your media; including more than five is not recommended. Each language code you include can have an associated custom language model, custom vocabulary, and custom vocabulary filter. The languages you specify must match the languages of the specified custom language models, custom vocabularies, and custom vocabulary filters.

      To include language options using IdentifyLanguage without including a custom language model, a custom vocabulary, or a custom vocabulary filter, use LanguageOptions instead of LanguageIdSettings. Including language options can improve the accuracy of automatic language identification.

      If you want to include a custom language model with your request but do not want to use automatic language identification, use instead the ModelSettings parameter with the LanguageModelName sub-parameter.

      If you want to include a custom vocabulary or a custom vocabulary filter (or both) with your request but do not want to use automatic language identification, use instead the Settings parameter with the VocabularyName or VocabularyFilterName (or both) sub-parameter.

  • On success, responds with StartTranscriptionJobOutput with field(s):
  • On failure, responds with SdkError<StartTranscriptionJobError>
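
The OutputBucketName / OutputKey / TranscriptionJobName resolution rules described above can be sketched as a small pure-Rust helper. This is a hypothetical illustration of the documented behavior, not part of the SDK; the function name and the heuristic of treating a key without a `.json` extension as a sub-folder are assumptions drawn from the four examples.

```rust
// Hypothetical helper illustrating the documented output-location rules for
// StartTranscriptionJob; not part of aws-sdk-transcribe.
fn resolve_output_uri(bucket: &str, key: Option<&str>, job_name: &str) -> String {
    match key {
        // No OutputKey: the output file is named after the transcription job.
        None => format!("s3://{bucket}/{job_name}.json"),
        // An OutputKey without a .json extension acts as a sub-folder prefix,
        // and the job name supplies the file name (examples 2 and 4 above).
        Some(k) if !k.ends_with(".json") => {
            format!("s3://{bucket}/{}/{job_name}.json", k.trim_end_matches('/'))
        }
        // A full OutputKey names the output file directly (examples 1 and 3).
        Some(k) => format!("s3://{bucket}/{k}"),
    }
}

fn main() {
    // The four examples from the documentation above:
    assert_eq!(
        resolve_output_uri("DOC-EXAMPLE-BUCKET", Some("my-transcript.json"), "my-first-transcription"),
        "s3://DOC-EXAMPLE-BUCKET/my-transcript.json"
    );
    assert_eq!(
        resolve_output_uri("DOC-EXAMPLE-BUCKET", Some("my-transcript"), "my-first-transcription"),
        "s3://DOC-EXAMPLE-BUCKET/my-transcript/my-first-transcription.json"
    );
    assert_eq!(
        resolve_output_uri("DOC-EXAMPLE-BUCKET", Some("test-files/my-transcript.json"), "my-first-transcription"),
        "s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript.json"
    );
    assert_eq!(
        resolve_output_uri("DOC-EXAMPLE-BUCKET", Some("test-files/my-transcript"), "my-first-transcription"),
        "s3://DOC-EXAMPLE-BUCKET/test-files/my-transcript/my-first-transcription.json"
    );
    println!("all output-path examples resolved as documented");
}
```

Omitting OutputKey entirely places the job-named file at the bucket root, matching the default described for OutputKey.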

Constructs a fluent builder for the TagResource operation.

  • The fluent builder is configurable:
    • resource_arn(impl Into<String>) / set_resource_arn(Option<String>):

      The Amazon Resource Name (ARN) of the resource you want to tag. ARNs have the format arn:partition:service:region:account-id:resource-type/resource-id.

      For example, arn:aws:transcribe:us-west-2:account-id:transcription-job/transcription-job-name.

      Valid values for resource-type are: transcription-job, medical-transcription-job, vocabulary, medical-vocabulary, vocabulary-filter, and language-model.

    • tags(Vec<Tag>) / set_tags(Option<Vec<Tag>>):

      Adds one or more custom tags, each in the form of a key:value pair, to the specified resource.

      To learn more about using tags with Amazon Transcribe, refer to Tagging resources.

  • On success, responds with TagResourceOutput
  • On failure, responds with SdkError<TagResourceError>
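
The ARN shape required by TagResource can be sketched as a formatting helper. This is a hypothetical illustration of the documented `arn:partition:service:region:account-id:resource-type/resource-id` layout, not an SDK API; the function name and the example account ID are assumptions.

```rust
// Hypothetical helper building the documented ARN format for Amazon
// Transcribe resources; not part of aws-sdk-transcribe.
fn transcribe_arn(region: &str, account_id: &str, resource_type: &str, resource_id: &str) -> String {
    // Partition is fixed to "aws" here for illustration; other partitions
    // (e.g. aws-cn) exist but are outside this sketch.
    format!("arn:aws:transcribe:{region}:{account_id}:{resource_type}/{resource_id}")
}

fn main() {
    // Mirrors the example ARN in the documentation above, with a
    // placeholder account ID.
    let arn = transcribe_arn("us-west-2", "111122223333", "transcription-job", "my-job");
    assert_eq!(arn, "arn:aws:transcribe:us-west-2:111122223333:transcription-job/my-job");
    println!("{arn}");
}
```

The same helper covers the other valid resource types (medical-transcription-job, vocabulary, medical-vocabulary, vocabulary-filter, language-model) by swapping the resource-type segment.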

Constructs a fluent builder for the UntagResource operation.

Constructs a fluent builder for the UpdateCallAnalyticsCategory operation.

Constructs a fluent builder for the UpdateMedicalVocabulary operation.

Constructs a fluent builder for the UpdateVocabulary operation.

  • The fluent builder is configurable:
    • vocabulary_name(impl Into<String>) / set_vocabulary_name(Option<String>):

      The name of the custom vocabulary you want to update. Vocabulary names are case sensitive.

    • language_code(LanguageCode) / set_language_code(Option<LanguageCode>):

      The language code that represents the language of the entries in the custom vocabulary you want to update. Each vocabulary must contain terms in only one language.

      A custom vocabulary can only be used to transcribe files in the same language as the vocabulary. For example, if you create a vocabulary using US English (en-US), you can only apply this vocabulary to files that contain English audio.

      For a list of supported languages and their associated language codes, refer to the Supported languages table.

    • phrases(Vec<String>) / set_phrases(Option<Vec<String>>):

      Use this parameter if you want to update your vocabulary by including all desired terms, as comma-separated values, within your request. The other option for updating your vocabulary is to save your entries in a text file and upload them to an Amazon S3 bucket, then specify the location of your file using the VocabularyFileUri parameter.

      Note that if you include Phrases in your request, you cannot use VocabularyFileUri; you must choose one or the other.

      Each language has a character set that contains all allowed characters for that specific language. If you use unsupported characters, your vocabulary request fails. Refer to Character Sets for Custom Vocabularies to get the character set for your language.

    • vocabulary_file_uri(impl Into<String>) / set_vocabulary_file_uri(Option<String>):

      The Amazon S3 location of the text file that contains your custom vocabulary. The URI must be located in the same Amazon Web Services Region as the resource you’re calling.

      Here’s an example URI path: s3://DOC-EXAMPLE-BUCKET/my-vocab-file.txt

      Note that if you include VocabularyFileUri in your request, you cannot use the Phrases parameter; you must choose one or the other.

  • On success, responds with UpdateVocabularyOutput with field(s):
  • On failure, responds with SdkError<UpdateVocabularyError>

Constructs a fluent builder for the UpdateVocabularyFilter operation.

Creates a client with the given service config and connector override.

Creates a new client from a shared config.

Creates a new client from the service Config.
