aws_sdk_lexruntimev2/client/recognize_utterance.rs

// Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
impl super::Client {
    /// Constructs a fluent builder for the [`RecognizeUtterance`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder) operation.
    ///
    /// - The fluent builder is configurable:
    ///   - [`bot_id(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::bot_id) / [`set_bot_id(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_bot_id):<br>required: **true**<br><p>The identifier of the bot that should receive the request.</p><br>
    ///   - [`bot_alias_id(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::bot_alias_id) / [`set_bot_alias_id(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_bot_alias_id):<br>required: **true**<br><p>The alias identifier in use for the bot that should receive the request.</p><br>
    ///   - [`locale_id(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::locale_id) / [`set_locale_id(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_locale_id):<br>required: **true**<br><p>The locale where the session is in use.</p><br>
    ///   - [`session_id(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::session_id) / [`set_session_id(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_session_id):<br>required: **true**<br><p>The identifier of the session in use.</p><br>
    ///   - [`session_state(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::session_state) / [`set_session_state(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_session_state):<br>required: **false**<br><p>Sets the state of the session with the user. You can use this to set the current intent, attributes, context, and dialog action. Use the dialog action to determine the next step that Amazon Lex V2 should use in the conversation with the user.</p> <p>The <code>sessionState</code> field must be compressed using gzip and then base64 encoded before sending to Amazon Lex V2. See the encoding example below.</p><br>
    ///   - [`request_attributes(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::request_attributes) / [`set_request_attributes(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_request_attributes):<br>required: **false**<br><p>Request-specific information passed between the client application and Amazon Lex V2.</p> <p>The namespace <code>x-amz-lex:</code> is reserved for special attributes. Don't create any request attributes with the prefix <code>x-amz-lex:</code>.</p> <p>The <code>requestAttributes</code> field must be compressed using gzip and then base64 encoded before sending to Amazon Lex V2.</p><br>
    ///   - [`request_content_type(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::request_content_type) / [`set_request_content_type(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_request_content_type):<br>required: **true**<br><p>Indicates the format for audio input or that the content is text. The header must start with one of the following prefixes:</p> <ul>  <li>   <p>PCM format, audio data must be in little-endian byte order.</p>   <ul>    <li>     <p>audio/l16; rate=16000; channels=1</p></li>    <li>     <p>audio/x-l16; sample-rate=16000; channel-count=1</p></li>    <li>     <p>audio/lpcm; sample-rate=8000; sample-size-bits=16; channel-count=1; is-big-endian=false</p></li>   </ul></li>  <li>   <p>Opus format</p>   <ul>    <li>     <p>audio/x-cbr-opus-with-preamble;preamble-size=0;bit-rate=256000;frame-size-milliseconds=4</p></li>   </ul></li>  <li>   <p>Text format</p>   <ul>    <li>     <p>text/plain; charset=utf-8</p></li>   </ul></li> </ul><br>
    ///   - [`response_content_type(impl Into<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::response_content_type) / [`set_response_content_type(Option<String>)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_response_content_type):<br>required: **false**<br><p>The message that Amazon Lex V2 returns in the response can be either text or speech based on the <code>responseContentType</code> value.</p> <ul>  <li>   <p>If the value is <code>text/plain;charset=utf-8</code>, Amazon Lex V2 returns text in the response.</p></li>  <li>   <p>If the value begins with <code>audio/</code>, Amazon Lex V2 returns speech in the response. Amazon Lex V2 uses Amazon Polly to generate the speech using the configuration that you specified in the <code>responseContentType</code> parameter. For example, if you specify <code>audio/mpeg</code> as the value, Amazon Lex V2 returns speech in the MPEG format.</p></li>  <li>   <p>If the value is <code>audio/pcm</code>, the speech returned is <code>audio/pcm</code> at 16 kHz in 16-bit, little-endian format.</p></li>  <li>   <p>The following are the accepted values:</p>   <ul>    <li>     <p>audio/mpeg</p></li>    <li>     <p>audio/ogg</p></li>    <li>     <p>audio/pcm (16 kHz)</p></li>    <li>     <p>audio/* (defaults to mpeg)</p></li>    <li>     <p>text/plain; charset=utf-8</p></li>   </ul></li> </ul><br>
    ///   - [`input_stream(ByteStream)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::input_stream) / [`set_input_stream(ByteStream)`](crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::set_input_stream):<br>required: **false**<br><p>User input in PCM or Opus audio format or text format as described in the <code>requestContentType</code> parameter.</p><br>
    /// - On success, responds with [`RecognizeUtteranceOutput`](crate::operation::recognize_utterance::RecognizeUtteranceOutput) with field(s):
    ///   - [`input_mode(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::input_mode): <p>Indicates whether the input mode to the operation was text, speech, or from a touch-tone keypad.</p>
    ///   - [`content_type(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::content_type): <p>Content type as specified in the <code>responseContentType</code> in the request.</p>
    ///   - [`messages(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::messages): <p>A list of messages that were last sent to the user. The messages are ordered based on the order that you returned the messages from your Lambda function or the order that the messages are defined in the bot.</p> <p>The <code>messages</code> field is compressed with gzip and then base64 encoded. Before you can use the contents of the field, you must decode and decompress the contents. See the example for a simple function to decode and decompress the contents.</p>
    ///   - [`interpretations(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::interpretations): <p>A list of intents that Amazon Lex V2 determined might satisfy the user's utterance.</p> <p>Each interpretation includes the intent, a score that indicates how confident Amazon Lex V2 is that the interpretation is the correct one, and an optional sentiment response that indicates the sentiment expressed in the utterance.</p> <p>The <code>interpretations</code> field is compressed with gzip and then base64 encoded. Before you can use the contents of the field, you must decode and decompress the contents. See the example for a simple function to decode and decompress the contents.</p>
    ///   - [`session_state(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::session_state): <p>Represents the current state of the dialog between the user and the bot.</p> <p>Use this to determine the progress of the conversation and what the next action might be.</p> <p>The <code>sessionState</code> field is compressed with gzip and then base64 encoded. Before you can use the contents of the field, you must decode and decompress the contents. See the example for a simple function to decode and decompress the contents.</p>
    ///   - [`request_attributes(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::request_attributes): <p>The attributes sent in the request.</p> <p>The <code>requestAttributes</code> field is compressed with gzip and then base64 encoded. Before you can use the contents of the field, you must decode and decompress the contents.</p>
    ///   - [`session_id(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::session_id): <p>The identifier of the session in use.</p>
    ///   - [`input_transcript(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::input_transcript): <p>The text used to process the request.</p> <p>If the input was an audio stream, the <code>inputTranscript</code> field contains the text extracted from the audio stream. This is the text that is actually processed to recognize intents and slot values. You can use this information to determine if Amazon Lex V2 is correctly processing the audio that you send.</p> <p>The <code>inputTranscript</code> field is compressed with gzip and then base64 encoded. Before you can use the contents of the field, you must decode and decompress the contents. See the example for a simple function to decode and decompress the contents.</p>
    ///   - [`audio_stream(ByteStream)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::audio_stream): <p>The prompt or statement to send to the user. This is based on the bot configuration and context. For example, if Amazon Lex V2 did not understand the user intent, it sends the <code>clarificationPrompt</code> configured for the bot. If the intent requires confirmation before taking the fulfillment action, it sends the <code>confirmationPrompt</code>. In another example, suppose that the Lambda function successfully fulfilled the intent and sent a message to convey to the user; Amazon Lex V2 then sends that message in the response.</p>
    ///   - [`recognized_bot_member(Option<String>)`](crate::operation::recognize_utterance::RecognizeUtteranceOutput::recognized_bot_member): <p>The bot member that recognized the utterance.</p>
    /// - On failure, responds with [`SdkError<RecognizeUtteranceError>`](crate::operation::recognize_utterance::RecognizeUtteranceError)
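    ///
    /// # Example
    ///
    /// A minimal sketch of a text request. The bot, alias, and session identifiers,
    /// the input text, and the error-handling context are placeholders, not values
    /// produced by the generated API:
    ///
    /// ```no_run
    /// # async fn example(client: &aws_sdk_lexruntimev2::Client) -> Result<(), Box<dyn std::error::Error>> {
    /// use aws_sdk_lexruntimev2::primitives::ByteStream;
    ///
    /// let output = client
    ///     .recognize_utterance()
    ///     .bot_id("BOT_ID")               // hypothetical identifiers
    ///     .bot_alias_id("BOT_ALIAS_ID")
    ///     .locale_id("en_US")
    ///     .session_id("SESSION_ID")
    ///     .request_content_type("text/plain; charset=utf-8")
    ///     .input_stream(ByteStream::from_static(b"I would like to order flowers"))
    ///     .send()
    ///     .await?;
    /// # Ok(()) }
    /// ```
    ///
    /// The `session_state` and `request_attributes` request fields must be gzipped and
    /// base64 encoded before sending, and several response fields (`messages`,
    /// `interpretations`, `session_state`, `input_transcript`) arrive in the same form.
    /// One possible pair of helpers, assuming the `flate2` and `base64` crates (an
    /// illustrative sketch, not part of this SDK):
    ///
    /// ```no_run
    /// use std::io::{Read, Write};
    ///
    /// use base64::{engine::general_purpose::STANDARD, Engine as _};
    /// use flate2::{read::GzDecoder, write::GzEncoder, Compression};
    ///
    /// /// Gzip-compress and base64-encode a JSON request field.
    /// fn encode_field(json: &str) -> std::io::Result<String> {
    ///     let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
    ///     encoder.write_all(json.as_bytes())?;
    ///     Ok(STANDARD.encode(encoder.finish()?))
    /// }
    ///
    /// /// Base64-decode and decompress a gzipped response field.
    /// fn decode_field(encoded: &str) -> Result<String, Box<dyn std::error::Error>> {
    ///     let compressed = STANDARD.decode(encoded)?;
    ///     let mut json = String::new();
    ///     GzDecoder::new(compressed.as_slice()).read_to_string(&mut json)?;
    ///     Ok(json)
    /// }
    /// ```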
    pub fn recognize_utterance(&self) -> crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder {
        crate::operation::recognize_utterance::builders::RecognizeUtteranceFluentBuilder::new(self.handle.clone())
    }
}