Struct gcp_client::google::cloud::dialogflow::v2::StreamingDetectIntentRequest

pub struct StreamingDetectIntentRequest {
    pub session: String,
    pub query_params: Option<QueryParameters>,
    pub query_input: Option<QueryInput>,
    pub single_utterance: bool,
    pub output_audio_config: Option<OutputAudioConfig>,
    pub output_audio_config_mask: Option<FieldMask>,
    pub input_audio: Vec<u8>,
}

The top-level message sent by the client to the [Sessions.StreamingDetectIntent][google.cloud.dialogflow.v2.Sessions.StreamingDetectIntent] method.

Multiple request messages should be sent in order:

  1. The first message must contain [session][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.session] and [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input], and may optionally contain [query_params][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].

  2. If [query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to [query_input.audio_config][google.cloud.dialogflow.v2.InputAudioConfig], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If you decide instead to detect an intent from text input after you have already started Speech recognition, send a message with [query_input.text][google.cloud.dialogflow.v2.QueryInput.text].

    However, note that:

    • Dialogflow will bill you for the audio duration so far.
    • Dialogflow discards all Speech recognition results in favor of the input text.
    • Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
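For orientation, here is a minimal sketch of that message sequence in Rust. It assumes the prost-generated oneof layout (query_input::Input) and a 16 kHz LINEAR16 audio config; the session name, enum value, and audio bytes are illustrative, not prescriptive.

use gcp_client::google::cloud::dialogflow::v2::{
    query_input, InputAudioConfig, QueryInput, StreamingDetectIntentRequest,
};

// 1. First message: session and query_input are set; input_audio stays empty.
let first = StreamingDetectIntentRequest {
    session: "projects/my-project/agent/sessions/my-session".to_string(),
    query_input: Some(QueryInput {
        input: Some(query_input::Input::AudioConfig(InputAudioConfig {
            audio_encoding: 1, // assumed: AudioEncoding::Linear16 as i32
            sample_rate_hertz: 16_000,
            language_code: "en-US".to_string(),
            ..Default::default()
        })),
    }),
    ..Default::default()
};

// 2. Every subsequent message carries only audio bytes.
let chunk: Vec<u8> = vec![0; 3_200]; // stand-in for ~100 ms of 16-bit, 16 kHz audio
let next = StreamingDetectIntentRequest {
    input_audio: chunk,
    ..Default::default()
};

// After the last chunk, half-close the stream (e.g. drop the request sender).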

Fields

session: String

Required. The name of the session the query is sent to. Format of the session name: projects/<Project ID>/agent/sessions/<Session ID>, or projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>. If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters.
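As a sketch, a hypothetical helper (the function is ours, not part of this crate) that assembles a session name in the default 'draft' environment format:

/// Builds projects/<Project ID>/agent/sessions/<Session ID>.
/// Hypothetical helper for illustration only.
fn session_name(project_id: &str, session_id: &str) -> String {
    format!("projects/{}/agent/sessions/{}", project_id, session_id)
}

// A UUID is a convenient Session ID: 36 characters, the documented maximum.
let session = session_name("my-project", "6ba7b810-9dad-11d1-80b4-00c04fd430c8");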

query_params: Option<QueryParameters>

The parameters of this query.

query_input: Option<QueryInput>

Required. The input specification. It can be set to one of the following (a text example follows the list):

  1. an audio config which instructs the speech recognizer how to process the speech audio,

  2. a conversational query in the form of text, or

  3. an event that specifies which intent to trigger.
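For example, the text form under the same assumed oneof layout (the utterance and language code are illustrative):

use gcp_client::google::cloud::dialogflow::v2::{query_input, QueryInput, TextInput};

// A conversational query in the form of text.
let text_query = QueryInput {
    input: Some(query_input::Input::Text(TextInput {
        text: "book a table for two".to_string(),
        language_code: "en-US".to_string(),
    })),
};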

single_utterance: bool

Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance] instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.
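Per the note above, the per-config flag is preferred; a minimal sketch, assuming InputAudioConfig exposes the field under the same name:

use gcp_client::google::cloud::dialogflow::v2::InputAudioConfig;

// Prefer setting single_utterance on the audio config rather than on the request.
let config = InputAudioConfig {
    single_utterance: true,
    ..Default::default()
};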

output_audio_config: Option<OutputAudioConfig>

Instructs the speech synthesizer how to generate the output audio. If this field is not set and the agent-level speech synthesizer is not configured, no output audio is generated.

output_audio_config_mask: Option<FieldMask>

Mask for [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config] indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.

If unspecified or empty, [output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config] replaces the agent-level config in its entirety.
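A sketch of a partial override, assuming FieldMask is the re-exported google::protobuf::FieldMask and that the path below names a valid synthesizer setting (the path is illustrative):

use gcp_client::google::protobuf::FieldMask;

// Override only the listed setting; all other synthesizer settings keep
// their agent-level values. An empty or absent mask would instead replace
// the agent-level config entirely.
let mask = FieldMask {
    paths: vec!["synthesize_speech_config.speaking_rate".to_string()],
};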

input_audio: Vec<u8>

The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.

Trait Implementations

impl Clone for StreamingDetectIntentRequest[src]

impl Debug for StreamingDetectIntentRequest[src]

impl Default for StreamingDetectIntentRequest[src]

impl Message for StreamingDetectIntentRequest[src]

impl PartialEq<StreamingDetectIntentRequest> for StreamingDetectIntentRequest[src]

impl StructuralPartialEq for StreamingDetectIntentRequest[src]

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T> Instrument for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T> IntoRequest<T> for T[src]

impl<T> ToOwned for T where
    T: Clone
[src]

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<V, T> VZip<V> for T where
    V: MultiLane<T>, 

impl<T> WithSubscriber for T[src]