Struct gcp_client::google::cloud::dialogflow::v2beta1::StreamingDetectIntentRequest
The top-level message sent by the client to the [Sessions.StreamingDetectIntent][google.cloud.dialogflow.v2beta1.Sessions.StreamingDetectIntent] method.
Multiple request messages should be sent in order:

- The first message must contain [session][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.session] and [query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input], plus optionally [query_params][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio].
- If [query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input] was set to [query_input.audio_config][google.cloud.dialogflow.v2beta1.InputAudioConfig], all subsequent messages must contain [input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If, after you have already started Speech recognition, you decide to detect an intent from text input instead, send a message with [query_input.text][google.cloud.dialogflow.v2beta1.QueryInput.text]. However, note that:
  - Dialogflow will bill you for the audio duration so far.
  - Dialogflow discards all Speech recognition results in favor of the input text.
  - Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream. A minimal sketch of this message sequence follows.
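As an illustration of that ordering, here is a sketch built from the fields documented below. The helper name, the QueryInput value, and the chunk source are assumptions for illustration, not part of this API:

```rust
use gcp_client::google::cloud::dialogflow::v2beta1::{
    QueryInput, StreamingDetectIntentRequest,
};

/// Hypothetical helper: the first message carries session and query_input
/// (never input_audio); every later message carries input_audio only.
fn build_messages(
    session: String,
    query_input: QueryInput,
    audio_chunks: Vec<Vec<u8>>,
) -> Vec<StreamingDetectIntentRequest> {
    let mut messages = vec![StreamingDetectIntentRequest {
        session,
        query_input: Some(query_input),
        ..Default::default()
    }];
    for chunk in audio_chunks {
        messages.push(StreamingDetectIntentRequest {
            input_audio: chunk,
            ..Default::default()
        });
    }
    messages
}
```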
Fields
session: String
Required. The name of the session the query is sent to. Format of the session name:

projects/<Project ID>/agent/sessions/<Session ID>, or
projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>.

If Environment ID is not specified, we assume the default 'draft' environment. If User ID is not specified, we use "-". It's up to the API caller to choose an appropriate Session ID and User ID. They can be a random number or some type of user and session identifiers (preferably hashed). The length of the Session ID and User ID must not exceed 36 characters.
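As a sketch, a caller might build the short-form session name like this (the function and argument names are hypothetical):

```rust
/// Builds the short-form session name and enforces the 36-character limit
/// on the caller-chosen Session ID.
fn session_name(project_id: &str, session_id: &str) -> String {
    assert!(
        session_id.len() <= 36,
        "Session ID must not exceed 36 characters"
    );
    format!("projects/{}/agent/sessions/{}", project_id, session_id)
}
```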
query_params: Option<QueryParameters>
The parameters of this query.
query_input: Option<QueryInput>
Required. The input specification. It can be set to:

- an audio config which instructs the speech recognizer how to process the speech audio,
- a conversational query in the form of text, or
- an event that specifies which intent to trigger.
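A sketch of the first two cases, assuming the usual prost oneof layout where the variants live in a query_input submodule as query_input::Input (the module path and variant names are an assumption about this crate's generated code):

```rust
use gcp_client::google::cloud::dialogflow::v2beta1::{
    query_input::Input, InputAudioConfig, QueryInput, TextInput,
};

/// An audio config: tells the recognizer how the streamed audio is encoded.
fn audio_query_input() -> QueryInput {
    QueryInput {
        input: Some(Input::AudioConfig(InputAudioConfig {
            language_code: "en-US".to_string(),
            ..Default::default()
        })),
        ..Default::default()
    }
}

/// A conversational query in the form of text.
fn text_query_input(text: &str) -> QueryInput {
    QueryInput {
        input: Some(Input::Text(TextInput {
            text: text.to_string(),
            language_code: "en-US".to_string(),
            ..Default::default()
        })),
        ..Default::default()
    }
}
```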
single_utterance: bool
DEPRECATED. Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2beta1.InputAudioConfig.single_utterance] instead. If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when query_input is a piece of text or an event.
output_audio_config: Option<OutputAudioConfig>
Instructs the speech synthesizer how to generate the output audio. If this field is not set and the agent-level speech synthesizer is not configured, no output audio is generated.
output_audio_config_mask: Option<FieldMask>
Mask for [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config] indicating which settings in this request-level config should override speech synthesizer settings defined at agent-level.
If unspecified or empty, [output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config] replaces the agent-level config in its entirety.
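A sketch of a partial override, assuming the FieldMask here is the prost well-known type (it may also be vendored under a google::protobuf module in this crate); the mask path is a hypothetical example of a single synthesizer setting:

```rust
use gcp_client::google::cloud::dialogflow::v2beta1::{
    OutputAudioConfig, StreamingDetectIntentRequest,
};
use prost_types::FieldMask;

fn request_with_audio_override() -> StreamingDetectIntentRequest {
    StreamingDetectIntentRequest {
        output_audio_config: Some(OutputAudioConfig::default()),
        // Non-empty mask: only the named setting overrides the agent-level
        // config; every other agent-level synthesizer setting is kept.
        output_audio_config_mask: Some(FieldMask {
            paths: vec!["synthesize_speech_config.speaking_rate".to_string()],
        }),
        ..Default::default()
    }
}
```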
input_audio: Vec<u8>
The input audio content to be recognized. Must be sent if query_input was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.
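For example, a caller might split captured audio into fixed-size chunks, one per follow-up message (the chunk size is an arbitrary illustration, not an API requirement):

```rust
use gcp_client::google::cloud::dialogflow::v2beta1::StreamingDetectIntentRequest;

/// Splits raw audio bytes into follow-up messages carrying only input_audio.
fn audio_messages(audio: &[u8]) -> Vec<StreamingDetectIntentRequest> {
    audio
        .chunks(4096) // arbitrary chunk size for illustration
        .map(|chunk| StreamingDetectIntentRequest {
            input_audio: chunk.to_vec(),
            ..Default::default()
        })
        .collect()
}
```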
Trait Implementations
impl Clone for StreamingDetectIntentRequest
fn clone(&self) -> StreamingDetectIntentRequest
fn clone_from(&mut self, source: &Self)
impl Debug for StreamingDetectIntentRequest
impl Default for StreamingDetectIntentRequest
impl Message for StreamingDetectIntentRequest
fn encode_raw<B>(&self, buf: &mut B) where B: BufMut,
fn merge_field<B>(&mut self, tag: u32, wire_type: WireType, buf: &mut B, ctx: DecodeContext) -> Result<(), DecodeError> where B: Buf,
fn encoded_len(&self) -> usize
fn clear(&mut self)
fn encode<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut,
fn encode_length_delimited<B>(&self, buf: &mut B) -> Result<(), EncodeError> where B: BufMut,
fn decode<B>(buf: B) -> Result<Self, DecodeError> where B: Buf, Self: Default,
fn decode_length_delimited<B>(buf: B) -> Result<Self, DecodeError> where B: Buf, Self: Default,
fn merge<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf,
fn merge_length_delimited<B>(&mut self, buf: B) -> Result<(), DecodeError> where B: Buf,
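These are the standard prost Message methods, so an encode/decode round trip looks like this minimal sketch (any BufMut/Buf implementor works; a Vec<u8> is used here):

```rust
use gcp_client::google::cloud::dialogflow::v2beta1::StreamingDetectIntentRequest;
use prost::Message;

fn roundtrip(request: &StreamingDetectIntentRequest) {
    let mut buf = Vec::with_capacity(request.encoded_len());
    request.encode(&mut buf).expect("Vec<u8> grows as needed");
    let decoded = StreamingDetectIntentRequest::decode(buf.as_slice())
        .expect("bytes were produced by encode");
    assert_eq!(*request, decoded);
}
```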
impl PartialEq<StreamingDetectIntentRequest> for StreamingDetectIntentRequest
fn eq(&self, other: &StreamingDetectIntentRequest) -> bool
fn ne(&self, other: &StreamingDetectIntentRequest) -> bool
impl StructuralPartialEq for StreamingDetectIntentRequest
Auto Trait Implementations
impl RefUnwindSafe for StreamingDetectIntentRequest
impl Send for StreamingDetectIntentRequest
impl Sync for StreamingDetectIntentRequest
impl Unpin for StreamingDetectIntentRequest
impl UnwindSafe for StreamingDetectIntentRequest
Blanket Implementations
impl<T> Any for T where T: 'static + ?Sized,
impl<T> Borrow<T> for T where T: ?Sized,
impl<T> BorrowMut<T> for T where T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> From<T> for T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T, U> Into<U> for T where U: From<T>,
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
impl<T> ToOwned for T where T: Clone,
type Owned = T
The resulting type after obtaining ownership.
fn to_owned(&self) -> T
fn clone_into(&self, target: &mut T)
impl<T, U> TryFrom<U> for T where U: Into<T>,
type Error = Infallible
The type returned in the event of a conversion error.
fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>
impl<T, U> TryInto<U> for T where U: TryFrom<T>,
type Error = <U as TryFrom<T>>::Error
The type returned in the event of a conversion error.
fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>
impl<V, T> VZip<V> for T where V: MultiLane<T>,
fn vzip(self) -> V
impl<T> WithSubscriber for T
fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where S: Into<Dispatch>,