Module google_speech1::api

Structs
- There is no detailed description.
- An item of the class.
- Message sent by the client for the CreateCustomClass method.
- Message sent by the client for the CreatePhraseSet method.
- A set of words or phrases that represents a common concept likely to appear in your audio, for example a list of passenger ship names. CustomClass items can be substituted into placeholders that you set in PhraseSet phrases.
- A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); }
- A single replacement configuration.
- Message returned to the client by the ListCustomClasses method.
- The response message for Operations.ListOperations.
- Message returned to the client by the ListPhraseSet method.
- The top-level message sent by the client for the LongRunningRecognize method.
- This resource represents a long-running operation that is the result of a network API call.
- Gets the latest state of a long-running operation. Clients can use this method to poll the operation result at intervals as recommended by the API service.
- Lists operations that match the specified filter in the request. If the server doesn’t support this method, it returns UNIMPLEMENTED.
- A builder providing access to all methods supported on operation resources. It is not used directly, but through the Speech hub.
- A phrase containing words and phrase “hints” so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits. List items can also include pre-built or custom classes containing groups of words that represent common concepts that occur in natural language. For example, rather than providing a phrase hint for every month of the year (e.g. “i was born in january”, “i was born in february”, …), using the pre-built $MONTH class improves the likelihood of correctly transcribing audio that includes months (e.g. “i was born in $month”). To refer to pre-built classes, use the class’ symbol prepended with $, e.g. $MONTH. To refer to custom classes that were defined inline in the request, set the class’s custom_class_id to a string unique to all class resources and inline classes. Then use the class’ id wrapped in ${...}, e.g. “${my-months}”. To refer to custom class resources, use the class’ id wrapped in ${} (e.g. ${my-months}). Speech-to-Text supports three locations: global, us (US North America), and eu (Europe). If you are calling the speech.googleapis.com endpoint, use the global location. To specify a region, use a regional endpoint with matching us or eu location value.
- Provides “hints” to the speech recognizer to favor specific words and phrases in the results.
- Create a custom class.
- Delete a custom class.
- Get a custom class.
- List custom classes.
- Update a custom class.
- Create a set of phrase hints. Each item in the set can be a single word or a multi-word phrase. The items in the PhraseSet are favored by the recognition model when you send a call that includes the PhraseSet.
- Delete a phrase set.
- Get a phrase set.
- List phrase sets.
- Update a phrase set.
- A builder providing access to all methods supported on project resources. It is not used directly, but through the Speech hub.
- Contains audio data in the encoding specified in the RecognitionConfig. Either content or uri must be supplied. Supplying both or neither returns google.rpc.Code.INVALID_ARGUMENT. See content limits.
- Provides information to the recognizer that specifies how to process the request.
- Description of audio data to be recognized.
- The top-level message sent by the client for the Recognize method.
- The only message returned to the client by the Recognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages.
- Config to enable speaker diarization.
- Central instance to access all Speech related resource activities
- Speech adaptation configuration.
- Information on speech adaptation use in results
- Provides “hints” to the speech recognizer to favor specific words and phrases in the results.
- Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an Operation.error or an Operation.response which contains a LongRunningRecognizeResponse message. For more information on asynchronous speech recognition, see the how-to.
- A builder providing access to all methods supported on speech resources. It is not used directly, but through the Speech hub.
- Alternative hypotheses (a.k.a. n-best list).
- A speech recognition result corresponding to a portion of the audio.
- Performs synchronous speech recognition: receive results after all audio has been sent and processed.
- The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC. Each Status message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the API Design Guide.
- Transcription normalization configuration. Use transcription normalization to automatically replace parts of the transcript with phrases of your choosing. For StreamingRecognize, this normalization only applies to stable partial transcripts (stability > 0.8) and final transcripts.
- Specifies an optional destination for the recognition results.
- Word-specific information for recognized words.
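As a rough illustration of how the structs above fit together, the sketch below builds a RecognizeRequest from a RecognitionConfig and a RecognitionAudio, following the rule noted above that either content or uri (but not both) must be supplied. Hub construction (HTTP client and authenticator) is elided because it varies by crate version; the bucket URI is hypothetical, and this is a sketch under those assumptions, not a compile-tested example.

```rust
// Assumed: the `google_speech1` crate is a dependency. Field names follow
// the structs documented in this module; all of them derive `Default`.
use google_speech1::api::{RecognitionAudio, RecognitionConfig, RecognizeRequest};

fn build_request() -> RecognizeRequest {
    RecognizeRequest {
        // How the recognizer should process the request.
        config: Some(RecognitionConfig {
            language_code: Some("en-US".to_string()),
            sample_rate_hertz: Some(16_000),
            encoding: Some("LINEAR16".to_string()),
            ..Default::default()
        }),
        // Either `content` or `uri` must be set, never both; a Cloud
        // Storage URI (hypothetical bucket) is used here.
        audio: Some(RecognitionAudio {
            uri: Some("gs://my-bucket/audio.raw".to_string()),
            ..Default::default()
        }),
        ..Default::default()
    }
}

// With a configured `Speech` hub, the request would then be sent through
// the speech resource builder, roughly:
//   let (_response, result) = hub.speech().recognize(build_request()).doit().await?;
```

The same request shape is used for LongRunningRecognize via hub.speech().longrunningrecognize(...), which returns an Operation to poll instead of an immediate RecognizeResponse.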
Enums
- Identifies an OAuth2 authorization scope. A scope is needed when requesting an authorization token.