Module nvidia_riva::nvidia::riva::nlp

Modules

Structs

  • AnalyzeEntitiesOptions is an optional configuration message to be sent as part of an AnalyzeEntitiesRequest with query metadata
  • AnalyzeEntitiesRequest is the input message for the AnalyzeEntities service
  • AnalyzeIntentContext is reserved for future use when we may send context back in a variety of different formats (including raw neural network hidden states)
  • AnalyzeIntentOptions is an optional configuration message to be sent as part of an AnalyzeIntentRequest with query metadata
  • AnalyzeIntentRequest is the input message for the AnalyzeIntent service
  • AnalyzeIntentResponse is returned by the AnalyzeIntent service, and includes information related to the query’s intent, (optionally) slot data, and its domain.
  • Classification messages return a class name and corresponding score
  • ClassificationResults contain zero or more Classification messages. If the number of Classifications is > 1, top_n > 1 must have been specified (see the classification sketch after this list).
  • NLPModelParams is a metadata message that is included in every request message used by the Core NLP Service and is used to specify model characteristics/requirements
  • Span of a particular result
  • TextClassRequest is the input message to the ClassifyText service.
  • TextClassResponse is the return message from the ClassifyText service.
  • TextTransformRequest is a request type intended for services like TransformText which take an arbitrary text input
  • TextTransformResponse is returned by the TransformText method. Responses are returned in the same order as they were requested.
  • TokenClassRequest is the input message to the ClassifyTokens service (see the token-classification sketch after this list).
  • TokenClassResponse returns a single TokenClassSequence per input request
  • TokenClassSequence is used for returning a sequence of TokenClassValue objects in the original order of input tokens
  • TokenClassValue is used to correlate an input token with its classification results
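
These message types are plain generated data structs, so a request is normally built with struct-literal syntax and a response is ordinary data to iterate over. The sketch below, written against an assumed tonic/prost codegen, builds a TextClassRequest for the ClassifyText method and pulls the top Classification out of each per-input result. The struct name NlpModelParams and the field names text, model, model_name, results, labels, class_name, and score are assumptions taken from the upstream riva_nlp.proto, not confirmed against this crate's generated code.

```rust
// A minimal sketch, assuming prost-style codegen for this module.
// Exact struct and field names (NlpModelParams vs. NLPModelParams,
// text, model, model_name, results, labels, class_name, score) may
// differ; check the generated structs listed above.
use nvidia_riva::nvidia::riva::nlp::{NlpModelParams, TextClassRequest, TextClassResponse};

/// Build a ClassifyText request for a batch of input strings.
fn build_request(queries: Vec<String>) -> TextClassRequest {
    TextClassRequest {
        text: queries,
        model: Some(NlpModelParams {
            // Hypothetical deployed model name, for illustration only.
            model_name: "riva_text_classification_domain".to_string(),
            ..Default::default()
        }),
        ..Default::default()
    }
}

/// Extract the first (highest-scoring) class name and score per input.
fn top_classes(resp: &TextClassResponse) -> Vec<(String, f32)> {
    resp.results
        .iter()
        .filter_map(|r| r.labels.first())
        .map(|c| (c.class_name.clone(), c.score))
        .collect()
}
```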
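
Token-classification results come back nested: TokenClassResponse holds one TokenClassSequence per input, each sequence holds TokenClassValue entries in the original token order, and each value carries its Classification labels and Spans. The sketch below flattens that nesting; the field names results, label, token, class_name, and score are again assumed from the upstream proto and may differ slightly in the generated structs.

```rust
// A minimal sketch of walking a ClassifyTokens response, under the same
// prost-codegen assumptions as the previous example.
use nvidia_riva::nvidia::riva::nlp::TokenClassResponse;

/// Flatten a response into (token, class_name, score) triples, preserving
/// the original token order within each input sequence.
fn flatten_tokens(resp: &TokenClassResponse) -> Vec<(String, String, f32)> {
    resp.results                                // one TokenClassSequence per input
        .iter()
        .flat_map(|seq| seq.results.iter())     // TokenClassValue per token
        .filter_map(|tok| {
            tok.label                           // Classification entries for this token
                .first()
                .map(|c| (tok.token.clone(), c.class_name.clone(), c.score))
        })
        .collect()
}
```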