Text generation pipeline
Text generation pipeline from a prompt text. It includes techniques such as beam search, top-k and nucleus sampling, temperature setting and repetition penalty. By default, the dependencies for a GPT2-medium model will be downloaded. Available architectures for text generation include the following (a minimal usage sketch is given after the list):
- OpenAI GPT
- OpenAI GPT2
- GPT-Neo
- XLNet
- Reformer
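For example, building the pipeline with its defaults and generating from a prompt could look like the sketch below. This is a minimal outline: it assumes the anyhow crate for error handling, and the exact signature of generate (notably whether it returns a Result) varies between rust-bert versions.

```rust
use rust_bert::pipelines::text_generation::TextGenerationModel;

fn main() -> anyhow::Result<()> {
    // Default configuration: GPT2-medium weights are downloaded and cached on first use.
    let model = TextGenerationModel::new(Default::default())?;

    let input_context = "The dog";
    // The second argument is an optional prefix prepended to every prompt; `None` disables it.
    let output = model.generate(&[input_context], None);

    // Debug-print the generated continuation(s).
    println!("{:?}", output);
    Ok(())
}
```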
Two APIs exist to build text generation models:
- TextGenerationModel is a high-level module that exposes text generation capabilities with a set of reasonable defaults
- the LanguageGenerator trait exposes lower-level text generation capabilities allowing the user to provide additional generation options when building the model (via GenerateConfig) and at each query (via GenerateOptions). Please check the generation_utils module for more details
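As an illustration of the lower-level path, the sketch below builds a GPT2 generator with build-time options in GenerateConfig and passes per-query overrides through GenerateOptions. The field names and the generate signature are assumptions based on recent rust-bert releases and may differ in the version you use.

```rust
use rust_bert::gpt2::GPT2Generator;
use rust_bert::pipelines::generation_utils::{GenerateConfig, GenerateOptions, LanguageGenerator};

fn main() -> anyhow::Result<()> {
    // Generation options fixed when the model is built.
    let generate_config = GenerateConfig {
        do_sample: true,
        top_k: 50,
        top_p: 0.95,
        temperature: 1.1,
        repetition_penalty: 1.2,
        ..Default::default()
    };
    let generator = GPT2Generator::new(generate_config)?;

    // Options that can change at each query (assumed field names).
    let generate_options = GenerateOptions {
        num_return_sequences: Some(2),
        ..Default::default()
    };

    let output = generator.generate(Some(&["The dog"]), Some(generate_options));
    println!("{:?}", output);
    Ok(())
}
```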
Customized text generation models can be loaded by overwriting the resources in the configuration. The dependencies will be downloaded to the user’s home directory, e.g. under ~/.cache/.rustbert/gpt2
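For instance, pointing the pipeline at DistilGPT2 instead of the default GPT2-medium might look like the sketch below. The resource constants and the exact field types of TextGenerationConfig (e.g. whether model_resource is wrapped in a ModelResource enum and whether merges_resource is optional) have changed across rust-bert releases, so treat this as an assumption-laden outline rather than version-exact code.

```rust
use rust_bert::gpt2::{
    Gpt2ConfigResources, Gpt2MergesResources, Gpt2ModelResources, Gpt2VocabResources,
};
use rust_bert::pipelines::common::ModelType;
use rust_bert::pipelines::text_generation::{TextGenerationConfig, TextGenerationModel};
use rust_bert::resources::RemoteResource;

fn main() -> anyhow::Result<()> {
    // Overwrite the default GPT2-medium resources with DistilGPT2 pretrained files.
    let config = TextGenerationConfig {
        model_type: ModelType::GPT2,
        model_resource: Box::new(RemoteResource::from_pretrained(
            Gpt2ModelResources::DISTIL_GPT2,
        )),
        config_resource: Box::new(RemoteResource::from_pretrained(
            Gpt2ConfigResources::DISTIL_GPT2,
        )),
        vocab_resource: Box::new(RemoteResource::from_pretrained(
            Gpt2VocabResources::DISTIL_GPT2,
        )),
        // In some releases this field is not wrapped in an Option; adjust accordingly.
        merges_resource: Some(Box::new(RemoteResource::from_pretrained(
            Gpt2MergesResources::DISTIL_GPT2,
        ))),
        ..Default::default()
    };

    let model = TextGenerationModel::new(config)?;
    let output = model.generate(&["The dog"], None);
    println!("{:?}", output);
    Ok(())
}
```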
Structs

- TextGenerationConfig - Configuration for text generation
- TextGenerationModel - TextGenerationModel to generate texts from a prompt

Enums

- TextGenerationOption - Abstraction that holds one particular text generation model, for any of the supported models