Module rust_bert::gpt2
GPT2 (Radford et al.)
Implementation of the GPT2 language model (*Language Models are Unsupervised Multitask Learners*, Radford, Wu, Child, Luan, Amodei, Sutskever, 2019).
The base model is implemented in the `gpt2::Gpt2Model` struct. The model also includes a language model head, `gpt2::GPT2LMHeadModel`, implementing the common `generation::LMHeadModel` trait shared between the models used for generation (see `pipelines` for more information).
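As a quick illustration of what this trait enables, the sketch below generates text from a couple of prompts with the GPT2-based generator of the generation pipeline. This is a minimal sketch, not a definitive usage: the `GenerateConfig` fields shown and the `generate` signature are assumptions tied to the crate version at hand, and, as in the set-up snippet further down, the `?` operator is assumed to run inside a function returning a compatible error type.

```rust
use rust_bert::pipelines::generation::{GenerateConfig, GPT2Generator, LanguageGenerator};

// Generation hyper-parameters; fields left out (including the pre-trained
// GPT2 resources to download) keep their default values.
let generate_config = GenerateConfig {
    max_length: 30,
    do_sample: true,
    num_beams: 5,
    ..Default::default()
};
let model = GPT2Generator::new(generate_config)?;

// `generate` is provided by the `LanguageGenerator` trait and returns one
// generated string per input prompt.
let output = model.generate(Some(vec!["The dog", "The cat was"]), None);
for sentence in output {
    println!("{}", sentence);
}
```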
Model set-up and pre-trained weights loading
A full working example is provided in `examples/gpt2.rs`, run with `cargo run --example gpt2`.
All models expect the following resources:

- Configuration file expected to have a structure following the Transformers library
- Model weights are expected to have a structure and parameter names following the Transformers library. A conversion using the Python utility scripts is required to convert the `.bin` weights to the `.ot` format (ready-converted pre-trained weights can be fetched as sketched after this list).
- `Gpt2Tokenizer` using a `vocab.txt` vocabulary and `merges.txt` 2-gram merges
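Pre-trained versions of these resources are registered in the crate and can be downloaded to a local cache before loading the model. The following is a minimal sketch assuming the `RemoteResource` / `download_resource` API of `rust_bert::resources` in this crate version; the returned paths are the ones consumed by the set-up snippet that follows.

```rust
use rust_bert::gpt2::{
    Gpt2ConfigResources, Gpt2MergesResources, Gpt2ModelResources, Gpt2VocabResources,
};
use rust_bert::resources::{download_resource, RemoteResource, Resource};

// Pre-trained GPT2 resources registered in the crate, downloaded (and cached)
// locally on first use.
let config_resource = Resource::Remote(RemoteResource::from_pretrained(Gpt2ConfigResources::GPT2));
let vocab_resource = Resource::Remote(RemoteResource::from_pretrained(Gpt2VocabResources::GPT2));
let merges_resource = Resource::Remote(RemoteResource::from_pretrained(Gpt2MergesResources::GPT2));
let weights_resource = Resource::Remote(RemoteResource::from_pretrained(Gpt2ModelResources::GPT2));

let config_path = download_resource(&config_resource)?;
let vocab_path = download_resource(&vocab_resource)?;
let merges_path = download_resource(&merges_resource)?;
let weights_path = download_resource(&weights_resource)?;
```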
```rust
use rust_tokenizers::Gpt2Tokenizer;
use tch::{nn, Device};
use rust_bert::Config;
use rust_bert::gpt2::{Gpt2Config, GPT2LMHeadModel};

// `config_path`, `vocab_path`, `merges_path` and `weights_path` are local
// files, e.g. the ones downloaded in the resource sketch above.
let device = Device::cuda_if_available();
let mut vs = nn::VarStore::new(device);
let tokenizer: Gpt2Tokenizer = Gpt2Tokenizer::from_file(
    vocab_path.to_str().unwrap(),
    merges_path.to_str().unwrap(),
    true, // lower-case the input
);
let config = Gpt2Config::from_file(config_path);
let gpt2_model = GPT2LMHeadModel::new(&vs.root(), &config);
vs.load(weights_path)?;
```
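Note that `vs.load` is called after `GPT2LMHeadModel::new`: constructing the model registers its variables in the `VarStore`, and `load` then fills them with the pre-trained weights from the `.ot` file, returning an error if variable names or shapes do not match.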
Structs

| Struct | Description |
|--------|-------------|
| `GPT2LMHeadModel` | GPT2 language modeling head |
| `Gpt2Config` | GPT2 model configuration |
| `Gpt2Model` | GPT2 base model |
Enums

| Enum | Description |
|------|-------------|
| `GptActivation` | Activation function used in the fully connected layers of the transformer block |