pub struct OpenAIGPTLMHeadModel { /* private fields */ }
§GPT Language Modeling head
GPT model with a decoding head (linear layer without bias). The weights of the linear layer are tied to the word embeddings (a rough illustration of this tying follows the list below). It is made of the following blocks:
- transformer - Base Gpt2Model
- lm_head - Linear layer without bias tied to the weights of the token id embeddings
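As a rough illustration of the weight tying (a minimal sketch using tch directly, not the crate's internal implementation), a bias-free decoding head amounts to a matrix product with the transposed token embedding table:
use tch::{nn, Device, Kind, Tensor};
let vs = nn::VarStore::new(Device::Cpu);
let (vocab_size, hidden_size) = (40478, 768); // illustrative GPT dimensions
// Token embedding table of shape (vocab_size, hidden_size)
let wte = nn::embedding(&vs.root() / "wte", vocab_size, hidden_size, Default::default());
// Hidden states produced by the transformer: (batch size, sequence_length, hidden_size)
let hidden_states = Tensor::randn(&[2, 5, hidden_size], (Kind::Float, Device::Cpu));
// Tied lm_head: reuse the embedding weights, no bias -> (batch size, sequence_length, vocab_size)
let lm_logits = hidden_states.matmul(&wte.ws.transpose(0, 1));
assert_eq!(lm_logits.size(), &[2, 5, vocab_size]);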
Implementations§
impl OpenAIGPTLMHeadModel
pub fn new<'p, P>(p: P, config: &Gpt2Config) -> OpenAIGPTLMHeadModel
Build a new OpenAIGPTLMHeadModel
§Arguments
- p - Variable store path for the root of the GPT model
- config - Gpt2Config object defining the model architecture
§Example
use rust_bert::gpt2::Gpt2Config;
use rust_bert::openai_gpt::OpenAIGPTLMHeadModel;
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = Gpt2Config::from_file(config_path);
let gpt_model: OpenAIGPTLMHeadModel = OpenAIGPTLMHeadModel::new(&p.root() / "gpt", &config);
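new only allocates randomly initialized variables. A minimal sketch for loading pretrained weights into the VarStore afterwards, assuming the crate's pretrained resource enums (OpenAiGptConfigResources, OpenAiGptModelResources) downloaded through RemoteResource:
use rust_bert::gpt2::Gpt2Config;
use rust_bert::openai_gpt::{OpenAIGPTLMHeadModel, OpenAiGptConfigResources, OpenAiGptModelResources};
use rust_bert::resources::{RemoteResource, ResourceProvider};
use rust_bert::Config;
use tch::{nn, Device};
let config_resource = RemoteResource::from_pretrained(OpenAiGptConfigResources::GPT);
let weights_resource = RemoteResource::from_pretrained(OpenAiGptModelResources::GPT);
let mut vs = nn::VarStore::new(Device::cuda_if_available());
let config = Gpt2Config::from_file(config_resource.get_local_path().unwrap());
let gpt_model = OpenAIGPTLMHeadModel::new(vs.root(), &config);
// Populate the variable store with the downloaded pretrained weights
vs.load(weights_resource.get_local_path().unwrap()).unwrap();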
pub fn forward_t(
&self,
input_ids: Option<&Tensor>,
_layer_past: Cache,
attention_mask: Option<&Tensor>,
token_type_ids: Option<&Tensor>,
position_ids: Option<&Tensor>,
input_embeds: Option<&Tensor>,
_encoder_outputs: Option<&Tensor>,
_decoder_input_ids: Option<&Tensor>,
train: bool,
) -> Result<LMModelOutput, RustBertError>
Forward pass through the model
§Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). If None, pre-computed embeddings must be provided (see input_embeds)
- _layer_past - Unused for GPT
- attention_mask - Optional mask of shape (batch size, sequence_length). Masked positions have value 0, non-masked positions have value 1. If None, defaults to 1 (a sketch for deriving the mask from padded inputs follows this list)
- input_embeds - Optional pre-computed input embeddings of shape (batch size, sequence_length, hidden_size). If None, input ids must be provided (see input_ids)
- token_type_ids - Optional token type ids used to indicate the portion of the input the token belongs to. If not None, token type embeddings will be added to the token and position embeddings.
- position_ids - Optional position ids of shape (batch size, sequence_length). If None, will be incremented starting from the length of the past input.
- _encoder_outputs - Unused for GPT
- _decoder_input_ids - Unused for GPT
- train - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
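For padded batches, the attention mask is typically derived from the input ids; a minimal sketch, assuming a hypothetical pad_token_id:
use tch::{Kind, Tensor};
let pad_token_id: i64 = 0; // hypothetical padding id
let input_ids = Tensor::from_slice(&[5i64, 17, 208, pad_token_id, pad_token_id]).view([1, 5]);
// 1 for real tokens, 0 for padded positions
let attention_mask = input_ids.ne(pad_token_id).to_kind(Kind::Int64);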
§Returns
LMModelOutput containing:
- lm_logits - Tensor of shape (batch size, sequence_length, vocab_size) representing the logits for each vocabulary item and position
- cache - None
- encoder_hidden_states - None
- all_hidden_states - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
- all_attentions - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
§Example
use rust_bert::gpt2::Gpt2Config;
use rust_bert::openai_gpt::OpenAIGPTLMHeadModel;
use rust_bert::pipelines::generation_utils::Cache;
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};
// Model set up as in the `new` example above
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = Gpt2Config::from_file(config_path);
let gpt_model = OpenAIGPTLMHeadModel::new(&p.root() / "gpt", &config);
let (batch_size, sequence_length) = (64, 128);
// Random token ids below the vocabulary size (Tensor::rand only supports floating point kinds)
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Kind::Int64, device));
// 1 = attended position, 0 = masked position
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let token_type_ids = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let position_ids = Tensor::arange(sequence_length, (Kind::Int64, device))
    .expand(&[batch_size, sequence_length], true);
let model_output = no_grad(|| {
    gpt_model
        .forward_t(
            Some(&input_tensor),
            Cache::None,
            Some(&attention_mask),
            Some(&token_type_ids),
            Some(&position_ids),
            None,
            None,
            None,
            false,
        )
        .unwrap()
});
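To turn the output into a greedy next-token prediction (a minimal sketch; full decoding loops are provided by the crate's generation pipelines):
// Logits at the last position of each sequence: (batch size, vocab_size)
let next_token_logits = model_output.lm_logits.select(1, -1);
// Highest-scoring vocabulary id per batch element: (batch size,)
let next_tokens = next_token_logits.argmax(-1, false);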
Auto Trait Implementations§
impl Freeze for OpenAIGPTLMHeadModel
impl RefUnwindSafe for OpenAIGPTLMHeadModel
impl Send for OpenAIGPTLMHeadModel
impl !Sync for OpenAIGPTLMHeadModel
impl Unpin for OpenAIGPTLMHeadModel
impl UnwindSafe for OpenAIGPTLMHeadModel
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true, otherwise into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, otherwise into a Right variant.