Struct rust_bert::xlnet::XLNetLMHeadModel
pub struct XLNetLMHeadModel { /* fields omitted */ }
XLNetLMHeadModel
XLNet model with a language model head for language generation tasks. It is made of the following blocks:
base_model: XLNetModel
lm_head: linear language modeling head, projecting the hidden states to the vocabulary space
Implementations
Build a new XLNetLMHeadModel
Arguments
p - Variable store path for the root of the XLNet model
config - XLNetConfig object defining the model architecture
Example
use rust_bert::xlnet::{XLNetConfig, XLNetLMHeadModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = XLNetConfig::from_file(config_path);
let xlnet_model = XLNetLMHeadModel::new(&p.root(), &config);
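In practice, the variable store created above is then filled with pretrained weights before running inference. The lines below are only an illustrative sketch: the weight path is a placeholder, and the file must be in the format accepted by tch's VarStore::load (rust_bert typically uses converted .ot weight files). The store is declared mutable because load takes a mutable reference.
// Illustrative sketch (placeholder weight path): populate the variable store with
// pretrained weights after building the model.
let mut var_store = nn::VarStore::new(device);
let xlnet_model = XLNetLMHeadModel::new(&var_store.root(), &config);
var_store
    .load(Path::new("path/to/model.ot"))
    .expect("failed to load pretrained weights");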
pub fn forward_t(
&self,
input_ids: Option<&Tensor>,
attention_mask: Option<&Tensor>,
old_layer_states: Option<Vec<Option<LayerState>>>,
perm_mask: Option<&Tensor>,
target_mapping: Option<&Tensor>,
token_type_ids: Option<&Tensor>,
input_embeds: Option<&Tensor>,
train: bool
) -> Result<LMModelOutput, RustBertError>
Forward pass through the model
Arguments
input_ids - Optional input tensor of shape (batch size, sequence_length). This or input_embeds must be provided.
attention_mask - Optional attention mask of shape (batch size, sequence_length) for the encoder positions. Positions with a mask value of 0 will be masked.
perm_mask - Optional tensor of shape (batch size, sequence_length, sequence_length). Mask indicating the attention pattern for each input token (only used for pre-training over permutations, rather than simple token masking).
target_mapping - Optional tensor of shape (batch size, num_tokens, sequence_length) indicating the position of the masked words to predict.
token_type_ids - Optional tensor of shape (batch size, sequence_length) indicating the sentence ID of the token (0: first sentence, 1: second sentence).
input_embeds - Optional input tensor of shape (batch size, sequence_length, embeddings dimension). This or input_ids must be provided.
old_layer_states - Optional vector of length num_layers containing optional LayerStates with the last calculated content for the attention layers. This avoids recomputing attention weights at past positions and speeds up decoding.
train - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
LMModelOutput containing:
lm_logits - Tensor of shape (batch size, sequence_length, vocab_size) representing the logits for each vocab item and position
cache - XLNetCache made of Option<Vec<Option<LayerState>>> of length n_layers and shape (past_sequence_length, batch size, hidden_size) containing the previous content
encoder_hidden_states - None
all_hidden_states - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
all_attentions - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
Example
use rust_bert::xlnet::{XLNetConfig, XLNetLMHeadModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

// Model set up as in the construction example above.
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = XLNetConfig::from_file(config_path);
let xlnet_model = XLNetLMHeadModel::new(&vs.root(), &config);

let (batch_size, sequence_length) = (64, 128);
// Random token ids; the vocabulary size (32000, the standard XLNet vocabulary) is assumed here.
let input_tensor = Tensor::randint(32000, &[batch_size, sequence_length], (Kind::Int64, device));
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let target_tensor = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let target_mapping = Tensor::zeros(&[64, 1, 128], (Kind::Float, device));
let _ = target_mapping.narrow(2, 3, 1).fill_(1.0);

let model_output = no_grad(|| {
    xlnet_model.forward_t(
        Some(&input_tensor),
        Some(&attention_mask),
        None,
        None,
        Some(&target_mapping),
        None,
        None,
        false,
    )
});
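The returned LMModelOutput can then be unwrapped and its language modeling logits turned into per-position predictions. The following lines are only a sketch continuing from the model_output value computed above; the field access and shapes follow the Returns section.
// Sketch: continue from the `model_output` result computed above.
let output = model_output.expect("forward pass failed");
// lm_logits has shape (batch size, sequence_length or num_tokens, vocab_size); the argmax over
// the last dimension gives the most likely vocabulary index at each predicted position.
let predicted_ids = output.lm_logits.argmax(-1i64, false);
println!("{:?}", predicted_ids.size());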
Trait Implementations
fn forward_t(
&self,
input_ids: Option<&Tensor>,
layer_past: Cache,
attention_mask: Option<&Tensor>,
_token_type_ids: Option<&Tensor>,
_position_ids: Option<&Tensor>,
_input_embeds: Option<&Tensor>,
_encoder_outputs: Option<&Tensor>,
decoder_input_ids: Option<&Tensor>,
train: bool
) -> Result<LMModelOutput, RustBertError>
Forward pass through the model
Arguments
input_ids - Optional input tensor of shape (batch size, sequence_length). This or input_embeds must be provided.
attention_mask - Optional attention mask of shape (batch size, sequence_length) for the encoder positions. Positions with a mask value of 0 will be masked.
perm_mask - Optional tensor of shape (batch size, sequence_length, sequence_length). Mask indicating the attention pattern for each input token (only used for pre-training over permutations, rather than simple token masking).
target_mapping - Optional tensor of shape (batch size, num_tokens, sequence_length) indicating the position of the masked words to predict.
token_type_ids - Optional tensor of shape (batch size, sequence_length) indicating the sentence ID of the token (0: first sentence, 1: second sentence).
input_embeds - Optional input tensor of shape (batch size, sequence_length, embeddings dimension). This or input_ids must be provided.
old_layer_states - Optional vector of length num_layers containing optional LayerStates with the last calculated content for the attention layers. This avoids recomputing attention weights at past positions and speeds up decoding.
train - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
LMModelOutput containing:
lm_logits - Tensor of shape (batch size, sequence_length, vocab_size) representing the logits for each vocab item and position
cache - XLNetCache made of Option<Vec<Option<LayerState>>> of length n_layers and shape (past_sequence_length, batch size, hidden_size) containing the previous content
Example
use rust_bert::xlnet::{XLNetConfig, XLNetLMHeadModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

// Model set up as in the construction example above.
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = XLNetConfig::from_file(config_path);
let xlnet_model = XLNetLMHeadModel::new(&vs.root(), &config);

let (batch_size, sequence_length) = (64, 128);
// Random token ids; the vocabulary size (32000, the standard XLNet vocabulary) is assumed here.
let input_tensor = Tensor::randint(32000, &[batch_size, sequence_length], (Kind::Int64, device));
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let target_tensor = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let target_mapping = Tensor::zeros(&[64, 1, 128], (Kind::Float, device));
let _ = target_mapping.narrow(2, 3, 1).fill_(1.0);

let model_output = no_grad(|| {
    xlnet_model.forward_t(
        Some(&input_tensor),
        Some(&attention_mask),
        None,
        None,
        Some(&target_mapping),
        None,
        None,
        false,
    )
});
Generate text based on a vector of prompt texts. Read more
Generate token indices without decoding (useful for token-level operations before returning final text or as a validation step during training). Read more
fn generate_from_ids_and_past(
&self,
input_ids: Tensor,
attention_mask: Option<Tensor>,
generate_options: Option<GenerateOptions<'_>>
) -> Vec<GeneratedIndicesOutput>
Generate token indices given a list of indices (useful when the input has been pre-tokenized). Returns a list of output tokens that need to be decoded using a tokenizer. Read more
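As a rough sketch of the signature above (not an example taken from this page), a call with default generation options could look as follows; the model and a pre-tokenized input_ids tensor of shape (batch size, sequence_length) are assumed to exist.
// Hedged sketch: `xlnet_model` and `input_ids` (Int64 token ids) are assumed to be set up
// elsewhere; the two None values mean no attention mask and default generation options.
let generated = xlnet_model.generate_from_ids_and_past(input_ids, None, None);
// One GeneratedIndicesOutput per generated sequence; the contained token indices still
// need to be decoded with the generator's tokenizer to obtain text.
println!("generated {} sequences", generated.len());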
Returns a reference to the text generator’s tokenizer Read more
Auto Trait Implementations
impl RefUnwindSafe for XLNetLMHeadModel
impl Send for XLNetLMHeadModel
impl !Sync for XLNetLMHeadModel
impl Unpin for XLNetLMHeadModel
impl UnwindSafe for XLNetLMHeadModel
Blanket Implementations
Mutably borrows from an owned value. Read more
Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
type Output = T
Should always be Self
