Struct rust_bert::reformer::ReformerModelWithLMHead
pub struct ReformerModelWithLMHead { /* fields omitted */ }
Reformer Model for text generation
Reformer model with a vocabulary decoding head. It is made of the following blocks:
- reformer: ReformerModel - Base Reformer model
- lm_head: ReformerLMHead - projects the hidden states to the vocabulary dimension
Implementations
pub fn new<'p, P>(
p: P,
config: &ReformerConfig
) -> Result<ReformerModelWithLMHead, RustBertError> where
P: Borrow<Path<'p>>,
Build a new ReformerModelWithLMHead
Arguments
- p - Variable store path for the root of the Reformer model
- config - ReformerConfig object defining the model architecture
Example
use rust_bert::reformer::{ReformerConfig, ReformerModelWithLMHead};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = ReformerConfig::from_file(config_path);
let reformer_model: ReformerModelWithLMHead =
ReformerModelWithLMHead::new(&p.root(), &config).unwrap();
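The variable store above is freshly initialised and therefore empty; in practice it is populated with pretrained weights after the model graph has been built. A minimal sketch, assuming a weights file already converted to the tch format at a hypothetical path:

let mut p = nn::VarStore::new(device);
let reformer_model = ReformerModelWithLMHead::new(&p.root(), &config).unwrap();
// Load pretrained weights into all variables registered under the store's root
p.load("path/to/model.ot").unwrap();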
Forward pass through the model
Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). Must be provided when no pre-computed embeddings are given.
- position_ids - Optional input tensor of shape (batch size, sequence_length). If not provided, positions are computed on the fly starting from position 0.
- input_embeds - Optional input tensor of shape (batch size, sequence_length, embeddings_dim). Must be provided when no input ids are given.
- attention_mask - Optional attention mask of shape (batch size, sequence_length). Positions with a mask value of 0 will be masked.
- num_hashes - Optional number of hashes to use. If not provided, the value from the model configuration is used.
- old_layer_states - Optional cached input (Option<Vec<Option<LayerState>>>) containing previous values for the cached states and buckets.
- train - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
ReformerLMModelOutput containing:
- logits - Tensor of shape (batch size, sequence_length, vocab_size) representing the logits for each vocabulary item
- all_hidden_states - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
- all_attentions - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
- cache - Option<Vec<Option<LayerState>>> of length n_layers containing values for the states and buckets for future use.
Example
use rust_bert::reformer::{ReformerConfig, ReformerModelWithLMHead};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = ReformerConfig::from_file(config_path);
let reformer_model = ReformerModelWithLMHead::new(&vs.root(), &config).unwrap();

let (batch_size, sequence_length) = (64, 128);
// Random token indices in [0, vocab_size)
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Kind::Int64, device));
let input_positions = Tensor::arange(sequence_length, (Kind::Int64, device))
    .unsqueeze(0)
    .expand(&[batch_size, sequence_length], true);
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));

let model_output = no_grad(|| {
    reformer_model.forward_t(
        Some(&input_tensor),
        Some(&input_positions),
        None,
        Some(&attention_mask),
        Some(4),
        None,
        false,
    )
});
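For reference, a minimal sketch of consuming the output above, relying on the logits field documented in the Returns section and a greedy (argmax) choice of the next token:

let output = model_output.unwrap();
// Logits at the last position of each sequence: (batch_size, vocab_size)
let next_token_logits = output.logits.select(1, sequence_length - 1);
// Greedy next-token index for each element of the batch: (batch_size)
let next_tokens = next_token_logits.argmax(-1, false);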
Trait Implementations
fn forward_t(
    &self,
    input_ids: Option<&Tensor>,
    cache: Cache,
    attention_mask: Option<&Tensor>,
    _token_type_ids: Option<&Tensor>,
    _position_ids: Option<&Tensor>,
    _input_embeds: Option<&Tensor>,
    _encoder_outputs: Option<&Tensor>,
    _decoder_input_ids: Option<&Tensor>,
    train: bool
) -> Result<LMModelOutput, RustBertError>
Forward pass through the model. Example provided for GPT2.
Generate text based on a vector of prompt texts.
Generate token indices without decoding (useful for token-level operations before returning final text or as a validation step during training).
fn generate_from_ids_and_past(
&self,
input_ids: Tensor,
attention_mask: Option<Tensor>,
generate_options: Option<GenerateOptions<'_>>
) -> Vec<GeneratedIndicesOutput>
Generate token indices given a list of indices (useful when the input has been pre-tokenized). Returns a list of output tokens that need to be decoded using a tokenizer.
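A minimal sketch of calling this method with default generation settings, assuming a pre-tokenized prompt and that each GeneratedIndicesOutput exposes its generated indices (field name assumed for illustration):

use tch::Tensor;
// Hypothetical pre-tokenized prompt of shape (1, sequence_length)
let input_ids = Tensor::of_slice(&[318i64, 42, 7]).unsqueeze(0);
let generated = reformer_model.generate_from_ids_and_past(input_ids, None, None);
for output in generated {
    // `indices` (assumed field) holds token ids to decode with the tokenizer below
    println!("{:?}", output.indices);
}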
Returns a reference to the text generator’s tokenizer.
Auto Trait Implementations
impl RefUnwindSafe for ReformerModelWithLMHead
impl Send for ReformerModelWithLMHead
impl !Sync for ReformerModelWithLMHead
impl Unpin for ReformerModelWithLMHead
impl UnwindSafe for ReformerModelWithLMHead
Blanket Implementations
Mutably borrows from an owned value.
Instruments this type with the provided Span, returning an Instrumented wrapper.
type Output = T
Should always be Self
