Struct rust_bert::prophetnet::ProphetNetForConditionalGeneration

pub struct ProphetNetForConditionalGeneration { /* fields omitted */ }
ProphetNet Model for conditional generation
ProphetNet model with a vocabulary decoding head. It is made of the following blocks:

- `base_model`: base ProphetNet model (`ProphetNetModel`)
- `lm_head`: linear layer without bias to project the hidden states to the vocabulary
Implementations
pub fn new<'p, P>(
p: P,
config: &ProphetNetConfig
) -> Result<ProphetNetForConditionalGeneration, RustBertError> where
P: Borrow<Path<'p>>,
Build a new ProphetNetForConditionalGeneration
Arguments
- `p` - Variable store path for the root of the ProphetNet model
- `config` - `ProphetNetConfig` object defining the model architecture
Example
use rust_bert::prophetnet::{ProphetNetConfig, ProphetNetForConditionalGeneration};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = ProphetNetConfig::from_file(config_path);
let prophetnet_model = ProphetNetForConditionalGeneration::new(&p.root(), &config);
pub fn forward_t(
&self,
input_ids: Option<&Tensor>,
attention_mask: Option<&Tensor>,
input_embeds: Option<&Tensor>,
decoder_input_ids: Option<&Tensor>,
decoder_attention_mask: Option<&Tensor>,
encoder_hidden_states: Option<&Tensor>,
old_layer_states: Option<Vec<(Option<LayerState>, Option<LayerState>)>>,
decoder_input_embeds: Option<&Tensor>,
train: bool
) -> Result<ProphetNetGenerationOutput, RustBertError>
Forward pass through the model
Arguments
- `input_ids` - Optional input tensor of shape (batch size, sequence_length). This or `input_embeds` must be provided.
- `attention_mask` - Optional attention mask of shape (batch size, sequence_length) for the encoder positions. Positions with a mask value of 0 will be masked.
- `input_embeds` - Optional input tensor of shape (batch size, sequence_length, embeddings dimension). This or `input_ids` must be provided.
- `decoder_input_ids` - Optional input tensor of shape (batch size, target_sequence_length). Must be provided when running in generation mode (e.g. initialized with a BOS token).
- `decoder_attention_mask` - Optional attention mask of shape (batch size, target_sequence_length) for the decoder positions. Positions with a mask value of 0 will be masked.
- `encoder_hidden_states` - Optional tensor of shape (batch size, source_sequence_length, encoder_hidden_dim) corresponding to pre-calculated encoder hidden states. These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
- `old_layer_states` - Optional `Vec<(Option<LayerState>, Option<LayerState>)>` of length n_layer containing tuples with the past keys and values for both the self-attention and the encoder cross-attention of each layer of the decoder.
- `decoder_input_embeds` - Optional input tensor of shape (batch size, target_sequence_length, embeddings dimension). This or `decoder_input_ids` must be provided.
- `train` - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
`ProphetNetGenerationOutput` containing:

- `logits` - `Tensor` of shape (batch size, target_sequence_length, vocabulary_size) representing the activations of the last hidden state for the decoder
- `ngram_logits` - `Tensor` of shape (ngram, batch size, target_sequence_length, vocabulary_size) representing the activations of the last hidden state for the decoder ngram stream
- `next_decoder_cache` - `Option<Vec<Option<LayerState>>>` of length n_layer containing the past content for the attention layers with shape (past_sequence_length, batch size, hidden_size)
- `all_decoder_hidden_states` - `Option<Vec<Tensor>>` of length n_layer with shape (batch size, target_sequence_length, hidden_size)
- `all_ngram_decoder_hidden_states` - `Option<Vec<Tensor>>` of length n_layer with shape (ngram, batch size, target_sequence_length, hidden_size)
- `all_attentions` - `Option<Vec<Tensor>>` of length n_layer with shape (batch size, target_sequence_length, hidden_size)
- `all_ngram_attentions` - `Option<Vec<Tensor>>` of length n_layer with shape (ngram, batch size, target_sequence_length, hidden_size)
- `all_cross_attentions` - `Option<Vec<Tensor>>` of length n_layer with shape (batch size, target_sequence_length, hidden_size)
Example
use rust_bert::prophetnet::{ProphetNetConfig, ProphetNetForConditionalGeneration};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = ProphetNetConfig::from_file(config_path);
let prophetnet_model = ProphetNetForConditionalGeneration::new(&vs.root(), &config)?;

let (batch_size, sequence_length, target_sequence_length) = (64, 128, 32);
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Kind::Int64, device));
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let decoder_input_ids = Tensor::ones(&[batch_size, target_sequence_length], (Kind::Int64, device));

let model_output = no_grad(|| {
    prophetnet_model.forward_t(
        Some(&input_tensor),
        Some(&attention_mask),
        None,
        Some(&decoder_input_ids),
        None,
        None,
        None,
        None,
        false,
    )
});
Trait Implementations
fn forward_t(
&self,
input_ids: Option<&Tensor>,
cache: Cache,
attention_mask: Option<&Tensor>,
_token_type_ids: Option<&Tensor>,
_position_ids: Option<&Tensor>,
input_embeds: Option<&Tensor>,
encoder_outputs: Option<&Tensor>,
decoder_input_ids: Option<&Tensor>,
train: bool
) -> Result<LMModelOutput, RustBertError>
Forward pass through the model. Example provided for GPT2. Read more
Generate text based on a vector of prompt texts. Read more
Generate token indices without decoding (useful for token-level operations before returning final text or as validation step during training). Read more
fn generate_from_ids_and_past(
&self,
input_ids: Tensor,
attention_mask: Option<Tensor>,
generate_options: Option<GenerateOptions<'_>>
) -> Vec<GeneratedIndicesOutput>
Generate token indices given a list of indices (useful when the input has been pre-tokenized). Returns a list of output tokens that need to be decoded using a tokenizer. Read more
Returns a reference to the text generator’s tokenizer Read more
Auto Trait Implementations
impl Send for ProphetNetForConditionalGeneration
impl !Sync for ProphetNetForConditionalGeneration
impl Unpin for ProphetNetForConditionalGeneration
Blanket Implementations
Mutably borrows from an owned value. Read more
Instruments this type with the provided Span, returning an
Instrumented wrapper. Read more
type Output = T
Should always be Self
