Struct rust_bert::models::bart::BartForConditionalGeneration

pub struct BartForConditionalGeneration { /* private fields */ }
BART Model for conditional generation
BART model with a vocabulary decoding head. It is made of the following blocks:
base_model: BartModel - Base BART model
linear: Linear layer without bias tied to the weights of the token id embeddings
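Because the decoding head shares its weights with the token embeddings, the vocabulary logits amount to a projection of the decoder hidden states onto the transposed embedding matrix. A conceptual sketch of that tied projection (illustrative only, not the crate's actual implementation):

use tch::Tensor;

// decoder_hidden: (batch, target_len, hidden); embedding_weight: (vocab_size, hidden)
fn tied_lm_head(decoder_hidden: &Tensor, embedding_weight: &Tensor) -> Tensor {
    // No bias term: logits are a matmul against the shared embedding weights.
    decoder_hidden.matmul(&embedding_weight.transpose(0, 1)) // (batch, target_len, vocab_size)
}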
Implementations§
impl BartForConditionalGeneration

pub fn new<'p, P>(p: P, config: &BartConfig) -> BartForConditionalGeneration
Build a new BartForConditionalGeneration
Arguments
p - Variable store path for the root of the BART model
config - BartConfig object defining the model architecture
Example
use rust_bert::bart::{BartConfig, BartForConditionalGeneration};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = BartConfig::from_file(config_path);
let bart: BartForConditionalGeneration =
    BartForConditionalGeneration::new(&p.root() / "bart", &config);
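If pretrained weights are available in tch's serialized format, they can be loaded into the variable store after construction. A minimal sketch, assuming a hypothetical weights file at path/to/model.ot whose variable names match those created under the "bart" root:

let mut vs = nn::VarStore::new(Device::Cpu);
let bart = BartForConditionalGeneration::new(&vs.root() / "bart", &config);
// "path/to/model.ot" is a placeholder path; VarStore::load restores tensors by name.
vs.load(Path::new("path/to/model.ot")).expect("failed to load weights");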
pub fn forward_t(
    &self,
    input_ids: Option<&Tensor>,
    attention_mask: Option<&Tensor>,
    encoder_output: Option<&Tensor>,
    decoder_input_ids: Option<&Tensor>,
    decoder_attention_mask: Option<&Tensor>,
    old_layer_states: Option<Vec<(Option<LayerState>, Option<LayerState>)>>,
    train: bool,
) -> BartModelOutput
Forward pass through the model
Arguments
input_ids - Optional input tensor of shape (batch size, source_sequence_length). Must be provided when not running in generation mode
attention_mask - Optional attention mask of shape (batch size, source_sequence_length) for the encoder positions. Positions with a mask with value 0 will be masked.
encoder_output - Optional tensor of shape (batch size, source_sequence_length, hidden_size) corresponding to the encoder last hidden state. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
decoder_input_ids - Optional input tensor of shape (batch size, target_sequence_length). Must be provided when running in generation mode (e.g. initialized with a BOS token)
decoder_attention_mask - Optional attention mask of shape (batch size, target_sequence_length) for the decoder positions. Positions with a mask with value 0 will be masked.
old_layer_states - Optional vector of tuples of optional LayerState holding the past keys and values for both the self attention and the encoder cross attention of each layer of the decoder. When provided, these are reused to speed up decoding.
train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
BartModelOutput containing:
decoder_output - Tensor of shape (batch size, target_sequence_length, vocab_size) representing the logits for each vocabulary item and position
encoder_hidden_states - Tensor of shape (batch size, source_sequence_length, hidden_size) representing the activations of the last encoder hidden state
cache - (Option<Tensor>, Option<Vec<(Option<LayerState>, Option<LayerState>)>>) of length n_layer containing the encoder padding mask and past keys and values for both the self attention and the encoder cross attention of each layer of the decoder
all_encoder_hidden_states - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
all_encoder_attentions - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
all_decoder_hidden_states - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
all_decoder_attentions - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
Example
use rust_bert::bart::{BartConfig, BartForConditionalGeneration};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind::Int64, Tensor};

let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = BartConfig::from_file(Path::new("path/to/config.json"));
let bart_model = BartForConditionalGeneration::new(&vs.root() / "bart", &config);

let (batch_size, source_sequence_length, target_sequence_length) = (64, 128, 56);
// Random token ids as placeholder inputs (randint, since rand cannot produce Int64)
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, source_sequence_length], (Int64, device));
let target_tensor = Tensor::randint(config.vocab_size, &[batch_size, target_sequence_length], (Int64, device));
let encoder_attention_mask = Tensor::ones(&[batch_size, source_sequence_length], (Int64, device));
let decoder_attention_mask = Tensor::ones(&[batch_size, target_sequence_length], (Int64, device));

let model_output = no_grad(|| {
    bart_model.forward_t(
        Some(&input_tensor),
        Some(&encoder_attention_mask),
        None,
        Some(&target_tensor),
        Some(&decoder_attention_mask),
        None,
        false,
    )
});
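The returned logits can then be consumed directly; for instance, a greedy next-token step can be read off the last target position (a sketch, assuming the BartModelOutput field name decoder_output listed above):

// Logits at the last target position: (batch_size, vocab_size)
let last_logits = model_output.decoder_output.select(1, -1);
// Greedy choice of the next token id for each batch element
let next_token_ids = last_logits.argmax(-1, false);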
pub fn encode(&self, input_ids: &Tensor, attention_mask: Option<&Tensor>) -> Tensor
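A typical use of encode is to compute the encoder hidden state once and reuse it across decoding steps by passing it back through forward_t's encoder_output argument; a sketch reusing the variables from the example above:

let encoder_output = bart_model.encode(&input_tensor, Some(&encoder_attention_mask));
let step_output = bart_model.forward_t(
    None, // input_ids may be omitted once encoder_output is provided
    Some(&encoder_attention_mask),
    Some(&encoder_output),
    Some(&target_tensor),
    Some(&decoder_attention_mask),
    None,
    false,
);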
Auto Trait Implementations§
impl RefUnwindSafe for BartForConditionalGeneration
impl Send for BartForConditionalGeneration
impl !Sync for BartForConditionalGeneration
impl Unpin for BartForConditionalGeneration
impl UnwindSafe for BartForConditionalGeneration
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.