pub struct PegasusModel { /* private fields */ }

Pegasus Base model

Base architecture for the Pegasus model, usually complemented with a task-specific head such as a language model head. It is made of the following blocks:

  • encoder: PegasusEncoder (transformer) made of a vector of encoding layers
  • decoder: PegasusDecoder (transformer) made of a vector of decoding layers with self-attention and encoder cross-attention. Caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values); a sketch of a cached decoding loop follows the forward-pass example below.

Implementations

new - Build a new PegasusModel

Arguments
  • p - Variable store path for the root of the Pegasus model
  • config - PegasusConfig object defining the model architecture
Example
use rust_bert::pegasus::{PegasusConfig, PegasusModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
// Mutable so that pretrained weights can be loaded into the store afterwards
let mut p = nn::VarStore::new(device);
// Read the model hyper-parameters from a JSON configuration file
let config = PegasusConfig::from_file(config_path);
let pegasus: PegasusModel = PegasusModel::new(&p.root() / "pegasus", &config);
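
Note that new only registers the model variables in the store; it does not load any pretrained weights. A minimal sketch for populating them afterwards, assuming a tch-compatible serialized weight file at a hypothetical path:

// Populate the freshly created variables from a serialized weight file;
// "path/to/model.ot" is a hypothetical placeholder path.
p.load("path/to/model.ot").expect("failed to load weights");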

forward_t - Forward pass through the model

Arguments
  • input_ids - Optional input tensor of shape (batch size, source_sequence_length). Must be provided when not running in generation mode
  • attention_mask - Optional attention mask of shape (batch size, source_sequence_length) for the encoder positions. Positions with a mask value of 0 will be masked.
  • decoder_input_ids - Input tensor of shape (batch size, target_sequence_length). This input is required; when running in generation mode it is typically initialized with a BOS token.
  • encoder_outputs - Optional tuple made of a tensor of shape (batch size, source_sequence_length, encoder_hidden_dim) and optional vectors of tensors of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size). These correspond to the encoder last hidden state and optional hidden states/attention weights for the encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
  • decoder_attention_mask - Optional attention mask of shape (batch size, target_sequence_length) for the decoder positions. Positions with a mask value of 0 will be masked.
  • cache - Optional cached layer states from a previous forward pass (the cache field of a previous PegasusModelOutput). When provided, the previously computed decoder keys and values are not recalculated. Should be None for a first pass.
  • train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
  • PegasusModelOutput containing:
    • decoder_output - Tensor of shape (batch size, target_sequence_length, hidden_size) representing the activations of the last decoder hidden state
    • encoder_hidden_states - Option<Tensor> of shape (batch size, source_sequence_length, hidden_size) representing the activations of the last encoder hidden state when the encoder was run as part of this forward pass, or None if encoder_outputs was provided as an input
    • cache - (Option<Tensor>, Option<Vec<(&LayerState, &LayerState)>>) containing the encoder padding mask and, for each of the n_layer decoder layers, the past keys and values for both the self-attention and the encoder cross-attention.
    • all_encoder_hidden_states - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
    • all_encoder_attentions - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
    • all_decoder_hidden_states - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
    • all_decoder_attentions - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
Example
use rust_bert::pegasus::{PegasusConfig, PegasusModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = PegasusConfig::from_file(config_path);
let pegasus_model: PegasusModel = PegasusModel::new(&vs.root() / "pegasus", &config);

let (batch_size, source_sequence_length, target_sequence_length) = (64, 128, 56);
// Dummy token ids; 1000 is an arbitrary placeholder below the vocabulary size.
let input_tensor = Tensor::randint(1000, &[batch_size, source_sequence_length], (Kind::Int64, device));
let decoder_input_tensor = Tensor::randint(1000, &[batch_size, target_sequence_length], (Kind::Int64, device));
let encoder_attention_mask =
    Tensor::ones(&[batch_size, source_sequence_length], (Kind::Int64, device));
let decoder_attention_mask =
    Tensor::ones(&[batch_size, target_sequence_length], (Kind::Int64, device));

let model_output = no_grad(|| {
    pegasus_model.forward_t(
        Some(&input_tensor),           // input_ids
        Some(&encoder_attention_mask), // attention_mask
        &decoder_input_tensor,         // decoder_input_ids (required)
        None,                          // encoder_outputs: recomputed from input_ids
        Some(&decoder_attention_mask), // decoder_attention_mask
        None,                          // cache: no past layer states on a first pass
        false,                         // train: disable dropout for inference
    )
});
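
The cache returned in PegasusModelOutput can be fed back into subsequent forward passes, exercising the decoder caching described in the architecture overview. The loop below is a minimal sketch of greedy incremental decoding, not the crate's actual generation pipeline: it assumes the positional argument order from the example above and the cache/decoder_output field names from the Returns list, and pick_next_token is a hypothetical helper standing in for a language model head plus argmax.

// Hedged sketch: incremental decoding reusing cached decoder layer states.
// `pick_next_token` is a hypothetical helper (LM head + argmax) returning
// the next decoder input ids of shape (batch size, 1).
let mut cache = None;
// Hypothetical BOS initialization for the decoder (token id 0 as placeholder).
let mut decoder_input = Tensor::zeros(&[batch_size, 1], (Kind::Int64, device));
for _step in 0..target_sequence_length {
    let step_output = pegasus_model.forward_t(
        Some(&input_tensor),
        Some(&encoder_attention_mask),
        &decoder_input, // only the newest token; earlier states come from the cache
        None,           // passing the previously returned encoder state here would skip re-encoding
        None,           // no decoder padding mask during generation
        cache,          // past self-attention and cross-attention keys/values
        false,
    );
    cache = step_output.cache;
    decoder_input = pick_next_token(&step_output.decoder_output);
}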
