pub struct T5ForConditionalGeneration { /* private fields */ }

T5 Model for conditional generation

T5 model with a vocabulary decoding head. It is made of the following blocks:

  • base_model: T5Model, the base T5 model
  • model_dim: f64 representation of the model dimension, used to scale the generated logits
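
The model_dim value is used to rescale the decoder activations before they are projected onto the vocabulary to produce logits (T5 scales by model_dim^-0.5 when the language-model head is tied to the input embeddings). A minimal sketch of that scaling step, assuming tied embedding weights; the function and variable names below are illustrative and are not part of the crate:

use tch::Tensor;

// Illustrative only: scale decoder activations by model_dim^-0.5 and project
// them onto the (tied) embedding matrix to obtain per-position logits.
fn project_to_vocab(decoder_output: &Tensor, embedding_weights: &Tensor, model_dim: f64) -> Tensor {
    // decoder_output: (batch size, target_sequence_length, hidden_size)
    // embedding_weights: (vocab_size, hidden_size)
    let scaled = decoder_output * (1.0 / model_dim.sqrt());
    scaled.matmul(&embedding_weights.transpose(0, 1))
}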

Implementations

Build a new T5ForConditionalGeneration

Arguments
  • p - Variable store path for the root of the T5 model
  • config - T5Config object defining the model architecture
  • output_attentions - flag indicating if the model should output the attention weights of intermediate layers
  • output_hidden_states - flag indicating if the model should output the hidden states of intermediate layers
Example
use rust_bert::t5::{T5Config, T5ForConditionalGeneration};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = T5Config::from_file(config_path);
let output_attentions = true;
let output_hidden_states = true;
let t5 = T5ForConditionalGeneration::new(
    &p.root() / "t5",
    &config,
    output_attentions,
    output_hidden_states,
);
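
The constructor above creates and default-initializes the model variables under the "t5" path of the variable store. To run a pretrained checkpoint, those variables can afterwards be populated with tch's VarStore::load; a minimal sketch, assuming the store was declared mutable and that the weights file (path illustrative) uses variable names matching the paths created above:

// Requires `let mut p = nn::VarStore::new(device);` above.
p.load(Path::new("path/to/model.ot"))
    .expect("failed to load pretrained weights");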

Forward pass through the model

Arguments
  • input_ids - Optional input tensor of shape (batch size, source_sequence_length). This or input_embeds must be provided.
  • attention_mask - Optional attention mask of shape (batch size, source_sequence_length) for the encoder positions. Positions with a mask value of 0 will be masked.
  • decoder_input_ids - Optional input tensor of shape (batch size, target_sequence_length). This or decoder_input_embeds must be provided.
  • encoder_outputs - Optional tuple made of a tensor of shape (batch size, source_sequence_length, encoder_hidden_dim) and optional vectors of tensors of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size). These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
  • decoder_attention_mask - Optional attention mask of shape (batch size, target_sequence_length) for the decoder positions. Positions with a mask value of 0 will be masked.
  • input_embeds - Optional input tensor of shape (batch size, source_sequence_length, embeddings dimension). This or input_ids must be provided.
  • decoder_input_embeds - Optional input tensor of shape (batch size, target_sequence_length, embeddings dimension). This or decoder_input_ids must be provided.
  • old_layer_states - Optional vector of length num_layers containing tuples of optional LayerStates containing the last calculated key and value pairs for the decoder. This avoids recomputing attention weights at past positions and speeds up decoding.
  • train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
  • T5ModelOutput containing:
    • decoder_output - Tensor of shape (batch size, target_sequence_length, vocab_size) representing the logits for each sequence position and vocabulary item
    • encoder_hidden_states - Tensor of shape (batch size, source_sequence_length, hidden_size) representing the activations of the last encoder hidden state
    • cache - Option<Vec<(Option<LayerState>, Option<LayerState>)>> of length n_layer containing the past keys and values for both the self attention and the encoder cross attention of each layer of the decoder.
    • all_encoder_hidden_states - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
    • all_encoder_attentions - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
    • all_decoder_hidden_states - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
    • all_decoder_attentions - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
Example
use rust_bert::t5::{T5Config, T5ForConditionalGeneration};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind::Int64, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = T5Config::from_file(config_path);
let t5_model = T5ForConditionalGeneration::new(&p.root() / "t5", &config, false, false);

let (batch_size, source_sequence_length, target_sequence_length) = (64, 128, 56);
// Random token ids; values only need to lie within the model vocabulary.
let input_tensor = Tensor::randint(1000, &[batch_size, source_sequence_length], (Int64, device));
let target_tensor = Tensor::randint(1000, &[batch_size, target_sequence_length], (Int64, device));
let encoder_attention_mask =
    Tensor::ones(&[batch_size, source_sequence_length], (Int64, device));
let decoder_attention_mask =
    Tensor::ones(&[batch_size, target_sequence_length], (Int64, device));

let model_output = no_grad(|| {
    t5_model.forward_t(
        Some(&input_tensor),
        Some(&encoder_attention_mask),
        None,
        Some(&target_tensor),
        Some(&decoder_attention_mask),
        None,
        None,
        None,
        false,
    )
});
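
As described for old_layer_states, the key and value pairs returned by a forward pass can be passed back into the next call so that past decoder positions are not recomputed. A sketch of such a follow-up step, continuing the example above; the cache field name follows the Returns list, and feeding only the last target position illustrates incremental decoding rather than a complete generation loop:

// Feed only the newest target token; earlier positions are covered by the cache.
let last_target_token = target_tensor.narrow(1, target_sequence_length - 1, 1);
let next_output = no_grad(|| {
    t5_model.forward_t(
        Some(&input_tensor),
        Some(&encoder_attention_mask),
        None,
        Some(&last_target_token),
        None,
        None,
        None,
        model_output.cache, // cached keys/values reused as old_layer_states
        false,
    )
});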

Trait Implementations

Forward pass through the model

Arguments
  • input_ids - Optional input tensor of shape (batch size, sequence_length). If None, pre-computed embeddings must be provided (see input_embeds)
  • layer_past - Optional vector of length num_layers containing tuples of optional LayerStates containing the last calculated key and value pairs for the decoder. This avoids recomputing attention weights at past positions and speeds up decoding.
  • attention_mask - Optional mask of shape (batch size, sequence_length). Masked positions have value 0, non-masked positions have value 1. If None, all positions are set to 1.
  • input_embeds - Unused for T5
  • token_type_ids - Unused for T5
  • position_ids - Unused for T5
  • encoder_outputs - Optional tensor of shape (batch size, source_sequence_length, hidden_size). When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
  • decoder_input_ids - Optional input tensor of shape (batch size, target_sequence_length).
  • train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
  • LMModelOutput containing:
    • lm_logits - Tensor of shape (batch size, sequence_length, vocab_size) representing the logits for each vocab item and position
    • cache - T5Cache made of Option<Vec<(Option<LayerState>, Option<LayerState>)>> of length n_layer containing the past keys and values for both the self attention and the encoder cross attention of each layer of the decoder.
Example
use rust_bert::t5::{T5Config, T5ForConditionalGeneration};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind::Int64, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = T5Config::from_file(config_path);
let t5_model = T5ForConditionalGeneration::new(&p.root() / "t5", &config, false, false);

let (batch_size, source_sequence_length, target_sequence_length) = (64, 128, 56);
// Random token ids; values only need to lie within the model vocabulary.
let input_tensor = Tensor::randint(1000, &[batch_size, source_sequence_length], (Int64, device));
let target_tensor = Tensor::randint(1000, &[batch_size, target_sequence_length], (Int64, device));
let encoder_attention_mask =
    Tensor::ones(&[batch_size, source_sequence_length], (Int64, device));
let decoder_attention_mask =
    Tensor::ones(&[batch_size, target_sequence_length], (Int64, device));

let model_output = no_grad(|| {
    t5_model.forward_t(
        Some(&input_tensor),
        Some(&encoder_attention_mask),
        None,
        Some(&target_tensor),
        Some(&decoder_attention_mask),
        None,
        None,
        None,
        false,
    )
});
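
The lm_logits tensor described above has shape (batch size, sequence_length, vocab_size), so a greedy next-token choice can be read off by taking the argmax over the vocabulary dimension at the last decoder position. A minimal sketch, where lm_logits stands for the logits tensor taken out of the returned LMModelOutput (greedy selection only; an illustration, not the crate's generation routine):

// lm_logits: (batch_size, target_sequence_length, vocab_size)
let next_token_logits = lm_logits.select(1, target_sequence_length - 1); // last position
let next_tokens = next_token_logits.argmax(-1, false); // (batch_size,) greedy token ids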

Generate text based on a vector of prompt texts. Read more

Generate token indices without decoding (useful for token-level operations before returning final text or as a validation step during training). Read more

Generate token indices given a list of indices (useful when the input has been pre-tokenized). Returns a list of output tokens that need to be decoded using a tokenizer. Read more

Returns a reference to the text generator’s tokenizer Read more

Auto Trait Implementations

Blanket Implementations

Gets the TypeId of self. Read more

Immutably borrows from an owned value. Read more

Mutably borrows from an owned value. Read more

Returns the argument unchanged.

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more

Instruments this type with the current Span, returning an Instrumented wrapper. Read more

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

The alignment of pointer.

The type for initializers.

Initializes a with the given initializer. Read more

Dereferences the given pointer. Read more

Mutably dereferences the given pointer. Read more

Drops the object pointed to by the given pointer. Read more

Should always be Self

The type returned in the event of a conversion error.

Performs the conversion.

The type returned in the event of a conversion error.

Performs the conversion.

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more