Struct rust_bert::bart::BartForSequenceClassification
pub struct BartForSequenceClassification { /* fields omitted */ }
BART Model for sequence classification
BART model with a classification head. It is made of the following blocks:
- base_model: BartModel - Base BART model
- classification_head: BartClassificationHead - made of 2 linear layers mapping hidden states to a target class (see the pooling sketch below)
- eos_token_id: token id for the EOS token carrying the pooled representation used for classification
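To make the head's role concrete, here is a minimal sketch of the EOS pooling described above, assuming the EOS token sits at the last position of every sequence; the helper name and the dense -> tanh -> projection wiring are illustrative, not the crate's internal code:

use tch::{nn, nn::Module, Tensor};

// Illustrative only: pool the decoder hidden state at the (assumed last) EOS
// position, then map it through two linear layers to class logits.
fn classify_from_eos(
    decoder_hidden: &Tensor, // (batch, seq_len, hidden_size)
    dense: &nn::Linear,      // hidden_size -> hidden_size
    out_proj: &nn::Linear,   // hidden_size -> num_classes
) -> Tensor {
    let pooled = decoder_hidden.select(1, -1); // (batch, hidden_size)
    out_proj.forward(&pooled.apply(dense).tanh())
}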
Implementations
pub fn new<'p, P>(p: P, config: &BartConfig) -> BartForSequenceClassification
where
    P: Borrow<Path<'p>>,
Build a new BartForSequenceClassification
Arguments
- p - Variable store path for the root of the BART model
- config - BartConfig object defining the model architecture
Example
use rust_bert::bart::{BartConfig, BartForSequenceClassification};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = BartConfig::from_file(config_path);
let bart: BartForSequenceClassification =
    BartForSequenceClassification::new(&p.root() / "bart", &config);
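The example above leaves the variable store uninitialized; in practice pretrained weights are usually loaded into it after building the model. A short sketch, assuming a tch-compatible weights file at a hypothetical path (the variable store is rebound mutably because loading requires it):

// Rebind the variable store mutably so pretrained weights can be loaded into it
let mut vs = nn::VarStore::new(device);
let bart = BartForSequenceClassification::new(&vs.root() / "bart", &config);
// Hypothetical weights file; variable names must match the model layout
vs.load(Path::new("path/to/model.ot")).expect("failed to load weights");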
Forward pass through the model

Arguments
- input_ids - Optional input tensor of shape (batch size, source_sequence_length). Must be provided when not running in generation mode
- attention_mask - Optional attention mask of shape (batch size, source_sequence_length) for the encoder positions. Positions with a mask value of 0 will be masked.
- encoder_outputs - Optional tuple made of a tensor of shape (batch size, source_sequence_length, encoder_hidden_dim) and optional vectors of tensors of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size). These correspond to the encoder last hidden state and optional hidden states/attention weights for encoder layers. When provided, the encoder hidden state will not be recalculated. Useful for generation tasks.
- decoder_input_ids - Optional input tensor of shape (batch size, target_sequence_length). Must be provided when running in generation mode (e.g. initialized with a BOS token)
- decoder_attention_mask - Optional attention mask of shape (batch size, target_sequence_length) for the decoder positions. Positions with a mask value of 0 will be masked.
- train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
BartModelOutput containing:
- decoder_output - Tensor of shape (batch size, num_classes) representing the activations for each class and batch item
- encoder_hidden_states - Option<Tensor> of shape (batch size, source_sequence_length, hidden_size) representing the activations of the last encoder hidden state if it was not provided, otherwise None
- cache - (Option<Tensor>, Option<Vec<(&LayerState, &LayerState)>>) of length n_layer containing the encoder padding mask and past keys and values for both the self attention and the encoder cross attention of each layer of the decoder
- all_encoder_hidden_states - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
- all_encoder_attentions - Option<Vec<Tensor>> of length num_encoder_layers with shape (batch size, source_sequence_length, hidden_size)
- all_decoder_hidden_states - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
- all_decoder_attentions - Option<Vec<Tensor>> of length num_decoder_layers with shape (batch size, target_sequence_length, hidden_size)
Example
use rust_bert::bart::{BartConfig, BartForSequenceClassification};
use rust_bert::Config;
use std::path::Path;
use tch::{kind::Kind::Int64, nn, no_grad, Device, Tensor};

let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = BartConfig::from_file(Path::new("path/to/config.json"));
let bart_model = BartForSequenceClassification::new(&vs.root() / "bart", &config);

let (batch_size, source_sequence_length, target_sequence_length) = (64, 128, 56);
// Tensor::rand does not support integer kinds; sample dummy token ids with randint instead
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, source_sequence_length], (Int64, device));
let target_tensor = Tensor::randint(config.vocab_size, &[batch_size, target_sequence_length], (Int64, device));
let encoder_attention_mask = Tensor::ones(&[batch_size, source_sequence_length], (Int64, device));
// The decoder mask must match the target sequence length, not the source
let decoder_attention_mask = Tensor::ones(&[batch_size, target_sequence_length], (Int64, device));

let model_output = no_grad(|| {
    bart_model.forward_t(
        &input_tensor,
        Some(&encoder_attention_mask),
        None,
        Some(&target_tensor),
        Some(&decoder_attention_mask),
        false,
    )
});
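The class logits are returned in the decoder_output field of the output; the following short follow-up (illustrative, using standard tch tensor ops) turns them into per-item predictions:

// Continuing from model_output above: decoder_output holds (batch_size, num_classes) logits
let predicted_classes = model_output.decoder_output.argmax(-1, false); // (batch_size,)
let class_probabilities = model_output.decoder_output.softmax(-1, tch::Kind::Float);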
Auto Trait Implementations
impl Send for BartForSequenceClassification
impl !Sync for BartForSequenceClassification
impl Unpin for BartForSequenceClassification
impl UnwindSafe for BartForSequenceClassification