Struct rust_bert::longformer::LongformerForTokenClassification
pub struct LongformerForTokenClassification { /* fields omitted */ }
Longformer for token classification (e.g. NER, POS)
Token-level classifier predicting a label for each token provided. It is made of the following blocks:
- `longformer`: Base Longformer model
- `classifier`: Linear layer for token classification
Implementations
pub fn new<'p, P>(
p: P,
config: &LongformerConfig
) -> LongformerForTokenClassification where
P: Borrow<Path<'p>>,
Build a new LongformerForTokenClassification
Arguments
- `p` - Variable store path for the root of the Longformer model
- `config` - `LongformerConfig` object defining the model architecture
Example
use rust_bert::longformer::{LongformerConfig, LongformerForTokenClassification};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = LongformerConfig::from_file(config_path);
let longformer_model = LongformerForTokenClassification::new(&p.root(), &config);

pub fn forward_t(
    &self,
    input_ids: Option<&Tensor>,
    attention_mask: Option<&Tensor>,
    global_attention_mask: Option<&Tensor>,
    token_type_ids: Option<&Tensor>,
    position_ids: Option<&Tensor>,
    input_embeds: Option<&Tensor>,
    train: bool
) -> Result<LongformerTokenClassificationOutput, RustBertError>

Forward pass through the model
Arguments
- `input_ids` - Optional input tensor of shape (*batch size*, *sequence_length*). This or `input_embeds` must be provided.
- `attention_mask` - Optional attention mask of shape (*batch size*, *sequence_length*). Positions with a mask value of 0 will be masked.
- `global_attention_mask` - Optional attention mask of shape (*batch size*, *sequence_length*). Positions with a mask value of 1 will attend to all other positions in the sequence (see the sketch after this list).
- `token_type_ids` - Optional segment id of shape (*batch size*, *sequence_length*). Convention is a value of 0 for the first sentence (including *SEP*) and 1 for the second sentence. If None, set to 0.
- `position_ids` - Optional position ids of shape (*batch size*, *sequence_length*). If None, will be incremented from 0.
- `input_embeds` - Optional pre-computed input embeddings of shape (*batch size*, *sequence_length*, *hidden_size*). If None, input ids must be provided (see `input_ids`).
- `train` - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
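A common pattern for `global_attention_mask` (not shown in the example below) is to grant global attention to a few task-specific positions, such as the first token of each sequence. A minimal sketch of building such a mask with plain tch calls; the variable names and the choice of position are illustrative only:

use tch::{Device, Kind::Int64, Tensor};

let device = Device::Cpu;
let (batch_size, sequence_length) = (64, 128);
// Global attention (value 1) on the first token of each sequence,
// local attention (value 0) everywhere else
let global_attention_mask = Tensor::cat(
    &[
        Tensor::ones(&[batch_size, 1], (Int64, device)),
        Tensor::zeros(&[batch_size, sequence_length - 1], (Int64, device)),
    ],
    1,
);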
Returns
`LongformerTokenClassificationOutput` containing:
- `logits` - `Tensor` of shape (*batch size*, *sequence_length*, *num_labels*) containing the logits for each of the input tokens and classes
- `all_hidden_states` - `Option<Vec<Tensor>>` of length *num_hidden_layers* with shape (*batch size*, *sequence_length*, *hidden_size*)
- `all_attentions` - `Option<Vec<Tensor>>` of length *num_hidden_layers* with shape (*batch size*, *num_heads*, *sequence_length*, *attention_window_size*, *x + attention_window_size + 1*) where x is the number of tokens with global attention
- `all_global_attentions` - `Option<Vec<Tensor>>` of length *num_hidden_layers* with shape (*batch size*, *num_heads*, *sequence_length*, *attention_window_size*, *x*) where x is the number of tokens with global attention
Example
use rust_bert::longformer::{LongformerConfig, LongformerForTokenClassification};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind::Int64, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = LongformerConfig::from_file(config_path);
let longformer_model = LongformerForTokenClassification::new(&vs.root(), &config);

let (batch_size, sequence_length) = (64, 128);
// Token ids must be integer-valued: use randint rather than rand for an Int64 tensor
// (the upper bound of 1000 is arbitrary, standing in for the vocabulary size)
let input_tensor = Tensor::randint(1000, &[batch_size, sequence_length], (Int64, device));
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Int64, device));
let global_attention_mask = Tensor::zeros(&[batch_size, sequence_length], (Int64, device));
let model_output = no_grad(|| {
    longformer_model
        .forward_t(
            Some(&input_tensor),
            Some(&attention_mask),
            Some(&global_attention_mask),
            None,
            None,
            None,
            false,
        )
        .unwrap()
});
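The returned logits can then be reduced to per-token label ids, for example with an argmax over the last (label) dimension. A minimal sketch continuing the example above; `predicted_labels` is an illustrative name:

// Shape (batch_size, sequence_length, num_labels) -> (batch_size, sequence_length)
let predicted_labels = model_output.logits.argmax(-1, false);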
Auto Trait Implementations
impl Send for LongformerForTokenClassification
impl !Sync for LongformerForTokenClassification
impl Unpin for LongformerForTokenClassification