Struct rust_bert::electra::ElectraForTokenClassification
pub struct ElectraForTokenClassification { /* fields omitted */ }
Electra for token classification (e.g. POS, NER)
Electra model with a token tagging head. It is made of the following blocks:
- electra: ElectraModel (based on a BertEncoder and custom embeddings)
- dropout: Dropout layer
- classifier: linear layer of dimension (hidden_size, num_classes) to project the output to the target label space
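The head itself is simple: conceptually, the forward pass applies dropout to the encoder's hidden states and projects them into the label space. A minimal tch-only sketch of that projection, where hidden_states stands in for the encoder output and all names and sizes are illustrative:

use tch::{nn, nn::Module, Device, Kind, Tensor};

let (hidden_size, num_classes) = (256, 9);
let vs = nn::VarStore::new(Device::Cpu);
// classifier: linear layer of dimension (hidden_size, num_classes)
let classifier = nn::linear(vs.root(), hidden_size, num_classes, Default::default());

// Stand-in for the encoder output, shape (batch size, sequence_length, hidden_size)
let hidden_states = Tensor::rand(&[2, 8, hidden_size], (Kind::Float, Device::Cpu));
let logits = hidden_states
    .dropout(0.1, true) // dropout block (train = true)
    .apply(&classifier); // -> (batch size, sequence_length, num_classes)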
Implementations
Defines the implementation of the ElectraForTokenClassification.
pub fn new<'p, P>(p: P, config: &ElectraConfig) -> ElectraForTokenClassification where
P: Borrow<Path<'p>>,
Build a new ElectraForTokenClassification
Arguments
- p - Variable store path for the root of the Electra model
- config - ElectraConfig object defining the model architecture
Example
use rust_bert::electra::{ElectraConfig, ElectraForTokenClassification};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = ElectraConfig::from_file(config_path);
let electra_model: ElectraForTokenClassification =
    ElectraForTokenClassification::new(&p.root(), &config);
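Note that new only creates freshly initialized variables in the variable store; pretrained weights are then loaded into that store. A minimal sketch, assuming a tch-compatible weights file at a hypothetical path (VarStore::load needs a mutable store, and the tensor names in the file must match the variables created under its root):

use rust_bert::electra::{ElectraConfig, ElectraForTokenClassification};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};

let device = Device::Cpu;
let mut vs = nn::VarStore::new(device);
let config = ElectraConfig::from_file(Path::new("path/to/config.json"));
let electra_model = ElectraForTokenClassification::new(&vs.root(), &config);
// Hypothetical weights path; after loading, electra_model uses the
// pretrained weights because its variables live in vs.
vs.load(Path::new("path/to/model.ot")).expect("failed to load weights");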
Forward pass through the model

Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). If None, pre-computed embeddings must be provided (see input_embeds)
- mask - Optional mask of shape (batch size, sequence_length). Masked positions have value 0, non-masked positions value 1. If None, set to 1 (see the sketch after this list for deriving a mask from padded sequence lengths)
- token_type_ids - Optional segment ids of shape (batch size, sequence_length). Convention is a value of 0 for the first sentence (incl. SEP) and 1 for the second sentence. If None, set to 0.
- position_ids - Optional position ids of shape (batch size, sequence_length). If None, will be incremented from 0.
- input_embeds - Optional pre-computed input embeddings of shape (batch size, sequence_length, hidden_size). If None, input ids must be provided (see input_ids)
- train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
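When batches contain padded sequences, the mask is typically derived from the true sequence lengths. A minimal sketch of such a helper; the function name and signature are illustrative, not part of the rust_bert API:

use tch::{Device, Kind::Int64, Tensor};

// Builds a (batch size, seq_len) attention mask: 1 for real tokens, 0 for padding.
fn padding_mask(lengths: &[i64], seq_len: i64, device: Device) -> Tensor {
    let rows: Vec<Tensor> = lengths
        .iter()
        .map(|&len| {
            Tensor::cat(
                &[
                    Tensor::ones(&[len], (Int64, device)),
                    Tensor::zeros(&[seq_len - len], (Int64, device)),
                ],
                0,
            )
        })
        .collect();
    Tensor::stack(&rows, 0)
}

let mask = padding_mask(&[5, 3, 8], 8, Device::Cpu);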
Returns
ElectraTokenClassificationOutput containing:
- logits - Tensor of shape (batch size, sequence_length, num_labels) containing the logits for each of the input tokens and classes
- all_hidden_states - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
- all_attentions - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
Example
use rust_bert::electra::{ElectraConfig, ElectraForTokenClassification};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind::Int64, Tensor};

let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = ElectraConfig::from_file(Path::new("path/to/config.json"));
let electra_model = ElectraForTokenClassification::new(&vs.root(), &config);

let (batch_size, sequence_length) = (64, 128);
// Tensor::rand produces floats; draw random token ids with randint instead
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Int64, device));
let mask = Tensor::ones(&[batch_size, sequence_length], (Int64, device)); // 1 = not masked
let token_type_ids = Tensor::zeros(&[batch_size, sequence_length], (Int64, device));
let position_ids = Tensor::arange(sequence_length, (Int64, device)).expand(&[batch_size, sequence_length], true);

let model_output = no_grad(|| {
    electra_model.forward_t(
        Some(&input_tensor),
        Some(&mask),
        Some(&token_type_ids),
        Some(&position_ids),
        None,
        false,
    )
});

Auto Trait Implementations
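To turn the output into per-token predictions, reduce the class dimension of the logits with an argmax. A minimal sketch continuing from model_output in the example above:

// Predicted label id for every token, shape (batch size, sequence_length)
let predictions = model_output.logits.argmax(-1, false);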
impl Send for ElectraForTokenClassification
impl !Sync for ElectraForTokenClassification
impl Unpin for ElectraForTokenClassification
impl UnwindSafe for ElectraForTokenClassification
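Because the struct is Send but not Sync, a model can be moved into another thread but not shared between threads by plain reference. One way to share it is the standard Arc<Mutex<_>> pattern (Mutex<T> is Sync whenever T is Send). A minimal sketch, with electra_model built as in the examples above:

use std::sync::{Arc, Mutex};
use std::thread;

let shared_model = Arc::new(Mutex::new(electra_model));
let worker = {
    let shared_model = Arc::clone(&shared_model);
    thread::spawn(move || {
        // Lock to get exclusive access to the model from this thread
        let model = shared_model.lock().unwrap();
        // run model.forward_t(...) here
        let _ = model;
    })
};
worker.join().unwrap();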