Struct rust_bert::roberta::RobertaForMaskedLM
pub struct RobertaForMaskedLM { /* fields omitted */ }
RoBERTa for masked language model
Base RoBERTa model with a RoBERTa masked language model head to predict missing tokens, for example "Looks like one [MASK] is missing" -> "person"
It is made of the following blocks:
- roberta: Base BertModel with RoBERTa embeddings
- lm_head: RoBERTa LM prediction head
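Conceptually, the struct therefore looks like the following sketch (field names follow the block list above; visibility and the exact generic parameters may differ from the actual definition):

struct RobertaForMaskedLM {
    roberta: BertModel<RobertaEmbeddings>, // base encoder using RoBERTa embeddings
    lm_head: RobertaLMHead,                // language model prediction head over the vocabulary
}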
Implementations
Build a new RobertaForMaskedLM
Arguments
- p - Variable store path for the root of the RobertaForMaskedLM model
- config - BertConfig object defining the model architecture and vocab size
Example
use rust_bert::bert::BertConfig;
use rust_bert::roberta::RobertaForMaskedLM;
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = BertConfig::from_file(config_path);
let roberta = RobertaForMaskedLM::new(&p.root() / "roberta", &config);
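Building the model only registers randomly initialized variables in the store; pretrained weights are typically loaded into the VarStore afterwards. A minimal sketch, assuming a weights file already converted to the tch format (the "path/to/model.ot" path is a placeholder):

use rust_bert::bert::BertConfig;
use rust_bert::roberta::RobertaForMaskedLM;
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};

let device = Device::Cpu;
// The store must be mutable so that its variables can be overwritten on load
let mut vs = nn::VarStore::new(device);
let config = BertConfig::from_file(Path::new("path/to/config.json"));
let roberta = RobertaForMaskedLM::new(&vs.root() / "roberta", &config);
// Variable names in the file must match the store layout, including the "roberta" prefix
vs.load(Path::new("path/to/model.ot")).expect("could not load weights");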
Forward pass through the model
Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). If None, pre-computed embeddings must be provided (see input_embeds)
- mask - Optional mask of shape (batch size, sequence_length). Masked positions have value 0, non-masked positions value 1. If None, set to 1
- token_type_ids - Optional segment id of shape (batch size, sequence_length). Convention is value of 0 for the first sentence (including the separator token) and 1 for the second sentence. If None, set to 0.
- position_ids - Optional position ids of shape (batch size, sequence_length). If None, will be incremented from 0.
- input_embeds - Optional pre-computed input embeddings of shape (batch size, sequence_length, hidden_size). If None, input ids must be provided (see input_ids)
- encoder_hidden_states - Optional encoder hidden state of shape (batch size, encoder_sequence_length, hidden_size). If the model is defined as a decoder and encoder_hidden_states is not None, used in the cross-attention layer as keys and values (query from the decoder).
- encoder_mask - Optional encoder attention mask of shape (batch size, encoder_sequence_length). If the model is defined as a decoder and encoder_hidden_states is not None, used to mask encoder values. Positions with value 0 will be masked.
- train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
- output - Tensor of shape (batch size, sequence_length, vocab_size)
- hidden_states - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
- attentions - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
Example
use rust_bert::bert::BertConfig;
use rust_bert::roberta::RobertaForMaskedLM;
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind::Int64, Tensor};

let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = BertConfig::from_file(Path::new("path/to/config.json"));
let roberta_model = RobertaForMaskedLM::new(&vs.root() / "roberta", &config);

let (batch_size, sequence_length) = (64, 128);
// Tensor::rand only produces floating-point values; randint draws valid token ids
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Int64, device));
// Attention mask of ones: no position is masked (masked positions would be 0)
let mask = Tensor::ones(&[batch_size, sequence_length], (Int64, device));
let token_type_ids = Tensor::zeros(&[batch_size, sequence_length], (Int64, device));
let position_ids = Tensor::arange(sequence_length, (Int64, device))
    .expand(&[batch_size, sequence_length], true);

let model_output = no_grad(|| {
    roberta_model.forward_t(
        Some(&input_tensor),
        Some(&mask),
        Some(&token_type_ids),
        Some(&position_ids),
        None,
        None,
        None,
        false,
    )
});
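The logits can then be turned into token predictions by taking the argmax over the vocabulary dimension. A minimal sketch, assuming the tuple return described under Returns:

// Destructure the tuple returned by forward_t
let (output, _all_hidden_states, _all_attentions) = model_output;
// Highest-scoring vocabulary id at each position, shape (batch size, sequence_length);
// for real input, the id at a masked position can be decoded back to a token with the tokenizer
let predicted_ids = output.argmax(-1, false);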
Auto Trait Implementations
impl RefUnwindSafe for RobertaForMaskedLM
impl Send for RobertaForMaskedLM
impl !Sync for RobertaForMaskedLM
impl Unpin for RobertaForMaskedLM
impl UnwindSafe for RobertaForMaskedLM