Struct rust_bert::albert::AlbertForMaskedLM
pub struct AlbertForMaskedLM { /* fields omitted */ }
ALBERT for masked language model
Base ALBERT model with a masked language model head to predict missing tokens, for example "Looks like one [MASK] is missing" -> "person"
It is made of the following blocks:
- albert: Base AlbertModel
- predictions: ALBERT MLM prediction head
Implementations
Build a new AlbertForMaskedLM
Arguments
- p - Variable store path for the root of the ALBERT model
- config - AlbertConfig object defining the model architecture and decoder status
Example
use rust_bert::albert::{AlbertConfig, AlbertForMaskedLM};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = AlbertConfig::from_file(config_path);
let albert: AlbertForMaskedLM = AlbertForMaskedLM::new(&p.root(), &config);
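The snippet above creates the model with randomly initialised weights. For meaningful inference, pretrained weights would typically be loaded into the variable store after the model has been built. A minimal sketch, assuming a converted weights file is available locally (the file name and path are placeholders):
// Sketch: load converted pretrained weights into the VarStore (hypothetical path).
// VarStore::load requires a mutable variable store.
let mut p = nn::VarStore::new(device);
let albert = AlbertForMaskedLM::new(&p.root(), &config);
p.load(Path::new("path/to/model.ot")).expect("Could not load model weights");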
Forward pass through the model
Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). If None, pre-computed embeddings must be provided (see input_embeds)
- mask - Optional mask of shape (batch size, sequence_length). Masked positions have value 0, non-masked positions value 1. If None, set to 1
- token_type_ids - Optional segment id of shape (batch size, sequence_length). Convention is value of 0 for the first sentence (including SEP) and 1 for the second sentence. If None, set to 0
- position_ids - Optional position ids of shape (batch size, sequence_length). If None, will be incremented from 0
- input_embeds - Optional pre-computed input embeddings of shape (batch size, sequence_length, hidden_size). If None, input ids must be provided (see input_ids)
- train - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference
Returns
AlbertMaskedLMOutput containing:
- prediction_scores - Tensor of shape (batch size, sequence_length, vocab_size)
- all_hidden_states - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
- all_attentions - Option<Vec<Vec<Tensor>>> of length num_hidden_layers of nested length inner_group_num with shape (batch size, sequence_length, hidden_size)
Example
use rust_bert::albert::{AlbertConfig, AlbertForMaskedLM};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind::Int64, Tensor};

// Set-up: load the configuration and build the model (random weights)
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = AlbertConfig::from_file(config_path);
let albert_model = AlbertForMaskedLM::new(&vs.root(), &config);

// Dummy inputs: random token ids within the vocabulary range
let (batch_size, sequence_length) = (64, 128);
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Int64, device));
let mask = Tensor::zeros(&[batch_size, sequence_length], (Int64, device));
let token_type_ids = Tensor::zeros(&[batch_size, sequence_length], (Int64, device));
let position_ids = Tensor::arange(sequence_length, (Int64, device))
    .expand(&[batch_size, sequence_length], true);

let masked_lm_output = no_grad(|| {
    albert_model.forward_t(
        Some(&input_tensor),
        Some(&mask),
        Some(&token_type_ids),
        Some(&position_ids),
        None,
        false,
    )
});
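Once the forward pass has run, the prediction_scores tensor can be used to read off the most likely token at a masked position. A minimal sketch continuing the example above: the masked position index is an assumption for illustration (in practice it comes from the tokenized input), and mapping the resulting id back to a string requires the tokenizer's vocabulary, which is not shown here.
// Sketch: index of the [MASK] token in the first sequence (hypothetical value)
let mask_position: i64 = 4;
// prediction_scores has shape (batch size, sequence_length, vocab_size);
// select the first sequence, then the masked position, then the highest-scoring vocabulary id
let predicted_id = masked_lm_output
    .prediction_scores
    .get(0)
    .get(mask_position)
    .argmax(-1, false)
    .int64_value(&[]);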
Auto Trait Implementations
impl RefUnwindSafe for AlbertForMaskedLM
impl Send for AlbertForMaskedLM
impl !Sync for AlbertForMaskedLM
impl Unpin for AlbertForMaskedLM
impl UnwindSafe for AlbertForMaskedLM