Struct rust_bert::reformer::ReformerForQuestionAnswering
pub struct ReformerForQuestionAnswering { /* fields omitted */ }
Reformer Model for question answering
Extractive question-answering model based on a Reformer language model. Identifies the segment of a context that answers a provided question. Please note that a significant amount of pre- and post-processing is required to perform end-to-end question answering. See the question answering pipeline (also provided in this crate) for more details. It is made of the following blocks:
- reformer: ReformerModel. Base Reformer model.
- qa_outputs: Linear layer for question answering, mapping to start and end logits for the answer.
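Most of the pre- and post-processing mentioned above is handled by the crate's question answering pipeline. A minimal sketch of its use, assuming the pipeline's default pretrained question answering model (not necessarily a Reformer) and that the pretrained weights can be downloaded:

use rust_bert::pipelines::question_answering::{QaInput, QuestionAnsweringModel};

// Pipeline with its default pretrained question answering model;
// weights are downloaded on first use.
let qa_model = QuestionAnsweringModel::new(Default::default()).unwrap();

let question = String::from("Where does Amy live?");
let context = String::from("Amy lives in Amsterdam.");

// `predict` takes care of tokenization, chunking and answer span post-processing.
let answers = qa_model.predict(&[QaInput { question, context }], 1, 32);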
Implementations
pub fn new<'p, P>(
p: P,
config: &ReformerConfig
) -> Result<ReformerForQuestionAnswering, RustBertError> where
P: Borrow<Path<'p>>,
Build a new ReformerForQuestionAnswering
Arguments
- p - Variable store path for the root of the Reformer model
- config - ReformerConfig object defining the model architecture
Example
use rust_bert::reformer::{ReformerConfig, ReformerForQuestionAnswering};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = ReformerConfig::from_file(config_path);
let reformer_model: ReformerForQuestionAnswering =
    ReformerForQuestionAnswering::new(&p.root(), &config).unwrap();

Forward pass through the model
Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). Must be provided when no pre-computed embeddings are given.
- position_ids - Optional input tensor of shape (batch size, sequence_length). If not provided, will be calculated on the fly starting from position 0.
- input_embeds - Optional input tensor of shape (batch size, sequence_length, embeddings_dim). Must be provided when no input ids are given.
- attention_mask - Optional attention mask of shape (batch size, sequence_length). Positions with a mask value of 0 will be masked.
- num_hashes - Optional specification of the number of hashes to use. If not provided, the value from the model configuration is used.
- train - Boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
ReformerQuestionAnsweringModelOutput containing:
- start_logits - Tensor of shape (batch size, sequence_length) containing the logits for the start of the answer
- end_logits - Tensor of shape (batch size, sequence_length) containing the logits for the end of the answer
- all_hidden_states - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
- all_attentions - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
Example
use rust_bert::reformer::{ReformerConfig, ReformerForQuestionAnswering};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = ReformerConfig::from_file(config_path);
let reformer_model = ReformerForQuestionAnswering::new(&vs.root(), &config).unwrap();

let (batch_size, sequence_length) = (64, 128);
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Kind::Int64, device));
let input_positions = Tensor::arange(sequence_length, (Kind::Int64, device))
    .unsqueeze(0)
    .expand(&[batch_size, sequence_length], true);
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));

let model_output = no_grad(|| {
    reformer_model.forward_t(
        Some(&input_tensor),
        Some(&input_positions),
        None,
        Some(&attention_mask),
        Some(4),
        false,
    )
});
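The start and end logits returned by the forward pass can be reduced to a predicted answer span. A minimal, illustrative sketch using plain tch operations (the best_span helper is hypothetical; the crate's question answering pipeline performs considerably more post-processing, e.g. handling document chunks and special tokens):

use tch::Tensor;

// Hypothetical helper: pick the highest-scoring start and end positions
// for the first example in the batch.
fn best_span(start_logits: &Tensor, end_logits: &Tensor) -> (i64, i64) {
    let start = start_logits.argmax(-1, false).int64_value(&[0]);
    let end = end_logits.argmax(-1, false).int64_value(&[0]);
    (start, end)
}

Applied to the unwrapped model_output from the example above, this yields token indices into the input sequence; mapping them back to text requires the tokenizer's offset information.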
Auto Trait Implementations

impl Send for ReformerForQuestionAnswering
impl !Sync for ReformerForQuestionAnswering
impl Unpin for ReformerForQuestionAnswering
impl UnwindSafe for ReformerForQuestionAnswering
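Since the model is Send but not Sync, ownership of it can be moved to another thread, while a shared reference cannot be used from several threads at once. A minimal sketch, reusing the construction pattern from the examples above:

use rust_bert::reformer::{ReformerConfig, ReformerForQuestionAnswering};
use rust_bert::Config;
use std::path::Path;
use std::thread;
use tch::{nn, Device};

let config_path = Path::new("path/to/config.json");
let vs = nn::VarStore::new(Device::Cpu);
let config = ReformerConfig::from_file(config_path);
let reformer_model = ReformerForQuestionAnswering::new(&vs.root(), &config).unwrap();

// Send: the model can be moved into the spawned thread and used there.
// !Sync: a shared &ReformerForQuestionAnswering could not be sent instead.
let handle = thread::spawn(move || {
    let _model = reformer_model;
    // ... run inference on this thread ...
});
handle.join().unwrap();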
