Struct rust_bert::reformer::ReformerModel
pub struct ReformerModel { /* fields omitted */ }
Reformer Base model
Base architecture for the Reformer model. Usually complemented with a task-specific head, such as a language model head. It is made of the following blocks:
- embeddings: ReformerEmbeddings - Reformer embeddings, combining word and position embeddings
- encoder: ReformerEncoder - transformer encoder made of a vector of Reformer layers with local or LSH attention. Caching is implemented for the decoder to avoid recalculating static states (encoder key/values and previously calculated decoder key/values)
- least_common_mult_chunk_length - least common multiple of the chunk lengths across all attention layers
- min_chunk_length - minimum chunk length across all attention layers
- pad_token_id - padding token id, used to pad the input up to a multiple of the chunk length when the input is long enough to be chunked (see the sketch after this list)
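The two chunk-length fields determine how an input is padded before chunked attention is applied. Below is a minimal sketch of that padding rule as stated above; the rounding logic and example values are illustrative, not taken from the crate's source:

// Hedged sketch of the padding rule described above: an input long enough to
// be chunked is padded (with pad_token_id) up to the next multiple of the
// least common chunk length. All values below are illustrative.
fn padded_length(seq_len: i64, least_common_mult_chunk_length: i64, min_chunk_length: i64) -> i64 {
    if seq_len <= min_chunk_length {
        // Too short to be chunked: no padding needed.
        seq_len
    } else {
        // Round up to the next multiple of the least common chunk length.
        ((seq_len + least_common_mult_chunk_length - 1) / least_common_mult_chunk_length)
            * least_common_mult_chunk_length
    }
}

fn main() {
    // e.g. per-layer chunk lengths of 64 and 128 give an LCM of 128
    assert_eq!(padded_length(50, 128, 64), 50); // not chunked, left as-is
    assert_eq!(padded_length(200, 128, 64), 256); // padded up to 2 * 128
}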
Implementations
pub fn new<'p, P>(
p: P,
config: &ReformerConfig
) -> Result<ReformerModel, RustBertError> where
P: Borrow<Path<'p>>,
Build a new ReformerModel
Arguments
- p - Variable store path for the root of the Reformer model
- config - ReformerConfig object defining the model architecture
Example
use rust_bert::reformer::{ReformerConfig, ReformerModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = ReformerConfig::from_file(config_path);
let reformer_model: ReformerModel =
ReformerModel::new(&p.root() / "reformer", &config).unwrap();
Forward pass through the model
Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). Must be provided when no pre-computed embeddings are given.
- position_ids - Optional input tensor of shape (batch size, sequence_length). If not provided, will be calculated on the fly starting from position 0.
- input_embeds - Optional input tensor of shape (batch size, sequence_length, embeddings_dim). Must be provided when no input ids are given.
- attention_mask - Optional attention mask of shape (batch size, sequence_length). Positions with a mask value of 0 will be masked.
- num_hashes - Optional specification of the number of hashes to use. If not provided, will use the value from the model configuration.
- old_layer_states - Optional cached input (Option<Vec<Option<LayerState>>>) containing previous values for the cached states and buckets.
- train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
ReformerModelOutput containing:
- hidden_states - Tensor of shape (batch size, sequence_length, hidden_size) representing the activations of the last hidden state
- all_hidden_states - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
- all_attentions - Option<Vec<Tensor>> of length n_layers with shape (batch size, sequence_length, hidden_size)
- cache - Option<Vec<Option<LayerState>>> of length n_layers containing values for the states and buckets for future use (see the continuation after the example below)
Example
use rust_bert::reformer::{ReformerConfig, ReformerModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = ReformerConfig::from_file(config_path);
let reformer_model = ReformerModel::new(&vs.root() / "reformer", &config).unwrap();

let (batch_size, sequence_length) = (64, 128);
// `Tensor::rand` only yields floating-point tensors, so random token ids are
// drawn with `randint` instead (the vocabulary size of 320 is illustrative).
let input_tensor = Tensor::randint(320, &[batch_size, sequence_length], (Kind::Int64, device));
let input_positions = Tensor::arange(sequence_length, (Kind::Int64, device))
    .unsqueeze(0)
    .expand(&[batch_size, sequence_length], true);
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));

let model_output = no_grad(|| {
    reformer_model.forward_t(
        Some(&input_tensor),
        Some(&input_positions),
        None,
        Some(&attention_mask),
        Some(4),
        None,
        false,
    )
});
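The returned ReformerModelOutput exposes the fields listed under Returns; in particular, cache can be passed back as old_layer_states on a subsequent call so previously computed states and buckets are reused. A minimal continuation of the example above (the single-token follow-up input and fixed hash count are illustrative):

// Unwrap the Result returned by forward_t to reach the output fields.
let model_output = model_output.unwrap();
// Last hidden state: (batch size, sequence_length, hidden_size)
let _hidden_states = &model_output.hidden_states;

// Illustrative incremental step: feed the returned cache back in as
// old_layer_states so cached states and buckets are not recomputed.
let next_input = Tensor::randint(320, &[batch_size, 1], (Kind::Int64, device));
let _next_output = no_grad(|| {
    reformer_model.forward_t(
        Some(&next_input),
        None, // position ids computed on the fly
        None,
        None,
        Some(4),
        model_output.cache,
        false,
    )
});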
Auto Trait Implementations
impl RefUnwindSafe for ReformerModel
impl Send for ReformerModel
impl !Sync for ReformerModel
impl Unpin for ReformerModel
impl UnwindSafe for ReformerModel
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
Mutably borrows from an owned value.
impl<T> Instrument for T
Instruments this type with the provided Span, returning an Instrumented wrapper.