pub struct BertLayer { /* private fields */ }
BERT Layer
Layer used in BERT encoders. It is made of the following blocks:

attention - self-attention BertAttention layer
cross_attention - (optional) cross-attention BertAttention layer, used if the model is a decoder (a sketch of this path follows the list)
is_decoder - flag indicating if the model is used as a decoder
intermediate - BertIntermediate intermediate layer
output - BertOutput output layer
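The cross-attention block is only exercised when the layer is built from a decoder configuration and encoder hidden states are passed to the forward pass. The following is a minimal sketch of that path, assuming BertConfig exposes an is_decoder field (matching the field list above); the config path is a placeholder:

use rust_bert::bert::{BertConfig, BertLayer};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device, Kind, Tensor};

let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let mut config = BertConfig::from_file(Path::new("path/to/config.json"));
// Assumption: is_decoder is an Option<bool> field on BertConfig.
config.is_decoder = Some(true);
let decoder_layer = BertLayer::new(&vs.root(), &config);

// Encoder hidden states are routed through the cross-attention block as
// keys and values; the query comes from the decoder hidden states.
let hidden = Tensor::rand(&[2, 8, config.hidden_size], (Kind::Float, device));
let encoder_hidden = Tensor::rand(&[2, 10, config.hidden_size], (Kind::Float, device));
let layer_output = decoder_layer.forward_t(&hidden, None, Some(&encoder_hidden), None, false);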
Implementations

impl BertLayer
pub fn new<'p, P>(p: P, config: &BertConfig) -> BertLayer
Build a new BertLayer

Arguments

p - Variable store path for the root of the BERT model
config - BertConfig object defining the model architecture
Example
use rust_bert::bert::{BertConfig, BertLayer};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
// Load the model configuration and build the layer under the variable store root.
let config = BertConfig::from_file(config_path);
let layer: BertLayer = BertLayer::new(&p.root(), &config);
pub fn forward_t(
    &self,
    hidden_states: &Tensor,
    mask: Option<&Tensor>,
    encoder_hidden_states: Option<&Tensor>,
    encoder_mask: Option<&Tensor>,
    train: bool,
) -> BertLayerOutput
Forward pass through the layer

Arguments

hidden_states - input tensor of shape (batch size, sequence_length, hidden_size)
mask - optional mask of shape (batch size, sequence_length). Masked positions have value 0, non-masked positions value 1. If None, defaults to 1.
encoder_hidden_states - optional encoder hidden states of shape (batch size, encoder_sequence_length, hidden_size). If the model is defined as a decoder and encoder_hidden_states is not None, they are used in the cross-attention layer as keys and values (the query comes from the decoder).
encoder_mask - optional encoder attention mask of shape (batch size, encoder_sequence_length). If the model is defined as a decoder and encoder_hidden_states is not None, it is used to mask encoder values. Positions with value 0 will be masked.
train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns

BertLayerOutput containing:

hidden_state - Tensor of shape (batch size, sequence_length, hidden_size)
attention_scores - Option<Tensor> of shape (batch size, sequence_length, hidden_size)
cross_attention_scores - Option<Tensor> of shape (batch size, sequence_length, hidden_size)
Example
use rust_bert::bert::{BertConfig, BertLayer};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = BertConfig::from_file(Path::new("path/to/config.json"));
let layer: BertLayer = BertLayer::new(&vs.root(), &config);

let (batch_size, sequence_length, hidden_size) = (64, 128, 512);
let input_tensor = Tensor::rand(&[batch_size, sequence_length, hidden_size], (Kind::Float, device));
let mask = Tensor::zeros(&[batch_size, sequence_length], (Kind::Int64, device));
// train = false disables dropout (inference mode).
let layer_output = no_grad(|| layer.forward_t(&input_tensor, Some(&mask), None, None, false));
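The fields of the returned BertLayerOutput can then be read directly. A short sketch, assuming the field names match the Returns list above:

// Hidden state is always present; attention scores are only populated when
// the configuration enables attention outputs (field names assumed from the
// Returns list above).
let hidden_state = &layer_output.hidden_state;
assert_eq!(hidden_state.size(), [batch_size, sequence_length, hidden_size]);
if let Some(scores) = &layer_output.attention_scores {
    println!("attention scores shape: {:?}", scores.size());
}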
Auto Trait Implementations
impl RefUnwindSafe for BertLayer
impl Send for BertLayer
impl !Sync for BertLayer
impl Unpin for BertLayer
impl UnwindSafe for BertLayer
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.