Struct rust_bert::openai_gpt::OpenAiGptModel
pub struct OpenAiGptModel { /* private fields */ }
GPT Base model
Base architecture for the GPT model, usually complemented with a task-specific head such as a language model head. Unlike GPT-2, GPT does not allow re-using past activations as an input. It is made of the following blocks:
- tokens_embed: token embeddings
- positions_embed: position embeddings
- h: Encoder (transformer) made of a vector of layers. Each layer is made of a multi-head attention layer, layer-normalization layers and an MLP made of linear layers.
- output_hidden_states: flag indicating if the model should return all hidden states (as opposed to only the last layer)
- output_attentions: flag indicating if the model should return activation weights (see the configuration sketch below)
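Both flags are read from the model configuration at construction time. A minimal sketch of enabling them, assuming the output_hidden_states / output_attentions fields of Gpt2Config (which OpenAiGptConfig aliases):

use rust_bert::gpt2::Gpt2Config;
use rust_bert::Config;
use std::path::Path;

let config_path = Path::new("path/to/config.json");
let mut config = Gpt2Config::from_file(config_path);
// Ask the model to return per-layer hidden states and attention weights.
config.output_hidden_states = Some(true);
config.output_attentions = Some(true);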
Implementations
impl OpenAiGptModel

pub fn new<'p, P>(p: P, config: &Gpt2Config) -> OpenAiGptModel where
    P: Borrow<Path<'p>>,
Build a new OpenAiGptModel
Arguments
- p - Variable store path for the root of the GPT model
- config - OpenAiGptConfig object defining the model architecture
Example
use rust_bert::openai_gpt::{OpenAiGptConfig, OpenAiGptModel};
use rust_bert::Config;
use std::path::Path;
use tch::{nn, Device};
let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let p = nn::VarStore::new(device);
let config = OpenAiGptConfig::from_file(config_path);
let gpt_model: OpenAiGptModel = OpenAiGptModel::new(&p.root() / "gpt", &config);
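The variable store is typically populated from serialized weights once the model graph has been built. A minimal sketch continuing from the example above, assuming a tch-serialized weights file at an illustrative path:

let mut p = nn::VarStore::new(device);
let config = OpenAiGptConfig::from_file(config_path);
let gpt_model = OpenAiGptModel::new(&p.root() / "gpt", &config);
// Populate the variable store from saved weights (the file path is hypothetical).
p.load(Path::new("path/to/model.ot")).unwrap();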
pub fn forward_t(
&self,
input_ids: Option<&Tensor>,
attention_mask: Option<&Tensor>,
token_type_ids: Option<&Tensor>,
position_ids: Option<&Tensor>,
input_embeds: Option<&Tensor>,
train: bool
) -> Result<OpenAiGptModelOutput, RustBertError>
Forward pass through the model
Arguments
- input_ids - Optional input tensor of shape (batch size, sequence_length). If None, pre-computed embeddings must be provided (see input_embeds)
- attention_mask - Optional mask of shape (batch size, sequence_length). Masked positions have value 0, non-masked positions have value 1. If None, defaults to 1
- input_embeds - Optional pre-computed input embeddings of shape (batch size, sequence_length, hidden_size). If None, input ids must be provided (see input_ids)
- token_type_ids - Optional token type ids used to indicate the portion of the input the token belongs to. If not None, token type embeddings will be added to the token and position embeddings.
- position_ids - Optional position ids of shape (batch size, sequence_length). If None, positions are incremented starting from 0 (GPT does not re-use past activations, so there is no past input to offset from)
- train - boolean flag to turn on/off the dropout layers in the model. Should be set to false for inference.
Returns
OpenAiGptModelOutput containing:
- output - Tensor of shape (batch size, sequence_length, hidden_size) representing the activations of the last hidden state
- all_hidden_states - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, sequence_length, hidden_size)
- all_attentions - Option<Vec<Tensor>> of length num_hidden_layers with shape (batch size, num_heads, sequence_length, sequence_length)
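A short sketch of consuming these fields, assuming the field names listed above and the model_output value produced in the example that follows; the optional vectors are only populated when the corresponding configuration flags are enabled:

let last_hidden = &model_output.output; // (batch size, sequence_length, hidden_size)
if let Some(hidden_states) = &model_output.all_hidden_states {
    // One tensor per transformer layer.
    println!("hidden states for {} layers", hidden_states.len());
}
if let Some(attentions) = &model_output.all_attentions {
    println!("attention maps for {} layers", attentions.len());
}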
Example
use rust_bert::gpt2::Gpt2Config;
use rust_bert::openai_gpt::OpenAiGptModel;
use rust_bert::Config;
use std::path::Path;
use tch::{nn, no_grad, Device, Kind, Tensor};

let config_path = Path::new("path/to/config.json");
let device = Device::Cpu;
let vs = nn::VarStore::new(device);
let config = Gpt2Config::from_file(config_path);
let gpt_model = OpenAiGptModel::new(&vs.root() / "gpt", &config);

let (batch_size, sequence_length) = (64, 128);
// Random token ids in [0, vocab_size).
let input_tensor = Tensor::randint(config.vocab_size, &[batch_size, sequence_length], (Kind::Int64, device));
// All positions attended to (1 = not masked).
let attention_mask = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let token_type_ids = Tensor::ones(&[batch_size, sequence_length], (Kind::Int64, device));
let position_ids = Tensor::arange(sequence_length, (Kind::Int64, device))
    .expand(&[batch_size, sequence_length], true);

let model_output = no_grad(|| {
    gpt_model
        .forward_t(
            Some(&input_tensor),
            Some(&attention_mask),
            Some(&token_type_ids),
            Some(&position_ids),
            None,
            false,
        )
        .unwrap()
});
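The forward pass can also start from pre-computed embeddings rather than token ids. A minimal sketch reusing the setup above, where config.n_embd is the hidden size from the configuration:

// Pre-computed embeddings of shape (batch size, sequence_length, hidden_size).
let input_embeds = Tensor::rand(&[batch_size, sequence_length, config.n_embd], (Kind::Float, device));
let embeds_output = no_grad(|| {
    gpt_model
        .forward_t(None, Some(&attention_mask), None, None, Some(&input_embeds), false)
        .unwrap()
});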
Auto Trait Implementations
impl RefUnwindSafe for OpenAiGptModel
impl Send for OpenAiGptModel
impl !Sync for OpenAiGptModel
impl Unpin for OpenAiGptModel
impl UnwindSafe for OpenAiGptModel
Blanket Implementations
impl<T> BorrowMut<T> for T where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.
impl<T> Pointable for T

impl<V, T> VZip<V> for T where
    V: MultiLane<T>,

fn vzip(self) -> V
impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self> where
    S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.