Module rust_bert::bart

BART (Lewis et al.)

Implementation of the BART language model (BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, Lewis, Liu, Goyal, Ghazvininejad, Mohamed, Levy, Stoyanov, Zettlemoyer, 2019). The base model is implemented in the bart::BartModel struct. A variant with a language model head is also provided: bart::BartForConditionalGeneration implements the common generation::LMHeadModel trait shared between the models used for generation (see pipelines for more information).
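For instance, BART-based conditional generation is exposed through the summarization pipeline. The sketch below is a minimal illustration, assuming the SummarizationModel pipeline with its default (BART-based) configuration; the input text is hypothetical, and the exact return and error types of the pipeline have varied across crate versions:

use rust_bert::pipelines::summarization::SummarizationModel;

// Set up the pipeline with its default configuration; the pre-trained
// weights are downloaded and cached on first use.
let summarization_model = SummarizationModel::new(Default::default())?;

// Hypothetical input text; summarize accepts a slice of string references.
let input = ["BART is a denoising autoencoder for pre-training sequence-to-sequence models."];
let summaries = summarization_model.summarize(&input);

Each element of the output corresponds to the generated summary for the matching input text.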

Model set-up and pre-trained weights loading

A full working example is provided in examples/bart.rs, run with cargo run --example bart. Alternatively, the summarization capabilities are illustrated in examples/summarization.rs, run with cargo run --example summarization. All models expect the following resources:

  • Configuration file expected to have a structure following the Transformers library
  • Model weights expected to have a structure and parameter names following the Transformers library. A conversion using the Python utility scripts is required to convert the .bin weights to the .ot format. Ready-converted weights for the pre-trained models can also be downloaded, as sketched after the example below.
  • RobertaTokenizer using a vocab.txt vocabulary and merges.txt 2-gram merges
use std::path::Path;
use rust_tokenizers::RobertaTokenizer;
use tch::{nn, Device};
use rust_bert::Config;
use rust_bert::bart::{BartConfig, BartModel};

// Placeholder paths to the resources listed above.
let config_path = Path::new("path/to/config.json");
let vocab_path = Path::new("path/to/vocab.txt");
let merges_path = Path::new("path/to/merges.txt");
let weights_path = Path::new("path/to/model.ot");

// Run on GPU when available, otherwise fall back to the CPU.
let device = Device::cuda_if_available();
let mut vs = nn::VarStore::new(device);
let tokenizer: RobertaTokenizer =
    RobertaTokenizer::from_file(vocab_path.to_str().unwrap(), merges_path.to_str().unwrap(), true);
let config = BartConfig::from_file(config_path);
let bart_model = BartModel::new(&vs.root(), &config, false);
vs.load(weights_path)?;
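
As an alternative to hard-coded local paths, the resources can be pulled from the remote locations registered for the pre-trained models. The sketch below assumes the crate's resource API (Resource, RemoteResource, download_resource) and the bundled BART resource definitions; this API has changed across rust_bert releases:

use rust_bert::bart::{BartConfigResources, BartMergesResources, BartModelResources, BartVocabResources};
use rust_bert::resources::{download_resource, RemoteResource, Resource};

let config_resource = Resource::Remote(RemoteResource::from_pretrained(BartConfigResources::BART));
let vocab_resource = Resource::Remote(RemoteResource::from_pretrained(BartVocabResources::BART));
let merges_resource = Resource::Remote(RemoteResource::from_pretrained(BartMergesResources::BART));
let weights_resource = Resource::Remote(RemoteResource::from_pretrained(BartModelResources::BART));

// Each call downloads the file on first use and returns its local cache path.
let config_path = download_resource(&config_resource)?;
let vocab_path = download_resource(&vocab_resource)?;
let merges_path = download_resource(&merges_resource)?;
let weights_path = download_resource(&weights_resource)?;

The returned paths can then be used in place of the placeholder paths in the example above.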

Structs

  • BartConfig: BART model configuration
  • BartForConditionalGeneration: BART model for conditional generation
  • BartForSequenceClassification: BART model for sequence classification
  • BartModel: BART base model
  • LayerState: Cache for BART attention layers

Enums

  • Activation