Module rust_bert::bert
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al.)
Implementation of the BERT language model (Devlin, Chang, Lee, Toutanova, 2018, https://arxiv.org/abs/1810.04805).
The base model is implemented in the bert::BertModel struct. Several language model heads have also been implemented, including:
- Masked language model: bert::BertForMaskedLM
- Multiple choice: bert::BertForMultipleChoice
- Question answering: bert::BertForQuestionAnswering
- Sequence classification: bert::BertForSequenceClassification
- Token classification (e.g. NER, POS tagging): bert::BertForTokenClassification
Model set-up and pre-trained weights loading
A full working example is provided in examples/bert.rs; run it with cargo run --example bert.
The example below illustrates a masked language model; the structure is similar for the other models.
All models expect the following resources:
- Configuration file expected to have a structure following the Transformers library
- Model weights expected to have a structure and parameter names following the Transformers library. A conversion using the Python utility scripts is required to convert the .bin weights to the .ot format.
- BertTokenizer using a vocab.txt vocabulary

Pretrained models are available and can be downloaded using RemoteResources.
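For instance, the pretrained BERT files bundled as named remote resources can be fetched as in the sketch below. This is a minimal illustration assuming the Resource/RemoteResource API of the crate version this page documents; the concrete error type also depends on the version in use, so Box<dyn Error> is shown here as a placeholder.

```rust
use std::path::PathBuf;

use rust_bert::bert::{BertConfigResources, BertModelResources, BertVocabResources};
use rust_bert::resources::{download_resource, RemoteResource, Resource};

// Fetch the pretrained BERT files shipped as named remote resources,
// caching them locally and returning the (config, vocab, weights) paths.
fn fetch_pretrained_bert() -> Result<(PathBuf, PathBuf, PathBuf), Box<dyn std::error::Error>> {
    let config = Resource::Remote(RemoteResource::from_pretrained(BertConfigResources::BERT));
    let vocab = Resource::Remote(RemoteResource::from_pretrained(BertVocabResources::BERT));
    let weights = Resource::Remote(RemoteResource::from_pretrained(BertModelResources::BERT));
    Ok((
        download_resource(&config)?,
        download_resource(&vocab)?,
        download_resource(&weights)?,
    ))
}
```

The full example below uses LocalResource instead, pointing at already-converted files on disk: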
```rust
use std::path::PathBuf;

use rust_bert::bert::{BertConfig, BertForMaskedLM};
use rust_bert::resources::{download_resource, LocalResource, Resource};
use rust_bert::Config;
use rust_tokenizers::BertTokenizer;
use tch::{nn, Device};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Local resources pointing at the configuration, vocabulary and
    // converted weights on disk.
    let config_resource = Resource::Local(LocalResource {
        local_path: PathBuf::from("path/to/config.json"),
    });
    let vocab_resource = Resource::Local(LocalResource {
        local_path: PathBuf::from("path/to/vocab.txt"),
    });
    let weights_resource = Resource::Local(LocalResource {
        local_path: PathBuf::from("path/to/model.ot"),
    });
    let config_path = download_resource(&config_resource)?;
    let vocab_path = download_resource(&vocab_resource)?;
    let weights_path = download_resource(&weights_resource)?;

    // Run on GPU when available, otherwise fall back to CPU.
    let device = Device::cuda_if_available();
    let mut vs = nn::VarStore::new(device);

    // Build the tokenizer (lowercasing enabled), the model configuration
    // and the model itself, then load the pre-trained weights.
    let tokenizer: BertTokenizer = BertTokenizer::from_file(vocab_path.to_str().unwrap(), true);
    let config = BertConfig::from_file(config_path);
    let bert_model = BertForMaskedLM::new(&vs.root(), &config);
    vs.load(weights_path)?;

    Ok(())
}
```
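Once the weights are loaded, the model can be run. The sketch below is a hedged illustration of a forward pass: the eight-argument forward_t signature (input ids, attention mask, token type ids, position ids, input embeddings, encoder hidden states, encoder mask, train flag) and the dummy input are assumptions to check against the BertForMaskedLM documentation for the version in use.

```rust
use rust_bert::bert::BertForMaskedLM;
use tch::{no_grad, Device, Kind, Tensor};

// Continues the example above: runs a forward pass with the loaded model and
// returns the masked-LM logits of shape (batch size, sequence length, vocab size).
fn run_masked_lm(bert_model: &BertForMaskedLM, device: Device) -> Tensor {
    // Dummy batch of token ids (batch size 1, sequence length 8);
    // real inputs would come from the tokenizer.
    let input_tensor = Tensor::zeros(&[1, 8], (Kind::Int64, device));

    let (output, _hidden_states, _attentions) = no_grad(|| {
        bert_model.forward_t(
            Some(input_tensor), // input token ids
            None,               // attention mask
            None,               // token type ids
            None,               // position ids
            None,               // pre-computed input embeddings
            &None,              // encoder hidden states (cross-attention)
            &None,              // encoder attention mask
            false,              // train flag (disables dropout)
        )
    });
    output
}
```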
Structs

| Struct | Description |
| --- | --- |
| BertConfig | BERT model configuration |
| BertConfigResources | BERT pretrained model config files |
| BertEmbeddings | BertEmbeddings implementation for BERT model |
| BertForMaskedLM | BERT for masked language model |
| BertForMultipleChoice | BERT for multiple choice |
| BertForQuestionAnswering | BERT for question answering |
| BertForSequenceClassification | BERT for sequence classification |
| BertForTokenClassification | BERT for token classification (e.g. NER, POS) |
| BertModel | BERT base model |
| BertModelResources | BERT pretrained model weight files |
| BertVocabResources | BERT pretrained model vocab files |
Enums

| Enum | Description |
| --- | --- |
| Activation | Activation function used in the attention layer and masked language model head |
Traits

| Trait | Description |
| --- | --- |
| BertEmbedding | BertEmbedding trait (for use in BertModel or RoBERTaModel) |
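To make the generic concrete, here is a minimal sketch assuming BertModel takes its embedding implementation as a type parameter, as the trait description above suggests; the exact generics and constructor signature should be verified against the crate version in use.

```rust
use rust_bert::bert::{BertConfig, BertEmbeddings, BertModel};
use tch::nn;

// Instantiate the encoder with standard BERT embeddings. A RoBERTa-style
// embedding type implementing the BertEmbedding trait could be plugged in
// instead, reusing the same encoder stack.
fn build_bert(vs: &nn::Path, config: &BertConfig) -> BertModel<BertEmbeddings> {
    BertModel::<BertEmbeddings>::new(vs, config)
}
```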