§DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (Sanh et al.)
Implementation of the DistilBERT language model (Sanh, Debut, Chaumond, Wolf, 2019: https://arxiv.org/abs/1910.01108).
The base model is implemented in the distilbert_model::DistilBertModel struct. Several language model heads have also been implemented, including the following (a construction sketch follows the list):
- Masked language model: distilbert_model::DistilBertModelMaskedLM
- Question answering: distilbert_model::DistilBertForQuestionAnswering
- Sequence classification: distilbert_model::DistilBertModelClassifier
- Token classification (e.g. NER, POS tagging): distilbert_model::DistilBertForTokenClassification
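For illustration, a minimal construction sketch for one of the alternative heads (the configuration path is a placeholder; the other heads follow the same pattern):

use rust_bert::distilbert::{DistilBertConfig, DistilBertForQuestionAnswering};
use rust_bert::Config;
use tch::{nn, Device};

// Variable store holding the model parameters
let vs = nn::VarStore::new(Device::Cpu);
// Model configuration loaded from a (placeholder) local path
let config = DistilBertConfig::from_file("path/to/config.json");
// The question answering head is built on top of the base DistilBERT model
let qa_model = DistilBertForQuestionAnswering::new(vs.root(), &config);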
§Model set-up and pre-trained weights loading
The example below illustrates the set-up of a DistilBERT masked language model; the structure is similar for the other models. All models expect the following resources:
- Configuration file expected to have a structure following the Transformers library
- Model weights are expected to have a structure and parameter names following the Transformers library. A conversion using the Python utility scripts is required to convert the .bin weights to the .ot format.
- BertTokenizer using a vocab.txt vocabulary
Pretrained models are available and can be downloaded using RemoteResource, as shown in the sketch below.
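As a sketch, the definitions below swap the LocalResource definitions of the following example for RemoteResource definitions built from the pretrained resource constants (DISTIL_BERT) exposed by this module:

use rust_bert::distilbert::{
    DistilBertConfigResources, DistilBertModelResources, DistilBertVocabResources,
};
use rust_bert::resources::RemoteResource;

// Remote resources are downloaded and cached locally on first use
let config_resource = RemoteResource::from_pretrained(DistilBertConfigResources::DISTIL_BERT);
let vocab_resource = RemoteResource::from_pretrained(DistilBertVocabResources::DISTIL_BERT);
let weights_resource = RemoteResource::from_pretrained(DistilBertModelResources::DISTIL_BERT);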
use std::path::PathBuf;

use rust_bert::distilbert::{DistilBertConfig, DistilBertModelMaskedLM};
use rust_bert::resources::{LocalResource, ResourceProvider};
use rust_bert::Config;
use rust_tokenizers::tokenizer::BertTokenizer;
use tch::{nn, Device};

// Local resources pointing to the configuration, vocabulary and converted weights
let config_resource = LocalResource {
    local_path: PathBuf::from("path/to/config.json"),
};
let vocab_resource = LocalResource {
    local_path: PathBuf::from("path/to/vocab.txt"),
};
let weights_resource = LocalResource {
    local_path: PathBuf::from("path/to/model.ot"),
};
let config_path = config_resource.get_local_path()?;
let vocab_path = vocab_resource.get_local_path()?;
let weights_path = weights_resource.get_local_path()?;

// Set up the device, variable store, tokenizer, configuration and model
let device = Device::cuda_if_available();
let mut vs = nn::VarStore::new(device);
let tokenizer: BertTokenizer =
    BertTokenizer::from_file(vocab_path.to_str().unwrap(), true, true)?;
let config = DistilBertConfig::from_file(config_path);
let bert_model = DistilBertModelMaskedLM::new(vs.root(), &config);
vs.load(weights_path)?;
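Once loaded, the model can run a forward pass on tokenized input. A minimal sketch, assuming a recent tch version (Tensor::from_slice) and a placeholder input sentence:

use rust_tokenizers::tokenizer::{Tokenizer, TruncationStrategy};
use tch::{no_grad, Tensor};

// Tokenize a sentence containing a masked position and build a [1, sequence length] tensor
let input = ["Paris is the [MASK] of France."];
let tokenized_input = tokenizer.encode_list(&input, 128, &TruncationStrategy::LongestFirst, 0);
let token_ids = tokenized_input[0].token_ids.clone();
let input_tensor = Tensor::from_slice(&token_ids).unsqueeze(0).to(device);

// Forward pass without gradient tracking; prediction_scores has shape
// [batch size, sequence length, vocabulary size]
let model_output = no_grad(|| bert_model.forward_t(Some(&input_tensor), None, None, false))?;
let predictions = model_output.prediction_scores;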
§Structs
- DistilBertConfig - DistilBERT model configuration
- DistilBertConfigResources - DistilBERT Pretrained model config files
- DistilBertForQuestionAnswering - DistilBERT for question answering
- DistilBertForTokenClassification - DistilBERT for token classification (e.g. NER, POS)
- DistilBertMaskedLMOutput - Container for the DistilBERT masked LM model output
- DistilBertModel - DistilBERT Base model
- DistilBertModelClassifier - DistilBERT for sequence classification
- DistilBertModelMaskedLM - DistilBERT for masked language model
- DistilBertModelResources - DistilBERT Pretrained model weight files
- DistilBertQuestionAnsweringOutput - Container for the DistilBERT question answering model output
- DistilBertSequenceClassificationOutput - Container for the DistilBERT sequence classification model output
- DistilBertTokenClassificationOutput - Container for the DistilBERT token classification model output
- DistilBertVocabResources - DistilBERT Pretrained model vocab files
§Type Aliases
- DistilBertForSentenceEmbeddings - DistilBERT for sentence embeddings