This library is based on the Pytorch-Transformers library by HuggingFace. Using this library, you can quickly train and evaluate Transformer models. Only 3 lines of code are needed to initialize a model, train it, and evaluate it.
- Install Anaconda or Miniconda Package Manager from here.
- Create a new virtual environment and install packages.
conda create -n transformers python pandas tqdm
conda activate transformers
If using CUDA:
conda install pytorch cudatoolkit=10.0 -c pytorch
else:
conda install pytorch cpuonly -c pytorch
conda install -c anaconda scipy
conda install -c anaconda scikit-learn
pip install transformers
pip install tensorboardx
- Install simpletransformers.
pip install simpletransformers
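Once the packages are installed, the short check below (a minimal sketch, not part of simpletransformers itself) confirms that PyTorch is installed and, if the CUDA build was chosen, that a GPU is visible:
import torch

# Should print the installed version, and True when a CUDA-capable GPU is usable.
print(torch.__version__)
print(torch.cuda.is_available())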
from simpletransformers.model import TransformerModel
import pandas as pd
# Train and evaluation data need to be in a Pandas DataFrame of two columns. The first column is the text (type str) and the second column is the label (type int).
train_data = [['Example sentence belonging to class 1', 1], ['Example sentence belonging to class 0', 0]]
train_df = pd.DataFrame(train_data)
eval_data = [['Example eval sentence belonging to class 1', 1], ['Example eval sentence belonging to class 0', 0]]
eval_df = pd.DataFrame(eval_data)
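# In practice the DataFrames will usually be built from a file rather than from
# lists. A hypothetical sketch (the file name and column names are assumptions):
# df = pd.read_csv('data.csv')
# train_df = pd.DataFrame({0: df['text'].astype(str), 1: df['label'].astype(int)})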
# Create a TransformerModel
model = TransformerModel('roberta', 'roberta-base')
# Train the model
model.train_model(train_df)
# Evaluate the model
result, model_outputs, wrong_predictions = model.eval_model(eval_df)
To make predictions on arbitrary data, the predict(to_predict) function can be used. Given a list of text, it returns the model predictions and the raw model outputs.
predictions, raw_outputs = model.predict(['Some arbitrary sentence'])
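If the raw outputs are per-class logits, as is typical for classification heads in Pytorch-Transformers, a softmax turns them into rough class probabilities. A short sketch continuing from the call above (the logit assumption is not stated by the API description itself):
import numpy as np

# raw_outputs: one row of class scores per input text (assumed here to be logits).
raw = np.array(raw_outputs, dtype=float)
probabilities = np.exp(raw) / np.exp(raw).sum(axis=1, keepdims=True)
print(predictions, probabilities)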
Please refer to this Medium article for an example of using the library on the Yelp Reviews Dataset.
The default args used are given below. Any of these can be overridden by passing a dict containing the corresponding key: value pairs to the init method of TransformerModel.
self.args = {
'model_type': 'roberta',
'model_name': 'roberta-base',
'output_dir': 'outputs/',
'cache_dir': 'cache/',
'fp16': True,
'fp16_opt_level': 'O1',
'max_seq_length': 128,
'train_batch_size': 8,
'eval_batch_size': 8,
'gradient_accumulation_steps': 1,
'num_train_epochs': 1,
'weight_decay': 0,
'learning_rate': 4e-5,
'adam_epsilon': 1e-8,
'warmup_ratio': 0.06,
'warmup_steps': 0,
'max_grad_norm': 1.0,
'logging_steps': 50,
'evaluate_during_training': False,
'save_steps': 2000,
'eval_all_checkpoints': True,
'use_tensorboard': True,
'overwrite_output_dir': False,
'reprocess_input_data': False,
}
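For example, to override a few of these defaults while keeping the rest, pass only the keys you want to change (the values below are illustrative, not recommendations):
from simpletransformers.model import TransformerModel

custom_args = {
    'max_seq_length': 256,
    'num_train_epochs': 3,
    'train_batch_size': 16,
    'overwrite_output_dir': True,
}

# Keys not present in custom_args keep the default values listed above.
model = TransformerModel('roberta', 'roberta-base', args=custom_args)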
output_dir: The directory where all outputs will be stored. This includes model checkpoints and evaluation results.
cache_dir: The directory where cached files will be saved.
fp16: Whether or not fp16 mode should be used. Requires the NVIDIA Apex library.
fp16_opt_level: Can be 'O1', 'O2', or 'O3'. See the Apex docs for an explanation of the different optimization levels (opt_levels).
max_seq_length: Maximum sequence length the model will support.
train_batch_size: The training batch size.
gradient_accumulation_steps: The number of training steps to execute before performing an optimizer.step() call. Effectively increases the training batch size while sacrificing training time to lower memory consumption (see the short example after this list).
eval_batch_size: The evaluation batch size.
num_train_epochs: The number of epochs the model will be trained for.
weight_decay: Adds an L2 penalty.
learning_rate: The learning rate for training.
adam_epsilon: Epsilon hyperparameter used in the Adam optimizer.
max_grad_norm: Maximum gradient norm for gradient clipping.
logging_steps: Log training loss and learning rate every specified number of steps.
save_steps: Save a model checkpoint every specified number of steps.
overwrite_output_dir: If True, the trained model will be saved to the output_dir and will overwrite existing saved models in the same directory.
reprocess_input_data: If True, the input data will be reprocessed even if a cached file of the input data exists in the cache_dir.
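As a concrete example of the gradient accumulation note above (the numbers are illustrative):
# Gradients from 4 mini-batches of 8 examples are accumulated before each
# optimizer.step(), so only 8 examples are in memory at a time.
args = {'train_batch_size': 8, 'gradient_accumulation_steps': 4}
effective_batch_size = args['train_batch_size'] * args['gradient_accumulation_steps']
print(effective_batch_size)  # 32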
class simpletransformers.model.TransformerModel (model_type, model_name, args=None, use_cuda=True)
This is the main class of this library. All configuration, training, and evaluation is performed using this class.
Class attributes
tokenizer: The tokenizer to be used.
model: The model to be used.
device: The device on which the model will be trained and evaluated.
results: A python dict of past evaluation results for the TransformerModel object.
args: A python dict of arguments used for training and evaluation.
Parameters
model_type: (required) str - The type of model to use. Currently, BERT, XLNet, XLM, and RoBERTa models are available.
model_name: (required) str - The exact model to use. See Current Pretrained Models for all available models.
args: (optional) python dict - A dictionary containing any settings that should be overwritten from the default values.
use_cuda: (optional) bool - Default = True. Flag used to indicate whether CUDA should be used.
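A brief sketch of the constructor and the attributes listed above (the printed values are just illustrations):
from simpletransformers.model import TransformerModel

# use_cuda=False forces CPU, e.g. on machines without a GPU.
model = TransformerModel('bert', 'bert-base-cased', use_cuda=False)

print(model.device)                  # device used for training and evaluation
print(model.args['learning_rate'])   # merged default/override arguments
print(model.results)                 # past evaluation results (a dict)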
Class methods
train_model(self, train_df, output_dir=None)
Trains the model using 'train_df'.
Args:
train_df: Pandas Dataframe (no header) of two columns, first column containing the text, and the second column containing the label. The model will be trained on this Dataframe.
output_dir: The directory where model files will be saved. If not given, self.args['output_dir'] will be used.
Returns:
None
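A minimal usage sketch (the directory name is just an example):
# Train on train_df; model files go to the given directory instead of
# self.args['output_dir'].
model.train_model(train_df, output_dir='outputs/roberta-run-1/')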
eval_model(self, eval_df, output_dir=None, verbose=False)
Evaluates the model on eval_df. Saves results to output_dir.
Args:
eval_df: Pandas Dataframe (no header) of two columns, first column containing the text, and the second column containing the label. The model will be evaluated on this Dataframe.
output_dir: The directory where model files will be saved. If not given, self.args['output_dir'] will be used.
verbose: If verbose, results will be printed to the console on completion of evaluation.
Returns:
result: Dictionary containing evaluation results. (Matthews correlation coefficient, tp, tn, fp, fn)
model_outputs: List of model outputs for each row in eval_df
wrong_preds: List of InputExample objects corresponding to each incorrect prediction by the model
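The returned result dict can be used to derive further metrics. The sketch below assumes the keys are named 'mcc', 'tp', 'tn', 'fp', and 'fn' (only the quantities, not the exact key names, are stated above):
result, model_outputs, wrong_predictions = model.eval_model(eval_df, verbose=True)

# Accuracy from the confusion-matrix counts (key names assumed as noted above).
tp, tn, fp, fn = result['tp'], result['tn'], result['fp'], result['fn']
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy, result['mcc'])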
predict(self, to_predict)
Performs predictions on a list of text.
Args:
to_predict: A python list of text (str) to be sent to the model for prediction.
Returns:
preds: A python list of the predictions (0 or 1) for each text.
model_outputs: A python list of the raw model outputs for each text.
train(self, train_dataset, output_dir)
Trains the model on train_dataset. Utility function to be used by the train_model() method. Not intended to be used directly.
evaluate(self, eval_df, output_dir, prefix="")
Evaluates the model on eval_df. Utility function to be used by the eval_model() method. Not intended to be used directly.
load_and_cache_examples(self, examples, evaluate=False)
Converts a list of InputExample objects to a TensorDataset containing InputFeatures. Caches the InputFeatures. Utility function for train() and eval() methods. Not intended to be used directly.
compute_metrics(self, preds, labels, eval_examples)
Computes the evaluation metrics for the model predictions.
Args:
preds: Model predictions
labels: Ground truth labels
eval_examples: List of examples on which evaluation was performed
Returns:
result: Dictionary containing evaluation results. (Matthews correlation coefficient, tp, tn, fp, fn)
wrong: List of InputExample objects corresponding to each incorrect prediction by the model
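The same quantities can be computed independently with scikit-learn (installed during setup); this is a standalone sketch, not the library's internal implementation:
from sklearn.metrics import matthews_corrcoef, confusion_matrix

preds = [1, 0, 1, 1, 0]    # example model predictions
labels = [1, 0, 0, 1, 0]   # example ground truth labels

mcc = matthews_corrcoef(labels, preds)
tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
print(mcc, tp, tn, fp, fn)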
The table below shows the currently available model types and their models. You can use any of these by setting the model_type and model_name in the args dictionary. For more information about pretrained models, see the HuggingFace docs.
Architecture | Model Type | Model Name | Details |
---|---|---|---|
BERT | bert | bert-base-uncased | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on lower-cased English text. |
BERT | bert | bert-large-uncased | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on lower-cased English text. |
BERT | bert | bert-base-cased | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased English text. |
BERT | bert | bert-large-cased | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on cased English text. |
BERT | bert | bert-base-multilingual-uncased | (Original, not recommended) 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on lower-cased text in the top 102 languages with the largest Wikipedias |
BERT | bert | bert-base-multilingual-cased | (New, recommended) 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased text in the top 104 languages with the largest Wikipedias |
BERT | bert | bert-base-chinese | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased Chinese Simplified and Traditional text. |
BERT | bert | bert-base-german-cased | 12-layer, 768-hidden, 12-heads, 110M parameters. Trained on cased German text by Deepset.ai |
BERT | bert | bert-large-uncased-whole-word-masking | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on lower-cased English text using Whole-Word-Masking |
BERT | bert | bert-large-cased-whole-word-masking | 24-layer, 1024-hidden, 16-heads, 340M parameters. Trained on cased English text using Whole-Word-Masking |
BERT | bert | bert-large-uncased-whole-word-masking-finetuned-squad | 24-layer, 1024-hidden, 16-heads, 340M parameters. The bert-large-uncased-whole-word-masking model fine-tuned on SQuAD |
BERT | bert | bert-large-cased-whole-word-masking-finetuned-squad | 24-layer, 1024-hidden, 16-heads, 340M parameters. The bert-large-cased-whole-word-masking model fine-tuned on SQuAD |
BERT | bert | bert-base-cased-finetuned-mrpc | 12-layer, 768-hidden, 12-heads, 110M parameters. The bert-base-cased model fine-tuned on MRPC |
XLNet | xlnet | xlnet-base-cased | 12-layer, 768-hidden, 12-heads, 110M parameters. XLNet English model |
XLNet | xlnet | xlnet-large-cased | 24-layer, 1024-hidden, 16-heads, 340M parameters. XLNet Large English model |
XLM | xlm | xlm-mlm-en-2048 | 12-layer, 2048-hidden, 16-heads XLM English model |
XLM | xlm | xlm-mlm-ende-1024 | 6-layer, 1024-hidden, 8-heads XLM English-German Multi-language model |
XLM | xlm | xlm-mlm-enfr-1024 | 6-layer, 1024-hidden, 8-heads XLM English-French Multi-language model |
XLM | xlm | xlm-mlm-enro-1024 | 6-layer, 1024-hidden, 8-heads XLM English-Romanian Multi-language model |
XLM | xlm | xlm-mlm-xnli15-1024 | 12-layer, 1024-hidden, 8-heads XLM Model pre-trained with MLM on the 15 XNLI languages |
XLM | xlm | xlm-mlm-tlm-xnli15-1024 | 12-layer, 1024-hidden, 8-heads XLM Model pre-trained with MLM + TLM on the 15 XNLI languages |
XLM | xlm | xlm-clm-enfr-1024 | 12-layer, 1024-hidden, 8-heads XLM English model trained with CLM (Causal Language Modeling) |
XLM | xlm | xlm-clm-ende-1024 | 6-layer, 1024-hidden, 8-heads XLM English-German Multi-language model trained with CLM (Causal Language Modeling) |
RoBERTa | roberta | roberta-base | 125M parameters RoBERTa using the BERT-base architecture |
RoBERTa | roberta | roberta-large | 24-layer, 1024-hidden, 16-heads, 355M parameters RoBERTa using the BERT-large architecture |
RoBERTa | roberta | roberta-large-mnli | 24-layer, 1024-hidden, 16-heads, 355M parameters roberta-large fine-tuned on MNLI. |
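For example, to fine-tune an XLNet model instead of RoBERTa, only the two identifier strings change:
from simpletransformers.model import TransformerModel

# Any (model_type, model_name) pair from the table above can be used the same way.
model = TransformerModel('xlnet', 'xlnet-base-cased')
model.train_model(train_df)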
None of this would have been possible without the hard work of the HuggingFace team in developing the Pytorch-Transformers library.