SeqEval: A Python package for evaluating Seq2Seq models

Sequence Evaluate (SeqEval) is a Python package that computes metrics useful for evaluating Seq2Seq models on multiple tasks, such as machine translation, dialogue response generation, and text summarization. Many packages already exist to compute these metrics, but SeqEval puts them all in one place and lets you compute them in two lines of code!

Installation

pip install sequence-evaluate

Usage

First, import the SeqEval class and create an instance of it:

from seq_eval import SeqEval
evaluator = SeqEval()

The evaluator expects two Python lists: candidates (outputs generated by the model) and references (ground-truth data).
For example:

candidates = ["he began by starting a five person war cabinet and included chamberlain as lord president of the council",
             "the siege lasted from 250 to 241 bc, the romans laid siege to lilybaeum",
             "the original ocean water was found in aquaculture"]

references = ["he began his premiership by forming a five-man war cabinet which included chamberlain as lord president of the council",
             "the siege of lilybaeum lasted from 250 to 241 bc, as the roman army laid siege to the carthaginian-held sicilian city of lilybaeum",
             "the original mission was for research into the uses of deep ocean water in ocean thermal energy conversion (otec) renewable energy production and in aquaculture"]

You can now compute all metrics using the evaluate function (setting verbose=True prints out the results):

scores = evaluator.evaluate(candidates, references, verbose=True)

The function returns a dictionary containing all computed metric values:

{'bleu_1': 0.4428272792647754,
 'bleu_2': 0.35920252706356015,
 'bleu_3': 0.29702864345243746,
 'bleu_4': 0.2527668976020239,
 'inter_dist1': 0.1294642799346348,
 'inter_dist2': 0.5837103808275891,
 'intra_dist1': 0.31033264382268116,
 'intra_dist2': 0.7908440001400115,
 'rouge_1_f1': 0.6512670259900423,
 'rouge_1_precision': 0.8539562289562289,
 'rouge_1_recall': 0.5528035775713794,
 'rouge_2_f1': 0.3928074411537155,
 'rouge_2_precision': 0.5244559362206421,
 'rouge_2_recall': 0.3353174603174603,
 'rouge_l_f1': 0.6282785202429159,
 'rouge_l_precision': 0.8122895622895623,
 'rouge_l_recall': 0.5369305616983636,
 'semantic_textual_similarity': 0.8229544957478842}
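
Individual metric values can then be read from this dictionary by key. A minimal sketch, assuming scores is the dictionary returned by evaluator.evaluate above (key names taken from the sample output):

# Read individual metric values by key (keys as shown in the sample output above)
print(f"BLEU-4: {scores['bleu_4']:.4f}")
print(f"ROUGE-L F1: {scores['rouge_l_f1']:.4f}")
print(f"Semantic similarity: {scores['semantic_textual_similarity']:.4f}")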

Dependencies

Make sure you have the following libraries installed:

transformers 4.16.2
sentence-transformers 2.2.0
nltk 3.2.5
torch 1.10.0+cu111
rouge 1.0.1
numpy 1.21.5
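
If these are not pulled in automatically, a pinned install along the following lines should work (a sketch based on the versions listed above; torch 1.10.0+cu111 requires the wheel matching your CUDA setup, so the plain 1.10.0 build is shown here):

pip install transformers==4.16.2 sentence-transformers==2.2.0 nltk==3.2.5 torch==1.10.0 rouge==1.0.1 numpy==1.21.5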

Contact

Tarek Naous: Scholar | Github | Linkedin | Research Gate | Personal Website | tareknaous@gmail.com