The metrics used for the challenge include:
- BLEU + NIST from MT-Eval,
- METEOR, ROUGE-L, CIDEr from the MS-COCO Caption evaluation scripts.
The metrics script requires the following dependencies:
- Java 1.8
- Python 3.6+ with matplotlib and scikit-image packages
- Perl 5.8.8 or higher with the XML::Twig CPAN module
To install the required Python packages, run (assuming root access or virtualenv):
pip install -r requirements.txt
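Without root access, one option (a sketch only; the directory name below is arbitrary) is to install into a virtualenv first:

python3 -m venv e2e-venv        # "e2e-venv" is just an example name
. e2e-venv/bin/activate
pip install -r requirements.txt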
To install the required Perl module, run (assuming root access or perlbrew/plenv):
curl -L https://cpanmin.us | perl - App::cpanminus # install cpanm
cpanm XML::Twig
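Without root access, cpanm can instead install into a local directory; one possible setup (the paths are just examples) is:

cpanm --local-lib ~/perl5 local::lib XML::Twig   # install into ~/perl5
eval $(perl -I ~/perl5/lib/perl5 -Mlocal::lib)   # add ~/perl5 to Perl's search path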
The main entry point is measure_scores.py. The script expects one instance per line in the system output file; in the reference file, it expects one reference per line, with multiple references for the same instance grouped together and the groups separated by empty lines (see TGen data conversion). Example human reference and system output files are provided in the example-inputs subdirectory.
./measure_scores.py example-inputs/devel-conc.txt example-inputs/baseline-output.txt
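For illustration only (the sentences below are made up, not taken from the challenge data), a reference file with two instances and two references each looks like this:

The Vaults is a family-friendly pub near Cafe Adriatic.
Near Cafe Adriatic there is a family-friendly pub called The Vaults.

The Cricketers is a cheap coffee shop by the riverside.
The Cricketers, a coffee shop in the riverside area, has low prices.

The matching system output file then contains exactly one line per instance, in the same order:

The Vaults is a family-friendly pub located near Cafe Adriatic.
The Cricketers is a cheap riverside coffee shop.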
We use the NIST MT-Eval v13a script (adapted for significance tests) from http://www.cs.cmu.edu/~ark/MT/, further modified to allow a variable number of references.
These scripts provide a different variant of BLEU (not used for evaluation in the E2E challenge), as well as METEOR, ROUGE-L, and CIDEr. We use the GitHub code for these metrics. The metrics themselves are unchanged, apart from removing support for images and some of the dependencies.
- Microsoft COCO Captions: Data Collection and Evaluation Server
- PTBTokenizer: We use the Stanford Tokenizer which is included in Stanford CoreNLP 3.4.1.
- BLEU: BLEU: a Method for Automatic Evaluation of Machine Translation
- NIST: Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics
- Meteor: Project page with related publications. We use the latest version (1.5) of the code. Changes have been made to the source code to properly aggregate the statistics for the entire corpus.
- Rouge-L: ROUGE: A Package for Automatic Evaluation of Summaries
- CIDEr: CIDEr: Consensus-based Image Description Evaluation
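As a rough sketch (not part of the official evaluation pipeline), the bundled MS-COCO scorers can also be invoked directly. This assumes the pycocoevalcap package layout of the original coco-caption code; the toy data below is made up for illustration.

#!/usr/bin/env python3
# Rough sketch only: call the bundled MS-COCO scorers directly.
# Assumes the pycocoevalcap package layout of the original coco-caption code;
# the toy reference/output data is invented for illustration.
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# Each scorer takes two dicts mapping an instance id to a list of strings:
# all references for that instance, and the (single) system output.
refs = {0: ['The Eagle is a cheap coffee shop near Burger King.',
            'Near Burger King, The Eagle serves cheap food.']}
sys_out = {0: ['The Eagle is a cheap coffee shop located near Burger King.']}

for name, scorer in [('BLEU', Bleu(4)), ('METEOR', Meteor()),
                     ('ROUGE_L', Rouge()), ('CIDEr', Cider())]:
    corpus_score, _ = scorer.compute_score(refs, sys_out)
    print(name, corpus_score)

Each compute_score call returns a corpus-level score along with per-instance scores; the BLEU scorer returns a list of BLEU-1 through BLEU-4 values.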
Original developers of the MSCOCO evaluation scripts:
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, David Chiang, Michael Denkowski, Alexander Rush