E2E NLG Challenge Evaluation metrics

The metrics used for the challenge are BLEU and NIST, provided by the adapted MT-Eval script described below, and METEOR, ROUGE-L, and CIDEr, provided by the Microsoft COCO Caption Evaluation scripts described below.

Running the evaluation

Requirements/Installation

The metrics script depends on a set of Python packages (listed in requirements.txt) and on the Perl XML::Twig module.

To install the required Python packages, run (assuming root access or virtualenv):

pip install -r requirements.txt

To install the required Perl module, run (assuming root access or perlbrew/plenv):

curl -L https://cpanmin.us | perl - App::cpanminus  # install cpanm
cpanm XML::Twig
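
To verify that the Perl module installed correctly, a quick check such as the following should print its version:

perl -MXML::Twig -e 'print "$XML::Twig::VERSION\n"'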

Usage

The main entry point is measure_scores.py. The script expects two files: a reference file with one reference per line, where multiple references for the same instance are grouped together and the groups are separated by empty lines (see TGen data conversion), and a system output file with one generated instance per line. Example human reference and system output files are provided in the example-inputs subdirectory; a hypothetical illustration of both formats follows the example command below.

./measure_scores.py example-inputs/devel-conc.txt example-inputs/baseline-output.txt
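
For illustration only (the sentences below are made up and are not taken from the actual example files), a reference file covering two instances might look like this:

The Eagle is a cheap coffee shop near Burger King.
Near Burger King, The Eagle serves low-priced coffee.

The Mill is a pub located on the riverside.

The corresponding system output file then contains exactly one line per instance, in the same order:

The Eagle is a low-priced coffee shop near Burger King.
The Mill is a riverside pub.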

Source metrics scripts

MT-Eval

We used the NIST MT-Eval v13a script adapted for significance tests, from http://www.cs.cmu.edu/~ark/MT/, and further adapted it to allow a variable number of references per instance. This script provides the BLEU and NIST scores.
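
To illustrate what scoring against a variable number of references involves, here is a minimal Python sketch of BLEU-style clipped n-gram counting (an illustration only, not the actual Perl implementation):

from collections import Counter

def ngrams(tokens, n):
    # all n-grams of the given order, as a multiset
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def clipped_matches(hypothesis, references, n):
    # BLEU-style "modified precision" counts: each hypothesis n-gram is
    # credited at most as many times as it appears in any single reference;
    # the number of references per instance can vary freely
    hyp_counts = ngrams(hypothesis, n)
    max_ref_counts = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref_counts[gram] = max(max_ref_counts[gram], count)
    matched = sum(min(c, max_ref_counts[g]) for g, c in hyp_counts.items())
    return matched, sum(hyp_counts.values())

# one hypothesis scored against two references (any number of references works)
hyp = "the eagle is a cheap coffee shop".split()
refs = ["the eagle is a low priced coffee shop".split(),
        "the eagle serves cheap coffee".split()]
print(clipped_matches(hyp, refs, 1))  # -> (7, 7): every unigram is covered by some reference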

Microsoft COCO Caption Evaluation

These provide a different variant of BLEU (not used for evaluation in the E2E challenge), as well as METEOR, ROUGE-L, and CIDEr. We used the GitHub code for these metrics; they are unchanged apart from the removal of image support and some of the dependencies.
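
As a rough illustration of one of these metrics, ROUGE-L compares a hypothesis and a reference via their longest common subsequence. The following is a minimal sketch, not the MSCOCO implementation itself, and the beta value is just a common choice:

def lcs_length(a, b):
    # dynamic-programming length of the longest common subsequence of two token lists
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(hypothesis, reference, beta=1.2):
    # LCS-based F-score; beta > 1 weights recall more heavily than precision
    lcs = lcs_length(hypothesis, reference)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hypothesis), lcs / len(reference)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)

print(rouge_l("the mill is a riverside pub".split(),
              "the mill is a pub by the riverside".split()))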

Acknowledgements

Original developers of the MSCOCO evaluation scripts:

Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, David Chiang, Michael Denkowski, Alexander Rush