Pinned Repositories
mteval-in-context
Code for the paper "Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation"
tangled
Code, data, and additional analysis for the paper "Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics"
WMT-Metrics-task.github.io
wmt20-metrics
wmt21-metrics-data
nitikam's Repositories