Errors in answers
chrisc36 opened this issue · 2 comments
I have encountered a few cases where the "answers" field appears to include erroneous whitespace, or is missing a hyphen, and as a result some questions are impossible to answer exactly.
For example:
RACE.json.gz, for question f69d72de082a4fe6bcebab8301ca52d1
the answers are:
"The positive effects of early- life exercise."
However, the passage text only contains the phrase:
".... the positive effects of early-life exercise lasted for only one week"
Or for MrqaBioASQ, question 78f9bca0ee664b74b0be699e63138b9b, the answers are:
["Interferon signature", "IFN signature"]
but the only related passage phrase is:
"...for the IFN-signature as a..."
As a result, it looks like it would be impossible to get an EM score of 1 on these questions using a purely extractive approach. You can still retrieve a valid answer using the character spans, but the evaluation script uses the "answers" field, so it will fail models on those questions.
So far all the errors of this sort that I have seen are related to hyphens.
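To make the failure concrete, here is a minimal sketch assuming the standard SQuAD-style answer normalization (lowercase, strip punctuation, drop articles, collapse whitespace). Because punctuation is deleted without inserting a space, the two hyphen placements normalize to different strings and EM can never be 1:

```python
import re
import string

def normalize_answer(s):
    """SQuAD-style normalization: lowercase, strip punctuation,
    drop articles, and collapse whitespace."""
    def remove_articles(text):
        return re.sub(r"\b(a|an|the)\b", " ", text)

    def white_space_fix(text):
        return " ".join(text.split())

    def remove_punc(text):
        return "".join(ch for ch in text if ch not in set(string.punctuation))

    return white_space_fix(remove_articles(remove_punc(s.lower())))

# The hyphen is removed without adding a space, so where it sits
# changes the normalized form:
print(normalize_answer("early- life exercise"))  # -> "early life exercise"
print(normalize_answer("early-life exercise"))   # -> "earlylife exercise"
print(normalize_answer("IFN signature"))         # -> "ifn signature"
print(normalize_answer("IFN-signature"))         # -> "ifnsignature"
```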
Hi Chris, thanks for bringing this to our attention! Indeed you're right: it's due to a mismatch between the tokenizer used for the SQuAD-style eval and spaCy's tokenizer. I'll take a look and see how best to fix that discrepancy, for full fairness in eval.
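As a rough illustration of where the stray whitespace can come from (a sketch only; the exact detokenization in the data pipeline may differ), spaCy's English tokenizer splits infix hyphens into separate tokens, so answer text rebuilt by joining tokens with plain spaces no longer matches the raw passage:

```python
import spacy

# Blank English pipeline: tokenizer only, no model download needed.
nlp = spacy.blank("en")

doc = nlp("early-life exercise")
print([t.text for t in doc])
# -> ['early', '-', 'life', 'exercise']

# Rejoining tokens with plain spaces loses the original spacing
# around the hyphen, the kind of mismatch reported above:
print(" ".join(t.text for t in doc))         # -> "early - life exercise"

# text_with_ws preserves the original whitespace exactly:
print("".join(t.text_with_ws for t in doc))  # -> "early-life exercise"
```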
Hi Chris, sorry for the delay! I took a closer look. Across the released dev sets the effect is there, but quite small. Here are the EM ceilings for a purely extractive model:
BioASQ: 98.2% EM
DuoRC: 99.2% EM
DROP: 100% EM
RelationExtraction: 100% EM
RACE: 99.9% EM
TextbookQA: 99.4% EM
I manually went through all of the discrepancies and verified that the "detected_answer" should indeed be considered an exact match for the original answer. To reflect this in the scoring, I will directly add the detected_answer span into the answers list. Scores shouldn't change much, though.
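In case it's useful, the change amounts to something like the sketch below. The field names assume an MRQA-style schema, where each question carries an "answers" list of strings and a "detected_answers" list of dicts with the raw span text; treat that schema as an assumption rather than the exact implementation:

```python
def augment_answers(qa):
    """Hypothetical sketch of the fix: add each detected answer's
    surface text to the "answers" list, so an extractive prediction
    of that exact span counts as an exact match.

    Assumes MRQA-style fields: qa["answers"] is a list of strings and
    qa["detected_answers"] is a list of dicts with a "text" key.
    """
    answers = set(qa["answers"])
    for detected in qa.get("detected_answers", []):
        answers.add(detected["text"])
    qa["answers"] = sorted(answers)
    return qa
```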
Thanks for reporting!