Test of analysis on open-domain QA
IBM has developed a nice framework called PrimeQA, which should make it relatively easy to generate open-domain QA results from multiple SOTA models.
It would be nice if we could look into these results, analyze them with ExplainaBoard, and see whether there are ways to use them to improve our analysis of open-domain QA models.
In order to do this, we'd need to:
- Take a look at the machine reading comprehension tutorial for PrimeQA
- Decide which datasets we want to focus on
- Generate multiple system outputs for these datasets
- Analyze them in ExplainaBoard and see if we get any interesting insights
- Add further features to the analysis based on what we find
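The "generate outputs, then analyze" part of the steps above might look roughly like this sketch. The system-output schema and the CLI invocation in the comment are assumptions based on ExplainaBoard's extractive-QA support, not a confirmed format; the current ExplainaBoard docs should be checked before running anything for real.

```python
import json

# Hypothetical decoded predictions from a PrimeQA model: one predicted
# answer string per question ID. The exact schema ExplainaBoard expects
# for extractive QA may differ -- this is an illustrative assumption.
predictions = {
    "tydiqa-en-0001": "Mount Kilimanjaro",
    "tydiqa-en-0002": "1959",
}

def write_system_output(preds, path):
    """Write predictions as a JSON-lines system-output file."""
    with open(path, "w", encoding="utf-8") as f:
        for qid, answer in preds.items():
            record = {"id": qid, "predicted_answers": {"text": answer}}
            f.write(json.dumps(record) + "\n")

write_system_output(predictions, "tydiqa-system-output.jsonl")

# The file could then be analyzed with the ExplainaBoard CLI, roughly
# (task name and flags assumed from the ExplainaBoard README):
#   explainaboard --task qa-extractive \
#       --system-outputs tydiqa-system-output.jsonl > report.json
```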
Suggestion from Avi at IBM, so I guess we should focus on PrimeQA's TyDiQA:
> We've been focusing on TyDi for some time now; the model's available here too: https://huggingface.co/PrimeQA/tydiqa-primary-task-xlm-roberta-large. So if you want to do the same experiments with TyDi and use PrimeQA, you can totally get the system outputs by running decoding. It should be pretty straightforward.
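Running decoding against the linked checkpoint could look like the sketch below. This assumes the checkpoint loads into the generic Hugging Face `question-answering` pipeline; PrimeQA's TyDi primary-task head may instead require PrimeQA's own decoding scripts, so treat this as a starting point rather than a confirmed recipe. The `to_record` helper is a hypothetical flattening step toward a system-output file.

```python
def decode(examples, model_name="PrimeQA/tydiqa-primary-task-xlm-roberta-large"):
    """Run extractive-QA decoding over (question, context) example dicts.

    Assumption: the checkpoint is compatible with the generic Hugging Face
    question-answering pipeline; if it uses PrimeQA's custom TyDi head, use
    PrimeQA's decoding scripts instead.
    """
    from transformers import pipeline  # imported lazily; heavy dependency

    qa = pipeline("question-answering", model=model_name)
    return {
        ex["id"]: qa(question=ex["question"], context=ex["context"])
        for ex in examples
    }

def to_record(qid, result):
    """Flatten one pipeline result into a minimal system-output record
    (hypothetical format for later ExplainaBoard conversion)."""
    return {
        "id": qid,
        "answer": result["answer"],
        "score": round(result["score"], 4),
    }
```

With the pipeline loaded once, `decode` can be run over each dataset split, and the flattened records written out per system for comparison.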